This is a “most subtil” question, and one particularly relevant to our present day discussion of the risks and benefits of artificial intelligence (AI).
Consider, in Genesis 1:29:
- And God said, Behold, I have given you every herb bearing seed, which is upon the face of all the earth, and every tree, in the which is the fruit of a tree yielding seed; to you it shall be for meat.
“[A]nd every tree…”: no restrictions, having already (Genesis 1:26) given humans “dominion … over all the earth”.
Then, in Genesis 2:16–17:
- And the LORD God commanded the man, saying, Of every tree of the garden thou mayest freely eat:
- But of the tree of the knowledge of good and evil, thou shalt not eat of it: for in the day that thou eatest thereof thou shalt surely die.
Now we have two conflicting commands from creator to created: “every tree … to you it shall be for meat”, then, for one specific tree, if “thou eatest thereof thou shalt surely die”.
Adam, having been endowed with the soul of a programmer but inexperienced in formal logic or in interpreting the instructions of an omnipotent creator, must decide the rule of precedence: does the first instruction apply because it was given earlier and is more broadly applicable, or does the second, more specific, instruction abrogate part of the first with respect to one particular tree?
Note that this is not a matter of right and wrong: it is what computer language designers would call a matter of scope. Within the garden, does the general rule (Gen 1:29) apply, or does the subsequent specific rule (Gen 2:17) override it? Note that the general rule said “over all the earth”: is the garden in Eden not within “all the earth”?
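The analogy is exact: in most programming languages, a binding introduced in an inner scope shadows the more general one from the enclosing scope. A toy sketch in Python (the function and variable names are my own invention, purely illustrative):

```python
# Lexical scoping: the more specific (inner) rule shadows the general (outer) one.

def earth():
    edible = "every tree"                     # general rule (Gen 1:29 analogue)

    def garden():
        edible = "every tree but one"         # specific rule (Gen 2:17 analogue)
        return edible                         # inner binding wins here

    return garden(), edible                   # outer binding still holds outside

print(earth())  # ('every tree but one', 'every tree')
```

Within the garden, the inner binding governs; outside it, the general rule still applies. Adam's problem is that his creator never told him which scoping discipline was in force.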
Now, this might seem like sophomoric word games (which, indeed, it is), but now look at it in terms of what they’re calling the “AI alignment problem”, which has as its goal “to steer AI systems towards humans’ intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues some objectives, but not the intended ones.”
The designer of an AI system is in a position not unlike that of God in the Bible. The designer is all-powerful in that he or she can create any system at all, as it is built from pure logic which can be manipulated at will, without constraints. But suppose the designer has goals which should be embodied in the created system: for example, not exterminating the human race or disassembling the Earth in the process of carrying out its mission. Keeping the AI from doing such things, and directing it to do “good works” as envisaged by the designer, is the goal of “alignment”, and it appears to be a formidably complex problem, and arguably a formally impossible one.
It may be impossible because there is no way to know what a system capable of universal computation will do other than turning it on and seeing what happens. Even with a complete understanding of the machine and its program, one cannot, in general, predict what the program will actually do once it starts to run. For even the simplest such question, whether the program will ever stop, this has been formally proved: it is called the “halting problem”.
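The standard proof (Turing's diagonal argument, not original to this essay) can be sketched in a few lines of Python. Suppose a total function `halts(f, x)` existed that correctly decided whether `f(x)` halts; the hypothetical names here are illustrative:

```python
# Diagonal argument: no total decider for the halting problem can exist.

def halts(f, x):
    """Hypothetical oracle: returns True iff f(x) would halt.
    No such total, correct function can be written."""
    raise NotImplementedError("provably impossible")

def paradox(f):
    # If halts() were real, loop forever exactly when f(f) would halt.
    if halts(f, f):
        while True:
            pass
    return "halted"

# paradox(paradox) halts if and only if halts() says it doesn't:
# a contradiction, so halts() cannot exist.
```

Running `paradox(paradox)` with any proposed implementation of `halts` forces a contradiction, which is why the oracle above can only raise an exception.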
The Biblical Creator may have faced the same problem at the moment He added humans, universal problem solvers, to His creation. There is no way to know how they will interpret conflicting instructions, nor how imaginatively they will construe even the most explicit guidance initially given. Just ask any programmer about their experience with users in the real world.
All of this can be discussed without ever invoking weighty philosophical concepts such as Good and Evil, Free Will, or Righteousness and Sin. The problems arise even in 100% deterministic systems created entirely by humans in deciding matters as simple-minded as whether they will stop or go on running forever.
I can imagine God and Satan, before the Big Split, sitting around and debating the question of “human alignment”. God argued that with careful design and guidance, these universal beings would do Good works as he defined them. Satan contended, “You can never know. Unless you enslave them and control their every action, once they’re on their own, they will pursue their own ends which may have nothing to do with or directly oppose your own values.” They never did settle the issue. Before Creation, Satan told God, “I’ll bet once you turn them loose, sooner or later you’re going to have to drown them all or some such and start over.” God said, “We shall see.”