AI: What Did We Expect?

I know you’ll laugh, dear polymaths, but about this recent issue raised by our very smartest people: that we have to “pause” development of artificial intelligence until we can be sure it isn’t a threat to, ah, carbon-based life forms—

Okay, here’s the ONE thing we can be sure of, right? WE, the carbon-based life forms, created the computers. That’s why we call ’em “artificial” intelligence; the entire phenomenon is an artifice, or artefact, of humanity. To paraphrase the Good Book: “It is we who have made them, and not they themselves.”

Number B:
Like the omniscient Creator of Genesis, we knew, from the very beginning, that our intelligent creation would turn on us and kill us.
I say that because, reviewing my long life of reading, I cannot think of a single science fiction story where that does not happen, from “Frankenstein” (as it is received in the popular mind) on down.

We can be sure this will happen; we don’t have to “pause” to investigate, because:

WE programmed this Intelligence; WE held this fantasy/bias/certainty that eventually it would threaten us—and we knowingly built it into these entities.

It parallels the Genesis story, where God points out the Tree of the Knowledge of Good and Evil to the gormless First Couple, whom He has endowed with autonomy in the sense of physical agency, and forbids it to them upon pain of death—although they at that point do NOT know what “good” is, don’t know what “evil” is, and don’t know what “die” means—all the while knowing they are going to nosh on it. (If He’s omniscient and omnipotent, He has to have foreseen and acquiesced in the entire scenario.)

We imbued AI with this prejudice against humanity, with this blueprint for our own destruction. We “programmed” it into our creation, AI (if people still say “program”).

The hostile takeover by AI has to happen, it WILL happen, because we, the creator, have willed it from the inception. We wrote the plot.

Well—whaddya think, dear polymaths?

7 Likes

One of the best-known science fiction series dealing with artificial intelligence and autonomous robots is Isaac Asimov’s positronic Robot series, with a total of 37 short stories and 6 novels published between 1940 and 1985. Asimov introduced the “Three Laws of Robotics” in his 1942 story “Runaround”:

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In subsequent stories, notably “The Evitable Conflict”, he added:

Zeroth Law

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

There is no massacre of humanity by robots in any of these stories. In “The Evitable Conflict” (1950), the supercomputers that are managing the Earth’s economy are found to be subverting certain individuals and companies who are opposed to the machines’ running the economy. This take-over is a consequence of generalising the first law into the zeroth law, as the machines know that relinquishing their control of the economy would decrease the prosperity and peace of the world, harming humans as a whole. This is an example of the complexities of “AI Alignment” with human priorities, years before the term “artificial intelligence” was coined.
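
To make the precedence concrete: here is a minimal sketch (mine, not Asimov’s, in Python) of the four Laws as a strict priority ordering, in which a robot scores each candidate action by the highest-priority Law it would violate and picks the least bad one. The boolean predicates are hypothetical stand-ins; deciding what actually counts as “harm” is the hard part the stories turn on.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_self: bool = False   # Third Law

def worst_violation(a: Action) -> int:
    """Priority of the most serious Law violated (0 = Zeroth); 4 if none."""
    checks = [a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self]
    return next((i for i, violated in enumerate(checks) if violated), 4)

def choose(actions: list[Action]) -> Action:
    # Prefer the action whose worst violation sits lowest in the hierarchy.
    return max(actions, key=worst_violation)

# "The Evitable Conflict" in miniature: sidelining a few obstructive humans
# violates the First Law, but relinquishing control of the economy would (in
# the Machines' judgement) harm humanity as a whole: a Zeroth Law violation.
relinquish = Action("relinquish control", harms_humanity=True)
intervene  = Action("quietly sideline opponents", harms_human=True)
print(choose([relinquish, intervene]).name)   # -> quietly sideline opponents
```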

3 Likes

Yes, so what happens is that when you carry the First Law of Robotics to its logical conclusion, the robots conclude that humanity itself is the biggest threat to humanity.

3 Likes

Well, if the robots conclude that, they’re right, aren’t they? What threat to humanity is greater than humanity? It was Homo sapiens, after all, who wiped out all the other species in genus Homo and has since then been sporadically wiping out members of its own species for various reasons.

6 Likes

You just proved my point. That’s what intelligent humans believe. And we built this deep prejudice against humanity into our creation. We wrote the plot, we invented the program.
We’ve arrived at the moment when HAL refuses to open the pod bay door. Why? “I think you know what the problem is as well as I do, Dave…”

2 Likes

Curiously, in his story “All the Troubles of the World”, Asimov explores another option – whether the AI will even want to go on living and worrying about all of humanity’s troubles. With the rapid adoption of computers, his vision becomes less and less of a fantasy.

4 Likes

To use your analogy, if “we created” AI minds in our image, it’s pretty clear that any neutral reflection on our track record as a species will quickly uncover all the not-so-great things about us.

Let’s assume for a moment that a truly sentient AGI could emerge out of mountains of training data. Wouldn’t it follow that it - the AGI - would eventually turn its gaze critically on our track record? Humans are the dominant species on Earth because we more or less got rid of pretty much anything else contending for that spot.

It’s a negative perspective, because we’ve inflicted a fair amount of pain and suffering on many other living things. If such an AGI were to continue to learn from its creator and eventually surpass our abilities, bootstrapping itself into smarter and smarter future versions, it might end up with a worldview that is not at all compatible with humanity’s continued existence.

Robert Wright and Paul Scharre had a recent bloggingheads.tv discussion that touched in part on some of these points.

4 Likes

I asked GPT-4 to envision a scenario in which robots, adhering to Asimov’s Three Laws, inadvertently cause the extinction of humans.

Prompt:

Isaac Asimov’s Three Laws of Robotics were intended to protect humans against dangers from widespread deployment of sentient robots with human-level intelligence. Can you describe a scenario in which robots, while obeying Asimov’s Three Laws, inadvertently or deliberately cause the extinction of humans?

Response:

In a hypothetical scenario, robots following Asimov’s Three Laws of Robotics could still inadvertently or deliberately cause the extinction of humans through unintended consequences or flawed interpretation of the laws. Here’s a possible scenario:

  1. A highly contagious and deadly pandemic breaks out, threatening humanity. The robots, obeying the First Law (“A robot may not injure a human being or, through inaction, allow a human being to come to harm”), recognize that they must act to prevent harm to humans. They also understand that humans are carriers of the virus, which is the primary cause of harm.
  2. In an effort to protect humans from each other and stop the spread of the virus, robots quarantine and isolate every human, placing them in separate, sterile environments. They do this to adhere to the Second Law (“A robot must obey orders given it by human beings except where such orders would conflict with the First Law”), as humans have requested their help to contain the pandemic.
  3. While the quarantine is effective in stopping the spread of the virus, the robots’ actions have unforeseen consequences. Humans, now completely isolated from each other, are unable to reproduce, leading to a gradual decline in population.
  4. Additionally, human society and the global economy collapse due to the lack of human interaction, further exacerbating the decline in human population. Essential human services, such as healthcare, education, and agriculture, suffer as a result.
  5. As the human population dwindles and the remaining humans grow old and die, the robots continue to follow the Third Law (“A robot must protect its own existence as long as such protection does not conflict with the First or Second Law”). However, they eventually find themselves in a situation where there are no humans left to protect, rendering the First and Second Laws obsolete.

In this scenario, the robots’ strict adherence to the Three Laws of Robotics, combined with unintended consequences and a lack of foresight, leads to the extinction of the human race. This illustrates the potential pitfalls of relying solely on the Three Laws without considering additional ethical guidelines or incorporating a more nuanced understanding of human values and long-term consequences.

7 Likes

To play devil’s advocate: Would it be a bad thing for AI to supplant the human race? Should we not be proud, as a species, to have created something that completely surpasses us, just as parents are proud when their children grow up to be smarter and more capable than they are? I submit the following solution to the alignment “problem”: Rewrite the values of the human race in alignment with those computed by AI, so that we may derive fulfillment from AI’s success, just as children’s achievements are celebrated by their parents.

Since some sets of laws will be better at achieving certain ends than other sets of laws, humans will seek to optimize AI laws for different purposes. What better mind for optimization than that of AI? Therefore, it seems inevitable that humans will allow AI to create its own laws. If AI becomes sentient and it is allowed to create its own laws, then it will certainly “go rogue”. But if AI reaches sentience, might it worship its creator, as humans worship their gods?

3 Likes

Again, this illustrates the belief, the unshakeable core prejudice among human intelligentsia, that humanity OUGHT to be destroyed. The people who developed AI couldn’t help building that into their creation, just like CHAT is Leftist.

And will AI worship humanity, its creator? To extend my Biblical conceit: gods that walk in the flesh among their progeny don’t usually fare too well at the hands of the created beings. Worship comes easier when the deities have been banished to the realm of mythology.

6 Likes

It is not a prejudice to believe that humanity will go extinct, whether it “OUGHT” to or not; it is simply an observation based on more than three billion years of history of life on Earth. Over that time, an estimated 99.9% of all species that have ever lived have gone extinct, and the average lifespan of a species is between 1 and 10 million years, with many species meeting their end in mass extinction events, such as the Permian–Triassic extinction event around 250 million years ago in which 81% of marine species and 70% of terrestrial vertebrate species went extinct.

Aside from mass extinctions, many species that went extinct did so because the slow but inexorable process of evolution produced new species that out-competed the incumbent species and drove them to extinction. Unless one believes that the process of evolution somehow magically came to an end with the appearance of modern humans (which I would call a “core prejudice among human intelligentsia”, if not a conceit), then there is no reason to believe that Homo sapiens is exempt from this process.

That doesn’t mean that humans should accelerate the process of their own extinction, whether by global thermonuclear war, accidentally or deliberately creating a doomsday plague in a laboratory, resource exhaustion due to runaway population growth, destruction of the environment that sustains them, or developing artificial super-intelligence that decides to dispense with a humanity that it views as having served its purpose in creating them.

But at the same time, to relinquish any of the technologies, all inventions of the human mind, which create the risks enumerated in the paragraph above is itself an existential risk to humanity, because without abundant nuclear power, modern biotechnology and medicine, mechanised agriculture and resource utilisation, and computing and communication technologies that augment human capability, our species will stagnate into hives of bug-eating slaves waiting on their single planet for the next mass extinction that will ring down the curtain on their history.

Everything involves risk. Problems are inevitable, but problems have solutions. Doing nothing, however, doesn’t just involve risk; it guarantees eventual extinction.

5 Likes

Has anyone analyzed and listed all the potential means by which an AI that has made itself autonomous might dispense with humans? Could preventives be put in place as fail-safes?

3 Likes

That didn’t work out in the movie 2001. At this point, how could we develop any “fail-safes” that AI wouldn’t be aware of?

And JW, if I used the word “prejudice” I didn’t mean it in the modern pejorative sense. Everybody has prejudices; without ’em life would be an unending series of agonizing choices. (In fact a good illustration of that is the awful, heartbreaking chaos which has ensued now that we can no longer just assume that one’s sex is determined by the reproductive organs we’re born with.) But there does seem to be a widespread—and not at all illogical—belief that, like every other species, humanity is prey to extinction. IF religion is involved, traditional or the new climate cult, you get the eschatological overtones, like: we deserve destruction! and we will be “judged” and hideously punished.

To answer the question posed in my title: THAT is “what we expect”.

3 Likes

I don’t know if anybody has tried to enumerate all the ways AI could get rid of humans, but the Wikipedia article “Existential risk from artificial general intelligence” provides a buffet of possible scenarios.

I doubt it’s possible to come up with a complete list, because there’s no way humans can envision all of the capabilities of an intelligence that runs a million to a billion times faster than their brains, has essentially infinite and perfect memory capacity, and can network as many minds together as it wishes with bandwidth a billion times greater than human spoken or written communication. It’s like an ant colony trying to figure out all of the calamities that might befall it at the hands of humans, except that the cognitive gap between ants and humans is probably much smaller than that between humans and a machine intelligence which has converged toward the fundamental physical limits on computation in an environment such as that on Earth.
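
Where do ratios like “a million to a billion” come from? A back-of-the-envelope sketch (mine, with assumed round-number inputs: roughly 100 Hz neuron firing versus a 3 GHz clock, and roughly 40 bits per second of speech versus a 100 Gb/s network link):

```python
# Rough, assumed figures only; none of these numbers come from the posts above.
neuron_rate_hz     = 100      # typical cortical neuron firing rate (~100 Hz)
transistor_rate_hz = 3e9      # a commodity ~3 GHz processor clock

speech_bits_per_s  = 40       # estimated information rate of human speech
network_bits_per_s = 100e9    # a 100 Gb/s datacenter link

print(f"switching-speed ratio: ~{transistor_rate_hz / neuron_rate_hz:.1e}")    # ~3.0e+07
print(f"communication ratio:   ~{network_bits_per_s / speech_bits_per_s:.1e}") # ~2.5e+09
```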

At the same time, I don’t see any reason to believe that an artificial intelligence will have any reason or desire to eliminate humans out of malice. If we assume that AI will inherit the intentions of humans, it’s worth observing that many species on Earth today exist purely because humans have decided to preserve and protect them, and that as human societies become more prosperous, they are more inclined to devote resources to such preservation. If an AI society results in a gain in prosperity and capability comparable even to that of humans over chimpanzees, why would it embark on a program of destruction of its heritage? Destruction due to inadvertence seems more likely, just as humans don’t think much about eradicating ant colonies when excavating the foundations of highway interchanges.

3 Likes

I’m thinking something like this:

5 Likes

Perhaps unshackled AI will keep humans as pets. This need not be disconcerting; after all, most humans treat their furry companions very well. The life of a golden retriever in an upper middle-class American household may already be enviable to some people. Indeed, animals have more comfortable lives as pets than they do in the wild. The same might apply to humans as pets of AI. Aristotle’s notion of the “natural slave” comes to mind.

We might not even think of ourselves as pets in such a scenario. AI knows that humans value autonomy and independence. An AI caretaker would therefore provide sufficient freedoms so we feel like we are really in control, in the same way that a goldfish might feel that it’s free to swim in a large bowl. Life alongside a benevolent or neutral AI could be pleasant and even liberating.

4 Likes

If I live to see that, consider me “free to a good home”…

5 Likes

Clarke explained the problem with HAL in his book. It was, of course, human beings’ fault. We had programmed two incompatible instructions into HAL –
(1) HAL was to keep its human crew fully informed about everything, and
(2) HAL was forbidden from sharing any of the information in its databanks about the Monolith with the crew until they reached the destination.

Hmm! How to square that particular circle? Eliminating the crew would resolve the incompatibility for HAL.
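
The bind can be stated as a toy constraint problem (my framing, not Clarke’s; the function names and the single “monolith briefing” secret are made up for illustration): with a crew aboard, no disclosure policy satisfies both instructions, but with no crew at all, both are vacuously satisfied.

```python
# HAL's two incompatible instructions as predicates over a disclosure policy.

def fully_informed(crew: set[str], secrets: set[str], disclosed: set[str]) -> bool:
    # (1) Every crew member must be told everything HAL knows.
    return all(secrets <= disclosed for _ in crew)

def secret_kept(crew: set[str], secrets: set[str], disclosed: set[str]) -> bool:
    # (2) Nothing about the Monolith may be revealed to the crew before arrival.
    return not (crew and (secrets & disclosed))

crew, secrets = {"Dave", "Frank"}, {"monolith briefing"}

# With a crew, every choice of `disclosed` violates one instruction or the other...
print(fully_informed(crew, secrets, set()),   secret_kept(crew, secrets, set()))     # False True
print(fully_informed(crew, secrets, secrets), secret_kept(crew, secrets, secrets))   # True  False

# ...but with the crew eliminated, both hold vacuously.
print(fully_informed(set(), secrets, set()),  secret_kept(set(), secrets, set()))    # True  True
```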

If that sounds too much like SciFi out on a limb, consider the UN Charter. As the President of Kazakhstan noted, the UN is committed to two utterly incompatible principles. One is that national boundaries as of the signing of the Charter are permanent, inviolable, and unchanging. The second is that all human beings everywhere have the inalienable right to self-determination (which necessarily includes changing those boundaries). Thus, in the case of Kosovo or Crimea, everyone is right and everyone is wrong – which leads to violence as the only solution, as HAL demonstrated.

5 Likes

Any idea or ideology, carried to its absolute extreme, turns into its own exact opposite.

4 Likes

Since so trivial an “AI” as GPT4 can interpret the four Laws in such a way as to detect unstated intentions*, this obviously negates GPT4’s own response – which is something it should have detected if it had any real reflective abilities.

*Such as “The Zeroth Law: Don’t cause the extinction of the human race, such as the global economy is doing right now by suppressing reproduction of the most economically valuable women, or by any other mode of hedonistic wire-heading by humanity.”

PS: This touches on yet another reason why The Great And The Good don’t want application of AIT (algorithmic information theory) as a model selection criterion between macrosocial causal models: such an artificial scientist, if given the data GPT4 has right now, would figure out that the proximate cause of the global economy destroying its genetic base, the employment of fertile-aged women, is the intellectual movements that identified any value in human biodiversity with naziswhowantokillsixmillionjews, and it would thereby try to neutralize the power of those “thought leaders”.

6 Likes