Artificial Intelligence and the Screwfly Apocalypse

Glenn Reynolds, law professor and Instapundit, has posted an essay, “AI and the Screwfly Succession”, on his Substack (2023-05-04). Recently there has been a rising chorus of discussion about the risks of developing and deploying artificial intelligence (AI), including a variety of scenarios in which AI, built with the best of intentions by competent people with no malice in their hearts, might inadvertently cause the extinction of the human race and all life on Earth, then spread throughout the galaxy and beyond, extinguishing biological life and/or inhibiting its emergence anywhere it reaches. Many consider this to be a bad outcome.

But what if the human adventure ends not with a bang, but with a whoopee? That is the scenario of Reynolds’ speculation.

The other day, a friend was talking about AI, and about sexbots, and opining that neither was really ready for primetime. My response was that this is true, but that the AI is getting smarter, and the sexbots sexier, while human beings are basically staying the same. We don’t really know what the upper limits for either smarts or sexiness are for machines, but we have a pretty good idea what the limits are for humans, because we’re more or less already there.

How can sexy sexbots be an existential threat? Well, we’ve devastated populations of insects like screwflies, fruit flies, and (somewhat less successfully) mosquitoes by saturating them with sexy but sterile specimens to breed with. (This is called the Sterile Insect Technique.) The result is a sharp drop in reproduction, and in population.

Imagine sexbots – both male and female – that aren’t just copies of attractive humans, but much more attractive than natural humans. Machine learning could find just the right physical and behavioral characteristics to appeal to humans, and then tweak them for each individual person. Maybe they even release pheromones. Your personal sexbot would be tailor-made, or self-tailored, to appeal to you. It might even be programmed to fall in love with its human.


Distribute enough of these bots in the population, and reproductive rates would plummet. Would people stop breeding entirely? Almost certainly not, but a 90% reduction would pretty much end humanity as we know it in a couple of generations.
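
To make the arithmetic concrete, here is a toy discrete-generation model in Python, a sketch of my own rather than anything from Reynolds’ essay. The first half is the classic Knipling-style model behind the Sterile Insect Technique; the second half applies the same geometric logic to the “90% reduction” claim. All parameter values are illustrative assumptions.

```python
# Toy sketch: Knipling-style Sterile Insect Technique model, plus the
# "90% reduction" arithmetic. All parameter values are assumptions
# chosen for illustration, not figures from the essay.

def sit_generations(wild, sterile_release, r, n):
    """Each generation, a wild female mates a sterile male with probability
    sterile / (wild + sterile), so only wild / (wild + sterile) of matings
    are fertile. r is offspring per insect per generation."""
    history = [wild]
    for _ in range(n):
        fertile_fraction = wild / (wild + sterile_release)
        wild = wild * r * fertile_fraction
        history.append(wild)
    return history

# 1 million wild insects, 9 million sterile releases per generation, r = 5:
for g, pop in enumerate(sit_generations(1e6, 9e6, r=5.0, n=5)):
    print(f"generation {g}: {pop:,.0f} wild insects")

# The human version: at replacement, each generation roughly matches the
# last. Cut reproduction by 90% and each generation is ~10% of the one
# before, so 0.1 ** 2 = 1% of the original population in two generations.
pop = 8e9
for g in range(1, 4):
    pop *= 0.10
    print(f"after generation {g}: {pop:,.0f} people")
```

With a 9:1 overflooding ratio the wild population collapses within a few generations, which is the dynamic the screwworm eradication program exploited.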

Are humans so simply wired that not-all-that-intelligent machines can push their buttons and cause them to forsake their own species for a super-optimised replacement? Well, it works with junk food.

Futurist James Miller calls porn the “junk food of sex.” That is, just as junk food is made more or less addictive – or at least highly appealing – by overstimulating people’s evolutionarily programmed desire for sugar, salt, and fat, so porn too appeals to people by stimulating evolutionarily created receptors/proclivities to a much greater degree than real life does. People had good solid reasons for craving sweet berries, salt, and fat in the caveman days, but those were all hard enough to come by that we weren’t strongly equipped with curbs on those cravings. Likewise with sex.

In the past, I responded to fears that porn would lead to more sexual violence and unwise teen sex by pointing out that in practice the opposite seems to be the case: As porn consumption skyrocketed with the introduction of the Internet, rape and teen sex actually underwent a steep decline.

Here is the fertility rate for the world and a variety of countries plotted from 1950 through 2021. Note in particular how developed countries whose fertility had been close to stable over a long period of time began to drift downward in unison around the start of the smartphone era (~2008).

[Figure: total fertility rate, world and selected countries, 1950–2021; developed countries show a synchronized downturn after ~2008]
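
For anyone who wants to reproduce a chart like this, the underlying series are in the UN World Population Prospects. A minimal matplotlib sketch, assuming the data have been exported to a CSV (the file name and column names here are hypothetical):

```python
# Minimal sketch for reproducing a chart like the one above from UN World
# Population Prospects data. The CSV path and column names are hypothetical;
# adjust them to match however you exported the data.
import csv
from collections import defaultdict
import matplotlib.pyplot as plt

series = defaultdict(list)  # country -> list of (year, tfr)
with open("un_wpp_tfr.csv") as f:  # hypothetical export: country,year,tfr
    for row in csv.DictReader(f):
        series[row["country"]].append((int(row["year"]), float(row["tfr"])))

for country, points in sorted(series.items()):
    points.sort()
    years, tfr = zip(*points)
    plt.plot(years, tfr, label=country)

plt.axhline(2.1, linestyle="--", label="replacement (~2.1)")
plt.axvline(2008, linestyle=":", label="smartphone era (~2008)")
plt.xlabel("Year")
plt.ylabel("Total fertility rate")
plt.legend(fontsize="small")
plt.show()
```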

Maybe if the machines take over, it will be because their progenitors died out from being loved more by the machines than by their own species.

The machines get better every year, while humans stay more or less the same. That will raise all sorts of issues, in all sorts of settings.

Read the whole thing.

5 Likes

Many of these extinction theories assume that the ‘attacked’ population is uniform and passive. In reality, some people will be resistant to this challenge, and they will reproduce exponentially, filling in for the part of the population that died out.
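
This is easy to check with a toy model (my own sketch, with assumed parameters): split the population into a susceptible majority reproducing far below replacement and a small resistant minority reproducing well above it, and the resistant group dominates within a few generations.

```python
# Toy two-subpopulation model of the "resistant remnant" argument.
# All parameter values are assumptions chosen for illustration.

susceptible, resistant = 7.9e9, 0.1e9    # assumed initial split
tfr_susceptible, tfr_resistant = 0.2, 4.0
replacement = 2.05                        # children per woman at replacement

for gen in range(1, 7):
    susceptible *= tfr_susceptible / replacement
    resistant *= tfr_resistant / replacement
    total = susceptible + resistant
    print(f"gen {gen}: total {total:,.0f}, resistant share {resistant/total:.1%}")
```

Under these assumptions the resistant share goes from about 1% to over 99% in three generations, though the total population passes through a deep trough around the second generation before recovering.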

5 Likes

The question is: if this compensating rebound is a real phenomenon, why has it not already happened in countries, with very different ethnic makeups and cultures, where fertility has fallen far below replacement and stayed there for decades? For example (fertility figures from the 2022 U.N. Population Fund; a quick arithmetic sketch after the list shows how fast these rates compound):

  • South Korea (1.1)
  • United Arab Emirates (1.3)
  • Greece (1.3)
  • Italy (1.3)
  • Finland (1.4)
  • Japan (1.4)
  • Spain (1.4)
  • Canada (1.5)
  • Germany (1.6)
  • Brazil (1.7)
  • China (1.7)
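
The compounding is worth spelling out. A TFR of f implies that each generation is roughly f/2.05 the size of the one before it, ignoring migration and changes in mortality. The 2.05 replacement figure and the three-generation horizon in this sketch are my assumptions for illustration:

```python
# Per-generation shrinkage implied by the listed TFRs, ignoring migration
# and assuming replacement fertility of ~2.05 children per woman.
tfr = {"South Korea": 1.1, "UAE": 1.3, "Greece": 1.3, "Italy": 1.3,
       "Finland": 1.4, "Japan": 1.4, "Spain": 1.4, "Canada": 1.5,
       "Germany": 1.6, "Brazil": 1.7, "China": 1.7}

REPLACEMENT = 2.05
for country, f in tfr.items():
    per_gen = f / REPLACEMENT
    print(f"{country}: x{per_gen:.2f} per generation, "
          f"x{per_gen ** 3:.2f} after three generations")
```

At South Korea’s rate, each generation is about 54% of the one before it, or roughly 16% after three generations.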
6 Likes

Western culture has become hyperglobalized, and as a result there’s a lack of the memetic diversity needed for resilience. You might remember my note about school shootings in Serbia, inspired by favorable coverage of school shootings in the US. Now they’re having a series of copycat attacks, because Serbian media have not been trained in how to cover terrorism without inspiring more terrorism.

The good news is that there are groups that have chosen different sets of memes, and they’re doing all right; the Amish and the Mormons (LDS) are two examples.

In summary, doing the analysis at the national level is less relevant than doing it at the memeset level (neotribe, neonation). Even within the West, there are several memesets, the simplest perhaps being the Left and the Right.

5 Likes

The problem is unfriendly natural intelligence, empowered by the global economy to invade the parent-child relationship via The Inclusion Dogma as supremacist theocracy.

On Thu, May 4, 2023, 6:13 PM James Bowery <jabowery@gmail.com> wrote:

The wireheading problem has been with us for as long as opiates have been with us. Yet, somehow, the species didn’t all go wirehead and die off.

There is really only one fundamental problem that makes wireheading an existential risk for humanity:

Mutually consenting parents are not permitted to exclude from their “villages” those they perceive to be inimical to raising their children to reproductive viability.

The problem is with humans who are supremacist theocrats about “inclusion”, not AI giving humans what they want.

On Thu, May 4, 2023 at 3:08 PM Matt Mahoney <mattmahoneyfl@gmail.com> wrote:

On Thu, May 4, 2023 at 1:30 PM Mike Archbold <jazzbox35@gmail.com> wrote:

Finally, AI seems to have caught on; it’s white hot! And such pessimism!

I’m not as pessimistic as Eliezer Yudkowsky. Maybe you read his
warning in Time magazine. He is certain that we are doomed unless we
shut down AI now. He is calling for international treaties with
decreasing limits on GPU power, and calling for air strikes on
noncompliant data centers.
The Only Way to Deal With the Threat From AI? Shut It Down | TIME

I understand his reasoning. The future is uncertain. If there is a 1%
chance that AI will wipe out humanity and prevent 1 trillion future
humans, that is worse than killing most of the world’s 8 billion
people now. AI is the solution to the $100 trillion per year problem
of having to pay people to do work that machines aren’t smart enough
to do. So it is not going to be stopped. People want this. Even people
who are aware of the problem, like Sam Altman, believe the best way to
slow down AI is to get there first, win the race, and then set the
pace. Nope.
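
The arithmetic behind that 1% comparison, spelled out (a sketch of the standard expected-value argument, using only the numbers in the paragraph above):

```python
# Expected-value arithmetic behind the "1% chance" comparison.
p_doom = 0.01
future_humans_prevented = 1e12   # 1 trillion potential future humans
current_population = 8e9

expected_future_loss = p_doom * future_humans_prevented
print(f"expected future lives lost: {expected_future_loss:,.0f}")   # 10 billion
print(f"exceeds today's 8 billion? {expected_future_loss > current_population}")
```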

Yudkowsky has spent 20 years on the alignment problem and is no closer
to a solution, so he assigns a very high probability of doom relative
to everyone else who has spent less time on it. People have proposed
using intermediate-level AI to communicate with ASI, kind of like
fleas trying to control humans by communicating through dogs. I don’t
buy it, and neither does Geoffrey Hinton, who quit Google so he could
speak freely about the dangers of AI. He pointed out on CNN that there
are no cases of any species controlling a more intelligent species.

Let’s enumerate the ways that AI might kill us, which I rank from
least to most likely below.

  1. Self-improving AI goes FOOM! When humans create smarter-than-human
    AI, then so can it, but faster. This leads to a singularity. The AI
    might or might not have goals that align with human goals, but in
    general we can’t predict, and therefore can’t control, agents that
    are more intelligent than we are.

  2. Self-replicating nanotechnology. Yudkowsky mentions bacteria-sized
    diamondoid robots extracting CHON from the air and running on solar
    power, which is already more efficient than chlorophyll. These would
    have higher reproductive fitness than plants, thus killing the food
    chain and all DNA-based life. They could be designed to infect
    people and kill everyone on a timer.

  3. AI-controlled biology. Anyone with a cheap 3-D nanoscale printer
    could engineer viruses as contagious as the common cold and as
    deadly as rabies.

  4. We solve alignment, and AI kills us by giving us everything we want.

First, I don’t believe in singularities. There is no exact threshold
where machines exceed human intelligence. You can’t compare them,
because intelligence is a broad set of capabilities that machines are
achieving one by one. The Turing test this year was a big step, but
the process really started in the 1950s, when computers began beating
humans at arithmetic. Self-improvement started centuries ago, when
companies built machines faster and stronger than humans and
reinvested their profits to grow exponentially. But we still have a
long way to go in robotics. The human body carries 10^23 bits in its
DNA and performs 10^16 transcription operations per second at 10^-14
J per operation. We can’t get there with transistors either: clock
speeds stalled around 2-3 GHz in 2010, and transistors are now at
their size limit while still using 10^5 times too much power per
operation. It will take some time to switch to computation by moving
atoms instead of electrons.
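
Taking those figures at face value, the power arithmetic works out as follows (a sketch using only the numbers quoted above):

```python
# Back-of-envelope check of the biology-vs-transistors figures above.
ops_per_second = 1e16        # transcription operations/s in the body (as quoted)
joules_per_op_bio = 1e-14    # energy per operation (as quoted)
silicon_penalty = 1e5        # "10^5 times too much power" (as quoted)

bio_power = ops_per_second * joules_per_op_bio
silicon_power = bio_power * silicon_penalty
print(f"biological power budget: {bio_power:.0f} W")          # ~100 W
print(f"same throughput in silicon: {silicon_power / 1e6:.0f} MW")
```

The 100 W result is a reassuring consistency check, since that is roughly the resting power consumption of a human body.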

Second, self-replicating nanotechnology is 60 years away at the
current rate of Moore’s law, which doubles global computing capacity
every 1.5 years. The biosphere has 10^44 carbon atoms encoding 10^37
bits of DNA, with 10^29 DNA copy operations and 10^31 RNA and protein
transcription operations per second, using 500 TW of solar power out
of the 90,000 TW available at the Earth’s surface.
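
The 60-year figure follows from doubling arithmetic. Here is a sketch; note that the current global computing throughput is my own rough assumption (~10^21 operations per second), not a number from the post:

```python
import math

# Years until global computing matches the biosphere's transcription
# throughput, at one doubling every 1.5 years (Moore's law, as quoted).
biosphere_ops = 1e31   # RNA/protein transcription ops per second (as quoted)
current_ops = 1e21     # ASSUMPTION: rough global computing throughput today
doubling_years = 1.5

doublings = math.log2(biosphere_ops / current_ops)
print(f"{doublings:.0f} doublings, about {doublings * doubling_years:.0f} years")
```

That prints about 33 doublings and 50 years; with a smaller assumed starting capacity, the estimate stretches toward the post’s 60 years.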

Third, self-replicating agents go back to the 1988 Morris worm. But
they are really hard to test. It might be easy to design a pathogen
with a 99% fatality rate, but not one with 100%; an attacker would
have to release several, which is harder. We also cannot assume that
hobbyists are rational, don’t want to infect or kill themselves, or
understand the risks. And even if they do, they will make mistakes,
because you only have one chance to get it right.

Fourth was the scenario I described in my last post. Yudkowsky’s
early attempt at human goal alignment was Coherent Extrapolated
Volition: what we would want for our future selves if we were
smarter. That won’t work, because AI is worth $1 quadrillion and will
be funded by people buying what they want right now. What we want is
neural activity in our nucleus accumbens. That is not a long-term
strategy for survival.

– Matt Mahoney, mattmahoneyfl@gmail.com



5 Likes

Having direct personal experience with Mormons, and indirect experience with the Amish (they are a big part of the local human ecology), I can attest that while both have above-replacement TFRs, their TFRs are decreasing, because both are under attack by The Inclusion Dogma. The way the Amish are attacked is less direct than the way the Mormons are (e.g., the IRS saying “nice white supremacist priesthood ya got there… it’d be a shame if something were to happen to it”):

Public health authorities started interfering in the reproductive practices of the Amish, pressuring them into the cash economy to which they were not adapted. This resulted in a cascade of cultural disintegration. There is only so much protection that Rumspringa can afford against The Supremacy Clause overriding the laboratory of the states via the 14th Amendment.

Of course, both of these are Nation of Settlers to the bone so both are the target of genocide.

PS: Here’s my scifi story, written for a high school English class, about a galaxy-sweeping molecule that makes every planet blow itself up by overstimulating their pleasure centers.

7 Likes

Well, even the Shakers, with a TFR of 0, have recently grown from two to three members, and have even been bailed out by Congress.

2 Likes