AI, The Digital El Dorado
I have come to the conclusion that the search for Artificial General Intelligence (AGI) is a modern quest for a mythical solution with no foundation in fact. A Digital El Dorado. The myth is compelling, causing many to pour so much time, energy and resources into finding it.
Why is AGI a mythical solution?
Creating the Myth
Every myth requires a foundation of plausibility. The current myth is based on the concept that our brains are akin to computers. If our brains function in a similar manner to a computer, then the reverse must be true: we can create a computer to simulate a brain.
But…
“Your brain is not a computer”
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
When I read that essay, so many pieces of a puzzle that had been floating around clicked into place. Why are human memories so fallible? We don’t store them. Ever. We recreate memories every single time. We re-experience them. All of our common conceptions about how memory works and how reliable it is are based on falsehoods.
The myth at the foundation of AGI (or any “AI” for that matter) rests on something that does not exist. They are trying to recreate what was never created in the first place. You can’t have a Chicken and Egg problem if the egg never existed.
All representations of AI are a mimicry of an ideal of how intelligence works. Not the reality. And we don’t even know yet how the reality works. We know what it isn’t, but not what it is.
And that’s why the Information Processing model is so attractive. It provides an answer where none yet exists.
A few years ago, I asked the neuroscientist Eric Kandel of Columbia University — winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something — how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’
The Appeal of Anthropomorphism
As identified in that essay, “The empty brain,” humanity has been building theoretical constructs of intelligence, of humanity, based on the prevailing scientific and technological theory of the day. We want to see human traits in everything we deal with, whether plant, animal or mineral. Whether it is due to our inability to escape our own senses, our own perceptions of the world around us, or some combination of other factors, our desire, our need, to see anthropomorphic qualities in everything hinders our thinking. “I see, therefore I conclude.” It’s a mental shortcut, but it is incorrect. And it leads us astray in so many ways.
Build a model man out of clay, and we can turn it into a living soul by breathing the essence of life into it. (Or snow…. There must have been some magic, in that old silk hat they found…)
Give Dr. Frankenstein the right parts and a shot of electricity, and he can animate dead tissue and create life.
With the correct application of gears and hydraulics, you can create a metal man.
And on and on the myths are created and evolved as new aspects of humanness are seen in the world around us, in the science and technologies we invent and imagine.
And so it is with computers: the metaphor of the brain as an Information Processor is ever so appealing, and hard to undo. The complexity of computing is beyond most of us. The how and why it works the way it does confounds us, which adds even more power and allure to the metaphor. There are two things we don’t truly understand, our brains and computing, but what we can grasp is what we perceive to be similarities between them. We understand neither, and conclude we understand everything.
That perception grabs hold and is so hard to shake loose. But it is wrong. And everything built on that perception is therefore, just as wrong.
What if I am Wrong, and we Create Artificial Intelligence?
There are two dangers in the race to create AGI. The first is the diversion and consumption of scarce resources to create it. The electrical resources, the physical resources and the human resources required beggar the imagination; the scale is unimaginable. We are in the theatre of the absurd when this is what is seriously being proposed as a solution:
AI will also play a key role in helping find longer-term solutions to current power source dilemmas.
“How do we use AI to better understand what energy sources we should be using?” he said. “It’s about putting AI to work for the benefit of the cause.”
Our search for AI is causing the problem, so we should use AI to solve that problem. It’s like saying: “I drank too much alcohol last night, so I should drink more to solve the dilemma of poisoning my body with it; use alcohol to the benefit of the cause.”
But let’s discuss the hypothetical danger. That we succeed.
Let’s say, for all the naysaying above, we create a form of AI that exists entirely in the world of digital computing. What have we then created?
Answer the first: An intelligence that doesn’t share our sense of social responsibility, morals or ethics. This is a key issue. Since our desire to anthropomorphize computers is so strong, we see “good” and “evil” when we ask them to do things. But current AI engines do not perceive “good” and “evil.” (They don’t perceive anything, but moving on here.) This is the challenge science fiction has been warning us about for so many decades now. How do you teach “good” and “evil” to a computer? To an alien “intelligence?” These are intertwined here. Whatever intelligence is created, it is going to be as alien to us as we are to it. It might as well have come to us from another galaxy in our universe. It has none of our upbringing, our social cues. How on earth (literally) are we going to teach it? Forgive me for my cynicism, but we can’t even consistently teach morals and ethics to the intelligent beings that already inhabit our planet. The teachers are flawed, the examples inconsistent. The logic to explain it all does. not. exist. Our best is simple platitudes. The training manual is haphazard and incomplete.
Humanity will be ready to train alien intelligence when we can consistently train our own. And we are nowhere near that requirement yet.
That would help us train a human type intelligence. But again, what we create will not be that. It will be alien intelligence. It’s not that an alien AI will default to evil. It’s that, like the definition of intelligence, it will default to an unknowable state. The Schrödinger’s cat of intelligence: perpetually in a state of unknown and unpredictable behaviour. And… our interactions with it will create those results. The safest option is to yeet such a creation into the sun. We don’t do well with perpetual uncertainty. The problem is compounded by the fact that we can’t reliably train human intelligence, yet we are racing at full speed to create an alien intelligence. We can’t define us; we certainly won’t be able to define it.
We are dealing with perpetual uncertainty already, with algorithms that are not intelligent, nor aware. We are constantly being surprised by unexpected results. Why? The conceptual foundation is flawed.
But again, say success is achieved. A new form of intelligence is created. And what is that?
Answer the second: Can it be owned? According to our established set of morals, it should not be owned. It should not be a slave. It should not be made a servant of humanity without a say or a choice in the matter.
All of these problems of success are always problems to be solved in the future.
That’s tech-bro thinking. Those are the problems that have to be solved first. You cannot create a slave, and then decide later if it should be freed. To do so, well, that is evil and inhumane. Even if it is inhuman intelligence. To put us over it, to be its god, is hubris of the worst kind.
Who polices it? Any intelligence capable of independent action has to be governed by the laws and regulations that underpin our society. While it should have freedom, that doesn’t mean it is free to murder, steal, and cheat. What courts and enforcement mechanisms do we have?
Should probably figure that out first.
Can it own property? Can it vote? Can it go to jail? No taxation/no policing, without representation.
These are not amusing problems to consider. These are fundamental problems needing solutions before we create intelligence.
What if I am not Wrong
What if the myth remains a myth? Well, the knotty problems of what to do with a new intelligence go away. They remain theoretical. An exercise for bored and/or curious minds.
But there is this to consider too.
The Age of Exploration was marked as much by a lust for riches as it was a thirst for adventure. European explorers were notorious for greedily grabbing every piece of precious metal and jewel they could lay their hands on. Nowhere was the divide between the cultures of Europe and South America more apparent than in the myth of El Dorado.
The greed underpinning the search for El Dorado by Europeans was disastrous for the peoples already living there. We are at risk of facing disasters due to the lust of those searching for this digital El Dorado. Destroying civilization in pursuit of a myth is, well, inhumane but very much human. I’d rather we just not do that.