Artificial Intelligence Showstoppers
Posted November 11, 2020
With Honda apparently moving ahead of Volvo, Google’s Waymo, and VW’s Porsche in announcing AI-based automobiles, artificial intelligence cruises ahead. But perhaps we drivers should remain alert to avoid a crash.
During this portentous month, my publication Gaming AI: Why AI Can’t Think But Can Transform Jobs went live on Amazon and at Discovery Institute. I received a provocative note from philosophical writer Denyse O’Leary.
She wanted more clarification and specifics in my list of artificial intelligence and self-driving showstoppers on page 50.
My book separates the hype about a “singularity” or new machine “mind” from the real prospects of artificial intelligence to enhance the power and value of human jobs and minds.
I maintain that when Silicon Valley’s AI theorists, such as Elon Musk, Ray Kurzweil, and Larry Page, push the logic of their case to a “singularity,” they defy the most crucial findings of twentieth-century mathematics and computer science.
Kurt Gödel’s incompleteness theorem of 1931 and Alan Turing’s “oracle” machines of 1939 both prohibit the creation of machine minds independent of human input.
The fashionable singularity scenario depends on a set of little-understood assumptions common in the artificial intelligence movement.
Let’s dive in…
Assumptions in the AI Movement
In response to Denyse O’Leary, I set out each assumption below alongside the contrary reality of an actual mind.
- The Modeling Assumption: A computer can deterministically model a brain. Reality: A brain operates by principles radically different from a machine’s and irreducible to mechanics. Information is measured by entropy, that is, by surprising bits, which cannot be reliably generated by a deterministic system.
- The Big Data Assumption: The bigger the dataset, the better; there are no diminishing returns to big data. Reality: digital data must be selected and defined by humans, and it declines in value and reliability as it expands. Some of the most successful companies of the AI era, such as Peter Thiel’s Palantir, are devoted to selecting and preparing data for machine processing.
- The Binary Reality Assumption: Reliable links exist between maps and territories, between computational symbols and their objects. Reality: As the Harvard-trained mathematician and philosopher Charles Sanders Peirce explained around the turn of the 20th century, symbols, such as mathematical codes and computer algorithms, must be linked to their objects by a human “interpretant.”
- The Ergodicity Assumption: In the world, the same inputs always produce the same outputs. Reality: the world generates a huge multiplicity of outcomes from a tiny number of regularities (physical laws).
- The Locality Assumption: Actions of human agents reflect only the immediate physical forces impinging directly on them. Reality: minds respond not only to local inputs but also to imagined contraries, remembered outcomes, influences from elsewhere, recalled history, and projected futures around the globe.
- The Digital Time Assumption: Time is objective and measured in discrete increments according to a computer’s clock pulse. Reality: Time runs in analog continuity with infinite gradations. Time is both continuous and infinite, not discrete and capped.
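The entropy notion invoked in the Modeling Assumption can be made concrete. Here is a minimal Python sketch of Shannon surprisal and entropy; the probabilities are illustrative assumptions of mine, not figures from the book:

```python
import math

def surprisal(p):
    """Surprisal in bits of an event with probability p: rarer events carry more information."""
    return -math.log2(p)

def entropy(probs):
    """Shannon entropy: the expected surprisal over a distribution."""
    return sum(p * surprisal(p) for p in probs if p > 0)

# A certain (deterministic) outcome carries zero surprise...
print(surprisal(1.0))        # 0.0 bits
# ...while an unlikely event carries many bits of information.
print(surprisal(1 / 1024))   # 10.0 bits

# A deterministic source (one outcome, probability 1) has zero entropy;
# a uniform source over 8 outcomes yields 3 bits per message.
print(entropy([1.0]))        # 0.0
print(entropy([1 / 8] * 8))  # 3.0
```

This is the sense in which a purely deterministic system generates no surprise: its output distribution collapses to a single certain outcome, whose entropy is zero.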
Denyse O’Leary thought that these assumptions, particularly the final two, need further clarification: a description of the alternative assumptions that apply to human minds, which are neither mechanical nor deterministic.
If my readers wish to pursue further complexities in the physics itself, you can explore “quantum entanglement theory” (as in my daughter’s Age of Entanglement [Knopf]), whereby the locality assumption is denied in physics itself.
In order to assure correspondence between logical systems and real-world causes and effects, engineers have to interpret the symbols rigorously and control them punctiliously and continuously. You need real minds involved. Computers “learning” from big data do not suffice.
As I discuss in Gaming AI, the autonomous automobile is a good test case of the limits of AI. Honda has just announced that it has achieved Level 3 autonomy and will be launching self-driving cars early next year.
But there is a catch.
The Fine Print
Honda, for example, wants drivers to remain alert for emergencies. Keeping drivers on constant alert nullifies most of the benefit of self-driving cars. Do you really want to sit with your hands hovering over the steering wheel and your foot over the brake?
My view is that self-driving is a hardware problem. It requires sensors that can outperform humans in seeing objects far ahead on the road and off it by using frequency bands beyond the small span of visible light. The car’s “eyes” must be able to compensate for the rearview foibles of the AI map. It cannot assume that the cumulative database from the past — its deterministic rearview mirror world — will hold in the future.
Your self-driving car must navigate a world that everywhere diverges from its maps, that undergoes combinatorial explosions of novelty, black swans fluttering up and butterfly effects flapping, that incurs narrowly local weather events, that presents a phantasmagoria of tumbling tumbleweeds, plastic bags inflated by wind, inebriated human drivers, pot-headed pedestrians, and other high-entropy surprises.
Self-driving vehicles assume the congruence of digital maps with digital territories. But to achieve real congruence, you have either to change the cars or change the territories.
Most existing self-driving projects rely on changing the territories. The Chinese, who lead the field, are building entire new urban architectures to accommodate the cars, which in turn become new virtual railroads. This goal differs radically from the idea of the singularitarians foreseeing vehicles independent of human guidance or control.
The map is a low-entropy carrier. The world is a flurry of high-entropy, noisy messages, with its relevant information gauged by its degree of surprisal. To deal with the real world, self-driving cars need to throw away the AI assumptions and learn to see.
Today’s Prophecy: Computers cannot truly see. To define the “connectome” of a single human brain (all its dendrites, synapses, and other links) entails more than a zettabyte of information. A zettabyte, 10 to the 21st power bytes, comprises close to all the memory attached to the entire global internet.
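The scale of a zettabyte is easy to lose track of. A quick sanity check in Python; the connectome figure is the text’s, while the terabyte comparison is my own illustrative assumption:

```python
# One zettabyte, as cited for a single brain's connectome.
zettabyte = 10**21  # bytes

# For scale: a typical consumer drive today holds about one terabyte.
terabyte = 10**12   # bytes (illustrative assumption)

# How many terabyte drives would one connectome fill?
print(zettabyte // terabyte)  # 1000000000 -- a billion drives
```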
Meanwhile, the energy use of the human zetta-brain differs so radically from the energy consumption of a computer or datacenter as to signify completely different principles of operation. While the internet and its silicon devices consume gigawatts, a human brain is made of carbon and works on 12 to 14 watts. That means a human brain is on the order of a billion times more energy efficient than a computer brain.
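The “billion times” claim is back-of-the-envelope arithmetic, and it can be checked as such. Both figures below are rough illustrative assumptions (roughly 20 gigawatts for internet-scale silicon, 13 watts for the brain), not measurements from the text:

```python
# Back-of-the-envelope check of the "billion times more efficient" claim.
silicon_watts = 20e9  # assumed internet-scale silicon draw: ~20 gigawatts
brain_watts = 13      # human brain: roughly 12 to 14 watts

ratio = silicon_watts / brain_watts
print(f"Brain draws ~{ratio:.1e} times less power")  # on the order of 10**9
```

Under these assumptions the ratio lands around 1.5 billion, consistent with the order-of-magnitude claim, though a stricter comparison would weigh power against useful computation performed, not raw wattage alone.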
Real artificial intelligence will require going beyond mere silicon and binary logic into a new carbon substrate for intelligent machines.
Editor, Gilder's Daily Prophecy