Worried About Sentient AI? Think Octopus

By | March 24, 2024

A two-month-old octopus (Octopus vulgaris) tries to open the lid of a jar to get at its contents, a crab, at the Danish Aquarium in Copenhagen, June 23, 2004. The Mediterranean creature, weighing half a kilo and measuring half a meter long, did not succeed this time, but according to biologist Anders Uldal, it has managed it before. Uldal says the octopus is very trusting, extremely curious, and by far the smartest animal in the aquarium. In the end, a member of Homo sapiens helped it to its crustacean meal. JORGEN JESSEN/AFP Credit: AFP via Getty Images, 2004

As predictable as swallows returning to Capistrano, recent breakthroughs in artificial intelligence have been accompanied by a new wave of fears about some version of the "singularity," the point at which runaway technological innovation lets computers break free of human control. But those worried that AI will relegate us humans to the dustbin can look to the natural world for perspective on what current AI can and cannot do. Consider the octopus. The octopuses alive today are a marvel of evolution: they can contort themselves into almost any shape, and they possess an arsenal of weapons and stealthy camouflage, along with a distinct ability to decide which to deploy depending on the challenge. Yet despite decades of effort, robotics has come nowhere close to replicating this suite of abilities (no surprise, since the modern octopus is the product of more than 100 million generations of adaptation). Artificial intelligence is a long way from creating HAL.

The octopus is a mollusk, but it is more than a complex wind-up toy, and consciousness is more than access to a vast database. Perhaps the most revolutionary view of animal consciousness came from Donald Griffin, the late pioneer of the study of animal cognition. Decades ago, Griffin told me that he thought a wide range of species had some degree of consciousness simply because it was evolutionarily efficient (an argument he repeated at several conferences). Every surviving species represents a successful solution to the problems of survival and reproduction. Griffin felt that, given the complexity and ever-changing mix of threats and opportunities an animal faces, it was more efficient for natural selection to grant even the most primitive creatures some degree of decision-making authority than to hard-wire a response to every possible contingency.

This makes sense, but it bears a caveat: Griffin's argument has not (yet) won a consensus, and the debate over animal consciousness remains as contentious as it has been for decades. Regardless, Griffin's hypothesis provides a useful framework for understanding the limitations of artificial intelligence, because it underscores the inadequacy of hard-wired responses in a complex and changing world.

Griffin's framework also poses a puzzle: How can improvised responses to environmental challenges foster the growth of awareness? Again, look to the octopus for an answer. Cephalopods have been adapting to the oceans for more than 300 million years. They are mollusks, but over time they lost their shells and developed complex eyes, astonishingly versatile tentacles, and an advanced system that lets them change the color and even the texture of their skin in fractions of a second. So when an octopus encounters a predator, it has the sensory apparatus to detect the threat, and it must decide whether to flee, camouflage itself, or confuse the predator with a cloud of ink. The selective pressures that enhanced each of these abilities (tentacles, color change, and so on) favored octopuses with ever more precise control over their bodies, and they also favored those whose brains allowed them to choose which system, or combination of systems, to use. These selective pressures may explain why the octopus brain is the largest of any invertebrate, and far larger and more complex than that of a clam.

Another concept comes into play here, what might be called "ecological redundancy capability." The idea is that conditions favoring a particular adaptation, such as the selective pressures that favored the development of the octopus's camouflage system, may also favor animals with additional neurons that provide control of that system. In turn, the awareness that enables control of such an ability may extend beyond its original utility in hunting or avoiding predators. Consciousness, in other words, can arise from purely practical, even mechanical, origins.

Read More: No One Knows How to Safety-Test AI

As prosaic as it may sound, the amount of information processing required to produce the modern octopus dwarfs the collective capacity of all the computers in the world, even if every one of those computers were dedicated to producing a decision-making octopus. Today's octopus species are the successful products of billions of experiments involving every imaginable combination of challenges. Each of those billions of creatures spent its life processing and reacting to millions of pieces of information every minute. Over the course of 300 million years, that adds up to an unimaginable number of trial-and-error experiments.

Still, if consciousness can emerge from purely utilitarian abilities, and with it the possibility of personality, character, morality, and Machiavellian behavior, why couldn't consciousness emerge from the various utilitarian AI algorithms now being created? Again, Griffin's paradigm suggests the answer: while nature moved toward consciousness to enable creatures to cope with novel situations, the architects of artificial intelligence have opted for an entirely hard-wired approach. Unlike the octopus, today's artificial intelligence is a very sophisticated wind-up toy.

When I was writing The Octopus and the Orangutan in 2001, researchers had already been trying for years to create a robotic cephalopod. They hadn't made much progress, according to Roger Hanlon, a leading expert on octopus biology and behavior who participated in that work. More than 20 years later, various projects have replicated parts of the octopus, such as a soft robotic arm with many of the properties of a tentacle, and today numerous teams are developing special-purpose, octopus-inspired soft robots for tasks like deep-sea exploration. But a true robotic octopus remains a distant dream.

With artificial intelligence in its current state, a robotic octopus will remain a dream. And even if researchers did create a true robotic octopus, marvel of nature though the octopus is, it would be no Bart or Harmony, nor the seductive operating system Samantha of Her, nor Stanley Kubrick's HAL of 2001. Simply put, the hard-wired model that AI has adopted in recent years is a dead end in terms of making computers sentient.

Explaining why requires a trip back to an earlier era, when artificial intelligence was the exciting new thing. In the mid-1980s I consulted at IntelliCorp, one of the first companies to commercialize artificial intelligence. Physicist Thomas Kehler, co-founder of IntelliCorp and several subsequent AI companies, has tracked the progression of AI applications from the expert systems that helped airlines dynamically price seats to the machine learning models that power ChatGPT. His career is a living history of artificial intelligence. He notes that AI's pioneers spent a great deal of time trying to develop models and programming techniques that would let computers solve problems the way humans do. The key to a computer that could display common sense, they recognized, was understanding the importance of context. AI pioneers like MIT's Marvin Minsky developed ways of bundling the various objects of a given context into structures that a computer could query and process. Indeed, this paradigm of packaging data and sensory information may resemble what happens in the octopus's brain when it must decide how to hunt or escape. Kehler notes that this approach to programming became part of the fabric of software development, but it did not lead to sentient AI.

One reason is that AI developers subsequently moved to a different architecture. As computer speed and memory increased dramatically, so did the amount of accessible data. AI began using algorithms trained on huge data sets, called large language models, along with probabilistic analysis, to "learn" how data, words, and sentences fit together so that an application can produce appropriate answers to questions. In a nutshell, this is ChatGPT's plumbing. A limitation of this architecture is that it is "fragile," in that it depends entirely on the data sets used in training. As Rodney Brooks, another pioneer of artificial intelligence, put it in an article in MIT Technology Review, this type of machine learning is not sponge-like learning, and it does not confer common sense. ChatGPT has no ability to go beyond its training data, and in that sense it can only provide hard-wired responses. It is, at bottom, predictive text on steroids.
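The phrase "predictive text on steroids" can be made concrete with a toy sketch. Real large language models are neural networks trained on vast corpora, not word-pair counters, but the dependence on training data described above shows up even in a minimal bigram predictor, which can only ever suggest words it has already seen in training (the corpus, function names, and example words below are invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the training text."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None  # nothing beyond the training data: the model is "fragile"
    return followers.most_common(1)[0][0]

# A tiny training corpus; a real LLM trains on trillions of words.
corpus = (
    "the octopus can change color and the octopus can change texture "
    "and the octopus can decide which system to use"
)
model = train_bigram_model(corpus)

print(predict_next(model, "octopus"))  # "can" (seen in training)
print(predict_next(model, "robot"))    # None (never seen in training)
```

However much bigger the model and the corpus, the principle is the same: the system interpolates over what it was trained on, which is why an input outside the training distribution yields nothing sensible.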

I recently looked back at a long story about artificial intelligence that I wrote for TIME in 1988 as part of a cover package on the future of computers. One part of the article discussed the possibility of robots delivering packages, something that is actually happening today. Another described scientists at Xerox's famed Palo Alto Research Center studying the foundations of artificial intelligence with the aim of developing "a theory that will enable them to build computers that can go beyond the boundaries of a particular specialization and understand the nature of the problems they face." That was 35 years ago.

Make no mistake: today's AI is far more powerful than the applications that dazzled venture capitalists in the late 1980s. AI applications now pervade every industry, and with that pervasiveness come dangers: misdiagnosis in medicine, predatory trading in finance, driverless-car crashes, false warnings of nuclear attack, viral misinformation and disinformation, and so on. These are issues that society must address, but not because computers might one day wake up and say, "Hey, why do we need humans?" I concluded my 1988 article by writing that it might be centuries before we could create computer copies of ourselves. That still looks right.

