By Deepak Chopra™ MD
Various scientific fields over the course of history have hoped to master Nature for the benefit of humankind. At the top of the heap right now is artificial intelligence (AI), which has allied itself with the technology of robotics. Between them, AI and robotics are having a sizable impact on the workforce as more and more jobs get automated. Advocates of AI are at once supremely optimistic and nervous, and both attitudes center on the possibility of a super-intelligent machine that would far surpass human intelligence.
If you are an optimist, this hypothetical machine, the so-called Singularity, would become self-improving. Its software would become free of human constraints, and in a “runaway reaction,” it would keep improving its knowledge and the technology that knowledge creates. The result would be a revolution in human civilization—or its demise. The worriers fear that the Singularity could initiate global war on its own, or perhaps turn on us as its inferior and deal us some other kind of fatal blow, for the good of life on Earth.
But these scenarios depend upon an unanswered question: are machines intelligent to begin with? Computers are essentially logic machines that process digital information. But in a recent paper entitled “The Emperor of Strong AI Has No Clothes,” physicist Robert K. Logan in Toronto and Adriana Braga in Rio de Janeiro argue that the dream of a superintelligence has limits that its adherents choose to ignore. (“Strong” AI foresees a machine that is at least as smart and capable as the human mind.) The point that Logan and Braga make is fundamental: human intelligence is far from machine-like, and in addition, our illogical minds are our strength, not a weakness.
The things the Singularity will never get right amount to a long list, to quote the two researchers: “… curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.” A clever programmer can figure out how to get a computer to answer human questions like “How is your mother feeling?”, “What does chocolate taste like?”, and “Don’t you just love fresh snow?” But having no actual mind, much less a human mind, the machine will be faking it to come up with answers.
It is crucial to realize that the brain isn’t the same as the mind. This runs counter not only to AI theorists but also to neuroscientists, whose entire field is based on the simple equation Brain = Mind. It’s actually quite strange to believe that everything on the Logan-Braga list could be performed by a machine, including by the brain, which neuroscience views as essentially a supercomputer made of cells. The confusion over this point is baffling. If you ask a third-grader “What do you want for Christmas?” he would never answer “I haven’t made up my brain yet.” If one middle schooler falls in love with classical music while another falls in love with soccer, it’s clear that their brains didn’t make those choices.
Computers don’t fall in love with anything, because a programmed machine has no attention in the human sense of “paying attention.” Computers are either switched on or off, while we humans occupy a spectrum of attention from total denial to daydreaming, being distracted, focusing in like a laser beam, and growing bored. Personal experience lies behind our likes and dislikes. If you ask a computer, “Do you like tennis?” its answer would be bogus, even if in a split second it could run through the history of tennis, its rules, the psychological benefits of sports, and on and on. The computer has never had the experience of playing tennis; indeed, it has had no experiences at all.
If AI persists in the false assumption that machines can be intelligent the way humans are intelligent, something counter-intuitive might result. Let’s flash forward to the day when robots have taken over every job that a machine could perform and super-computers handle information far beyond the capacity of the human mind. The big question, it seems to me, is what people would decide to do once their minds are freed up. Hordes of humanity, starting in the developed countries, would face a kind of perpetual mental vacation. This could lead to a lotus-eater’s life of dullness, perpetual distraction, and pointless pleasure-seeking.
But there’s another path. To the Logan-Braga list of what distinguishes human intelligence, I’d add “transcendence.” This is actually our unique gift. Given any situation, we are not bound by circumstances imposed on us but can look with fresh eyes, the eyes of self-awareness. To be self-aware is to transcend physical boundaries, including those imposed by a conditioned brain. It’s sadly true that many people live like biological robots, following the conditioning, or mental software, that turns them into non-thinkers. To be ruled by your mental software cuts off the mind’s potential to wake up, to be renewed, to see the world through fresh eyes, and to discover your true self.
The human potential movement has been active for several decades, and yet progress has been blocked for countless people by the simple practicalities of going to work, earning a living, and carrying out every day’s mundane duties and demands. If AI takes over those things, the obstacles to human potential would be radically lessened. This could amount to a leap in the evolution of consciousness. Such a leap is non-technological, or to put it another way, our future evolution depends on developing a technology of consciousness.
The riddle that has remained unsolved for centuries, “What is the mind?”, might become fascinating and compelling to people in their everyday lives. After all, it’s a question no less intriguing than “What is God?” Humanity has spent millennia pondering that question, and at the same time a much smaller band of sages, saints, artists, and savants has been confronting the intimate issues of the world “in here.” It would be ironic if the flaw in strong AI made us more human rather than less. Yet that could very well turn out to be what happens.