The massive collection of human skulls amassed by the American scientist Samuel Morton in the early 19th century highlights many of the dangers of drawing sweeping conclusions from data — both in his own time and in ours. By painstakingly pouring lead shot into the skulls’ cranial cavities and then decanting it back into measuring cylinders, Morton estimated brain sizes in cubic inches. His aim was to classify and rank the five “races” of the world: African, Native American, Caucasian, Malay and Mongolian.
Morton’s research, which placed Caucasian skulls at the top of his hierarchy and African ones at the bottom, was widely cited by imperialists, colonialists and segregationists as “hard data”, objectively proving the relative intelligence of different races and justifying the supremacy of white men and the legitimacy of slavery.
Not only were Morton’s measurements later found to contain egregious errors — he ignored the basic fact that the bigger the human, the bigger the brain — but many of the assumptions underlying his research have since been debunked, too.
Brain size does not equate with intelligence. There are no separate human races that constitute different biological species. It is therefore nonsense to classify “races” according to innate intellect or character.
“The practices of classification that Morton used were inherently political and his invalid assumptions about intelligence, race and biology had far-ranging social and economic effects,” writes Dr Kate Crawford in Atlas of AI. All classification systems, she warns, are embedded in a framework of societal norms, opinions and values, even in the supposedly hyper-rational artificial intelligence programs we use today.

AI, more accurately and narrowly defined in most cases as machine learning, is designed to discriminate: to identify patterns and separate the exceptional from the norm. We must therefore be extremely careful in deciding what we choose to classify, how we select our data and build our models and what actions we take based on their output. Machine learning systems run our internet search engines, recommend products and services to us, influence hiring and firing decisions in the workplace and parole and sentencing decisions in the courtroom.
Crawford’s is one of a flurry of recent books on AI, the general-purpose technology that is increasingly defining our algorithmic age. Many of them rehash similar and familiar arguments, based on overheated imaginations or rash extrapolation, but all three books reviewed here bring fresh and very different perspectives to the discussion. Downplaying the existential risks posed by AI, the three authors focus on the more immediate and practical challenges of using machines to enhance humanity, rather than replace it.
In Crawford’s view, AI is neither artificial nor intelligent. “Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labour, infrastructures, logistics, histories and classifications. AI systems are not autonomous, rational or able to discern anything without extensive computationally intensive training with large data sets or predefined rules and rewards.” In a sense, she writes, AI is therefore a registry of power.
Crawford, who is a senior principal researcher at Microsoft Research, provides a valuable corrective to much of the hype surrounding AI and a useful instruction manual for the future. While the giant tech companies and venture capital firms are pouring billions of dollars into developing AI systems, governments, regulators and civil society are scrambling to catch up with the latest breakthroughs and comprehend their implications. Crawford creates a strong framework to understand the dangers of this technological revolution as well as its environmental costs and suggests how we can best steer it towards more positive outcomes.
In his fascinating and more abstract book A Thousand Brains, Jeff Hawkins investigates the very nature of intelligence, of both the human and the machine kind. Even though Hawkins has spent almost 20 years at the Redwood Center for Theoretical Neuroscience, he freely admits that no one really understands how the three-pound mass of cells floating around in our heads works. “It remains a mystery,” he writes. “Reverse engineering the brain and understanding intelligence is, in my opinion, the most important scientific quest humans will ever undertake.”
His own contention is that the human brain is a constant dialogue between our old reptilian brain, which runs our instinctive survival mechanisms, and our mammalian neocortex, which is responsible for thinking and consciousness. In his view, which goes beyond much of the latest research in neuroscience, the neocortex functions as a kind of “memory prediction framework”, building models about how the world works and then constantly making predictions based on those models. Each one of our roughly 150,000 cortical columns is a tiny learning machine, ceaselessly adapting its reference frame in response to the external stimuli it receives, such as the saccadic movements of our eyes, which send fresh inputs to the brain three times a second.
In a form of distributed neural democracy, our brain then counts the “votes” from all of the cortical columns to form a unified understanding of what is going on in the world at any particular time. What does that cup of coffee look like and how does it smell, taste, feel? “Perception is the consensus columns reach by voting,” Hawkins writes. This is what he calls the “Thousand Brains Theory of Intelligence”.
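To make the voting idea more concrete, here is a minimal, purely illustrative sketch in Python, loosely inspired by Hawkins’s description rather than drawn from his actual model: each simulated “column” makes a noisy local guess about the object being sensed, and the brain-wide percept is simply the majority vote. The function names (column_guess, perceive) and the 70 per cent per-column accuracy are invented for the example.

```python
# Illustrative toy only: a majority-vote "percept" across many noisy columns,
# loosely inspired by the Thousand Brains idea, not Hawkins's actual algorithm.
from collections import Counter
import random

OBJECTS = ["coffee cup", "stapler", "phone"]

def column_guess(true_object: str, accuracy: float = 0.7) -> str:
    """One cortical column's noisy, local hypothesis about the object."""
    if random.random() < accuracy:
        return true_object
    return random.choice([o for o in OBJECTS if o != true_object])

def perceive(true_object: str, n_columns: int = 1000) -> str:
    """Consensus percept: the object that receives the most column votes."""
    votes = Counter(column_guess(true_object) for _ in range(n_columns))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    print(perceive("coffee cup"))  # almost always prints "coffee cup"
```

Even though each individual column is wrong almost a third of the time, a thousand of them voting together almost never are, which captures the spirit, if not the substance, of Hawkins’s claim that perception is a consensus.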
If his theory is correct, then this has big implications for how AI practitioners should be thinking about machine intelligence, too. At present, AI systems are only good at one narrow thing — such as playing chess or Go, or reading X-rays — and have no general intelligence. Researchers will have to figure out ways to provide human-like reference frames enabling these systems to learn how to learn. “Nothing we call AI today is intelligent,” Hawkins writes. “Yet there are no technical reasons preventing us from creating intelligent machines.”
Indeed, if reference frames can be built into AI systems then machines could far exceed human capabilities. Whereas human neurons take at least five milliseconds to do anything useful, silicon transistors operate a million times faster, meaning such machines could potentially think and learn a million times faster than we can. And although the physical wiring of the brain is limited, machine intelligence would have almost limitless flexibility of connectivity.
In the final section of his book, Hawkins entertainingly speculates about the possibility of other forms of intelligence in the universe. He makes the perfectly logical case — impossible to contest with any evidence to the contrary — that any form of extraterrestrial intelligence that is sophisticated enough to find us is unlikely to threaten humanity. For that reason, he suggests we should accelerate our efforts to contact any other lifeforms by messaging extraterrestrial intelligence, or METI. One way of doing so would be to launch giant “sun blockers” into space that would orbit the sun for millions of years and disrupt its natural light patterns to signal our presence to any aliens out there.
Meanwhile, back on Earth, Dr Kate Darling explores a new way to think about our relationship with robots in her book The New Breed. All too often, we anthropomorphise robots, imbuing them with names (“hey Alexa!”), features and characteristics. More than 80 per cent of Roomba robot vacuum cleaners have been named by their owners, for example: Dustin Bieber and Meryl Sweep are among the more inspired choices. We also tend to think of robots in adversarial terms, as if locked in a never-ending struggle between humans and machines. But there is a false determinism in our current fears about job losses and robot takeovers, she says.
Instead, Darling, a researcher at the Massachusetts Institute of Technology, argues for a more collaborative approach. We should regard robots more like animals, she says: both can be used for work, weaponry and companionship. Just as we use truffle-finding pigs, mine-detecting rats, message-delivering pigeons, submarine-finding dolphins and baggage-carrying donkeys, so we can deploy robots to do all kinds of dull, dirty and dangerous work. The obvious difference is that while animals are alive, robots can no more suffer than a kitchen blender can.
Lawyers are already fretting about how we hold automated systems accountable for harm to humans. Is a crash caused by a self-driving car the responsibility of the operator, car manufacturer or software engineer? Darling suggests that, for better or worse, we also have much to learn here from the history of how we have treated animals. For centuries, we have held the owners of dangerous animals accountable for their actions and have killed out-of-control beasts that have harmed humans. In the Middle Ages, a wave of animal trials swept across Europe with goats, dogs, cows and horses going to court. The first recorded pig trial took place in 1266 in Fontenay-aux-Roses, near Paris, after which the offending animal was burnt for eating a child.
Most legal systems distinguish between letting a hamster loose into the wild and letting a tiger loose, and we may need similar distinctions for robots. Creative lawyers should be adapting existing laws to these new challenges, even if invisible, automated decision-making systems operate at a scale and complexity far greater than anything experienced before. But Darling warns that we should guard against the “moral crumple zone”, to use the AI researcher Madeleine Clare Elish’s telling phrase, where we tend to blame the operator of the machine more than its designer.
Darling summarises the thinking of all three authors when she writes that we need to make powerful new technologies work for us, rather than the other way round. “The true potential of robotics isn’t to re-create what we already have. It’s to build a partner in what we’re trying to achieve.”
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford, Yale University Press £20.95, 288 pages
A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins, Basic Books £22.99/$30, 288 pages
The New Breed: How to Think About Robots by Kate Darling, Allen Lane £20/Henry Holt $14.99, 336 pages
John Thornhill is the FT’s innovation editor
Join our online book group on Facebook at FT Books Café