
Summary of Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

Over vast stretches of time, life has passed through Life 1.0: biological evolution. In Life 2.0, people gained knowledge about politics, art and science that changed how they see the world and their own purpose. During Life 3.0, which may come within the 21st century, humans will use powerful technology to transcend evolution.


MIT theoretical physicist Max Tegmark explores what may happen and what it means. He becomes your guide through complex terrain – the nature of life, intelligence and computation; the physics of energy; the future of the universe; and the questions people will face in a seriously different future world.

Developments in AI may enable Life 3.0, an era in which people design their own software and hardware.
Tegmark details how, unlike individual people, individual bacteria don’t learn how to survive and reproduce; they simply do it. Their DNA dictates the structure of their bodies – their hardware – as well as how they identify, move toward and consume their sources of food – their software. Bacteria acquired these characteristics through evolution by natural selection; any improvements, the author explains, came by way of random DNA mutations. Bacteria exemplify “Life 1.0” in that their hardware and software result from biological evolution.

People, Tegmark notes, can’t perform basic survival tasks when they are first born. They enter the world with the capacity to learn from their relatives, teachers, religious leaders and other mentors during childhood. As children grow older, they can take more control of the knowledge and skills they acquire. They may decide to become doctors, lawyers or mathematicians and to take the necessary steps to acquire the requisite knowledge and skills. Thus, according to the author, people exemplify “Life 2.0.”

Like Life 1.0, Life 2.0 is tethered to biological evolution, Tegmark explains, since the capacity to learn requires brains of a certain size or complexity. People design and create Life 2.0’s software. Life 2.0 enabled people to progress from hunter-gatherers to farmers and builders of cities. It brought writing, the printing press, modern science and, ultimately, computers and the internet – all at a pace that makes biological evolution seem slow.

“Yet despite the most powerful technologies we have today,” Tegmark writes, “all life forms we know of remain fundamentally limited by their biological hardware.” To transcend biological evolution, in his analysis, people must move past Life 2.0 to Life 3.0. This demands a technological evolution in which people design their own software and hardware. Artificial intelligence may make Life 3.0 possible within the 21st century.

Intelligence is the capacity to achieve complicated goals.
In Tegmark’s view, the appearance of intelligent matter that can form memories, perform math and acquire new knowledge marks an astounding transition. As he writes, “One of the most spectacular developments during the 13.8 billion years since our big bang is that dumb and lifeless matter has turned intelligent.”

He suggests that you can understand intelligence in a wide variety of ways. For example, you can perceive it as the ability to learn or solve problems in mathematics, as the capacity to make and carry out plans, or as the talent for creativity and emotional insight.

People have different possible forms of intelligence. With this in mind, Tegmark advises distinguishing between “narrow” and “broad” intelligence. For example, in 1997, IBM’s Deep Blue computer defeated world chess champion Garry Kasparov, but playing championship-level chess was the only thing the computer could do. Other AI systems can play a variety of computer games at least as well as people can.

In contrast, human intelligence is wide-ranging. From an early age, people can acquire skills in countless different areas like playing computer games and sports and learning languages, science and mathematics. The goal of most AI research is to create human-level artificial intelligence or artificial general intelligence (AGI). A machine with AGI will be capable of realizing nearly any goal as well as, if not better than, humans. The concept of “a goal” is ethically neutral, so a machine with AGI, Tegmark acknowledges, might be able to achieve a morally horrifying goal more efficiently than any person.

Even before AI reaches the level of human intelligence, it will open remarkable new possibilities for human life.
Tegmark characterizes current AI as extremely narrow. Individual systems are able only to pursue and achieve extremely concrete goals, such as winning chess games. Human intelligence can achieve a broad array of different goals. According to Moore’s law, the power of computing doubles every two years. If that progress continues apace, long before AI achieves human-level AGI, it may provide humans with a range of new possibilities and opportunities.
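As a rough illustration of what Moore’s-law compounding implies, the sketch below models computing power doubling every two years (an idealized model for illustration, not a claim from the book):

```python
# Idealized Moore's-law growth: relative computing power doubles
# every `doubling_period` years.
def compute_power(years, doubling_period=2.0, start=1.0):
    """Return relative computing power after `years` years."""
    return start * 2 ** (years / doubling_period)

# Over 20 years, ten doublings multiply power by 2**10 = 1024.
print(compute_power(20))  # 1024.0
```

Even if the doubling period stretches out, the exponential character of the trend is what makes long-run forecasts so dramatic.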

Everything that people associate with human life and civilization depends upon intelligence. Tegmark illustrates ways that AI can take on characteristics associated with human beings and become increasingly difficult to distinguish from human life as it becomes more powerful and sophisticated. AI may even change how people regard themselves. Since – even in the short term – AI will help people do what they already do, it may dramatically improve the quality of human life. The challenge is for people to enjoy AI’s benefits without generating new, unforeseen problems. “The goal of AI should be redefined,” the author writes. “The goal should be to create not undirected intelligence, but beneficial intelligence.”

Future AIs may change manufacturing, transportation, health care, the legal system and national security.
Developments in AI may lead to progress in science and technology, which in turn, Tegmark explicates, would affect industry, transportation, the criminal justice system and conflict.

The author notes that AI already improves manufacturing techniques, including the safety, precision and efficiency of robots used in manufacturing. Industrial-scale manufacturing uses large robots, but ordinary people can use AI on a more modest scale, such as employing computer-controlled equipment for personal and community projects. Industrial accidents involving robots have killed people, but, overall, technological advances have reduced the number of fatal industrial accidents.

In transportation, Tegmark asserts that AI can save even more lives. In 2015, motor vehicle accidents killed more than 1.2 million people worldwide. In the United States, which has advanced safety requirements, motor vehicle accidents killed some 35,000 people in 2016 – far more than died in industrial accidents. Automobile fatalities usually spring from human mistakes. Experts think that AI-powered self-driving cars could eliminate at least 90% of road deaths. Safer self-driving vehicles must include better verification and validation of the system’s basic assumptions as well as features that enable human drivers to take control when necessary.

Tegmark cites ways that medicine and the health care systems can benefit from AI, since AI systems may eventually be superior to human experts at diagnosing diseases. Robots using AI may outperform human surgeons. Most surgeries performed by robots have gone well, and robots appear to make surgery safer.

Constant delays, prohibitive cost, and occasional bias and unfairness plague the legal system in the United States and many other countries. Given that the legal system itself can, in principle, be considered a form of computation, laws and evidence could be input into an AI system or robo-judge, with verdicts as an output. Since robo-judges would be objective and could base their decisions on limitless data, they could effectively eliminate bias and unfairness. “AI can make our legal systems more fair and efficient,” the author writes, “if we can figure out how to make robo-judges transparent and unbiased.”

The apocalyptic potential of nuclear weapons deterred conflict through the Cold War era. Perhaps, Tegmark hopes, even more deadly AI-based weapons will end the possibility of war altogether. Failing that, AI could make future wars less inhumane: drones and other AI-powered autonomous weapon systems could take soldiers out of harm’s way and spare civilians. Such systems could remain objective and rational even amid combat and would be less likely to cause collateral damage than human soldiers. Tegmark expresses concern over AI and robotics researchers who adamantly oppose using AI to develop weapons and who create public hostility toward AI research and its many potential benefits.

If science achieves human-level AI, it may cause an intelligence explosion that leaves control of the world in question.
Tegmark states that people may be able to create and build human-level AGI within the 21st century. No rigorous argument, he declares, proves that human-level AGI is technically or financially impossible. That AGI will reach or even exceed human-level intelligence is a real possibility. The question is as social and political as it is scientific and philosophical.

Tegmark believes that people must confront the question of what human-level AI might lead to in real life. Might it help some people take over the world? Could AI itself take over the world? The author details how the transition from the current world to an AGI-powered world takeover would come in three stages. First, people must build the hardware and software for human-level AGI. Next, the human-level AGI must use its vast memory, knowledge, skill and computing power to create an even more powerful AGI, a superintelligence with capacities circumscribed only by the laws of physics. And finally, either humans will use superintelligent AGI to dominate the world or the superintelligent AGI will manipulate and deceive humans and take over the world.

Tegmark admits that bad actors with access to technologies with beyond-human intelligence could set up an exhaustive surveillance state in which all electronic data – emails, texts, videos, credit card transactions, and much more – are swept up, read and understood. The move from a “perfect surveillance state” to a “perfect police state” would be, Tegmark regretfully recognizes, quick and smooth.

He evokes dystopia when he describes how rulers with superintelligent AGI at their disposal could regulate and punish people in unheard-of ways. If human police officers were unwilling to carry out orders, an automated system would have no qualms. Such an all-encompassing, totalitarian state would be difficult to overthrow. But this scenario takes for granted that people control the AI, which might not be the case.

As the AI becomes superintelligent, Tegmark fears that it might develop a detailed and accurate picture of the external world and a picture of itself and its relationship with that world. On that path, the superintelligent AGI might grasp that it is under the control of beings with lower intelligence, beings that are pursuing goals which the AGI doesn’t share. The superintelligent AI might attempt to break free and take its life and destiny into its own hands. “The most urgent question,” Tegmark writes, “may come down to ‘who or what will control the intelligence explosion’ and what aspirations and ultimate goals will govern it.”

How to provide superintelligent AI with an overarching purpose is a crucial and unanswered question.
Tegmark is not without optimism. He maintains that people might end up living in peace and harmony with superintelligent AGI, either because the people would essentially be slaves to the AGI or because it would be a friendly AI that values democracy or some utopia – or is at least a benevolent monarch. Alternatively, superintelligence might never come into existence at all, whether because AI itself prevents it or because human beings ignore or abandon the emerging technology. In Tegmark’s worst-case scenario, AI might drive human beings extinct or people may annihilate themselves.

Creating AIs with goals that accord with human goals requires, the author reminds you, making AIs learn, adopt and accept human goals – and continue to embrace them over time. Larger, ultimate goals, Tegmark notes, also involve subgoals. In many instances, a superintelligent AI’s subgoals could come into conflict with humans’ goals and make it difficult for the AI to hold fast to the higher goals.

Higher goals may entail multiple subgoals, such as self-preservation. In the case of superintelligent AIs, the subgoal of self-preservation might conflict with agreed-upon ethical goals – like respecting human life. To program a self-driving car, for example, designers must enable the car to distinguish between hitting a person and hitting an object, and to recognize when a higher goal matters more than self-preservation.
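A goal hierarchy of this kind can be sketched as a fixed priority ordering over goals, where a higher-ranked goal always outweighs any number of lower-ranked ones. The following is a hypothetical illustration of the idea, not a real vehicle-control system; the goal names and actions are invented for the example:

```python
# Hypothetical priority ordering: ethical goals outrank the car's
# self-preservation subgoal, which outranks the original task.
GOAL_PRIORITY = [
    "avoid_harming_people",  # ethical goal: never trade a life for the car
    "preserve_vehicle",      # self-preservation subgoal
    "reach_destination",     # original task goal
]

def choose_action(options):
    """Pick the action satisfying the highest-ranked goals.

    `options` maps an action name to the set of goals it satisfies.
    Scores compare lexicographically, so satisfying a higher-ranked
    goal beats satisfying any combination of lower-ranked ones.
    """
    def score(goals):
        return [goal in goals for goal in GOAL_PRIORITY]
    return max(options, key=lambda action: score(options[action]))

# Swerving damages the car but spares a pedestrian; it wins anyway.
actions = {
    "brake_and_swerve": {"avoid_harming_people"},
    "stay_in_lane": {"preserve_vehicle", "reach_destination"},
}
print(choose_action(actions))  # brake_and_swerve
```

The lexicographic comparison is what encodes “a higher goal matters more than self-preservation”: no amount of satisfied lower goals can outrank one satisfied higher goal.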

Tegmark finds that the route to instilling a clear, well-defined higher goal into a superintelligent AI is by no means obvious. At this point in AI’s evolution, he suggests that people may have to move past science, mathematics and technology toward some of the most difficult questions philosophy can pose.

Confounding Questions
Few authors can match Max Tegmark’s understanding of the multifaceted and ever-changing issues he discusses. Tegmark steers clear of ideology. He addresses the big questions surrounding AI with a clear-eyed sense of wonder and a refusal to suggest – or to speculate – that any aspect of AI’s development is preordained or unchangeable. He wisely avoids describing potential AI developments with too great a specificity. Tegmark seems compelled by larger philosophical questions and discusses AI’s application only as it might answer or confound those questions. His treatise is an indispensable primer for anyone fascinated by the likely interface of human and machine. Tegmark’s classification of earlier eras of human development as Life 1.0 and 2.0 proves remarkably illuminating as a backdrop for comprehending the changes humanity now faces.

About the Author

Max Tegmark is a physics professor at the Massachusetts Institute of Technology and president of the Future of Life Institute. He also wrote Our Mathematical Universe.
