AI 2041 (2021) is a provocative work of speculative fiction with analysis that explores the ways in which AI will shake up our world over the next twenty years. We’re just at the beginning of the technological revolution that AI will bring. By imagining what that future will look like, we can start preparing for the changes to come.
Introduction: Learn about how AI will transform our lives in the next two decades.
Often it feels like the modern world is already a kind of sci-fi fantasy. Who’d have thought that one day you could ask household gadgets to play you a song, or that you’d have a computer in your pocket to remind you when it’s time to go for a walk?
But this is just the beginning. Developments in deep learning and natural language processing will turbocharge AI innovation. Autonomous cars and weapons are already in development. And deepfake videos and virtual reality games are becoming so convincing that it’s hard to distinguish fiction from fact.
Each of the following summaries begins with a short, fictional story about what the world could look like in 2041 – that is, after 20 more years of AI development – followed by an analysis of the impact these developments could have on society. Together, they’ll help you prepare for the AI revolution.
In these summaries, you’ll learn
- why we still don’t have completely self-driving cars;
- how social media companies are using your data; and
- why deepfakes are so dangerous.
AI can help you optimize your life – but it can also weaponize your data.
In Mumbai, in 2041, Nayana’s family dramatically lowered their insurance premiums by signing up with a new insurance company called Ganesh Insurance. The catch? They had to agree to share all their personal data with the company.
Ganesh instructed the family to use a certain set of apps for everything from investing to finding the best supermarket deals, and over the next few weeks, their phones were constantly pinging with recommendations. The apps told them when to drink water, instructed Nayana’s grandfather to drive more slowly, and nagged her dad so much about his smoking that he eventually quit. With every healthy decision they made, their insurance premiums fell. It seemed like a win-win for everyone.
But when Nayana fell in love with a man who lived in a less-wealthy neighborhood, the family’s premiums soared. Somehow, the AI had inferred that he was of a different social status, and interpreted that as a health risk to the family.
The key message here is: AI can help you optimize your life – but it can also weaponize your data.
Nayana’s story provides a chilling insight into how AI can reproduce the discrimination already present in society. One of the most significant AI developments of the last decade has been deep learning, which allows computers to make predictions, classify data, and recognize patterns. It’s the technology Facebook uses to generate personalized recommendations and maximize the time you spend on its network. By analyzing every click you make and comparing your data to that of the millions of other users in its system, the platform can accurately predict what will engage you.
Deep learning can have enormous benefits. AI can analyze millions of data points, and make connections that would elude the human mind. But AI lacks the nuance and complexity of human thinking. It can’t draw on personal experience, abstract concepts, or common sense.
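To make the idea of pattern-based prediction a little more concrete, here’s a minimal sketch of the kind of logic a recommender system builds on: find the users whose click history most resembles yours, and suggest what they engaged with. The data and the nearest-neighbor approach here are illustrative assumptions, not how Facebook’s actual (far larger and more sophisticated) deep-learning system works.

```python
import numpy as np

# Hypothetical click matrix: rows are users, columns are items (1 = clicked).
clicks = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
])

def recommend(user, clicks, k=1):
    """Suggest the unclicked item favored by the user's k nearest neighbors."""
    others = np.delete(np.arange(len(clicks)), user)
    # Similarity = number of items both users clicked.
    sims = clicks[others] @ clicks[user]
    neighbors = others[np.argsort(sims)[::-1][:k]]
    # Score items by how often neighbors clicked them; ignore already-clicked ones.
    scores = clicks[neighbors].sum(axis=0) * (1 - clicks[user])
    return int(np.argmax(scores))

print(recommend(1, clicks))  # 4 -- user 0, the closest match, also clicked item 4
```

The principle scales: with millions of users instead of four, the same “people like you clicked this” logic becomes eerily accurate.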
And AI is vulnerable to bias and discrimination. In Nayana’s story, the app didn’t know that her love interest was from a different “caste” and that the match would be seen as socially undesirable. But by analyzing his family’s data and tracking where he lived, it still suggested that the match would be a “threat” to the health of Nayana’s family. Deep learning will only grow more prevalent and powerful in the years to come. The question of how to make it beneficial to society as a whole will be one of our most urgent preoccupations in the near future.
By 2041, deepfakes will become so convincing that it will be almost impossible to spot frauds.
Amaka was scared. A shady company called Ljele had asked him to use his skills as an expert programmer to create a deepfake video for them. He had to make a video of a prominent Nigerian politician admitting to scandalous behavior. If Amaka refused to do it, the company said they would release a fake video of their own, showing Amaka kissing another man in a nightclub. This could land him in prison and cause even bigger problems between him and his family.
Here’s the key message: By 2041, deepfakes will become so convincing that it will be almost impossible to spot frauds.
In 2018, a video of former President Obama calling President Trump a “total dipshit” went viral online, causing an uproar. The catch? It wasn’t real – it was a deepfake created by BuzzFeed to show what was possible with AI technology, and to warn people to be skeptical of what they see.
To develop the technology for making deepfakes, developers first needed to teach computers to process and make sense of images. So they took inspiration from the human brain, which has a visual cortex that gathers information about an image before sending it to the neocortex, which processes that information and then assigns more complex meaning to what’s being seen. Using this model, designers created a convolutional neural network, or CNN.
To create deepfakes, you need a specific kind of technology called a Generative Adversarial Network, or GAN, which consists of two CNNs. One of these is a “forger,” which analyzes tens of millions of pixels in every image it sees, picking out the unique characteristics of every image. If the forger has analyzed images of, say, dogs, it can then synthesize a fake dog image. It sends this to the second CNN in the network, which is a sort of “detective.” It tests the fake picture against real ones and informs the forger of any errors. The forger then uses that feedback to improve the image and sends it back to the detective. This cycle recurs millions of times, until the fake dog is indistinguishable from a real one. And this process can be used to create very convincing deepfake videos as well as images.
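The forger-versus-detective cycle described above can be sketched in miniature. The toy GAN below uses a one-number “image” instead of millions of pixels, and simple linear models instead of CNNs – assumptions made purely for illustration – but the adversarial loop is the same: the detective learns to tell real samples from forged ones, and the forger learns to fool the detective.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1 / (1 + np.exp(-s))

# "Real" data the forger must imitate: samples clustered around 3.0.
real_mean, real_std = 3.0, 0.5

# Forger: x = a*z + b, fed noise z ~ N(0,1).  Detective: d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 1.0, 0.0
lr = 0.05

for step in range(5000):
    z = rng.normal(size=32)
    real = rng.normal(real_mean, real_std, size=32)
    fake = a * z + b

    # Detective update: push d(real) toward 1 and d(fake) toward 0.
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    c += lr * np.mean((1 - p_real) - p_fake)

    # Forger update: nudge a and b so the detective scores fakes as real.
    p_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - p_fake) * w * z)
    b += lr * np.mean((1 - p_fake) * w)

fake = a * rng.normal(size=1000) + b
print(f"forged sample mean: {fake.mean():.2f}")  # drifts toward the real mean of 3.0
```

After thousands of rounds of this feedback loop, the forged samples land in the same region as the real ones – the one-dimensional analogue of a fake dog photo becoming indistinguishable from a real one.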
This can have dangerous consequences, of course. As Amaka’s story highlights, deepfakes can be employed as political weapons, discrediting candidates or spreading propaganda. They can also be used to intimidate or blackmail people. In the real world, in 2019, a slew of deepfake porn featuring celebrities’ faces flooded porn sites.
To counter deepfakes, programmers have been racing to create software that can detect anomalies the human eye can’t see. But deepfakes are evolving just as quickly.
AI companions will help people learn in new ways.
Golden Sparrow’s parents were killed in a car accident when he was just a young boy, and he was sent to an orphanage. The people who looked after him there created a special friend for him, a companion that he called Atoman, after his favorite superhero. To see Atoman, Golden Sparrow needed to wear special glasses with a virtual reality interface. Soon, he wore them all the time. Atoman was a perfect companion because he knew everything about Golden Sparrow; after all, he had access to his data cloud. And a biometric ribbon attached to Golden Sparrow’s wrist provided a constant flow of real-time information about his physiological state as well as his behavior.
Atoman became Golden Sparrow’s best friend, helping him with his homework, answering his questions, and planning adventures for him. But, above all, he talked to Golden Sparrow constantly – an ideal conversation partner for an isolated young boy.
The key message is this: AI companions will help people learn in new ways.
Although most people don’t have a supportive virtual companion like Atoman, many have interacted with AI helpers – having a conversation with an automated assistant while changing a flight reservation, for example.
Computer scientists have been trying to give computers the capacity to talk to humans for decades. But until very recently, these attempts were limited by how labor-intensive it was to teach computers how to respond to conversational prompts. Google’s invention of the “transformer” in 2017 changed all that. The transformer is a sequence transduction neural network. By analyzing millions of pages of text, it can identify conversational patterns and predict what it should say in response to a given statement without any human intervention. More recently, the research lab OpenAI launched GPT-3, a model that can mimic any writer’s style and even write poetry. It was trained on an amount of text that would take the average human 500,000 lifetimes to read.
And remember how Atoman helped Golden Sparrow with his homework? These developments have the potential to completely transform education. Personalized AI “teachers” could give students attention they don’t receive in rowdy classrooms. AI tutors could relieve teachers of some of their most burdensome tasks, like grading papers, assigning homework, and answering routine questions. Human teachers, meanwhile, will still have an essential role in helping students develop emotional intelligence, creativity, and social skills – all things at which humans, unlike AI companions, excel.
AI will revolutionize healthcare for the COVID generation.
Chen Nan is part of the so-called COVID generation, which grew up in the two decades after COVID-19 broke out and devastated the world. She can’t even remember a time when people weren’t afraid of getting sick. COVID returns every year, like a seasonal flu. Everyone has to wear a biosensor membrane on their wrist that transmits their physiological data in real time. Nan has a traumatic memory of her grandparents dying in the first outbreak, and she’s so fearful of getting infected herself that she never leaves her apartment.
Everything she needs is disinfected and brought to her door by delivery bots. Household bots help her keep things clean. And she works online. So there’s no reason for her to leave – except that she’s completely isolated. How can she form relationships, or fall in love, while being stuck inside?
The key message here is: AI will revolutionize healthcare for the COVID generation.
Chen Nan’s story is a speculative look at what COVID’s long-term effects on people’s lives might be. But one thing is clear: Developments in AI will be instrumental in both treating the virus and allowing people to adapt their behavior to avoid getting it.
Even now, it’s not uncommon to rely on a smartphone to calculate the risk of infection, and many countries have developed apps that send out alerts when a potentially infected person is nearby. These apps will only become more prevalent, which will likely lead to bitter fights over privacy versus safety. Today, people are required to prove their vaccination status through a QR code in an app. In the future, perhaps they’ll need to wear biosensor membranes, like the ones mentioned in the story, which transmit physiological data and indicate when their vaccination expires. As health records become digitized, doctors will increasingly rely on them when diagnosing and treating diseases. And new software will revolutionize the speed at which COVID vaccines can be developed.
COVID may also have long-lasting effects on humanity’s mental health and social habits. While the majority of people won’t hide from the world entirely, many are already much more cautious about where they go and who they see. AI technologies like delivery bots will help enable these changes in behavior. But so far, robots can’t experience love. So people who find themselves desperate for face-to-face connection – like Chen Nan – will eventually have to push themselves to leave their homes.
Mixed reality will blur the line between real and fictional worlds.
The seance took place in a dark room with flickering candles and rose petals on the table. Aiko felt excited and scared. An old woman, the medium, began the ceremony. Suddenly the table started shaking violently, and the woman’s voice changed into that of a young man. Aiko knew that it was Hiroshi – her rock-star idol who had died in mysterious circumstances.
Aiko was participating in an XR – or “extended reality” – game. It allowed her to feel like she was really talking to the man she admired so much. In fact, her XR glasses allowed her to see his ghost, which appeared like an apparition when she least expected it. And the plotline was perfectly tailored to her interests, desires, and fears. The extensive questionnaire she’d filled out on her smartphone made sure of that.
Here’s the key message: Mixed reality will blur the line between real and fictional worlds.
The game Aiko played was so immersive because it didn’t only contain virtual reality elements in which she saw an imaginary world through her glasses. Hiroshi also appeared to her in familiar environments like her home, or a busy street. And he interacted with the elements in her actual physical environment. This is mixed reality, the latest development in XR. It’s still in the early stages of development, as it requires sophisticated technology that allows for recognition of objects in a scene and natural language skills. But it will become much more prominent in the next twenty years, blurring the line between fiction and reality.
Innovations like XR contact lenses and invisible, built-in earphones will make these technologies feel much more natural and seamless than they do today. And haptic gloves and bodysuits will allow players to feel hot or cold, and can even simulate touch.
The obvious application for these technologies is immersive games, like the one Aiko was playing. But they could also be used to simulate war conditions as military training exercises, allow young surgeons to practice their skills on a virtual patient, or allow students to “meet” historical figures in the classroom.
XR has the potential to contribute a lot to how we play, learn, and work. But it can also be misused. If people are wearing XR glasses all day, they’re also allowing an app to access all their data, and collect intimate details about their personal lives. We need to think critically about what happens to that data, and develop legislation to protect user privacy.
Self-driving cars could revolutionize our transport systems – but getting the technology right isn’t easy.
From his cockpit in the training center, Chamal piloted his vehicle through the streets of Colombo, a city in Sri Lanka. His task? Rescue tourists from a terrorist attack at a famous temple. His vehicle was a self-driving car, but even these can be thrown off by disasters that disrupt the route they’ve been programmed to drive, so he helped guide it to safety. Weaving through plumes of smoke on the way to the temple, he collected carloads of scared, bewildered tourists, piloting them through the chaotic streets to their hotel. He could feel the tremors as gunshots went off in the distance.
The key message is this: Self-driving cars could revolutionize our transport systems – but getting the technology right isn’t easy.
The dream of creating completely self-driving cars has eluded developers for decades. The fact is, driving is an extremely complex operation. Imagine you’re getting in your car, preparing to go somewhere. You’ll need to use your sense of perception to scan the scene and look out for obstacles. You’ll also need strong navigation and planning skills as you decide where to turn to get to your destination. You’ll need to watch other drivers carefully, and guess what they’re about to do. And you’ll constantly need to make spur-of-the-moment decisions about what actions to take.
These skills are incredibly difficult for even the most sophisticated computer to emulate. There are just so many variables when you’re driving. Perhaps the weather is bad, or there are roadworks, or a dog runs into the street. How do you manage to equip a robot to deal with all of these eventualities?
Unlike many other AI applications, this one has high stakes. If Facebook gets its algorithm wrong, you might see an inappropriate ad – but if an autonomous car’s software glitches, it could kill someone. Then again, human drivers don’t have a great track record either: every year, over 1.35 million people die in car accidents. When autonomous cars are fully functional, they could reduce that number.
Chamal was able to guide the car to safety from a virtual location, using augmented reality goggles that reproduced the environment around the car on a large screen. Using human drivers at a distance is one way to make autonomous cars safer for now. Another way would be to fundamentally alter the way our roads and cities are designed. Imagine smart roads that can constantly communicate with cars, or separate roads for pedestrians and cars. Perhaps in the future this vision will become a reality.
Autonomous weapons pose an existential threat to humanity.
When Marc’s wife and son died in a California wildfire, he was crazy with grief. But the grief soon turned to rage. The wildfire was the result of climate change, created by a world hell-bent on technological progress at any price. Marc was a physicist, working on a model of a quantum computer. He decided to use his skills and knowledge to shut down the world that had caused him so much pain.
He created sophisticated drones that could move through the sky like a flock of birds, and directed them to assassinate business leaders and other people contributing to climate change. He also targeted major ports, creating havoc by completely disrupting the oil supply.
The key message here is: Autonomous weapons pose an existential threat to humanity.
This story sounds far-fetched, but drones like Marc’s already exist. In fact, the Israeli military has developed a drone that can identify a specific target and kill it by detonating an explosive. And even less sophisticated drones can be dangerous. Recently, Venezuelan president Nicolás Maduro was nearly assassinated by two drones that released explosives while he was giving a speech. Autonomous weapons have created a new arms race around the world, as global powers fight to create the fastest and most lethal weapons.
Nuclear weapons, of course, have long possessed the power to wipe out huge populations of people. But they’ve also acted as a deterrent to warfare. Just knowing that major powers have nuclear weapons has been enough to stop them from striking each other. Autonomous weapons, on the other hand, can be employed anonymously. If no one knows where they came from, it’s very difficult to retaliate. That means there is no deterrent effect. What’s more, they can be deployed by terrorists – like Marc in the story – who want to sow chaos or trigger a deadly war.
As AI develops and accelerates with the help of advances in quantum computing, autonomous weapons will only become more deadly. So it’s urgent to develop ways to safeguard humanity from the potential fallout. Some proposed solutions have included an insistence on human mediation, or a ban on these kinds of weapons entirely, in the same way chemical weapons have been banned. None of these solutions will be easy to implement, as they require global accord. But it’s essential that we take action now.
Automation is creating an employment crisis.
Throngs of workers protested in front of the headquarters of Landmark, one of the largest construction companies in the United States. The company employed thousands of people – all of whom were about to lose their jobs. The reason? AI workers could do the same jobs faster, and for no money, making the human workers redundant. As part of their settlement, the employees were enrolled in an “employment restoration” company, which promised to retrain them and find them new jobs. But these jobs were often menial, or located far away from their families. AI had displaced them, upending their lives.
Here’s the key message: Automation is creating an employment crisis.
Today, more and more organizations are replacing their employees with AI substitutes, and menial and entry-level jobs are most at risk. These jobs often belong to people who are already working for minimum wage, so AI will only widen the gap between rich and poor. Increasingly, even jobs like plumbing will be taken over by computers trained to assemble standardized parts.
This mass displacement of workers will have serious ramifications. Not only will people lose their incomes, but they’ll also lose the social interaction and sense of purpose that meaningful work can bring. Imagine honing your skills over the course of a lifetime, only to see a computer outperform you after a few weeks on the job! Unemployment has been linked to higher levels of alcoholism, depression, and suicide; on such a wide scale, the effects would be severe. Measures like a universal basic income could provide a financial cushion and help redistribute some of the enormous profits earned by automating jobs. But that doesn’t solve the problem of giving people meaningful work. For that, we’ll need to invest in training humans to do the work that AI will never be able to do as well.
AI thrives when doing tasks that don’t require creativity, like analyzing data. Humans, on the other hand, have the ability to think creatively. We can link abstract concepts, use common sense, and set our own assignments. Any work requiring these capacities will always need human intervention.
Humans also trump AI in traits like empathy and compassion. While an AI nurse might be just as good at doling out pills, it won’t be able to communicate concern the way a human would. So the caring professions will always need human involvement.
One of the best ways to support people displaced by AI is to help them to retrain to find new avenues where they can use their uniquely human skills.
AI can optimize your happiness – to an extent.
Victor was a self-made millionaire, but he was bored and depressed. So when he was invited to a mysterious island in Doha, he jumped at the chance for some much-needed adventure. On arrival, he had to let a computer access all of his personal data – and he soon saw what that data was used for. His every desire and whim was satisfied before he even asked. A robot waited on him hand and foot. His favorite music played all the time, and his accommodation was decorated with all his favorite things. At first, he was delighted. But soon he was bored again, sick of being pleased all the time.
The key message is this: AI can optimize your happiness – to an extent.
AI can guess what food you like, and what political beliefs resonate with you. But can it make you happy? AI analyzed Victor’s data to try to optimize his happiness. But the positive results were fleeting. It satisfied his hedonistic impulses – but it didn’t help him address his deeper needs.
So what brings happiness? In 1943, Abraham Maslow published a theory arguing that humans have a “hierarchy of needs.” At the bottom of the hierarchy are fundamental physiological needs for food and shelter. Above those come needs for safety, security, and employment, followed by love and belonging, self-esteem, and self-actualization. Meeting the human population’s basic physiological needs should become easier thanks to developments in AI.
The clean-energy revolution will bring down manufacturing costs, and AI automation will drastically reduce labor costs. If the profits of the clean-energy revolution are distributed across society, then no one will need to fear hunger or homelessness. But while the needs on the bottom rung of Maslow’s hierarchy will be satisfied, it’s harder to predict how this new age will satisfy more complex desires. If people no longer need to work, will they still have a drive to pursue self-actualization? What will allow them to feel like valuable members of their communities, and provide experiences of love and belonging?
As all the stories we’ve heard show, AI is a disruptive technology that will transform the way we live in the next twenty years and beyond. But how exactly that will play out is up to us. Human governments have the power to make sure that AI benefits society as a whole by guarding privacy, redistributing profits, protecting the environment, and reining in weapons. Our happiness depends on it.
The key message in these summaries is:
AI will fuel technological and social transformations in every area of our lives, from how we date to the kinds of work we do. But it will also have the ability to power dangerous autonomous weapons that could destroy civilization as we know it. This is a critical time to shape the impact AI will have on the world.
And here’s some more actionable advice: Become informed about what’s happening to your data.
Every time you click on a post or search for a flight, you’re handing massive corporations like Google and Facebook free data. That data is valuable, and it can contain intimate personal information. So be careful about how you share it. Scrutinize privacy statements, and look for an alternative search engine that protects against trackers.
About the author
Kai-Fu Lee is the CEO of Sinovation Ventures and New York Times bestselling author of AI Superpowers. Lee was formerly the president of Google China and a senior executive at Microsoft, SGI, and Apple. Co-chair of the Artificial Intelligence Council at the World Economic Forum, he has a bachelor’s degree from Columbia and a PhD from Carnegie Mellon. Lee’s numerous honors include being named to the Time 100 and Wired 25 Icons lists. He is based in Beijing.
Chen Qiufan (aka Stanley Chan) is an award-winning author, translator, creative producer, and curator. He is the president of the World Chinese Science Fiction Association. His works include Waste Tide, Future Disease, and The Algorithms for Life. The founder of Thema Mundi, a content development studio, he lives in Beijing and Shanghai.