The Fourth Industrial Revolution heralds a series of social, political, cultural, and economic upheavals that will unfold over the 21st century. Building on the widespread availability of digital technologies that were the result of the Third Industrial, or Digital, Revolution, the Fourth Industrial Revolution will be driven largely by the convergence of digital, biological, and physical innovations.

Like the First Industrial Revolution’s steam-powered factories, the Second Industrial Revolution’s application of science to mass production and manufacturing, and the Third Industrial Revolution’s shift into digitization, the Fourth Industrial Revolution’s technologies, such as artificial intelligence, genome editing, augmented reality, robotics, and 3-D printing, are rapidly changing the way humans create, exchange, and distribute value. As occurred in the previous revolutions, this will profoundly transform institutions, industries, and individuals. More importantly, this revolution will be guided by the choices that people make today: the world 50 to 100 years from now will owe much of its character to how we think about, invest in, and deploy these powerful new technologies.

It’s important to appreciate that the Fourth Industrial Revolution involves a systemic change across many sectors and aspects of human life: the crosscutting impacts of emerging technologies are even more important than the exciting capabilities they represent. Our ability to edit the building blocks of life has recently been massively expanded by low-cost gene sequencing and techniques such as CRISPR; artificial intelligence is augmenting processes and skills in every industry; neurotechnology is making unprecedented strides in how we can use and influence the brain as the last frontier of human biology; automation is disrupting century-old transport and manufacturing paradigms; and technologies such as blockchain, used in executing cryptocurrency transactions, and smart materials are redefining and blurring the boundary between the digital and physical worlds.

The result of all this is societal transformation at a global scale. By affecting the incentives, rules, and norms of economic life, it transforms how we communicate, learn, entertain ourselves, and relate to one another and how we understand ourselves as human beings. Furthermore, the sense that new technologies are being developed and implemented at an increasingly rapid pace has an impact on human identities, communities, and political structures. As a result, our responsibilities to one another, our opportunities for self-realization, and our ability to positively impact the world are intricately tied to and shaped by how we engage with the technologies of the Fourth Industrial Revolution. This revolution is not just happening to us—we are not its victims—but rather we have the opportunity and even responsibility to give it structure and purpose.

As economists Erik Brynjolfsson and Andrew McAfee have pointed out, this revolution could yield greater inequality, particularly in its potential to disrupt labor markets. As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor. On the other hand, it is also possible that the displacement of workers by technology will, in aggregate, result in a net increase in safe and rewarding jobs.

All previous industrial revolutions have had both positive and negative impacts on different stakeholders. Nations have become wealthier, and technologies have helped pull entire societies out of poverty, but the inability to fairly distribute the resulting benefits or anticipate externalities has resulted in global challenges. By recognizing the risks, whether cybersecurity threats, misinformation on a massive scale through digital media, potential unemployment, or increasing social and income inequality, we can take steps to align common human values with our technological progress and ensure that the Fourth Industrial Revolution benefits human beings first and foremost.

We cannot foresee at this point which scenario is likely to emerge from this new revolution. However, I am convinced of one thing—that in the future, talent, more than capital, will represent the critical factor of production.

With these fundamental transformations underway today, we have the opportunity to proactively shape the Fourth Industrial Revolution to be both inclusive and human-centered. This revolution is about much more than technology—it is an opportunity to unite global communities, to build sustainable economies, to adapt and modernize governance models, to reduce material and social inequalities, and to commit to values-based leadership of emerging technologies.

The Fourth Industrial Revolution is therefore not a prediction of the future but a call to action. It is a vision for developing, diffusing, and governing technologies in ways that foster a more empowering, collaborative, and sustainable foundation for social and economic development, built around shared values of the common good, human dignity, and intergenerational stewardship. Realizing this vision will be the core challenge and great responsibility of the next 50 years.

This essay was originally published in 2018 in Encyclopædia Britannica Anniversary Edition: 250 Years of Excellence (1768–2018).

Klaus Schwab

artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since their development in the 1940s, digital computers have been programmed to carry out very complex tasks—such as discovering proofs for mathematical theorems or playing chess—with great proficiency. Despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match full human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in executing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, voice or handwriting recognition, and chatbots.

What is intelligence?

All but the simplest human behavior is ascribed to intelligence, while even the most complicated insect behavior is usually not taken as an indication of intelligence. What is the difference? Consider the behavior of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behavior is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of the wasp—must include the ability to adapt to new circumstances.

Psychologists generally characterize human intelligence not by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

Learning

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that, the next time the computer encountered the same position, it would recall the solution. This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless the program was previously presented with jumped, whereas a program that is able to generalize can learn the “add -ed” rule for regular verbs ending in a consonant and so form the past tense of jump on the basis of experience with similar verbs.
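To make the contrast concrete, the following Python sketch, which is illustrative rather than drawn from the article (the memorized verb pairs and function names are hypothetical), shows that a rote learner can recall only past tenses it has already stored, while a generalizing learner applies the “add -ed” rule to a verb it has never seen.

    # Illustrative sketch: rote learning vs. generalization (hypothetical example).

    # Rote learning: memorize each (verb, past tense) pair exactly as presented.
    rote_memory = {"walk": "walked", "talk": "talked", "look": "looked"}

    def rote_past_tense(verb):
        """Recall the past tense only if this exact verb was memorized."""
        return rote_memory.get(verb)  # unseen verbs such as "jump" return None

    # Generalization: apply the "add -ed" rule learned from similar regular verbs.
    def generalized_past_tense(verb):
        """Form the past tense of a regular verb ending in a consonant."""
        return verb + "ed"

    print(rote_past_tense("jump"))         # None, because "jumped" was never presented
    print(generalized_past_tense("jump"))  # "jumped", because the rule extends to new verbs

The same distinction applies to the chess example above: storing a solved position together with its solution is rote learning, whereas recognizing that a new, similar position calls for the same kind of move would require generalization.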

(Read Ray Kurzweil’s Britannica essay on the future of “Nonbiological Man.”)
