1. Control over Technology

In the Fall as recorded in the book of Genesis, man underwent a loss of innocence and a weakening of his power over creation. Both of these losses can be to some extent made good, even in this life—the former by religion and faith, the latter by arts and sciences.
—Francis Bacon, Novum Organum, 1620

Instead, I saw a real aristocracy, armed with a perfected science and working to a logical conclusion the industrial system of to-day. Its triumph had not been simply a triumph over Nature, but a triumph over Nature and the fellow man.
—H. G. Wells, The Time Machine, 1895

Since its first version in 1927, Time magazine’s annual Man of the Year had almost always been a single person, typically a political leader of global significance or a US captain of industry. For 1960, the magazine chose instead a set of brilliant people: American scientists. Fifteen men (unfortunately, no women) were singled out for their remarkable achievements across a range of fields. According to Time, science and technology had finally triumphed.

The word technology comes from the Greek tekhne (“skilled craft”) and logia (“speaking” or “telling”), implying systematic study of a technique. Technology is not simply the application of new methods to the production of material goods. Much more broadly, it concerns everything we do to shape our surroundings and organize production. Technology is the way that collective human knowledge is used to improve nutrition, comfort, and health, but often for other purposes, too, such as surveillance, war, or even genocide.

Time was honoring scientists in 1960 because unprecedented advances in knowledge had, through new practical applications, transformed everything about human existence. The potential for further progress appeared unbounded.

This was a victory lap for the English philosopher Francis Bacon. In Novum Organum, published in 1620, Bacon had argued that scientific knowledge would enable nothing less than human control over nature. For centuries, Bacon’s writings seemed no more than aspirational as the world struggled with natural disasters, epidemics, and widespread poverty. By 1960, however, his vision was no longer fantastical because, as Time’s editors wrote, “The 340 years that have passed since Novum Organum have seen far more scientific change than all the previous 5,000 years.”

As President Kennedy put it to the National Academy of Sciences in 1963, “I can imagine no period in the long history of the world where it would be more exciting and rewarding than in the field today of scientific exploration. I recognize with each door that we unlock we see perhaps 10 doors that we never dreamed existed and, therefore, we have to keep working forward.” Abundance was now woven into the fabric of life for many people in the United States and Western Europe, with great expectations for what would come next both for those countries and the rest of the world.

This upbeat assessment was based on real achievement.
Productivity in industrial countries had surged during the preceding decades so that American, German, or Japanese workers were now producing on average a lot more than just twenty years before. New consumer goods, including automobiles, refrigerators, televisions, and telephones, were increasingly affordable. Antibiotics had tamed deadly diseases, such as tuberculosis, pneumonia, and typhus. Americans had built nuclear-powered submarines and were getting ready to go to the moon. All thanks to breakthroughs in technology.

Many recognized that such advances could bring ills as well as comforts. The idea of machines turning against humans has been a staple of science fiction at least since Mary Shelley’s Frankenstein. More practically but no less ominously, pollution and habitat destruction wrought by industrial production were increasingly prominent, and so was the threat of nuclear war—itself a result of astonishing developments in applied physics. Nevertheless, the burdens of knowledge were not seen as insurmountable by a generation becoming confident that technology could solve all problems. Humanity was wise enough to control the use of its knowledge, and if there were social costs of being so innovative, the solution was to invent even more useful things.

There were lingering concerns about “technological unemployment,” a term coined by the economist John Maynard Keynes in 1930 to capture the possibility that new production methods could reduce the need for human labor and contribute to mass unemployment. Keynes understood that industrial techniques would continue to improve rapidly but also argued, “This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.”

Keynes was not the first to voice such fears. David Ricardo, another founder of modern economics, was initially optimistic about technology, maintaining that it would steadily increase workers’ living standards, and in 1819 he told the House of Commons that “machinery did not lessen the demand for labour.” But for the third edition of his seminal Principles of Political Economy and Taxation in 1821, Ricardo added a new chapter, “On Machinery,” in which he wrote, “It is more incumbent on me to declare my opinion on this question, because they have, on further reflection, undergone a considerable change.” As he explained in a private letter that year, “If machinery could do all the work that labour now does, there would be no demand for labour.”

But Ricardo’s and Keynes’s concerns did not have much impact on mainstream opinion. If anything, optimism intensified after personal computers and digital tools started spreading rapidly in the 1980s. By the late 1990s, the possibilities for economic and social advances seemed boundless. Bill Gates was speaking for many in the tech industry at the time when he said, “The [digital] technologies involved here are really a superset of all communications technology that has come along in the past, e.g., radio, newspaper. All of those things will be replaced by something that is far more attractive.” Not everything might go right all the time, but Steve Jobs, cofounder of Apple, captured the zeitgeist perfectly at a conference in 2007 with what became a famous line: “Let’s go and invent tomorrow rather than worrying about yesterday.”

In fact, both Time magazine’s upbeat assessment and subsequent techno-optimism were not just exaggerated; they missed entirely what happened to most people in the United States after 1980.
In the 1960s, only about 6 percent of American men between the ages of 25 and 54 were out of the labor market, meaning they were long-term unemployed or not seeking a job. Today that number is around 12 percent, primarily because men without a college degree are finding it increasingly difficult to get well-paid jobs. American workers, both with and without college education, used to have access to “good jobs,” which, in addition to paying decent wages, provided job security and career-building opportunities. Such jobs have largely disappeared for workers without a college degree. These changes have disrupted and damaged the economic prospects for millions of Americans.

An even bigger change in the US labor market over the past half century is in the structure of wages. During the decades following World War II, economic growth was rapid and widely shared, with workers from all backgrounds and skills experiencing rapid growth in real incomes (adjusted for inflation). No longer. New digital technologies are everywhere and have made vast fortunes for entrepreneurs, executives, and some investors, yet real wages for most workers have scarcely increased. People without college education have seen their real earnings decline, on average, since 1980, and even workers with a college degree but no postgraduate education have seen only limited gains.

The inequality implications of new technologies reach far beyond these numbers. With the demise of good jobs available to most workers and the rapid growth in the incomes of a small fraction of the population trained as computer scientists, engineers, and financiers, we are on our way to a truly two-tiered society, in which workers and those commanding the economic means and social recognition live separately, and that separation grows daily. This is what the English writer H. G. Wells anticipated in The Time Machine, with a future dystopia where technology had so segregated people that they evolved into two separate species.

This is not just a problem in the United States. Because of better protection for low-paid workers, collective bargaining, and decent minimum wages, workers with relatively low education levels in Scandinavia, France, or Canada have not suffered wage declines like their American counterparts. All the same, inequality has risen, and good jobs for people without college degrees have become scarce in these countries as well.

It is now evident that the concerns raised by Ricardo and Keynes cannot be ignored. True, there has been no catastrophic technological unemployment, and throughout the 1950s and 1960s workers benefited from productivity growth as much as entrepreneurs and business owners did. But today we are seeing a very different picture, with skyrocketing inequality and wage earners largely left behind as new advances pile up. In fact, a thousand years of history and contemporary evidence make one thing abundantly clear: there is nothing automatic about new technologies bringing widespread prosperity. Whether they do or not is an economic, social, and political choice.

This book explores the nature of this choice, the historical and contemporary evidence on the relationship among technology, wages, and inequality, and what we can do in order to direct innovations to work in service of shared prosperity. To lay the groundwork, this chapter addresses three foundational questions:

• What determines when new machines and production techniques increase wages?
• What would it take to redirect technology toward building a better future?

• Why is current thinking among tech entrepreneurs and visionaries pushing in a different, more worrying direction, especially with the new enthusiasm around artificial intelligence?

The Bandwagon of Progress

Optimism regarding shared benefits from technological progress is founded on a simple and powerful idea: the “productivity bandwagon.” This idea maintains that new machines and production methods that increase productivity will also produce higher wages. As technology progresses, the bandwagon will pull along everybody, not just entrepreneurs and owners of capital.

Economists have long recognized that demand for all tasks, and thus for different types of workers, does not necessarily grow at the same rate, so inequality may increase because of innovation. Nevertheless, improving technology is generally viewed as the tide lifting all boats because everyone is expected to derive some benefits. Nobody is supposed to be completely left behind by technology, let alone be impoverished by it. According to the conventional wisdom, to rectify the rise in inequality and build even more solid foundations for shared prosperity, workers must find a way to acquire more of the skills they need to work alongside new technologies. As succinctly summarized by Erik Brynjolfsson, one of the foremost experts on technology, “What can we do to create shared prosperity? The answer is not to slow down technology. Instead of racing against the machine, we need to race with the machine. That is our grand challenge.”

The theory behind the productivity bandwagon is straightforward: when businesses become more productive, they want to expand their output. For this, they need more workers, so they get busy with hiring. And when many firms attempt to do so at the same time, they collectively bid up wages.

This is what happens, but only sometimes. For example, in the first half of the twentieth century, one of the most dynamic sectors of the US economy was car manufacturing. As Ford Motor Company and then General Motors (GM) introduced new electrical machinery, built more efficient factories, and launched better models, their productivity soared, as did their employment. From a few thousand workers in 1899, producing just 2,500 automobiles, the industry’s employment rose to more than 400,000 by the 1920s. By 1929, Ford and GM were each selling around 1.5 million cars every year. This unprecedented expansion of automobile production pulled up wages throughout the economy, including for workers without much formal education.

For most of the twentieth century, productivity rose rapidly in other sectors as well, as did real wages. Remarkably, from the end of World War II to the mid-1970s, the wages of college graduates in the US grew at roughly the same rate as the wages of workers with only a high school education.

Unfortunately, what subsequently occurred is not consistent with the notion that there is any kind of unstoppable bandwagon. How productivity benefits are shared depends on how exactly technology changes and on the rules, norms, and expectations that govern how management treats workers. To understand this, let us unpack the two steps that link productivity growth to higher wages. First, productivity growth increases the demand for workers as businesses attempt to boost profits by expanding output and hiring more people.
Second, the demand for more workers increases the wages that need to be offered to attract and retain employees. Unfortunately, neither step is assured, as we explain in the next two sections.

Automation Blues

Contrary to popular belief, productivity growth need not translate into higher demand for workers. The standard definition of productivity is average output per worker—total output divided by total employment. Obviously, the hope is that as output per worker grows, so will the willingness of businesses to hire people.

But employers do not have an incentive to increase hiring based on average output per worker. Rather, what matters to companies is marginal productivity—the additional contribution that one more worker brings by increasing production or by serving more customers. The notion of marginal productivity is distinct from output or revenue per worker: output per worker may increase while marginal productivity remains constant or even declines.

To clarify the distinction between output per worker and marginal productivity, consider this often-repeated prediction: “The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.” This imagined factory could churn out a lot of output, so average productivity—its output divided by the one (human) employee—is very high. Yet worker marginal productivity is minuscule; the sole employee is there to feed the dog, and the implication is that both the dog and the employee could be let go without much reduction in output. Better machinery might further increase output per worker, but it is reasonable to expect that this factory would not rush to hire more workers and their dogs, or increase the pay of its lonely employee.

This example is extreme, but it represents an important element of reality. When a car company introduces a better vehicle model, as Ford and GM did in the first half of the twentieth century, this tends to increase the demand for the company’s cars, and both revenues per worker and worker marginal productivity rise. After all, the company needs more workers, such as welders and painters, to meet the additional demand, and it will pay them more, if necessary. In contrast, consider what happens when the same automaker installs industrial robots. Robots can perform most welding and painting tasks, and can do so more cheaply than production methods employing a larger number of workers. As a result, the company’s average productivity increases significantly, but it has less need for human welders and painters.

This is a general problem. Many new technologies, like industrial robots, expand the set of tasks performed by machines and algorithms, displacing workers who used to be employed in these tasks. Automation raises average productivity but does not increase, and in fact may reduce, worker marginal productivity. Automation is what Keynes worried about, and it was not a new phenomenon when he was writing early in the twentieth century. Many of the iconic innovations of the British industrial revolution in textiles were all about substituting new spinning and weaving machines for the labor of skilled artisans.
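To make the arithmetic behind this distinction concrete, here is a minimal numerical sketch. All the figures are invented for illustration and are not drawn from the text; the point is only that measured output per worker can rise sharply while the marginal product of an additional worker stays flat or falls.

```python
# Illustrative only: a stylized plant with made-up numbers, not data from the book.

def total_output(workers: int, automated: bool) -> float:
    """Total output of the stylized plant.

    Before automation, output scales with headcount. After automation,
    machines produce a large fixed amount and each extra worker adds little.
    """
    if not automated:
        return 100 * workers          # each worker contributes about 100 units
    return 5_000 + 5 * workers        # machines do the bulk of the work

for automated in (False, True):
    n = 20                            # current workforce
    per_worker = total_output(n, automated) / n
    marginal = total_output(n + 1, automated) - total_output(n, automated)
    label = "after automation" if automated else "before automation"
    print(f"{label}: output per worker = {per_worker:.0f}, "
          f"marginal product of one more worker = {marginal:.0f}")

# before automation: output per worker = 100, marginal product of one more worker = 100
# after automation: output per worker = 255, marginal product of one more worker = 5
```

On these made-up numbers, the automated plant has little reason to hire a twenty-first worker, or to bid up the pay of the existing twenty, even though its output per worker has more than doubled.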
What is true of automation is true of many aspects of globalization as well. Major breakthroughs in communication tools and shipping logistics have enabled a massive wave of offshoring over the last several decades, with production tasks such as assembly or customer service being transferred to countries where labor is cheaper. Offshoring has reduced costs and boosted profits for companies such as Apple, whose products are made of parts produced in many countries and are almost entirely assembled in Asia. But in industrialized nations it has also displaced workers who used to perform these tasks domestically and has not activated a powerful bandwagon. Automation and offshoring have raised productivity and multiplied corporate profits, but have brought nothing resembling shared prosperity to the United States and other developed countries.

Replacing workers with machines and moving work to lower-wage countries are not the only options for improving economic efficiency. There are multiple ways of increasing output per worker—and this has been true throughout history, as we explain in chapters 5 through 9. Some innovations boost how much individuals contribute to production, rather than automating or offshoring work. For example, new software tools that aid the work of car mechanics and enable greater precision increase worker marginal productivity. This is completely different from installing industrial robots with the goal of replacing people.

Even more important for raising worker marginal productivity is the creation of new tasks. There was plenty of automation in car manufacturing during the momentous reorganization of the industry led by Henry Ford starting in the 1910s. But mass-production methods and assembly lines simultaneously introduced a range of new design, technical, machine-operation, and clerical tasks, boosting the industry’s demand for workers (as we will detail in Chapter 7). When new machines create new uses for human labor, this expands the ways in which workers can contribute to production and increases their marginal productivity.

New tasks were vital not just in early US car manufacturing but also in the growth of employment and wages over the last two centuries. Many of the fastest-growing occupations in the last few decades—MRI radiologists, network engineers, computer-assisted machine operators, software programmers, IT security personnel, and data analysts—did not exist eighty years ago. Even people in occupations that have been around for quite a while, such as bank tellers, professors, or accountants, now work on a variety of tasks that did not exist before World War II, including all of those that involve the use of computers and modern communication devices. In almost all these cases, new tasks were introduced as a result of technological advances and have been a major driver of employment growth. These new tasks have also been an integral part of productivity growth, for they have helped launch new products and more efficient reorganization of the production process.

The reason that Ricardo’s and Keynes’s worst fears about technological unemployment did not come to pass is intimately linked to new tasks. Automation was rapid throughout the twentieth century but did not reduce the demand for workers because it was accompanied by other improvements and reorganizations that produced new activities and tasks for workers. Automation in an industry can also push up employment—in that sector or in the economy as a whole—if it reduces costs or increases productivity by enough.
New jobs in this case may come either from nonautomated tasks in the same industry or from the expansion of activities in related industries. In the first half of the twentieth century, the rapid increase in car manufacturing raised the demand for a range of nonautomated technical and clerical functions. Just as important, productivity growth in car factories during these decades was a major driver for the expansion of the oil, steel, and chemical industries (think gasoline, car bodies, and tires). Car manufacturing at mass scale also revolutionized the possibilities for transportation, enabling the rise of new retail, entertainment, and service activities, especially as the geography of cities transformed.

There will be few new jobs created, however, when the productivity gains from automation are small—what we call “so-so automation” in Chapter 9. For example, self-checkout kiosks in grocery stores bring limited productivity benefits because they shift the work of scanning items from employees to customers. When self-checkout kiosks are introduced, fewer cashiers are employed, but there is no major productivity boost to stimulate the creation of new jobs elsewhere. Groceries do not become much cheaper, there is no expansion in food production, and shoppers do not live differently.

The situation is similarly dire for workers when new technologies focus on surveillance, as Jeremy Bentham’s panopticon was intended to do. Better monitoring of workers may lead to some small improvements in productivity, but its main function is to extract more effort from workers and sometimes also reduce their pay, as we will see in chapters 9 and 10.

There is no productivity bandwagon from so-so automation and worker surveillance. The bandwagon is also weak, even from new technologies that generate nontrivial productivity gains, when these technologies predominantly focus on automation and cast workers aside. Industrial robots, which have already revolutionized modern manufacturing, generate little or no gain for workers when they are not accompanied by other technologies that create new tasks and opportunities for human labor. In some cases, such as the industrial heartland of the American economy in the Midwest, the rapid adoption of robots has instead contributed to mass layoffs and prolonged regional decline.

All of this brings home perhaps the most important thing about technology: choice. There are often myriad ways of using our collective knowledge for improving production and even more ways of directing innovations. Will we use digital tools for surveillance? For automation? Or for empowering workers by creating new productive tasks for them? And where will we put our efforts toward future advances? When the productivity bandwagon is weak and there are no self-acting correction mechanisms ensuring shared benefits, these choices become more consequential—and those who make them become more powerful, both economically and politically.

In sum, the first step in the productivity bandwagon causal chain depends on specific choices: using existing technologies and developing new ones for increasing worker marginal productivity—not just automating work, making workers redundant, or intensifying surveillance.

Why Worker Power Matters

Unfortunately, even an increase in worker marginal productivity is not enough for the productivity bandwagon to boost wages and living standards for everyone. Recall that the second step in the causal chain is that an increase in the demand for workers induces firms to pay higher wages.
There are three main reasons why this may not happen.

The first is a coercive relationship between employer and employed. Throughout much of history, most agricultural workers were unfree, either working as slaves or in other forms of forced labor. When a master wants to obtain more labor hours from his slaves, he does not have to pay them more money. Rather, he can intensify coercion to extract greater effort and more output. Under such conditions, even revolutionary innovations such as the cotton gin in the American South do not necessarily lead to shared benefits. Even beyond slavery, under sufficiently oppressive conditions, the introduction of new technology can increase coercion, further impoverishing slaves and peasants alike, as we will see in Chapter 4.

Second, even without explicit coercion, the employer may not pay higher wages when productivity increases if she does not face competition from rivals. In many early agricultural societies, peasants were legally tied to the land, which meant that they could not seek or accept employment elsewhere. Even in eighteenth-century Britain, employees were prohibited from seeking alternative employment and were often jailed if they tried to take better jobs. When your outside option is prison, employers do not typically offer you generous compensation.

History provides plenty of confirmation. In medieval Europe, windmills, better crop rotation, and increased use of horses boosted agricultural productivity. However, there was little or no improvement in the living standards of most peasants. Instead, most of the additional output went to a small elite, and especially to a massive construction boom during which monumental cathedrals were built throughout Europe. When industrial machinery and factories started spreading in Britain in the 1700s, this did not initially increase wages, and there are many instances in which it worsened living standards and conditions for workers. At the same time, factory owners became fabulously wealthy.

Third and most important for today’s world, wages are often negotiated rather than being simply determined by impersonal market forces. A modern corporation is often able to make sizable profits thanks to its market position, scale, or technological expertise. For example, when Ford Motor Company pioneered new mass-production techniques and started producing good-quality, cheap cars in the early twentieth century, it also became massively profitable. This made its founder, Henry Ford, into one of the era’s richest businessmen. Economists call such megaprofits “economic rents” (or just “rents”) to signify that they are above and beyond the prevailing normal return on capital expected by shareholders given the risks involved in such an investment. Once there are economic rents in the mix, wages for workers are not simply determined by outside market forces but also by potential “rent sharing”—their ability to negotiate some part of these profits.

One source of economic rents is market power. In most countries, there is a limited number of professional sports teams, and entry into the sector is typically constrained by the amount of capital required. In the 1950s and 1960s, baseball was a profitable business in the US, but players were not highly paid, even as revenues from television broadcasts poured in. This changed starting in the late 1960s because the players found ways to increase their bargaining power. Today, the owners of baseball teams still do well, but they are forced to share much more of their rents with the athletes.

Employers may also share rents to cultivate goodwill and motivate employees to work harder, or because prevailing social norms convince them to do so. On January 5, 1914, Henry Ford famously introduced a minimum pay of five dollars per day to reduce absenteeism, to improve retention of workers, and presumably to reduce the risk of strikes. Many employers have since tried something similar, particularly when it is hard to hire and retain people or when motivating employees turns out to be critical for corporate success.
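A common way to summarize rent sharing is a simple wage equation: the negotiated wage equals the worker’s outside option plus a share of the rent per worker, where the share reflects bargaining power. This is a standard stylized formulation rather than a model taken from this book, and the numbers in the sketch below are invented purely for illustration.

```python
# Stylized rent-sharing arithmetic; all figures are invented for illustration.

def negotiated_wage(outside_option: float,
                    rent_per_worker: float,
                    bargaining_power: float) -> float:
    """Wage = outside option + the worker's share of the economic rent.

    bargaining_power is the fraction of the rent (between 0 and 1) that
    workers can capture, for instance through unions, scarce skills, or
    tight labor markets.
    """
    return outside_option + bargaining_power * rent_per_worker

# Same firm, same rents, different worker power:
weak = negotiated_wage(outside_option=40_000, rent_per_worker=30_000,
                       bargaining_power=0.05)    # workers with little leverage
strong = negotiated_wage(outside_option=40_000, rent_per_worker=30_000,
                         bargaining_power=0.50)  # e.g., well-organized players

print(weak, strong)   # 41500.0 55000.0
```

The sketch makes one narrow point: with identical productivity and identical rents, wages can differ widely depending on who holds bargaining power.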
Overall, Ricardo and Keynes may not have been right on every detail, but they correctly understood that productivity growth does not necessarily or automatically deliver broad-based prosperity. It will do so only when new technologies increase worker marginal productivity and the resulting gains are shared between firms and workers.

Even more fundamentally, these outcomes depend on economic, social, and political choices. New techniques and machines are not gifts descending unimpeded from the skies. They can focus on automation and surveillance to reduce labor costs. Or they can create new tasks and empower workers. More broadly, they can generate shared prosperity or relentless inequality, depending on how they are used and where new innovative effort is directed. In principle, these are decisions a society should make, collectively. In practice, they are made by entrepreneurs, managers, visionaries, and sometimes political leaders, with defining effects on who wins and who loses from technological advances.

Optimism, with Caveats

Even though inequality has skyrocketed, many workers have been left behind, and the productivity bandwagon has not come to the rescue in recent decades, we have reasons to be hopeful. There have been tremendous advances in human knowledge, and there is ample room to build shared prosperity based on these scientific foundations—if we start making different choices about the direction of progress.

Techno-optimists have one thing right: digital technologies have already revolutionized the process of science. The accumulated knowledge of humanity is now at our fingertips. Scientists have access to incredible measurement tools, ranging from atomic force microscopes to magnetic resonance imaging and brain scans. They also have the computing power to crunch vast amounts of data in a way that even thirty years ago would have seemed like fantasy.

Scientific inquiry is cumulative, with inventors building on each other’s work. Unlike today, knowledge used to diffuse slowly. In the 1600s, scholars such as Galileo Galilei, Johannes Kepler, Isaac Newton, Gottfried Wilhelm Leibniz, and Robert Hooke shared their scientific discoveries in letters that took weeks or even months to reach their destination. Nicolaus Copernicus’s heliocentric system, which correctly placed Earth in the orbit of the sun, was developed during the first decade of the sixteenth century. Copernicus had written out his theory by 1514, even if his most widely read book, On the Revolutions of the Celestial Spheres, was published only in 1543. It took almost a century from 1514 for Kepler and Galileo to build on Copernicus’s work and more than two centuries for the ideas to become widely accepted.

Today, scientific discoveries travel at lightning speed, especially when there is a pressing need.
Vaccine development usually takes years, but in early 2020 Moderna, Inc., produced a vaccine candidate just forty-two days after receiving the recently identified sequence of the SARS-CoV-2 virus. The entire development, testing, and authorization process took less than one year, resulting in remarkably safe and effective protection against severe illness caused by COVID-19. The barriers to sharing ideas and spreading technical know-how have never been lower, and the cumulative power of science has never been stronger.

However, to build on these advances and turn them to work for the betterment of billions of people around the world, we need to redirect technology. This must start by confronting the blind techno-optimism of our age and then developing new ways to use science and innovation.

The good and the bad news is that how we use knowledge and science depends on vision—the way that humans understand how they can turn knowledge into techniques and methods targeted at solving specific problems. Vision shapes our choices because it specifies what our aspirations are, what means we will pursue to achieve them, what alternative options we will consider and which ones we will ignore, and how we perceive the costs and benefits of our actions. In short, it is how we imagine technologies and their gifts, as well as the potential damage.

The bad news is that even at the best of times, the visions of powerful people have a disproportionate effect on what we do with our existing tools and the direction of innovation. The consequences of technology are then aligned with their interests and beliefs, and often prove costly to the rest. The good news is that choices and visions can change.

A shared vision among innovators is critical for the accumulation of knowledge and is also central to how we use technology. Take the steam engine, which transformed Europe and then the world economy. Rapid innovations from the beginning of the eighteenth century built on a common understanding of the problem to be solved: to perform mechanical work using heat. Thomas Newcomen built the first widely used steam engine sometime around 1712. Half a century later, James Watt and his business partner Matthew Boulton improved Newcomen’s design by adding a separate condenser, producing a more effective and commercially much more successful engine.

The shared perspective is visible in what these innovators were trying to achieve and how: using steam to push a piston back and forth inside a cylinder to generate work and then increasing the efficiency of these engines so that they could be used in a variety of different applications. A shared vision not only enabled them to learn from each other but meant that they approached the problem in similar ways. They predominantly focused on what is called the atmospheric engine, in which condensed steam creates a vacuum inside the cylinder, allowing atmospheric pressure to push the piston. They also collectively ignored other possibilities, such as high-pressure steam engines, first described by Jacob Leupold in 1720. Contrary to the eighteenth-century scientific consensus, high-pressure engines became the standard in the nineteenth century.

The early steam engine innovators’ vision also meant that they were highly motivated and did not pause to reflect on the costs that the innovations might impose—for example, on very young children sent to work under draconian conditions in coal mines made possible by improved steam-powered drainage.
What is true of steam engines is true of all technologies. Technologies do not exist independent of an underlying vision. We look for ways of solving problems facing us (this is vision). We imagine what kind of tools might help us (also vision). Of the multiple paths open to us, we focus on a handful (yet another aspect of vision). We then attempt alternative approaches, experimenting and innovating based on that understanding. In this process, there will be setbacks, costs, and almost surely unintended consequences, including potential suffering for some people. Whether we are discouraged or even decide that the responsible thing is to abandon our dreams is another aspect of vision.

But what determines which technology vision prevails? Even though the choices are about how best to use our collective knowledge, the decisive factors are not just technical or what makes sense in a pure engineering sense. Choice in this context is fundamentally about power—the power to persuade others, as we will see in Chapter 3—because different choices benefit different people. Whoever has greater power is more likely to persuade others of their perspective, which is most often aligned with their interests. And whoever succeeds in turning their ideas into a shared vision gains additional power and social standing.

Do not be fooled by the monumental technological achievements of humankind. Shared visions can just as easily trap us. Companies make the investments that management considers best for their bottom line. If a company is installing, say, new computers, this must mean that the higher revenues they generate more than make up for the costs. But in a world in which shared visions guide our actions, there is no guarantee that this is indeed the case. If everybody becomes convinced that artificial-intelligence technologies are needed, then businesses will invest in artificial intelligence, even when there are alternative ways of organizing production that could be more beneficial. Similarly, if most researchers are working on a particular way of advancing machine intelligence, others may follow faithfully, or even blindly, in their footsteps.

These issues become even more consequential when we are dealing with “general-purpose” technologies, such as electricity or computers. General-purpose technologies provide a platform on which myriad applications can be built and potentially generate benefits—but sometimes also costs—for many sectors and groups of people. These platforms also allow widely different trajectories of development. Electricity, for instance, was not just a cheaper source of energy; it also paved the way to new products, such as radios, household appliances, movies, and TVs. It introduced new electrical machinery. It enabled a fundamental reorganization of factories, with better lighting, dedicated sources of power for individual machinery, and the introduction of new precision and technical tasks in the production process. Advances in manufacturing based on electricity increased demand for raw materials and other industrial inputs, such as chemicals and fossil fuels, as well as retail and transport services. They also launched novel products, including new plastics, dyes, metals, and vehicles, that were then used in other industries. Electricity has also paved the way for much greater levels of pollution from manufacturing production.
Although general-purpose technologies can be developed in many different ways, once a shared vision locks in a specific direction, it becomes difficult for people to break out of its hold and explore different trajectories that might be socially more beneficial. Most people affected by those decisions are not consulted. This creates a natural tendency for the direction of progress to be socially biased—in favor of powerful decision makers with dominant visions and against those without a voice.

Take the decision of the Chinese Communist Party to introduce a social credit system that collects data on individuals, businesses, and government agencies to keep track of their trustworthiness and whether they abide by the rules. Initiated at the local level in 2009, it aspires to blacklist people and companies nationally for speech or social media posts that go against the party’s preferences. This decision, which affects the lives of 1.4 billion people, was taken by a few party leaders. There was no consultation with those whose freedom of speech and association, education, government jobs, ability to travel, and even likelihood of getting government services and housing are now being shaped by the system.

This is not something that happens only in dictatorships. In 2018 Facebook founder and CEO Mark Zuckerberg announced that the company’s algorithm would be modified to give users “meaningful social interactions.” What this meant in practice was that the platform’s algorithm would prioritize posts from other users, especially family and friends, rather than news organizations and established brands. The purpose of the change was to increase user engagement because people were found to be more likely to be drawn to and click on posts by their acquaintances. The main consequence of the change was to amplify misinformation and political polarization, as lies and misleading posts spread rapidly from user to user. The change did not just affect the company’s then almost 2.5 billion users; billions more people who were not on the platform were also indirectly affected by the political fallout from the resulting misinformation. The decision was made by Zuckerberg; the company’s chief operating officer, Sheryl Sandberg; and a few other top engineers and executives. Facebook users and citizens of affected democracies were not consulted.

What propelled the Chinese Communist Party’s and Facebook’s decisions? In neither case were they dictated by the nature of science and technology. Nor were they the obvious next step in some inexorable march of progress. In both cases you can see the ruinous role of interests—to quash opposition or to increase advertising revenues. Equally central was their leadership’s vision for how communities should be organized and what should be prioritized. But even more important was how technology was used for control: over the political views of the population in the Chinese case, and over people’s data and social activities in Facebook’s.

This is the point that, with the advantage of an additional 275 years of human history to draw on, H. G. Wells grasped and Francis Bacon missed: technology is about control, not just over nature but often over other humans. It is not simply that technological change benefits some more than others. More fundamentally, different ways of organizing production enrich and empower some people and disempower others. The same considerations are equally important for the direction of innovation in other contexts.
Business owners and managers may often wish to automate or increase surveillance because this enables them to strengthen their control over the production process, save on wage costs, and weaken the power of labor. This demand then translates into incentives to focus innovation more on automation and surveillance, even when developing other, more worker-friendly technologies could increase output more and pave the way to shared prosperity.

In these instances, society may even become gripped by visions that favor powerful individuals. Such visions then help business and technology leaders pursue plans that increase their wealth, political power, or status. These elites may convince themselves that whatever is good for them is also best for the common good. They may even come to believe that any suffering that their virtuous path generates is a price well worth paying for progress—especially when those bearing the brunt of the costs are voiceless. When thus inspired by a selfish vision, leaders deny that there are many different paths with widely different implications. They may even become incensed when alternatives are pointed out to them.

Is there no remedy against ruinous visions imposed on people without their consent? Is there no barrier against the social bias of technology? Are we locked in a constant cycle of one overconfident vision after another shaping our future while ignoring the damage?

No. There is reason to be hopeful because history also teaches us that a more inclusive vision that listens to a broader set of voices and recognizes the effects on everyone is possible. Shared prosperity is more likely when countervailing powers hold entrepreneurs and technology leaders accountable—and push production methods and innovation in a more worker-friendly direction. Inclusive visions do not avoid some of the thorniest questions, such as whether the benefits that some reap justify the costs that others suffer. But they ensure that social decisions recognize their full consequences and do not silence those who do not gain.

Whether we end up with selfish, narrow visions or something more inclusive is also a choice. The outcome depends on whether there are countervailing forces and whether those who are not in the corridors of power can organize and have their voices heard. If we want to avoid being trapped in the visions of powerful elites, we must find ways of countering power with alternative sources of power and resisting selfishness with a more inclusive vision. Unfortunately, this is becoming harder in the age of artificial intelligence.

Fire, This Time

Early human life was transformed by fire. In Swartkrans, a South African cave, the earliest excavated layers show the bones of ancient hominins eaten by predators—big cats or bears. To the apex predators of the day, humans must have seemed like easy prey. Dark places in caves were particularly dangerous, to be avoided by our ancestors. Then the first evidence of fire appears inside that cave, with a layer of charcoal about a million years old. Subsequently, the archaeological record shows a complete reversal: from that time forward, the bones are mostly those of nonhuman animals. Control of fire gave hominins the ability to take and hold caves, turning the tables on other predators.

No other technology in the last ten thousand years can claim to approach this type of fundamental impact on everything else we do and who we are.
Now there is another candidate, at least according to its boosters: artificial intelligence (AI). Google’s CEO Sundar Pichai is explicit when he says that “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”

AI is the name given to the branch of computer science that develops “intelligent” machines, meaning machines and algorithms (instructions for solving problems) capable of exhibiting high-level capabilities. Modern intelligent machines perform tasks that many would have thought impossible a couple of decades ago. Examples include face-recognition software, search engines that guess what you want to find, and recommendation systems that match you to the products that you are most likely to enjoy or, at the very least, purchase. Many systems now use some form of natural-language processing to interface between human speech or written inquiries and computers. Apple’s Siri and Google’s search engine are examples of AI-based systems that are used widely around the world every day.

AI enthusiasts also point to some impressive achievements. AI programs can recognize thousands of different objects and images and provide some basic translation among more than a hundred languages. They help identify cancers. They can sometimes invest better than seasoned financial analysts. They can help lawyers and paralegals sift through thousands of documents to find the relevant precedents for a court case. They can turn natural-language instructions into computer code. They can even compose new music that sounds eerily like Johann Sebastian Bach and write (dull) newspaper articles. In 2016 the AI company DeepMind released AlphaGo, which went on to beat one of the two best Go players in the world. The chess program AlphaZero, capable of defeating any chess master, followed one year later. Remarkably, AlphaZero was self-taught and reached a superhuman level after only nine hours of playing against itself.

Buoyed by these victories, many now assume that AI will affect every aspect of our lives—and for the better. It will make humankind much more prosperous, healthier, and able to achieve other laudable goals. As the subtitle of a recent book on the subject claims, “artificial intelligence will transform everything.” Or as Kai-Fu Lee, the former president of Google China, puts it, “Artificial Intelligence (AI) could be the most transformative technology in the history of mankind.”

But what if there is a fly in the ointment? What if AI fundamentally disrupts the labor market where most of us earn our livelihoods, expanding inequalities of pay and work? What if its main impact will not be to increase productivity but to redistribute power and prosperity away from ordinary people toward those controlling data and making key corporate decisions? What if along this path, AI also impoverishes billions in the developing world? What if it reinforces existing biases—for example, based on skin color? What if it destroys democratic institutions?

The evidence is mounting that all these concerns are valid. AI appears set on a trajectory that will multiply inequalities, not just in industrialized countries but everywhere around the world. Fueled by massive data collection by tech companies and authoritarian governments, it is stifling democracy and strengthening autocracy.
As we will see in chapters 9 and 10, it is profoundly affecting the economy even as, on its current path, it is doing little to improve our productive capabilities. When all is said and done, the newfound enthusiasm about AI seems an intensification of the same optimism about technology that had already engulfed the digital world, regardless of whether that technology focuses on the automation, surveillance, and disempowerment of ordinary people.

Yet these concerns are not taken seriously by most tech leaders. We are continuously told that AI will bring good. If it creates disruptions, those problems are short-term, inevitable, and easily rectified. If it is creating losers, the solution is more AI. For example, DeepMind’s cofounder, Demis Hassabis, not only thinks that AI “is going to be the most important technology ever invented,” but he is also confident that “by deepening our capacity to ask how and why, AI will advance the frontiers of knowledge and unlock whole new avenues of scientific discovery, improving the lives of billions of people.”

He is not alone. Scores of experts are making similar claims. As Robin Li, cofounder of the Chinese internet search firm Baidu and an investor in several other leading AI ventures, states, “The intelligent revolution is a benign revolution in production and lifestyle and also a revolution in our way of thinking.”

Many go even further. Ray Kurzweil, a prominent executive, inventor, and author, has confidently argued that the technologies associated with AI are on their way to achieving “superintelligence” or “singularity”—meaning that we will reach boundless prosperity and accomplish our material objectives, and perhaps a few of the nonmaterial ones as well. He believes that AI programs will surpass human capabilities by so much that they will themselves produce further superhuman capabilities or, more fancifully, that they will merge with humans to create superhumans.

To be fair, not all tech leaders are as sanguine. Billionaires Bill Gates and Elon Musk have expressed concern about misaligned, or perhaps even evil, superintelligence and the consequences of uncontrolled AI development for the future of humanity. Yet both of these sometime holders of the title “richest person in the world” agree with Hassabis, Li, Kurzweil, and many others on one thing: most technology is for good, and we can and must rely on technology, especially digital technology, to solve humanity’s problems. According to Hassabis, “Either we need an exponential improvement in human behavior—less selfishness, less short-termism, more collaboration, more generosity—or we need an exponential improvement in technology.”

These visionaries do not question whether technological change is always progress. They take it for granted that more technology is the answer to our social problems. We do not need to fret too much about the billions of people who are initially left behind; they will soon benefit as well. We must continue to march onward, in the name of progress. As LinkedIn cofounder Reid Hoffman puts it, “Could we have a bad twenty years? Absolutely. But if you’re working toward progress, your future will be better than your present.”

Such faith in the beneficent powers of technology is not new, as we already saw in the Prologue. Like Francis Bacon, and as in the foundational story of fire, we tend to see technology as enabling us to turn the tables on nature. Rather than being the weakling prey, thanks to fire we became the planet’s most devastating predator.
We view many other technologies through the same lens—we conquer distance with the wheel, darkness with electricity, and illness with medicine. Contrary to all these claims, we should not assume that the chosen path will benefit everybody, for the productivity bandwagon is often weak and never automatic. What we are witnessing today is not inexorable progress toward the common good but an influential shared vision among the most powerful technology leaders. This vision is focused on automation, surveillance, and mass-scale data collection, undermining shared prosperity and weakening democracies. Not coincidentally, it also amplifies the wealth and power of this narrow elite, at the expense of most ordinary people.

This dynamic has already produced a new vision oligarchy—a coterie of tech leaders with similar backgrounds, similar worldviews, similar passions, and unfortunately similar blind spots. This is an oligarchy because it is a small group with a shared mind-set, monopolizing social power and disregarding its ruinous effects on the voiceless and the powerless. This group’s sway comes not from tanks and rockets but from its access to the corridors of power and its ability to influence public opinion. The vision oligarchy is so persuasive because it has had brilliant commercial success. It is also supported by a compelling narrative about all the abundance and control over nature that new technologies, especially the exponentially increasing capabilities of artificial intelligence, will create. The oligarchy has charisma, in its nerdy way. Most importantly, these modern oligarchs mesmerize influential custodians of opinion: journalists, other business leaders, politicians, academicians, and all sorts of intellectuals. The vision oligarchy is always at the table and always at the microphone when important arguments are being made.

It is critical to rein in this modern oligarchy, and not just because we are at a precipice. This is the time to act because these leaders have one thing right: we have amazing tools at our disposal, and digital technologies could amplify what humanity can do. But only if we put these tools to work for people. And this is not going to happen until we challenge the worldview that prevails among our current global tech bosses. This worldview is based on a particular—and inaccurate—reading of history and what that implies about how innovation affects humanity. Let us start by reassessing this history.

Plan for the Rest of the Book

In the rest of this book we develop the ideas introduced in this chapter and reinterpret the economic and social developments of the last thousand years as the outcome of the struggle over the direction of technology and the type of progress—and who won, who lost, and why. Because our focus is on technologies, most of this discussion centers on the parts of the world where the most important and consequential technological changes were taking place. This means first Western Europe and China for agriculture, then Britain and the US for the Industrial Revolution, and then the US and China for digital technologies. Throughout we also emphasize how at times different choices were made in different countries, as well as the implications of the leading economies’ technologies for the rest of the world, as they spread, sometimes voluntarily, sometimes forcefully, across the globe.

Chapter 2 (“Canal Vision”) provides a historical example of how successful visions can lead us astray.
The success of French engineers in building the Suez Canal stands in remarkable contrast to their spectacular failure when the same ideas were brought to Panama. Ferdinand de Lesseps persuaded thousands of investors and engineers to back the unworkable plan of building a sea-level canal at Panama, resulting in the deaths of more than twenty thousand people and financial ruin for many more. This is a cautionary tale for any history of technology: great disaster often has its roots in powerful visions, which in turn are based on past success.

Chapter 3 (“Power to Persuade”) highlights the central role of persuasion in how we make key technology and social decisions. We explain how the power to persuade is rooted in political institutions and the ability to set the agenda, and emphasize how countervailing powers and a wider range of voices can potentially rein in overconfidence and selfish visions.

Chapter 4 (“Cultivating Misery”) applies the main ideas of our framework to the evolution of agricultural technologies, from the beginning of settled agriculture during the Neolithic Age to the major changes in the organization of land and techniques of production during the medieval and early modern eras. In these momentous episodes, we find no evidence of an automatic productivity bandwagon. These major agricultural transitions have tended to enrich and empower small elites while generating few benefits for agricultural workers: peasants lacked political and social power, and the path of technology followed the visions of a narrow elite.

Chapter 5 (“A Middling Sort of Revolution”) reinterprets the Industrial Revolution, one of the most important economic transitions in world history. Although much has been written about the Industrial Revolution, what is often underemphasized is the emergent vision of newly emboldened middle classes, entrepreneurs, and businesspeople. Their views and aspirations were rooted in institutional changes that started empowering the middling sort of English people from the sixteenth and seventeenth centuries onward. The Industrial Revolution may have been propelled by the ambitions of new people attempting to improve their wealth and social standing, but theirs was far from an inclusive vision. We discuss how changes in political and economic arrangements came about, and why these were so important in producing a new concept of how nature could be controlled and by whom.

Chapter 6 (“Casualties of Progress”) turns to the consequences of this new vision. It explains how the first phase of the Industrial Revolution was impoverishing and disempowering for most people, and why this was the outcome of a strong automation bias in technology and a lack of worker voice in technology and wage-setting decisions. It was not just economic livelihoods that were adversely affected by industrialization but also the health and autonomy of much of the population. This awful picture started changing in the second half of the nineteenth century as regular people organized and forced economic and political reforms. The social changes altered the direction of technology and pushed up wages. This was only a small victory, however, and Western nations would have to travel along a much longer, contested technological and institutional path to achieve shared prosperity.
Chapter 7 (“The Contested Path”) reviews how arduous struggles over the direction of technology, wage setting, and more generally politics built the foundations of the most spectacular period of economic growth in the West. During the three decades following World War II, the United States and other industrial nations experienced rapid economic growth that was broadly shared across most demographic groups. These economic trends went together with other social improvements, including expansions in education, health care, and life expectancy. We explain how and why technological change did not just automate work but also created new opportunities for workers, and how this was embedded in an institutional setting that bolstered countervailing powers.

Chapter 8 (“Digital Damage”) turns to our modern era, starting with how we lost our way and abandoned the shared-prosperity model of the early postwar decades. Central to this volte-face was a change in the direction of technology away from new tasks and opportunities for workers and toward a preoccupation with automating work and cutting labor costs. This redirection was not inevitable but rather resulted from a lack of input and pressure from workers, labor organizations, and government regulation. These social trends contributed to the undermining of shared prosperity.

Chapter 9 (“Artificial Struggle”) explains that the post-1980 vision that led us astray has also come to define how we conceive of the next phase of digital technologies, artificial intelligence, and how AI is exacerbating the trends toward economic inequality. In contrast to claims made by many tech leaders, we will also see that in most human tasks existing AI technologies bring only limited benefits. Additionally, the use of AI for workplace monitoring is not just boosting inequality but also disempowering workers. Worse, the current path of AI risks reversing decades of economic gains in the developing world by exporting automation globally. None of this is inevitable. In fact, this chapter argues that AI, and even the emphasis on machine intelligence, reflects a very specific path for the development of digital technologies, one with profound distributional effects—benefiting a few people and leaving the rest behind. Rather than focusing on machine intelligence, it is more fruitful to strive for “machine usefulness,” meaning how machines can be most useful to humans—for example, by complementing worker capabilities. We will also see that when it was pursued in the past, machine usefulness led to some of the most important and productive applications of digital technologies but has become increasingly sidelined in the quest for machine intelligence and automation.

Chapter 10 (“Democracy Breaks”) argues that the problems facing us may be even more severe because massive data collection and harvesting using AI methods are intensifying surveillance of citizens by governments and companies. At the same time, AI-powered advertisement-based business models are propagating misinformation and amplifying extremism. The current path of AI is good neither for the economy nor for democracy, and these two problems, unfortunately, reinforce each other.

Chapter 11 (“Redirecting Technology”) concludes by outlining how we can reverse these pernicious trends.
It provides a template for redirecting technological change based on altering the narrative, building countervailing powers, and developing technical, regulatory, and policy solutions to tackle specific aspects of technology’s social bias. |