Estates of Mind

 
For many Americans, the economy is still a cold and hard place. Wages are down, job numbers are barely creeping up, and the latest asset bubble drifts high out of the reach of most folks. About the only hot business around is the rivalry to identify a root cause for our woes. Even as Democrats and the GOP continue to argue the value of stimulus, two new schools of thought have shot into fashion. Importantly, both focus on the role of technology. Strangely, they do so from diametrically opposed points of view.

One is led by MIT professors Erik Brynjolfsson and Andrew McAfee, who proudly write in the Luddite tradition. The problem, they say, is too-rapid automation. In short, robots are taking our jobs. The other viewpoint, epitomized by the writings of economists Robert Gordon and Tyler Cowen, says no, the real problem is that we have entered a “great stagnation.” The digital revolution has reached maturity, and no other transformational technology is in sight.

Both groups muster an abundance of evidence. It hardly needs saying that we live in an era in which millions of workers have been displaced by technology. Less obvious to some, but also demonstrably true, is that we live in an era in which the pace of technological progress has stagnated in many key realms. Where, for instance, is the “century of biology” we were promised only a few years ago? And that once-vaunted pharmaceutical “pipeline” looks awfully dry these days.

But what if both camps are right about the effects they observe and wrong about the causes? What begins to make sense of this odd picture is a problem that previous generations of Americans also had to confront—a concentration of economic control that enables a few corporate bosses to manipulate technological advance entirely outside of any open and competitive marketplace. Put another way, what can explain both of these problems is that the masters of America’s biggest technological corporations increasingly enjoy the power to speed the rollout of technologies that favor capital and to slow those that disfavor their own private interests.

Back in the 1930s, America suffered from a similar set of ills, and the government took direct aim. Specifically, starting in the second half of the New Deal, Franklin Delano Roosevelt’s administration combined stepped-up antitrust enforcement with the forced licensing of key patents held by monopolistic enterprises. Today, few people know this history, but the policy laid the groundwork for the long era of prosperity and technological progress that followed, including the birth of Silicon Valley.

Indeed, when the late industrial historian Alfred Chandler Jr. set out to research his second-to-last book, Inventing the Electronic Century, he came to a conclusion that surprised even him. What was most responsible for America’s astounding technological advance in the twentieth century? It was, Chandler wrote in 2001, the men and women of the Roosevelt administration’s antitrust division. They were the “gods,” he wrote, who “set the stage” for the information revolution. Follow where Chandler points, and we may yet recover the key to restoring broad prosperity, along with the ability to devise the technological tools we need to fix many of our most pressing problems.

Our first challenge is simply to recognize how few companies now govern our technological economy. A good starting example is the chemical and biotech giant Monsanto. Here is a corporation that wields almost complete control over the basic genetic traits of key crops, including corn and soy; that over the last decade has buttressed that power by spending upward of $12 billion to buy direct competitors such as DeKalb Genetics and Delta and Pine Land as well as at least thirty companies that breed and retail its seeds; and that has brought at least 145 lawsuits against small farmers to enforce its patent rights.

Or consider the business software giant Oracle. Its CEO, Larry Ellison, once said that acquiring another company was “a confession that there’s a failure to innovate.” Then in 2004 Ellison began to gobble up precisely those competitors most likely to force Oracle to innovate. These included PeopleSoft, Siebel, and Sun Microsystems, along with more than eighty other firms.

The story is not much different at Google, which has vacuumed up more than 120 former competitors, along with their products, patents, and, often, their scientists and engineers. If you think of Google as an innovative company, remember that it was the smaller companies it swallowed that actually developed most of its key components. These include YouTube, DoubleClick, and the ITA airline reservation system, as well as ten search companies that no longer compete with Google because Google now owns them. Much the same is true of Intel, Corning, Pfizer, and Microsoft. These giants don’t merely set standards for certain formats of semiconductors, glass, pharmaceuticals, and software. Their mastery over patents and markets empowers them to block or buy almost any newcomer that might threaten their sovereignty. What technologies are developed, and how and where they are developed, is increasingly up to these small clubs of executives alone.

Such private dominance over whole precincts of applied science does not lack for defenders. In academia, an entire industry has emerged to preach the gospel of Joseph Schumpeter, the Austrian economist who in 1942 wrote an eloquent defense of the monopolist as the prime mover of innovation. He claimed that monopoly is of social value because of “the protection it … secures for long range planning.” A good example of a modern-day Schumpeterian is Michael Mandel, the former chief economist at BusinessWeek. Noting that society faces “enormous challenges” in remaking our energy, health, and other sectors, Mandel concluded in a recent paper that “only large firms have the staying power and scale” to “implement systemic innovations.”

What’s odd is the almost complete lack of attention to this roll-up of control over our system of industrial science, especially given the public’s fascination with the last high-stakes antitrust suit against a tech goliath. This came in 1998, when the Justice Department charged Microsoft with leveraging its monopoly over operating systems to capture control over lines of business such as browsers. The case resulted in a court order to break Microsoft in two, an order that an appeals court overturned and that the Bush administration declined to revive. It also resulted in controls over Microsoft’s actions that, many argue, cleared the way for the initial emergence of companies like Google.

Gary Reback, an antitrust lawyer who took part in the case against Microsoft, sums up the current policy: “In information technology we have no antitrust enforcement today, and I don’t expect any enforcement for at least the next four years.” Worse, in realms ranging from drugs to genetically modified food, the longer the government allows big companies to swallow smaller ones, the harder it becomes to restart the processes of innovation. As antitrust lawyer Robert Litan has observed, “Mergers in high-tech markets should face an extra degree of scrutiny.” The “relative sluggishness of the judicial process,” he says, can make it very hard to “unscramble” a deal after the fact.

The debate over the relationship between monopoly and innovation goes back at least as far as the Industrial Revolution. Indeed, striking the right balance was a preoccupation of the Founders, as evidenced by their concern with patent monopolies. Thomas Jefferson, who served on the first U.S. Patent Commission, rejected the idea that a citizen had any “right” to monopolize control over a technology. Ideas, Jefferson wrote, “cannot, in nature, be a subject of property.” But, to give the inventor a chance to perfect his conception and grow it to scale, Jefferson believed that some ideas are “worth … the embarrassment of an exclusive patent.” Jefferson emphasized, however, that officials must always be chary in granting such privilege.

Nevertheless, problems soon emerged. By the mid-nineteenth century, American financiers had figured out how to use patent monopolies not merely to hobble rival innovators but also to erect corporate empires; by the turn of the twentieth century, they had largely perfected the art. One of the more notable instances saw J. P. Morgan grab control of the electrical patents of Thomas Edison, George Westinghouse, and Nikola Tesla, and then use the resulting “pool” to control the entire electrical industry. One lawyer of that era even penned a primer for businessmen. “Patents are the best and most effective means of controlling competition,” he wrote. Sometimes, he added, patents “give absolute command of the market, enabling the owner to name the price without regard to cost of production.” The first coherent reactions against such abuse of patents also date to this time. In 1900, political scientist Jeremiah Jenks proposed using antitrust law to compel giant companies to license their patents.

Beginning with the Sherman Act of 1890 and continuing through the Progressive Era, the country passed its first antitrust legislation, but enforcement proved weak and never tackled the problem of patent monopoly. This remained true through Roosevelt’s first term. Indeed, during the first two years of the New Deal, FDR largely suspended antitrust enforcement. But following the economic and political failure of the National Industrial Recovery Act, Roosevelt reversed course. In a 1938 message to Congress, FDR said he would use antitrust policy to unleash the “vibrant energies” of entrepreneurs and thereby bring a “new vitality” to America.

The first step was not to dispatch a mob of hillbillies with broad axes. Rather, it was to join Congress in launching the Temporary National Economic Committee (TNEC) to investigate—empirically—how big companies concentrated and used power. The result was the most extensive study of monopoly in the history of America, and a series of shocking revelations. One was the detailed account of how the glass companies Hartford-Empire and Owens-Illinois had managed to capture and hold a 100 percent monopoly over the business of making bottles in America.

As a summary of the TNEC put it, Hartford-Empire had “demonstrated how a corporation might rise to a position of power and monopoly, not through efficiency or through managerial skill, but by manipulating privileges granted under the patent laws.” Once there, Hartford-Empire maintained a “control over production and prices more complete than that exercised by most public utility commissions.”

U.S. patent policy, the summary concluded, promoted two contradictory processes: “One is creative, the other, restrictive; one encourages or rewards inventiveness, while the other fosters monopoly; one promotes production, the other fosters predation.” The overall balance, however, favored the suppression of better ideas. “The patent system permits powerful units or combinations to destroy small competitors by endless litigation or by threats of litigation, regardless of the merits of the small producer’s case or of his product.”

Based largely on these revelations, the Roosevelt administration began to establish the foundations of a competition policy that would remain in effect for two generations. The main architect was Wyoming-born lawyer Thurman Arnold. During the early days of the New Deal, Arnold had been skeptical of antitrust enforcement. But when he was named to head the TNEC inquiry into patents and saw how companies like Hartford-Empire operated, his thinking on the issue changed completely.

Roosevelt named him to run the Antitrust Division of the Department of Justice (DOJ) in 1938; by 1942, Arnold had boosted the staff from eighteen to nearly 600. He also launched a slew of new cases, bringing ninety-two in 1940 compared to just eleven in 1938. And he established clear strategic goals. Arnold agreed with Jeffersonians like Louis Brandeis that the central aim of anti-monopoly law is to disperse political power. He also believed that competition was best for technological advance, and here he made his greatest mark.

There were three main components to the overarching competition strategy that emerged in the 1930s in the complex dialogue between the Roosevelt administration and Congress. One was an acceptance that some industries, like electricity, telephones, and gasworks, were natural monopolies and hence should be regulated by the public. Second was the belief that in areas of the economy that did not require high degrees of scientific knowledge—such as retail, farming, and banking—the government should promote as wide a distribution of power and opportunity as possible. Hence the anti-chain store legislation of the time.

The third component, created largely by Arnold and his team, was a coherent approach to regulating industrial corporations engaged in the art of applying science to mass production.

Well into the 1930s, giant companies like AT&T and DuPont were investing in research, sometimes extravagantly. But their dominance over their markets meant they—like Hartford-Empire—often had little incentive to introduce superior technologies when doing so threatened to cannibalize their existing product lines or otherwise diminish their profits. Arnold tackled this challenge first by insisting that all such companies compete at least to some degree. This led the DOJ to adopt a general policy of aiming to have at least three or four firms engaged in every industrial activity. (One example is the government’s 1945 decision to force Alcoa to share its 100 percent aluminum monopoly with Kaiser and Reynolds.)

Arnold then combined this policy with an entirely new approach to patents. The TNEC had recommended that any patent held by a dominant firm be made “available for use by anyone who may desire its use,” that all licenses be entirely “unrestricted,” and that suits for infringement be all but prohibited. As one writer put it, the goal was to treat the patent monopolies of dominant companies as “a public utility.”

Innovation by Acquisition? The charts accompanying this article depict some but not all of the major high-tech companies absorbed in recent years by Google, Oracle, and Monsanto.

Under Arnold’s leadership, in 1941 the DOJ and the Federal Trade Commission (FTC) began to apply a variant of this policy. The government’s general approach was to start by bringing an antitrust suit against a firm that had captured undue control of some sector of the economy. It would then accept a settlement (in the form of a consent decree) by which the corporation promised to share its basic technologies with all comers, for free.

Until the Ronald Reagan administration killed the policy, the U.S. government applied this model to most of the technologically dominant large corporations in the nation. In the process, it forced the people who controlled these companies to spill perhaps upward of 100,000 technological “source codes” into the world. A study done in 1961 counted 107 judgments between 1941 and 1959 alone, which resulted in the compulsory licensing of 40,000 to 50,000 patents.

One result was the greatest dissemination of industrial knowledge in human history. The world was treated to the secrets behind the televisions of RCA, the light bulbs of General Electric, the cellophane and nylon of DuPont, the titanium of National Lead, and the shoemaking technologies of United Shoe Machinery, among many others.

Another result was a new balance of power in the political economy of technology. By using antitrust law to trump patent law, Arnold and his team largely resolved the patent system’s traditional dilemma. The big companies were less free to use patents to protect their bastions. The small firms, precisely because their size exempted them from antitrust oversight, could still fully exploit patent monopoly. Without breaking up a single big industrial company, Arnold and his team helped foster a world in which engineers and scientists—no matter how small the company they worked for—could go about their work safe from predation, albeit not from competition.

To get a sense of how Arnold’s team liberated the ingenuity of America’s citizens, consider what took place after the DOJ brought an antitrust suit against AT&T in 1949. By the late 1940s, AT&T had become notorious for its failure to integrate the most recent ideas of its subsidiary, Bell Labs, into the telephone system it controlled. The FTC, for instance, had cited the monopoly for sitting on such ready-for-market innovations as automatic dialing, office switchboards, and new handsets.

Even before settling the case, AT&T began licensing out key patents it controlled. One was for a then obscure device called the transistor. At the time, transistors were seen merely as a potential competitor to existing vacuum tube technology, and AT&T wasn’t much interested in disrupting its existing business lines by developing them. In 1952, AT&T licensed the technology for a small fee to thirty-five companies, twenty-five from the United States and ten from abroad.

Today, of course, transistors are the bedrock of all computer technology. The path to practical application was blazed not by AT&T or any other big firm; as business historian David Mowery has written, “the more aggressive pioneers in the application of the new transistor technology were smaller firms that had not produced vacuum tubes.” One of the smallest, Texas Instruments, introduced the first commercial silicon transistor in 1954, just three years after its founding. Other early drivers were Motorola and Fairchild.

Consider, also, what happened inside the big, science-based industrial corporations after they were forced to compete with the fruits of their own scientists’ labors. In his close study of DuPont, business historian David Hounshell writes that “a particularly virulent attack” by the DOJ in the 1940s led executives to conclude that DuPont’s “generation-old strategy of growth through acquisition was no longer politically feasible,” and, further, “that the corporation’s growth would have to be based almost exclusively on the fruits of research.” Pointing to DuPont’s subsequent investments in R&D, Hounshell concluded that Arnold’s policy, although not necessarily best for DuPont’s short-term profits, “was good for the scientific community” at large.

We see much the same pattern in copier technology. Here the key action was a 1975 consent decree between the FTC and Xerox. In 1972, Xerox had been able to use patents to block Litton and IBM from entering the plain paper copier market. But the new agreement opened the market to new competitors and spurred Xerox to redouble its own development efforts. “The transition period” after the consent decree, Stanford economist Timothy Bresnahan has written, “saw a great deal of innovative activity from entrants and Xerox.” Faced with new competitors on all sides, he adds, “Xerox introduced new products in all segments.”

We also see this pattern in the software industry. In January 1969 the DOJ filed suit against IBM, charging the giant with retarding the growth of data-processing companies. In direct response to the suit, IBM decided to “unbundle” its hardware, software, and services. As then-CEO Thomas Watson Jr. wrote, to “mollify” the Justice Department IBM abandoned its old marketing practice, by which it would “lump everything together in a single price—hardware, software, engineering help, maintenance, and even training sessions.”

One result, as Alfred Chandler observed, was to open up a market for “companies [including the Computer Sciences Corporation and Applied Data Research] hoping to sell independent software applications.” The other was to spur IBM to new and greater feats of science and engineering. In the years after the suit, Watson writes, IBM “prospered—which made the antitrust laws easier for me to accept.”

Now consider, in contrast, what happened within the walls of the giant science-based industrial corporations after the Reagan administration abandoned most of Arnold’s approach to regulating competition. We see a sudden collapse of investment in research by giant firms left to govern entire realms of technology as they alone saw fit.

In the 1980s and ’90s, General Electric was run by Jack Welch, widely recognized as one of the brightest CEOs of the time. Almost as soon as the Reagan administration overturned Arnold’s antitrust regime, Welch embarked on what he called his “No. 1 and No. 2 strategy.” First came a campaign of buying up and selling off business units in order to insulate GE from competition in every industrial sector in which it operated. Second came a shift from relying on R&D to drive profitability to exploiting the corporate power Welch had newly forged. The bottom line? In 1981, GE was the fourth-biggest U.S. industrial firm and one of the top spenders on research. By 1993, GE had fallen to seventeenth in spending on R&D but had become the most profitable big company in America.

For a more recent example, there’s Pfizer. Here the buying binge did not begin until 1999, but once it started executives pursued it with abandon. Over the next ten years they grabbed Warner-Lambert, Pharmacia, and many smaller companies. The culmination came in 2009, when they seized Wyeth. In the aftermath, the executives cut 19,000 jobs and slashed R&D spending by a phenomenal 40 percent, from $11.3 billion at the two companies combined to about $6.5 billion. The former president of Pfizer Global Research, John LaMattina, summed up the results in Nature Reviews Drug Discovery. “Although mergers and acquisitions in the pharmaceutical industry might have had a reasonable short-term business rationale,” LaMattina wrote, “their impact on the R&D of the organizations involved has been devastating.”

The American public has a fundamental interest in empowering our scientists and engineers to bring forth what is truly new and better, and in empowering ourselves—as a community and as individuals—to adopt these ideas at the pace and on a path that we alone choose. Why then have we almost entirely ignored, since the case against Microsoft, the role competition policy must play in promoting citizen-friendly technological advance?

The most obvious answer is money. In his 1942 defense of monopolists, Schumpeter wrote that dominant firms use their outsized profits to develop and introduce new technologies. In the real world, many goliaths invest their hoards in advertising old technologies, purchasing friendly treatment from Congress and the White House, and hiring “experts” at think tanks and universities to make their case with sponsored research.

The giants have also invested liberally in a powerful, but specious, political argument—that the “global” nature of competition today makes bigness necessary. They use this argument to justify more concentrated corporate power. Michael Mandel distilled the idea in a recent paper: “In order to capture the fruits of innovation,” he wrote, “U.S. companies have to have the resources to stand against foreign competition, much of which may be state supported.” They also use the argument to justify greater control over intellectual property. The 1994 Uruguay Round trade deal, for instance, enabled these giants to reinforce patent and copyright protections not just in developing nations but also here at home, such as by extending patent terms from seventeen to twenty years.

Given how effective this “global competition” argument continues to be, even among sophisticated intellectuals, it merits a detailed response. The first point to consider is simple: the idea ignores the historical evidence. Under the system Arnold pioneered, the American economy prevailed over and ultimately vanquished two rival economic systems, those of National Socialism and, later, Soviet Communism. America became the “Arsenal of Democracy” during World War II even as the Justice Department was busy slapping domestic monopolies with antitrust suits. In the 1950s and ’60s, while American prosperity was putting the lie to Soviet Communism, we were deploying a competition policy that today’s libertarians conflate with “command and control” but that was really the exact opposite.

Today, of course, global trade has vastly expanded. But that only makes the idea that U.S. citizens must allow domestic monopolies to concentrate power, so as to help “our” companies compete with “their” companies, that much less valid. For one, such arguments contradict the intent and existing structure of the interdependent international industrial system built with such care in the years after World War II.

The American architects of this system assumed that industrial integration with countries like Japan would make it all but impossible for the Free World’s industrialized nations to engage in armed aggression against one another. This strategy was so successful that it provided the argument for the subsequent extension of this system of “free trade” to countries like India and China in the mid-1990s.

The architects of the system also believed that such integration would provide an important economic by-product, namely more competition for big U.S. corporations—and, by extension, more rapid technological advance. Forcing companies like General Motors and RCA to compete with companies like Toyota and Panasonic, so the thinking went, was a great way to supplement antitrust enforcement, not an excuse to abandon it.

The architects of the system were completely confident that the U.S. government could—and would—use trade law to police the international system. One of the best examples of such enforcement took place in the mid-1980s, when Japanese electronics corporations including Hitachi and NEC made a play to capture control over key components of the personal computer, such as DRAM memory chips. The U.S. government responded by applying tariffs and quotas to Japanese-made components. The goal was not to bring the activity home to America but to spread it more widely. And, in fact, the action gave a huge boost to manufacturers in places like South Korea and Singapore.

Today, viewing corporations as national champions that need to be favored with expanded monopoly power is a form of protectionism, and an extremely dangerous one. It leads to less innovation and to a loss of public control over how technology is deployed and for whose benefit. Worse, it distracts us from the challenge of working with citizens of other nations to ensure that our international system—which is, for all intents and purposes, now a form of global industrial “commons”—is structured to guarantee its safe operation and resiliency at all times. The most important goal? The distribution of physical risk in the system, via the safe distribution of the production capacity we rely on for our foods, drugs, electronics, and other vital supplies. Which is, at bottom, just another way of saying we need a coherent competition policy.

So what technological gems lie hidden inside today’s giant corporations? Which vaults of patents should we crack open first? The fact is, we don’t know which ideas will prove most useful to us, over time. Those that now seem most promising might not pan out. Others, less glittery in their infancy, might yield wonders. The only way to find out is to drag the ideas into the light, and let the public pick through them and play with them just as we did in the golden age of American prosperity.

Today, we are being herded in a very different direction. A century ago, America’s lords of industry boasted of their power right in the names of their industrial estates; there was Standard Oil, Standard Distilling, Standard Rope and Twine. Today’s corporate chieftains, as often as not, choose names that lend an aura of smallness, even cuteness, to the imperial enterprise. But it’s not hard to identify which corporations could be renamed Standard Operating System, Standard Semiconductor, Standard Enterprise Software, Standard Storage, and Standard Search.

The problem is not standardization per se. Some standardization is necessary in almost every technological system: think electrical sockets, doorframes, railroads, and television broadcasting. But too rigid a standardization, or standards setting left in the wrong hands, can be stifling. As the editors of Engineering magazine explained the conundrum a century ago, the challenge is “to suppress the folly of individualism which prefers sliding down a rope to using the standardized staircase, and yet not suppress the benefactor of standards who can evolve the escalator.”

That’s why it matters whether a standard is open or closed. And why it matters whether decisions about how and what to standardize are made by a democratic community or by a single private corporation operating in the interests of a few individuals.

Today, in almost every key technological sector in America—including electronics, software, pharmaceuticals, medical devices, and the Internet—standardization is determined and enforced by private actors for private profit. The result is not merely to leave the decision about what technologies to deploy and under what terms in the hands of private corporate governments; it is also to force all of our scientists and engineers to goose-step down particular technological pathways.

Do nothing, and we will get the future they want, as fast as they want it, at the price they set.