The Idea in Brief
As hockey great Wayne Gretzky once said, the key to winning is skating first to where the puck will be next. Business success is similar. We all want to go where the greatest profits will be—but by the time most of us get there, the “puck” has moved on.
Consider IBM: Riding high in the early 1980s, the company clung to where the money had been—computer-system design/assembly—outsourcing its processor chips and operating system to Intel and Microsoft. A 10-year decline followed, as Intel and Microsoft—navigating to where the money would be—captured industry profits.
How to avoid similar disaster? Understand how industries evolve. Then use your insights to:
- Predict profit migration as your industry matures.
- Focus resources on activities that are about to generate substantial profits.
The Idea in Practice
How Value Chains Evolve
As industries and their products mature, value chains evolve predictably:
Stage 1: A Tight Fit. Early products’ functionality does not yet meet key customers’ needs (e.g., the first mainframe computers weren’t yet powerful or fast enough). Companies compete on performance, making the highest-quality products for their most demanding—and profitable—customers.
Firms also push technological frontiers—developing and combining product components more efficiently, using interdependent, proprietary product architectures. Large, established, vertically integrated companies dominate, because all their units communicate under one roof. Products for end-users constitute the most profitable point on the value chain. Example:
Telephone companies still dominate in high-speed Internet access via phone lines—because too many unpredictable interdependencies exist between DSL providers and phone companies. By spanning the entire value chain, incumbent phone companies provide more reliable service.
Stage 2: Going to Pieces. As companies stretch to meet their most demanding customers’ needs, product performance overshoots mainstream consumers’ needs. Disruptive companies enter this less demanding market, displacing incumbents by quickly delivering flexible, customized, and cheaper products. Example:
In the 1990s, computer-industry overshooting shifted competitiveness to speed, convenience, and customization. Dell Computer’s well-timed business model—featuring outsourced subsystems, custom assembly, quick delivery, and competitive prices—garnered astounding success.
Fuzzy Links
As an industry continues to mature, the most profitable point along the value chain shifts from end-use products to components and subsystems—which still have technologically interdependent internal architectures.
Rather than redesign everything, successful companies at this stage mix and match the best components from top suppliers to meet customers’ needs—creating interdependent links between components and subsystems.
Who Wins?
When products’ architectures are interdependent and proprietary, competitors can’t easily copy them. Therefore, companies that control the interdependent links in their industry’s value chain dominate.
How to control those links? As your industry matures and fragments, don’t spin off or outsource asset-intensive businesses to companies that will create subsystems with progressively more interdependent architectures. Instead, flexibly couple and decouple operations. Learning from earlier mistakes, IBM now chops up its integrated value chain—selling its technology, components, and subsystems in the open market—and has created a high-end systems-integration business. Skating to value-chain points requiring complex, nonstandard integration, IBM now earns impressive profits.
When IBM decided to outsource its operating system and processor chips in the early 1980s, it was, or appeared to be, at the top of its game. It owned 70% of the entire mainframe market, controlled 95% of its profits, and had long dominated the industry. Yet disaster famously ensued, as Intel and Microsoft subsequently captured the lion’s share of the computer industry’s profits, and Big Blue entered a decade of decline.
It’s easy to look back and ask, “What were they thinking?” but, in truth, IBM’s decision fit well with prevailing orthodoxies, particularly with the idea that companies should outsource all but their core competencies—that is, sell off or outsource any function that another company could do better or cheaper than it could. Indeed, at the time, many observers hailed IBM’s move as a masterstroke of strategy, forward-looking and astute.
Of course it turned out not to be, but what lessons should we draw from IBM’s spectacular mistake? They’re far from clear. It’s easy to say, “Don’t outsource the thing that’s going to make lots of money next,” but existing models of industry competitiveness offer very little help in predicting where, in an industry’s value chain, future profitability will be most attractive. Executives and investors all wish they could be like Wayne Gretzky, with his uncanny ability to sense where the puck is about to go. But many companies discover that once they get to the place where the money is, there’s very little of it left to go around.
Over the past six years, we’ve been studying the evolution of industry value chains, and we’ve seen a recurring pattern that goes a long way toward explaining why companies so often make strategic errors in choosing where to focus their efforts and resources. Understanding the pattern helps answer some of the enduring questions that IBM’s leaders, and thousands of others before and since, grappled with: Where will attractive profits be earned in the value chain of the future? Under what circumstances will integrated corporations wield powerful competitive advantages? What changes in circumstances will shift competitive advantage to specialized, nonintegrated companies? What causes an industry to fragment? How can a dominant, integrated player determine what to outsource and what to hold on to as its industry begins to break into pieces? How can new entrants figure out where to target their efforts to maximize profitability?
The pattern we observed arises out of a key tenet of the concept of “disruptive technologies”—that the pace of technological progress generated by established players inevitably outstrips customers’ ability to absorb it, creating opportunity for upstarts to displace incumbents. This model has long been used to predict how an industry will change as customers’ needs are exceeded. (See the sidebar “The Disruptive Technologies Model.”) Building on that ground, this new theory provides a useful gauge for measuring not only where competition will arise under those circumstances but also where, in an industry’s shifting value chain, the money will be made in the future.
[Sidebar: The Disruptive Technologies Model]
The implications of our theory will surprise many readers because, if we’re right, the money will not be made where most companies are headed, as they busily outsource exactly the things they should be holding on to and hold on to precisely the things they should unload. But we’ll get to that later…
A Tight Fit
Companies compete differently at different stages of a product’s evolution. In the early days, when a product’s functionality does not yet meet the needs of key customers, companies compete on the basis of product performance. Later, as the underlying technology improves and mainstream customers’ needs are met, companies are forced to compete on the basis of convenience, customization, price, and flexibility. These different bases of competition call for very different organizational structures at both the company and industry levels.
When products aren’t yet good enough for mainstream customers, competitive pressures force engineers to focus on wringing the best possible performance out of each succeeding product generation by developing and combining proprietary components in ever more efficient ways. They can’t assemble off-the-shelf components using standard interfaces because that would force them to back away from the frontier of what’s technologically possible. When the product is not good enough, backing off from the best you can do spells competitive trouble. To make the highest-performing products possible, then, companies typically need to adopt interdependent, proprietary product architectures.
During the early days of the computer industry, for example, when mainframes were not yet powerful or fast enough to satisfy mainstream customers’ needs, an independent contract manufacturer assembling machines from suppliers’ components could not have survived because the way the machines were designed depended on the way they were manufactured and vice versa. Nor could an independent supplier of operating systems, core memory, or logic circuitry have survived because these key subsystems had to be designed interdependently, too.
When the product isn’t good enough, in other words, being an integrated company is critical to success. As the most integrated company during the early era of the computer industry, IBM dominated its world. Ford and General Motors, as the most integrated automakers, dominated their industry during the era when cars were not good enough. For the same reasons, RCA, Xerox, AT&T, Alcoa, Standard Oil, and U.S. Steel dominated their industries at similar stages. Their products were based on the sorts of proprietary, interdependent value chains that are necessary when pushing the frontier of what is possible.
When a nonintegrated company tries to compete under these circumstances, it usually fails. Stitching together a system with other “partner” companies is extremely difficult when the subsystems and expertise those companies provide are interdependent. We could offer numerous historical examples, but there are plenty of illustrations from industries that are still emerging. In the late 1990s, for example, many nonintegrated companies attempted to offer high-speed DSL access to the Internet over phone lines operated by telephone companies. Most of these attempts failed. Many believe that low DSL prices, rooted in regulatory peculiarities of the Telecommunications Act of 1996, drove the competitive local exchange carriers toward bankruptcy. That was only the proximate cause of their demise, however. The fundamental issue is that at this point in the industry’s evolution, DSL technology isn’t yet good enough, and there are, as a result, too many unpredictable interdependencies between what focused DSL providers need to do and what the telephone companies must do in response. The incumbent phone companies’ capacity to span the whole value chain has been a powerful advantage. They understand their own network architectures and can consequently offer service more quickly, with fewer concerns about the unintended consequences of reconfiguring their own central-office facilities. Regulatory mandates cannot decouple an industry at an interdependent interface. As long as DSL service is not good enough to satisfy most users, the integrated telephone companies will be able to provide better, more reliable service than nonintegrated competitors.
Going to Pieces
Product performance almost always improves beyond the needs of the general consumer, as companies stretch to meet the needs of the most demanding (and most profitable) customers. When technological progress overshoots what mainstream customers can make use of, companies that want to win the business of the overserved customers in less-demanding tiers of the market are forced to change the way they compete. They must bring more flexible products to market faster and customize their products to meet the needs of customers in ever smaller market niches.
To compete on these new dimensions, companies must design modular products, in which the interfaces between components and subsystems are clearly specified. Ultimately, these interfaces coalesce into industry standards. Modular architectures help companies introduce new products faster because subsystems can be improved without having to redesign everything. Companies can mix and match the best components from the best suppliers to respond to the specific needs of individual customers. Although standard interfaces invariably force compromises in system performance, competitors aiming at overserved customers can comfortably trade off some performance to achieve the benefits of speed and flexibility.
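To make “clearly specified interfaces” concrete, here is a minimal, hypothetical sketch in Python; the component names and figures are invented for illustration, not drawn from any actual industry standard. Once the interface is published, an assembler can swap suppliers without redesigning the rest of the system:

```python
# Hypothetical illustration of a modular interface. The names and numbers
# are invented; the point is that any supplier meeting the published
# specification can be swapped in without redesigning the system.
from abc import ABC, abstractmethod

class DiskDrive(ABC):
    """The 'industry standard' interface: capacity in GB, read speed in MB/s."""
    @abstractmethod
    def capacity_gb(self) -> int: ...
    @abstractmethod
    def read_mb_per_s(self) -> int: ...

class SupplierADrive(DiskDrive):
    def capacity_gb(self) -> int: return 20
    def read_mb_per_s(self) -> int: return 35

class SupplierBDrive(DiskDrive):
    def capacity_gb(self) -> int: return 30
    def read_mb_per_s(self) -> int: return 30

def assemble_pc(drive: DiskDrive) -> str:
    # The assembler depends only on the interface, never on a supplier's internals.
    return f"PC with {drive.capacity_gb()} GB drive at {drive.read_mb_per_s()} MB/s"

print(assemble_pc(SupplierADrive()))  # mix and match...
print(assemble_pc(SupplierBDrive()))  # ...without redesigning anything
```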
Once a modular architecture and the requisite industry standards have been defined, integration is no longer crucial to a company’s success. In fact, it becomes a competitive disadvantage in terms of speed, flexibility, and price, and the industry tends to dis-integrate as a consequence. The exhibit “The Dis-Integration of the Computer Industry” illustrates how this happened in that field. During its early decades, the dominant companies were integrated across most value-chain links because competitive conditions mandated integration. As the personal computer disrupted the industry, however, it was as if the industry got pushed through a bologna slicer. The dominant, integrated companies were displaced by specialists that competed in horizontal strata within the value chain.
[Exhibit: The Dis-Integration of the Computer Industry]
This shift explains why Dell Computer was so successful in the 1990s. Dell did not succeed because its products were better than those of competitors IBM, Compaq, and the like. Rather, overshooting triggered a shift in the basis of competition to speed, convenience, and customization, and Dell’s business model was a perfect match for that environment. Customers were delighted to buy computers with outsourced subsystems, custom-assembled to their own specifications and delivered incredibly quickly at competitive prices. This also explains how Cisco, with its disruptive router and its nonintegrated business model, bested more integrated competitors like Lucent in the market for telecommunications equipment.
Fuzzy Links
The careful reader will have noticed that the interfaces between stages in the value chain are central to our argument—both to the forces that support integration in the early years of an industry and to those that ultimately pull an industry apart into component pieces. They’ll become even more important when we move on to profitability flows in a moment. So let’s look more closely at what we mean by “the interfaces between components and subsystems.”
Say a company is considering whether it’s feasible to procure a subsystem from a supplier or partner rather than make it in-house. Three conditions must be met. First, managers need to know what to specify—which attributes of the item they’re procuring are crucial and which are not. Second, they must be able to measure those attributes so they can verify that they have received what they need. Third, there can’t be any unpredictable interdependencies: They need to understand how the subsystem will perform with the other pieces of the system so that it can be used with predictable effect. These conditions—specifiability, verifiability, and predictability—are prerequisites to modular designs that enable companies to work efficiently with suppliers and partners. They constitute what economists would term “sufficient information” for an efficient market to emerge for a particular component or subsystem.
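The three-condition test is mechanical enough to write down. Here is a minimal sketch in Python; the field names are our own invention, but the rule, that all three conditions must hold before a market interface can work, is the one stated above:

```python
# A minimal sketch of the specifiability/verifiability/predictability test.
# The dataclass fields are illustrative assumptions, not terms from the article.
from dataclasses import dataclass

@dataclass
class InterfaceAssessment:
    specifiable: bool  # do we know which attributes of the subsystem matter?
    verifiable: bool   # can we measure those attributes on delivery?
    predictable: bool  # no unpredictable interdependencies with the rest of the system?

def market_can_govern(interface: InterfaceAssessment) -> bool:
    """An efficient supplier market can emerge only if all three conditions hold."""
    return interface.specifiable and interface.verifiable and interface.predictable

# Example: a mature component interface vs. an immature one (like DSL circa 1999).
mature = InterfaceAssessment(specifiable=True, verifiable=True, predictable=True)
immature = InterfaceAssessment(specifiable=True, verifiable=False, predictable=False)

print(market_can_govern(mature))    # True  -> outsourcing is feasible
print(market_can_govern(immature))  # False -> managerial coordination beats the market
```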
Typically, when product performance has become more than good enough, the technologies being used are mature enough for these conditions to be met—facilitating the decoupling of the value chain. It is when performance is not good enough that new technologies are used in new ways—and in those circumstances the conditions of specifiability, verifiability, and predictability often are not met. When sufficient information does not exist at an interface, managerial coordination will always trump market mechanisms, reinforcing the strength of integrated companies.
The evolving structure of the lending industry offers a good example of these forces at work. Integrated banks such as Chase and Deutsche Bank have powerful competitive advantages in the most complex tiers of the lending market. Integration is key to their ability to knit together huge, complex financing packages for sophisticated and demanding global customers. Decisions about whether and how much to lend cannot be made according to fixed formulas and measures; they can only be made through the intuition of experienced lending officers. The high-end bankers who create innovative, complex financial instruments for these customers play a similar role to engineers who push the technological envelope when product functionality is not good enough. In both cases, meeting the needs of the most demanding customers requires that all the constituent parts be under one roof, able to communicate through organizational rather than market mechanisms.
The simpler tiers of the lending market, on the other hand, are being disrupted by innovations in the way creditworthiness is established—specifically by credit-scoring technology and advances in asset securitization. In these tiers, lenders know and can measure precisely those attributes that determine the likelihood that a borrower will repay a loan. Verifiable information about borrowers—how long they have lived where they live, how long they have worked where they work, what their income is, and whether they’ve paid bills on time—is fed into powerful algorithms, which are used to automate lending decisions. Credit scoring took root in the 1960s in the lowest tier of the market, as department stores began to issue their own credit cards. Then, unfortunately for the big banks, the specialist horde of nonbank institutions moved inexorably upmarket in pursuit of profits—first to general consumer credit-card loans, then to automobile and mortgage loans, and now to small-business loans. True to form, the lending industry in these simpler tiers of the market has largely dis-integrated, as these specialist companies have emerged, each focusing on just a slice of added value.
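For illustration only, here is a toy scorecard in the spirit of the credit-scoring technology described above. The inputs are the verifiable attributes named in the text, but the base score, weights, caps, and cutoff are invented assumptions; real scorecards are statistically fitted to repayment data:

```python
# A toy credit scorecard. Base score, weights, caps, and cutoff are all
# invented for illustration; they are not from the article or any real lender.

def credit_score(years_at_address: float, years_at_job: float,
                 annual_income: float, late_payments: int) -> float:
    """Weighted sum of verifiable borrower attributes."""
    score = 500.0
    score += min(years_at_address, 10) * 8    # residential stability, capped
    score += min(years_at_job, 10) * 10       # employment stability, capped
    score += min(annual_income / 1_000, 200)  # income contribution, capped
    score -= late_payments * 40               # payment-history penalty
    return score

def approve(score: float, cutoff: float = 620.0) -> bool:
    """Automated lending decision: no loan officer's intuition required."""
    return score >= cutoff

s = credit_score(years_at_address=6, years_at_job=4,
                 annual_income=55_000, late_payments=0)
print(s, approve(s))  # 643.0 True
```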
Where the Money Goes
Clearly, companies competing in an integrated market face very different challenges from those competing in a fragmented market—the ball game changes fundamentally once components become modular and customers’ thoughts turn to speed or convenience rather than functionality. Sources of profitability change as well. Our model can help managers, strategists, and investors assess how the power to grab profits is likely to shift in the future. The bedrock principle is this: Those who control the interdependent links in a value chain capture the most profit.
In periods when product functionality is not yet good enough, integrated companies that design and make end-use products typically make the most money, for two reasons. First, the interdependent, proprietary architecture of their products makes differentiation straightforward. Second, the high ratio of fixed to variable costs, which is inherent to the design and manufacture of architecturally interdependent products, creates steep economies of scale. Larger competitors can amortize high fixed costs over greater volume, giving them strong cost advantages over smaller competitors. Making highly differentiated products with strong cost advantages is a license to print money, and lots of it.
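The scale arithmetic behind that claim is worth making explicit. In the sketch below the cost figures are invented; what matters is the shape of the curve, with high fixed costs amortizing over volume:

```python
# Illustrative economies of scale. The figures are invented; the point is
# that with high fixed costs, unit cost falls steeply as volume grows.

def unit_cost(fixed_cost: float, variable_cost: float, volume: int) -> float:
    """Average cost per unit: fixed costs spread over volume, plus variable cost."""
    return fixed_cost / volume + variable_cost

# An architecturally interdependent business: $500M fixed, $200 variable per unit.
for volume in (10_000, 100_000, 1_000_000):
    print(volume, unit_cost(fixed_cost=500_000_000, variable_cost=200, volume=volume))
# 10,000 units    -> $50,200 per unit
# 100,000 units   -> $5,200 per unit
# 1,000,000 units -> $700 per unit: the largest producer wins on cost.
```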
[Sidebar: Management Education—Ripe for Dis-Integration]
Hence IBM, as the most integrated competitor in the mainframe computer industry, made 95% of the industry’s profits from just a 70% market share. And from the 1950s through the 1970s, General Motors garnered 80% of the profits from about 55% of the U.S. auto market. Most of IBM’s and GM’s suppliers, by contrast, survived on subsistence profits year after year.
But when the large integrated players overshoot what their mainstream customers can use, the tables begin to turn. Disruptive competitors begin to move upmarket, and the power to make money shifts away from companies that design and assemble the end-use product toward the back end of the value chain to those companies that supply subsystems with internal architectures that are still technologically interdependent.
A good way to visualize this is to imagine an engineer at Compaq whose boss has just told her to design a desktop computer better than Dell’s, IBM’s, or Hewlett-Packard’s. How would she do it? Not easily: when a product is modular, competitors can quickly replicate anything its designer and assembler can do. And because most of the costs in an outsourcing-intensive business model are variable rather than fixed, there are minimal economies of scale, so large and small competitors have similar costs. Making an undifferentiated product at undifferentiated costs is a recipe for earning undifferentiated profits.
So, what’s our Compaq engineer to do? She’ll put pressure on her suppliers to invent faster microprocessors and higher-capacity, lower-cost disk drives.
Overshooting at the system level often throws the subsystem suppliers back to a stage where their product is not good enough for what the system assembler needs. Competitive forces consequently compel the subsystem suppliers to create architectures that are increasingly interdependent and proprietary as they try to push the bleeding edge of performance. They have to do this to win the business of their immediate customers, who are the designers and manufacturers of modular products. Hence, as a natural and inescapable result of the shift in industry structure, the place where companies are used to making a lot of money—the end-user stage—becomes unlikely to be the place where money will be made in the future. And, conversely, the places where attractive profits were rarely made in the past—components and subsystems—often become highly profitable.
The exhibit “Where the Money Went in the PC Industry” illustrates how this worked in the desktop computer market in the 1990s. Initially, money flowed from the customer to the companies that designed and assembled computers; but as the decade progressed, less and less of it stopped there as profit. Quite a bit of this money flowed over to operating system maker Microsoft and lodged there. Another chunk flowed to processor manufacturer Intel and stopped there. Money flowed to the DRAM chip makers such as Samsung and Micron Technology as well, but not much of it stopped there. It flowed through them and accumulated instead at companies like Applied Materials, which supplied the chip-manufacturing equipment that the DRAM makers used. Similarly, money flowed right through the assemblers of disk drives such as Quantum and lodged at the stage where heads and disks were made.
[Exhibit: Where the Money Went in the PC Industry]
What’s different about the places where the money collected and those where it didn’t? For most of this period, profits lodged with the products that were not yet good enough for what their immediate customers needed. The architectures of those products therefore tended to be interdependent and proprietary. Companies in the blue boxes could only hang onto subsistence profits because the functionality of their products tended to be more than good enough, and so their architectures had become modular.
Consider the DRAM industry. Because the architecture of their chips was modular, DRAM makers could not be satisfied with even the very best manufacturing equipment available. To succeed, DRAM producers needed to make their products at ever higher yields and ever lower costs. This rendered the functionality of the equipment that Applied Materials and other such companies made not good enough. As a consequence, the architecture of this equipment became interdependent and proprietary, as the equipment makers strove to inch closer to the performance their customers needed.
Where Companies Go Wrong
Once an industry starts to fragment, a very predictable thing happens to companies that design and assemble modular products. They face investor pressure to improve their return on assets but find that, because they can’t differentiate their products or make them at a lower cost than competitors, they can’t improve the numerator of their ROA ratio. So they shrink the denominator: they sell off asset-intensive units that design and manufacture components to companies that see in those same operations the opportunity to create subsystems whose architectures are progressively more interdependent—thus improving the numerators of their own ROA ratios. Lucent’s recent spin-offs of its component and manufacturing operations are an example. This seems perfectly logical and necessary, given the increasingly modular character of many of Lucent’s systems. But with perfect predictability, this pressure from Wall Street to boost ROA forces companies to skate away from the place where the money will be made in the future.
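To see the ROA mechanics concretely, consider a sketch with invented numbers: divesting the asset-heavy unit barely dents income but shrinks the asset base, so the reported ratio improves even as the company exits the stage where future profits will sit.

```python
# Invented figures illustrating the ROA arithmetic described above.

def roa(net_income: float, total_assets: float) -> float:
    """Return on assets: net income divided by total assets."""
    return net_income / total_assets

# Before divestiture: a modular assembler carrying heavy component assets.
print(roa(net_income=400, total_assets=10_000))  # 0.04

# After selling the asset-intensive component unit: income barely changes,
# but the asset base shrinks, so reported ROA rises to about 0.063.
print(roa(net_income=380, total_assets=6_000))
```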
This scenario could soon play out in one of IBM’s businesses. Through the 1990s, the capacity of the 2.5-inch disk drives used in notebook computers tended to be inadequate. True to form, their architectures were interdependent, and the design and assembly stage was very profitable. As the leading manufacturer, IBM enjoyed 40% gross margins. Now, drive capacity is becoming more than good enough for notebook computer makers, presaging the decline of what has been a beautiful business.
If IBM plays its cards right, however, it is actually in a very attractive position. As the most integrated drive maker, it can skate to where the money will be by using the advent of modularity to detach its head and disk operations from its disk drive design-and-assembly business. If IBM begins to sell its components aggressively to competing disk drive makers, it can continue to enjoy the most attractive profit levels in the industry. There was a time when IBM could fight this particular war and win. Now, a better strategy is to sell bullets to the combatants.
IBM has already made similar moves in its computer business through its decisions to chop up its integrated value chain and aggressively sell its technology, components, and subsystems in the open market. Simultaneously, it has created a consulting and systems-integration business at the high end and de-emphasized the design and assembly of computers. By skating to those points in the value chain where complex, nonstandard integration needs to occur, IBM has achieved a remarkable—and remarkably profitable—transformation of a huge company. To the extent that an integrated company like IBM can flexibly couple and decouple its operations, rather than irrevocably sell off operations, it has greater potential than a nonintegrated company to thrive from one cycle to the next.
Where Will the Money Be in the Auto Industry?
We believe this model can help managers, strategists, and investors in a wide variety of industries see into the future with greater clarity than the traditional tools of historical data analysis have allowed. When we consider, for example, where the money in the automobile industry will go in the future, the car companies seem to be falling into exactly the same trap that IBM did some 15 years ago.
While automobiles often used to rust or fall apart mechanically well before their owners were ready to part with them, auto quality has now overshot what most customers want or need. In fact, the most reliable cars usually go out of style long before they wear out. As a result, the basis of competition is changing. Whereas it used to take six years to design a new car model, today it takes less than two. Car companies routinely compete by customizing features to the whims of smaller and smaller market niches. In the 1960s, it was not unusual for a model to sell a million units a year. Today, the market is far more fragmented: If you sell 200,000 units of a particular model, you’re doing fine. Some makers now promise that you can walk into a dealership, custom-order a car exactly to your desired configuration, and have it delivered in five days—roughly the response time that Dell Computer offers.
To compete in this way, automakers are adopting modular architectures for their mainstream models. Rather than knitting together individual components from diverse suppliers, they’re procuring subsystems from fewer tier-one suppliers. The architecture within each subsystem—braking, steering, chassis, and the like—is becoming progressively more interdependent as these suppliers work to meet the auto assemblers’ performance and cost demands. Inevitably, the subsystems’ external interfaces are becoming more modular because the economics of using the same subsystem in several car models more than compensates for any compromises in performance that might result.
As the basis of competition has shifted, the vertically integrated automakers have had to break up their value chains so they can more quickly and flexibly incorporate the best components from the best suppliers. Accordingly, GM spun out its component operations as a separate company, Delphi Automotive Systems, and Ford spun out its component operations as Visteon. Thus, the same thing is happening to the auto industry that happened to computers: Overshooting has precipitated a change in the basis of competition, which has precipitated a change in architecture, which has forced the dominant, integrated firms to dis-integrate.
To become fast and flexible, IBM’s PC business outsourced its microprocessor to Intel and its operating system to Microsoft. But in the process, IBM hung onto where the money had been—the design and assembly of the computer system—and put into business the two companies that were positioned where the money would be. GM and Ford, with the encouragement of their investment bankers, have just done exactly the same thing. They have spun out the pieces of the value chain where the money will be in order to stay where the money has been.
Ford and GM had no choice but to decouple their component operations from their design-and-assembly businesses. Indeed, they gave their shareholders the option of owning one or both. But rather than an irreversible divestiture, they might have taken a page from IBM’s recent forays into opportunistic decoupling, ignored the siren song of investment bankers, and found a way not to shed those asset- and scale-intensive businesses where the numerator of the ROA ratio will likely be more attractive in the future. This will be especially true if shifts in customer demand mandate some sort of reintegration in the future.
Managers of the slimmed-down automakers can still do well, but they’ll need to dramatically change the way they do business in the design-and-assembly stage. They need to do in their industry what Dell did in the computer industry—become consummately fast, flexible, and convenient. Overshooting changes the game. If GM and Ford can play this new game better than competitors, they can still prosper, much as Dell did in the 1990s against competitors who hadn’t mastered the new rules as effectively.
The implications of these findings are clear. The power to capture attractive profits will shift in the value chain to those activities where the immediate customer is not yet satisfied with the functionality of available products. It is in these stages that complex, interdependent integration occurs—activities that create steeper economies of scale and greater opportunities for differentiation. The power will shift away from activities where the immediate customer is more than satisfied because it is there that standard, modular integration occurs. In most markets, this power shift occurs tier by tier in a way that is quite predictable.
Executives whose companies are currently making lots of money ought not to wonder whether the power to earn attractive profits will shift, but when. If they watch for the signals, quite possibly they can prosper in all cycles, rather than in only one.
A version of this article appeared in the November 2001 issue of Harvard Business Review.