The Emergence of the Large Cap Paradigm [Part 2]
Posted November 17, 2020
As I stated yesterday, my analysts Richard Vigilante and Steve Waite have written this four-part series for readers of Gilder’s Daily Prophecy.
If you missed the intro I gave on my stellar team, go here to catch up.
You can go here to read part 1 if you missed it.
Yesterday we spoke of an emerging “large cap paradigm.”
This paradigm entails fundamental, physical realities that now confine innovation in core technologies — optical communications, wireless communications, and semiconductor fabrication — to a shrinking number of already dominant firms, most 20 to 30 years old.
Over the next 10 years, we believe these firms will become more dominant and more profitable to investors than ever before.
Crucial to understanding why are the insights of our old friend and colleague the late Clayton Christensen. These include both his famous disruptive innovation paradigm and his far less well understood “sustaining innovation” paradigm.
To see how, let’s think about golf carts and the Daytona 500.
Need for Speed
Once Clay had empirically demonstrated “disruptive innovation” — an inferior technology suddenly and unexpectedly displacing an entrenched superior one — he began to speculate on potential future disruptions.
One question Clay asked himself was “what is the disruptive pathway, if any, to the electric car?”
Back in those more innocent days, before the auto industry discovered the obvious pathway was billions in government subsidies, Clay asked himself: “what about golf carts?” He’d noticed an uptick in sales, and that at “adult leisure communities” in the sunbelt some residents were surrendering their cars for carts.
Golf carts are inferior to cars in numerous ways — as disrupting technologies always start out to be — but also a lot cheaper, another disruptive characteristic. Most importantly, they give “good enough” performance for a certain market segment.
Benefiting from the learning curve, might the performance of golf carts improve, and their market expand, until — suddenly — they were good enough to displace cars for large parts of the market, in, say, urban Florida, or Arizona?
Maybe. But here is what never would happen. Golf carts would never win the Daytona 500.
Disruptive technologies win by becoming “good enough.” Top of the line technologies get disrupted when they supply more performance than necessary — and customers don’t want to pay for it.
At Daytona, “good enough” is good for nothing.
Prized on the champion track is not average performance per dollar, but the absolute limit of performance.
And the higher that limit rises, the fewer can meet the call.
Today, the most important customers — in semiconductors, in wireless, in optics — are all competing at the Daytona 500. All need more performance than even their best vendors can provide.
Apple’s new A14 Bionic processor for the iPhone 12 enables astounding improvements in user experience across still photography, video, and AR.

Its machine learning models enhance photos with faster-than-ever image stacking.

The A14 enables real-time Dolby Vision encoding of 4K video at 60 frames per second — a process previously reserved for Hollywood post-production.

The A14 enables LiDAR-based Augmented Reality (AR) that maps the phone’s three-dimensional surroundings nearly instantaneously.
Both CPU and GPU are faster and more powerful in a phone that is lighter and smaller, and yet has slightly better battery life. And, and… OK, we will stop now.
The point is, to create an iPhone that may be further ahead of its rivals than any generation since the first few, Apple needed a chip that could not have been manufactured anywhere on earth 24 months ago.
The Daytona Demand
That’s right, one of the most widespread — and profitable — consumer products in the world until recently sat on a razor’s edge between billions of sales and non-existence.
This is a very special type of demand. Let’s call it Daytona demand.
It is not demand for total capacity across a network or even a data center or an enterprise system.
It is demand for peak performance at a single node, at a single moment of time. It’s demand for the one and only car that reaches the finish line first.
If the object of the Daytona 500 were to provide 500 person-miles of transport around downtown Daytona at the best price, golf carts might beat race cars. I might supply 100 golf carts, each carrying five people for a mile, and do the job more cheaply than a single race car.
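To make the arithmetic of the thought experiment concrete, here is a minimal Python sketch. The figures are the hypothetical ones from the example above, not real transit data:

```python
# Hypothetical golf-cart fleet from the thought experiment:
# 100 carts, each carrying 5 people for 1 mile.
carts = 100
riders_per_cart = 5
miles_per_cart = 1

# Total transport delivered, measured in person-miles.
person_miles = carts * riders_per_cart * miles_per_cart
print(person_miles)  # 500
```

The point of the measure is that it aggregates capacity across many slow nodes — exactly the kind of demand a race car cannot satisfy more cheaply, and exactly the kind the Daytona 500 does not reward.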
But I wouldn’t win the race. And for some applications winning the race is all that matters.
What matters about the Apple A14 processor is not total computational power delivered over some reasonable time. What matters is peak power at an instant in time.
And it turns out that of all the semiconductor manufacturing companies in the world, at most three, and perhaps only one, could deliver the A14.
Taiwan Semiconductor invented the “foundry model” of chip production. The foundry model separates chip design — which can be done by thousands of “fabless” design firms — from manufacture, which can be done by only a handful of specialized firms with the capital and expertise to translate design to silicon.
TSMC was the first foundry ever. For decades it continued to have dozens of viable competitors, including “Integrated Device Manufacturers,” such as Intel, which manufactured their own designs (as well as sometimes fabbing for others).
As the graph above shows, 20 years ago almost 30 chipmakers could manufacture at the leading commercial circuit densities, at that time using a so-called 180 nanometer (0.18 micron) process.
Roughly 10 years later, the state-of-the-art commercial process had shrunk roughly nine-fold, to circa 20 nanometers. The number of manufacturers that could play at that level had declined by about 80%, to six.
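The nine-fold figure is a linear shrink: feature size fell from 180 nm to 20 nm in each dimension. Since chips are two-dimensional, the density payoff scales roughly with the square of that shrink. A quick sketch of the idealized geometry (real density gains per node are messier than pure scaling):

```python
# Idealized geometric scaling from a 180 nm process to a 20 nm process.
old_node_nm = 180
new_node_nm = 20

linear_shrink = old_node_nm / new_node_nm  # 9.0x smaller in each dimension
density_gain = linear_shrink ** 2          # ~81x more transistors per unit area
print(linear_shrink, density_gain)         # 9.0 81.0
```

Each step down the node ladder thus multiplies both the engineering difficulty and the payoff, which is why the field of capable manufacturers thins so quickly.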
As circuits shrank further, so did the competition.
Intel has never gotten below 10 nm. GlobalFoundries, the world’s second largest pure-play foundry, got off the merry-go-round in 2018, abandoning its 7 nm program.
Next year, TSMC and Samsung will stand alone in the world at 3 nm. TSMC is already building its first 2 nm plant. No one else is following. Intel is considering getting out of top-of-the-line chip manufacturing altogether and outsourcing to TSMC.
Why? Two reasons:
- It is really, really hard and fantastically expensive to manufacture at those levels.
- Relatively few chip designs actually require such extreme circuit densities. Most computational tasks don’t need such extraordinary capacity concentrated at a single point in space and time. For many purposes the combination of more, smaller, often more specialized chips working together can “disrupt” top-of-the-line competitors.
But not for the Daytona 500. Some races you just have to win flat out. For the A14, Apple had to take the checkered flag.
Increasingly the most important chips with the biggest margins, and the longest production runs, will be manufactured at extreme densities that only one or two companies in the world can deliver.
And that’s why size matters. TSMC’s triumph is a function of size, or more specifically, of the learning curve, which predicts a 20-30% decrease in cost for every cumulative doubling in unit volume.
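The learning-curve claim can be written as a formula (often called Wright’s law): if each cumulative doubling of output removes a fixed fraction of unit cost, cost falls as a power law in cumulative volume. A minimal Python sketch, using an assumed 25% learning rate from the 20-30% range cited above:

```python
import math

def unit_cost(cumulative_units, first_unit_cost, learning_rate=0.25):
    """Wright's-law unit cost: `learning_rate` is the fraction of cost
    removed per cumulative doubling of units produced (assumed 25% here)."""
    b = -math.log2(1 - learning_rate)  # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

# After 10 doublings (1024x the cumulative volume) at a 25% learning rate,
# unit cost falls to 0.75**10, i.e. about 5.6% of the first unit's cost.
print(unit_cost(1024, 100.0))  # ≈ 5.63
```

This is why cumulative volume compounds into an advantage: the firm with the most wafers behind it sits furthest down the cost curve, and every new order pushes it further.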
The current semiconductor market can support only one or two foundries with the accumulated human and machine capital to manufacture at TSMC’s level.
But it does absolutely need those one or two.
And those one or two will not only be the biggest foundries with the biggest revenues; they will also have the best margins, happily supplied by the most important customers in the market.
Tomorrow: The large cap paradigm in optical and wireless communications.
Lead Analyst, The George Gilder Report