Hitting the Wall—Part 2

By Alberto Moel, Vice President Strategy and Partnerships, Veo Robotics

Welcome back, dear reader, to a follow-up on our previous work, where we connected the current decade-long slump in manufacturing productivity to the lack of flexibility in the way we currently build things—incrementally and with limited human-machine collaboration. In Part 1 of this discussion, we set the stage by exploring how other technologies have jumped over or crashed through walls of stagnant productivity growth.

In today’s post we will provide three additional examples that are “closer” to the action of manufacturing and far more recent. And, of course, we will tie it all up in a pretty bow and make the case that the Veo FreeMove system is indeed one of these transformational technologies that will help break through the wall of weak productivity growth in manufacturing.

Figure 1. Transistor density, CPU performance, and Moore’s Law.

Overcoming the end of Moore’s Law through chip architecture

As I’m sure you’re aware, Moore’s Law is the observation made by Intel co-founder Gordon Moore in 1965 (and subsequently revised downward a few times by the powers that be) that the number of transistors in an integrated circuit doubles at a regular interval. Moore’s Law isn’t really a “Law” (with a capital L), but more of an empirical observation and projection of a historical trend, one that has managed to counter Murphy’s Law (which is pervasive in engineering development) by being right more often than wrong.

The commonly quoted version of Moore’s Law predicts a doubling of transistor density every 18 months. The less ambitious 1975 version of Moore’s Law states that transistor density doubles every 2 years, or grows by about 41% YoY.1 In Figure 1, which tracks transistor density and computing performance over 40 years, the blue dashed line projects normalized transistor density growing at the rate set by the 1975 version of Moore’s Law, doubling every 2 years.

However, in reality, transistor density has doubled every 2.5 years or so since the 1980s, which is closer to 32% YoY. This is illustrated by the gray scatterplot showing normalized transistor density for about 350 different microprocessors (CPUs) through the ages.2
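
If you want to check that arithmetic, converting a doubling time into a year-over-year growth rate is a one-line formula; here is a minimal sketch in Python, using the doubling times quoted above:

    # Convert a doubling time (in years) into an equivalent year-over-year growth rate.
    def doubling_time_to_yoy(doubling_years: float) -> float:
        return 2 ** (1.0 / doubling_years) - 1.0

    print(f"2.0-year doubling: {doubling_time_to_yoy(2.0):.0%} YoY")  # ~41% (1975 Moore's Law)
    print(f"2.5-year doubling: {doubling_time_to_yoy(2.5):.0%} YoY")  # ~32% (observed density growth)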

Of course, we care less about transistor density itself and more about what that density means for computing performance.3 And for that, we need to look at how these CPUs performed on some computing benchmark. Let’s use the SPECint benchmark and normalize performance so that 1 corresponds to a 1978-vintage DEC VAX 11/780 minicomputer.4
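
The normalization itself is nothing fancy: divide each CPU’s raw benchmark result by the reference machine’s result. A quick sketch, with hypothetical raw scores purely for illustration (these are not actual SPECint numbers):

    # Normalize raw benchmark results so that the 1978 DEC VAX 11/780 reference machine = 1.
    vax_raw_score = 0.5        # hypothetical raw benchmark result for the VAX 11/780
    cpu_raw_score = 25_000.0   # hypothetical raw benchmark result for a modern CPU

    relative_performance = cpu_raw_score / vax_raw_score
    print(f"Performance relative to the VAX 11/780: {relative_performance:,.0f}x")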

And indeed, CPU performance (the red dotted line) exceeded the rate of transistor scaling up to the early 2000s (and pretty closely matched 1975 Moore’s Law), almost entirely through improvements in CPU architecture. Up to the mid-1980s, Complex Instruction Set Computer (CISC) architecture improvements trailed Moore’s Law, with CPU performance improving about 22% YoY. Then Reduced Instruction Set Computer (RISC) architectures were developed, driving a 55% CAGR in CPU performance between the mid-1980s and the early 2000s, by which point reality had caught up to 1975 Moore’s Law.

Incremental developments in multicore CPUs from the early 2000s to the early 2010s yielded a 23% YoY improvement in CPU performance, trailing Moore’s Law. And since then, scaling, power, and temperature constraints have limited performance improvements to much lower levels, only about 3% YoY, creating a performance wall.
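
To get a feel for how much those year-over-year rates compound, we can chain them over each architectural era; the era lengths below are rough assumptions based on the timeline above, not exact dates:

    # Compound the approximate YoY CPU performance growth rates over each architectural era.
    # Era lengths are rough assumptions, not exact dates.
    eras = [
        ("CISC (up to the mid-1980s)",       0.22, 8),
        ("RISC (mid-1980s to early 2000s)",  0.55, 17),
        ("Multicore (early 2000s to 2010s)", 0.23, 9),
        ("Post-2012 stagnation",             0.03, 8),
    ]

    for name, yoy_rate, years in eras:
        cumulative_gain = (1.0 + yoy_rate) ** years
        print(f"{name}: roughly {cumulative_gain:,.1f}x over {years} years")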

How did we break through that wall of stagnant CPU performance? With breakthrough technologies, of course. Since 2015, new paradigms such as domain-specific architectures and languages and agile hardware development have begun to emerge and crack the computational wall. A new class of CPU architectures, such as Amazon’s AWS Graviton or AMD’s Epyc series, is taking advantage of these innovations to retake the lead (shown in green in Figure 1). Although Figure 1 doesn’t seem to show a major improvement (it’s a log chart, after all), the Amazon AWS Graviton 2 processor computes the SPECint benchmark 2.8x faster than a “traditional” late-model Intel Xeon processor.5

Just as the advent of the telegraph allowed a step jump in message transmission rates and put communications on another trajectory, the advent of new approaches and CPU architectures is enabling a rapid acceleration in CPU performance, smashing the wall and again catching up to Moore’s Law.

Figure 2. Fuel efficiency standards and performance 1978-present.

A leap in fuel economy from electric vehicles

Another performance lane that has seen a serious step improvement after decades of stagnation is fuel economy.

Fuel economy standards and the level of performance they achieve for internal combustion engines (ICEs) in the US have not changed appreciably since the early 1980s. It is extremely difficult to extract additional fuel efficiency from ICEs without greatly sacrificing performance. Some automakers have engaged in fleet mix-shifting6 in order to meet the efficiency standards, while others have “met” the standards through less savory means (as evidenced by the Volkswagen emissions scandal).

Figure 2 shows the CAFE7 MPG equivalent (MPGe) standards, the laboratory test-level fuel efficiency standards, and the rate at which fuel efficiency has risen over the last few decades. As we can see, the improvements were modest up until the recent introduction of electric vehicle (EV) technology, a transformational leap.

EVs, both purely electric and hybrids, are a new paradigm for fuel efficiency that handily outperform the century-old internal combustion engine. The introduction of mass-produced EVs in the early 2010s led to an almost 8x improvement in fuel efficiency (shown in green in Figure 2), leaping over the wall and placing fuel efficiency improvements on a steeper trajectory.

Figure 3. Spoken word recognition performance 1993-2017.

Human-level speech recognition with Machine Learning

Let us look at one last example of wall-breaking: human speech recognition. As we can see from Figure 3,8 speech recognition made great strides in the early 1990s thanks to probabilistic methods such as hidden Markov models, as evidenced by the Switchboard benchmark of word error rates in spoken word recognition.

At the turn of the millennium, however, performance stalled and stayed well below human level for many years. Then came the paradigm shift and its breakthrough technologies. Deep Neural Network (DNN) and Machine Learning (ML) techniques (shown in green in Figure 3) quickly improved error rates from 28% to 20% (a 29% relative gain) over traditional probabilistic methods, and have continued to evolve and transform the speech recognition industry. At this point, spoken word recognition using DNN/ML approaches has pretty much reached human-level performance.
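
That 29% figure is a relative reduction in word error rate (WER), not an absolute one; here is the arithmetic as a short sketch, using the error rates quoted above:

    # Relative improvement in word error rate (WER) when DNN/ML methods arrived.
    wer_before = 0.28  # ~28% WER with traditional probabilistic methods
    wer_after = 0.20   # ~20% WER with early DNN/ML systems

    relative_reduction = (wer_before - wer_after) / wer_before
    print(f"Relative WER reduction: {relative_reduction:.0%}")  # ~29%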

Figure 4. Indexed US manufacturing productivity (1992=100).

What does this all mean for the wall in manufacturing?

Figure 4 recasts the US Bureau of Labor Statistics9 manufacturing productivity data we presented last time as a cumulative index, with manufacturing TFP set to 100 in 1992. The wall of stagnation, where productivity gains flattened and even dipped, appears at the turn of the 21st century.
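
Mechanically, building such an index just means chaining year-over-year changes from a base year set to 100; the sketch below uses hypothetical growth rates for illustration, not the actual BLS series:

    # Chain hypothetical YoY TFP changes into an index with the 1992 base year set to 100.
    yoy_changes = [0.021, 0.034, -0.002, 0.015]  # hypothetical YoY TFP growth for 1993-1996

    index = [100.0]  # 1992 base year
    for change in yoy_changes:
        index.append(index[-1] * (1.0 + change))

    for year, value in zip(range(1992, 1997), index):
        print(f"{year}: {value:.1f}")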

The three examples presented here each used a new (one could say “revolutionary”) approach to move performance improvements beyond stagnation and over/through the wall to a materially better trajectory. By reimagining the problem at hand and attacking it from a fresh angle, each approach uncovered a new paradigm that unlocked major performance gains.

Similarly, we need to reimagine factories and invent a new approach if we want to overcome the wall of stagnant productivity. Veo Robotics has reimagined factories as systems of sophisticated human-robot collaboration, which will improve flexibility and increase productivity. Veo’s vision and control technology enables this new approach to manufacturing, and will provide immediate benefits that will multiply as safe human-robot collaboration becomes ubiquitous in the modern factory. The shift is happening now; we hope you’ll join us for the ride.


1 This chart and the associated analyses come from John Hennessy and David Patterson’s magisterial 2018 Turing Lecture, “A New Golden Age for Computer Architecture,” Communications of the ACM, February 2019, 62(2), pages 48-60.

2 Yes, I collected all these data points and normalized them by computing the number of transistors relative to CPU die area. I do not recommend doing this, even if you are homebound.

3 The key assumption being, of course, that transistor density and computing performance are correlated. Which, in fact, they are, but not as much or as closely as most people believe. Most of the gains over the last couple of decades have come from CPU architecture improvements.

4 The DEC VAX 11/780, for those of a certain age, was an epic wall-breaking, paradigm-shifting piece of equipment that made people’s heads explode. And to date myself, I programmed VAX minicomputers back when I was a wee lad. It wouldn’t surprise me if the total computing power of a VAX 11/780 was equal to that of your average baby monitor camera, but such was life when people walked to school in the snow. Uphill. Both ways.

5 Not meaning to cast aspersions, as Xeon CPUs are substantially more versatile than specialized CPUs such as the Amazon AWS Graviton series. It’s a tradeoff between flexibility and performance. Sound familiar?

6 Austin, David, and Terry Dinan. 2005. “Clearing the Air: The Costs and Consequences of Higher CAFE Standards and Increased Gasoline Taxes.” Journal of Environmental Economics and Management, 50(3): 562–82.

7 CAFE is the Corporate Average Fuel Economy standards as set by the National Highway Traffic Safety Administration (NHTSA). The data in the chart comes from www.fueleconomy.gov.

8 Based on a 2017 Technology Quarterly report by The Economist on speech recognition.

9 Michael Brill, Brian Chansky, and Jennifer Kim, "Multifactor productivity slowdown in U.S. manufacturing," Monthly Labor Review, U.S. Bureau of Labor Statistics, July 2018. The data and some of the analyses in this section are taken from this paper.