
May 16, 2018

AI and compute

We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period). Since 2012, this metric has grown by more than 300,000x (a 2-year doubling period would yield only a 7x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.
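
As a rough sanity check on these headline figures, the arithmetic below relates a doubling time to a growth factor. This is a sketch only; the exact 300,000x number comes from the measured data points in the graph rather than from this idealized exponential.

```python
import math

# Sketch: relate the headline growth factor to doubling times.
# The exact 300,000x figure comes from the measured data points,
# not from this idealized exponential.
growth = 300_000
doublings = math.log2(growth)               # ~18.2 doublings

months_at_fast_rate = doublings * 3.4       # ~62 months, i.e. ~5 years
years_at_moore_rate = doublings * 2         # ~36 years at a 2-year doubling time

# Over that same ~5-year span, a 2-year doubling time yields only a
# single-digit factor (the post's ~7x, depending on the exact span).
span_years = months_at_fast_rate / 12
moore_factor = 2 ** (span_years / 2)

print(f"{doublings:.1f} doublings -> {span_years:.1f} years at a 3.4-month doubling time")
print(f"the same growth would take ~{years_at_moore_rate:.0f} years at a 2-year doubling time")
print(f"over {span_years:.1f} years, a 2-year doubling time gives only ~{moore_factor:.0f}x")
```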

AlexNet to AlphaGo Zero: 300,000x increase in compute

Overview

Three factors drive the advance of AI: algorithmic innovation, data (which can be either supervised data or interactive environments), and the amount of compute available for training. Algorithmic innovation and data are difficult to track, but compute is unusually quantifiable, providing an opportunity to measure one input to AI progress. Of course, the use of massive compute sometimes just exposes the shortcomings of our current algorithms. But at least within many current domains, more compute seems to lead predictably to better performance, and is often complementary to algorithmic advances.

For this analysis, we believe the relevant number is not the speed of a single GPU, nor the capacity of the biggest datacenter, but the amount of compute that is used to train a single model—this is the number most likely to correlate to how powerful our best models are. Compute per model differs greatly from total bulk compute because limits on parallelism (both hardware and algorithmic) have constrained how big a model can be or how much it can be usefully trained. Of course, important breakthroughs are still made with modest amounts of compute—this analysis just covers compute capability.
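
To make the metric concrete, here is a minimal sketch of a per-model estimate, assuming the common convention that the backward pass costs roughly twice the forward pass; the model and dataset numbers are hypothetical, not figures from this analysis.

```python
# A minimal sketch of the kind of per-model estimate this metric involves.
# The factor-of-3 forward+backward assumption and the example numbers below
# are illustrative, not figures from the post.

PFS_DAY = 1e15 * 24 * 3600   # ~8.64e19 operations (one petaflop/s for a day)

def training_compute_pfs_days(flops_per_forward_pass: float,
                              num_examples: float,
                              num_epochs: float,
                              backward_multiplier: float = 3.0) -> float:
    """Total training operations, expressed in petaflop/s-days."""
    total_ops = flops_per_forward_pass * backward_multiplier * num_examples * num_epochs
    return total_ops / PFS_DAY

# Hypothetical example: a 1 GFLOP-per-image model trained for 90 epochs
# on 1.2M images.
print(training_compute_pfs_days(1e9, 1.2e6, 90))   # ~0.004 pfs-days
```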

The trend represents an increase by roughly a factor of 10 each year. It’s been partly driven by custom hardware that allows more operations to be performed per second for a given price (GPUs and TPUs), but it’s been primarily propelled by researchers repeatedly finding ways to use more chips in parallel and being willing to pay the economic cost of doing so.

Eras

Looking at the graph, we can roughly see four distinct eras:

  • Before 2012: It was uncommon to use GPUs for ML, making any of the results in the graph difficult to achieve.

  • 2012 to 2014: Infrastructure to train on many GPUs was uncommon, so most results used 1-8 GPUs rated at 1-2 TFLOPS for a total of 0.001-0.1 pfs-days[A] (see the conversion sketch after this list).

  • 2014 to 2016: Large-scale results used 10-100 GPUs rated at 5-10 TFLOPS, resulting in 0.1-10 pfs-days. Diminishing returns on data parallelism meant that larger training runs had limited value.

  • 2016 to 2017: Approaches that allow greater algorithmic parallelism, such as huge batch sizes, architecture search, and expert iteration, along with specialized hardware such as TPUs and faster interconnects, have greatly increased these limits, at least for some applications.
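
The pfs-day ranges above follow from GPU counts, rated throughput, and run length; below is the conversion sketch referenced in the list, with made-up run parameters and the (optimistic) assumption of full utilization of peak throughput.

```python
# Rough conversion from "N GPUs at R TFLOPS for D days" into pfs-days,
# assuming full utilization of the rated throughput (real runs achieve
# only a fraction of peak, so treat these as upper bounds).

PFS_DAY = 1e15 * 24 * 3600   # operations in one petaflop/s-day

def pfs_days(num_gpus: int, tflops_per_gpu: float, days: float,
             utilization: float = 1.0) -> float:
    ops = num_gpus * tflops_per_gpu * 1e12 * utilization * days * 24 * 3600
    return ops / PFS_DAY

# Hypothetical 2012-2014-style run: 4 GPUs at 1.5 TFLOPS for 6 days.
print(pfs_days(4, 1.5, 6))      # ~0.036 pfs-days

# Hypothetical 2014-2016-style run: 50 GPUs at 7 TFLOPS for 10 days.
print(pfs_days(50, 7, 10))      # ~3.5 pfs-days
```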

AlphaGo Zero/AlphaZero is the most visible public example of massive algorithmic parallelism, but many other applications at this scale are now algorithmically possible, and may already be happening in a production context.

Looking forward

We see multiple reasons to believe that the trend in the graph could continue. Many hardware startups are developing AI-specific chips, some of which claim they will achieve a substantial increase in FLOPS/Watt (which is correlated to FLOPS/$) over the next 1–2 years. There may also be gains from simply reconfiguring hardware to do the same number of operations for less economic cost. On the parallelism side, many of the recent algorithmic innovations described above could in principle be combined multiplicatively—for example, architecture search and massively parallel SGD.

On the other hand, cost will eventually limit the parallelism side of the trend and physics will limit the chip efficiency side. We believe the largest training runs today employ hardware that costs in the single-digit millions of dollars to purchase (although the amortized cost is much lower). But the majority of neural net compute today is still spent on inference (deployment), not training, meaning companies can repurpose or afford to purchase much larger fleets of chips for training. Therefore, if sufficient economic incentive exists, we could see even more massively parallel training runs, and thus the continuation of this trend for several more years. The world’s total hardware budget is 1 trillion dollars a year, so absolute limits remain far away. Overall, given the data above, the precedent for exponential trends in computing, work on ML-specific hardware, and the economic incentives at play, we think it’d be a mistake to be confident this trend won’t continue in the short term.

Past trends are not sufficient to predict how long the trend will continue into the future, or what will happen while it continues. But even the reasonable potential for rapid increases in capabilities means it is critical to start addressing both safety and malicious use of AI today. Foresight is essential to responsible policymaking and responsible technological development, and we must get out ahead of these trends rather than belatedly reacting to them.

If you’d like to help make sure that AI progress benefits all of humanity, join us at OpenAI. Our research and engineering roles range from machine learning researchers to policy researchers to infrastructure engineers.

Addendum: Compute used in older headline results

We’ve updated our analysis with data that span 1959 to 2012. Looking at the data as a whole, we clearly see two distinct eras of training AI systems in terms of compute usage: (a) a first era, from 1959 to 2012, which is defined by results that roughly track Moore’s law, and (b) the modern era, from 2012 to now, of results using computational power that substantially outpaces macro trends. The history of investment in AI broadly is usually told as a story of booms and busts, but we don’t see that reflected in the historical trend of compute used by learning systems. It seems that AI winters and periods of excitement had a small effect on compute used to train models[B] over the last half-century.

Two distinct eras of compute usage in training AI systems

Starting from the perceptron in 1959, we see a ~2-year doubling time for the compute used in these historical results—with a 3.4-month doubling time starting in ~2012. It’s difficult to draw a strong conclusion from this data alone, but we believe that this trend is probably due to a combination of the limits on the amount of compute that it was possible to use for those results and the willingness to spend on scaling up experiments.[C]
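
For readers who want to reproduce this kind of fit, a doubling time can be read off a log-linear regression of compute against time; the sketch below uses made-up data points, not the data behind our figures.

```python
import numpy as np

# A sketch of how a doubling time can be read off a log-linear fit.
# The (year, pfs-days) points below are made up for illustration;
# they are not the data behind the post's figures.

years    = np.array([2012.8, 2014.5, 2015.9, 2016.9, 2017.8])
pfs_days = np.array([0.006,  0.5,    9.0,    300.0,  1800.0])

slope, intercept = np.polyfit(years, np.log2(pfs_days), 1)  # slope = doublings per year
doubling_time_months = 12 / slope

print(f"~{doubling_time_months:.1f}-month doubling time on this toy data")
```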

We followed the same methodology outlined in the original post for this updated analysis. When possible, we programmatically counted the number of FLOPs in the results by implementing the models directly. Since computer architectures varied historically and many papers omitted details of their computational setup, these older data points are more uncertain (our original analysis of post-2012 data aimed to be within a factor of 2–3, but for these pre-2012 data points we aim for an order of magnitude estimate). We’ve also created graphs that provide additional views on the data: one graph lays out compute usage in fundamentals, speech, language, vision, and games over time and another visualizes the error-bar estimates around each data point.
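
As an illustration of that kind of bookkeeping, here is a toy per-layer operation count in the spirit of the methodology (adds and multiplies counted separately); it is not our actual counting code, and the example layer sizes are hypothetical.

```python
# A toy sketch of per-layer operation counting in the spirit described
# above (adds and multiplies counted as separate operations); it is not
# the authors' actual counting code.

def dense_layer_ops(in_features: int, out_features: int) -> int:
    """Forward-pass ops for a fully connected layer on one example."""
    multiplies = in_features * out_features
    adds = in_features * out_features          # accumulation (ignoring bias)
    return multiplies + adds

def conv_layer_ops(out_h: int, out_w: int, out_ch: int,
                   in_ch: int, k_h: int, k_w: int) -> int:
    """Forward-pass ops for a 2D convolution on one example."""
    ops_per_output = 2 * in_ch * k_h * k_w     # multiply + add per kernel tap
    return out_h * out_w * out_ch * ops_per_output

# Hypothetical small network: one 3x3 conv (3->64 channels, 224x224 output)
# followed by a 4096->1000 dense layer.
total = conv_layer_ops(224, 224, 64, 3, 3, 3) + dense_layer_ops(4096, 1000)
print(f"~{total / 1e9:.2f} GFLOPs per forward pass")
```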

We’re very uncertain about the future of compute usage in AI systems, but it’s difficult to be confident that the recent trend of rapid increase in compute usage will stop, and we see many reasons that the trend could continue. Based on this analysis, we think policymakers should consider increasing funding[D] for academic research into AI, as it’s clear that some types of AI research are becoming more computationally intensive and therefore expensive.

Footnotes

  A. A petaflop/s-day (pfs-day) consists of performing 10^15 neural net operations per second for one day, or a total of about 10^20 operations. The compute-time product serves as a mental convenience, similar to kW-hr for energy. We don’t measure peak theoretical FLOPS of the hardware but instead try to estimate the number of actual operations performed. We count adds and multiplies as separate operations, we count any add or multiply as a single operation regardless of numerical precision (making “FLOP” a slight misnomer), and we ignore ensemble models. Example calculations that went into this graph are provided in this appendix. The doubling time for the line of best fit shown is 3.4 months.

  B. Just as in the original analysis, we focus on the costs to train models. This doesn’t include AI systems like expert systems, which attracted substantial investment in the first era.

  C. For one vivid account of the history of computing in AI in this period, see the “False Start” section in Hans Moravec’s article.

  D. We’ve already advocated for additional funding for academia in our testimony in Congress this year, and for the creation of dedicated compute clusters to help academia and industry collaboratively benchmark and assess the safety of AI systems, in response to a request for information from NIST.

Original post

Dario Amodei, Danny Hernandez

Addendum

Girish Sastry, Jack Clark, Greg Brockman, Ilya Sutskever

Acknowledgments

The authors thank Katja Grace, Geoffrey Irving, Jack Clark, Thomas Anthony, and Michael Page for assistance with this post.