December 2018

Machine Intelligence for the Innovators of Tomorrow - BMW i Ventures’ investment in Graphcore

Source: Graphcore

Today, we’re thrilled to announce our investment in Graphcore and its two co-founders, Nigel Toon and Simon Knowles. Graphcore secures the lead in the global AI chip race with $200 million in new capital from leading financial and strategic investors including Atomico, BMW i Ventures, Microsoft, Merian Chrysalis, Pitango, Sequoia and Sofina.

Graphcore is a UK-based private company founded in 2016 that is developing an Intelligence Processing Unit (IPU), which can improve the performance of machine intelligence training and inference by 10x to 100x compared to currently available solutions. The company is in a stage of rapid global growth, tripling the size of its team and opening new offices in London, Palo Alto and Beijing in 2018.

Market

AI is advancing at a phenomenal rate, driven by deep/machine learning applications such as speech and image recognition, video surveillance, customer service, network monitoring and automotive. According to the Tractica Artificial Intelligence Market Forecast, annual worldwide AI software revenue will grow from $5.4bn in 2017 to $108.8bn by 2025. AI compute is estimated to account for 1-2% of public cloud service sales today, growing to about 30-40% of that revenue by 2025 [Tractica].

Driven by the compute needs of AI software, demand for specialized hardware accelerators is growing rapidly. The global AI chip data center revenue currently generated by GPUs, CPUs and FPGAs (Field Programmable Gate Arrays) amounts to over $4bn. This volume is expected to expand to more than $11bn by 2025 (BMW i Ventures estimation), driven mostly by sales of purpose-built AI chips such as Graphcore’s IPU, posing a threat to Nvidia’s dominance.

The market is expected to double by 2021, so now is the time to release products in order to gain market share. 2019 and 2020 are likely the years when deep learning chipset volumes will ramp up and “winners” will begin to emerge.

A “winner” in the chipset market can rise very rapidly. Looking back to the year 2000, when Intel launched a data center offensive with its x86 CPU architecture, the market for server CPUs had a clear dominant technology: IBM’s “Power” family of processors, holding a 40%+ market share that corresponded to more than $2bn of annual sales. The other leading technology, “Sparc”, was provided by Sun Microsystems.

Jumping to 2018, “Power” and “Sparc” together hold less than 5% of the data center CPU market, while Intel’s x86 processor family has emerged as the clear leader (see the Intel x86 data center market share development below).

Source: https://www.top500.org/

For reference, there is also a distinct dominant technology in 2018 - Nvidia’s family of GPUs - owning 70%+ of the data center AI chip market, corresponding to approximately $3bn of sales expected in 2018. From 2000 to 2005, Intel grew at a phenomenal rate, from about 0% to 57% data center market share, with revenue totaling well over $1bn [BMW i Ventures estimation]. While Graphcore’s story may unfold with even higher velocity due to the unique characteristics of the much faster growing AI chip market today, Intel’s path shows how quickly the adoption of a new architecture can evolve in the processor world.

A decade ago, the overall CPU data center market was growing at a much slower pace than today’s AI chip market, which meant that high sales growth also translated to winning high market share. While we may not observe a gain in market share at such a rate, we expect a similar pattern to develop in the market for purpose-built AI processors, with Graphcore well positioned to lead with the exceptional combination of superior technology, team and timing.

Product

We believe Graphcore has the right team and product at the right time to succeed in an emerging market. The company has developed a novel AI accelerator chip (IPU) that is tailored to enable the machine intelligence of tomorrow. Unlike currently available CPUs, GPUs and FPGAs, Graphcore’s IPU is purpose-built for machine/deep learning algorithms and delivers an order of magnitude better performance. IPUs provide more compute per watt than CPUs and GPUs, and they have also been designed to be extremely efficient at mini-batch training.

Source: Graphcore

An important differentiator of Graphcore’s IPU is its innovative approach to memory. Computing architectures have traditionally relied on external RAM (Random-Access Memory) for program and data storage. This means that the variables for a computation have to be fetched from external RAM, which is very energy intensive and makes the memory interface between the processor and the RAM the bottleneck that limits overall system performance. Graphcore’s team decided to store the variables (i.e. the weights of a neural net) on the processor itself rather than in external RAM. This results in a 100x+ increase in memory bandwidth compared to traditional architectures.
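
To make the bottleneck concrete, here is a minimal roofline-style sketch in Python. The peak-compute, bandwidth and arithmetic-intensity figures are illustrative assumptions (not Graphcore or GPU specifications); the point is only that when weights must be streamed from external memory, attainable throughput is capped by memory bandwidth times arithmetic intensity rather than by peak compute.

# Illustrative roofline-style estimate; all numbers below are assumptions.
def attainable_tflops(peak_tflops, bandwidth_gb_s, flops_per_byte):
    """Attainable throughput = min(peak compute, memory bandwidth * arithmetic intensity)."""
    bandwidth_bound_tflops = bandwidth_gb_s * flops_per_byte / 1000.0  # GB/s * FLOP/byte -> TFLOP/s
    return min(peak_tflops, bandwidth_bound_tflops)

# Weights held in external DRAM: a low-arithmetic-intensity workload leaves compute idle.
print(attainable_tflops(peak_tflops=100.0, bandwidth_gb_s=900.0, flops_per_byte=10))    # 9.0
# Same workload with weights in much faster on-chip memory: capped by compute instead.
print(attainable_tflops(peak_tflops=100.0, bandwidth_gb_s=45000.0, flops_per_byte=10))  # 100.0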

Source: Graphcore

CPUs and GPUs typically have tens to hundreds of separate processor cores. Graphcore’s first IPU processor, Colossus, has 1,216 independent processor cores on each chip. The low power consumption of the IPU allows two IPU chips to fit onto a single 300W PCIe card, resulting in over 2,400 independent processor cores capable of running more than 14,000 independent IPU program threads, all working in parallel with minimal latency because the knowledge models are held in in-processor memory.
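
As a quick sanity check on these figures, the snippet below derives the card-level numbers from the per-chip ones quoted above. The threads-per-core value is our assumption, chosen to be consistent with the “more than 14,000” threads mentioned in this post rather than an official specification.

# Back-of-envelope check of the card-level figures quoted above.
cores_per_chip = 1216    # Colossus IPU cores per chip (from the text)
chips_per_card = 2       # IPU chips per 300W PCIe card (from the text)
threads_per_core = 6     # assumption, chosen to match "more than 14,000" threads

cores_per_card = cores_per_chip * chips_per_card
threads_per_card = cores_per_card * threads_per_core
print(cores_per_card)    # 2432  -> "over 2,400 independent processor cores"
print(threads_per_card)  # 14592 -> "more than 14,000 independent IPU program threads"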

To make use of these levels of data parallelism, Graphcore has developed its proprietary software called Poplar™. It determines how the thousands of individual processors on the chip communicate with each other, making sure that data is moved across the chip as efficiently as possible and at the right time, utilizing all available processors. This leads to significant performance improvements, enabling Graphcore to achieve processing speed gains of 10x or more compared to today’s highest performance GPUs.
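
Conceptually, this kind of graph tooling takes a computation expressed as a dependency graph of operations, assigns the work to the chip’s many cores ahead of time, and schedules the data exchange between them. The toy Python sketch below illustrates only that idea; it does not use Poplar’s actual API, and the round-robin placement is far cruder than what a production compiler does.

# Simplified illustration of mapping a compute graph onto many cores ("tiles").
# This is a toy model of the concept, not Graphcore's Poplar API.
from collections import defaultdict

def place_ops(graph, num_tiles):
    """Assign each operation in a dependency graph to a tile, round-robin."""
    placement = {}
    for i, op in enumerate(graph):           # graph: {op_name: [dependency names]}
        placement[op] = i % num_tiles
    return placement

def exchange_schedule(graph, placement):
    """List the tile-to-tile transfers implied by the placement."""
    transfers = defaultdict(list)
    for op, deps in graph.items():
        for dep in deps:
            if placement[dep] != placement[op]:
                transfers[(placement[dep], placement[op])].append(dep)
    return dict(transfers)

# Tiny example graph: a matmul feeds a bias add, which feeds an activation.
graph = {"matmul": [], "bias_add": ["matmul"], "relu": ["bias_add"]}
placement = place_ops(graph, num_tiles=1216)
print(placement)                             # {'matmul': 0, 'bias_add': 1, 'relu': 2}
print(exchange_schedule(graph, placement))   # {(0, 1): ['matmul'], (1, 2): ['bias_add']}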

Our Graphcore Investment

The market is driven by the increasing number of artificial intelligence and machine learning applications while being limited by the performance boundaries of traditional compute architectures. This creates large demand for scalable and optimized processing units. Being first to market with a production-ready AI chip and a unique, future-proof design, Graphcore offers an unparalleled opportunity to invest in the potential winner in that space.

So far there are limited high-performance options for central AI computation, especially when looking for companies that have systems ready for evaluation and deployment on the market. Graphcore is selling IPUs to data centers and cloud providers, while the company also has a strong product pipeline aiming towards smaller structures and automotive solutions. The combination of using Graphcore’s IPU in a data center and the possibility of utilizing the same architecture in a vehicle is very attractive. In the data center, the IPUs can be used for simulation and training, while the same architecture (certified for automotive) can be used in the car for inference, resulting in quicker turnaround times and less complexity during the development phase. Furthermore, the ability to run different kinds of neural nets with equally high performance on an IPU is beneficial: flexible hardware allows continued innovation on the algorithms even after the hardware design decision has been taken.

This latest round of funding will allow Graphcore to execute on its product roadmap, accelerate scaling and expand the company’s international footprint. It is a further step towards fulfilling the team’s ambition to build an independent global technology company, focused on the new and fast-growing machine intelligence market.

Final Remarks

The race for the dominant architecture in the AI chip market has just begun and there is no clear winner yet. Graphcore is first to market with a production-ready dedicated chipset for today’s and tomorrow’s AI computing needs.

Graphcore’s chipsets are computationally powerful and optimized for machine learning, allowing for high throughput at very low latency. Graphcore’s versatile and flexible IPU, which supports multiple machine learning techniques with high efficiency, is well-suited for a wide variety of applications, from intelligent assistants to self-driving vehicles. It is equally good at training and inference, allowing its use in a data center as well as in a vehicle.

Graphcore has the potential to become the winner of that space. We look forward to supporting Nigel, Simon and the Graphcore team in building a major global technology company that can help innovators in AI create the next generation of machine intelligence.

Graphcore’s co-founders, Nigel Toon & Simon Knowles

P.S.: Special thanks to our interns Marco Linner and Marie Tai for their help with the diligence.
