Jensen Huang, President and CEO of Nvidia, during his Computex keynote, held from May 29 to June 2 in Taipei, Taiwan. Alamy Stock Photo
ANALYSIS – Soaring demand for dedicated AI processors is running up against Nvidia’s virtual monopoly.
That is only the tip of the iceberg: the ChatGPT phenomenon has crystallized the democratization of generative artificial intelligence. Businesses, individuals, public organizations… everyone wants access to these technologies, which are already beginning to revolutionize certain uses. Less visible is what is happening beneath the surface: this unexpected and very rapid explosion in demand is creating a genuine bottleneck in the supply of chips dedicated to AI. Behind the apparent magic of programs such as ChatGPT or Microsoft’s Copilot lie unprecedented, staggering computing requirements. First, upstream, to “train” the large language models (LLMs) on which these new technologies are based, using hundreds of billions of data points. Then, downstream, to respond to end users’ requests to all this new software.
Yet in the data-center servers where everything is at stake, a…