Elon Musk told investors that his artificial intelligence startup xAI plans to build a supercomputer to power the next version of its AI chatbot, Grok, The Information reported on Saturday, according to Reuters.
According to the report, Musk said he wants the proposed supercomputer up and running by fall 2025, and that xAI may partner with Oracle to build the massive machine.
xAI could not be reached for comment. Oracle did not respond to a Reuters request for comment.
When completed, the linked clusters of chips, built from Nvidia's flagship H100 graphics processing units (GPUs), would be at least four times the size of the largest GPU clusters in use today, The Information reported, citing Musk's remarks in a May presentation to investors.
Nvidia's H100 family of powerful GPUs dominates the market for AI data center chips; the chips are in high demand and can be difficult to obtain.
Musk founded xAI last year to take on Microsoft-backed OpenAI, which he co-founded, and Alphabet's Google.
Musk said earlier this year that training the Grok 2 model took about 20,000 Nvidia H100 GPUs, and that the Grok 3 model and beyond will require 100,000 Nvidia H100 chips.