A new report stated that Microsoft is developing its own artificial intelligence chips that can be used to train large language models, thus reducing dependence on NVIDIA chips.
According to the report, published by The Information, Microsoft has been developing the chips in secret since 2019, and they are now available to a small group of Microsoft and OpenAI employees, who are testing how well they perform with the latest large language models, such as GPT-4.
Main supplier
NVIDIA is the main supplier of the AI server chips that companies are racing to buy, and OpenAI is estimated to need 30,000 NVIDIA A100 GPUs to commercialize ChatGPT.
NVIDIA's latest H100 GPUs are selling for more than $40,000 on eBay, illustrating the demand for cutting-edge chips that can help deploy artificial intelligence software.
And while NVIDIA scrambles to make as many chips as possible to meet demand, Microsoft is reported to have accelerated work on a project codenamed Athena to build its own AI chips, which it hopes will help it cut costs as it pushes artificial intelligence into its services.
Although it is not known whether Microsoft intends to offer the chips to customers of its Azure cloud service, the US software giant is said to be planning to make them more widely available within Microsoft and OpenAI early next year.
According to the report, Microsoft also has a chip roadmap that includes several future generations.