DeepSeek, a Chinese artificial intelligence (AI) company that develops large language models (LLMs), turned the world of AI on its head recently when it claimed that it spent just $5.6 million (note this is million, not billion) on computing power to develop its base AI model. That would be a fraction of what U.S. companies have been spending on computing power to build their AI models. And demand for infrastructure to power AI software is expected to be immense.

For example, Microsoft plans to spend $80 billion building out AI-capable data centers this year. Historically, about half of the spending on data centers goes toward servers. Meta Platforms, meanwhile, announced it would spend $65 billion this year on AI development, while the recently announced Stargate project backed by Oracle, OpenAI, and SoftBank has plans to spend $500 billion on AI infrastructure over the next several years.

The claim that DeepSeek could build an LLM so cheaply sent shock waves through the markets last week, and
Nvidia (NASDAQ: NVDA) was the biggest loser.
Nvidia's graphics processing units (GPUs) are central to the tech world's AI infrastructure buildout, as they are the primary source of the specific type of rapid computing power that AI systems require. The market's logic was simple: If DeepSeek can create an LLM chatbot on par with (or better than) ChatGPT or Meta's Llama using far less processing power, that does not bode well for GPU demand.