Sector

China's DeepSeek not a major threat to AI-focused chipmakers

The late-January release of details about a free large language model from Chinese AI startup DeepSeek raised questions about massive spending by the biggest players in the U.S., but Fidelity's Adam Benjamin says the actual competitive threat is shallow.

  • All manner of AI-related companies, including chipmakers, were rattled on January 27 when Chinese generative artificial intelligence company DeepSeek released a large language model that was cheaper to develop than others on the market, but Fidelity Portfolio Manager Adam Benjamin says investors overreacted to what he considers a shallow threat to competition.
  • "The news led to one of the worst trading days in years for many mega-cap tech stocks, including leading semiconductor companies that represent the backbone and promise of AI capabilities," says Benjamin, who manages Fidelity Advisor® Semiconductors Fund. "But we've since learned that the initial market reaction, including questions about the valuations of the biggest players, was overstated."
  • Benjamin notes that the market was alarmed that DeepSeek's open-source LLM reportedly cost just $5.6 million to develop, yet appeared to be functionally competitive with ChatGPT and other free AI tools. Whereas other models, such as ChatGPT, have taken many months or even years to pretrain, DeepSeek's R1 reportedly took just two months.
  • "With that said, we don't have a full accounting of the entire development cost," Benjamin cautions. "Notably, the figure doesn't incorporate costs associated with the open-source LLM models that were used to train R1," he says.
  • Benjamin explains that R1 was built largely through distillation and reinforcement learning – that is, by building on top of existing models – which made it faster to train and more efficient than a model developed from scratch (a simplified sketch of the distillation idea appears after this list).
  • The DeepSeek news reinvigorated conversations about massive spending on AI and whether algorithmic (software) efficiency – represented in this case by the R1 LLM – may be a better approach than brute-force computational power. The latter has been a driving factor behind the enormous outlays by big data centers on chips and power generation.
  • In Benjamin's view, it's no surprise that companies are finding more efficient ways to train LLMs, and he expects businesses to continue searching for efficiencies. Just as important, a major reason for his unwavering confidence in AI-focused chip companies is that large cloud-services providers, also known as hyperscalers, have not reduced their spending on computing infrastructure, as the market initially feared.
  • "In fact, hyperscalers recently raised their capital spending plans, setting the stage for robust outlays in 2025 and beyond," he notes.
  • In helming the fund, Benjamin operates under the philosophy that the value of technology stocks is in large part determined by the companies' future potential to generate earnings and cash flow. His investment framework also focuses on identifying themes that affect the largest end markets, determining potential winners and losers, and assessing how technology disruptors can affect incumbents.
  • Major fund holdings Nvidia, Taiwan Semiconductor Manufacturing and Marvell Technology each struggled in the wake of the DeepSeek news but affirmed their AI-related spending plans, supporting Benjamin's view that they still offer value and are positioned to capitalize further on the promise of AI.
  • "What DeepSeek's new LLM did do is accelerate the move from training to inference – in other words, to using trained models to analyze new data and make predictions – which could drive a substantial increase in computing requirements," he concludes.
Securities mentioned were fund holdings as of March 31.
  • For specific fund information such as standard performance and holdings, please go to the "Funds Managed" link on this page.