I wanted to share my thoughts on the recent breakthrough of DeepSeek's R1 LLM from China and the repercussions this breakthrough will have on the industry.
Smart stuff. Appreciate the explanation. Notably, AWS margins expanded over the past few years, even with all of the $ spent with NVDA and the scaling up of overall AI initiatives.
I recognize Nvidia's integration with CUDA, but it's still hard to see NVDA maintaining the same margins as Microsoft, a company with barely any manufacturing costs.
Very good analysis. Spending more compute time to serve a request is the next logical step now that we have reached an upper limit on data sets. It makes total sense. Inference-chip specialization may lead to a whole new wave of innovation. I also hold a similar portfolio of Amazon, Google, and TSM. Whether Google is weakened by this is a concern. The commoditization of LLM models weakens OpenAI, which is good, but it also lowers the cost of being served by an LLM, thus strengthening the case for using LLMs over search. What do you think?
The faster LLMs progress, the faster the cannibalization of Google Search will be. So yes, the drop in the cost of serving these models is a negative for the Search business, but good for their cloud business, Waymo, etc.
Wonderful article, Richard!
Thank you.
Very interesting read, thank you for sharing!
Excellent writing, Richard.
Thanks, Nick!