AI Industry Shift: From Nvidia to Broadcom


In recent days, the United States stock market witnessed a remarkable shift that some have described as “buy Broadcom, sell Nvidia.” Broadcom's shares surged an astonishing 27%, the largest single-day gain in the company's history, pushing its market valuation past the remarkable threshold of one trillion dollars. In stark contrast, Nvidia, the dominant player in the chip industry, saw its share price slip 3.3%. This juxtaposition of fortunes sent ripples through the tech sector, leading analysts and investors alike to speculate about the future direction of artificial intelligence and semiconductor production.

The catalyst for this sudden buying spree can be traced back to a bold prediction presented by Broadcom’s Chief Executive Officer, Hock Tan, during a recent earnings call.

Tan forecasted that market demand for Application-Specific Integrated Circuits (ASICs) tailored for artificial intelligence (AI) could reach a staggering $60 to $90 billion by 2027. If this prediction holds, Broadcom's ASIC-related AI business would need to roughly double every year from 2025 through 2027. Such a projection greatly enhances market optimism about the ASIC sector, pointing to a possible growth explosion in the near future.
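A quick back-of-the-envelope check shows why Tan's range implies roughly annual doubling. The sketch below assumes a baseline of about $12 billion in AI ASIC revenue for 2024 (an illustrative figure, not taken from the article) and computes the compound annual growth rate implied by reaching each end of the $60–90 billion range three years later:

```python
def implied_cagr(start, end, years):
    """Compound annual growth rate needed to grow from `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

baseline = 12.0  # assumed 2024 AI ASIC revenue in $B (illustrative, not from the article)
low = implied_cagr(baseline, 60.0, 3)   # low end of Tan's 2027 range
high = implied_cagr(baseline, 90.0, 3)  # high end of Tan's 2027 range
print(f"Implied annual growth: {low:.0%} to {high:.0%}")
```

Under that assumed baseline, the implied growth rate lands in the 70–96% per year range, which is consistent with the article's "roughly doubling each year" framing.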

As AI technology continues to evolve, the industry faces notable challenges, particularly data exhaustion and diminishing marginal returns on model performance. At the core of AI development lies pre-training, an iterative process in which models are fed extensive amounts of data and their capabilities progressively refined. Major technology corporations have scrambled to secure the most powerful Nvidia GPUs available on the market, believing that a sizable inventory of these units would bolster the efficacy of their AI models.

However, this race for GPU acquisition has raised concerns that exhaustive training methods are consuming the global data reservoir at an alarming rate. Additionally, escalating computational costs, set against diminishing returns, have sparked an essential debate about whether we are approaching the end of the pre-training phase in AI.

Highlighting this discourse, Ilya Sutskever, a co-founder of OpenAI and the current head of the startup SSI, delivered a striking address at the NeurIPS 2024 conference, declaring the imminent conclusion of the pre-training era. Calling data the “fossil fuel” of AI, Sutskever warned that the resources available for pre-training have reached their limit. Noam Brown, a prominent figure at OpenAI, echoed this sentiment, pointing out that while the advances in AI from 2019 to now can be attributed to expanding data and computational resources, large language models still struggle with remarkably simple tasks, such as Tic-Tac-Toe.

This dialogue raises a pivotal question: Is scaling all we really need in the pursuit of a better AI? With the industry’s eyes shifting toward the next phase of large-scale AI models — that of logical reasoning — we see a clear trend emerging.

The next stage in AI model development focuses on harnessing existing large models to build applications in various specialized domains, bringing them into real-world deployment.

Current offerings in the market, such as Google's Gemini 2.0 and OpenAI's o1, demonstrate how AI Agents have become primary targets for development among major corporations.

As these robust models become increasingly sophisticated, many experts suggest that ASIC chips, which are tailored for specific applications, may gradually supplant the GPU chips that primarily serve training purposes. This notion gains traction amidst Hock Tan’s optimistic forecast for the ASIC market, which in turn corroborates widespread expectations of a paradigm shift within AI.

Delving deeper into semiconductor architecture, it is essential to understand the distinction between standard semiconductors and ASICs. Standard semiconductors adhere to conventional specifications and can be used across a broad spectrum of electronic devices, while ASICs are custom-designed to meet specific needs.


Because of this bespoke design, ASICs tend to be integrated into specialized devices, executing precisely the functions for which they were created.

The realm of AI computation has diverged into two distinct paths: the generalized route exemplified by Nvidia's GPUs, suited to high-performance computing, and the specialized route represented by ASIC chips. While GPUs excel at vast parallel computations, they encounter challenges such as memory bottlenecks when performing large matrix multiplications. ASICs, engineered specifically to remedy these issues, promise higher cost-effectiveness once mass-produced.
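The memory-bottleneck point can be made concrete with a roofline-style estimate. The sketch below is a simplification: it assumes each matrix crosses the memory bus exactly once (ideal on-chip caching) with 2-byte fp16 elements, and counts the floating-point operations and memory traffic of an n × n matrix multiply. When the resulting FLOPs-per-byte ratio falls below a chip's compute-to-bandwidth ratio, the multiply is memory-bound rather than compute-bound — the regime that specialized accelerators attack with large on-chip buffers and systolic datapaths:

```python
def matmul_roofline(n, bytes_per_elem=2):
    """Rough FLOP count and off-chip memory traffic for an n x n matrix multiply.

    Assumes A, B, and C each cross the memory bus exactly once (ideal caching)
    and fp16 (2-byte) elements; real kernels move more data than this.
    """
    flops = 2 * n ** 3                       # n^2 outputs, each a length-n multiply-add dot product
    bytes_moved = 3 * n ** 2 * bytes_per_elem
    return flops, bytes_moved, flops / bytes_moved  # last value: arithmetic intensity

for n in (1024, 8192):
    flops, traffic, intensity = matmul_roofline(n)
    print(f"n={n:5d}: {flops:.2e} FLOPs, {traffic:.2e} bytes, {intensity:.0f} FLOPs/byte")
```

Under these assumptions intensity grows only linearly with n (n/3 FLOPs per byte), so smaller or skinnier matrices — common in inference workloads — are exactly the memory-bound cases where custom silicon pays off.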

Simply put, the strength of GPUs lies in their maturity and established supply chains, whereas the appeal of ASICs lies in their focus and efficiency. The latter can achieve superior processing speeds while consuming less energy, making them better suited to deployment in edge computing environments.

The growing demand for custom AI chips has turned companies like Marvell and Broadcom into gold mines of the semiconductor industry.

As the supply of GPUs tightens and prices soar, numerous tech giants find themselves entering the arena of self-developed ASIC chips tailored for their specific requirements.

Notably, Google is often cited as a trailblazer in AI ASIC development, having introduced its first-generation Tensor Processing Unit (TPU) back in 2015. Other noteworthy examples include Amazon's Trainium and Inferentia, Microsoft's Maia, Meta's MTIA, and Tesla's Dojo. Marvell and Broadcom have, for years, dominated the upstream supply chain for self-developed AI chips.

Marvell’s rise can be attributed to a strategic shift in leadership. Under CEO Matt Murphy's guidance since 2016, the company refocused on developing custom chips for tech giants during its reorganization, effectively capitalizing on the AI boom.

In addition to Google and Microsoft, Marvell has recently inked a five-year collaboration agreement with Amazon’s AWS aimed at designing proprietary AI chips.

Industry analysts speculate that this partnership could potentially double Marvell’s custom AI chip business in the coming fiscal year.

Broadcom, another key player, boasts relationships with major clients such as Google, Meta, and ByteDance. Analysts forecast that by 2027-2028, each of these clients could reach an impressive procurement volume of a million ASICs annually. With additional major clients coming on board, Broadcom looks poised to reap considerable AI revenue in the coming years.

As AI moves deeper into what can be described as the “second half” of its development journey, the real game of logical reasoning has only just begun. In the words of CEO Hock Tan, “In the future, 50% of AI FLOPs will be attributed to ASICs, with massive cloud computing providers potentially relying entirely on ASICs for their internal needs.” This undoubtedly foreshadows an exciting and transformative period in both the semiconductor and AI industries.
