
The Story Of Chip Giant Nvidia And The Entry Of New Players In The Market!

Nvidia's graphics processing units (GPUs) are often regarded as the finest in the industry for running AI models.

If a firm has been around for a few decades and still hasn't made it to the top, it is more likely to be an 'also-ran' company that gets purchased or simply dies. None of this occurred in the case of Nvidia, headquartered in Santa Clara, California. And what transpired stunned the tech world.

Nvidia's GPU chips power OpenAI, the world's most disruptive AI firm, which unveiled the chatbot ChatGPT in November 2022.


Nvidia's fortunes were transformed as a result. The firm, founded in 1993, became the first chipmaker to reach a market value of more than USD1 trillion, making it one of the most valuable tech corporations in the United States, behind only Apple, Microsoft, Alphabet, and Amazon. Its stock has risen 170% since the beginning of the year.

But none of this has changed Nvidia co-founder and CEO Jensen Huang's signature attire of dark pants, a black shirt and a black jacket, nor the company's unwavering focus on growing its closely guarded ecosystem. That ecosystem includes the CUDA software platform, which runs only on Nvidia hardware, as well as the company's networking capabilities.

What distinguishes Nvidia?

Nvidia’s rise from the shadows to the fore has been spectacular.

For most of its history, Nvidia prioritised computer graphics and gaming. While Intel ruled the computer's heart with its microprocessors, Nvidia stayed on the periphery.

Computing requirements have shifted away from personal computers, laptops, and desktops towards data centres. This is especially noticeable in computation-intensive applications, which often run on virtual machines hosted in data centres. GPU sales have grown to 50% of data centre chip sales, up from 5%-10% five years ago. GPU-powered applications such as high-performance computing, big data analytics, AI, graphics rendering, and cloud gaming frequently have substantially greater power requirements than earlier generations of applications.

A decade ago, as Facebook and Google grew in popularity, they deployed machine learning (ML), deep learning, and AI models to improve their services. Such models need massive amounts of computing power, with thousands of chips working simultaneously. Nvidia's chips were best suited to such workloads, making them appealing beyond their primary markets of gaming and graphics.

Compared with CPUs, Nvidia's GPUs offer high memory bandwidth, which lets them execute many calculations at the same time. AI computing relies heavily on matrix multiplication (large numbers of repetitive multiply-accumulate operations), and Nvidia's processors contain Tensor Cores specialised for exactly these calculations.
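To make the parallelism concrete, here is a minimal sketch in CUDA (Nvidia's own programming platform, discussed above) of a deliberately naive matrix multiplication in which each GPU thread computes one element of the result. The kernel and matrix size are illustrative assumptions, not Nvidia's Tensor Core implementation; real AI workloads would call optimised libraries such as cuBLAS instead.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Naive matrix multiplication C = A * B for square N x N matrices.
// Each GPU thread computes one element of C, so the repetitive
// multiply-accumulate work is spread across thousands of threads.
__global__ void matmul(const float* A, const float* B, float* C, int N) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

int main() {
    const int N = 256;                      // illustrative matrix size
    size_t bytes = N * N * sizeof(float);

    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    // Launch a 2D grid so every output element gets its own thread.
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matmul<<<grid, block>>>(A, B, C, N);
    cudaDeviceSynchronize();

    printf("C[0][0] = %.1f (expected %.1f)\n", C[0], 2.0f * N);

    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

On a CPU the same work would be a triple nested loop executed one multiplication at a time; on the GPU the outer two loops effectively dissolve into the thread grid, which is why high memory bandwidth and many simple cores pay off for this kind of workload.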


According to Sivakumar Ramamurthy, co-founder of Portland, Oregon-based incubator and investment platform Pragya Ventures, when the first wave of AI innovation began to arrive, moving initially from ML to deep learning, Nvidia's ecosystem was the best available. Large firms like Google and Facebook drove the need for ML and became consumers of Nvidia's software infrastructure, which greatly eased their entry into AI. The market found them. Ramamurthy worked at Intel for over three decades before launching Pragya Ventures in the United States in 2019.

Thousands of AI processors exchange information at the same time, and Nvidia owns around 80% of this networking market, owing to its 2019 acquisition of Mellanox, a networking equipment specialist. This secures Nvidia's supremacy in the burgeoning AI business (the company's annual sales stood at around USD27 billion as of FY22).

Trying to achieve supremacy comparable to Intel's?

For more than half a century, Intel has been the final word in semiconductor chips. However, like many previous behemoths, it was caught off guard by the dramatic shift in chip demand in the AI age.

Nvidia did make a large bet on AI before everyone else.

Intel, the semiconductor industry leader, failed to recognise changes in computing demands and continued to focus on central processing units (CPUs) used in personal computers and servers.

Intel’s supremacy was based on microprocessors, which were used by everyone to construct personal computers and servers. IBM, HP, Toshiba, Asus, Samsung, Dell (established by Michael Dell in the 1980s), Lenovo (which purchased IBM’s PC division), and others were among those on the list.

While Intel stayed the course and concentrated on PCs, it failed to recognise the shifting contours of the computing world brought about by the Internet, smartphones, cloud computing, and data centres. Intel's CPUs were adequate for PCs but insufficient for the high-volume computing required by AI.


Nvidia is the market leader in GPUs, but unlike in the microprocessor era, the pace of innovation is considerably faster now, thanks to the advent of new applications such as AI, the metaverse, and ambient computing. Nvidia has a head start in GPUs, but competition is quickly catching up as GPUs become a more appealing market.

GPUs versus CPUs.

As previously stated, the transition from CPUs to GPUs, in tandem with the increasing popularity of AI-driven applications, has been at the heart of Nvidia’s metamorphosis.

Because of its lead in GPUs, Nvidia remains unrivalled.

Initially, chipmakers focused on CPUs, which serve as the brain of computers or data centre servers. These are general-purpose workhorses that can launch a web browser, run Microsoft Excel, manage massive bank transactions, and coordinate the operation of many systems, including memory and display.

CPUs can perform a wide variety of computations, but they largely do so in a serial fashion, one operation after another.

CPUs fall short when it comes to managing massive amounts of data, such as the millions of data points AI systems sift through to distinguish human faces or online buying habits. They are far too sluggish for such tasks.

This is where GPUs come into play. GPUs are built to perform numerous repetitions of the same computation at the same time. This form of parallel computing is well suited to training AI systems.

As an example, whereas a CPU feeds an algorithm pieces of data one after the other, a GPU processes many pieces of data at the same time. To recognise a human face in a high-resolution 4×6 photo of 1200 x 1800 pixels, a CPU would have to analyse one pixel at a time, whereas a GPU can work on all the pixels at once. As a result, the latter is favoured in AI applications, fuelling Nvidia's meteoric ascent.
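To make the pixel example concrete, the sketch below (an assumed illustration, using a simple brightness adjustment rather than actual face recognition, at the 1200 x 1800 resolution mentioned above) writes the same per-pixel work twice: once as a serial CPU loop that visits pixels one by one, and once as a CUDA kernel that assigns one GPU thread to each pixel.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Pixel dimensions from the 4x6 photo example above (1200 x 1800).
const int WIDTH  = 1200;
const int HEIGHT = 1800;

// Serial CPU loop: touches the ~2.16 million pixels one after another.
void brighten_cpu(unsigned char* img, int n) {
    for (int i = 0; i < n; ++i) {
        int v = img[i] + 10;
        img[i] = (unsigned char)(v > 255 ? 255 : v);
    }
}

// GPU kernel: one thread per pixel, so the pixels are processed
// concurrently rather than one at a time.
__global__ void brighten_gpu(unsigned char* img, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int v = img[i] + 10;
        img[i] = (unsigned char)(v > 255 ? 255 : v);
    }
}

int main() {
    const int n = WIDTH * HEIGHT;

    unsigned char* img;
    cudaMallocManaged(&img, n);
    for (int i = 0; i < n; ++i) img[i] = 100;   // dummy image data

    brighten_cpu(img, n);                        // serial pass on the CPU

    // Parallel pass on the GPU: ~2.16 million threads in a 1D grid.
    int block = 256;
    int grid = (n + block - 1) / block;
    brighten_gpu<<<grid, block>>>(img, n);
    cudaDeviceSynchronize();

    printf("first pixel after both passes: %d\n", img[0]);  // 100 + 10 + 10 = 120
    cudaFree(img);
    return 0;
}
```

The logic in the two versions is identical; only the execution model differs, which is the essence of why GPUs dominate data-parallel AI workloads.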


Nvidia now controls more than 80% of the AI chip market. Currently, none of Nvidia's competitors offer AI products that combine hardware processors, software (CUDA), and networking. Nvidia's unique selling point lies in this ecosystem, much as Apple and Microsoft built ecosystems of their own.

According to Chander Damodaran, chief technology officer of Brillio, a New Jersey-based digital technologies and big data analytics business, Nvidia did not pursue the popular chip path. They placed large bets on gaming, graphics, and blockchain. That has begun to pay off with AI, where they have a significant first-mover advantage.

The competitive ecosystem.

AMD introduced its AI GPU, the MI300X, on June 13. This is Nvidia's most difficult challenge to date.

AWS is considering using the AMD chip, but it won't be ready until the end of this year. AMD intends to entice customers by offering lower-cost GPUs than Nvidia's.

Google, Amazon, Microsoft, Meta, and others have begun to create their own processors to handle huge amounts of data for AI. Google has created its own Tensor Processing Units (TPUs), which are tuned for use in neural networks and can handle enormous data-processing loads.

Apple has developed its own graphics processors for the Mac. Previously, it used Nvidia graphics chips and Intel CPUs, but it is increasingly designing its own chips, which are manufactured by Taiwan's TSMC.

Meta is developing its own processors and collaborating with Qualcomm to supply them for its augmented reality (AR) initiatives.

The end-user ecosystem is far more informed and prepared to build its own chips. Apple, Google, Qualcomm, and even Meta are all involved. While they may not be able to match Nvidia entirely, it is in their interest to collaborate with Intel and AMD so that Nvidia does not run away with the market in the long run.


Eventually, there will be several players. People may have thought Intel was invincible 15 years ago. Today, things are extremely different, according to Ramamurthy.

In the fourth quarter of 2022, Intel's sales fell by a third to around USD14 billion. The corporation reported a USD700 million loss, and its share price has halved since March 2021.

If Nvidia continues to innovate and remain a leader, it may be able to sustain this for a very long time. But other players will catch up quickly, Ramamurthy is certain.

India may gain.

In December 2021, the Indian government unveiled a USD10 billion incentive package for fabs and chip design. However, only Micron Technology has received official approval thus far, and Micron's unit will handle assembly and testing rather than fabrication.

The state-owned Semiconductor Laboratory (SCL) manufactures older generation chips, mostly for defence and other government purposes.

While India lacks manufacturing, it does have over 1 lakh (100,000) chip designers. This represents around 20% of the world's chip design talent, working at global design houses such as Intel, Texas Instruments, Nvidia, Qualcomm, and AMD, as well as at other fabless firms' divisions.

The engineers here work on every level of the design process, from specs to architecture to design and verification to manufacturing support and testing.

Even if a fab plan is approved, it would not be cutting-edge, since India lacks the supporting ecosystem, and the plant would not be ready for three to four years.

As the globe moves towards GPUs, there may be a scarcity of these chips, which can cost up to USD30,000 apiece. Analysts at UBS, SemiAnalysis, and other research firms predict a GPU shortage, which will push up prices and open the door for competitors such as AMD.

Another reason competitors are moving quickly is Nvidia's walled-garden strategy. Its CUDA software is compatible only with Nvidia processors, and rivals will try to disrupt this with open systems.


Nvidia has a major play as long as its approach aligns with how the market is reorganising. According to Damodaran, the goal for Nvidia and its rivals is to form alliances with huge tech users such as Google, Amazon, and OpenAI. Without such alliances, who will the company sell to?

Nvidia has seen its fair share of challenges, including the recent crypto meltdown, which has impacted demand for its processors.

The business has been in the game for three decades and risks sliding into an Intel-style trap if it stands still while other chipmakers and tech companies play catch-up.

Conclusion.

For most of its existence, Nvidia remained in the shadows of chip industry behemoths such as Intel and AMD. That was before AI took over the globe and tech businesses recognised that Nvidia's processors were the best at making machines seem intelligent. The company currently has a significant lead in AI. But can it keep it?
