
The Need For A Domestic AI Hardware Ecosystem In India


From personal assistants to facial-recognition programs that track criminals, AI applications rely on hardware to innovate. Every AI application involves two main activities: training and inference.

Getting an algorithm right requires exposing it to large sets of data. When making an inference, however, the algorithm needs to react quickly, with no time for a round trip to the cloud, rather than ingest more information.

Training and inference can both run in the cloud, but the two layers place very different demands on AI hardware, and inference in particular often calls for edge or on-device computing.
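To make the distinction concrete, the following is a minimal, illustrative PyTorch sketch (the model, batch sizes, and iteration counts are arbitrary assumptions, not drawn from any product discussed here): training loops over large batches and backpropagates to update weights, while inference is a single, latency-sensitive forward pass with gradients disabled.

```python
import torch
import torch.nn as nn

# Toy model standing in for any AI workload (illustrative only).
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# --- Training: repeated passes over large datasets; heavy compute and memory ---
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):                          # many iterations over big batches
    x = torch.randn(256, 64)                  # a large batch of training samples
    y = torch.randint(0, 10, (256,))          # labels
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()                           # backpropagation adds compute and memory cost
    optimizer.step()

# --- Inference: one quick forward pass; no gradients; latency matters ---
model.eval()
with torch.no_grad():                         # no gradient state is kept
    sample = torch.randn(1, 64)               # a single incoming request
    prediction = model(sample).argmax(dim=1)  # must return quickly, ideally on-device
```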

Many developers run into problems with training and inference because of hardware limitations in memory, logic, and networking.


With next-generation accelerator architectures, semiconductor companies can increase computational efficiency or move data between memory and storage more efficiently.

Dedicated AI hardware can make AI applications better suited to handling vast amounts of data.

The importance of hardware in AI will drive greater demand for semiconductor companies’ current chips, and they may also profit from developing novel technologies such as workload-specific AI accelerators.

By capturing 40-50 percent of the total value of the technology stack, semiconductor companies could seize their best opportunity in decades.

Semiconductors used in AI applications are projected to grow by 18 percent annually over the next few years, compared with 2 percent for non-AI chips.

The revenue generated by AI-related semiconductors could reach $67 billion by 2025.
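As a rough, back-of-the-envelope illustration of how differently those growth rates compound, the sketch below applies them over five years; the starting revenue figure is purely hypothetical, and only the 18 percent and 2 percent rates come from the projections above.

```python
# Illustrative compounding of the growth rates quoted above.
# The starting revenue is a hypothetical figure, not a reported number.
start_revenue = 30.0          # hypothetical revenue in $ billion
years = 5
ai_rate, non_ai_rate = 0.18, 0.02

ai = start_revenue * (1 + ai_rate) ** years          # ~68.6: roughly the $67B scale cited
non_ai = start_revenue * (1 + non_ai_rate) ** years  # ~33.1: barely ahead of where it started

print(f"AI-related semiconductors after {years} years: ${ai:.1f}B")
print(f"Non-AI semiconductors after {years} years:     ${non_ai:.1f}B")
```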

A Case For A National AI Hardware Policy

Chipsets are becoming increasingly application-specific as the semiconductor market shifts. Application-driven chips are designed mainly to perform the same function repeatedly and efficiently.

Algorithm-specific, AI-enabled hardware fits squarely into this space. Chips to Startup (C2S) is a government program that encourages universities, research institutes, startups, and SMEs to create prototype semiconductor chips for specific applications.

Additionally, it seeks to train engineers to design ASICs and FPGAs. According to a Markets and Markets research report, AI-related and application-specific semiconductors will reach $57.8 billion by 2026, growing at a CAGR of 40.1 percent.


The increasing share of ASICs, GPUs, and other application-specific chips in the semiconductor market signals the growing revenue and significance of AI hardware.

The space, defense, and telecommunications industries are now adopting AI capabilities in custom hardware. AI-enabled hardware falls into two major categories: training chipsets and inference chipsets.

In terms of cost-benefit, India’s semiconductor manufacturing ambitions would be better served by investing in AI inference chip fabs than in display fabs or fabs for other chips.

Training and inference accelerators are not produced in the same way as typical semiconductor ICs (whether leading or trailing edge). Compared with AI inference chips, AI training hardware demands more power, more data crunching, and more memory.

Producing training hardware also requires specific licensed software. AI inference chips are therefore the best bet for India to develop its own ecosystem.

Indian manufacturers can make a mark in AI hardware by focusing squarely on large-scale inference chip development and manufacturing.

This contrasts with leading-edge nodes, where advanced computing and AI training chips demand significant capital investment.

Inference chipsets, along with their software, are readily available and low-cost, unlike AI training hardware, which a few companies such as NVIDIA have effectively monopolized.


The declining suitability of traditional Arm architectures for heavy AI workloads matters when focusing on AI hardware. Arm remains a costly, licensed, and proprietary architecture despite Arm Holdings’ release of the Armv9 architecture in 2021.

Because RISC chips deal with fewer and simpler instructions (letting software do most of the work), more room is left on the silicon for AI implementation.

Given the cost pressures of miniaturization and rising power consumption, alternative architectures such as RISC-V are preferred for integrating high-level AI algorithms into semiconductor chips.

Because it is open source and royalty-free, RISC-V is gaining acceptance, and the ecosystem around it, including compilers and verification tools, is maturing.

Furthermore, the Indian government has launched the Digital India RISC-V (DIR-V) initiative, which emphasizes reducing reliance on licensed architectures.

The focus on AI hardware is essential because Arm processor cores are less suited to training and running AI algorithms at the required scale; India is therefore pushing alternatives such as RISC-V for AI integration.

How Should India Start?

One, create a trailing-edge fab specifically for large-scale fabrication of inference chips. Under the 2021 semiconductor package, the government is providing financial incentives for the construction of fabs with process technologies ranging from 28 to 65 nanometers (nm).

A trailing-edge fab is easier to set up, requires less investment, and can begin production sooner. Large-scale AI chips can be produced in the country using a 45 nm-and-above fab, since trailing-edge nodes are adequate for AI hardware production.

A public-private partnership for a trailing-edge fab dedicated to AI hardware manufacturing can be a priority. Additionally, it is crucial to support and fund open-source AI hardware design projects.


The government can support open-source projects to develop AI training hardware (along the lines of RISC-V), given the dominance of a few firms and their proprietary software stacks.

The Triton language developed by OpenAI is seen as a credible alternative to NVIDIA’s CUDA, and DARPA’s Real-Time Machine Learning (RTML) program aims to generate ASICs that run real-time ML logic.
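As an illustration of why Triton is considered accessible, the canonical vector-addition kernel from Triton’s own tutorials fits in a few lines of Python, with no hand-written CUDA. The sketch below assumes a CUDA-capable GPU and the triton and torch packages; it is an example of the style of programming, not a statement about any Indian project.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                     # each program instance handles one block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                     # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# Usage: two vectors on the GPU, added by the Triton kernel.
a = torch.rand(4096, device="cuda")
b = torch.rand(4096, device="cuda")
assert torch.allclose(add(a, b), a + b)
```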

For example, the government could focus on IISc’s ARYABHAT-1 (Analog Reconfigurable Technology And Bias-scaled Hardware for AI Tasks) chip, an intelligent chip designed to handle sizeable parallel computing operations at high speed, particularly for AI-based applications.

Expanding the scope of existing policy schemes to include AI hardware is an essential first step. The government has already initiated projects to develop a semiconductor ecosystem in the country, such as the Design Linked Incentive (DLI) scheme.

The scheme can also cover AI implementations built on compound semiconductors like gallium nitride (GaN) and silicon carbide (SiC).

Parallel computing languages and other design aspects related to AI hardware can also be added to the DLI scheme (especially its deployment-linked incentive aspect). 

As Artificial Intelligence (AI) gains traction across multiple sectors, computing hardware must also be upgraded to handle the computational workload of AI algorithms.

India is well positioned to make a mark in the low-hanging fruit of the AI hardware sector, and it should seize this opportunity.


Nandana Valsan

