Run:AI raises $13 million to train AI models efficiently

AI models are great: after all, they're at the core of everything from voice assistants to data center cooling systems. What isn't great is the time and effort required to train and fine-tune them. The data sets ingested by production algorithms comprise hundreds of thousands (or millions) of samples and can take even powerful machines weeks to process. New techniques promise to expedite model training, but not all of them generalize.
It's this longstanding challenge that inspired Omri Geller, Ronen Dar, and Meir Feder to found Run:AI, a software provider developing a platform that autonomously speeds up AI development. The company emerged from stealth today with $10 million in Series A funding led by S Capital and TLV Partners, bringing its total raised to $13 million following a $3 million seed round; Geller says the new funding will be used to further develop the company's product offering.
“Traditional computing uses virtualization to help many users or processes share one physical resource efficiently; virtualization tries to be generous,” he added. “But a deep learning workload is essentially selfish since it requires the opposite: It needs the full computing power of multiple physical resources for a single workload, without holding anything back.”
Run:AI's software, which Geller describes as "low-level" and "close to the metal," creates an abstraction layer that analyzes the computational characteristics of AI workloads and uses graph-based algorithms to minimize bottlenecks, effectively optimizing the workloads for faster, more efficient execution. It also allocates them so that all available compute resources are fully utilized, taking into account factors like network bandwidth, compute capacity, cost, and data pipeline and data set size.
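Run:AI hasn't published the details of these algorithms, but the general idea of packing competing training jobs onto shared accelerators can be sketched with a toy best-fit scheduler. Everything below (the node names, memory figures, and the place() helper) is invented for illustration and is not Run:AI's actual method:

```python
# Toy illustration only: Run:AI's real graph-based scheduler is proprietary.
# This greedy best-fit allocator just shows the flavor of the problem:
# fit each GPU job onto the node that leaves the least memory stranded.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gpu_mem_free_gb: float              # GPU memory still available
    jobs: list = field(default_factory=list)

@dataclass
class Job:
    name: str
    gpu_mem_gb: float                   # GPU memory the workload needs

def place(jobs, nodes):
    """Assign each job (largest first) to the tightest-fitting node."""
    for job in sorted(jobs, key=lambda j: j.gpu_mem_gb, reverse=True):
        fits = [n for n in nodes if n.gpu_mem_free_gb >= job.gpu_mem_gb]
        if not fits:
            print(f"{job.name}: queued, no node has {job.gpu_mem_gb} GB free")
            continue
        best = min(fits, key=lambda n: n.gpu_mem_free_gb)
        best.gpu_mem_free_gb -= job.gpu_mem_gb
        best.jobs.append(job.name)

nodes = [Node("node-a", 32.0), Node("node-b", 16.0)]
place([Job("train-large", 24.0), Job("finetune", 12.0), Job("eval", 8.0)], nodes)
for n in nodes:
    print(f"{n.name}: {n.jobs}, {n.gpu_mem_free_gb} GB free")
```

A production system would also weigh the bandwidth, cost, and data pipeline factors the company mentions; this sketch optimizes for GPU memory alone.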
Under the hood, Run:AI mathematically “breaks up” AI models into multiple fragments that run in parallel, Geller says, an approach that has the added benefit of cutting down on memory usage. This in turn enables models that would otherwise be constrained by hardware limitations (chiefly graphics card memory) to run unimpeded. “Traditional computing software just can’t satisfy the resource requirements for deep learning workloads,” he said.
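The article doesn't say exactly how Run:AI partitions a model, but the memory benefit of fragmenting one is easy to see with a generic form of model parallelism. In the NumPy toy below (all shapes invented), a layer's weight matrix is split column-wise across two hypothetical devices, so each holds half the parameters while the combined output matches the unsharded computation:

```python
# Generic model-parallelism sketch, not Run:AI's method: split one layer's
# weight matrix column-wise so each "device" stores half the parameters.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 512))         # a batch of input activations
W = rng.standard_normal((512, 1024))      # full weight matrix of one layer

y_full = x @ W                            # single device: holds all of W

W_left, W_right = np.split(W, 2, axis=1)  # two 512x512 shards, half each
y_left = x @ W_left                       # would run on device 0
y_right = x @ W_right                     # would run on device 1, in parallel
y_sharded = np.concatenate([y_left, y_right], axis=1)

assert np.allclose(y_full, y_sharded)     # same result, half the weights per shard
```

Each shard computes its slice of the output independently, which is why this kind of split both parallelizes the work and keeps any single accelerator's memory footprint below the full model size.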
Dar and Geller founded Run:AI in 2018 after studying together at Tel Aviv University under Feder, who specializes in information theory and previously led two startups to exits. Dar was a postdoctoral researcher at Bell Labs and an R&D and algorithms engineer at Apple, Anobit, and Intel; Geller was a member of an elite unit of the Israeli military, where he led large-scale projects and deployments.
They aren't the first to market with tech that can optimize algorithms on the fly: Ontario startup DarwinAI taps a technique called generative synthesis to ingest AI models and spit out highly optimized, compact versions of them. But one investor, TLV Partners' Rona Segev-Gal, was convinced by the wealth of expertise in hardware, parallel computing, and deep learning that Run:AI's team brings to the table.
"Executing deep neural network workloads across multiple machines is a constantly moving target, requiring recalculations for each model and iteration based on availability of resources," she said. "Run:AI determines the most efficient and cost-effective way to run a deep learning training workload. We've seen many AI companies in recent years, but Omri, Ronen, and Meir's approach blew our minds."
Run:AI has several international customers and says that it’s established a U.S. office.
Source: VentureBeat
