Trends

Intel exec talks about simplifying AI implementation at the edge

At Transform 2019 this month, Intel’s GM of Vision Markets Farshid Sabet took to the stage to address the opportunities and innovations in IoT. As Sabet describes it, he is in an enviable position to assess the many companies developing AI at the edge, across a range of applications and geographies. Of course, there are different ways to implement AI, ranging from doing everything in the data center and the cloud to deploying various components at the edge.
“The question is, where is the right place to be able to deploy AI?” said Sabet. “Do we do it all in the cloud and make the devices dumb? Or a combination of less reliance on the cloud and more on the edge and things? There’s no single answer.” He explained that it really depends on the application’s needs in terms of latency, privacy, total cost of ownership, and system implementation, and on where the data gets processed.
“At Intel, we see the applications range from the very low-power to very high performance, to very customizable and to something that is easy and generic to use,” he said. It’s why, he explained, Intel has applications or processors that address each of these markets. For example, if you’re deploying AI in smart cameras, power becomes an especially important concern, whereas if your applications are in data centers, performance and latency become the predominant concerns.
However, what Sabet and his team have been grappling with is the complexity of implementation. It’s hard enough to hire talent with machine learning experience and expertise, he said, but you also need people with expertise in DSPs, GPUs, CPUs, or VPUs. And then you have to make sure you hire layers of engineers with expertise in each of the relevant areas.
Sabet, unsurprisingly, said the answer is OpenVINO, Intel’s software toolkit that allows inference at the edge and works with any of these compute architectures. “You don’t have to be expert in any of these specific products,” explained Sabet. “[The toolkit] allows optimal performance for each of these environments, as opposed to hand-crafting and getting to the low level to be able to do the programming you want. If you have end-to-end systems, you want to be able to develop solutions that could be deployed from the data center to the edge and to the device all at once. You don’t want to develop something that’s only for one of the nodes.”
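To make that idea concrete, here is a minimal sketch of the device-agnostic workflow using OpenVINO’s Python runtime API. The model path and device names are placeholders, and the API shown here postdates the 2019 toolkit Sabet was describing, so treat it as an illustration of the write-once, deploy-anywhere pattern rather than production code.

    # Minimal sketch: run the same converted model on whichever devices are available.
    # "model.xml" is a placeholder path to a network already converted to OpenVINO's IR format.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")

    # The same network can be compiled for different targets (CPU, GPU, VPU, ...)
    # without any device-specific programming -- the point Sabet makes above.
    for device in ["CPU", "GPU"]:
        if device not in core.available_devices:
            continue  # skip accelerators not present on this machine
        compiled = core.compile_model(model, device_name=device)
        request = compiled.create_infer_request()
        dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
        request.infer([dummy])
        print(device, request.get_output_tensor(0).data.shape)

The same loop could target a VPU such as a Neural Compute Stick simply by adding its device name, which is the portability Sabet is pointing to.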
Source: VentureBeat

