
Arm’s Neoverse will span everything from tiny devices to high-end server chips

We’re getting a lot of “verse” words from technology companies these days, many of them descended from the Metaverse, the universe of virtual worlds imagined in Neal Stephenson’s Snow Crash novel from 1992.
The most recent iteration is Arm’s Neoverse, a cloud-to-edge infrastructure that the chip design company hopes will support a world with a trillion intelligent devices.
The Neoverse is basically Arm’s ecosystem for supporting the chip design and manufacturing firms that will produce those devices, based on the Arm architecture. But it’s also a market-based approach to supporting customers in different segments, like automotive, machine learning, or the internet of things (IoT).
Arm is also stepping up with a more aggressive roadmap for processors that take advantage of the most advanced manufacturing processes available. That means the company is targeting everything from low-power embedded processors to high-end chips for servers.
I spoke with Arm's Rene Haas about Neoverse and other topics at the Arm TechCon event last week in San Jose, California.
Here’s an edited transcript of our interview.

VentureBeat: Is there a particular interesting theme here for you?
Rene Haas: It’s the sum of the parts, what all the guys are talking about. I’m curious what you thought about the Neoverse piece. That’s a major point of emphasis for us in terms of investing in the infrastructure and everything involved with that, but at the same time, as we’ve gone with this more market-based approach, thinking about growing each of the businesses around specific markets, you’ll see similar things around automotive. You’ll see more around the ML space and so on. It’s the continued culmination of the strategy we’ve been building up over the last couple of years.
The infrastructure stuff is interesting, because–a lot of people have questioned this, talking about market share in servers and what that means. For us, we view a lot of the investment we’re making in the infrastructure–it will have a lot of potential carryover into autonomous driving, for example. The same high-end compute platform we’re doing for Zeus and Poseidon I think will transfer well toward autonomous driving and things of that nature. We’re seeing a reaffirmation that the strategy seems to be working.
VentureBeat: Putting it all together like that, what does it achieve for the customers of Arm? They can get a full spectrum of choices?
Haas: Exactly, on a few fronts. One is, there’s a broader choice, whether it’s machine learning IP or GPU IP or CPU IP. At the same time, it’s also having the schedules of all the products lining up appropriately to hit certain market windows. It sounds kind of obvious from the outside. Wouldn’t you have all your products aligning to the same cadence to go off and hit a sample time? But not always. We were finding out that partners, potentially, were designing a next-generation SoC with this year’s CPU, but next year’s GPU, and some interim system IP.
It’s really all about–we want to enable our partners to build better SoCs and build better phones, or build a better laptop, or build something that’s better tuned for the end market. It’s a combination of choice, but also, it lets us look at each of these markets and make sure we’re investing in the right level of performance that’s going to move the needle on the system. That’s a big piece of it.
Machine learning is a pervasive underlying technology that applies everywhere. It’s not just the accelerator. Part of it is doing a dedicated hardware accelerator, but the other is adding ML extensions to the GPU and CPU, and then having the whole environment, whether it’s through Arm or through the compute libraries, that pulls it all together. Automotive is another area. You need things like split-lock. You need things like functional safety. There are all these special attributes required, and we weren’t doing as consistent a job of supporting them across all our products.

Above: Arm’s roadmap. Image Credit: Arm

VentureBeat: This added element of high-end cores coming out on a more regular schedule, at certain performance targets and manufacturing nodes–that’s clearer communication than you’ve given before.
Haas: It’s a combination of clearer communication of intent, plus clearer communication that, behind the scenes, we’ve always been working pretty closely with the foundry guys. But now we’re being up front. When you see us talking about certain technologies tied to a certain node, we’re working closely with fab partners to achieve that. Given all the investments that Samsung and TSMC are making in advanced node technology, it’s pretty key.
VentureBeat: I did wonder, with the likes of Intel and GlobalFoundries slowing down and dropping off the pace of Moore’s Law, whether you’re able to do this with confidence, given that it is possible that TSMC and Samsung could be affected by the same things.
Haas: We continue to see a pretty heavy capital investment from TSMC. Drew’s slide today talked about the number of wafers that are driven by the Arm ecosystem compared to x86. Most important there is that it’s all on the leading technology nodes. We’re seeing a lot of the partnerships driving the advanced technology. I think we’ve hit a point where the external guys — the people who run fabs for a living — are setting the cadence, as opposed to people who have integrated factories, like Intel. That’s good for us.
Five years ago, we were talking about servers. But five years ago we didn’t have much of a 64-bit story. Most of the products were 32-bit. We didn’t have much work done in terms of software ecosystem. A lot of that stuff is now behind us. We’re now moving to the next wave, where the software ecosystem is getting mature. We have very competitive 64-bit products. Now, process-wise–five years ago you’d argue that the blue company was the world leader by a good margin. Now it’s moved around a bit. That’s why we think the opportunity space is pretty profound. That’s why you saw us talking about Neoverse the way we did today.
VentureBeat: I wonder, though, if there’s some uncertainty to the schedule, because the arrival of the nodes is not as clockwork as you might hope.
Haas: I don’t know. Again, the demand for cloud infrastructure product is massive. It’s just massive. The stuff that we help with is–you’re going into these rack systems that have a fixed footprint in terms of power. It’s all about maximizing performance in that power envelope. Process really helps you. I’m not seeing that. Sure, there’s always risk, no doubt about it. But there’s less risk around whether the fab guys are investing to make it happen. There’s obviously execution risk, because there always is on new stuff.

Above: Arm expects to manage a trillion devices in the Neoverse. Image Credit: Arm

VentureBeat: When Facebook announced the new Oculus Quest, their wireless stand-alone VR headset, they said it would launch in the spring. Some people thought they would have the 845 processor in it, from Qualcomm, and instead it had an 835. I don’t know whether that speaks to some of what you just talked about, or whether people might have an unrealistic understanding of what you can cram into a certain device on a certain timetable. But it’s odd to see these situations where last year’s chip shows up in next year’s product.
Haas: It’s usually OEM-specific, relative to their development cycles, their qualification cycles, and what’s available at a certain time. Some guys are just more aggressive and move faster. The Oculus guys operate at their own cadence. It’s probably as much a function of those kinds of constraints as anything else. For example, on the laptop side, the early Windows on Arm laptops were 835, but now there’s a wave of 850 products that have come out. If folks can shrink their development cycle, that’s really what drives it.
The new Arm laptops are pretty amazing. Having lived through my Nvidia days with Windows RT, it’s night and day. I’ve not found anything it doesn’t run. You’re running full Office, full PowerPoint. You never have a situation where you download a file and the fonts don’t translate. Everything looks and feels right. And the battery life is crazy, 20 hours and more. There’s no fan.
I’m running an 835 with 4GB of RAM, and then I also have a Core i7 ThinkPad with 32GB. The Core i7 is faster, no question, and its battery life is three hours. The 835 gets more than 20. The other thing that’s great is it has a built-in LTE modem. You’re always connected. I honestly use that one more often.
VentureBeat: As far as IP goes, it seems like the notion of Arm taking over the world is a little more realistic now. It doesn’t seem like you guys have any big worries right now. Would you disagree? Do you still have some challenges?
Haas: As soon as I said we had no worries–there are always worries, right? We really want to grow the business in the infrastructure. We think we have a huge opportunity in automotive. Those are two big areas. Embedded, for us–the thing that will limit us in embedded is that we have to solve the security issue. It’s around making sure the platforms adhere to a security standard, with things like PSA (Platform Security Architecture) being adopted across the board.
The reason I say that is, the adoption of devices being connected and put on the network is just a function of whether they can be secured, more than anything else. So it’s security around the embedded side, and continuing to invest in the roadmap on the high end.
Source: VentureBeat
