Friday, 17 December 2021

Intel Chief Discusses Building the Metaverse’s Plumbing

Former AMD graphics chief and current Intel graphics guru Raja Koduri has written a puff piece on the Intel PR site about the company’s plans for creating the “plumbing” of the Metaverse, and it includes some interesting projections for the future. Since it’s Intel PR, the piece naturally leans in the direction of how Intel can power all kinds of amusing interactions, and has been doing so for a long time now (no argument there), but what Koduri’s article makes clear is that the Metaverse, as demonstrated by Mark Zuckerberg recently, is both a huge deal and a very, very long way from being anywhere close to reality.

To kick things off, Koduri describes the Metaverse like so: “a utopian convergence of digital experiences fueled by Moore’s Law – an aspiration to enable rich, real-time, globally-interconnected virtual- and augmented-reality environments that will enable billions of people to work, play, collaborate and socialize in entirely new ways.” Yes, that’s a lot of buzzwords packed into one sentence, but it’s essentially what we’ve all seen so far: a 3D world where our avatars interact with one another in a fanciful setting while we sit at home wearing an HMD of some sort, or possibly even augmented-reality glasses. Despite how corny the demonstrations of this technology have been thus far, Raja declares that he sees the Metaverse as the next big platform in computing, similar to how mobile and the Internet revolutionized computing in the modern age.

So what do we need to get there? First off, Raja describes what is needed to create such a world, which includes “convincing and detailed avatars with realistic clothing, hair and skin tones – all rendered in real time and based on sensor data capturing real world 3D objects, gestures, audio and much more; data transfer at super high bandwidths and extremely low latencies; and a persistent model of the environment, which may contain both real and simulated elements.” He then asks how the company can solve this problem at scale – for hundreds of millions of users simultaneously – and the only logical answer is that it can’t, at least not today. “We need several orders of magnitude more powerful computing capability, accessible at much lower latencies across a multitude of device form factors,” he writes.

Intel’s next-gen Ponte Vecchio GPUs will power its first exascale supercomputer, dubbed Aurora. (Image: Stephen Shankland/CNET)

To accomplish this, Intel has broken the problem of serving the Metaverse down into three “layers,” and says the company has been working on all three of them. They consist of intelligence, ops, and compute. The “intelligence” layer is the software and tools used by developers, which Raja says need to be open and based on a unified programming model to encourage easy deployment. The “ops” layer is essentially compute power made available to users who don’t have enough of it locally, and the “compute” layer is simply the raw horsepower needed to run everything, which is where Intel comes in with its CPUs and upcoming Arc GPUs. Remember, this is a PR piece, after all.
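To make the three-layer split a bit more concrete, here’s a minimal, purely illustrative Python sketch of how a workload might get routed between local compute and the offloaded “ops” layer. The class, the numbers, and the offload rule are our own assumptions for illustration; Intel hasn’t published anything this specific.

```python
# Illustrative sketch only -- not Intel's actual stack. Models the idea that
# the "ops" layer supplies compute to users whose devices can't keep up.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    required_tflops: float   # compute the task needs
    local_tflops: float      # compute available on the user's device

def place(workload: Workload) -> str:
    """Pick the layer that serves a workload under this toy offload rule."""
    if workload.local_tflops >= workload.required_tflops:
        return "compute layer (run locally)"
    return "ops layer (offload to remote compute)"

avatar = Workload("photoreal avatar render", required_tflops=40.0, local_tflops=8.0)
print(place(avatar))   # -> "ops layer (offload to remote compute)"
```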

Here’s where it gets interesting. Raja says that to power all this, we are going to need 1,000 times the compute power provided by today’s state-of-the-art technology, and he notes Intel has a roadmap to achieve zettascale power by the year 2025. That is quite an audacious claim, as Intel will first achieve exascale in 2022 when it delivers its Aurora supercomputer to the Department of Energy. On a similar note, the United States’ first exascale computer, Frontier, is reportedly being installed at Oak Ridge National Laboratory right now, but won’t be operational until sometime next year. If we have only just arrived at exascale, how long will it take us to get to zettascale? According to Wikipedia, it took 12 years to advance from terascale to petascale, and 14 years to go from that to exascale, so it’s reasonable to think it could be another ten years, at least, until we reach zettascale.
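For a sense of how that timeline pencils out, here’s a quick back-of-the-envelope extrapolation in Python of the 1,000x milestones mentioned above. The milestone years are approximate public dates and are assumptions for illustration, not Intel’s roadmap.

```python
# Naive extrapolation of 1,000x supercomputing milestones (years approximate).
milestones = {
    "terascale (1e12 FLOPS)": 1996,   # ASCI Red era
    "petascale (1e15 FLOPS)": 2008,   # Roadrunner era
    "exascale  (1e18 FLOPS)": 2022,   # Frontier/Aurora era
}

years = list(milestones.values())
gaps = [b - a for a, b in zip(years, years[1:])]   # [12, 14]
avg_gap = sum(gaps) / len(gaps)                    # ~13 years per 1,000x jump

print(f"Historical gaps between 1,000x jumps: {gaps} years")
print(f"Naive zettascale estimate: ~{years[-1] + avg_gap:.0f}")   # ~2035
```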

Koduri concludes his piece with a final goal, which he says is attainable: “We believe that the dream of providing a petaflop of compute power and a petabyte of data within a millisecond of every human on the planet is within our reach.” For some context, in 2013 we noted people were skeptical we could achieve exascale by 2020, but we did it, albeit a year later. Note, however, that the power conundrum has never been solved.
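The “within a millisecond” part is arguably the hardest piece, because physics gets a vote before Intel does. Here’s a rough speed-of-light check, assuming (our assumption, for illustration) that signals travel through optical fiber at roughly two-thirds the speed of light:

```python
# Rough check on the "within a millisecond" goal: how far away can the
# compute physically sit? Fiber propagation speed is an assumed ~200 km/ms.
fiber_speed_km_per_ms = 200      # ~2e5 km/s, about two-thirds of c
round_trip_budget_ms = 1.0

max_distance_km = fiber_speed_km_per_ms * round_trip_budget_ms / 2
print(f"Compute must sit within ~{max_distance_km:.0f} km of the user,")
print("before counting any switching, queueing, or rendering time.")
```

In other words, even before the compute problem is solved, that millisecond budget implies data centers scattered within roughly 100 km of everyone on the planet.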

Exascale brain modeling

Earlier this year, when China claimed to have built two exascale systems in secret, it was reported that power consumption was ~35 MW per system. That’s substantially higher than the 20 MW goal that was initially set. We can expect these figures to improve over time as machines become more efficient, but any push towards zettascale is going to require a fundamental rethink of power consumption and scaling.
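A quick calculation shows why. If you take the reported ~35 MW per exaflop at face value and simply scale it up 1,000x at today’s efficiency, the number that falls out is absurd:

```python
# Scale today's reported exascale efficiency up 1,000x to zettascale.
# Uses the ~35 MW per exascale system figure cited above.
exaflops = 1e18                                # FLOPS
reported_power_w = 35e6                        # 35 MW
efficiency = exaflops / reported_power_w       # ~28.6 GFLOPS per watt

zettaflops = 1e21
zettascale_power_w = zettaflops / efficiency   # same efficiency, 1,000x the work

print(f"Efficiency today: ~{efficiency / 1e9:.1f} GFLOPS/W")
print(f"Zettascale at that efficiency: ~{zettascale_power_w / 1e9:.0f} GW")
```

Roughly 35 gigawatts is the output of dozens of large power plants, which is why efficiency, not just raw transistor count, is the real obstacle here.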
