About two years ago, Nvidia launched Turing. I liked the architecture well enough, but I thought Nvidia had priced Turing far too high, and said so. How do things look two years later, with Ampere announced and arriving in a matter of weeks? Pretty dang good.
My only real problem with Turing was its launch price. Once Nvidia adjusted its pricing in 2019 to answer AMD's new Radeon RX 5700 and RX 5700 XT, the family of cards was a much better deal. With Turing, Nvidia had the difficult task of selling a GPU priced on what it might do in the future. With Ampere, it's apparently delivering that future at an excellent price.
I don’t want to come across as if I’m pre-judging the card; I won’t give any buying recommendations, pro or con, until we’ve seen full performance data. Early performance numbers circulating online suggest the RTX 3080 leaves the RTX 2080 in the dust, with improvements of 1.7x to 1.9x at 4K with all detail levels maxed out.
It’ll be important to see how Ampere behaves at lower resolutions so we can characterize its performance, but these early data points back up what the specs already imply: Barring some unknown problem, this GPU is going to shine.
AI-Boosted Visuals Are the Future
When DLSS debuted, it often boosted performance at the cost of a marked visual downgrade. DLSS 2.0 flipped that equation: the mode now delivers improved image quality, sometimes while boosting performance at the same time. Seven years ago, I had an opportunity to interview Layla Mah, then the Lead Architect for VR and Advanced Rendering at AMD. One of the topics she discussed at some length was how we could keep ramping up effective detail in gaming as Moore’s Law slowed down.
In the long run, it may be more power-efficient to render a 1080p image and upscale it to 4K in hardware than to render a 4K image natively. The power required to render a frame scales roughly with the number of pixels in it. If an onboard neural network can perform the 1080p-to-4K upscale at 95 percent of native 4K detail for 50 percent of native 4K energy consumption, that’s a huge gain. Devices today are almost always thermally or power-limited, and shunting resolution upscaling into more efficient dedicated units would free up room on the GPU for other functional units. That matters because Ampere also dedicates more area to ray tracing and to improving on Turing’s performance.
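To put rough numbers on that argument, here’s a back-of-the-envelope sketch in Python. The 95 percent detail and 50 percent energy figures are the hypothetical ones from the paragraph above, not measurements:

```python
# Back-of-the-envelope math for the render-low, upscale-high argument.
# The cost figures are illustrative assumptions, not measured values.

PIXELS_4K = 3840 * 2160      # ~8.3M pixels per frame
PIXELS_1080P = 1920 * 1080   # ~2.1M pixels per frame

# If render energy scales roughly with pixel count, rasterizing a
# 1080p frame costs about a quarter of a native 4K frame.
raster_fraction = PIXELS_1080P / PIXELS_4K           # 0.25

# Hypothetical total budget from the text: render + AI upscale
# together consume 50 percent of a native 4K render's energy.
total_fraction = 0.50
upscale_fraction = total_fraction - raster_fraction  # what the NN gets

print(f"1080p raster:      {raster_fraction:.0%} of native 4K energy")
print(f"AI upscale budget: {upscale_fraction:.0%}")
print(f"Energy freed up:   {1 - total_fraction:.0%}")
```

The interesting number is the last one: under those assumptions, half the power budget of a native 4K frame becomes headroom that the designer can spend on ray tracing or other units instead.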
The images below show the results of my own upscaling work on Deep Space Nine as of this week. This is a show that shipped on 480p DVD and streams from Netflix and Amazon Prime looking like something a cat barfed into a VCR. Nine months of pre-processing work and the magic of AI upscaling, and what you get looks like this:
Screenshot from “The Way of the Warrior,” remastered by the author. More on this to come, very soon.
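For anyone curious what this kind of workflow looks like in broad strokes, here’s a minimal sketch. It assumes ffmpeg is installed; `upscale_frame()` and the file names are hypothetical stand-ins for whatever AI model and source material you actually use, and the pre-processing that dominates a real project is reduced to a comment:

```python
# A minimal sketch of a frame-based AI upscaling pipeline.
# Assumes ffmpeg is on the PATH. upscale_frame() is a hypothetical
# placeholder; the heavy pre-processing a real project needs
# (deinterlacing, compression-artifact cleanup) is omitted.
import subprocess
from pathlib import Path

def extract_frames(video: str, frames_dir: str) -> None:
    """Dump every frame of the source video to numbered PNGs."""
    Path(frames_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(["ffmpeg", "-i", video, f"{frames_dir}/%06d.png"],
                   check=True)

def upscale_frame(src: Path, dst: Path) -> None:
    """Hypothetical placeholder: run the AI model on one frame."""
    raise NotImplementedError("plug in your upscaler of choice here")

def reassemble(frames_dir: str, source: str, output: str,
               fps: str = "30000/1001") -> None:
    """Re-encode the upscaled frames, muxing audio from the source.
    30000/1001 (~29.97) matches NTSC DVD material; adjust to fit."""
    subprocess.run(
        ["ffmpeg", "-framerate", fps, "-i", f"{frames_dir}/%06d.png",
         "-i", source, "-map", "0:v", "-map", "1:a",
         "-c:v", "libx264", "-crf", "18", output],
        check=True)

if __name__ == "__main__":
    extract_frames("episode.vob", "frames")
    Path("frames_4k").mkdir(exist_ok=True)
    for png in sorted(Path("frames").glob("*.png")):
        upscale_frame(png, Path("frames_4k") / png.name)
    reassemble("frames_4k", "episode.vob", "episode_upscaled.mkv")
```

The structure is the point here, not the specifics: frames go out, get cleaned up and enhanced one at a time, and come back in with the original audio intact.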
When Nvidia debuted Turing, it wasn’t clear to me how the company could simultaneously push rasterization forward, bring ray tracing performance up, and build advanced AI capabilities into the core. Moore’s Law isn’t what it used to be. In the future, I think we’ll see AI as the sauce that makes the other two complementary rather than competitive, at least from a hardware design perspective.
The advent of the Xbox Series X and PlayStation 5 is bringing ray tracing to the console market, and RDNA2 GPUs will do the same for AMD later this year, meaning we can expect to see more ray-traced games in the future. I don’t know that the feature will sweep the space, but it’s going to be added to a lot of titles. If you care about ray-tracing performance going forward, I’d definitely hold off on any card purchases for the next few weeks. I’m not giving a verdict until I have a card in hand, but I like what we saw from Ampere today.
Now Read:
- Nvidia Announces RTX 3070, 3080, 3090 GPUs: Ampere Crushes Turing
- Nvidia Ampere Will Use 12-Pin PCIe Power Connector
- Nvidia’s Datacenter Revenue Has Surpassed Gaming for the First Time