Talk to anyone about building a new PC, and the question of longevity pops up sooner rather than later. Anyone dropping serious cash on a hardware upgrade wants to know how long it will last, especially if they've been burned before. The usual answer is "future-proofing": buying hardware for capabilities you hope will matter down the road. But how much additional value can you actually squeeze out of the market by doing so, and does it actually benefit the end user?
Before I dive in, let me establish a few ground rules. I'm drawing a line between buying a little more hardware than you need today because you know you'll have a use for it in the future, and attempting to choose components for specific capabilities that you hope will become useful in the future. Let me give an example:
If you buy a GPU suitable for 4K gaming because you intend to upgrade your 1080p monitor to 4K within the next three months, that’s not future-proofing. If you bought a Pascal GPU over a Maxwell card in 2016 (or an AMD card over an NV GPU) specifically because you expected DirectX 12 to be the Next Big Thing and were attempting to position yourself as ideally as possible, that’s future-proofing. In the first case, you made a decision based on the already-known performance of the GPU at various resolutions and your own self-determined buying plans. In the second, you bet that an API with largely unknown performance characteristics would deliver a decisive advantage without having much evidence as to whether or not this would be true.
Note: While this article makes frequent reference to Nvidia GPUs, this is not to imply Nvidia is responsible for the failure of future-proofing as a strategy. GPUs have advanced more rapidly than CPUs over the past decade, with a much higher number of introduced features for improving graphics fidelity or game performance. Nvidia has been responsible for more of these introductions, in absolute terms, than AMD has.
Let’s whack some sacred cows:
DirectX 12
In the beginning, there were hopes that Maxwell would eventually perform well with DX12, or that Pascal would prove to use it effectively, or that games would adopt it overwhelmingly and quickly. None of these has come to pass. Pascal runs fine with DX12, but gains in the API are few and far between. AMD still sometimes picks up more than NV does, but DX12 hasn’t won wide enough adoption to change the overall landscape. If you bought into AMD hardware in 2013 because you thought the one-two punch of Mantle and console wins were going to open up an unbeatable Team Red advantage (and this line of argument was commonly expressed), it didn’t happen. If you bought Pascal because you thought it would be the architecture to show off DX12 (as opposed to Maxwell), that didn’t happen either.
Now to be fair, Nvidia’s marketing didn’t push DX12 as a reason to buy the card. In fact, Nvidia ignored inquiries about their support for async compute to the maximum extent allowable by law. But that doesn’t change the fact that DX12’s lackluster adoption to-date and limited performance uplift scenarios (low-latency APIs improve weak CPU performance more than GPUs, in many cases) aren’t a great reason to have upgraded back in 2016.
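That point about low-latency APIs helping weak CPUs more than GPUs is easy to see with back-of-the-envelope numbers. The sketch below uses made-up per-draw-call costs (not measured figures from any real API) and a deliberately simplified model in which frame time is simply the slower of CPU command submission and GPU rendering:

```python
def frame_time_ms(draw_calls, cpu_us_per_call, gpu_render_ms):
    """Simplified model: a frame takes as long as the slower of CPU
    command submission and GPU rendering (real pipelines overlap more)."""
    cpu_ms = draw_calls * cpu_us_per_call / 1000
    return max(cpu_ms, gpu_render_ms)

# Weak CPU, strong GPU: cutting per-call overhead (say, 40us -> 8us) helps a lot.
print(frame_time_ms(3000, 40, 12))  # CPU-bound: 120 ms
print(frame_time_ms(3000, 8, 12))   # now 24 ms -- a 5x improvement

# Strong CPU, GPU-bound scene: the same API change barely moves the needle.
print(frame_time_ms(800, 40, 30))   # 32 ms
print(frame_time_ms(800, 8, 30))    # still GPU-limited at 30 ms
```

The numbers are invented, but the shape of the result matches what reviewers found: a low-overhead API mostly rescues systems that were CPU-limited to begin with.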
DirectX 11
Remember when tessellation was the Next Big Thing that would transform gaming? Instead, it alternated between having a subtle impact on game visuals (with a mild performance hit) or as a way to make AMD GPUs look really bad by stuffing unnecessary tessellated detail into flat surfaces. If you bought an Nvidia GPU because you thought its enormous synthetic tessellation performance was going to yield actual performance improvements in shipping titles that hadn’t been skewed by insane triangle counts, you didn’t get what you paid for.
DirectX 10
Everybody remember how awesome DX10 performance was?
Anybody remember how awesome DX10 performance was?
If you snapped up a GeForce 8xxx GPU because you thought it was going to deliver great DX10 performance, you ended up disappointed. The only reason we can't say the same of AMD is that everyone who bought an HD 2000 series GPU ended up disappointed. The first generation of DX10-capable GPUs often proved incapable of using the API in practice; consumers who tried to future-proof by buying into a generation of very fast DX9 cards promising future compatibility instead found themselves with hardware that would never deliver acceptable frame rates in what had been a headline feature.
This list doesn’t just apply to APIs, though APIs are an easy example. If you bought into first-generation VR because you expected your hardware would carry you into a new era of amazing gaming, well, that hasn’t happened yet. By the time it does, if it does, you’ll have upgraded your VR sets and the graphics cards that power them at least once. If you grabbed a new Nvidia GPU because you thought PhysX was going to be the wave of the future for gaming experiences, sure, you got some use out of the feature — just not nearly the experience the hype train promised, way back when. I liked PhysX — still do — but it wound up being a mild improvement, not a major must-have.
This issue is not confined to GPUs. If you purchased an AMD APU because you thought HSA (Heterogeneous System Architecture) was going to introduce a new paradigm of combined CPU-GPU problem solving, five years later, you're still waiting. Capabilities like Intel's TSX (Transactional Synchronization Extensions) were billed as eventually offering performance improvements in commercial software, though this was expected to take time. Five years later, however, it's as if the feature vanished into thin air. I can find just one recent mention of TSX being used in a consumer product: it turns out TSX is incredibly useful for boosting the performance of the PS3 emulator RPCS3. Great! But not a reason for most people to buy it. Intel also added support for rasterizer ordered views years ago, but if a game ever took advantage of them, I'm not aware of it (game optimizations for Intel GPUs aren't exactly a huge topic of discussion, generally speaking).
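For the curious, the pattern TSX accelerates in hardware is optimistic execution: do the work assuming no other thread will interfere, detect a conflict at commit time, and retry or fall back to a conventional lock. This toy Python class (not real TSX, which is a set of x86 instructions, and the names here are my own) sketches the idea in software:

```python
import threading

class OptimisticCounter:
    """Illustrative only: mimics the optimistic try / detect-conflict /
    retry-or-fall-back pattern that TSX hardware implements for real."""

    def __init__(self):
        self._value = 0
        self._version = 0                    # bumped on every successful commit
        self._fallback = threading.Lock()

    def increment(self, max_retries=3):
        for _ in range(max_retries):
            seen = self._version             # "begin transaction": snapshot state
            new_value = self._value + 1      # do the work speculatively, no lock held
            with self._fallback:             # commit point: check for interference
                if self._version == seen:    # nobody committed in the meantime
                    self._value = new_value
                    self._version += 1
                    return
            # conflict detected ("transaction abort") -- loop and retry
        with self._fallback:                 # fallback path, like a TSX abort handler
            self._value += 1
            self._version += 1
```

When conflicts are rare, the speculative path succeeds on the first try; when they're common, the code degrades gracefully to plain locking, which is roughly the trade-off TSX offers.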
You might think this is an artifact of the general slowdown in new architectural improvements, but if anything the opposite is true. Back in the days when Nvidia was launching a new GPU architecture every 12 months, the chances of a just-demonstrated feature paying off before your brand-new GPU was obsolete were even worse. GPU performance often nearly doubled every year, which made buying a GPU in 2003 for a game that wouldn't ship until 2004 a really stupid move. In fact, Nvidia ran into exactly this problem with Half-Life 2. When Gabe Newell stood on stage and demonstrated HL2 back in 2003, the GeForce FX crumpled like a beer can.
Newell lied, told everyone the game would ship in the next few months, and people rushed out to buy ATI cards. Turns out the game didn't actually ship for a year, and by the time it did, Nvidia's GeForce 6xxx family offered far more competitive performance. An entire new generation of ATI cards had also shipped, with support for PCI Express. In this case, everyone who tried to future-proof got screwed.
There's one arguable exception to this trend that I'll address directly: DirectX 12 and asynchronous compute. If you bought an AMD Hawaii GPU in 2013, the advent of async compute and DX12 did deliver some performance uplift to these cards. In this case, you could argue that the relative value of the older GPUs increased as a result.
But as refutations go, this is a weak one. First, the gains were limited to only those titles that implemented both DX12 and async compute. Second, they weren't uniformly distributed across AMD's entire GPU stack, and higher-end cards tended to pick up more performance than lower-end models. Third, part of the reason this happened is that AMD's DX11 driver wasn't multi-threaded. And fourth, the modest uptick in performance that some 28nm AMD GPUs enjoyed was neither enough to move the needle on those GPUs' collective performance across the game industry nor sufficient to argue for their continued deployment relative to newer cards built on 14/16nm. (The question of how quickly a component ages relative to the market is related to, but distinct from, whether you can future-proof a system in general.)
Now, is it a great thing that AMD’s 28nm GPU customers got some love from DirectX 12 and Vulkan? Absolutely. But we can acknowledge some welcome improvements in specific titles while simultaneously recognizing the fact that only a relative handful of games have shipped with DirectX 12 or Vulkan support in the past three years. These APIs could still become the dominant method of playing games, but it won’t happen within the high-end lifespan of a 2016 GPU.
Optimizing Purchases
If you want to maximize your extracted value per dollar, don't focus on trying to predict how performance will evolve over the next 24-48 months. Instead, focus on available performance today, in shipping software. When it comes to features and capabilities, prioritize what you're using today over what you hope to use tomorrow. Software roadmaps get delayed. Features are pushed out. Because we never know how much impact a feature will have or how much it'll actually improve performance, base your buying decision solely on what you can test and evaluate at the moment. If you aren't happy with the performance you'll get from an upgrade today, don't buy the product until you are.
Second, understand how companies price and which features are the expensive ones. This obviously varies from company to company and market to market, but there's no substitute for that knowledge. In the low-end and midrange GPU space, both AMD and Nvidia tend to increase pricing linearly alongside performance. A GPU that offers 10 percent more performance is typically 10 percent more expensive. At the high end, this changes, and a 10 percent performance improvement might cost 20 percent more money. As new generations appear and the previous generation's premium performance becomes the current generation's midrange, the cost of that performance drops. The GTX 1060 and GTX 980 are an excellent example of how a midrange GPU can hit the performance target of the previous high-end card for significantly less money less than two years later.
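You can make that comparison concrete by normalizing performance against price. The figures below are hypothetical placeholders, not real GPU prices or benchmark results; substitute current numbers when you're actually shopping:

```python
# Hypothetical tiers illustrating the pricing pattern described above.
cards = {
    "midrange":  {"price": 250, "avg_fps": 60},
    "upper-mid": {"price": 275, "avg_fps": 66},   # ~10% faster, ~10% pricier
    "high-end":  {"price": 700, "avg_fps": 100},
    "flagship":  {"price": 840, "avg_fps": 110},  # ~10% faster, ~20% pricier
}

for name, c in cards.items():
    fps_per_dollar = c["avg_fps"] / c["price"]
    print(f"{name:10s} ${c['price']:4d} {c['avg_fps']:3d} fps  {fps_per_dollar:.3f} fps/$")
```

With these numbers, the midrange and upper-mid tiers deliver identical value per dollar (linear pricing), while the flagship delivers measurably worse value than the high-end card below it, which is exactly the pattern to watch for at the top of a product stack.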
Third, watch product cycles and time your purchasing accordingly. Sometimes, the newly inexpensive last generation product is the best deal in town. Sometimes, it’s worth stepping up to the newer hardware at the same or slightly higher price. Even the two-step upgrade process I explicitly declared wasn’t future-proofing can run into trouble if you don’t pay close attention to market trends. Anybody who paid $1,700 for a Core i7-6950X in February 2017 probably wasn’t thrilled when the Core i9-7900X dropped with higher performance and the same 10 cores a few months later for just $999, to say nothing of the hole Threadripper blew in Intel’s HEDT product family by offering 16 cores instead of 10 at the same price.
Finally, remember this fact: It is the literal job of a company’s marketing department to convince you that new features are both overwhelmingly awesome and incredibly important for you to own right now. In real life, these things are messier and they tend to take longer. Given the relatively slow pace of hardware replacement these days, it’s not unusual for it to take 3-5 years before new capabilities are widespread enough for developers to treat them as the default option. You can avoid that disappointment by buying the performance and features you need and can get today, not what you want and hope for tomorrow.
Update (6/4/2020): The best reason to revisit a piece like this is to check it for accuracy. After all, “future-proofing” is supposed to be a method of saving money. Nearly two years later, how does this piece hold up compared to the introductions we’ve seen since?
On the CPU front, all of the good gaming CPUs of 2018 are still solid options in 2020. This is more-or-less as expected, since CPUs age more slowly than GPUs these days. An AMD platform from 2018 has better upgrade options than its Intel equivalent, but both solutions are excellent today.
At the same time, however, chipset compatibility doesn’t extend infinitely far into the future. AMD is going to make Zen 3 CPUs available to a few X470 and B450 motherboards, but it’s an unusual state of affairs. You can plug a modern GPU into a ten-year-old motherboard, but CPU / chipset compatibility rarely aligns for more than a few years, even where AMD is concerned.
On the GPU front, Turing is a few months from its second birthday, and all of our predictions about how long it would take ray tracing to break into mainstream gaming have proven true. There’s a solid handful of RTX games on the market that show off the technology reasonably well, but Ampere-derived GPUs will be in-market before ray tracing establishes itself as a must-have feature.
This is only a problem if you buy your GPUs based on what you expect them to do in the future. If you bought an RTX 2080 or 2080 Ti because you wanted that level of performance in conventional games and viewed ray tracing as a nice addition, then you’ll replace Turing with something else and be satisfied with things. If you bought Turing believing it would take ray tracing mainstream, you’ll probably be disappointed.
Now Read:
- Don’t Buy the Ray-Traced Hype Around the Nvidia RTX 2080
- How to Buy the Right Video Card for Your Gaming PC
- What is DirectX 12?