
The five worst Nvidia GPUs of all time: Infamous and unenviable graphics cards

Though Nvidia and AMD (and ATI before 2006) have long battled for supremacy in the world of graphics, it’s pretty clear that the green giant has generally held the upper hand. But despite Nvidia’s track record and ability to rake in cash, the company has put out plenty of bad GPUs of its own; AMD isn’t alone in that regard (see the five worst AMD GPUs, plus the five best AMD GPUs, since we’re covering both the good and the bad). Sometimes Nvidia rests on its laurels a little too long, badly misreads the room, or simply screws up in spectacular fashion.

We’ve ranked what we think are Nvidia’s five worst GPUs in the 24 years since the beginning of the GeForce brand in 1999, which helped usher in modern graphics cards. We’re working from the best of the worst down to the worst of the worst. These choices are based both on individual cards and whole product lines, which means a single bad GPU in a series of good GPUs won’t necessarily make the cut.

5 — GeForce RTX 3080: The GPU generation you couldn’t buy

Nvidia's 456.55 Drivers Tested

(Image credit: Tom’s Hardware)

Nvidia achieved a comfortable lead over AMD with its GTX 10-series in 2016, and going into 2020 it was still able to maintain that lead. However, AMD was expected to launch a GPU truly capable of going toe-to-toe with Nvidia’s best, which meant Nvidia’s upcoming generation would need a real performance jump. The situation was complicated by the fact that the RTX 20-series had neglected traditional rasterization performance in favor of then-niche ray tracing and AI-powered resolution upscaling. For the first time in a long time, Nvidia was caught with its pants down.

What Nvidia came up with for the RTX 30-series was the Ampere architecture, which addressed many of the issues with the Turing-based RTX 20-series chips. Nvidia moved from TSMC’s 12nm process (essentially its 16nm node, but able to make big chips) to Samsung’s 8nm, which meant increased density for additional cores and better efficiency. Ampere also turned Turing’s dedicated integer execution units into combined floating-point plus integer units, meaning it could trade integer throughput for additional floating-point compute depending on the workload. Nvidia advertised this change as a massive increase in core count and TFLOPS, which was a bit misleading.
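To see why the marketing was misleading, remember that the TFLOPS figure on a spec sheet is just shader count times boost clock times two operations per clock (a fused multiply-add). Here's a minimal sketch of that math, using Nvidia's published RTX 2080 Super and RTX 3080 specs (figures not from this article, so treat them as illustrative):

```python
# Theoretical FP32 throughput, as quoted on GPU spec sheets:
# TFLOPS = shader cores * boost clock (GHz) * 2 ops per clock (fused multiply-add)
def fp32_tflops(cores: int, boost_ghz: float) -> float:
    return cores * boost_ghz * 2 / 1000

# Published specs (not from the article): shader count, boost clock in GHz
print(f"RTX 2080 Super: {fp32_tflops(3072, 1.815):.1f} TFLOPS")  # ~11.2 TFLOPS
print(f"RTX 3080:       {fp32_tflops(8704, 1.710):.1f} TFLOPS")  # ~29.8 TFLOPS
```

The paper TFLOPS nearly tripled generation over generation, but real game performance didn't, because half of those "doubled" FP32 units are the same ones that still have to handle integer work.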

Launching in September of 2020 with the RTX 3080 and 3090, Nvidia preempted AMD’s RDNA 2-powered RX 6000-series. The $699 3080 was roughly on par with the $1,499 3090, and it was easily faster than the RTX 2080 Ti in both ray-traced and non-ray-traced games. AMD’s $649 RX 6800 XT and $579 RX 6800 showed up in November, with the RX 6900 XT in December, and while they roughly tied the 3080 and 3090 in non-ray-traced scenarios, they lagged behind with ray tracing enabled and lacked a resolution upscaler like DLSS.

Nvidia GeForce RTX 3080 Founders Edition product images

(Image credit: Tom’s Hardware)

We originally rated the 3080 at 4.5 out of 5 stars, so you might be wondering how in the world it could have ended up on this list. Well, it boils down to everything that happened after launch day. The 3080’s most immediate problem was that you couldn’t realistically buy it at MSRP. A big part of that was the infamous GPU shortage induced by the Covid pandemic and then massively exacerbated by cryptocurrency miners, which pushed prices well north of $1,500. But even after the shortage technically ended in early 2022, almost no 3080 10GB cards sold for $699 or less.

Meanwhile, high-end RX 6000 cards like the 6800 XT came out of the GPU shortage with sub-MSRP pricing. The 6800 XT has sold for less than its $649 MSRP since last year, and today it costs less than $500 brand-new. AMD also now has FSR 2 and FSR 3, which don’t quite match DLSS in image quality but are viable upscaling and frame generation alternatives. RTX 30 cards don’t even get DLSS 3 frame generation and have to rely on FSR 3 frame generation instead.

Then there are the wider problems with the RTX 30-series. Pretty much every 30-series card has much less memory than it should, the exceptions being the 3060 12GB and the 3090 with its 24GB of VRAM. On launch day things were fine, but more and more evidence keeps cropping up that most RTX 30-series cards just don’t have enough memory, especially in the latest games. There’s also the RTX 3050, the lowest-end member of the family, with an MSRP of $249 and a real-world price of $300 or more for most of its life. Compare that to the $109 GTX 1050: In just two generations, Nvidia raised the minimum price of its GPUs by $140.

Even without the GPU shortage, it seems the RTX 30-series was hobbled from the start by its limited memory, with its advantages in ray tracing and resolution upscaling only going so far. The extreme retail prices were enough to make the RTX 3080 and most of its siblings some of Nvidia’s worst cards, and they directly led to Nvidia raising prices on its next-generation parts.

4 — GeForce RTX 4060 Ti: One step forward, one step back

Nvidia GeForce RTX 4060 Ti

(Image credit: Nvidia)

It might be premature to call a GPU that came out half a year ago one of the worst ever, but sometimes it’s that obvious. Not only is the RTX 4060 Ti a pretty bad graphics card in its own right, it’s also something of a microcosm of the entire RTX 40-series and all of its problems.

Nvidia’s previous RTX 30 GPUs had some real competition in the form of AMD’s RX 6000-series, especially in the midrange. Though the RTX 3060 12GB was pretty good thanks to its increased VRAM capacity, the higher-end RTX 3060 Ti and 3070 came with just 8GB and didn’t have particularly compelling price tags, even after the GPU shortage ended. An (upper) midrange RTX 40-series GPU could have set things right.

The RTX 4060 Ti does have some redeeming qualities: It’s significantly more power efficient than the 30-series cards, and it uses the impressively small AD106 die, which is nearly three times as dense as the GA104 die inside the RTX 3070 and RTX 3060 Ti. But there’s a laundry list of things wrong with the 4060 Ti, and most of it has to do with memory.
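For the curious, transistor density is just transistor count divided by die area. Here's a quick check using the approximate published figures for the two dies, roughly 22.9 billion transistors in 188 mm² for AD106 versus 17.4 billion in 392.5 mm² for GA104 (numbers not from this article, so treat them as ballpark):

```python
# Transistor density = transistor count / die area
# Approximate published figures; treat these as ballpark values.
def density_mtr_per_mm2(transistors_billions: float, die_area_mm2: float) -> float:
    return transistors_billions * 1000 / die_area_mm2

ad106 = density_mtr_per_mm2(22.9, 188.0)    # RTX 4060 Ti
ga104 = density_mtr_per_mm2(17.4, 392.5)    # RTX 3070 / 3060 Ti
print(f"AD106: {ad106:.0f} MTr/mm^2, GA104: {ga104:.0f} MTr/mm^2, ratio: {ad106 / ga104:.1f}x")
```

That works out to roughly a 2.7x density advantage for the newer node, which goes a long way toward explaining the 4060 Ti's efficiency advantage over its predecessors.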

There are two models of the 4060 Ti: one with 8GB of VRAM and the other with 16GB. That the 8GB model even exists is problematic, because it’s so close in performance to the 3070, which already has memory capacity issues. So obviously you should get the 16GB model just to make sure you’re future-proofed, except the 16GB model costs a whopping $100 extra. With the 4060 Ti, Nvidia is basically telling you that you need to spend $499 instead of $399 to make sure your card actually performs properly 100% of the time.

Nvidia GeForce RTX 4060 Ti Founders Edition photos and unboxing

(Image credit: Tom’s Hardware)

The 4060 Ti also cuts corners in other areas, particularly memory bandwidth. Nvidia gave this GPU a puny 128-bit memory bus, half the width of the 256-bit bus on the 3070 and 3060 Ti, and that results in a meager 288GB/s of bandwidth on the 4060 Ti, versus 448GB/s for the previous generation. Additionally, the 4060 Ti has eight PCIe lanes instead of 16, which shouldn’t matter too much, but it’s yet more penny-pinching. A larger L2 cache helps with the bandwidth issue, but it only goes so far.
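Those bandwidth figures fall straight out of the memory specs: peak bandwidth is the per-pin data rate times the bus width in bytes. A small sketch using the cards' published GDDR6 speeds, 18 Gbps for the 4060 Ti and 14 Gbps for the 3070 and 3060 Ti (speeds not quoted in this article):

```python
# Peak memory bandwidth = data rate per pin (Gbps) * bus width (bits) / 8 bits per byte
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# Published memory speeds (not in the article): 14 Gbps vs. 18 Gbps GDDR6
print(f"RTX 3070 / 3060 Ti: {peak_bandwidth_gbs(14, 256):.0f} GB/s")  # 448 GB/s
print(f"RTX 4060 Ti:        {peak_bandwidth_gbs(18, 128):.0f} GB/s")  # 288 GB/s
```

Even with faster 18 Gbps memory, halving the bus width leaves the 4060 Ti with roughly 36% less raw bandwidth than the cards it replaces, hence Nvidia's reliance on the much larger L2 cache to paper over the deficit.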

In the end, the RTX 4060 Ti is basically just an RTX 3070 with barely any additional performance, decently better efficiency, and support for DLSS 3 Frame Generation. And sure, the 8GB model costs $399 instead of $499 (now $449-ish), but it’s also vulnerable to the same issues that the 3070 can experience. That means you should probably just buy the 16GB model, which is priced the same as the outgoing 3070. Either way, you’re forced to compromise.

The RTX 4060 Ti isn’t uniquely bad; it echoes what we’ve seen from the RTX 40-series as a whole. The RTX 40-series is Nvidia’s most expensive GPU lineup to date by far, ranging from the RTX 4060 starting at $299 to the RTX 4090 starting at $1,599 (and often costing much more). The 4060 Ti isn’t the only 40-series card with a controversial memory setup either, as Nvidia initially planned to launch the RTX 4070 Ti as the RTX 4080 12GB. Even with the name change and a $100 price cut, that graphics card warrants a mention on this list. But frankly, nearly every GPU in the RTX 40-series (other than the halo RTX 4090) feels overpriced and equipped with insufficient memory for 2022 and later.

Let’s be clear. What we wanted from the RTX 40-series, and what Nvidia should have provided, was a 64-bit wider memory interface and 4GB more memory at every tier, with the exception of the RTX 4090 and perhaps an RTX 4050 8GB. Most of our complaints about the 4060 and 4060 Ti would vanish if they were 12GB cards with a 192-bit bus. The same goes for the 4070 and 4070 Ti: 16GB on a 256-bit bus would have made them much better GPUs. And the RTX 4080 and (presumably) upcoming RTX 4080 Super should be 320-bit cards with 20GB. The higher memory configurations could even have justified the price increases to an extent, but instead Nvidia raised prices and cut the memory interface width. Which reminds us, in all the wrong ways, of the next entry on this list…

3 — GeForce RTX 2080 and Turing: New features we didn’t ask for and higher prices

Nvidia GeForce RTX 2080 Ti

(Image credit: Nvidia)

Nvidia’s dominance in gaming graphics was at its peak after the launch of the GTX 10-series. Across the product stack, AMD could barely compete at all, especially in laptops against Nvidia’s super-efficient GPUs. And historically, when Nvidia has had a big lead, it has often followed up with new GPUs that offered real upgrades over the existing cards. Prior to the launch of the Turing architecture, the community was excited to see what Jensen Huang and his team were cooking up.

In the summer of 2018, Nvidia finally revealed its 20-series GPUs: not the GTX 20-series, but the RTX 20-series, with the name change signifying that Nvidia had introduced the world’s first real-time ray tracing graphics cards. Ray tracing was and still is an impressive technological achievement, long described as the “holy grail” of graphics, and now it had made its way into gaming GPUs, promising to bring about a revolution in fidelity and realism.

But then people started reading the fine print. And let’s be clear: More than any other GPUs on this list, this is really one where the entirety of the RTX 20-series family gets the blame.

First, there were the prices: These graphics cards were incredibly expensive (worse in many ways than the current 40-series pricing). The RTX 2080 Ti debuted at $1,199? No top-end Nvidia gaming graphics card using a single graphics chip had ever gotten into four-digit territory before, unless you want to count the Titan Xp (we don’t). The RTX 2080 was somehow more expensive than the outgoing GTX 1080 Ti. Nvidia also released the Founders Edition versions of its cards a month before the third-party models, with a $50 to $200 price premium, one that in some cases never really went away.

Also, ray tracing was cool and all, but literally no games supported it when the RTX 20-series released. It wasn’t like all the game devs could suddenly switch over from traditional methods to ray tracing, ignoring 95% of the gaming hardware market. Were these GPUs really worth buying, or should people just skip the 20-series altogether?

GeForce RTX 2080 Super

(Image credit: Teclab/YouTube)

Then the reviews came in. The RTX 2080 Ti, by virtue of being a massive GPU, was much faster than the GTX 1080 Ti. However, the RTX 2080 could only match the 1080 Ti at nominally the same price, and it had 3GB less memory. The RTX 2070 and RTX 2060 fared a bit better, but they didn’t offer mind-boggling improvements like the previous GTX 1070 and GTX 1060 did, and they again raised generational pricing by $100. The single game in 2018 that offered ray tracing, Battlefield V, ran very poorly even on the RTX 2080 Ti: It only managed an average of 100 FPS in our 1080p testing. To be clear, the 2080 Ti was advertised as a 4K gaming GPU.

But hey, building an ecosystem of ray-traced games takes time, and Nvidia had announced 10 additional titles that would get ray tracing. One of those games got cancelled, and seven of the remaining nine eventually added ray tracing. However, one of them only added ray tracing for its Xbox Series X/S version, and another two were Asian MMORPGs. Even Nvidia’s ray tracing darling Atomic Heart skipped ray tracing when it finally came out in early 2023, five years after the initial RT tech demo video.

The whole RTX branding thing was also thrown into even more confusion when Nvidia later launched the GTX 16-series, starting with the GTX 1660 Ti, using the same Turing architecture but without ray tracing or tensor cores. If RT was the future and everyone was supposed to jump on that bandwagon, why was Nvidia backpedaling? We can only imagine what Nvidia could have done with a hypothetical GTX 1680 / 2020: A smaller die size, all the rasterization performance, none of the extra cruft.

To give Nvidia some credit, it did at least try to address the 20-series’ pricing issue by launching ‘Super’ models of the 2080, 2070, and 2060. Those all offered better bang for the buck (though the 2080 Super was mostly pointless). However, this also indicated that the original RTX 20-series GPUs weren’t expensive because of production costs; after all, they used relatively cheap GDDR6 memory and were fabbed on TSMC’s 12nm node, which was basically 16nm but able to make a chip as big as the 2080 Ti’s. AMD, meanwhile, shifted to TSMC 7nm for 2019’s RDNA-based RX 5000-series. The RTX 20-series was probably expensive simply because Nvidia believed the cards would sell at those prices.

In hindsight, the real “holy grail” of the RTX 20-series was Deep Learning Super Sampling, or DLSS, which dovetails with today’s AI explosion. Although DLSS got off to a rocky start (just like ray tracing, with lots of broken promises and middling quality), today the technology is present in roughly 500 games and works amazingly well. Nvidia should have emphasized DLSS far more than ray tracing. Even today, only 37% of RTX 20-series users enable ray tracing, while 68% use DLSS. It really should have been DTX instead of RTX.

2 — GeForce GTX 480: Fermi round one failed to deliver

Nvidia GeForce GTX 480 Core 512

(Image credit: Bilibili)

Although Nvidia had grown used to being in first place, in 2009 the company lost the crown to ATI, by then part of AMD. The HD 5000-series came out before Nvidia’s own next-generation GPUs were ready, and the HD 5870 was the first winning ATI flagship since the Radeon 9700 Pro. Nvidia wasn’t far behind, though, with its brand-new graphics cards slated for early 2010.

It’s important to note how differently Nvidia and AMD/ATI approached GPU design at the time. Nvidia was all about massive dies on older nodes that were cheap and yielded well, while ATI had shifted to smaller dies on more advanced processes with the HD 3000-series and kept that strategy through the HD 5000-series. Nvidia’s upcoming GPUs, codenamed Fermi, would be just as big as the prior generation’s, though this time Nvidia would be on equal footing in process technology, using TSMC’s 40nm node just like the HD 5000 lineup.

Launching in March 2010, the flagship GTX 480 was a winner on performance: In our review, it beat the HD 5870 pretty handily. The top-end HD 5970 still stood as the fastest graphics card overall, but it used two graphics chips in CrossFire, which wasn’t always consistent. The GTX 480 cost $499, though, much higher than the roughly $400 HD 5870. Nvidia’s flagships had always commanded a premium, so that wasn’t out of the ordinary… except for one thing: power.

Nvidia’s flagships were never particularly efficient, but the GTX 480 was ludicrous. Its power consumption was roughly 300 watts, equal to the dual-GPU HD 5970. The HD 5870, in contrast, consumed just 200 watts, meaning the 480 drew 50% more power for maybe 10% to 20% more performance. Keep in mind, this was back when the previous record was the GTX 280’s 236 watts, and even that was considered high for a flagship.
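To put those numbers in perspective, here's the back-of-the-envelope perf-per-watt math implied by the figures above (using 15%, the midpoint of the quoted 10% to 20% range, as the performance uplift):

```python
# Rough perf-per-watt comparison from the figures quoted above
hd5870_power_w, gtx480_power_w = 200, 300   # typical board power, watts
perf_uplift = 0.15                          # midpoint of the 10%-20% range in the article

relative_perf_per_watt = (1 + perf_uplift) / (gtx480_power_w / hd5870_power_w)
print(f"GTX 480 perf/watt vs. HD 5870: {relative_perf_per_watt:.0%}")  # ~77%
```

In other words, the GTX 480 delivered only about three-quarters of the HD 5870's performance per watt, a clear generational regression in efficiency.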

Of course, power turns into heat, and the GTX 480 ran extremely hot. Fittingly, the reference cooler looked a bit like a grill, and the 480 was derided in memes about “the way it’s meant to be grilled,” satirizing Nvidia’s “The Way It’s Meant To Be Played” game sponsorships, along with videos of people cooking eggs on a GTX 480. It was not a great time to be an Nvidia marketer.

In the end, the issues plaguing the GTX 400-series proved so severe that Nvidia replaced it half a year later with a reworked Fermi architecture in the GTX 500-series. The new GTX 580 still consumed about 300 watts, but it was 10% faster than the 480, and the rest of the 500-series was generally more efficient than the 400-series.

Nvidia really messed up with the 400-series; you don’t replace your entire product stack in half a year if you’re doing a good job. AMD came extremely close to overtaking Nvidia’s share of the overall graphics market, something that hadn’t happened since the mid-2000s. Nvidia still made far more money than AMD in spite of Fermi’s failures, so at least it wasn’t an absolute trainwreck. The Fermi debacle also helped pave the way for a renewed focus on efficiency that made the next three generations of Nvidia GPUs far better.

1 — GeForce FX 5800 Ultra: aka, the leaf blower
