“Shocking as it sounds, the RTX 4090 is worth every penny. It’s just a shame most people won’t be able to afford this excellent GPU.”
Pros
- Huge leaps in 4K gaming performance
- Excellent ray tracing performance
- High power and thermals, but manageable
- DLSS 3 performance is off the charts
Cons
- Very expensive
- DLSS 3 image quality needs some work
The RTX 4090 is both a complete waste of money and the most powerful graphics card ever made. Admittedly, that makes it a difficult product to evaluate, especially considering how much the average PC gamer is looking to spend on an upgrade for their system.
Debuting Nvidia’s new Ada Lovelace architecture, the RTX 4090 has been shrouded in controversy and cited as the poster child for rising GPU prices. As much as it costs, it delivers on performance, especially with the enhancements provided by DLSS 3. Should you save your pennies and sell your car for this beast of a GPU? Probably not. But it’s definitely an exciting showcase of how far this technology can really go.
About this review
The RTX 4090 has been through a lot since it originally released. As we wind down the generation, we’re revisiting Nvidia’s flagship GPU to see how it holds up in 2024. Throughout the review, we’ve added additional notes on features, pitfalls, and pricing to better reflect the reality of buying the RTX 4090 in 2024.
Nvidia RTX 4090 specs
As mentioned, the RTX 4090 introduces Nvidia’s new Ada Lovelace architecture, as well as chipmaker TSMC’s more efficient N4 manufacturing process. Although it’s difficult to compare the RTX 4090 spec-for-spec with the previous generation across an architectural change, we can glean some insights into what Nvidia prioritized when designing Ada Lovelace.
The main focus: clock speeds. The RTX 3090 Ti topped out at around 1.8GHz, but the RTX 4090 showcases the efficiency of the new node with a 2.52GHz boost clock. That’s at the same 450-watt board power, and with far more cores: the RTX 3090 Ti was just shy of 11,000 CUDA cores, while the RTX 4090 offers 16,384.
| | RTX 4090 | RTX 3090 |
|---|---|---|
| Architecture | Ada Lovelace | Ampere |
| Process node | TSMC N4 | Samsung 8nm |
| CUDA cores | 16,384 | 10,496 |
| Ray tracing cores | 144 (3rd-gen) | 82 (2nd-gen) |
| Tensor cores | 576 (4th-gen) | 328 (3rd-gen) |
| Base clock speed | 2,235MHz | 1,394MHz |
| Boost clock speed | 2,520MHz | 1,695MHz |
| VRAM | 24GB GDDR6X | 24GB GDDR6X |
| Memory speed | 21Gbps | 19.5Gbps |
| Bus width | 384-bit | 384-bit |
| TDP | 450W | 350W |
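To put those core counts and clock speeds in perspective, here's a back-of-envelope calculation of theoretical peak FP32 throughput. It assumes the standard figure of 2 FLOPs (one fused multiply-add) per CUDA core per clock; this is a paper-spec ceiling, and real-world gaming performance scales far less than it suggests.

```python
# Back-of-envelope FP32 throughput from the spec table above.
# Assumes 2 FLOPs (one FMA) per CUDA core per clock -- a theoretical
# peak, not a prediction of gaming performance.

def fp32_tflops(cuda_cores: int, boost_clock_mhz: int) -> float:
    """Theoretical peak FP32 throughput in TFLOPS."""
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

rtx_4090 = fp32_tflops(16384, 2520)
rtx_3090 = fp32_tflops(10496, 1695)

print(f"RTX 4090: {rtx_4090:.1f} TFLOPS")   # ~82.6 TFLOPS
print(f"RTX 3090: {rtx_3090:.1f} TFLOPS")   # ~35.6 TFLOPS
print(f"Uplift:   {rtx_4090 / rtx_3090:.2f}x")
```

On paper, that's well over a 2x generational jump, which is why the benchmark gains below, while huge, still fall short of the theoretical ceiling.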
It’s hard to say how much those extra cores matter, especially for gaming. Down the stack, the RTX 4080 has a little more than half the cores of the RTX 4090, while the RTX 4070 Ti has even fewer.
Although the RTX 4090 kicked off the Ada Lovelace generation, Nvidia has filled out the stack since. We’ve reviewed every card from Nvidia’s current lineup, and you can find all of our reviews in the list below. Some cards, such as the RTX 4080 and RTX 4070 Ti, have been replaced with a refreshed Super model.
Synthetic and rendering
Before getting into the full benchmark suite, let’s take a high-level look at performance. Port Royal and Time Spy from 3DMark show how well Nvidia’s latest flagship scales: a 58% gain over the RTX 3090 Ti in Time Spy, and a 102% increase over the RTX 3090 in Port Royal.
It’s important to note that 3DMark isn’t the best way to judge performance, as it factors in your CPU much more than most games do (especially at 4K). In the case of the RTX 4090, though, 3DMark shows the scaling well. In fact, my results from real games are actually a little higher than what this synthetic benchmark suggests, at least outside of ray tracing.
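For readers who want to sanity-check percentage gains like these against their own scores, the arithmetic is simple. The scores in this sketch are illustrative placeholders, not our measured results.

```python
# Converting raw benchmark scores into the percentage gains quoted
# in reviews. The example scores below are hypothetical, chosen only
# to illustrate the formula.

def pct_gain(new_score: float, old_score: float) -> float:
    """Percentage improvement of new_score over old_score."""
    return (new_score / old_score - 1) * 100

# A hypothetical card scoring 19,000 vs. a predecessor at 12,000:
print(f"{pct_gain(19_000, 12_000):.0f}%")  # prints "58%"
```

Note that a "102% increase" means slightly more than double the score, not double the frame rate in every game; synthetic scaling and in-game scaling rarely match exactly.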
I also tested Blender to gauge some content creation tasks with the RTX 4090, and the improvements are astounding. Blender is accelerated by Nvidia’s CUDA cores, and the RTX 4090 seems particularly optimized for these workloads, putting up more than double the score of the RTX 3090 and RTX 3090 Ti in the Monster and Junkshop scenes, and just under double in the Classroom scene. AMD’s GPUs, which don’t have CUDA, aren’t even close.
4K gaming performance
On to the juicy bits. All of my tests were done with a Ryzen 9 7950X and 32GB of DDR5-6000 memory on an open-air test bench. I kept Resizable BAR enabled throughout testing (or, in the case of AMD GPUs, Smart Access Memory).
The RTX 4090 is a monster physically, but it’s also a monster