All the details on specs, pricing, release date, and more for Nvidia's next-gen RTX graphics cards and the Turing architecture.


Nvidia will launch its next-generation GeForce RTX 20-series graphics cards on September 20, starting with the RTX 2080 and RTX 2080 Ti, followed by the RTX 2070 in October. These will use the new Turing architecture, which boasts more cores than the previous generation Pascal architecture, along with significant updates that should deliver more performance per core. Also included are new RT cores to enable real-time ray tracing in games, and Tensor cores for deep learning.

There's a lot to cover, and Nvidia is rightly calling this the most significant generational upgrade to its GPU since the first CUDA cores in 2006. Turing promises better performance than existing GPUs, and has the potential to fundamentally change what we expect from graphics. Here's everything you need to know about the RTX 2080 Ti, RTX 2080, and RTX 2070, the Turing architecture, pricing, specs, and more.


Pricing and release dates for the GeForce RTX series

Nvidia has only announced three GeForce RTX models so far. We don't know when, or even if, lower tier cards will arrive. They most likely will, but perhaps not until 2019. Here are the launch dates and prices so far:

GeForce RTX 2080 Ti Founders Edition: $1,199, September 20
GeForce RTX 2080 Ti Reference: $999, September 20?
GeForce RTX 2080 Founders Edition: $799, September 20
GeForce RTX 2080 Reference: $699, September 20?
GeForce RTX 2070 Founders Edition: $599, 'October'
GeForce RTX 2070 Reference: $499, 'October'?


It's not all good news for the RTX 20-series, as pricing for all three classes of GPU has increased substantially. Call it a lack of competition (AMD's GPUs already struggle to compete against the 10-series parts), or the cryptocurrency bubble bursting (there are reportedly a lot of 10-series graphics cards left to sell), or just plain greed. The bottom line is that launch prices on the Founders Edition cards are up to 50 percent higher than the outgoing 10-series parts.

Pre-orders are available, and while we don't generally recommend buying expensive hardware before independent reviews have been published, many places offering pre-orders are currently sold out. Worse, we don't even know if the lower 'reference' prices will be seen at launch, or if they're merely recommendations. Based on past experience, we expect Founders Edition and factory overclocked cards priced similarly to the FE to be the main options for the first month or two.

The RTX 2070 launch date hasn't been firmly set by Nvidia yet, beyond a statement of October 2018. Given the likely demand for the higher end 2080 parts, we anticipate late October. Again, prices will probably be higher for the first month or two. Then again, with Black Friday and the holiday shopping season approaching, we might get a few surprises.

GeForce RTX specifications

Nvidia unveiled many core details of the Turing architecture at SIGGRAPH, and followed up by announcing the below specs for the GeForce RTX graphics cards. After much speculation, we now know what to expect. Mostly.


The number of CUDA cores in each model has increased by 15-20 percent across the line, though clockspeeds have dropped slightly as well. In theoretical TFLOPS (that's trillions of floating-point operations per second), the GeForce RTX cards are 14-19 percent faster than the GTX 10-series.
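
Those theoretical numbers are easy to sanity-check: each CUDA core can retire one fused multiply-add (two floating-point operations) per clock, so peak TFLOPS is simply cores times clockspeed times two. Here's the arithmetic as a quick Python sketch, using the announced core counts and reference boost clocks:

```python
# Peak FP32 throughput: one fused multiply-add (2 FLOPs) per CUDA core per clock.
def tflops(cuda_cores, boost_mhz):
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

# Announced core counts and reference boost clocks (MHz).
cards = {
    "GTX 1080 Ti": (3584, 1582),
    "RTX 2080 Ti": (4352, 1545),
    "GTX 1070":    (1920, 1683),
    "RTX 2070":    (2304, 1620),
}
for name, (cores, clock) in cards.items():
    print(f"{name}: {tflops(cores, clock):.1f} TFLOPS")
# RTX 2080 Ti vs GTX 1080 Ti: 13.4 vs 11.3 TFLOPS, about 19 percent faster
# RTX 2070 vs GTX 1070: 7.5 vs 6.5 TFLOPS, about 16 percent faster
```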

Nvidia equips all the new models with 14 GT/s GDDR6, improving bandwidth by anywhere from 27 percent (RTX 2080 Ti) to as much as 75 percent (RTX 2070). That's assuming there aren't any other tweaks to the memory subsystem, like the improved compression technologies and tiled rendering in Pascal.
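
The bandwidth math is a one-liner: transfer rate times bus width, divided by eight to convert bits to bytes. A quick check in Python:

```python
# Memory bandwidth (GB/s) = transfer rate (GT/s) * bus width (bits) / 8.
def bandwidth_gbs(gt_per_s, bus_bits):
    return gt_per_s * bus_bits / 8

gtx_1070 = bandwidth_gbs(8, 256)     # 8 GT/s GDDR5, 256-bit: 256 GB/s
rtx_2070 = bandwidth_gbs(14, 256)    # 14 GT/s GDDR6, 256-bit: 448 GB/s
print(f"{rtx_2070 / gtx_1070 - 1:.0%}")  # 75%

gtx_1080ti = bandwidth_gbs(11, 352)  # 11 GT/s GDDR5X, 352-bit: 484 GB/s
rtx_2080ti = bandwidth_gbs(14, 352)  # 14 GT/s GDDR6, 352-bit: 616 GB/s
print(f"{rtx_2080ti / gtx_1080ti - 1:.0%}")  # 27%
```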

Along with faster cores and memory, the Turing architecture adds Tensor cores for deep learning and RT cores for real-time ray tracing. Both have the potential to dramatically change what we can expect from future games in terms of graphics.


Turing architecture and performance expectations

While we have the numbers for the CUDA cores, GDDR6, Tensor cores, and RT cores, there's a lot more going on with the GeForce RTX and Turing architecture. We've provided a deep dive into the Turing architecture elsewhere, which we'll update with additional details closer to launch, but here's the short summary.

Nvidia has reworked the SMs (streaming multiprocessors) and trimmed things down from 128 CUDA cores per SM to 64 CUDA cores. The Pascal GP100 and Volta GV100 also use 64 CUDA cores per SM, so Nvidia has now standardized on a new ratio of CUDA cores per SM. Each SM now includes eight Tensor cores and an unspecified number of RT cores, plus texturing units (which we assume to be half as many as in Pascal). The SM is the fundamental building block for Turing, and can be replicated as needed.

For traditional games, the CUDA cores are the heart of the Turing architecture. Nvidia has made at least one big change relative to Pascal, with each SM able to simultaneously issue both floating-point (FP) and integer (INT) operations—and likely Tensor and RT operations as well. Nvidia said this makes the new CUDA cores "1.5 times faster" than the previous generation.

That might be marketing, but Nvidia's preview benchmarks suggest an average performance increase of around 50 percent for the RTX 2080 over the GTX 1080. Combined with the increase in CUDA core counts and the higher bandwidth of GDDR6, in GPU-limited benchmarks it's not unreasonable to expect 50-75 percent more performance from the GeForce RTX models compared to the previous generation parts.

All Turing GPUs announced so far will be manufactured using TSMC's 12nm FinFET process. The TU102 used in the RTX 2080 Ti has 18.6 billion transistors and measures 754mm². That's a huge chip, far larger than the GP102 used in the GTX 1080 Ti (471mm² and 11.8 billion transistors) and only slightly smaller than the Volta GV100. While the full TU102 has up to 72 SMs and a 384-bit GDDR6 interface, the RTX 2080 Ti disables four SMs and one of the 32-bit GDDR6 channels. That leaves room for a future RTX Titan, naturally.

The TU104 trims the SM count and memory interface by a third, giving a maximum of 48 SMs and a 256-bit interface. The RTX 2080 disables two SMs while the RTX 2070 disables 12 SMs, but both keep the full 256-bit GDDR6 configuration. Nvidia has not revealed die size or transistor count for the TU104, but it should fall in the 500-550mm² range, with around 12-13 billion transistors. Again, that's a substantially larger chip than the GP104 used in the GTX 1080/1070.
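
Since the SM is the unit Nvidia scales up and down, the core counts for the whole product stack fall out of simple multiplication. A minimal sketch, using the SM counts above:

```python
# Turing: 64 CUDA cores (and 8 Tensor cores) per SM.
CORES_PER_SM = 64

def cuda_cores(total_sms, disabled_sms=0):
    return (total_sms - disabled_sms) * CORES_PER_SM

print(cuda_cores(72))      # Full TU102: 4608 cores
print(cuda_cores(72, 4))   # RTX 2080 Ti: 4352 cores
print(cuda_cores(48, 2))   # RTX 2080: 2944 cores
print(cuda_cores(48, 12))  # RTX 2070: 2304 cores
```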

TSMC's 12nm process is a refinement of the existing 16nm process, perhaps more marketing than a true die shrink. Optimizations to the process technology help improve clockspeeds, chip density, and power use—the holy trinity of faster, smaller, and cooler running chips. TSMC's 12nm FinFET process is also mature at this point, with good yields, allowing Nvidia to create such large GPU designs.

Looking forward, TSMC is readying its 7nm process for full production, and we should see it in a limited fashion by the end of the year (e.g., for AMD's Vega 7nm professional GPUs). Don't be surprised if late 2019 sees the introduction of a die shrink of Turing, bringing sizes down to more manageable levels.


What the RT cores and ray-tracing mean for games

Why is ray-tracing such a big deal, and what does it mean for games? We wrote this primer on ray-tracing when Microsoft unveiled its DirectX Ray Tracing (DXR) API. DXR hasn't reached its final public revision yet, but that's expected to happen around the time GeForce RTX cards begin shipping. Nvidia clearly had a lot of input on DXR, and while initial demonstrations like the Star Wars reflections clip used a DGX Station with four GV100 GPUs to achieve 'cinematic' 24fps results, Turing is clearly what Nvidia was aiming for.

Not only can a single Turing GPU run the same demonstration as the DGX-Station—which only costs $60,000 if you're wondering—but it can do so at 60fps. That's because the RT cores in Turing are roughly ten times faster for ray tracing than using compute shaders to accomplish the same work. However, doing full ray tracing for real-time games is still a bit impractical.

Nvidia instead suggests using the RT cores for hybrid rendering: traditional rasterization handles geometry and textures, while ray tracing provides lighting and shadows, reflections, ambient occlusion, and other effects (there's a minimal sketch of the idea after the list below). At least 11 games have announced support for Nvidia's RTX ray tracing. Here's the current list:

  • Assetto Corsa Competizione from Kunos Simulazioni/505 Games
  • Atomic Heart from Mundfish
  • Battlefield V from EA/DICE
  • Control from Remedy Entertainment/505 Games
  • Enlisted from Gaijin Entertainment/Darkflow Software
  • Justice from NetEase
  • JX3 from Kingsoft
  • MechWarrior 5: Mercenaries from Piranha Games
  • Metro Exodus from 4A Games
  • ProjectDH from Nexon’s devCAT Studio
  • Shadow of the Tomb Raider from Square Enix
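
To make the hybrid approach concrete, here's a toy shadow test in Python. This is purely our own illustration, not Nvidia's code or the DXR API: the rasterizer determines the visible surface point, then a single ray fired toward the light determines whether that point is shadowed. The RT cores exist to run enormous numbers of intersection tests like this against real scene geometry, every frame.

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t > 0) hits the sphere."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c  # direction is unit length, so the 'a' term is 1
    if disc < 0:
        return False
    t = (-b - np.sqrt(disc)) / 2.0
    return t > 1e-4  # ignore hits at or behind the ray origin

# Hypothetical scene: a surface point from the rasterizer, a light, and a
# sphere sitting between them.
surface_point = np.array([0.0, 0.0, 0.0])
light_pos = np.array([0.0, 10.0, 0.0])
occluder_center, occluder_radius = np.array([0.0, 5.0, 0.0]), 1.0

to_light = light_pos - surface_point
direction = to_light / np.linalg.norm(to_light)
in_shadow = ray_hits_sphere(surface_point, direction, occluder_center, occluder_radius)
print("in shadow" if in_shadow else "lit")  # -> in shadow
```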



Several of these games should release in 2018, while others are coming in 2019. Shadow of the Tomb Raider will apparently launch without RTX effects enabled, with a post-launch patch adding the feature. Given that the game arrives September 14, one week before the official launch of the GeForce RTX cards, and that the effects also depend on Windows 10 Redstone 5 and the final DXR API (both expected in September or October), that shouldn't be much of a concern. Getting games that support brand new hardware features within weeks of the hardware launch is still much faster than the usual rate of adoption.

How machine learning and the Tensor cores affect graphics

If you're thinking the Tensor cores are pointless when it comes to 'real' graphics work, you're wrong. Deep learning and AI are revolutionizing many industries, and games are another potential market. But how can the Tensor cores help with graphics?

Nvidia has specifically talked about DLSS, Deep Learning Super Sampling, a new AI-based anti-aliasing algorithm that can offer improved image quality compared to other AA algorithms like TAA (Temporal Anti-Aliasing). The idea is to train a neural network with high quality AA images as the 'ground truth' model—the desired result. Once trained, DLSS can provide real-time enhancements like the removal of jaggies, plus it can also combine resolution upscaling with anti-aliasing. Nvidia hasn't fully disclosed how DLSS is implemented, but upscaling 1080p to 4K seems likely.
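
Strip away the specifics and this is ordinary supervised learning. The toy Python sketch below fits a simple 5-tap filter so that its output on a degraded signal matches the clean 'ground truth' version, which is the training setup in miniature. Everything here is illustrative; Nvidia's actual network, training data, and loss function are far more sophisticated and largely undisclosed.

```python
import numpy as np

rng = np.random.default_rng(0)
ground_truth = rng.standard_normal(1000)  # stand-in for a supersampled frame
aliased = ground_truth + 0.3 * rng.standard_normal(1000)  # stand-in for a cheap render

# Build overlapping 5-sample windows of the degraded signal...
windows = np.lib.stride_tricks.sliding_window_view(aliased, 5)
target = ground_truth[2:-2]  # align window centers with the ground truth

# ...and solve for the filter that best reconstructs the ground truth.
# A real DLSS-style model would be a deep network trained by gradient descent.
weights, *_ = np.linalg.lstsq(windows, target, rcond=None)
print("learned filter taps:", np.round(weights, 3))
```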


Nvidia's comparison images show standard bilinear filtering used to resize a 1024x576 image to 4096x2304, compared to its super resolution AI upscaling algorithm. Nvidia normally shows the comparison using nearest neighbor upscaling, but that's a bit disingenuous since GPUs have been doing real-time bilinear upscaling for decades.
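
For anyone who wants to reproduce the fairer baseline, Pillow can do both resampling modes; the input filename here is hypothetical:

```python
from PIL import Image  # pip install pillow

src = Image.open("frame_1024x576.png")  # hypothetical 1024x576 source frame

# 4x upscale to 4096x2304 with both filters.
nearest = src.resize((4096, 2304), resample=Image.NEAREST)    # blocky
bilinear = src.resize((4096, 2304), resample=Image.BILINEAR)  # softer, smoother

nearest.save("nearest_4x.png")
bilinear.save("bilinear_4x.png")
```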

While DLSS could run on regular CUDA cores, getting the desired result in a real-time game at 60fps or more needs the performance of the Tensor cores. That's because the Tensor cores provide about eight times more computational power than the CUDA cores.
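
For the curious, the operation a Tensor core performs is a small fused matrix multiply-accumulate, D = A × B + C, with FP16 inputs and FP32 accumulation (per Nvidia's published Volta design, which Turing's Tensor cores follow). Here it is emulated in numpy, along with the per-SM arithmetic behind that 'eight times' figure:

```python
import numpy as np

# One Tensor core op: D = A @ B + C on small tiles, FP16 in, FP32 accumulate.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)
D = A.astype(np.float32) @ B.astype(np.float32) + C  # emulated FP32 accumulation

# Where 'about eight times' comes from: per SM, each of the 8 Tensor cores
# does 64 FMAs per clock, versus 1 FMA per clock for each of the 64 CUDA cores.
tensor_fmas = 8 * 64  # 512 FP16 FMAs per SM per clock
cuda_fmas = 64        # 64 FP32 FMAs per SM per clock
print(tensor_fmas / cuda_fmas)  # 8.0
```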

What else can deep learning and the Tensor cores do for gaming? DLSS is only one of many possibilities, and arguably not even the most interesting. We'll have more to say on this subject when the GeForce RTX 2080 cards launch.

The GeForce RTX Founders Edition

Nvidia has always provided 'reference' designs for its GeForce graphics cards, but starting with the 10-series the name was changed to Founders Edition. With the RTX 20-series GPUs, Nvidia is changing things again. There will be graphics cards running Nvidia's reference clocks, but the GeForce RTX Founders Edition models will now come factory overclocked (by 90MHz).

The Founders Edition has been completely redesigned, with improved cooling, power delivery circuitry, dual axial fans, and more. Graphics cards with blowers will still exist, but those will come from Nvidia's AIB partners. The dual fans should reduce noise levels while providing better cooling than a single blower fan, but a blower may be a better choice for anyone with a smaller case.

The Founders Edition will also carry a premium price, $100 more on the RTX 2080 and RTX 2070, and $200 more on the RTX 2080 Ti. Since these are generally the first cards to hit retail, it also makes for a hefty early adopter fee. It's not clear whether Nvidia will offer a reference design similar to the FE to its partners, or if the reference design only refers to the circuit board and core components, with cooling left up to the graphics card vendors.


NVLink and SLI support for the RTX 2080 Ti and RTX 2080

Fans of multi-GPU solutions will be glad to know that SLI is still around, and it's getting an upgrade in performance. Gone is the old SLI connector, replaced by a new NVLink connection. NVLink will support substantially higher throughput (up to 100GB/s compared to 4GB/s on the old HB SLI bridge), though how this will affect gaming performance isn't clear. A lot of games in the past couple of years have skipped multi-GPU support, and that doesn't seem likely to change.

The NVLink bridge makes another shift, with only 2-way SLI being supported. That's for the best, considering support for 3-way SLI in games has been even worse than 2-way SLI support. Nvidia offers two NVLink connector options, one with 3-slot spacing and one with 4-slot spacing.

While the RTX 2080 Ti and RTX 2080 support SLI, the RTX 2070 will not. There's no NVLink connector visible on the cards, and NVLink isn't mentioned on the product page either. Barring some other revelation, SLI RTX 2070 is simply not an option.

Something else to consider is that the ray tracing and deep learning algorithms offered on Turing may scale with multiple graphics cards. Nvidia took that approach with PhysX, and running ray tracing on a second GPU could provide a great way to enhance the way games look without killing performance. Nvidia hasn't said anything about offloading ray tracing, DLSS, or any other work to a second GPU, however, so this is purely speculation for now.

GeForce RTX will support native 8K60

Display connection standards are ever-evolving, and the GeForce RTX line adds several new options. Perhaps the biggest addition is VirtualLink, a USB Type-C connection that includes USB 3.1 Gen2 data along with HBR3 video and power, designed to drive a VR headset with a single cable. The DisplayPort connectors have also been updated to 1.4a with HBR3, so 8K60 (7680x4320 at 60Hz) with a single connector is possible, though it requires 4:2:0 subsampling.

Notably missing from the video output list is HDMI 2.1, which was released in November 2017. Turing keeps things at HDMI 2.0b, so 4K60 is still the limit. HDMI 2.1 allows for 8K60 and uncompressed 4K120, thanks to the massive 48Gbps transmission bandwidth. Not that most users are going to have 8K displays anytime soon.
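
The arithmetic behind both limits is straightforward: multiply pixels by refresh rate by bits per pixel, and compare against what the link can actually carry after encoding overhead. A rough Python check, ignoring blanking intervals (which push the real requirements a little higher):

```python
# Rough link budget: does the pixel stream fit the cable's usable bandwidth?
def gbps(width, height, hz, bits_per_pixel):
    return width * height * hz * bits_per_pixel / 1e9

rgb_8k60 = gbps(7680, 4320, 60, 24)    # 8-bit RGB 4:4:4: ~47.8 Gbps
sub_8k60 = gbps(7680, 4320, 60, 12)    # 8-bit 4:2:0:     ~23.9 Gbps
rgb_4k120 = gbps(3840, 2160, 120, 24)  # 8-bit RGB 4:4:4: ~23.9 Gbps

dp14_hbr3 = 32.4 * 8 / 10  # ~25.9 Gbps usable after 8b/10b encoding
hdmi_21 = 48.0 * 16 / 18   # ~42.7 Gbps usable after 16b/18b encoding

print(rgb_8k60 < dp14_hbr3)  # False: full RGB 8K60 overflows DisplayPort 1.4
print(sub_8k60 < dp14_hbr3)  # True: 4:2:0 8K60 fits, hence the subsampling
print(rgb_4k120 < hdmi_21)   # True: HDMI 2.1 has headroom for 4K120
```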

Will there be a Titan RTX card?

The RTX 2080 Ti disables 4 SMs and one 32-bit GDDR6 channel, clearly leaving room for something faster down the road. The Quadro RTX 6000 and 8000 both come with the full 4,608 CUDA cores, and 24GB or 48GB of GDDR6, respectively. A Titan RTX using the full TU102 seems like a no-brainer at this point, but the actual name, release date, and price are unknown.

Currently, the Titan V sells for $2,999 and comes with 16GB HBM2. The full Turing TU102 chip should match or exceed the Volta GV100 used in the Titan V in almost every way, and would cost less to produce. 24GB of GDDR6 seems likely, though 12GB is also viable. Given the already high price of the RTX 2080 Ti, 24GB and a price of $1,599-$1,999 is possible and even probable, unless AMD can bring some much needed competition.

Which GeForce RTX card should I buy?

With the cards all unreleased and untested, buying a GeForce RTX right now means pre-ordering blind. Based on what we've seen so far, including the premium pricing on the Founders Edition cards, we recommend waiting at least another month, if not longer, to see how performance and pricing shake out.

Unless you've got a high-end PC just waiting for a new graphics card, we can't recommend paying $100 to $200 extra for a Founders Edition. The graphics cards will almost certainly be faster than anything else, but how much faster, especially for existing games?

However, if you've watched the videos showing real-time ray tracing and can't stand the thought of going another minute without the improved shadows and reflections it provides, we recommend going all-in and buying the RTX 2080 Ti. Most demonstrations of ray tracing have been running at 1080p, and even then framerates are clearly dipping below 60fps on a regular basis. Further optimizations can only help, but you'll almost certainly want every ounce of performance available to get the most out of the feature.

Parting thoughts

Nvidia's Turing architecture and the GeForce RTX line of graphics cards have the potential to completely change what we expect from our GPUs. Nvidia took its already excellent Pascal GPU and found ways to make it even more efficient, then tossed in the kitchen sink by way of the RT cores and Tensor cores. The demonstrations of real-time ray tracing are extremely impressive on a technical level, and multiple developers commented that using the RTX features freed up a lot of artist time and made the process of level design much easier.

The problem is that the new features are only going to be available on GeForce RTX cards, so developers can't simply stop doing things the 'old fashioned' way. For the next five years at least, most games that use ray tracing effects will need to have a fallback rendering method for older GPUs. And by the time ray tracing goes mainstream—because let's not kid ourselves, even with a base price of $499 for the RTX 2070, these are extremely high-end graphics cards—there will inevitably be faster models available.

Should you jump on the GeForce RTX bandwagon? If you're a gaming enthusiast, or simply have deep pockets, the new graphics cards look to raise the bar both on performance and features. We're extremely excited to see the next generation of games that put these cards to good use. But we also play and enjoy a lot of games that don't even attempt to include cutting edge graphics.

The best reason to buy a new graphics card is when your existing card is no longer doing its job adequately. For some, that will be when games can't manage 30fps at 1080p and low to medium quality; others have been waiting years for a GPU that can 'guarantee' 60fps or higher at 4K.