
The GPU race continues once again, as NVIDIA have now officially announced the GeForce RTX 2000 series of GPUs, launching in September.

This new series will be based on their Turing architecture and their RTX platform, with new RT Cores that will "enable real-time ray tracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination", which sounds rather fun.

They will start off with three models to succeed their current top of the line:

  • RTX 2070 with 8GB GDDR6, available in October
  • RTX 2080 with 8GB GDDR6, available in September
  • RTX 2080 Ti with 11GB GDDR6, available in September

Naturally, for a brand new series they won't be cheap!

The "Founders Edition" NVIDIA are offering will be £1,099/$1,199 for the RTX 2080 Ti, £749/$799 for the RTX 2080 and £569/$599 for the RTX 2070. From what I've seen, these editions will have a higher clock boost over the normal editions.

The normal "Reference" editions will be cheaper of course, with the RTX 2080 Ti at $999, RTX 2080 at $699 and RTX 2070 at $499. Unsure on the UK prices for the normal editions, as I can't see them listed currently but you get the idea.


NVIDIA generally have good support for new GPUs on Linux, so I'm sure a brand new driver supporting them is already on the way.

See more on the official NVIDIA site, their announcement blog post and this post as well.

Will you be picking one up, will you be waiting for the normal edition or will you wait and see what AMD have to offer?

Article taken from GamingOnLinux.com.
Tags: Hardware, NVIDIA

TheRiddick Aug 21, 2018
1920x1080 results are not overly reliable, the best way to do comparisons is 1440p and 4K because it actually stresses the GPU.

Anyway I think I know what's going on. Compare the Mad Max benchmarks of the two links and you will see two things: first, the Vega cards now have slower performance for some reason (a regression), and second, the 1070 Ti has had its performance increased.

This could be CPU related, I dunno if the results were done on similar hardware. The recent results were done at 5GHz, so the CPU should have been largely eliminated from the results, since it won't struggle at 5GHz.

Anyway I prefer to go by the latest bench results, and they clearly indicate Vega has some issues. Maybe Phoronix has some configuration issues, I dunno...


Last edited by TheRiddick on 21 August 2018 at 4:55 am UTC
Shmerl Aug 21, 2018
Quoting: TheRiddick
1920x1080 results are not overly reliable, the best way to do comparisons is 1440p and 4K because it actually stresses the GPU.

Not really. Of the current hardware, nothing handles 4K well, let alone at 144Hz, so such tests are of low value. Current GPUs just haven't caught up to such monitors yet. Maybe the next generation will be more applicable.

Quoting: TheRiddick
Anyway I think I know what's going on. Compare the Mad Max benchmarks of the two links and you will see two things: first, the Vega cards now have slower performance for some reason (a regression), and second, the 1070 Ti has had its performance increased.

This tells me that such benchmarks are obscuring the actual hardware, since bottlenecks can happen somewhere in the driver and regressions or improvements can occur there.


Last edited by Shmerl on 21 August 2018 at 4:59 am UTC
TheRiddick Aug 21, 2018
LOL you missed the point.

It's about stressing the GPU, not whether the benchmark performs at a playable FPS (most are over 60fps at 1440p and some at 4K, btw).

Anyway I've mentioned the regression in results on my Phoronix link, maybe someone will know what's going on..


Last edited by TheRiddick on 21 August 2018 at 5:01 am UTC
Guppy Aug 21, 2018
For a real-time ray tracing card it's somewhat disappointing that the game demoing it (Battlefield 5) is obviously using polygon models :(

I guess full ray tracing engines are still a thing of the future.
TheRiddick Aug 21, 2018
It's a ray-traced approximation using the RTX compute power; FULL ray tracing is probably 10 years away. It's a bit of a FLUFF piece of tech really, which NVIDIA is using as a basis to inflate the cost of these cards. In all honesty, nobody was asking for this crap in their games.


Last edited by TheRiddick on 21 August 2018 at 5:09 am UTC
14 Aug 21, 2018
$500 for the least expensive one. No thanks.

Believe me when I say that I spend 80% or more of my gaming time on PC. Still, I can't convince myself that a $500 video card is worth it when you can buy an entire console gaming system for $300. Come on.
TheRiddick Aug 21, 2018
Well, gaming consoles are a lot slower, like A LOT. Think of it like a mobile phone: yes, you can get a nice 8-core mobile for $150 off GearBest, but the ones with decent GPUs in them cost $400 and up!!!!

Anyway, when XBOX2 and PS5 hit, things might change in that respect, but I believe the next gen consoles might also be $100 more than the previous launch price due to the world's failing economy...
mike44 Aug 21, 2018
Until StarVR gets affordable, I won't bother to upgrade. Hence I'll continue to use my GTX 1070 for at least two more years.
sub Aug 21, 2018
Does anyone know if "Tensor Cores" and "RT Cores" are real dedicated hardware components, or is this something implemented on top of the more generic GPGPU features?

On another note:

I'm still with a Radeon HD 7950 alongside a Phenom II X4 940.
And it surprisingly works well for me (playing DOOM and Wolf 2 at FHD).
When I was way younger I burnt too much money on "fresh" hardware, reading reviews over every 2% of performance advantage, which in the end just cost a non-linear fortune compared to something much cheaper that would have served me equally well.

That's also why I couldn't care less about a performance lead for Nvidia - a company hostile towards open standards whenever they realize they can implement and rigorously exploit a vendor lock-in (see CUDA <-> OpenCL support).

Next card will be AMD again. For sure.
The open driver ecosystem on Linux has come a long way and it's working great.
TheRiddick Aug 21, 2018
The NVIDIA FP16 performance on their Tensor cores is quite impressive, I'll give them that. The Vega cards can only achieve 2x FP32 performance for FP16, which in itself is good, but the Tensor cores are like 4x or more of their FP32 performance.

People should be reminded that ray tracing compute heavily relies on FP16 performance.