Rendering-Related Benchmarks
Next, we wanted to test the rendering performance of the card.
Arion v2.5
Arion Benchmark is a standalone render benchmark based on the commercially available Arion render software from RandomControl. The benchmark is GPU-accelerated using NVIDIA CUDA, but it is unique in that it can run on both NVIDIA GPUs and CPUs.
Here again, we see a big generational boost with the newer Ada Lovelace architecture cards.
KeyShot
The KeyShot Benchmark is a simple yet powerful tool to test your CPU and/or GPU and evaluate their performance. KeyShot Viewer is a free standalone application for sharing KeyShot scenes so others can view and interact with your visuals.
The results are multiples of the reference system's render time, so scores above 1.0 are better than the reference system. A score of 1.0 matches the speed of the reference system, and a score of 2.0 would be double that speed. The reference system is a relatively ancient 8-core Intel Core i7-6900K CPU @ 3.20GHz.
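For clarity, here is a minimal sketch of that scoring scheme in Python; the render times are made-up placeholders, not results from this review:

```python
# KeyShot-style relative scoring: the reference system's render time divided
# by the tested system's render time. Times here are placeholder values.
REFERENCE_TIME_S = 600.0  # hypothetical render time on the i7-6900K reference

def keyshot_style_score(render_time_s: float) -> float:
    """Scores above 1.0 are faster than the reference; 2.0 is twice as fast."""
    return REFERENCE_TIME_S / render_time_s

print(keyshot_style_score(600.0))  # 1.0 -- matches the reference system
print(keyshot_style_score(300.0))  # 2.0 -- double the reference speed
```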
Again, we get a nice gain here, although this is another one where the consumer GPUs do relatively well.
OctaneBench
OctaneBench has a new version, and it is one that we will update with future GPUs as we add them to the data set.
In this benchmark, the NVIDIA RTX 5000 Ada did well.
Blender Benchmark
You can download the Open Data Benchmark from the opendata.blender.org homepage, with Windows, Linux, and macOS versions available. You can then select any of the seven benchmark scenes to run on your Blender version and render device (CPU/GPU). The benchmark will also gather non-identifiable data on your system setup. Once the benchmark is complete, you can publicly share your results on Blender Open Data.
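For those who prefer scripting runs, here is a hedged sketch that drives the Open Data benchmark launcher from Python. The subcommand, flags, scene names, and output schema are from memory of the CLI and should be verified against `benchmark-launcher-cli --help` before use:

```python
# Sketch: run Blender Open Data scenes headlessly and print samples/minute.
# Assumes the benchmark-launcher-cli binary is in the working directory and
# that the chosen Blender version and scenes have already been downloaded.
import json
import subprocess

BLENDER_VERSION = "3.6.0"                      # assumption: any supported version
SCENES = ["monster", "junkshop", "classroom"]  # the three standard scenes

cmd = [
    "./benchmark-launcher-cli", "benchmark",
    "--blender-version", BLENDER_VERSION,
    "--device-type", "CUDA",                   # or OPTIX / HIP / CPU
    "--json", *SCENES,
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True)

for result in json.loads(out.stdout):          # assumed output schema
    scene = result["scene"]["label"]
    spm = result["stats"]["samples_per_minute"]
    print(f"{scene}: {spm:.1f} samples/minute")
```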
Here, we again saw good results from the newer Ada generation. The RTX 5000 Ada did fairly well and sits at a decent value point between the two top-end RTX professional cards.
V-Ray
V-Ray does photorealistic rendering and real-time visualization, and its benchmark has both CUDA and RTX tests.
V-Ray GPU CUDA
V-Ray GPU RTX
If the results are starting to feel a bit repetitive, it is because that is the pattern: the RTX 5000 lands somewhere between 70% and 85% of the RTX 5000 Ada on most tests.
UL Procyon
UL Procyon is a benchmark suite from UL designed specifically for professional users in industry, enterprise, government, and retail.
UL Procyon Video Editing
This benchmark compares the video-editing performance of Windows PCs aimed at creators, enthusiasts, and creative professionals. The test is based on the typical workflow when creating content for online video-sharing platforms.
The benchmark uses Adobe Premiere Pro to export video project files to common formats. Each video project includes various edits, adjustments, and effects. The benchmark score is based on the time taken to export the videos.
The professional cards tend to punch above their weight on the performance side in these benchmarks.
Next, we will have Unigine and 3DMark-related benchmarks before moving on to power consumption, thermals, and our final thoughts.
I thought gaming GPUs didn’t use blowers because of an enforced market segmentation by Nvidia to prevent those GPUs from being used in the data center.
Yes. The dual-slot blower is officially banned by NVIDIA for top-end cards. However, we still have vendors making them and offering them as "cheap alternatives to data center cards." This is something NVIDIA does not want OEMs to make, but it is unable to ban OEMs from making them because the potential market is just so big.
The article doesn't mention it, but I see these have been crippled when it comes to double-precision compute, just like the previous generations.
Ada, Ampere, Turing – no DP-capable cards (or at least none that aren't severely cut down, to 1/64 of SP performance).
Volta (GV100) is the last DP compute card released, and that’s becoming a bit outdated.
Once upon a time, these Quadros (or whatever they are named today) were engineers' cards. You had to have a high-end one to accelerate engineering simulations (FE, CFD, and similar). I guess AI stole the show, and nobody is going to cater to that small market anymore.
What’s the proper way to build a DP crunching workstation nowadays, anyway?
A key reason, not discussed in the review, why some users will almost have to use this card over a much cheaper and faster 4090 is the ECC memory this card has. For some applications and uses, that is required, not least for liability reasons.
Apart from all that, I prefer blower cards if available; unfortunately, current generation consumer cards in that design are almost impossible to find.
Actually, you can enable ECC on the 4090 the same way as on the RTX workstation cards. Both cards lose a portion of their total VRAM capacity if ECC is enabled; the 4090 just has half the total capacity to begin with.
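For reference, here is a minimal Python sketch of checking and requesting ECC mode through NVML using the pynvml bindings; the GPU index and privilege handling are assumptions about the setup:

```python
# Query and request ECC mode via NVML (pip install nvidia-ml-py).
# Setting ECC requires admin/root privileges and takes effect only after a
# reboot or GPU reset; usable VRAM capacity drops once it is active.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)       # assumes GPU 0 is the target

current, pending = pynvml.nvmlDeviceGetEccMode(handle)
print(f"ECC current={current}, pending={pending}")  # 0 = off, 1 = on

# Request ECC on; raises an NVMLError without sufficient privileges.
pynvml.nvmlDeviceSetEccMode(handle, pynvml.NVML_FEATURE_ENABLED)

pynvml.nvmlShutdown()
```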
@TurboFEM: That’s because both NVIDIA and AMD have abandoned FP64 in mainstream architectures as its market share is not worth the cost of silicon to implement it at full speed. Gaming doesn’t need it, and neither does AI/ML.
On the NVIDIA side, the H100 can perform one FP64 operation every 2 cycles, while Ada can only do it every 4 cycles. AMD has implemented native FP64 since CDNA 2 and further improved it in CDNA 3.
So basically for FP64 you need to go for the highest end compute accelerators.
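To put rough numbers on that, here is a quick back-of-the-envelope sketch. The FP32 figures and FP64:FP32 ratios are approximate published spec-sheet values, not measurements from this review:

```python
# Back-of-the-envelope FP64 throughput from FP32 peak and the FP64:FP32 ratio.
# Spec-sheet values are approximate and vary by clocks and SKU.
cards = {
    # name: (peak FP32 TFLOPS, FP64:FP32 ratio)
    "RTX 4090 (Ada, consumer)":   (82.6, 1 / 64),
    "RTX 6000 Ada (workstation)": (91.1, 1 / 64),
    "H100 SXM (compute)":         (67.0, 1 / 2),
}

for name, (fp32, ratio) in cards.items():
    print(f"{name}: ~{fp32 * ratio:.1f} FP64 TFLOPS")
# The Ada cards land near ~1.3-1.4 FP64 TFLOPS versus ~33.5 for the H100,
# which is why serious DP work ends up on the compute accelerators.
```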
@eastcoast_pete: There’s also driver support and qualifications that are critical for certain uses. Using mainstream cards and mainstream drivers is out of the question for them.
@Gnattu: NVIDIA specifically forbids using mainstream cards in commercial compute (via CUDA and driver EULAs), and actively goes after companies who, for example, rent them as public clouds. While you can try to use them internally, your legal department won’t be happy if they ever find out.
I’m glad to see that others have already mentioned blower GPUs didn’t fall out of favor with consumers; NVIDIA mandated that consumer AIBs couldn’t use blowers to ensure that the cheaper RTX 3090/4090 wouldn’t be used in workstations instead of its astronomically priced workstation GPUs. I see it’s very popular right now for blogs of all types to gloss over NVIDIA’s hostile behavior toward consumers, but it’s a damn shame.
Thanks John. That is good info as I am weighing upgrading from A6000 cards.
Running large language models is becoming increasingly common, so I suggest adding a benchmark for that in the future. For example, running the Mixtral 8x7B model is common these days.
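As a hedged sketch of what such a benchmark could look like, here is a simple tokens-per-second measurement using llama-cpp-python; the model file name is a placeholder for any local GGUF build of Mixtral 8x7B:

```python
# Rough LLM throughput test with llama-cpp-python (built with GPU support).
import time
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",  # placeholder local file
    n_gpu_layers=-1,   # offload all layers to the GPU
    n_ctx=4096,
)

start = time.perf_counter()
out = llm("Explain ECC memory in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s ({generated / elapsed:.1f} tok/s)")
```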
@Kyle Actually, I know some companies do get caught. The NVIDIA GeForce driver has its own telemetry, so NVIDIA knows what you are doing if you don’t cut that connection. The result? Companies have started disabling public Internet access for these nodes, and you have to distribute work through a gateway so that the telemetry never reaches NVIDIA. I know it is prohibited by NVIDIA, but the amount of money we are talking about here is unlikely to be held back by an EULA.
@Gnattu: Oh sure you can work around this issue for internal use. The problem is when you try to sell it to the public as a cloud offering, for example. You can’t really hide the fact you’re using a consumer GPU then – your clients will be able to tell. The issue is whether those clients will care.
EULAs in general are a murky topic, but most “serious” companies will not even try to get into the grey zones.