NVIDIA Quadro RTX 8000 NVLINK Deep Learning benchmarks
As we continue to innovate on our review format, we are now adding deep learning benchmarks. In future reviews, we will add more results to this data set. At this point, we have a fairly nice data set to work with.
ResNet-50 Inferencing in TensorRT using Tensor Cores
ImageNet is an image classification database launched in 2007 and designed for use in visual object recognition research. Organized according to the WordNet hierarchy, each node (a category of specific nouns) is represented by hundreds of example images.
For our inferencing benchmarks, a ResNet-50 model trained in Caffe is run from the command line as follows.
nvidia-docker run --shm-size=1g --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm -v ~/Downloads/models/:/models -w /opt/tensorrt/bin nvcr.io/nvidia/tensorrt:18.11-py3 giexec --deploy=/models/ResNet-50-deploy.prototxt --model=/models/ResNet-50-model.caffemodel --output=prob --batch=16 --iterations=500 --fp16
Options are:
--deploy: Path to the Caffe deploy (.prototxt) file used for training the model
--model: Path to the model (.caffemodel)
--output: Output blob name
--batch: Batch size to use for inferencing
--iterations: The number of iterations to run
--int8: Use INT8 precision
--fp16: Use FP16 precision (for Volta or Turing GPUs); if neither flag is specified, FP32 is used
We can change the batch size to 16, 32, 64, or 128 and the precision to INT8, FP16, or FP32.
The results are reported as inference latency in seconds. Dividing the batch size by the latency gives the throughput in images/sec, which is what we plot on our charts.
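As a quick sanity check, that conversion from a latency figure to the plotted throughput can be done right on the command line. The batch size and latency below are placeholder values for illustration only, not measured results.
```
# Placeholder values for illustration: a batch of 64 images returned in 0.04 seconds
BATCH=64
LATENCY=0.04
# Throughput = batch size / latency
echo "scale=1; $BATCH / $LATENCY" | bc   # prints 1600.0 images/sec
```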
We also found that this benchmark does not use two GPUs; it only runs on a single GPU. You can, however, run separate instances on each GPU using commands like:
```
NV_GPUS=0 nvidia-docker run ... &
NV_GPUS=1 nvidia-docker run ... &
```
With these commands, a user can scale workloads across many GPUs. Our graphs show combined totals.
We start with Turing’s new INT8 mode, which is one of the benefits of using the NVIDIA RTX cards.
INT8 is by far the fastest inferencing method, so converting code to INT8 where at all possible will yield faster runs. Installed memory has one of the most significant impacts on these benchmarks. Inferencing on NVIDIA RTX graphics cards does not tax the GPUs to a great degree; however, additional memory allows for larger batch sizes. The NVIDIA Quadro RTX 8000 could easily handle even larger batch sizes.
Let us look at FP16 and FP32 results.
This is another example where the dual Quadro RTX 8000 setup is simply the best.
Training with ResNet-50 using TensorFlow
We also wanted to train the venerable ResNet-50 using TensorFlow. During training, the neural network learns features of images (e.g. objects, animals, etc.) and determines which features are important. Periodically (every 1,000 iterations), the neural network tests itself against the test set to determine training loss, which reflects how well the network is being trained. Accuracy can be increased through repetition (i.e. running a higher number of epochs).
The command line we will use is:
nvidia-docker run --shm-size=1g --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 -v ~/Downloads/imagenet12tf:/imagenet --rm -w /workspace/nvidia-examples/cnn/ nvcr.io/nvidia/tensorflow:18.11-py3 python resnet.py --data_dir=/imagenet --layers=50 --batch_size=128 --iter_unit=batch --num_iter=500 --display_every=20 --precision=fp16
Parameters for resnet.py:
--layers: The number of neural network layers to use, i.e. 50.
--batch_size or -b: The number of ImageNet sample images to use for training the network per iteration. Increasing the batch size will typically increase training performance.
--iter_unit or -u: Specify whether to run batches or epochs.
--num_iter or -i: The number of batches or iterations to run, i.e. 500.
--display_every: How frequently training performance will be displayed, i.e. every 20 batches.
--precision: Specify FP32 or FP16 precision, which also enables Tensor Core math for Volta and Turing GPUs.
While this TensorFlow script cannot specify individual GPUs to use, they can be selected by setting export CUDA_VISIBLE_DEVICES= to a comma-separated list of GPU indices (e.g. 0,1,2,3) within the Docker container workspace, as in the example below.
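For instance, a run limited to the first two GPUs might look like the following; the GPU indices here are purely illustrative, and the training command is the same one shown above.
```
# Inside the running TensorFlow 18.11 container: expose only GPUs 0 and 1 to TensorFlow
export CUDA_VISIBLE_DEVICES=0,1
# Then launch the training run
python resnet.py --data_dir=/imagenet --layers=50 --batch_size=128 \
    --iter_unit=batch --num_iter=500 --display_every=20 --precision=fp16
```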
We will run batch sizes of 16, 32, 64, and 128, and change the precision between FP16 and FP32. Our graphs show combined totals.
ResNet-50 training is greatly improved when using dual-GPU configurations. The two Quadro RTX 8000s in NVLINK match the Titan RTX NVLINK pair but can go far deeper in batch sizes thanks to the expanded memory the RTX 8000s offer.
Deep Learning Training Using OpenSeq2Seq (GNMT)
While ResNet-50 is a Convolutional Neural Network (CNN) that is typically used for image classification, Recurrent Neural Networks (RNNs) such as Google Neural Machine Translation (GNMT) are used for applications such as real-time language translation.
The command line we use for OpenSeq2Seq (GNMT) is as follows.
nvidia-docker run -it --shm-size=1g --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 -v ~/Downloads/OpenSeq2Seq/wmt16_de_en:/opt/tensorflow/nvidia-examples/OpenSeq2Seq/wmt16_de_en -w /workspace/nvidia-examples/OpenSeq2Seq/ nvcr.io/nvidia/tensorflow:18.11-py3
We then open en-de-gnmt-like-4GPUs.py and edit our variables.
vi example_configs/text2text/en-de/en-de-gnmt-like-4GPUs.py
First, edit data_root to point to the below path:
data_root = "/opt/tensorflow/nvidia-examples/OpenSeq2Seq/wmt16_de_en/"
Additionally, edit the num_gpus, max_steps, and batch_size_per_gpu parameters under base_params to set the number of GPUs, run a lower number of steps (i.e. 500) for benchmarking, and set the batch size:
base_params = {
...
"num_gpus": 1,
"max_steps": 500,
"batch_size_per_gpu": 128,
...
},
We also edit lines 44 and below as shown to enable FP16 precision:
#"dtype": tf.float32, # to enable mixed precision, comment this line and uncomment the two lines below
"dtype": "mixed",
"loss_scaling": "Backoff",
We then run the benchmarks as follows.
python run.py --config_file example_configs/text2text/en-de/en-de-gnmt-like-4GPUs.py --mode train
The result is the average number of objects per second trained, which is what we plot.
We should note that other GPUs we have used, like the RTX 2060, RTX 2070, RTX 2080, and RTX 2080 Ti, could not complete this benchmark due to a lack of memory. To get this benchmark to finish on those GPUs, one might need to lower the batch size to smaller values like 32, 16, or 8. We tried this but had no luck; a batch size of 4 could be run, but we decided that this was not a very usable size.
The NVIDIA Quadro RTX 8000 has 48GB of installed memory, double that of the Titan RTX. The Quadro RTX 8000 easily matches the Titan RTX but offers larger batch sizes on a single GPU.
With OpenSeq2Seq (GNMT) training, users have a limited choice of GPUs as it requires large amounts of installed memory. We find the Quadro RTX 8000 at the top end with 48GB of installed memory. Using two Quadro RTX 8000s gives one a massive 96GB of memory to work with, making very deep batch sizes possible.
Next, we are going to look at the NVIDIA Quadro RTX 8000 NVLINK power and temperature tests and then give our final words.
Why is the Radeon VII not in the comparison, William? It would be competitive in some tests.
Emerth – we do not have one to test and they are discontinued. As a result, it is a low priority. We may look at the Radeon Pro version, but that just started shipping.
Can it run fortnite at 60 fps though?
I would really love to see some testing with this card for VGPUs in VMWare :P
@Jeremy likely it can simulate running Fortnite at 60FPS.
About the AIDA64 GPGPU Part 1 graph on Page 3: my Titan Black’s Double-Precision FLOPS value is 1842 GFLOPS when “Double precision” is enabled in the NVIDIA control panel → Manage 3D Settings. The default is disabled! When disabled, the score plummets to 256.7 GFLOPS, which matches the graph value.
It would be better to show the higher score, with the double-precision circuits fully utilized, IMO.
OctaneRender 4 does not take advantage of Quadro or Titan cards; the RTX 2080 Ti result should be comparable to the RTX 8000, and SLI should just halve the time.
https://render.otoy.com/octanebench/results.php?v=4.00&sort_by=avg&filter=&singleGPU=1
I guess something went wrong.
yamamoto wrote: About AIDA64 GPGPU Part1 graph of Page 3, My Titan Black’ Double-Precision FLOPS value is 1842 when “Double precision” is enabled on NVIDIA control panel.
This is a very important point.
When reading through the review I was about to ignore this card as yet another AI GPU unsuited for doing science. Now your comment has me wondering which other cards treated here suffered a similar methodological problem with double precision arithmetic.
@Eric
Maybe other Kepler GPUs such as GTX Titan or Quadro K6000 are affected
I have a Titan V; there is no “Double Precision” menu item in the NVIDIA Control Panel for this GPU, and its AIDA64 Double-Precision score is 6283 GFLOPS. It is a cost-effective solution for some scientific calculations, IMO.
Can this GPU beat the Titan V for gaming and editing?