AMD Radeon RX 6800 Specifications
First, here are the key specs directly from the AMD website (as of the date of this article):
You may notice that the release date was November 2020. It has taken us this long to get these cards.
One key specification for the AMD Radeon RX 6800 is its 16GB of GDDR6 memory. This is more than the NVIDIA GeForce RTX 3070 (8GB) and RTX 3080 (10GB).
Testing the AMD Radeon RX 6800
Here is our test configuration:
- Motherboard: ASUS ROG Zenith II Extreme Motherboard
- CPU: AMD Threadripper 3960X (24 cores / 48 Threads)
- GPU: AMD Radeon RX 6800
- Cooling: NZXT Kraken X62
- RAM: 4x Corsair Dominator Platinum RGB 3600 MHz 16GB (64GB Total)
- SSD: Sabrent Rocket 4.0 NVMe PCIe Gen4 x4 M.2 SSD
- PSU: EVGA Supernova 1600 T2
- OS: Windows 10 Pro
Here is the obligatory GPU-Z shot of the AMD Radeon RX 6800:
GPU-Z shows the primary stats of the AMD Radeon RX 6800 we are testing. The GPU clocks in at 1815 MHz and can boost up to 2105 MHz. The pixel fillrate runs at 101.0 GPixel/s and the texture fillrate comes in at 505.2 GTexel/s, while the memory runs at 2000 MHz. We see 16GB of GDDR6 memory on the AMD Radeon RX 6800.
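As a sanity check on the GPU-Z numbers, the texture fillrate and memory bandwidth follow directly from AMD's published RX 6800 specs (240 texture units, a 256-bit GDDR6 bus). A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the GPU-Z figures above, assuming AMD's
# published RX 6800 unit counts: 240 texture units and a 256-bit memory bus.

BOOST_CLOCK_MHZ = 2105   # boost clock reported by GPU-Z
TEXTURE_UNITS = 240      # TMUs per AMD's RX 6800 specs
MEM_CLOCK_MHZ = 2000     # memory clock reported by GPU-Z
BUS_WIDTH_BITS = 256     # RX 6800 memory bus width

# Texture fillrate = TMUs x boost clock
gtexels = TEXTURE_UNITS * BOOST_CLOCK_MHZ / 1000  # GTexel/s
print(f"Texture fillrate: {gtexels:.1f} GTexel/s")  # 505.2, matching GPU-Z

# GDDR6 transfers 8 bits per pin per clock, so 2000 MHz -> 16 Gbps effective
bandwidth_gbs = MEM_CLOCK_MHZ * 8 * BUS_WIDTH_BITS / 8 / 1000  # GB/s
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 512 GB/s
```

The 505.2 GTexel/s figure matches GPU-Z exactly, which confirms GPU-Z is computing its fillrates off the boost clock rather than the base clock.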
Let us move on and start our testing with computing-related benchmarks.
Will you include video encoding benchmarks? Given how popular Plex is in the home lab space, I would expect video encoding to be very relevant.
Wanna know the real dirty little secret with your new card, William? ROCm does not support RDNA GPUs. The last AMD consumer cards that ROCm supported were the Vega 56/64 and their 7nm die shrink, the Radeon VII.
Got an RX 5000 or RX 6000 series card, or any version of APU? Well, you get to use OpenCL. Aren’t you lucky?
Nvidia has supported CUDA on every GPU since at least the 8800 GT. I can’t imagine how AMD expects to get ROCm into the upcoming national labs when the only modern card it will work on is the MI100. Ever try to buy an MI100 (or MI50)? It is basically impossible to find an AMD reseller that will even condescend to speak to a small ISV.
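For anyone wanting to verify what the ROCm stack actually sees on their own machine, here is a minimal sketch using the real `rocminfo` CLI that ships with ROCm. The helper name and the parsing are my own; it simply returns an empty list on machines where ROCm is not installed:

```python
import shutil
import subprocess

def rocm_gfx_targets():
    """Return the gfx ISA names that rocminfo reports (e.g. 'gfx906' for
    Vega 20 / Radeon VII), or an empty list if ROCm is not installed."""
    if shutil.which("rocminfo") is None:
        return []  # no ROCm runtime on this machine
    out = subprocess.run(["rocminfo"], capture_output=True, text=True)
    # rocminfo lists agents with lines like "  Name:    gfx906"
    return sorted({tok for line in out.stdout.splitlines()
                   for tok in line.split() if tok.startswith("gfx")})

print(rocm_gfx_targets())  # empty list on systems without ROCm
```

On an RX 6800 system at the time of this thread, this would come back empty even with ROCm installed, which is exactly the complaint above.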
I find all these reviews and release news for both AMD and Nvidia cards a joke at the moment. As an end user, I can’t ever find any in stock, no matter how deep my pockets!
I’m not just talking about STH
Why have the new reviews suddenly decided to skip the 3080 altogether?
Hi Ryuk – we still have not been able to grab one for testing.
Pure junk selling and USA warranty evading: AMD still owes me a video board, since I did not even get half a year of performance from the three very poorly designed and QA’d Vega Frontier Editions.
AMD wants all the selfish benefits of consumer sales, but none of the mature responsibilities.
The replacement warranty boards from AMD were all junk. The first did not last more than two days without crashing (BSOD); the second, after a wait of about a month, lasted just one day before crashing. I get the impression that, with no one watching AMD, they simply return “defective boards” as replacements and then wash their hands of it, honoring nothing and not respecting the customer.
Worse, this company then sought to abuse USA consumer protection laws by expecting its customers in the USA to send the defective product OUT OF COUNTRY, having no USA depot.
Park McGraw above ^^ had a faulty system (likely PSU or motherboard) that was making graphics cards either not work or actually break/fail, and then decided to blame AMD for it…
You didn’t get three faulty graphics cards in a row… Basic silicon engineering says that getting three GPU duds in a row is practically an impossibility (unless the product itself had a fundamental device-killing flaw, which Vega 10 did not). In other words, it was YOUR SYSTEM that was killing the cards!
And to emerth: I personally wouldn’t expect ROCm to EVER come to RDNA. API translation seriously isn’t easy, so keeping things limited to just two instruction sets (modern CUDA to GCN, plus CDNA, which uses the GCN ISA and is basically just GCN with the “graphics” cut out) likely cuts down the work and difficulty dramatically!
Not to mention that even IF RDNA did support ROCm, performance vs Nvidia would still be total crap because of the stark lack of raw FP compute! (AMD prioritized pixel pushing dramatically over raw compute with RDNA 1 and 2 to get competitive gaming performance and perf/W, with only RDNA 3 starting to ever so slightly reverse course on that front.)
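The FP32 gap is easy to put rough numbers on from the published shader counts and boost clocks, using the usual peak-throughput formula (shaders × 2 FLOPs per clock for an FMA × clock). A quick comparison against the RTX 3080:

```python
def fp32_tflops(shaders, boost_mhz):
    """Peak FP32 TFLOPS = shaders x 2 FLOPs/clock (one FMA) x clock."""
    return shaders * 2 * boost_mhz / 1e6

# Published specs: RX 6800 has 3840 stream processors at 2105 MHz boost;
# RTX 3080 has 8704 CUDA cores at 1710 MHz boost.
rx_6800  = fp32_tflops(3840, 2105)
rtx_3080 = fp32_tflops(8704, 1710)
print(f"RX 6800:  {rx_6800:.2f} TFLOPS")   # ~16.17
print(f"RTX 3080: {rtx_3080:.2f} TFLOPS")  # ~29.77
```

These are peak numbers only; Ampere’s doubled-FP32 headline figure is not always achievable in real workloads, but the gap on paper is real.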
AMD just doesn’t give a crap, whatsoever, about the hobbyist AI/machine learning market. Nvidia’s just got way, WAAAAAY too much dominance there for it to be worth AMD spending basically ANY time & effort to try and assault it. Especially when CDNA is absolutely beating the everliving SH!T out of Nvidia in the HPC & supercomputer market!