AMD Radeon 680M vs NVIDIA GeForce RTX 2080 Max-Q

Performance
Radeon 680M: 10,371 (10% of 104,598)
GeForce RTX 2080 Max-Q: 27,973 (27% of 104,598), 170% better

The GeForce RTX 2080 Max-Q has 170% better performance than the Radeon 680M for the 3DMark 11 Performance GPU benchmark.
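The "170% better" figure follows directly from the two benchmark scores shown above; a minimal sketch of the arithmetic:

```python
# 3DMark 11 Performance GPU scores from the comparison above
radeon_680m = 10_371
rtx_2080_max_q = 27_973

# "X% better" is the relative difference between the two scores
pct_better = (rtx_2080_max_q / radeon_680m - 1) * 100
print(f"{pct_better:.0f}% better")  # -> 170% better
```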

Performance per dollar
Radeon 680M
No data available
GeForce RTX 2080 Max-Q
No data available

We do not have any performance per dollar data for the Radeon 680M and the GeForce RTX 2080 Max-Q for the 3DMark 11 Performance GPU benchmark.


Summary

About the AMD Radeon 680M GPU

The AMD Radeon 680M is a mobile graphics card that launched in Q1 2022. It is built on the RDNA 2.0 GPU microarchitecture (codename Rembrandt) and is manufactured on a 6 nm process.

Cores and Clock Speeds

The 680M includes 768 stream processors (SPs), the processing units for handling parallel computing tasks. The GPU operates at a core clock speed of 2,000 MHz and can dynamically boost its clock speed up to 2,200 MHz. Complementing the processing units are 48 texture mapping units (TMUs) for efficient texture filtering and 32 render output units (ROPs) for pixel processing. Additionally, the GPU features 12 ray accelerators dedicated to real-time ray tracing calculations.

Compatibility & Power Consumption

The GPU has a thermal design power (TDP) of 50 W. A power supply that cannot sustain this draw may cause system crashes and can potentially damage your hardware.

Benchmark Performance

The 680M has the 176th best 3DMark 11 Performance GPU score among the 543 benchmarked GPUs in our database. It achieves 9.92% of the performance of the best benchmarked GPU, the NVIDIA GeForce RTX 4090.

About the NVIDIA GeForce RTX 2080 Max-Q GPU

The NVIDIA GeForce RTX 2080 Max-Q is a mobile graphics card that launched in Q1 2019. It is built on the Turing GPU microarchitecture (codename TU104) and is manufactured on a 12 nm process.

Memory

The RTX 2080 Max-Q has 8 GB of GDDR6 memory, with a 1,500 MHz memory clock and a 256-bit interface. This gives it a memory bandwidth of 384 GB/s, which determines how fast it can transfer data to and from memory. GPU memory stores temporary data used in complex math and graphics operations. More memory is generally better, as too little can cause performance bottlenecks.
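The quoted bandwidth can be reproduced from the memory clock and bus width. The ×8 factor below is the standard GDDR6 effective-data-rate multiplier, an assumption not stated on this page:

```python
memory_clock_mhz = 1_500   # base memory clock from the specs above
bus_width_bits = 256
gddr6_multiplier = 8       # effective transfers per clock for GDDR6 (assumed)

effective_rate_mtps = memory_clock_mhz * gddr6_multiplier   # 12,000 MT/s
bandwidth_gb_s = effective_rate_mtps * bus_width_bits / 8 / 1_000
print(f"{bandwidth_gb_s:.0f} GB/s")  # -> 384 GB/s
```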

Cores and Clock Speeds

The RTX 2080 Max-Q includes 2,944 CUDA cores, the processing units for handling parallel computing tasks. The GPU operates at a core clock speed of 735 MHz and can dynamically boost its clock speed up to 1,095 MHz. Complementing the processing units are 184 texture mapping units (TMUs) for efficient texture filtering and 64 render output units (ROPs) for pixel processing. Additionally, the GPU features 368 tensor cores optimized for AI-accelerated workloads and 46 RT cores dedicated to real-time ray tracing calculations.

Compatibility & Power Consumption

The GPU has a thermal design power (TDP) of 80 W. A power supply that cannot sustain this draw may cause system crashes and can potentially damage your hardware.

Benchmark Performance

The RTX 2080 Max-Q has the 84th best 3DMark 11 Performance GPU score among the 543 benchmarked GPUs in our database. It achieves 26.74% of the performance of the best benchmarked GPU, the NVIDIA GeForce RTX 4090.
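Both "percent of best" figures in these sections are each card's score divided by the RTX 4090's top score of 104,598; a quick check:

```python
best_score = 104_598  # NVIDIA GeForce RTX 4090, best benchmarked GPU

for name, score in [("Radeon 680M", 10_371), ("RTX 2080 Max-Q", 27_973)]:
    # Share of the best GPU's 3DMark 11 Performance score
    print(f"{name}: {score / best_score * 100:.2f}% of best")
# -> Radeon 680M: 9.92% of best
# -> RTX 2080 Max-Q: 26.74% of best
```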

General Info

A general overview of each GPU, including details like its manufacturer, release date, launch price, and current production status.

Info              | Radeon 680M | GeForce RTX 2080 Max-Q
Manufacturer      | AMD         | NVIDIA
Architecture      | RDNA 2.0    | Turing
Market Segment    | Mobile      | Mobile
Release Date      | Q1 2022     | Q1 2019
Production Status | Active      | Active

Gaming Performance

Select a game to compare FPS metrics:

  • F1 24
  • Sons of the Forest
  • Prince of Persia: The Lost Crown
  • Alan Wake 2
  • Lords of the Fallen (2023)
  • Total War: PHARAOH
  • Assassin's Creed Mirage

FPS Benchmarks

This table showcases the average frame rate (FPS) achieved by both GPUs at various resolutions. Frame rate is a crucial indicator of how smoothly the GPU can run the game. A higher FPS generally translates to a smoother gameplay experience.

F1 24 (Frames Per Second)
Setting        | Radeon 680M | GeForce RTX 2080 Max-Q
Low - 1080p    | 55 FPS      | --
Medium - 1080p | 46 FPS      | --
High - 1080p   | 34 FPS      | --
Ultra - 1080p  | 7 FPS       | --
QHD - 1440p    | --          | --
4K UHD - 2160p | --          | --
FPS Source: Notebookcheck

Compare Frames Per Second (FPS)

The average frame rate (FPS) can be compared with similar GPUs to assess relative performance. Generally, higher FPS results in a smoother gameplay experience.

GPU                    | Frames Per Second
GeForce MX450          | 13.4 (+81%)
RTX A500 Mobile        | 11.7 (+58%)
Radeon 680M (baseline) | 7.41
Radeon 660M            | 5.6 (-24%)
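The relative percentages in this comparison are each GPU's FPS measured against the Radeon 680M baseline; a sketch reproducing them:

```python
baseline = 7.41  # Radeon 680M average FPS

others = {"GeForce MX450": 13.4, "RTX A500 Mobile": 11.7, "Radeon 660M": 5.6}
for name, fps in others.items():
    # Relative difference versus the baseline GPU
    rel = (fps / baseline - 1) * 100
    print(f"{name}: {rel:+.0f}%")
# -> GeForce MX450: +81%
# -> RTX A500 Mobile: +58%
# -> Radeon 660M: -24%
```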

Compare Cost Per Frame

The average cost per frame can be compared with similar GPUs to assess relative value. Generally, a lower cost per frame implies better value for your money.
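Cost per frame is simply price divided by average FPS. The numbers below are hypothetical placeholders, since this page has no pricing data for either card:

```python
def cost_per_frame(price_usd: float, avg_fps: float) -> float:
    """Lower is better: dollars spent per frame of performance."""
    return price_usd / avg_fps

# Hypothetical example: a $300 card averaging 60 FPS
print(f"${cost_per_frame(300, 60):.2f} per frame")  # -> $5.00 per frame
```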

Our database does not have enough data to compare the FPS per dollar with other GPUs.

Benchmark Performance


The Radeon 680M is ranked 176th with a score of 10,371, and the GeForce RTX 2080 Max-Q is ranked 84th with a score of 27,973.


Relative Performance

The average score in the benchmark test can be compared to similar GPUs to assess relative performance. Generally, powerful GPUs tend to have higher scores.

Relative Value For Money

The average performance per dollar in the benchmark test can be compared to similar GPUs to assess relative value. A higher score implies a better value for your money.

Our database does not have enough data to compare the benchmark performance per dollar with other GPUs.

Benchmark Scores

This table showcases the average performance scores achieved by both GPUs across industry-standard benchmark tests. These scores provide a valuable insight into overall performance. Powerful GPUs tend to have higher scores.

Benchmark                            | Radeon 680M    | GeForce RTX 2080 Max-Q
3DMark Time Spy Graphics             | 2,303          | 7,923 (+244.03%)
3DMark Time Spy Score                | 2,580          | 7,484 (+190.08%)
3DMark Cloud Gate Graphics           | 43,225         | 117,764 (+172.44%)
3DMark Fire Strike Standard Graphics | 6,865          | 20,703 (+201.57%)
3DMark Night Raid Graphics           | --             | 90,166
3DMark 11 Performance Score          | 10,326         | 21,067 (+104.02%)
Cinebench R15 OpenGL 64 Bit          | 144.6 (+20.8%) | 119.7
PassMark G3D Mark                    | --             | 13,063
PassMark G2D Mark                    | --             | 534
Benchmarks Source: Notebookcheck

Technical Specs

Graphics Processor

General information about each graphics processing unit, such as its architecture, manufacturing process size, and transistor count. Newer GPU architectures generally bring efficiency improvements and may introduce technologies that enhance graphical capabilities.

Spec         | Radeon 680M    | GeForce RTX 2080 Max-Q
Codename     | Rembrandt      | TU104
Architecture | RDNA 2.0       | Turing
Process Size | 6 nm           | 12 nm
Transistors  | 13,100 million | 13,600 million

Memory Details

Memory specifications such as capacity, bandwidth, and clock speeds. GPU memory stores graphics data like frames, textures, and shadow maps used to display rendered images. These specs are crucial for graphics-intensive applications like gaming and 3D modeling.

Spec             | Radeon 680M      | GeForce RTX 2080 Max-Q
Memory Size      | System Shared    | 8 GB
Memory Type      | System Shared    | GDDR6
Memory Bandwidth | System Dependent | 384 GB/s
Memory Clock     | System Shared    | 1,500 MHz
Memory Interface | System Shared    | 256 bit
L1 Cache         | 128 KB           | 64 KB
L2 Cache         | 2 MB             | 4 MB

Board Compatibility

Compatibility information such as slot size, bus interface, power consumption, and display support. These specs are useful for verifying compatibility with your motherboard, power supply, and monitor.

Spec                       | Radeon 680M      | GeForce RTX 2080 Max-Q
Bus Interface              | PCIe 4.0 x8      | PCIe 3.0 x16
Thermal Design Power (TDP) | 50 W             | 80 W
Outputs                    | Device Dependent | Device Dependent

Cores & Clock Speeds

Processing power information such as core counts and clock speeds. These specs affect how fast each GPU can process graphics. Each type of core or component serves a specific computational purpose.

Spec                           | Radeon 680M | GeForce RTX 2080 Max-Q
Stream Processors (SP)         | 768         | --
CUDA Cores                     | --          | 2,944
Compute Units (CU)             | 12          | --
Streaming Multiprocessors (SM) | --          | 46
Texture Mapping Units (TMU)    | 48          | 184
Render Output Units (ROP)      | 32          | 64
Tensor Cores                   | --          | 368
Ray Accelerators               | 12          | --
Ray Tracing Cores              | --          | 46
Core Clock Speed               | 2,000 MHz   | 735 MHz
Core Clock Speed (Boost)       | 2,200 MHz   | 1,095 MHz

Theoretical Performance

Theoretical performance numbers derived from the raw specifications of the different components like core count and clock speeds. While these provide a glimpse into peak processing power, they do not represent real-world performance.

Spec              | Radeon 680M    | GeForce RTX 2080 Max-Q
Pixel Fill Rate   | 70.4 GPixel/s  | 70.08 GPixel/s
Texture Fill Rate | 105.6 GTexel/s | 201.5 GTexel/s
FP32 Performance  | 3.38 TFLOPS    | 6.45 TFLOPS
FP64 Performance  | 211.2 GFLOPS   | 201.5 GFLOPS
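These theoretical figures follow from the spec tables above: pixel fill rate is ROPs times boost clock, texture fill rate is TMUs times boost clock, and FP32 throughput is shader count times 2 (one fused multiply-add per clock) times boost clock. A sketch reproducing the Radeon 680M column:

```python
# Radeon 680M specs from the tables above
rops, tmus, shaders = 32, 48, 768
boost_ghz = 2.2  # 2,200 MHz boost clock

pixel_fill = rops * boost_ghz                   # GPixel/s
texture_fill = tmus * boost_ghz                 # GTexel/s
fp32_tflops = shaders * 2 * boost_ghz / 1_000   # 2 FLOPs per shader per clock

print(f"{pixel_fill:.1f} GPixel/s, {texture_fill:.1f} GTexel/s, "
      f"{fp32_tflops:.2f} TFLOPS")
# -> 70.4 GPixel/s, 105.6 GTexel/s, 3.38 TFLOPS
```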

API Support

Graphics API versions supported by these graphics cards. APIs evolve over time, introducing new features and functionalities. Older GPUs may not support recent versions.

Spec         | Radeon 680M        | GeForce RTX 2080 Max-Q
DirectX      | 12 Ultimate (12_2) | 12 Ultimate (12_2)
OpenCL       | 2.0                | 3.0
OpenGL       | 4.6                | 4.6
Shader Model | 6.8                | 6.8


* Performance rating, performance per dollar, and rankings are based on the 3DMark 11 Performance GPU benchmark and MSRP.