Misleading Nvidia marketing
Nvidia’s recent video showing off the effects of a 500 Hz monitor is egregiously misleading. It has two major problems:
It shows massive latency differences. The latency differences are in reality fairly small if you make any attempt to optimize for latency whatsoever, while Nvidia did the exact opposite for this test. They show about a 16 millisecond (ms) difference between 500 Hz and 240 Hz; a more realistic comparison using the same monitors would show about 2 ms in the average case and 4 ms on the worst-case frames. Against the 144 Hz monitor, they show about a 26 ms improvement where a more realistic test would show about 4.5 ms in the average case and 8 ms in the worst case.
It shows both the latency and smoothness differences between 240 Hz and 500 Hz as bigger than those between 144 Hz and 240 Hz. They aren’t. If you think of it as milliseconds per frame instead of frames per second, the diminishing returns inherent to extreme refresh rates become clear: 144 Hz is 6.94 ms per frame, 240 Hz is 4.17 ms per frame, and 500 Hz is 2 ms per frame. The step from 144 Hz to 240 Hz saves 2.78 ms per frame; the step from 240 Hz to 500 Hz saves only 2.17 ms.
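The per-frame math is easy to verify yourself. A quick sketch, pure arithmetic:

```python
# Frame times for the three refresh rates in question, and how much each
# upgrade step actually saves per frame.

def frame_time_ms(hz: float) -> float:
    """Duration of one refresh cycle in milliseconds."""
    return 1000.0 / hz

rates = [144, 240, 500]
times = {hz: frame_time_ms(hz) for hz in rates}

for hz, t in times.items():
    print(f"{hz} Hz -> {t:.2f} ms per frame")

# The upgrade steps, in milliseconds saved per frame:
print(f"144 -> 240 Hz saves {times[144] - times[240]:.2f} ms per frame")
print(f"240 -> 500 Hz saves {times[240] - times[500]:.2f} ms per frame")
```

The second jump is smaller than the first in every way that matters physically, no matter how much bigger it looks in frames per second.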
A 500 Hz monitor is legitimately a very cool thing, but it’s important to be realistic about what it can and can’t do. Nvidia would rather you not be realistic. When you spend money on this kind of thing thinking it’s better than it is, a lot of it ends up in Nvidia’s pocket.
Latency effects of monitors
Decent monitors being used at their native resolution add latency in three main ways: scan-out, pixel transition time, and (situationally) pipeline backpressure. (TVs, monitors operating at non-native resolutions, and just plain bad monitors are likely to have more.)
Scan-out
Transferring data from the GPU to the monitor is a non-trivial process called scan-out. In most cases it only happens as fast as it needs to, so with a 240 Hz monitor each frame (from the monitor’s perspective) spends about 4 ms in-flight. This mostly doesn’t add a whole 4 ms of latency though, because the monitor does a rolling refresh, updating pixels as the data for them comes in. Most monitors refresh top-to-bottom, so when some form of vsync is active (as G-Sync is here) the top of the monitor has near-zero scan-out latency, the bottom has the full 4 ms, and the center (where you’re likely focused most often when gaming) has about 2 ms.
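The rolling-refresh behavior is simple enough to model directly. A minimal sketch, assuming a top-to-bottom scan that takes one full refresh interval (real panels may scan somewhat faster, but this is the common case described above):

```python
# Scan-out latency as a function of vertical screen position, assuming a
# top-to-bottom rolling refresh that takes one full refresh interval.

def scanout_latency_ms(refresh_hz: float, position: float) -> float:
    """position: 0.0 = top of screen, 0.5 = center, 1.0 = bottom."""
    return position * 1000.0 / refresh_hz

for hz in (144, 240, 500):
    top = scanout_latency_ms(hz, 0.0)
    center = scanout_latency_ms(hz, 0.5)
    bottom = scanout_latency_ms(hz, 1.0)
    print(f"{hz} Hz: top {top:.1f} ms, center {center:.1f} ms, bottom {bottom:.1f} ms")
```

At 240 Hz this reproduces the numbers above: near-zero at the top, about 2 ms at the center, about 4 ms at the bottom.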
Not using any form of vsync adds variance. The minimum and maximum latencies stay the same, but you can’t similarly predict what latency you’ll get within that range for actions more complex than mouselook.
Pixel transition time
Individual pixels take a little bit to change from one color to another, and this adds to latency. Most of the confusion in this area is because the transition time isn’t a fixed number but a curve even for a transition from one specific color to another, and the curve changes depending on the start and end colors. This makes putting a millisecond number on it a bit ambiguous.
The transition time numbers that make sense for latency are lower than for blur. A pixel could get most of a transition done quickly, making the new image visible quickly for low latency, but then take ages to settle into the final value, causing terrible blur.
Pipeline backpressure
This one takes a bit of background. Games typically pipeline their work: while frame 1 is in scan-out, frame 2 is being rendered on the GPU, frame 3 is having its rendering work prepared on the CPU, and frame 4 is being simulated on the CPU (or something along these lines). Each of these is called a pipeline stage. By letting the work overlap like this, you can get far higher framerates. The trouble is in managing latency. Say all you have is one CPU stage and one GPU stage, and you’re strongly GPU-bound; when do you start work on the next frame on the CPU? Starting too early adds latency, but starting too late reduces your framerate. CPU frame times are unlikely to be perfectly predictable, so there’s usually no perfect answer to this.
If you have an Nvidia GPU, low latency mode (LLM) is off, you’re playing a game that doesn’t attempt to manage this latency itself (which is most of them), and you get GPU-bound or vsync-bound, latency is going to be terrible, likely to the tune of an extra 2 to 4 frames piled up between pipeline stages. If any of these things isn’t true, it’ll probably be at least decent. (There are more exceptions, but you’re much less likely to run into them.)
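A rough sketch of why a few piled-up frames hurt so much: when you’re vsync-bound, the display drains the queue at exactly one frame per refresh interval, so every frame queued ahead of yours adds one full refresh interval of latency. These numbers are illustrative arithmetic, not measurements:

```python
# Rough model: when vsync-bound, the display consumes one frame per refresh
# interval, so each queued frame ahead of yours adds a full interval of
# latency. Illustrative, not measured.

def backpressure_latency_ms(refresh_hz: float, queued_frames: int) -> float:
    return queued_frames * 1000.0 / refresh_hz

for hz in (144, 240):
    for queued in (2, 4):  # the "extra 2 to 4 frames" range from above
        extra = backpressure_latency_ms(hz, queued)
        print(f"{hz} Hz, {queued} frames queued: ~{extra:.1f} ms extra latency")
```

For what it’s worth, three queued frames works out to about 20.8 ms at 144 Hz and 12.5 ms at 240 Hz, which is in the neighborhood of the otherwise-unexplained gaps in Nvidia’s test.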
Where this intersects with monitors is in getting vsync-bound. (Being capped by G-Sync is effectively being vsync-bound for this purpose.) If your framerate is vsync-bound and work is piling up and ruining your latency, you have three basic options. In order of decreasing effectiveness:
Don’t be vsync-bound. You could do this by switching to a vsync mode that doesn’t limit your framerate (no sync, generic adaptive-sync, fast sync, or enhanced sync), or you could set some other framerate limiter a bit below your monitor’s refresh rate (this is mostly fine for smoothness in the case of adaptive sync).
Fix the buffering so that work doesn’t pile up so readily, even while still vsync-bound. Mainly, turn LLM on. AMD’s Windows drivers don’t even let you get into such a bad situation. There may also be game-specific fixes.
Raise your refresh rate, probably via an expensive new monitor.
One of these isn’t like the other two. Only one hits your wallet in a negative way and likely Nvidia’s in a positive way.
I don’t generally consider pipeline backpressure a core part of a monitor’s performance: if latency is a high priority for you, you’re simply not going to let work pile up this way no matter how slow or fast the monitor itself is. There are plenty of ways to fix it.
Nvidia’s test setup
Nvidia appears to get the latency results they do by letting pipeline backpressure ruin everything for the 144 Hz and 240 Hz monitors but not for the 500 Hz monitor. The slower monitors are vsync-bound and look to have every bit of latency trouble that can entail. They say the 500 Hz monitor is running 500 FPS as well, but looking at the video frame-by-frame it clearly isn’t. It’s not perfectly consistent and I’m not going to spend too long trying to nail that down more precisely (the exact number isn’t important anyway), but I’d guess it’s averaging something like 450. If it’s bound by the first pipeline stage (which is likely), this keeps a minimum of work buffered and keeps latency to a bare minimum, in the harshest possible contrast to the other two tests.
Let’s break down the gaps between the monitors. It looks like about 26 ms total between the 144 Hz and 500 Hz tests and 16 ms total between the 240 Hz and 500 Hz tests.
The effective latency added by pixel transition times looks to be a bit under 2 ms for the 144 Hz monitor, a bit over 1 ms for the 240 Hz monitor, and well under 1 ms for the 500 Hz monitor. Let’s round and stretch these to 2, 1, and 0, conservatively assuming the best of their explanatory power. All of these test setups have minimal scan-out latency at the top of the screen and about half a frame’s worth in the center (where the camera is pointed), so we can cancel that out easily too: scan-out latency is about 3.5 ms for the 144 Hz monitor, 2 ms for the 240 Hz monitor, and 1 ms for the 500 Hz monitor.
Subtracting those known numbers from the gaps, the remainder is 20.5 ms between the 144 Hz and 500 Hz monitors and 13 ms between the 240 Hz and 500 Hz monitors. If the test is otherwise fair, this remainder is all due to backpressure, which shouldn’t play into monitor selection anyway for anyone serious about low latency.
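Spelled out, the arithmetic is straight subtraction. (If I’m reading the rounding right, the 500 Hz monitor’s own ~1 ms of scan-out isn’t credited back; that only shrinks the remainder, so it stays conservative.)

```python
# Reproducing the subtraction above. Transition and scan-out values are the
# rounded per-monitor estimates from the text; the 500 Hz monitor's own small
# contributions are treated as zero, which errs on the conservative side
# (a smaller remainder attributes less to backpressure).

gaps = {144: 26.0, 240: 16.0}        # measured gap vs. the 500 Hz monitor, ms
transition = {144: 2.0, 240: 1.0}    # effective pixel-transition latency, ms
scanout = {144: 3.5, 240: 2.0}       # scan-out latency at screen center, ms

for hz, gap in gaps.items():
    remainder = gap - transition[hz] - scanout[hz]
    print(f"{hz} Hz vs 500 Hz: {remainder:.1f} ms unexplained (backpressure)")
```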
The remaining numbers are still really weird. That’s not quite how backpressure normally behaves. Either Valorant responds to backpressure strangely or the test is biased in some additional way I can’t see from these results. Unfortunately, any Windows installation which has had Valorant installed on it is no longer trustworthy for most other benchmarking and performance investigation, so I can’t easily check out Valorant’s behavior for myself right now.
This is bad even for tech marketing
The potential impact of misleading marketing is inversely proportional to how well buyers understand the problem space.
If Intel or AMD claimed their next generation of CPUs would have twice the single-threaded performance of their current ones, there’d be no shortage of skepticism. We would at least want to see some very good technical reasons for such a thing to be possible. Absent any particular technical info, people would conclude that it must just be something like wider vector units: a change that doubles performance in specific cases, but not more generally. It’s well-understood that a 30% real single-threaded improvement per generation is difficult enough to achieve as it is.
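For scale, here’s what that cadence implies (simple compounding, using the ~30% figure from the paragraph above):

```python
# At a (generous) 30% single-threaded gain per generation, how many
# generations does a true 2x take? Straight compounding, nothing more.

per_gen = 1.30
factor, generations = 1.0, 0
while factor < 2.0:
    factor *= per_gen
    generations += 1
    print(f"after generation {generations}: {factor:.2f}x")
```

Even at that pace, doubling takes three generations; a claimed 2x in one generation would rightly be met with disbelief.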
Latency is not so well-understood, so Nvidia can make ludicrous claims about it and probably benefit anyway.
Say you know how the latency difference between a 144 Hz monitor and a 240 Hz monitor feels, and you’re sensitive enough to latency to think it’s important. This Nvidia test comes out showing the difference between 240 Hz and 500 Hz as being about half again bigger than that difference you already know. Is it tempting to buy? How disappointing will it be when the actual difference is smaller?
Say you look at the video frame-by-frame and get some more precise numbers (about 16 ms and 26 ms), but don’t catch that the numbers don’t make sense. Those numbers are huge. Getting any 16 ms improvement from any halfway decent starting point is extremely difficult. Is it tempting to buy? How disappointing will it be when it isn’t the claimed revolution?