Have you ever heard a tech phrase or buzzword you didn’t recognize? Maybe something spun as a hot new feature or an advancement over the previous game in a series. You may have seen a specific technique mentioned by a developer at a conference or in a sizzle reel, but only in passing, never with an explanation of what it actually is. I’m of the mindset that knowing how a piece of art works can only improve our appreciation of the art. What I want to do here is shed some light on specific graphics techniques and how to spot them in-game, and touch on each technique’s strengths and uses.
The first technique I want to cover blossomed into popularity in the era of the Xbox 360 and PS3. Deferred shading, or deferred lighting, might be something you’ve heard in the news or in the hype cycle for a game. I distinctly remember articles on Aliens: Colonial Marines talking up its “next gen” lighting, called deferred lighting. Five years later, deferred lighting is still in wide use.
Deferred shading and deferred lighting are mostly interchangeable, so I won’t go into the differences. Both are variants of a type of technique called “screen space shading,” meaning objects are shaded together as a scene. Traditionally, objects would be drawn and shaded one at a time. With deferred shading, instead of drawing, coloring, and lighting one object at a time, each stage is done for the whole scene at once: all the objects have their textures drawn to one buffer, their normals drawn to another for lighting, and so on. What’s left is a set of buffers that each hold one aspect of the final scene but still need to be combined. These buffers are commonly referred to as G-buffers, and combining them gets you the final scene seen in game. The big advantage of deferred shading is the number of lights that can be used.
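To make the buffer idea a bit more concrete, here’s a minimal sketch in Python, with plain lists standing in for GPU buffers. Nothing here comes from a real engine or graphics API; the names and data layout are my own simplification of a geometry pass writing into G-buffers and a combine pass reading them back out.

```python
# Minimal G-buffer sketch: plain Python lists stand in for GPU buffers.

WIDTH, HEIGHT = 4, 3  # a tiny "screen" so the example stays readable

# One G-buffer per attribute, each the size of the screen.
gbuffer = {
    "albedo": [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)],  # surface color
    "normal": [[(0, 0, 1)] * WIDTH for _ in range(HEIGHT)],  # surface direction
    "depth":  [[1.0] * WIDTH for _ in range(HEIGHT)],        # distance from camera
}

def geometry_pass(objects):
    """Write every visible object's attributes into the G-buffers."""
    for obj in objects:
        for (x, y), albedo, normal, depth in obj["pixels"]:
            # Keep only the closest surface at each pixel (depth test).
            if depth < gbuffer["depth"][y][x]:
                gbuffer["albedo"][y][x] = albedo
                gbuffer["normal"][y][x] = normal
                gbuffer["depth"][y][x] = depth

def combine_pass():
    """Read the G-buffers back out to build the final image (lighting would go here)."""
    return [[gbuffer["albedo"][y][x] for x in range(WIDTH)] for y in range(HEIGHT)]

# Write one red pixel from a pretend object, then combine.
geometry_pass([{"pixels": [((0, 0), (255, 0, 0), (0, 0, 1), 0.5)]}])
print(combine_pass()[0][0])  # -> (255, 0, 0)
```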
Let’s say we have a scene with 4 objects and 6 lights. Traditionally, we’d take the first object, light it with all the lights, then move on to the second. By the end of this, we’ve calculated each of the 6 lights once for each of the 4 objects, meaning 24 lighting calculations. The more objects that need to be lit, the more times the lights get drawn. In deferred, we draw all our objects out to the G-buffers, then draw the lights using the info stored in them. This means that if we have 6 lights in the scene, we draw 6 lights. It doesn’t matter if we have 4 objects or 40; they’ve all been stored in the G-buffers, and those G-buffers are lit just the one time with the 6 lights. And because lighting is only applied to the image after it’s drawn, the only pixels being lit are the ones the player actually sees; we’re not lighting an object hidden behind another one or anything like that.
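If you want that arithmetic spelled out, here’s a rough sketch of the cost comparison, assuming one lighting calculation per object/light pair in forward rendering and one pass per light over the G-buffer in deferred. Real engines complicate this with culling and tiling, but the shape of the math is the same.

```python
# Rough cost comparison between the two approaches (illustrative only).

def forward_lighting_cost(num_objects, num_lights):
    # Traditional (forward) rendering: every object is shaded against every light.
    return num_objects * num_lights

def deferred_lighting_cost(num_objects, num_lights):
    # Deferred: objects are written to the G-buffer once, then each light is
    # applied once to that buffer, no matter how many objects there are.
    return num_lights

print(forward_lighting_cost(4, 6))    # 24 lighting calculations
print(deferred_lighting_cost(4, 6))   # 6
print(forward_lighting_cost(40, 6))   # 240 -- grows with the object count
print(deferred_lighting_cost(40, 6))  # still 6
```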
So how can we spot this in game? Well, there are a few simple ways, but first we have to mention one of the limits of deferred shading. G-buffers only store data for what the player sees, not objects behind other objects. Because of this, we can’t properly render transparent objects with a deferred renderer. One way to handle it is to sort the transparent objects out, render the opaque objects with deferred shading, then render the transparent objects the more traditional way. This is a bit expensive, though; sort of like the game engine version of a hybrid vehicle, it has to carry two rendering engines instead of just one. The simpler approach is for the renderer to turn on something called “alpha to coverage.”
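Before we get to alpha to coverage, here’s roughly what that hybrid setup looks like, sketched in Python. Every function and field name here is a placeholder of my own, standing in for entire engine stages rather than anything from a real renderer.

```python
# Hybrid approach (sketch): deferred shading for opaque objects, then a
# traditional forward pass that blends transparent objects over the top.

def deferred_pass(opaque_objects, lights):
    # Stands in for the geometry pass + lighting pass from earlier.
    return {"opaque_drawn": [o["name"] for o in opaque_objects],
            "lights_applied": len(lights),
            "blended": []}

def forward_blend(image, obj, lights):
    # Stands in for shading one transparent object against every light and
    # blending the result over the already-lit image.
    image["blended"].append(obj["name"])
    return image

def render_frame(objects, lights):
    opaque = [o for o in objects if not o["transparent"]]
    transparent = [o for o in objects if o["transparent"]]

    image = deferred_pass(opaque, lights)

    # Transparent objects go back-to-front, the old-fashioned way.
    for obj in sorted(transparent, key=lambda o: -o["depth"]):
        image = forward_blend(image, obj, lights)
    return image

frame = render_frame(
    [{"name": "wall",   "transparent": False, "depth": 5.0},
     {"name": "window", "transparent": True,  "depth": 3.0},
     {"name": "smoke",  "transparent": True,  "depth": 2.0}],
    lights=["sun", "street lamp"],
)
print(frame)
```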
What alpha to coverage does is take a transparent object and instead draw a stipple pattern of opaque pixels to simulate transparency. Alpha to coverage works great when combined with MSAA (multisample anti-aliasing), as the stipple pattern virtually disappears and it looks much more like traditional transparency, but that’s a subject for another day. The piece of information we need here is that MSAA doesn’t play nice with deferred shading, so while alpha to coverage can stand in for transparency, it can’t hide its stippling in a deferred shading engine.
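Here’s a minimal sketch of that idea, using a tiny hand-made dither pattern in place of the hardware’s actual MSAA coverage mask (which a real implementation would rely on).

```python
def alpha_to_coverage(alpha, x, y):
    """Return True if this pixel should be drawn fully opaque."""
    # A tiny 2x2 ordered-dither pattern: each cell has a threshold, and the
    # pixel is kept only if its alpha clears that threshold.
    thresholds = [[0.25, 0.75],
                  [1.00, 0.50]]
    return alpha >= thresholds[y % 2][x % 2]

# Print an 8x8 patch of a surface that's 50% transparent: roughly half the
# pixels come out fully opaque ('#'), the rest are simply not drawn ('.').
for y in range(8):
    print("".join("#" if alpha_to_coverage(0.5, x, y) else "." for x in range(8)))
```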
Those limits are the biggest clues to whether a game is using deferred shading: transparent objects will either be severely limited or show a visible stipple pattern. As resolutions grow, the stippling becomes harder to spot and less of an issue, but it’s still possible to catch in some of today’s biggest games. Let’s look at some examples.
Let’s start with something easy to spot, like the above screenshot from Metal Gear Solid V: Ground Zeroes. Notice the foliage behind Snake. To clear sight lines, the game will sometimes fade foliage out to allow a better view of the action. That fading out, thanks to alpha to coverage, looks more like dissolving than fading. We can see something similar in Final Fantasy XV, though with character models rather than foliage. Look at the screenshot below. Objects that move into the scene from behind the camera tend to “fade in,” and while they fade in rather quickly, you can still catch them. Notice something odd about Noctis? Not only is he a stippled mess, you can also see the shape of his head through his hair, as the stipple patterns overlap and make some parts darker. There are a few other graphical artifacts in Final Fantasy XV, but those are clues to other techniques we’ll cover later.
As stated before, transparency doesn’t play well with deferred. While most of the problems with transparency have workarounds now, you might still be able to catch this in older games. For example, load up Saints Row 2 and look at your character’s hair through one or more layers of something transparent (car windows, for instance). The same sort of stippling seen above will show up.
In short, if you spot stippled transparencies, or layered transparent objects that eventually show stippling, you’re probably playing a game with deferred shading. With these tips, you should be able to spot deferred rendering engines in your favorite games. What other games have you seen that might use it? How about techniques you’ve heard of and want to know more about? Let us know your thoughts!