If we look at what Direct3D offers in terms of blending functions and blending operations, we can see that each pixel is first multiplied by a number between 0.0 and 1.0. This blend factor determines how much of the pixel's color will influence the final appearance.
Then the two adjusted pixel colors are either added, subtracted, or multiplied; in some functions, the operation is a logic statement where, for example, the brightest pixel is always selected. Image: Taking Initiative tech blog. The above image is an example of how this works in practice; note that for the left hand pixel, the factor used is the pixel's alpha value. This number indicates how transparent the pixel is. The rest of the stages involve applying a fog value, taken from a table of numbers created by the programmer, and doing the same blending math; carrying out some visibility and transparency checks and adjustments; and finally writing the color of the pixel to the memory on the graphics card.
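To make this a bit more concrete, here's a rough sketch in Python of that kind of fixed-function blend. The factor and operation names are purely illustrative (they're not the actual Direct3D API), and the colors are assumed to be RGB values in the 0.0 to 1.0 range:

```python
def blend(src, dst, src_factor, dst_factor, op="add"):
    """Scale each color by its blend factor, then combine with the chosen operation."""
    scaled_src = [c * src_factor for c in src]
    scaled_dst = [c * dst_factor for c in dst]
    if op == "add":
        return [min(s + d, 1.0) for s, d in zip(scaled_src, scaled_dst)]
    if op == "subtract":
        return [max(s - d, 0.0) for s, d in zip(scaled_src, scaled_dst)]
    if op == "max":  # the 'logic statement' style: always keep the brighter value
        return [max(s, d) for s, d in zip(scaled_src, scaled_dst)]
    raise ValueError(f"unknown blend operation: {op}")

# Classic alpha blending: the source factor is the pixel's alpha value and the
# destination factor is one-minus-alpha, with the two scaled colors added together.
src_rgb, src_alpha = (0.9, 0.2, 0.1), 0.75
dst_rgb = (0.1, 0.3, 0.8)
print(blend(src_rgb, dst_rgb, src_alpha, 1.0 - src_alpha))
```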
Why the history lesson? Well, despite the relative simplicity of the design (especially compared to modern behemoths), the process describes the fundamental basics of texturing: get some color values and blend them, so that models and environments look how they're supposed to in a given situation. Today's games still do all of this; the only difference is the number of textures used and the complexity of the blending calculations. Together, they simulate the visual effects seen in movies, or how light interacts with different materials and surfaces.
The basics of texturing

To us, a texture is a flat, 2D picture that gets applied to the polygons that make up the 3D structures in the viewed frame. To a computer, though, it's nothing more than a small block of memory, in the form of a 2D array. Each entry in the array represents a color value for one of the pixels in the texture image (better known as texels: texture pixels).
Every vertex in a polygon has a set of 2 coordinates (usually labelled as u,v) associated with it that tells the computer which pixel in the texture corresponds to it. The vertices themselves have a set of 3 coordinates (x,y,z), and the process of linking the texels to the vertices is called texture mapping.
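As a rough sketch of how that looks in memory, here's some Python using a hypothetical vertex layout: a position (x, y, z) plus a texture coordinate (u, v), and a lookup that turns the u,v pair into an index into the texel array:

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    # Position in 3D space.
    x: float
    y: float
    z: float
    # Texture coordinates: 0.0 to 1.0 spans the whole texture in each direction.
    u: float
    v: float

def sample_nearest(texture, width, height, u, v):
    """Return the texel that a (u, v) pair points at; the texture is just a 2D array."""
    tx = min(int(u * width), width - 1)
    ty = min(int(v * height), height - 1)
    return texture[ty][tx]

# A tiny 2 x 2 'texture' and a vertex mapped near its top-right texel.
texture = [[(255, 0, 0), (0, 255, 0)],
           [(0, 0, 255), (255, 255, 255)]]
corner = Vertex(x=1.0, y=1.0, z=0.0, u=0.99, v=0.0)
print(sample_nearest(texture, 2, 2, corner.u, corner.v))  # (0, 255, 0)
```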
To see this in action, let's turn to a tool we've used a few times in this series of articles: the Real Time Rendering WebGL tool. For now, we'll also drop the z coordinate from the vertices and keep everything on a flat plane. From left-to-right, we have the texture's u,v coordinates mapped directly to the corner vertices' x,y coordinates. Then the top vertices have had their y coordinates increased, but as the texture is still directly mapped to them, the texture gets stretched upwards.
In the far right image, it's the texture that's altered this time: the u values have been raised but this results in the texture becoming squashed and then repeated.
This is because although the texture is now effectively taller, thanks to the higher u value, it still has to fit into the primitive -- essentially the texture has been partially repeated. This is one way of doing something that's seen in lots of 3D games: texture repeating.
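Under the hood, repeating is just a question of how out-of-range u,v values are handled. Here's a minimal sketch of 'repeat' (or wrap) addressing, where coordinates above 1.0 simply wrap around, assuming nearest-texel lookups for simplicity:

```python
def wrap_texel(texture, width, height, u, v):
    """Fetch a texel with 'repeat' addressing: the modulo makes coordinates wrap."""
    tx = int(u * width) % width
    ty = int(v * height) % height
    return texture[ty][tx]

# A 2 x 2 texture; sweeping u from 0.0 to 2.0 across the primitive tiles it twice.
texture = [["A", "B"],
           ["C", "D"]]
print([wrap_texel(texture, 2, 2, u / 4, 0.0) for u in range(8)])
# ['A', 'A', 'B', 'B', 'A', 'A', 'B', 'B']
```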
Common examples of this can be found in scenes with rocky or grassy landscapes, or brick walls. Now let's adjust the scene so that there are more primitives, and we'll also bring depth back into play. What we have below is a classic landscape view, but with the crate texture copied, as well as repeated, across the primitives.
Now that crate texture, in its original GIF format, is 66 kiB in size. Compare its resolution to the resolution of the portion of the frame that the crate textures cover and, in terms of just pixel 'area', that region should only be able to display around 20 crate textures.
We're obviously looking at way more than 20, so a lot of the crate textures in the background must be much smaller than the original image. Indeed they are, and they've undergone a process called texture minification (yes, that is a word!).
Now let's try it again, but this time zoomed right into one of the crates. Don't forget how small the texture actually is, yet here we can see a single copy of it covering more than half the width of the image.
This texture has gone through something called texture magnification. These two texture processes occur in 3D games all the time, because as the camera moves about the scene or models move closer and further away, all of the textures applied to the primitives need to be scaled along with the polygons.
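Whether a texture ends up minified or magnified simply comes down to how many texels have to fit into the pixels the primitive covers on screen. A very simplified, one-axis sketch:

```python
def texels_per_pixel(texture_width, screen_width):
    """Ratio of texels to covered screen pixels along one axis (simplified)."""
    return texture_width / screen_width

for screen_width in (512, 256, 64):
    ratio = texels_per_pixel(256, screen_width)
    state = "minified" if ratio > 1 else "magnified" if ratio < 1 else "1:1"
    print(f"256-texel-wide texture across {screen_width} pixels -> {state} ({ratio:.1f} texels/pixel)")
```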
Mathematically, this isn't a big deal; in fact, it's so simple that even the most basic of integrated graphics chips blitz through such work. However, texture minification and magnification present fresh problems that have to be resolved somehow. The first issue to be fixed is for textures in the distance.
If we look back at that first crate landscape image, the ones right at the horizon are effectively only a few pixels in size. So trying to squash a full-size texture into such a small space is pointless, for two reasons. One, a smaller texture will take up less memory space in a graphics card, which is handy for trying to fit it into a small amount of cache.
That means it is less likely to be removed from the cache, and so repeated use of that texture will gain the full performance benefit of data being in nearby memory. The second reason we'll come to in a moment, as it's tied to the same problem for textures zoomed in. A common solution to the use of big textures being squashed into tiny primitives involves the use of mipmaps.
These are scaled down versions of the original texture; they can be generated by the game engine itself (by using the relevant API command to make them) or pre-made by the game designers. Each mipmap level has half the linear dimensions of the previous one. The mipmaps are all packed together, so that the texture still has the same filename but is now larger.
The texture is packed in such a way that the u,v coordinates not only determine which texel gets applied to a pixel in the frame, but also which mipmap it comes from. The programmers then code the renderer to determine the mipmap to be used based on the depth value of the frame pixel, i.e. if the depth is very high, the pixel is far away, so a smaller mipmap can be used.
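Here's a small Python sketch of both halves of that: building the chain of mipmap sizes (each level half the previous one) and a toy depth-based rule for picking a level. Real renderers actually derive the level from how quickly the u,v coordinates change between neighboring pixels, so the depth rule and the 256 x 256 starting size here are just for illustration:

```python
import math

def mip_chain(width, height):
    """Each mipmap level halves the previous one's dimensions, down to 1 x 1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels.append((width, height))
    return levels

def pick_mip_level(depth, near=1.0, num_levels=9):
    """Toy rule: the further away the pixel, the smaller the mipmap used."""
    return min(int(math.log2(max(depth / near, 1.0))), num_levels - 1)

print(mip_chain(256, 256))                                    # (256, 256) down to (1, 1)
print(pick_mip_level(depth=1.0), pick_mip_level(depth=64.0))  # level 0 near, level 6 far
```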
Sharp-eyed readers might have spotted a downside to mipmaps, though: the texture gets larger. The original crate texture takes up a set amount of memory, but as you can see in the above image, the version with the mipmaps packed in is bigger -- the full chain of mipmaps adds roughly a third more data. So for a relatively small increase in memory for the texture mipmaps, you're gaining performance benefits and visual improvements.
Compared to the left hand side of the image, the use of mipmaps on the right results in a much smoother transition across the landscape, where the crate texture blurs into a consistent color at the horizon.
The thing is, though, who wants blurry textures spoiling the background of their favorite game? The process of picking a texel from a texture, to be applied to a pixel in a frame, is called texture sampling, and in a perfect world, there would be a texture that exactly fits the primitive it's applied to -- regardless of its size, position, direction, and so on.
In other words, texture sampling would be nothing more than a straight 1-to-1 texel-to-pixel mapping process. Since that isn't the case, texture sampling has to account for a number of factors: whether the texture has been minified or magnified, whether it's some distance away, and whether it's at an angle to the view. Let's analyze these one at a time. The first one is obvious enough: if the texture has been minified, then there will be more texels covering each pixel in the primitive than required; with magnification it's the other way around, as each texel now has to cover more than one pixel.
That's a bit of a problem. The second one isn't, though, as mipmaps are used to get around the texture sampling issue with primitives in the distance, so that just leaves textures at an angle. And yes, that's a problem too, because all textures are images generated for a view that's 'face on'; or, to be all math-like: the normal of the texture is the same as the normal of the surface that the texture is currently displayed on. So when that surface tilts away from the viewer, the texels end up being sampled unevenly along one direction.
So having too few or too many texels, and having texels at an angle, require an additional process called texture filtering. If you don't use this process, then this is what you get:
Here we've replaced the crate texture with a letter R texture, to show more clearly how much of a mess it can get without texture filtering! Essentially, though, they all go like this: to all intents and purposes, nearest point sampling isn't really filtering, because all that happens is that the nearest texel to the pixel requiring the texture is sampled, i.e. the texel closest to the required u,v coordinates is simply copied. Here comes linear filtering to the rescue. The required u,v coordinates for the texel are sent off to the hardware for sampling, but instead of taking the very nearest texel to those coordinates, the sampler takes four texels.
These are directly above, below, left, and right of the one selected by using nearest point sampling. These 4 texels are then blended together using a weighted formula. In Vulkan, for example, the formula takes the form:

Tf = (1 - α)(1 - β)·T1 + α(1 - β)·T2 + (1 - α)β·T3 + αβ·T4

The T refers to texel color, where f is for the filtered result and 1 through 4 are the four sampled texels.
The values for alpha and beta come from how far away the point defined by the u,v coordinates is from the centers of the surrounding texels. Fortunately for everyone involved in 3D games, whether playing them or making them, this happens automatically in the graphics processing chip. In fact, this is what the TMU chip in the 3dfx Voodoo did: it sampled 4 texels and then blended them together. Direct3D somewhat oddly calls this bilinear filtering, but since the time of Quake and the Voodoo's TMU chip, graphics cards have been able to do bilinear filtering in just one clock cycle (provided the texture is sitting handily in nearby memory, of course).
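Here's a sketch of that weighted blend in Python, using the standard bilinear weighting; the texel labels and the clamping at the texture edges are illustrative choices, not how any particular GPU lays it out:

```python
def bilinear_sample(texture, width, height, u, v):
    """Blend the 2 x 2 block of texels around the sample point, weighted by how
    far the point (alpha, beta) sits between their centers."""
    # Convert u,v into texel space, offset by half a texel so we measure from centers.
    x = max(u * width - 0.5, 0.0)
    y = max(v * height - 0.5, 0.0)
    x0, y0 = int(x), int(y)
    alpha, beta = x - x0, y - y0

    def texel(tx, ty):
        tx = min(max(tx, 0), width - 1)   # clamp at the texture edges
        ty = min(max(ty, 0), height - 1)
        return texture[ty][tx]

    t1, t2 = texel(x0, y0), texel(x0 + 1, y0)
    t3, t4 = texel(x0, y0 + 1), texel(x0 + 1, y0 + 1)
    # Tf = (1-a)(1-b)T1 + a(1-b)T2 + (1-a)bT3 + abT4, applied per color channel.
    return tuple((1 - alpha) * (1 - beta) * c1 + alpha * (1 - beta) * c2 +
                 (1 - alpha) * beta * c3 + alpha * beta * c4
                 for c1, c2, c3, c4 in zip(t1, t2, t3, t4))

texture = [[(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
           [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]]
print(bilinear_sample(texture, 2, 2, 0.5, 0.5))  # an even mix of all four texels
```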
Linear filtering can be used alongside mipmaps, and if you want to get really fancy with your filtering, you can take 4 texels from one texture, then another 4 from the next mipmap level, and then blend them all together. And Direct3D's name for this? Trilinear filtering. What's tri about this process? Your guess is as good as ours.
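In code terms, trilinear filtering is just one more linear blend on top: sample two neighboring mipmap levels bilinearly, then mix the two results by the fractional part of the desired level. A sketch, where `bilinear` is a sampler like the bilinear_sample function above:

```python
def trilinear_sample(mip_levels, u, v, level, bilinear):
    """Bilinearly sample the two nearest mip levels, then blend those two results --
    a linear blend of linear blends."""
    lo = int(level)
    hi = min(lo + 1, len(mip_levels) - 1)
    frac = level - lo

    def sample(idx):
        tex = mip_levels[idx]
        return bilinear(tex, len(tex[0]), len(tex), u, v)

    color_lo, color_hi = sample(lo), sample(hi)
    return tuple((1 - frac) * a + frac * b for a, b in zip(color_lo, color_hi))
```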
The last filtering method to mention is called anisotropic filtering. This is actually an adjustment to the process done in bilinear or trilinear filtering. It initially involves calculating the degree of anisotropy of the primitive's surface (and it's a surprisingly complex calculation, too) -- this value increases as the primitive's aspect ratio alters due to its orientation: the above image shows the same square primitive, with equal length sides; but as it rotates away from our perspective, the square appears to become a rectangle, and its width increases relative to its height.
So the primitive on the right has a larger degree of anisotropy than those to its left (and in the case of the square, the degree is exactly zero). Many of today's 3D games allow you to enable anisotropic filtering and then adjust the level of it (1x through to 16x), but what does that actually change?
The setting controls the maximum number of additional texel samples that are taken per original linear sampling. For example, let's say the game is set to use 8x anisotropic bilinear filtering. This means that instead of just fetching 4 texel values, it will fetch 32. Just scroll back up a little and compare nearest point sampling to maxed out 16x anisotropic trilinear filtering. So smooth, it's almost delicious! But there must be a price to pay for all this lovely buttery texture deliciousness, and it's surely performance: maxed out, anisotropic trilinear filtering will be fetching up to 128 samples from a texture, for each pixel being rendered.
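The arithmetic behind those numbers, treating the anisotropy setting simply as a multiplier on the base filter's texel count (4 for bilinear, 8 for trilinear) -- in practice it's an upper limit rather than a fixed cost:

```python
def max_texel_fetches(base_filter, anisotropy):
    """Worst-case texel fetches per pixel for a given filter and anisotropy level."""
    base = {"bilinear": 4, "trilinear": 8}[base_filter]
    return base * anisotropy

print(max_texel_fetches("bilinear", 8))    # 32 texels, the 8x example above
print(max_texel_fetches("trilinear", 16))  # 128 texels for maxed-out 16x
```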
For even the very best of the latest GPUs, that just can't be done in a single clock cycle. If we take something like AMD's Radeon RX 5700 XT, each of the texturing units inside the processor can fire off 32 texel addresses in one clock cycle, then load 32 texel values from memory (each 32 bits in size) in another clock cycle, and then blend 4 of them together in one more tick.
So, for 128 texel samples blended into one, that requires at least 16 clock cycles. Now the base clock rate of a 5700 XT is 1605 MHz, so sixteen cycles takes a mere 10 nanoseconds.
Doing this for every pixel in a 4K frame, using just one texture unit, would still only take a little over 80 milliseconds. Okay, so perhaps performance isn't that much of an issue!
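For anyone who wants to check the back-of-the-envelope numbers, here's the same arithmetic, assuming a base clock of roughly 1.6 GHz and the 'at least 16 clock cycles' estimate from above:

```python
cycles_per_pixel = 16        # 'at least 16 clock cycles' per fully filtered pixel
clock_hz = 1.605e9           # assumed base clock of roughly 1.6 GHz
pixels_4k = 3840 * 2160      # pixels in a 4K frame

time_per_pixel_s = cycles_per_pixel / clock_hz
frame_time_ms = time_per_pixel_s * pixels_4k * 1e3
print(f"{time_per_pixel_s * 1e9:.1f} ns per pixel -> {frame_time_ms:.0f} ms per 4K frame")
```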
Even back in 1996, the likes of the 3dfx Voodoo were pretty nifty when it came to handling textures. It could max out at 1 bilinear filtered texel per clock cycle, and with the TMU chip rocking along at 50 MHz, that meant 50 million texels could be churned out every second. A game running at 800 x 600 and 30 fps would only need around 14 million bilinear filtered texels per second. However, this all assumes that the textures are in nearby memory and that only one texel is mapped to each pixel.
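Running the same kind of check for the old Voodoo, assuming one texel per pixel at 800 x 600 and 30 fps as above:

```python
voodoo_clock_hz = 50e6                   # TMU clock: 50 MHz
texels_per_second = voodoo_clock_hz * 1  # one bilinear filtered texel per clock
demand = 800 * 600 * 30                  # one texel per pixel, 30 frames per second
print(f"capacity: {texels_per_second / 1e6:.0f} M texels/s, demand: {demand / 1e6:.1f} M texels/s")
```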
Twenty years ago, the idea of needing to apply multiple textures to a primitive was almost completely alien, but it's commonplace now. Let's have a look at why this change came about. Procedural tools such as Substance Designer can also include random generation of certain features, or the ability to influence them in different ways, such as scaling them. This allows one Substance Designer file to generate a near-infinite number of similar textures very quickly, once it is set up.
An example of this might be a texture of wooden planks with simple settings that let you change the number of planks, the color of the wood, or even the pattern of the wood grain in just a few clicks. When Substance came along, it changed the way we think about generating textures for games, and most modern studios use it for a large chunk of their material work. For the majority of artists it has become the standard, and it has allowed their work to reach all new heights.
Chances are you have heard of Photoshop before. Whilst Photoshop is not the be-all and end-all that it used to be when it comes to texturing, thanks to programs like Quixel and Substance, it is still widely used thanks to its simplicity and versatility. Most people will be familiar with 2D image editing in some shape or form these days, and many will even have used Photoshop for other things, so it is generally fairly easy and straightforward to pick up, especially compared to the other entries on this list.
Another advantage of Photoshop is the ability to create textures outside of the standard photo-realistic PBR materials that Quixel and Substance specialize in. Photoshop can also be used in conjunction with other ways of texturing, such as sourcing textures from photographs or even extracting materials from other programs. Often in the industry, every artist will have a copy of Photoshop available to them whereas access to Substance and Quixel may be more limited depending on the company, project and the specific role of the artist.
Whilst it is no longer the most powerful texturing tool out there, it still tops this list due to its versatility and widespread use. The next entry is essentially a stripped-down, free and open source take on a Photoshop-style image editor. This makes it the perfect program to get started with and to see if you want to take this particular line of work further. If you're applying for an artist position in the industry, you will need at least a working knowledge of Photoshop, and probably Substance or Quixel as well.
That being said, it makes the perfect stepping stone to get you started, and as such it gets a shout out here. For more information, check out our complete Game Artist guide.