Texture mapping is a method for mapping a texture onto a computer-generated graphic. Texture here can be high-frequency detail, surface texture, or color.


History

The original technique was pioneered by Edwin Catmull in 1974. Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique (controlled by a materials system) has made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.


Texture maps

A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles. They may have 1–3 dimensions, although 2 dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources (which may be located in device memory) as buffers or surfaces, and may allow 'render to texture' for additional effects such as post processing or environment mapping. They usually contain RGB color data (stored as direct color, compressed formats, or indexed color), and sometimes an additional channel for alpha blending (RGBA), especially for billboards and ''decal'' overlay textures. It is possible to use the alpha channel (which may be convenient to store in formats parsed by hardware) for other uses such as specularity. Multiple texture maps (or channels) may be combined for control over specularity, normals, displacement, or subsurface scattering, e.g. for skin rendering. Multiple texture images may be combined in texture atlases or array textures to reduce state changes for modern hardware. (They may be considered a modern evolution of tile map graphics.) Modern hardware often supports cube map textures with multiple faces for environment mapping.
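The swizzled or tiled orderings mentioned above reorder texels so that neighbours in 2D are also close in memory. As a minimal sketch (assuming a square, power-of-two texture; the function names are illustrative rather than from any particular API), a Morton/Z-order address can be computed by interleaving the bits of the texel coordinates:

    #include <stdint.h>

    /* Spread the lower 16 bits of x so there is a zero between each bit. */
    static uint32_t part1by1(uint32_t x)
    {
        x &= 0x0000FFFF;
        x = (x | (x << 8)) & 0x00FF00FF;
        x = (x | (x << 4)) & 0x0F0F0F0F;
        x = (x | (x << 2)) & 0x33333333;
        x = (x | (x << 1)) & 0x55555555;
        return x;
    }

    /* Morton (Z-order) index of texel (u, v): nearby texels in 2D get
       nearby addresses, which improves cache coherency during sampling. */
    static uint32_t morton_index(uint32_t u, uint32_t v)
    {
        return part1by1(u) | (part1by1(v) << 1);
    }

Real hardware uses various proprietary tilings, but the principle is the same: trade a simple row-major layout for one that matches the 2D locality of texture fetches.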


Creation

Texture maps may be acquired by scanning/digital photography, designed in image manipulation software such as GIMP or Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush.


Texture application

This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate). This may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package using UV unwrapping tools. It is also possible to associate a procedural transformation from 3D space to texture space with the material. This might be accomplished via planar projection or, alternatively, cylindrical or spherical mapping (see the sketch below). More complex mappings may consider the distance along a surface to minimize distortion. These coordinates are interpolated across the faces of polygons to sample the texture map during rendering. Textures may be repeated or mirrored to extend a finite rectangular bitmap over a larger area, or they may have a one-to-one unique "injective" mapping from every piece of a surface (which is important for render mapping and light mapping, also known as baking).
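To illustrate the procedural assignment mentioned above, the following sketch derives texture coordinates from a vertex position by planar and by spherical projection. It is a minimal example under assumed conventions (y as the polar axis, a user-supplied scale for the planar case), not the code of any particular modelling package:

    #include <math.h>

    #define PI 3.14159265358979f

    typedef struct { float x, y, z; } Vec3;
    typedef struct { float u, v; } UV;

    /* Planar projection: drop one axis (here z) and rescale into [0,1]. */
    UV planar_uv(Vec3 p, float size)
    {
        UV t = { p.x / size + 0.5f, p.y / size + 0.5f };
        return t;
    }

    /* Spherical mapping: longitude and latitude of the direction from the
       origin to the vertex, assuming the position is not at the origin. */
    UV spherical_uv(Vec3 p)
    {
        float len = sqrtf(p.x * p.x + p.y * p.y + p.z * p.z);
        UV t;
        t.u = 0.5f + atan2f(p.z, p.x) / (2.0f * PI);  /* longitude -> u */
        t.v = 0.5f - asinf(p.y / len) / PI;           /* latitude  -> v */
        return t;
    }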


Texture space

Texture mapping maps the model surface (or screen space during rasterization) into texture space; in this space, the texture map is visible in its undistorted form.
UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates. Some rendering techniques, such as subsurface scattering, may be performed approximately by texture-space operations.


Multitexturing

Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Microtextures or detail textures are used to add higher-frequency details, and dirt maps may add weathering and variation; this can greatly reduce the apparent periodicity of repeating textures. Modern graphics may use more than 10 layers, which are combined using shaders, for greater fidelity. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface (such as tree bark or rough concrete) that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games, as graphics hardware has become powerful enough to accommodate it in real time.
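A multitexture combine stage of the kind described above can be sketched as a per-texel operation; this is an illustrative example (fixed modulate-style blending, colour channels in [0,1]), not the combiner model of any specific API:

    typedef struct { float r, g, b; } Color;

    /* Modulate a diffuse texel by a baked light-map texel and a tiling
       detail/dirt texel; each layer comes from its own texture map. */
    Color combine_layers(Color diffuse, Color lightmap, Color detail)
    {
        Color out = {
            diffuse.r * lightmap.r * detail.r,
            diffuse.g * lightmap.g * detail.g,
            diffuse.b * lightmap.b * detail.b
        };
        return out;
    }

Shader-based pipelines generalise this: each layer is sampled separately and combined with arbitrary per-pixel arithmetic.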


Texture filtering

The way that samples (e.g. when viewed as pixels on the screen) are calculated from the texels (texture pixels) is governed by texture filtering. The cheapest method is nearest-neighbour interpolation, but bilinear interpolation and trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. If a texture coordinate falls outside the texture, it is either clamped or wrapped. Anisotropic filtering better eliminates directional artefacts when viewing textures from oblique angles.
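As a minimal sketch of bilinear filtering with clamp-to-edge addressing (a greyscale texture stored as a flat float array; the helper names are illustrative):

    #include <math.h>

    /* Fetch one texel, clamping coordinates that fall outside the texture. */
    static float texel(const float *tex, int w, int h, int x, int y)
    {
        if (x < 0) x = 0;
        if (x >= w) x = w - 1;
        if (y < 0) y = 0;
        if (y >= h) y = h - 1;
        return tex[y * w + x];
    }

    /* Blend the four texels nearest to the continuous coordinate (u, v),
       weighted by the fractional parts: bilinear interpolation. */
    float sample_bilinear(const float *tex, int w, int h, float u, float v)
    {
        int x0 = (int)floorf(u), y0 = (int)floorf(v);
        float fx = u - (float)x0, fy = v - (float)y0;
        float t00 = texel(tex, w, h, x0,     y0);
        float t10 = texel(tex, w, h, x0 + 1, y0);
        float t01 = texel(tex, w, h, x0,     y0 + 1);
        float t11 = texel(tex, w, h, x0 + 1, y0 + 1);
        float top    = t00 + (t10 - t00) * fx;
        float bottom = t01 + (t11 - t01) * fx;
        return top + (bottom - top) * fy;
    }

Nearest-neighbour filtering would simply return the single closest texel; trilinear filtering additionally blends between two mipmap levels.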


Texture streaming

Texture streaming is a means of using data streams for textures, where each texture is available in two or more resolutions, so that the engine can determine which version should be loaded into memory based on the drawing distance from the viewer and how much memory is available for textures. Texture streaming allows a rendering engine to use low-resolution textures for objects far away from the viewer's camera, and to resolve those into more detailed textures, read from a data source, as the point of view nears the objects.
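The selection step can be sketched as follows; the struct, distance thresholds and number of resolution levels are invented for illustration and would be tuned per engine:

    #include <stddef.h>

    typedef struct {
        size_t bytes[3];   /* memory cost of each resolution level        */
        int    levels;     /* number of resolutions available (up to 3)   */
    } StreamedTexture;

    /* Return the resolution level to load: 0 = full detail, higher = coarser.
       Greater distance and a smaller memory budget both push toward coarser
       levels. */
    int choose_level(const StreamedTexture *t, float distance, size_t budget)
    {
        int level = 0;
        if (distance > 50.0f)  level = 1;   /* thresholds are arbitrary */
        if (distance > 200.0f) level = 2;
        if (level >= t->levels) level = t->levels - 1;
        while (level < t->levels - 1 && t->bytes[level] > budget)
            level++;                        /* fall back if it does not fit */
        return level;
    }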


Baking

As an optimization, it is possible to render detail from a complex, high-resolution model or an expensive process (such as global illumination) into a surface texture (possibly on a low-resolution model). ''Baking'' is also known as render mapping. This technique is most commonly used for light maps, but may also be used to generate normal maps and displacement maps. Some computer games (e.g. ''Messiah'') have used this technique. The original Quake software engine used on-the-fly baking to combine light maps and colour maps ("surface caching"). Baking can be used as a form of level-of-detail generation, where a complex scene with many different elements and materials may be approximated by a ''single'' element with a ''single'' texture, which is then algorithmically reduced for lower rendering cost and fewer draw calls. It is also used to take high-detail models from 3D sculpting software and point cloud scanning and approximate them with meshes more suitable for realtime rendering.


Rasterisation algorithms

Various techniques have evolved in software and hardware implementations. Each offers different trade-offs in precision, versatility and performance.


Forward texture mapping

Some hardware systems, e.g. the Sega Saturn and the NV1, traverse texture coordinates directly, interpolating the projected position in screen space through texture space and splatting the texels into a frame buffer. (In the case of the NV1, quadratic interpolation was used, allowing curved rendering.) Sega provided tools for baking suitable per-quad texture tiles from UV-mapped models. This has the advantage that texture maps are read in a simple linear fashion. Forward texture mapping may also sometimes produce more natural-looking results than affine texture mapping if the primitives are aligned with prominent texture directions (e.g. road markings or layers of bricks). This provides a limited form of perspective correction. However, perspective distortion is still visible for primitives near the camera (e.g. the Saturn port of ''Sega Rally'' exhibited texture-squashing artifacts as nearby polygons were near-clipped without UV coordinates). This technique is not used in modern hardware because UV coordinates have proved more versatile for modelling and more consistent for clipping.
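The forward approach can be sketched by walking the texture tile and splatting each texel to its interpolated screen position. This simplified example uses bilinear interpolation of a quad's four projected corners (the NV1 used quadratic patches, and real hardware handles gaps and overdraw more carefully):

    #include <stdint.h>

    typedef struct { float x, y; } Vec2;

    /* Bilinear blend of the quad's projected corners at texture position (s, t). */
    static Vec2 warp(Vec2 p00, Vec2 p10, Vec2 p01, Vec2 p11, float s, float t)
    {
        Vec2 top = { p00.x + (p10.x - p00.x) * s, p00.y + (p10.y - p00.y) * s };
        Vec2 bot = { p01.x + (p11.x - p01.x) * s, p01.y + (p11.y - p01.y) * s };
        Vec2 r   = { top.x + (bot.x - top.x) * t, top.y + (bot.y - top.y) * t };
        return r;
    }

    /* Traverse the texture tile in linear order and splat each texel into the
       frame buffer; stretching can leave gaps and shrinking causes overdraw. */
    void forward_map_quad(const uint32_t *tex, int tw, int th,
                          uint32_t *frame, int fw, int fh,
                          Vec2 p00, Vec2 p10, Vec2 p01, Vec2 p11)
    {
        for (int ty = 0; ty < th; ty++)
            for (int tx = 0; tx < tw; tx++) {
                Vec2 p = warp(p00, p10, p01, p11,
                              (tx + 0.5f) / tw, (ty + 0.5f) / th);
                int sx = (int)p.x, sy = (int)p.y;
                if (sx >= 0 && sx < fw && sy >= 0 && sy < fh)
                    frame[sy * fw + sx] = tex[ty * tw + tx];
            }
    }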


Inverse texture mapping

Most approaches use inverse texture mapping, which traverses the rendering primitives in screen space whilst interpolating texture coordinates for sampling. This interpolation may be ''affine'' or ''perspective correct''. One advantage is that each output pixel is guaranteed to be traversed only once; generally the source texture map data is stored in some lower bit-depth or compressed form whilst the frame buffer uses a higher bit-depth. Another is greater versatility for UV mapping. A texture cache becomes important for buffering reads, since the memory access pattern in texture space is more complex.


Affine texture mapping

Affine texture mapping linearly interpolates texture coordinates across a surface, and so is the fastest form of texture mapping. Some software and hardware (such as the original PlayStation) project vertices in 3D space onto the screen during rendering and linearly interpolate the texture coordinates ''in screen space'' between them ("inverse texture mapping"). This may be done by incrementing fixed-point UV coordinates, or by an incremental error algorithm akin to Bresenham's line algorithm. In contrast to perpendicular polygons, this leads to noticeable distortion with perspective transformations (for example, a checkerboard texture appears bent), especially for primitives near the camera. Such distortion may be reduced by subdividing the polygon into smaller ones.
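A sketch of one affine span using 16.16 fixed-point increments, in the style described above (texture coordinates in texel units, power-of-two texture assumed; not the code of any particular console):

    #include <stdint.h>

    /* Fill pixels x0..x1-1 of one scanline, linearly stepping u and v in
       16.16 fixed point; no perspective correction is applied. */
    void affine_span(uint32_t *row, int x0, int x1,
                     float u0, float v0, float u1, float v1,
                     const uint32_t *tex, int tex_w, int tex_h)
    {
        int len = x1 - x0;
        if (len <= 0) return;
        int32_t u  = (int32_t)(u0 * 65536.0f);
        int32_t v  = (int32_t)(v0 * 65536.0f);
        int32_t du = (int32_t)((u1 - u0) * 65536.0f / (float)len);
        int32_t dv = (int32_t)((v1 - v0) * 65536.0f / (float)len);
        for (int x = x0; x < x1; x++) {
            int tu = (u >> 16) & (tex_w - 1);   /* wrap (power-of-two size) */
            int tv = (v >> 16) & (tex_h - 1);
            row[x] = tex[tv * tex_w + tu];
            u += du;
            v += dv;
        }
    }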


Perspective correctness

Perspective correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating coordinates in 2D screen space. This achieves the correct visual effect but it is more expensive to calculate.

To perform perspective correction of the texture coordinates u and v, with z being the depth component from the viewer's point of view, we can take advantage of the fact that the values \frac{u}{z}, \frac{v}{z}, and \frac{1}{z} are linear in screen space across the surface being textured. In contrast, the original z, u and v, before the division, are not linear across the surface in screen space. We can therefore linearly interpolate these reciprocals across the surface, computing corrected values at each pixel, to result in a perspective correct texture mapping.

To do this, we first calculate the reciprocals at each vertex of our geometry (3 points for a triangle). For vertex n we have \frac{u_n}{z_n}, \frac{v_n}{z_n}, \frac{1}{z_n}. Then, we linearly interpolate these reciprocals between the n vertices (e.g., using barycentric coordinates), resulting in interpolated values across the surface. At a given point, this yields the interpolated u_i, v_i, and zReciprocal_i = \frac{1}{z_i}. Note that this u_i, v_i cannot yet be used as our texture coordinates because the division by z altered their coordinate system. To correct back to the u, v space we first calculate the corrected z by again taking the reciprocal, z_{correct} = \frac{1}{zReciprocal_i} = \frac{1}{1/z_i}. Then we use this to correct our u_i, v_i: u_{correct} = u_i \cdot z_i and v_{correct} = v_i \cdot z_i. This correction makes it so that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger (compressing the texture).

Affine texture mapping directly interpolates a texture coordinate u_\alpha between two endpoints u_0 and u_1:

    u_\alpha = (1 - \alpha) u_0 + \alpha u_1, where 0 \le \alpha \le 1

Perspective correct mapping interpolates after dividing by depth z, then uses its interpolated reciprocal to recover the correct coordinate:

    u_\alpha = \frac{(1 - \alpha)\frac{u_0}{z_0} + \alpha \frac{u_1}{z_1}}{(1 - \alpha)\frac{1}{z_0} + \alpha \frac{1}{z_1}}

3D graphics hardware typically supports perspective correct texturing. Various techniques have evolved for rendering texture-mapped geometry into images with different quality/precision tradeoffs, which can be applied to both software and hardware. Classic software texture mappers generally did only simple mapping with at most one lighting effect (typically applied through a lookup table), and the perspective correctness was about 16 times more expensive.
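The derivation above translates directly into code: interpolate u/z, v/z and 1/z linearly in screen space, then divide per pixel. A minimal sketch for one span between two projected endpoints (non-negative coordinates assumed, helpers omitted for brevity):

    #include <stdint.h>

    /* Perspective-correct texturing of one span: u/z, v/z and 1/z are linear
       in screen space, so interpolate those and recover u, v per pixel. */
    void perspective_span(uint32_t *row, int x0, int x1,
                          float u0, float v0, float z0,
                          float u1, float v1, float z1,
                          const uint32_t *tex, int tex_w, int tex_h)
    {
        int len = x1 - x0;
        if (len <= 0) return;
        float uoz0 = u0 / z0, voz0 = v0 / z0, ooz0 = 1.0f / z0;
        float uoz1 = u1 / z1, voz1 = v1 / z1, ooz1 = 1.0f / z1;
        for (int x = x0; x < x1; x++) {
            float a   = (float)(x - x0) / (float)len;   /* 0..1 across span */
            float uoz = uoz0 + (uoz1 - uoz0) * a;
            float voz = voz0 + (voz1 - voz0) * a;
            float ooz = ooz0 + (ooz1 - ooz0) * a;
            float z   = 1.0f / ooz;                     /* corrected depth  */
            int tu = (int)(uoz * z) % tex_w;            /* u = (u/z) * z    */
            int tv = (int)(voz * z) % tex_h;
            row[x] = tex[tv * tex_w + tu];
        }
    }

The per-pixel divide (1.0f / ooz) is exactly the cost that the subdivision schemes described below try to avoid or amortise.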


Restricted camera rotation

The ''Doom engine'' restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would have a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. A fast affine mapping could be used along those lines because it would be correct. Some later renderers of this era simulated a small amount of camera pitch with shearing, which allowed the appearance of greater freedom whilst using the same rendering technique. Some engines were able to render texture-mapped heightmaps (e.g. Nova Logic's Voxel Space, and the engine for ''Outcast'') via Bresenham-like incremental algorithms, producing the appearance of a texture-mapped landscape without the use of traditional geometric primitives.


Subdivision for perspective correction

Every triangle can be further subdivided into groups of about 16 pixels in order to achieve two goals: first, keeping the arithmetic mill busy at all times, and second, producing faster arithmetic results.


World space subdivision

For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering and affine mapping is used on them. The reason this technique works is that the distortion of affine mapping becomes much less noticeable on smaller polygons. The Sony PlayStation made extensive use of this because it only supported affine mapping in hardware but had a relatively high triangle throughput compared to its peers.


Screen space subdivision

Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation) and thus again reduce the overhead (also, affine texture mapping does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much better suited). A different approach was taken for ''Quake'', which would calculate perspective-correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective-correct calculation runs in parallel on the co-processor; a sketch of this approach follows below. The polygons are rendered independently, hence it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z, but the effort seems not to be worth it.
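A sketch of that span scheme: perform the perspective divide only once per 16-pixel sub-span and interpolate affinely in between (illustrative code, not Quake's actual implementation; inputs are u/z, v/z and 1/z at the span start plus their per-pixel increments):

    #include <stdint.h>

    #define SUBSPAN 16

    void subdivided_span(uint32_t *row, int x0, int x1,
                         float uoz, float voz, float ooz,     /* at x0      */
                         float duoz, float dvoz, float dooz,  /* per pixel  */
                         const uint32_t *tex, int tex_w, int tex_h)
    {
        float z = 1.0f / ooz;
        float u = uoz * z, v = voz * z;          /* correct u, v at x0 */
        int x = x0;
        while (x < x1) {
            int steps = (x1 - x < SUBSPAN) ? (x1 - x) : SUBSPAN;
            /* One divide per sub-span: correct u, v at the sub-span end. */
            float uoz_e = uoz + duoz * (float)steps;
            float voz_e = voz + dvoz * (float)steps;
            float ooz_e = ooz + dooz * (float)steps;
            float z_e   = 1.0f / ooz_e;
            float u_e = uoz_e * z_e, v_e = voz_e * z_e;
            float du = (u_e - u) / (float)steps, dv = (v_e - v) / (float)steps;
            for (int i = 0; i < steps; i++, x++) {   /* affine in between */
                row[x] = tex[((int)v % tex_h) * tex_w + ((int)u % tex_w)];
                u += du;
                v += dv;
            }
            u = u_e; v = v_e;
            uoz = uoz_e; voz = voz_e; ooz = ooz_e;
        }
    }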


Other techniques

Another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses the 1/z value of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values, so that only a small remainder has to be divided, but the amount of bookkeeping makes this method too slow on most systems. Finally, the Build engine extended the constant-distance trick used for Doom by finding the line of constant distance for arbitrary polygons and rendering along it.


Hardware implementations

Texture mapping hardware was originally developed for simulation (e.g. as implemented in the Evans and Sutherland ESIG image generators), professional graphics workstations such as those from Silicon Graphics, and broadcast digital video effects machines such as the Ampex ADO, and later appeared in arcade cabinets, consumer video game consoles, and PC video cards in the mid-1990s. In flight simulation, texture mapping provided important motion cues. Modern graphics processing units (GPUs) provide specialised fixed-function units called ''texture samplers'', or ''texture mapping units'', to perform texture mapping, usually with trilinear filtering or better multi-tap anisotropic filtering, and hardware for decoding specific formats such as DXTn. As of 2016, texture mapping hardware is ubiquitous as most SoCs contain a suitable GPU. Some hardware combines texture mapping with hidden-surface determination in tile-based deferred rendering or scanline rendering; such systems fetch only the visible texels, at the expense of using greater workspace for transformed vertices. Most systems have settled on the Z-buffering approach, which can still reduce the texture mapping workload with front-to-back sorting.


Applications

Beyond 3D rendering, the availability of texture mapping hardware has inspired its use for accelerating other tasks:


Tomography

It is possible to use texture mapping hardware to accelerate both the reconstruction of voxel data sets from tomographic scans and the visualization of the results.


User interfaces

Many user interfaces use texture mapping to accelerate animated transitions of screen elements, e.g. Exposé in Mac OS X.


See also

* 2.5D
* 3D computer graphics
* Mipmap
* Materials system
* Parametrization
* Texture synthesis
* Texture atlas
* Texture splatting – a technique for combining textures
* Shader (computer graphics)


References


Software


TexRecon — open-source software for texturing 3D models written in C++


External links


Introduction into texture mapping using C and SDL (PDF)
Programming a textured terrain using XNA/DirectX, from www.riemers.net
Time Texturing: Texture mapping with bezier lines
Polynomial Texture Mapping: Interactive Relighting for Photos
Methods that can be used to interpolate a texture knowing the texture coords at the vertices of a polygon
3D Texturing Tools