In 3D computer graphics, normal mapping, or Dot3 bump mapping, is a texture mapping technique used for faking the lighting of bumps and dents – an implementation of bump mapping. It is used to add details without using more polygons. A common use of this technique is to greatly enhance the appearance and details of a low polygon model by generating a normal map from a high polygon model or height map. Normal maps are commonly stored as regular RGB images where the RGB components correspond to the X, Y, and Z coordinates, respectively, of the surface normal.
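
The height-map route mentioned above can be made concrete with a short sketch; this is an illustration added here, not code from the article, and the function name, the strength parameter, and the use of numpy are assumptions rather than a standard API. The height field's finite-difference gradients give the local slope, the resulting unit normals are remapped from [-1, 1] to [0, 255], and the result is stored as an RGB image.

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a 2D height field (values roughly in [0, 1]) into a
    tangent-space normal map stored as an 8-bit RGB image (H x W x 3).
    Illustrative sketch; the name and 'strength' parameter are assumptions."""
    h = np.asarray(height, dtype=np.float64)
    # Finite-difference slopes of the height field along y (rows) and x (columns).
    dh_dy, dh_dx = np.gradient(h)
    # The surface z = h(x, y) has (unnormalized) normal (-dh/dx, -dh/dy, 1);
    # 'strength' exaggerates or softens the bumps.
    n = np.dstack((-dh_dx * strength, -dh_dy * strength, np.ones_like(h)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)      # normalize to unit length
    # Remap each component from [-1, +1] to [0, 255] for storage as RGB.
    return np.clip((n * 0.5 + 0.5) * 255.0, 0, 255).astype(np.uint8)
```

A flat height field produces nearly uniform light-blue texels (about 128, 128, 255), the tint typical of tangent-space normal maps.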


History

In 1978 Jim Blinn described how the normals of a surface could be perturbed to make geometrically flat faces have a detailed appearance. The idea of taking geometric details from a high polygon model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996, where this approach was used for creating displacement maps over NURBS. In 1998, two papers were presented with key ideas for transferring details with normal maps from high to low polygon meshes: "Appearance Preserving Simplification", by Cohen et al., SIGGRAPH 1998, and "A general method for preserving attribute values on simplified meshes" by Cignoni et al., IEEE Visualization '98. The former introduced the idea of storing surface normals directly in a texture, rather than displacements, though it required the low-detail model to be generated by a particular constrained simplification algorithm. The latter presented a simpler approach that decouples the high and low polygonal mesh and allows the recreation of any attributes of the high-detail model (color, texture coordinates, displacements, etc.) in a way that is not dependent on how the low-detail model was created. The combination of storing normals in a texture with this more general creation process is still used by most currently available tools.


Spaces

The orientation of coordinate axes differs depending on the space in which the normal map was encoded. A straightforward implementation encodes normals in object space, so that red, green, and blue components correspond directly with X, Y, and Z coordinates. In object space the coordinate system is constant. However, object-space normal maps cannot be easily reused on multiple models, as the orientation of the surfaces differs. Since color texture maps can be reused freely, and normal maps tend to correspond with a particular texture map, it is desirable for artists that normal maps have the same property. Normal map reuse is made possible by encoding maps in tangent space. The tangent space is a vector space that is tangent to the model's surface. The coordinate system varies smoothly (based on the derivatives of position with respect to texture coordinates) across the surface. Tangent-space normal maps can be identified by their dominant purple color, corresponding to a vector facing directly out from the surface. See below.
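
To make the reuse property concrete, here is a minimal sketch, added for illustration, of rotating a tangent-space texel back into world space with the per-vertex TBN basis. It assumes the common convention in which every channel is remapped from [0, 255] to [-1, +1] with +Z pointing out of the surface; the Calculation section below describes a left-handed variant, and conventions differ between tools.

```python
import numpy as np

def decode_tangent_space_normal(rgb, tangent, bitangent, normal):
    """Decode an 8-bit RGB texel from a tangent-space normal map and rotate
    it into world space using the per-vertex TBN basis. Assumes the common
    convention: each channel maps 0..255 -> -1..+1, with +Z out of the surface."""
    n_ts = np.asarray(rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    # Columns of the TBN matrix are the world-space tangent, bitangent and normal.
    tbn = np.column_stack((tangent, bitangent, normal))
    n_ws = tbn @ n_ts
    return n_ws / np.linalg.norm(n_ws)

# The dominant purple-blue texel (128, 128, 255) decodes to (roughly) the
# unperturbed surface normal, here (0, 0, 1).
print(decode_tangent_space_normal((128, 128, 255), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```

Because only the TBN basis changes from model to model, the same texel data can be applied wherever the UV layout matches, which is the reuse property described above.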


Calculating tangent space

In order to find the perturbation in the normal, the tangent space must be correctly calculated. Most often the normal is perturbed in a fragment shader after applying the model and view matrices. Typically the geometry provides a normal and tangent. The tangent is part of the tangent plane and can be transformed simply with the linear part of the matrix (the upper 3x3). However, the normal needs to be transformed by the inverse transpose. Most applications will want the bitangent to match the transformed geometry (and associated UVs). So instead of enforcing the bitangent to be perpendicular to the tangent, it is generally preferable to transform the bitangent just like the tangent. Let ''t'' be the tangent, ''b'' the bitangent, ''n'' the normal, ''M3x3'' the linear part of the model matrix, and ''V3x3'' the linear part of the view matrix.

:t' = t \times M_{3\times3} \times V_{3\times3}
:b' = b \times M_{3\times3} \times V_{3\times3}
:n' = n \times (M_{3\times3} \times V_{3\times3})^{-1T} = n \times M_{3\times3}^{-1T} \times V_{3\times3}^{-1T}
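
A minimal numpy sketch of the three formulas above, assuming row vectors (vectors multiply the matrices from the left, as written); the names simply mirror the formulas and are not tied to any particular engine or API.

```python
import numpy as np

def transform_tangent_frame(t, b, n, M3x3, V3x3):
    """Transform the tangent, bitangent and normal by the linear (upper 3x3)
    parts of the model and view matrices, following the formulas above.
    t, b, n are row vectors; M3x3 and V3x3 are 3x3 numpy arrays."""
    MV = M3x3 @ V3x3                       # combined linear transform
    t_prime = t @ MV                       # tangent transformed directly
    b_prime = b @ MV                       # bitangent transformed like the tangent
    n_prime = n @ np.linalg.inv(MV).T      # normal transformed by the inverse transpose
    return t_prime, b_prime, n_prime
```

Transforming the bitangent with the same matrix as the tangent, rather than re-orthogonalizing it against the tangent, keeps it aligned with the transformed geometry and its UVs, as discussed above.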


Calculation

To calculate the Lambertian (diffuse) lighting of a surface, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface, and the result is the intensity of the light on that surface. Imagine a polygonal model of a sphere: you can only approximate the shape of the surface. By using a 3-channel bitmap textured across the model, more detailed normal vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (X, Y and Z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.

Unit normal vectors corresponding to the u,v texture coordinate are mapped onto normal maps. Only vectors pointing towards the viewer (z: 0 to -1 for left-handed orientation) are present, since the vectors on geometry pointing away from the viewer are never shown. The mapping is as follows:

* X: -1 to +1 : Red: 0 to 255
* Y: -1 to +1 : Green: 0 to 255
* Z: 0 to -1 : Blue: 128 to 255

* A normal pointing directly towards the viewer (0, 0, -1) is mapped to (128, 128, 255). Hence the parts of an object directly facing the viewer are light blue, the most common color in a normal map.
* A normal pointing to the top right corner of the texture (1, 1, 0) is mapped to (255, 255, 128). Hence the top-right corner of an object is usually light yellow, the brightest part of a color map.
* A normal pointing to the right of the texture (1, 0, 0) is mapped to (255, 128, 128). Hence the right edge of an object is usually light red.
* A normal pointing to the top of the texture (0, 1, 0) is mapped to (128, 255, 128). Hence the top edge of an object is usually light green.
* A normal pointing to the left of the texture (-1, 0, 0) is mapped to (0, 128, 128). Hence the left edge of an object is usually dark cyan.
* A normal pointing to the bottom of the texture (0, -1, 0) is mapped to (128, 0, 128). Hence the bottom edge of an object is usually dark magenta.
* A normal pointing to the bottom left corner of the texture (-1, -1, 0) is mapped to (0, 0, 128). Hence the bottom-left corner of an object is usually dark blue, the darkest part of a color map.

Since a normal will be used in the dot product calculation for the diffuse lighting computation, we can see that the normal (0, 0, -1) would be remapped to the values (128, 128, 255), giving the kind of sky-blue color seen in normal maps (the blue (z) coordinate is the perspective (depth) coordinate, while R and G hold the flat x and y coordinates on screen). A normal such as (0.3, 0.4, -0.866) would be remapped to ((0.3, 0.4, -0.866)/2 + (0.5, 0.5, 0.5)) * 255 = (0.65, 0.7, 0.067) * 255, approximately (166, 179, 17) (note that 0.3^2 + 0.4^2 + (-0.866)^2 ≈ 1). The sign of the z-coordinate (blue channel) must be flipped to match the normal map's normal vector with that of the eye (the viewpoint or camera) or the light vector. Since negative z values mean that the vertex is in front of the camera (rather than behind the camera), this convention guarantees that the surface shines with maximum strength precisely when the light vector and normal vector are coincident.
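
The decode-and-light step described above can be sketched as follows, under this section's left-handed convention (blue stores z over 0 to -1 in the range 128 to 255) and with the light direction given as the unit vector from the shading point to the light; the function names are illustrative assumptions.

```python
import numpy as np

def decode_normal(rgb):
    """Map an 8-bit (R, G, B) texel back to a unit normal, using the mapping
    above: R and G span -1..+1 over 0..255, B spans 0..-1 over 128..255."""
    r, g, b = (float(c) for c in rgb)
    n = np.array([r / 255.0 * 2.0 - 1.0,    # X
                  g / 255.0 * 2.0 - 1.0,    # Y
                  -(b - 128.0) / 127.0])    # Z (0 at blue=128, -1 at blue=255)
    return n / np.linalg.norm(n)

def lambert_intensity(rgb, light_dir):
    """Diffuse (Lambertian) intensity: dot product of the decoded normal with
    the unit vector toward the light, flipping z so the two agree in sign."""
    n = decode_normal(rgb)
    n[2] = -n[2]                            # flip the blue/z sign, as noted in the text
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    return max(0.0, float(n @ l))

# A texel facing the viewer, (128, 128, 255), lit head-on, gives intensity ~1.
print(lambert_intensity((128, 128, 255), (0.0, 0.0, 1.0)))
```

This matches the statement that the surface shines with maximum strength precisely when the light vector and normal vector are coincident.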


Normal mapping in video games

Interactive normal map rendering was originally only possible on PixelFlow, a parallel rendering machine built at the University of North Carolina at Chapel Hill. It was later possible to perform normal mapping on high-end SGI workstations using multi-pass rendering and framebuffer operations (Heidrich and Seidel, "Realistic, Hardware-accelerated Shading and Lighting", SIGGRAPH 1999), or on low-end PC hardware with some tricks using paletted textures. However, with the advent of shaders in personal computers and game consoles, normal mapping became widely used in commercial video games starting in late 2003. Normal mapping's popularity for real-time rendering is due to its good quality-to-processing-requirements ratio versus other methods of producing similar effects. Much of this efficiency is made possible by distance-indexed detail scaling, a technique which selectively decreases the detail of the normal map of a given texture (cf. mipmapping), meaning that more distant surfaces require less complex lighting simulation. Many authoring pipelines use high resolution models baked into low/medium resolution in-game models augmented with normal maps.

Basic normal mapping can be implemented in any hardware that supports palettized textures. The first game console to have specialized normal mapping hardware was the Sega Dreamcast. However, Microsoft's Xbox was the first console to widely use the effect in retail games. Out of the sixth-generation consoles, only the PlayStation 2's GPU lacks built-in normal mapping support, though it can be simulated using the PlayStation 2 hardware's vector units. Games for the Xbox 360 and the PlayStation 3 rely heavily on normal mapping and were the first game console generation to make use of parallax mapping. The Nintendo 3DS has been shown to support normal mapping, as demonstrated by ''Resident Evil: Revelations'' and ''Metal Gear Solid 3: Snake Eater''.


See also

* Reflection (physics)
* Ambient occlusion
* Depth map
* Baking (computer graphics)
* Tessellation (computer graphics)
* Bump mapping
* Displacement mapping


References


External links


* Normal Map Tutorial – Per-pixel logic behind Dot3 Normal Mapping
* NormalMap-Online – Free Generator inside Browser
* Normal Mapping on sunandblackcat.com
* … using old OpenGL extensions
* Creating normal maps manually by layering digital photographs
* Simple Normal Mapper – Open Source normal map generator