3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.
Rendering methods
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering, to more advanced techniques such as scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited for either photorealistic rendering or real-time rendering.
Real-time

Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as possible as the eye can process in a fraction of a second (a.k.a. "in one frame": in the case of a 30 frame-per-second animation, a frame encompasses one 30th of a second).
The primary goal is to achieve as high a degree of photorealism as possible at an acceptable minimum rendering speed (usually 24 frames per second, as that is the minimum the human eye needs to see to successfully create the illusion of movement). In fact, exploitations can be applied in the way the eye 'perceives' the world, and as a result the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate.
Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera. This is the basic method employed in games, interactive worlds and VRML.
The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.
Non-real-time

Animations for non-interactive media, such as feature films and video, can take much more time to render. Non-real-time rendering enables the leveraging of limited processing power in order to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk, then transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second (fps), to achieve the illusion of movement.
When the goal is photo-realism, techniques such as ray tracing, path tracing, photon mapping or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects, such as human skin).
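To make the idea of ray tracing concrete, the following sketch casts one primary ray per pixel against a single sphere and shades each hit with a simple diffuse (Lambertian) term; full ray tracers and path tracers extend this with shadow rays, recursive bounces, and Monte Carlo sampling. The scene, camera, and names are illustrative only, not taken from any particular renderer.
<syntaxhighlight lang="python">
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance to the first sphere intersection along the ray, or None.
    The ray direction is assumed to be normalized (so the quadratic's a = 1)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render(width=32, height=24):
    """Trace one ray per pixel toward a sphere at (0, 0, -3) and shade it diffusely."""
    center, radius = (0.0, 0.0, -3.0), 1.0
    light_dir = (0.577, 0.577, 0.577)            # normalized direction toward the light
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on an image plane at z = -1 (pinhole camera at origin).
            px = (2.0 * (x + 0.5) / width - 1.0) * width / height
            py = 1.0 - 2.0 * (y + 0.5) / height
            length = math.sqrt(px * px + py * py + 1.0)
            d = (px / length, py / length, -1.0 / length)
            t = ray_sphere((0.0, 0.0, 0.0), d, center, radius)
            if t is None:
                row.append(0.0)                   # background: no hit
            else:
                hit = tuple(t * di for di in d)
                n = tuple((h - c) / radius for h, c in zip(hit, center))
                # Lambertian term: brightness proportional to the cosine of the light angle.
                row.append(max(0.0, sum(ni * li for ni, li in zip(n, light_dir))))
        image.append(row)
    return image
</syntaxhighlight>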
The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system, given the costs involved in using render farms. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.
Reflection and shading models
Models of reflection/scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering. Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model (not to be confused with Phong ''shading''). In the refraction of light, an important concept is the refractive index; in most 3D programming implementations, the term for this value is "index of refraction" (usually shortened to IOR).
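As a rough illustration of how the index of refraction is used in practice, the sketch below bends a ray at a surface according to Snell's law, using the vector form commonly derived in graphics texts; the function name and conventions are illustrative.
<syntaxhighlight lang="python">
import math

def refract(incident, normal, ior_in, ior_out):
    """Refract a normalized incident direction at a surface whose normal points
    against the incident ray, using Snell's law. Returns None on total internal
    reflection."""
    eta = ior_in / ior_out                       # ratio of the indices of refraction
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)   # Snell: sin(theta_t) = eta * sin(theta_i)
    if sin2_t > 1.0:
        return None                              # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))

# Example: light entering glass (IOR ~1.5) from air (IOR ~1.0) at 45 degrees.
d = (math.sin(math.radians(45)), -math.cos(math.radians(45)), 0.0)
print(refract(d, (0.0, 1.0, 0.0), 1.0, 1.5))
</syntaxhighlight>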
Shading can be broken down into two different techniques, which are often studied independently:
*Surface shading - how light spreads across a surface (mostly used in scanline rendering for real-time 3D rendering in video games)
*Reflection/scattering - how light interacts with a surface ''at a given point'' (mostly used in ray-traced renders for non-real-time photorealistic and artistic 3D rendering in both CGI still 3D images and CGI non-interactive 3D animations)
Surface shading algorithms
Popular surface shading algorithms in 3D computer graphics include:
*Flat shading: a technique that shades each polygon of an object based on the polygon's "normal" and the position and intensity of a light source
*Gouraud shading: invented by H. Gouraud in 1971; a fast and resource-conscious vertex shading technique used to simulate smoothly shaded surfaces
*Phong shading: invented by Bui Tuong Phong; used to simulate specular highlights and smooth shaded surfaces
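The three algorithms above differ mainly in where the lighting calculation is evaluated: flat shading evaluates it once per polygon using the face normal, Gouraud shading evaluates it at each vertex and interpolates the results, and Phong shading interpolates the normal and evaluates it per pixel. The calculation itself is typically the Phong reflection model mentioned earlier, an ambient term plus a diffuse term plus a specular highlight; a minimal, illustrative per-point evaluation might look like this:
<syntaxhighlight lang="python">
def phong(normal, light_dir, view_dir,
          ambient=0.1, diffuse=0.7, specular=0.2, shininess=32):
    """Evaluate the Phong reflection model at a single point.
    All direction vectors are assumed normalized; light_dir and view_dir point
    away from the surface toward the light and the viewer, respectively."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n_dot_l = dot(normal, light_dir)
    if n_dot_l <= 0.0:
        return ambient                      # light is behind the surface
    # Mirror reflection of the light direction about the surface normal.
    reflect = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, light_dir))
    r_dot_v = max(0.0, dot(reflect, view_dir))
    return ambient + diffuse * n_dot_l + specular * (r_dot_v ** shininess)
</syntaxhighlight>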
Reflection

Reflection or scattering is the relationship between the incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function or BSDF.
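For reference, the reflection-only part of the BSDF (the BRDF, written <math>f_r</math> below) appears in the standard expression for the outgoing radiance leaving a point <math>x</math> in direction <math>\omega_o</math>, as an integral of the incoming radiance over the hemisphere of directions above the surface:
<math display="block">
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
</math>
Here <math>n</math> is the surface normal and <math>L_e</math> is any radiance emitted by the surface itself.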
Shading
Shading addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. A simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a surface, giving it more apparent detail.
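As a concrete illustration, the sketch below looks up the diffuse color for a surface point from an image using its (u, v) texture coordinates with nearest-neighbor sampling; the data layout and names are illustrative, and real renderers typically add filtering and mipmapping.
<syntaxhighlight lang="python">
def sample_texture(texture, u, v):
    """Nearest-neighbor lookup of a texel from `texture`, a 2D list of RGB tuples.
    (u, v) are texture coordinates in [0, 1], with v = 0 at the top row."""
    height = len(texture)
    width = len(texture[0])
    # Clamp the coordinates to [0, 1], then map them to the nearest texel index.
    x = min(width - 1, int(max(0.0, min(1.0, u)) * width))
    y = min(height - 1, int(max(0.0, min(1.0, v)) * height))
    return texture[y][x]

# Example: a 2x2 checkerboard used as the diffuse color of a surface point.
checker = [[(255, 255, 255), (0, 0, 0)],
           [(0, 0, 0), (255, 255, 255)]]
print(sample_texture(checker, 0.75, 0.25))   # -> (0, 0, 0)
</syntaxhighlight>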
Some shading techniques include:
*Bump mapping: invented by Jim Blinn, a normal-perturbation technique used to simulate wrinkled surfaces.
*Cel shading: a technique used to imitate the look of hand-drawn animation.
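Cel shading, the last item above, is usually implemented by quantizing an ordinary diffuse lighting term into a few flat bands instead of a smooth gradient. A minimal, illustrative sketch of that idea (the band count and mapping are assumptions, not a standard):
<syntaxhighlight lang="python">
def cel_shade(n_dot_l, bands=3):
    """Quantize a diffuse lighting term (dot of surface normal and light
    direction, clamped to [0, 1]) into a small number of flat bands.
    Assumes bands >= 2."""
    n_dot_l = max(0.0, min(1.0, n_dot_l))
    # Snap the continuous value to the lower edge of its band.
    band = int(n_dot_l * bands)
    return min(band, bands - 1) / (bands - 1)

# Smooth values 0.1, 0.4 and 0.9 collapse to the discrete levels 0.0, 0.5, 1.0.
print([cel_shade(x) for x in (0.1, 0.4, 0.9)])
</syntaxhighlight>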
Transport
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
Projection

The shaded three-dimensional objects must be flattened so that the display device - namely a monitor - can display them in only two dimensions; this process is called 3D projection. This is done using projection and, for most applications, perspective projection. The basic idea behind perspective projection is that objects that are further away are made smaller in relation to those that are closer to the eye. Programs produce perspective by multiplying a dilation constant raised to the power of the negative of the distance from the observer. A dilation constant of one means that there is no perspective. High dilation constants can cause a "fish-eye" effect in which image distortion begins to occur.
Orthographic projection is used mainly in CAD or CAM applications where scientific modeling requires precise measurements and preservation of the third dimension.
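As an illustration, the sketch below projects a 3D point onto a 2D image plane in both ways: perspective projection in the common pinhole-camera form, where screen coordinates are divided by the distance from the viewer so that farther objects appear smaller, and orthographic projection, which simply discards the depth coordinate. The camera is assumed to sit at the origin looking down the negative z axis; names and conventions are illustrative.
<syntaxhighlight lang="python">
def perspective_project(point, focal_length=1.0):
    """Pinhole perspective: divide x and y by the distance along the view axis,
    so points farther from the camera map closer to the image center."""
    x, y, z = point
    depth = -z                       # camera looks down -z; depth must be positive
    if depth <= 0.0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / depth, focal_length * y / depth)

def orthographic_project(point):
    """Orthographic projection: discard depth entirely, preserving measurements."""
    x, y, _ = point
    return (x, y)

# The same offset appears smaller in perspective when the point is farther away.
print(perspective_project((1.0, 1.0, -2.0)))   # -> (0.5, 0.5)
print(perspective_project((1.0, 1.0, -4.0)))   # -> (0.25, 0.25)
print(orthographic_project((1.0, 1.0, -4.0)))  # -> (1.0, 1.0)
</syntaxhighlight>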
Rendering engines
Render engines may be bundled with, or integrated into, 3D modeling software, but standalone render engines exist as well. Some render engines are compatible with multiple 3D software packages, while others are exclusive to one.
See also
*Architectural rendering
*Ambient occlusion
*Computer vision
*Geometry pipeline
*Geometry processing
*Graphics
*Graphics processing unit (GPU)
*Graphical output devices
*Image processing
*Industrial CT scanning
*Painter's algorithm
*Parallel rendering
*Reflection (computer graphics)
*SIGGRAPH
*Volume rendering
Notes and references
External links
How Stuff Works - 3D Graphics (Wayback Machine copy)