Texture Compression
Texture compression is a specialized form of image compression designed for storing texture maps in 3D computer graphics rendering systems. Unlike conventional image compression algorithms, texture compression algorithms are optimized for random access. Texture compression can be applied to reduce memory usage at runtime; texture data is often the largest source of memory usage in a mobile application.

Tradeoffs
In their seminal paper on texture compression, Beers, Agrawala and Chaddha list four features that tend to differentiate texture compression from other image compression techniques. These features are:

Decoding speed: It is highly desirable to be able to render directly from the compressed texture data, so decompression must be fast in order not to impact rendering performance.
Random access: Since it is difficult to predict the order in which a renderer will access texels, any texture compression scheme must allow fast random access to decompressed texture data. ...
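In practice, the random-access requirement is met with fixed-rate block coding: every block of texels compresses to the same number of bytes, so the byte offset of the block containing any given texel is a simple function of its coordinates. A minimal sketch of that address calculation, assuming 4×4-texel blocks of 8 bytes each (the geometry used by formats such as DXT1 and ETC1); the function name is an illustrative assumption:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: locate the compressed block containing texel (x, y)
 * in a fixed-rate block format. Because every block occupies the same
 * number of bytes, the lookup is O(1) arithmetic, exactly the property
 * that variable-rate codecs such as JPEG lack. */
static const uint8_t *block_for_texel(const uint8_t *data,
                                      int tex_width,   /* texture width in texels */
                                      int block_bytes, /* bytes per 4x4 block     */
                                      int x, int y)
{
    int blocks_per_row = (tex_width + 3) / 4;          /* 4x4 texel blocks */
    int block_index = (y / 4) * blocks_per_row + (x / 4);
    return data + (size_t)block_index * block_bytes;
}
```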


Image Compression
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.

Lossy and lossless image compression
Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences ...


Anisotropic Filtering
In 3D computer graphics, anisotropic filtering (AF) is a technique that improves the appearance of textures, especially on surfaces viewed at sharp angles. It helps make textures look sharper and more detailed by reducing blur and aliasing that can occur when surfaces are angled away from the viewer. Anisotropic filtering works by applying different amounts of filtering in different directions, unlike simpler methods such as bilinear and trilinear filtering, which filter equally in all directions. While it requires more processing power than these simpler methods, anisotropic filtering became a standard feature in most graphics cards in the late 1990s and is now commonly used in games and other 3D applications, often with user-adjustable settings.

Comparison to isotropic algorithms
Anisotropic filtering enhances texture sharpness, counteracting the blur introduced by mipmapping, a common anti-aliasing ...
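The "different amounts of filtering in different directions" can be made concrete: the renderer projects each pixel's footprint into texture space, and the ratio of the footprint's major axis to its minor axis (its degree of anisotropy) determines how many samples to take along the major axis. A rough sketch of that sample-count choice; the derivative arguments and the clamping policy are common conventions assumed here, not the behavior of any particular GPU:

```c
#include <math.h>

/* Sketch: pick an anisotropic sample count from the screen-space
 * derivatives of the texture coordinates. (dudx, dvdx) and (dudy, dvdy)
 * are the changes in (u, v) per pixel step in screen x and y; max_aniso
 * is the user-facing setting (commonly 2..16). */
static int aniso_sample_count(float dudx, float dvdx,
                              float dudy, float dvdy, int max_aniso)
{
    float len_x = sqrtf(dudx * dudx + dvdx * dvdx); /* footprint extent, screen x */
    float len_y = sqrtf(dudy * dudy + dvdy * dvdy); /* footprint extent, screen y */
    float major = fmaxf(len_x, len_y);
    float minor = fminf(len_x, len_y);
    if (minor <= 0.0f)
        return 1;                                   /* degenerate footprint */
    float ratio = major / minor;                    /* degree of anisotropy */
    int n = (int)ceilf(ratio);                      /* one sample per unit of elongation */
    if (n < 1) n = 1;
    if (n > max_aniso) n = max_aniso;               /* honor the user setting */
    return n;
}
```

An isotropic filter corresponds to a ratio of 1; the more a surface is angled away from the viewer, the larger the ratio and the more samples are needed to avoid blur along the elongated direction.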


Color Cell Compression
Color Cell Compression is a lossy image compression algorithm developed by Campbell et al. in 1986, which can be considered an early forerunner of modern texture compression algorithms such as S3 Texture Compression and Adaptive Scalable Texture Compression. It is closely related to Block Truncation Coding, another lossy image compression algorithm which predates Color Cell Compression, in that it uses the dominant luminance of a block of pixels to partition said pixels into two representative colors. The primary difference between Block Truncation Coding and Color Cell Compression is that the former was designed to compress grayscale images and the latter was designed to compress color images. Also, Block Truncation Coding requires that the standard deviation of the colors of pixels in a block be computed in order to compress an image, whereas Color Cell Compression does not use the standard deviation. Both algorithms, though, can compress an image down to ...
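The luminance-partition step that Color Cell Compression shares with Block Truncation Coding is compact enough to show directly. A sketch of encoding one 4×4 block, assuming 8-bit RGB input; the struct layout and names are illustrative, and a real CCC encoder would additionally quantize the two representative colors against a palette:

```c
#include <stdint.h>

/* Sketch of the core Color Cell Compression step for one 4x4 block:
 * partition the 16 pixels by mean luminance into two groups, give each
 * group the average color of its members, and record the grouping in a
 * 16-bit mask (one bit per pixel). */
typedef struct { uint8_t r, g, b; } rgb8;

typedef struct {
    rgb8     color[2]; /* representative color of each group */
    uint16_t mask;     /* bit i = 1 if pixel i is in the brighter group */
} ccc_block;

static ccc_block ccc_encode(const rgb8 px[16])
{
    ccc_block out = { { {0,0,0}, {0,0,0} }, 0 };
    float luma[16];
    float mean = 0.0f;
    long  sum[2][3] = { {0,0,0}, {0,0,0} };
    int   count[2] = { 0, 0 };

    for (int i = 0; i < 16; i++) {
        /* standard luminance weights for RGB */
        luma[i] = 0.299f * px[i].r + 0.587f * px[i].g + 0.114f * px[i].b;
        mean += luma[i] / 16.0f;
    }
    for (int i = 0; i < 16; i++) {
        int g = (luma[i] >= mean);          /* partition by dominant luminance */
        if (g) out.mask |= (uint16_t)(1u << i);
        sum[g][0] += px[i].r;
        sum[g][1] += px[i].g;
        sum[g][2] += px[i].b;
        count[g]++;
    }
    for (int g = 0; g < 2; g++) {
        int n = count[g] ? count[g] : 1;    /* guard against an empty group */
        out.color[g].r = (uint8_t)(sum[g][0] / n);
        out.color[g].g = (uint8_t)(sum[g][1] / n);
        out.color[g].b = (uint8_t)(sum[g][2] / n);
    }
    return out;
}
```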


Vector Quantization
Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors. Developed in the early 1980s by Robert M. Gray, it was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms. In simpler terms, vector quantization chooses a set of points to represent a larger set of points. The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction ...
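Concretely, once a codebook of centroids has been trained (for example with k-means or the Linde-Buzo-Gray algorithm), encoding a vector reduces to a nearest-centroid search, and decoding to a table lookup of the stored index. A minimal sketch; the function and parameter names are illustrative:

```c
#include <float.h>
#include <stddef.h>

/* Sketch of VQ encoding: return the index of the codebook entry nearest
 * to vec under squared Euclidean distance. The codebook is stored as
 * n_codes rows of dim floats; training it is omitted here. */
static size_t vq_encode(const float *vec, size_t dim,
                        const float *codebook, size_t n_codes)
{
    size_t best = 0;
    float  best_dist = FLT_MAX;
    for (size_t c = 0; c < n_codes; c++) {
        float d = 0.0f;
        for (size_t i = 0; i < dim; i++) {
            float diff = vec[i] - codebook[c * dim + i];
            d += diff * diff;               /* accumulate squared distance */
        }
        if (d < best_dist) {
            best_dist = d;
            best = c;
        }
    }
    return best; /* only this index is stored or transmitted */
}
```

Since commonly occurring vectors land near some centroid, they reconstruct with low error, which is the density-matching property described above.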


Neural Network (Machine Learning)
In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a computational model inspired by the structure and functions of biological neural networks. A neural network consists of connected units or nodes called ''artificial neurons'', which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by ''edges'', which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the ''activation function''. The strength of the signal at each connection is determined by a ''weight'', which adjusts during the learning process. Typically, neuron ...
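The per-neuron computation described above is small enough to show directly: a weighted sum of the incoming signals plus a bias, passed through a non-linear activation function. A sketch using the sigmoid, one common activation choice; all names here are illustrative:

```c
#include <math.h>
#include <stddef.h>

/* Sketch of one artificial neuron's forward computation: the weights
 * model connection strengths (synapses), and the sigmoid is the
 * non-linear activation applied to the summed input. */
static float neuron_forward(const float *inputs, const float *weights,
                            size_t n, float bias)
{
    float sum = bias;
    for (size_t i = 0; i < n; i++)
        sum += weights[i] * inputs[i];  /* signal scaled by connection weight */
    return 1.0f / (1.0f + expf(-sum));  /* sigmoid activation */
}
```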


Mipmap
In computer graphics, a mipmap (''mip'' being an acronym of the Latin phrase ''multum in parvo'', meaning "much in little") is a pre-calculated, optimized sequence of images, each with a resolution a factor of two smaller than the previous. Their use is known as ''mipmapping''. They are intended to increase rendering speed and reduce aliasing artifacts. A high-resolution mipmap image is used for high-density samples, such as for objects close to the camera; lower-resolution images are used as the object appears farther away. This is a more efficient way of downscaling a texture than sampling all texels in the original texture that would contribute to a screen pixel; it is faster to take a constant number of samples from the appropriately downfiltered textures. Since mipmaps, by definition, are pre-allocated, additional storage space is required to take advantage of them. They are also related to wavelet compression. Mipmaps are widely used in 3D ...
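Each level in the chain is typically produced from the previous one by averaging blocks of texels. Because every level holds a quarter as many texels as the one before, the complete chain adds only about one third to the original storage (1/4 + 1/16 + ... = 1/3). A sketch of a single reduction step using a plain 2×2 box filter on a grayscale image; real generators filter every color channel and may use better kernels:

```c
#include <stdint.h>

/* Sketch: build the next mip level by averaging each 2x2 texel block of
 * the previous level. Assumes w and h are even; grayscale for brevity. */
static void mip_downsample(const uint8_t *src, int w, int h, uint8_t *dst)
{
    for (int y = 0; y < h / 2; y++) {
        for (int x = 0; x < w / 2; x++) {
            int a = src[(2 * y)     * w + (2 * x)];
            int b = src[(2 * y)     * w + (2 * x + 1)];
            int c = src[(2 * y + 1) * w + (2 * x)];
            int d = src[(2 * y + 1) * w + (2 * x + 1)];
            dst[y * (w / 2) + x] = (uint8_t)((a + b + c + d + 2) / 4); /* rounded mean */
        }
    }
}
```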




Texels
In computer graphics, a texel, texture element, or texture pixel is the fundamental unit of a texture map. Textures are represented by arrays of texels representing the texture space, just as other images are represented by arrays of pixels. Texels can also be described by image regions that are obtained through simple procedures such as thresholding. Voronoi tessellation can be used to define their spatial relationships: divisions are made at the midpoints between the centroids of each texel and the centroids of every surrounding texel for the entire texture. This results in each texel centroid having a Voronoi polygon surrounding it, which consists of all points that are closer to its own texel centroid than to any other centroid.

Rendering
When texturing a 3D surface or surfaces (a process known as texture mapping), the rendering ...
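At render time the simplest texel lookup is nearest-neighbor: the surface point's normalized (u, v) coordinates are scaled to the texel grid, truncated, and used as array indices. A minimal sketch that clamps to the edge and ignores wrap modes and filtering; names are illustrative:

```c
#include <stdint.h>

/* Sketch: nearest-neighbor fetch from a w-by-h texel array, with (u, v)
 * in [0, 1). Real samplers add wrap modes and bilinear or better filtering. */
static uint32_t fetch_texel(const uint32_t *texels, int w, int h,
                            float u, float v)
{
    int x = (int)(u * (float)w);
    int y = (int)(v * (float)h);
    if (x < 0) x = 0; else if (x >= w) x = w - 1; /* clamp to edge */
    if (y < 0) y = 0; else if (y >= h) y = h - 1;
    return texels[y * w + x];
}
```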


Nvidia
Nvidia Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware. Founded in 1993 by Jensen Huang (president and CEO), Chris Malachowsky, and Curtis Priem, it designs and supplies graphics processing units (GPUs), application programming interfaces (APIs) for data science and high-performance computing, and system on a chip units (SoCs) for mobile computing and the automotive market. Nvidia is also a leading supplier of artificial intelligence (AI) hardware and software. Nvidia outsources the manufacturing of the hardware it designs. Nvidia's professional line of GPUs is used for edge-to-cloud computing and in supercomputers and workstations for applications in fields such as architecture, engineering and construction, media and entertainment, automotive, scientific research, and manufacturing design. Its GeForce line of GPUs is aimed at the consumer market and is used in ...


Graphics Processing Units
A graphics processing unit (GPU) is a specialized electronic circuit designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles. GPUs were later found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. The ability of GPUs to rapidly perform vast numbers of calculations has led to their adoption in diverse fields including artificial intelligence (AI), where they excel at handling data-intensive and computationally demanding tasks. Other non-graphical uses include the training of neural networks and cryptocurrency mining.

History
1970s
Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned ...


Ericsson Texture Compression
Ericsson Texture Compression (ETC) is a lossy texture compression technique developed in collaboration with Ericsson Research in early 2005. It was originally developed under the name iPACKMAN and based on an earlier compression scheme called PACKMAN.

ETC1
The original 'ETC1' compression scheme provides 6× compression of 24-bit RGB data. It does not support the compression of images with alpha components, although there are work-arounds for this. ETC1 takes 4×4 groups of pixel data and compresses each into a single 64-bit word. The 4×4 pixel group is first divided into two 4×2 chunks, either horizontally or vertically. Each half is given a base color, either using 4/4/4 RGB for both halves, or by giving one half a 5/5/5 RGB base and having the other be a 3/3/3-bit offset from that base. Each 4×2 region also has a 3-bit brightness range selection. Each pixel is then offset from the base color by adding one of four signed values to the base color for its ...
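The final step described above, offsetting each pixel from its half-block's base color by one of four signed values, can be sketched directly. The modifier values below are the intensity tables from the ETC1 specification; the mapping from a pixel's raw 2-bit index to a table column is simplified here, and the helper names are illustrative:

```c
#include <stdint.h>

/* ETC1 intensity modifier tables: the half-block's 3-bit "brightness
 * range" codeword selects a row; each pixel's 2-bit index selects a
 * column (column order simplified relative to the bit layout in the
 * specification). */
static const int etc1_modifier[8][4] = {
    {   -8,  -2,  2,   8 }, {  -17,  -5,  5,  17 },
    {  -29,  -9,  9,  29 }, {  -42, -13, 13,  42 },
    {  -60, -18, 18,  60 }, {  -80, -24, 24,  80 },
    { -106, -33, 33, 106 }, { -183, -47, 47, 183 },
};

static uint8_t clamp255(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Reconstruct one channel of one pixel: base is the half-block's base
 * color channel expanded to 8 bits, table is the 3-bit codeword, and
 * index is the pixel's 2-bit modifier index. The same signed modifier
 * is applied to R, G, and B. */
static uint8_t etc1_channel(uint8_t base, int table, int index)
{
    return clamp255((int)base + etc1_modifier[table][index]);
}
```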