Light field
A light field, or lightfield, is a vector function that describes the amount of light flowing in every direction through every point in a space. The space of all possible ''light rays'' is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The term ''light field'' was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space. The term "radiance field" may also be used to refer to similar or identical concepts, as in modern research on neural radiance fields (NeRFs).


The plenoptic function

For geometric optics (i.e., for incoherent light and for objects larger than the wavelength of light), the fundamental carrier of light is a ray. The measure for the amount of light traveling along a ray is radiance, denoted by ''L'' and measured in W·sr−1·m−2, i.e., watts (W) per steradian (sr) per square meter (m2). The steradian is a measure of solid angle, and meters squared are used as a measure of cross-sectional area.

The radiance along all such rays in a region of three-dimensional space illuminated by an unchanging arrangement of lights is called the plenoptic function. The plenoptic illumination function is an idealized function used in computer vision and computer graphics to express the image of a scene from any possible viewing position, at any viewing angle, at any point in time. It is not used computationally in practice, but it is conceptually useful for understanding other concepts in vision and graphics. Since rays in space can be parameterized by three coordinates, ''x'', ''y'', and ''z'', and two angles, ''θ'' and ''ϕ'', the plenoptic function is five-dimensional, that is, a function over a five-dimensional manifold equivalent to the product of 3D Euclidean space and the 2-sphere.

The light field at each point in space can be treated as an infinite collection of vectors, one per direction impinging on the point, with lengths proportional to their radiances. Integrating these vectors over any collection of lights, or over the entire sphere of directions, produces a single scalar value (the total irradiance at that point) and a resultant direction. In computer graphics, this vector-valued function of 3D space is called the vector irradiance field. The vector direction at each point in the field can be interpreted as the orientation of a flat surface placed at that point so as to be most brightly illuminated.


Higher dimensionality

Time, wavelength, and polarization angle can be treated as additional dimensions, yielding correspondingly higher-dimensional functions.


The 4D light field

In a plenoptic function, if the region of interest contains a concave object (e.g., a cupped hand), then light leaving one point on the object may travel only a short distance before another point on the object blocks it. No practical device could measure the function in such a region. However, for locations outside the object's convex hull (think of shrink-wrap), the plenoptic function can be measured by capturing multiple images. In this case the function contains redundant information, because the radiance along a ray remains constant throughout its length. The redundant information is exactly one dimension, leaving a four-dimensional function variously termed the photic field, the 4D light field, or the lumigraph. Formally, the field is defined as radiance along rays in empty space.

The set of rays in a light field can be parameterized in a variety of ways. The most common is the two-plane parameterization. While this parameterization cannot represent all rays (for example, rays parallel to the two planes, if the planes are parallel to each other), it relates closely to the analytic geometry of perspective imaging. A simple way to think about a two-plane light field is as a collection of perspective images of the ''st'' plane (and of any objects that lie astride or beyond it), each taken from an observer position on the ''uv'' plane. A light field parameterized this way is sometimes called a light slab.


Sound analog

The analog of the 4D light field for sound is the sound field or wave field, as in wave field synthesis, and the corresponding parameterization is the Kirchhoff–Helmholtz integral, which states that, in the absence of obstacles, a sound field over time is given by the pressure on a plane. Thus this is two dimensions of information at any point in time, and over time a 3D field. This two-dimensionality, compared with the apparent four-dimensionality of light, arises because light travels in rays (0D at a point in time, 1D over time), while by the Huygens–Fresnel principle a sound wavefront can be modeled as spherical waves (2D at a point in time, 3D over time): light moves in a single direction (2D of information), while sound expands in every direction. However, light traveling in non-vacuous media may scatter in a similar fashion, and the irreversibility, or information lost in the scattering, is discernible in the apparent loss of a system dimension.


Image refocusing

Because the light field provides both spatial and angular information, the position of the focal plane can be altered after exposure, which is often termed ''refocusing''. The principle of refocusing is to obtain conventional 2-D photographs from a light field through an integral transform. The transform takes a light field as its input and generates a photograph focused on a specific plane. Assuming L_F(s,t,u,v) represents a 4-D light field that records light rays traveling from position (u,v) on the first plane to position (s,t) on the second plane, where F is the distance between the two planes, a 2-D photograph at any depth \alpha F can be obtained from the following integral transform:

:\mathcal{P}_\alpha\left[L_F\right](s, t) = \frac{1}{\alpha^2 F^2} \iint L_F\left(u\left(1 - \frac{1}{\alpha}\right) + \frac{s}{\alpha},\ v\left(1 - \frac{1}{\alpha}\right) + \frac{t}{\alpha},\ u, v\right)\,du\,dv,

or more concisely,

:\mathcal{P}_\alpha\left[L_F\right](\boldsymbol{s}) = \frac{1}{\alpha^2 F^2} \int L_F\left(\boldsymbol{u}\left(1-\frac{1}{\alpha}\right)+\frac{\boldsymbol{s}}{\alpha},\ \boldsymbol{u}\right) d\boldsymbol{u},

where \boldsymbol{s}=(s,t), \boldsymbol{u}=(u,v), and \mathcal{P}_\alpha\left[\cdot\right] is the photography operator.

In practice, this formula cannot be used directly, because a plenoptic camera usually captures discrete samples of the light field L_F(s,t,u,v); resampling (or interpolation) is therefore needed to compute L_F\left(\boldsymbol{u}\left(1-\frac{1}{\alpha}\right)+\frac{\boldsymbol{s}}{\alpha},\ \boldsymbol{u}\right). Another problem is high computational complexity: to compute an N\times N 2-D photograph from an N\times N\times N\times N 4-D light field, the complexity of the formula is O(N^4).
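In discrete form this transform is the familiar shift-and-add algorithm: each sub-aperture image (all samples with a fixed (u,v)) is shifted in proportion to 1 − 1/α and the results are averaged. A minimal sketch, assuming the light field is stored as a NumPy array indexed as L[s, t, u, v], and ignoring the global dilation by 1/α and the constant prefactor:

<syntaxhighlight lang="python">
import numpy as np

def refocus(L, alpha):
    """Shift-and-add refocusing of a 4-D light field.

    L     : array of shape (S, T, U, V), radiance indexed as L[s, t, u, v]
    alpha : ratio of the desired focal depth to the plane separation F
    Returns a 2-D photograph focused at depth alpha * F.
    """
    S, T, U, V = L.shape
    shift = 1.0 - 1.0 / alpha
    photo = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its offset
            # from the optical axis; nearest-pixel shifts stand in for the
            # interpolation a real implementation would use.
            du = int(round((u - (U - 1) / 2.0) * shift))
            dv = int(round((v - (V - 1) / 2.0) * shift))
            photo += np.roll(L[:, :, u, v], (du, dv), axis=(0, 1))
    return photo / (U * V)
</syntaxhighlight>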


Fourier slice photography

One way to reduce the complexity of the computation is to adopt the concept of the Fourier slice theorem: the photography operator \mathcal{P}_\alpha\left[\cdot\right] can be viewed as a shear followed by a projection, and the result is proportional to a dilated 2-D slice of the 4-D Fourier transform of the light field. More precisely, a refocused image can be generated from the 4-D Fourier spectrum of the light field by extracting a 2-D slice, applying an inverse 2-D transform, and scaling. The asymptotic complexity of the algorithm is O(N^2 \log N).
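Substituting the photography operator above into a 2-D Fourier transform shows that the required slice is \hat{\mathcal{P}}_\alpha(k_s, k_t) \propto \hat{L}_F(\alpha k_s,\ \alpha k_t,\ (1-\alpha) k_s,\ (1-\alpha) k_t). A minimal sketch of the slicing step, assuming unit sample spacing and nearest-neighbor interpolation in the frequency domain; the 4-D FFT is computed once, after which each photograph costs only a 2-D slice plus an inverse 2-D FFT (hence the O(N^2 \log N) per-photograph cost):

<syntaxhighlight lang="python">
import numpy as np

def fourier_slice_refocus(L, alpha):
    """Refocus a light field via the Fourier slice theorem.

    L     : array of shape (S, T, U, V), radiance indexed as L[s, t, u, v]
    alpha : focal-depth ratio. Returns an (S, T) photograph up to a constant
            scale factor; nearest-neighbor sampling in the frequency domain.
    """
    S, T, U, V = L.shape
    G = np.fft.fftshift(np.fft.fftn(L))  # centered 4-D spectrum (one-time cost)

    ks = np.arange(S) - S // 2           # centered integer frequency grids
    kt = np.arange(T) - T // 2
    KS, KT = np.meshgrid(ks, kt, indexing="ij")

    # Slice coordinates: G_hat(a*ks, a*kt, (1-a)*ks, (1-a)*kt).
    i = np.rint(alpha * KS).astype(int) + S // 2
    j = np.rint(alpha * KT).astype(int) + T // 2
    m = np.rint((1 - alpha) * KS).astype(int) + U // 2
    n = np.rint((1 - alpha) * KT).astype(int) + V // 2

    valid = ((i >= 0) & (i < S) & (j >= 0) & (j < T) &
             (m >= 0) & (m < U) & (n >= 0) & (n < V))
    slice2d = np.zeros((S, T), dtype=complex)
    slice2d[valid] = G[i[valid], j[valid], m[valid], n[valid]]

    # Each photograph costs only the slice plus an inverse 2-D FFT.
    return np.real(np.fft.ifft2(np.fft.ifftshift(slice2d)))
</syntaxhighlight>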


Discrete focal stack transform

Another way to compute 2-D photographs efficiently is to adopt the discrete focal stack transform (DFST). The DFST is designed to generate a collection of refocused 2-D photographs, the so-called focal stack, and can be implemented by the fast fractional Fourier transform (FrFT). The discrete photography operator \mathcal{P}_\alpha\left[\cdot\right] is defined as follows for a light field L_F(\boldsymbol{s}, \boldsymbol{u}) sampled on a 4-D grid \boldsymbol{s} = \Delta s\,\tilde{\boldsymbol{s}}, \tilde{\boldsymbol{s}} = -\boldsymbol{n}_s, \ldots, \boldsymbol{n}_s, \boldsymbol{u} = \Delta u\,\tilde{\boldsymbol{u}}, \tilde{\boldsymbol{u}} = -\boldsymbol{n}_u, \ldots, \boldsymbol{n}_u:

:\mathcal{P}_\alpha\left[L_F\right](\boldsymbol{s}) = \sum_{\tilde{\boldsymbol{u}}=-\boldsymbol{n}_u}^{\boldsymbol{n}_u} L_F(\boldsymbol{u}q + \boldsymbol{s}, \boldsymbol{u})\,\Delta\boldsymbol{u}, \quad \Delta\boldsymbol{u} = \Delta u\,\Delta v, \quad q = 1 - \frac{1}{\alpha}.

Because (\boldsymbol{u}q + \boldsymbol{s}, \boldsymbol{u}) usually does not lie on the 4-D grid, the DFST adopts trigonometric interpolation to compute the non-grid values. The algorithm consists of the following steps (a sketch follows the list):
* Sample the light field L_F(\boldsymbol{s}, \boldsymbol{u}) with sampling periods \Delta s and \Delta u to obtain the discretized light field L^d_F(\boldsymbol{s}, \boldsymbol{u}).
* Pad L^d_F(\boldsymbol{s}, \boldsymbol{u}) with zeros so that the signal length is sufficient for the FrFT without aliasing.
* For every \boldsymbol{u}, compute the discrete Fourier transform of L^d_F(\boldsymbol{s}, \boldsymbol{u}), obtaining the result R1.
* For every focal length \alpha F, compute the fractional Fourier transform of R1, where the order of the transform depends on \alpha, obtaining the result R2.
* Compute the inverse discrete Fourier transform of R2.
* Remove the marginal pixels of R2 so that each 2-D photograph has size (2\boldsymbol{n}_s+1) by (2\boldsymbol{n}_s+1).


Methods to create light fields

In computer graphics, light fields are typically produced either by rendering a 3D model or by photographing a real scene. In either case, to produce a light field, views must be obtained for a large collection of viewpoints. Depending on the parameterization, this collection typically spans some portion of a line, circle, plane, sphere, or other shape, although unstructured collections are possible. Devices for capturing light fields photographically may include a moving handheld camera or a robotically controlled camera, an arc of cameras (as in the bullet time effect used in ''The Matrix''), a dense array of cameras, handheld cameras (Ng 2005), microscopes, or other optical systems.

The number of images in a light field depends on the application. A light field capture of Michelangelo's statue of ''Night'' contains 24,000 1.3-megapixel images, which is considered large as of 2022. For light field rendering to completely capture an opaque object, images must be taken of at least the front and back. Less obviously, for an object that lies astride the ''st'' plane, finely spaced images must be taken on the ''uv'' plane (in the two-plane parameterization described above). The number and arrangement of images in a light field, and the resolution of each image, are together called the "sampling" of the 4D light field. Also of interest are the effects of occlusion, lighting, and reflection.


Applications


Illumination engineering

Gershun's reason for studying the light field was to derive (in closed form) the illumination patterns that would be observed on surfaces due to light sources of various shapes positioned above these surfaces. The branch of optics devoted to illumination engineering is nonimaging optics. It extensively uses the concept of flow lines (Gershun's flux lines) and vector flux (Gershun's light vector). However, the light field (in this case the positions and directions defining the light rays) is commonly described in terms of phase space and Hamiltonian optics.


Light field rendering

Extracting appropriate 2D slices from the 4D light field of a scene enables novel views of the scene. Depending on the parameterization of the light field and the slices, these views might be perspective, orthographic, crossed-slit, general linear cameras, multi-perspective, or another type of projection. Light field rendering is one form of image-based rendering.
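The simplest such slice in the two-plane representation is a sub-aperture image: fixing an observer position (u, v) on the ''uv'' plane and reading out all (s, t) samples. A minimal sketch with bilinear blending of the four nearest sampled viewpoints (array layout as in the earlier sketches; a full renderer would also resample in (s, t) against an assumed focal plane):

<syntaxhighlight lang="python">
import numpy as np

def sub_aperture_view(L, u, v):
    """Render the view from continuous observer position (u, v) on the uv
    plane by bilinearly blending the four nearest sampled viewpoints."""
    S, T, U, V = L.shape
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, U - 1), min(v0 + 1, V - 1)
    fu, fv = u - u0, v - v0
    return ((1 - fu) * (1 - fv) * L[:, :, u0, v0] +
            fu * (1 - fv) * L[:, :, u1, v0] +
            (1 - fu) * fv * L[:, :, u0, v1] +
            fu * fv * L[:, :, u1, v1])
</syntaxhighlight>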


Synthetic aperture photography

Integrating an appropriate 4D subset of the samples in a light field can approximate the view that would be captured by a camera having a finite (i.e., non-pinhole) aperture. Such a view has a finite depth of field. Shearing or warping the light field before performing this integration can focus on different fronto-parallel or oblique planes. Images captured by digital cameras that capture the light field can therefore be refocused after the fact.
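This amounts to the shift-and-add computation from the refocusing section restricted to an angular subset: weighting the (u, v) samples with an aperture mask before summing simulates an aperture of chosen size and shape. A minimal sketch with a circular mask (the mask shape and radius parameter are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np

def synthetic_aperture(L, alpha, radius):
    """Refocus at depth ratio alpha through a circular synthetic aperture.

    radius is measured in units of the angular sample spacing; (u, v)
    samples outside the mask are excluded from the integration.
    """
    S, T, U, V = L.shape
    shift = 1.0 - 1.0 / alpha
    photo = np.zeros((S, T))
    count = 0
    for u in range(U):
        for v in range(V):
            du = u - (U - 1) / 2.0
            dv = v - (V - 1) / 2.0
            if du * du + dv * dv > radius * radius:
                continue  # outside the aperture mask
            photo += np.roll(L[:, :, u, v],
                             (int(round(du * shift)), int(round(dv * shift))),
                             axis=(0, 1))
            count += 1
    return photo / max(count, 1)
</syntaxhighlight>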


3D display

Presenting a light field using technology that maps each sample to the appropriate ray in physical space produces an autostereoscopic visual effect akin to viewing the original scene. Non-digital technologies for doing this include integral photography, parallax panoramagrams, and holography; digital technologies include placing an array of lenslets over a high-resolution display screen, or projecting the imagery onto an array of lenslets using an array of video projectors. An array of video cameras can capture and display a time-varying light field. This essentially constitutes a 3D television system. Modern approaches to light-field display explore co-designs of optical elements and compressive computation to achieve higher resolutions, increased contrast, wider fields of view, and other benefits.


Brain imaging

Neural activity can be recorded optically by genetically encoding neurons with reversible fluorescent markers such as GCaMP that indicate the presence of calcium ions in real time. Since light field microscopy captures full volume information in a single frame, it is possible to monitor neural activity in individual neurons randomly distributed in a large volume at video framerate. Quantitative measurement of neural activity can be done despite optical aberrations in brain tissue and without reconstructing a volume image, and can be used to monitor activity in thousands of neurons.


Generalized scene reconstruction (GSR)

This is a method of creating and/or refining a scene model representing a generalized light field and a relightable matter field (Leffingwell 2018). Data used in reconstruction includes images, video, object models, and/or scene models. The generalized light field represents light flowing in the scene. The relightable matter field represents the light-interaction properties and emissivity of the matter occupying the scene. Scene data structures can be implemented using neural networks and physics-based structures, among others. The light and matter fields are at least partially disentangled.


Holographic stereograms

Image generation and predistortion of synthetic imagery for holographic stereograms is one of the earliest examples of computed light fields.


Glare reduction

Glare arises due to multiple scattering of light inside the camera body and lens optics, and it reduces image contrast. While glare has been analyzed in 2D image space (Talvala 2007), it is useful to identify it as a 4D ray-space phenomenon (Raskar 2008). Statistically analyzing the ray-space inside a camera allows the classification and removal of glare artifacts. In ray-space, glare behaves as high-frequency noise and can be reduced by outlier rejection. Such analysis can be performed by capturing the light field inside the camera, but this results in a loss of spatial resolution. Uniform and non-uniform ray sampling can be used to reduce glare without significantly compromising image resolution.
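As a minimal sketch of the outlier-rejection idea (a generic robust-statistics stand-in, not the published method): for a glare-free scene point, the radiance samples arriving at one pixel from different ray directions should agree, so directional samples far from the per-pixel median can be discarded before averaging:

<syntaxhighlight lang="python">
import numpy as np

def reject_glare(samples, k=2.0):
    """Average per-pixel angular radiance samples, discarding outliers.

    samples : array of shape (A, H, W), A ray-direction samples per pixel
    k       : rejection threshold in units of the per-pixel MAD
    """
    med = np.median(samples, axis=0)                       # per-pixel median
    mad = np.median(np.abs(samples - med), axis=0) + 1e-9  # robust spread
    keep = np.abs(samples - med) <= k * mad                # inlier mask
    return (np.where(keep, samples, 0.0).sum(axis=0) /
            np.maximum(keep.sum(axis=0), 1))
</syntaxhighlight>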


See also

* Angle-sensitive pixel
* Dual photography
* Light-field camera
* Lytro
* Raytrix
* Reflectance paper


Notes


References


Theory

* Adelson, E.H., Bergen, J.R. (1991)
"The Plenoptic Function and the Elements of Early Vision"
In ''Computation Models of Visual Processing'', M. Landy and J.A. Movshon, eds., MIT Press, Cambridge, 1991, pp. 3–20. * Arvo, J. (1994)
"The Irradiance Jacobian for Partially Occluded Polyhedral Sources"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 335–342. * Bolles, R.C., Baker, H. H., Marimont, D.H. (1987)
"Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion"
''International Journal of Computer Vision'', Vol. 1, No. 1, 1987, Kluwer Academic Publishers, pp. 7–55. * Faraday, M. (1846) "Thoughts on Ray Vibrations", ''Philosophical Magazine'', S.3, Vol XXVIII, N188, May 1846. * Gershun, A. (1936) "The Light Field", Moscow, 1936. Translated by P. Moon and G. Timoshenko in ''Journal of Mathematics and Physics'', Vol. XVIII, MIT, 1939, pp. 51–151. * Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M. (1996)
"The Lumigraph"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 43–54. * Levoy, M., Hanrahan, P. (1996)
"Light Field Rendering"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 31–42. * Moon, P., Spencer, D.E. (1981). ''The Photic Field'', MIT Press. * Wong, T.T., Fu, C.W., Heng, P.A., Leung C.S. (2002)
"The Plenoptic-Illumination Function"
''IEEE Trans. Multimedia'', Vol. 4, No. 3, pp. 361–371.


Analysis

* Wetzstein, G., Ihrke, I., Heidrich, W. (2013)
"On Plenoptic Multiplexing and Reconstruction"
''International Journal of Computer Vision (IJCV)'', Volume 101, Issue 2, pp. 384–400. * Ramamoorthi, R., Mahajan, D., Belhumeur, P. (2006)
"A First-Order Analysis of Lighting, Shading, and Shadows"
''ACM TOG''. * Zwicker, M., Matusik, W., Durand, F., Pfister, H. (2006)
"Antialiasing for Automultiscopic 3D Displays"
''Eurographics Symposium on Rendering, 2006''. * Ng, R. (2005)
"Fourier Slice Photography"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 735–744. * Durand, F., Holzschuch, N., Soler, C., Chan, E., Sillion, F. X. (2005)
"A Frequency Analysis of Light Transport"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 1115–1126. * Chai, J.-X., Tong, X., Chan, S.-C., Shum, H. (2000)
"Plenoptic Sampling"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 307–318. * Halle, M. (1994)
"Holographic Stereograms as Discrete imaging systems"
in ''SPIE Proc. Vol. #2176: Practical Holography VIII'', S.A. Benton, ed., pp. 73–84. * Yu, J., McMillan, L. (2004)
"General Linear Cameras"
''Proc. ECCV 2004'', Lecture Notes in Computer Science, pp. 14–27.


Cameras

* Marwah, K., Wetzstein, G., Bando, Y., Raskar, R. (2013)
"Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections"
''ACM Transactions on Graphics (SIGGRAPH)''. * Liang, C.K., Lin, T.H., Wong, B.Y., Liu, C., Chen, H. H. (2008)
"Programmable Aperture Photography:Multiplexed Light Field Acquisition"
''Proc. ACM SIGGRAPH''. * Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J. (2007)
"Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing"
''Proc. ACM SIGGRAPH''. * Georgiev, T., Zheng, C., Nayar, S., Curless, B., Salesin, D., Intwala, C. (2006)
"Spatio-angular Resolution Trade-offs in Integral Photography"
''Proc. EGSR 2006''. * Kanade, T., Saito, H., Vedula, S. (1998)
"The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams"
Tech report CMU-RI-TR-98-34, December 1998. * Levoy, M. (2002)
Stanford Spherical Gantry
* Levoy, M., Ng, R., Adams, A., Footer, M., Horowitz, M. (2006)
"Light Field Microscopy"
''ACM Transactions on Graphics'' (Proc. SIGGRAPH), Vol. 25, No. 3. * Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P. (2005)
"Light Field Photography with a Hand-Held Plenoptic Camera"
''Stanford Tech Report'' CTSR 2005–02, April, 2005. * Wilburn, B., Joshi, N., Vaish, V., Talvala, E., Antunez, E., Barth, A., Adams, A., Levoy, M., Horowitz, M. (2005)
"High Performance Imaging Using Large Camera Arrays"
''ACM Transactions on Graphics'' (Proc. SIGGRAPH), Vol. 24, No. 3, pp. 765–776. * Yang, J.C., Everett, M., Buehler, C., McMillan, L. (2002)
"A Real-Time Distributed Light Field Camera"
''Proc. Eurographics Rendering Workshop 2002''.
"The CAFADIS camera"


Displays

* Wetzstein, G., Lanman, D., Hirsch, M., Raskar, R. (2012)
"Tensor Displays: Compressive Light Field Display using Multilayer Displays with Directional Backlighting"
''ACM Transactions on Graphics (SIGGRAPH)'' * Wetzstein, G., Lanman, D., Heidrich, W., Raskar, R. (2011)
"Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays"
''ACM Transactions on Graphics (SIGGRAPH)'' * Lanman, D., Wetzstein, G., Hirsch, M., Heidrich, W., Raskar, R. (2011)
"Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs"
''ACM Transactions on Graphics (SIGGRAPH Asia)'' * Lanman, D., Hirsch, M., Kim, Y., Raskar, R. (2010)
"HR3D: Glasses-free 3D Display using Dual-Stacked LCDs (High-Rank 3D Display using Content-Adaptive Parallax Barriers)"
''ACM Transactions on Graphics (SIGGRAPH Asia)'' * Matusik, W., Pfister, H. (2004)
"3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes"
''Proc. ACM SIGGRAPH'', ACM Press. * Javidi, B., Okano, F., eds. (2002).
''Three-Dimensional Television, Video and Display Technologies'', Springer-Verlag. * Klug, M., Burnett, T., Fancello, A., Heath, A., Gardner, K., O'Connell, S., Newswanger, C. (2013)
"A Scalable, Collaborative, Interactive Light-field Display System"
''SID Symposium Digest of Technical Papers'' * Fattal, D., Peng, Z., Tran, T., Vo, S., Fiorentino, M., Brug, J., Beausoleil, R. (2013)
"A multi-directional backlight for a wide-angle, glasses-free three-dimensional display"
''Nature 495, 348–351''


Archives


"The Stanford Light Field Archive"

"UCSD/MERL Light Field Repository"

"The HCI Light Field Benchmark"

"Synthetic Light Field Archive"


Applications

* Grosenick, L., Anderson, T., Smith, S.J. (2009)
"Elastic Source Selection for in vivo imaging of neuronal ensembles."
From Nano to Macro, 6th IEEE International Symposium on Biomedical Imaging (2009), pp. 1263–1266. * Grosenick, L., Broxton, M., Kim, C.K., Liston, C., Poole, B., Yang, S., Andalman, A., Scharff, E., Cohen, N., Yizhar, O., Ramakrishnan, C., Ganguli, S., Suppes, P., Levoy, M., Deisseroth, K. (2017)
"Identification of cellular-activity dynamics across large tissue volumes in the mammalian brain"
bioRxiv 132688; doi:10.1101/132688.
* Heide, F., Wetzstein, G., Raskar, R., Heidrich, W. (2013)
"Adaptive Image Synthesis for Compressive Displays"
ACM Transactions on Graphics (SIGGRAPH) * Wetzstein, G., Raskar, R., Heidrich, W. (2011)
"Hand-Held Schlieren Photography with Light Field Probes"
IEEE International Conference on Computational Photography (ICCP) * Pérez, F., Marichal, J. G., Rodriguez, J.M. (2008)
"The Discrete Focal Stack Transform"
''Proc. EUSIPCO'' * Raskar, R., Agrawal, A., Wilson, C., Veeraraghavan, A. (2008)
"Glare Aware Photography: 4D Ray Sampling for Reducing Glare Effects of Camera Lenses"
''Proc. ACM SIGGRAPH.'' * Talvala, E-V., Adams, A., Horowitz, M., Levoy, M. (2007)
"Veiling Glare in High Dynamic Range Imaging"
''Proc. ACM SIGGRAPH.'' * Halle, M., Benton, S., Klug, M., Underkoffler, J. (1991)
"The UltraGram: A Generalized Holographic Stereogram"
''SPIE Vol. 1461, Practical Holography V'', S.A. Benton, ed., pp. 142–155. * Zomet, A., Feldman, D., Peleg, S., Weinshall, D. (2003)
"Mosaicing New Views: The Crossed-Slits Projection"
''IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)'', Vol. 25, No. 6, June 2003, pp. 741–754. * Vaish, V., Garg, G., Talvala, E., Antunez, E., Wilburn, B., Horowitz, M., Levoy, M. (2005)
"Synthetic Aperture Focusing using a Shear-Warp Factorization of the Viewing Transform"
''Proc. Workshop on Advanced 3D Imaging for Safety and Security'', in conjunction with CVPR 2005. * Bedard, N., Shope, T., Hoberman, A., Haralam, M.A., Shaikh, N., Kovačević, J., Balram, N., Tošić, I. (2016)
"Light field otoscope design for 3D in vivo imaging of the middle ear"
''Biomedical Optics Express'', 8(1), pp. 260–272. * Karygianni, S., Martinello, M., Spinoulas, L., Frossard, P., Tosic, I. (2018)
Automated eardrum registration from light-field data
. IEEE International Conference on Image Processing (ICIP) * Rademacher, P., Bishop, G. (1998)
"Multiple-Center-of-Projection Images"
''Proc. ACM SIGGRAPH'', ACM Press. * Isaksen, A., McMillan, L., Gortler, S.J. (2000)
"Dynamically Reparameterized Light Fields"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 297–306. * Buehler, C., Bosse, M., McMillan, L., Gortler, S., Cohen, M. (2001)
"Unstructured Lumigraph Rendering"
''Proc. ACM SIGGRAPH'', ACM Press. * Ashdown, I. (1993)

''Journal of the Illuminating Engineering Society'', Vol. 22, No. 1, Winter, 1993, pp. 163–180. * Chaves, J. (2015
"Introduction to Nonimaging Optics, Second Edition"
CRC Press * Winston, R., Miñano, J.C., Benitez, P.G., Shatz, N., Bortz, J.C. (2005)
"Nonimaging Optics"
Academic Press * Pégard, N.C., Liu, H.Y., Antipa, N., Gerlock, M., Adesnik, H., Waller, L. (2016) "Compressive Light-field Microscopy for 3D Neural Activity Recording" ''Optica'', Vol. 3, No. 5, pp. 517–524. * Leffingwell, J., Meagher, D., Mahmud, K., Ackerson, S. (2018)
"Generalized Scene Reconstruction."
arXiv:1803.08496v3 [cs.CV], pp. 1–13. * Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R. (2020)
"NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis"
Computer Vision – ECCV 2020, 405–421. * Yu, A., Fridovich-Keil, S., Tancik, M., Chen, Q., Recht, B., Kanazawa, A. (2021)
"Plenoxels: Radiance Fields without Neural Networks."
arXiv:2111.11215, pp. 1–25 * Perez, C.C., Lauri, A., Symvoulidis, P., Cappetta, M., Erdmann, A., Westmeyer, G.G. (2015) "Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera" ''Journal of Biomedical Optics'', 20(9), 096009. doi:10.1117/1.JBO.20.9.096009. * León, K., Galvis, L., Arguello, H. (2016)
"Reconstruction of multispectral light field (5d plenoptic function) based on compressive sensing with colored coded apertures from 2D projections"
''Revista Facultad de Ingeniería Universidad de Antioquia'', 80, pp. 131.