Depth Of Field
The depth of field (DOF) is the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image captured with a camera. See also the closely related depth of focus.

Factors affecting depth of field
For cameras that can only focus on one object distance at a time, depth of field is the distance between the nearest and the farthest objects that are in acceptably sharp focus in the image. "Acceptably sharp focus" is defined using a property called the "circle of confusion". The depth of field is determined by the focal length, the distance to the subject (the object to be imaged), the acceptable size of the circle of confusion, and the aperture. Limitations of depth of field can sometimes be overcome with various techniques and equipment. The approximate depth of field is given by

\text{DOF} \approx \frac{2u^2 N c}{f^2}

for a given maximum acceptable circle of confusion c, focal length f, f-number N, and distance to subject u. As the distance to the subject or the size of the acceptable circle of confusion increases, the depth of field increases …
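As a quick numeric check of the formula above, here is a minimal Python sketch; the function name, the units, and the example values (a 50 mm lens at f/8, subject at 3 m, 0.03 mm circle of confusion) are illustrative assumptions, not from the source.

    def depth_of_field(u: float, N: float, c: float, f: float) -> float:
        """Approximate DOF = 2 * u**2 * N * c / f**2 (all lengths in metres)."""
        return 2 * u**2 * N * c / f**2

    # Illustrative example: 50 mm lens at f/8, subject at 3 m,
    # 0.03 mm circle of confusion (a common full-frame criterion).
    dof = depth_of_field(u=3.0, N=8.0, c=0.03e-3, f=50e-3)
    print(f"DOF is approximately {dof:.2f} m")  # about 1.73 m

Doubling the f-number or the circle of confusion doubles the DOF, while doubling the focal length cuts it to a quarter, matching the formula's dependence on N, c, and f.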
Bollard
A bollard is a sturdy, short, vertical post. The term originally referred to a post on a ship or quay used principally for mooring boats. In modern usage, it also refers to posts installed to control road traffic and posts designed to prevent automotive vehicles from colliding with pedestrians and structures.

Etymology
The term is probably related to bole, meaning a tree trunk. The earliest citation given by the ''Oxford English Dictionary'' (referring to a maritime bollard) dates from 1844, although a reference in the ''Caledonian Mercury'' in 1817 describes bollards as "huge posts".

History
Wooden posts were used for basic traffic management from at least the second half of the 17th century. One early well-documented case is that of the "postes and rales in ye King's highway for ye (safety) of all foot passengers" erected in 1671 in the High Street of Old Brentford, Middlesex (part of the London–Bath road). Another is that of "two oak-posts" set up next to the …
Motion Blur
Motion blur is the apparent streaking of moving objects in a photograph or a sequence of frames, such as a film or animation. It results when the image being recorded changes during the recording of a single exposure, due to rapid movement or long exposure.

Usages / Effects of motion blur

Photography
When a camera creates an image, that image does not represent a single instant of time. Because of technological constraints or artistic requirements, the image may represent the scene over a period of time. Most often this exposure time is brief enough that the image captured by the camera appears to capture an instantaneous moment, but this is not always so, and a fast-moving object or a longer exposure time may result in blurring artifacts which make this apparent. As objects in a scene move, an image of that scene must represent an integration of all positions of those objects, as well as the camera's viewpoint, over the period of exposure …
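The "integration of all positions over the exposure" can be made concrete with a short Python sketch that simulates linear motion blur by averaging horizontally shifted copies of an image; the shift count and the toy image are assumptions for illustration, not from the source.

    import numpy as np

    def motion_blur_horizontal(image: np.ndarray, blur_px: int) -> np.ndarray:
        """Average blur_px horizontally shifted copies: a discrete stand-in
        for integrating the scene over the exposure time."""
        acc = np.zeros_like(image, dtype=float)
        for shift in range(blur_px):           # one sampled instant per shift
            acc += np.roll(image, shift, axis=1)
        return acc / blur_px                   # normalize the "integral"

    img = np.zeros((8, 8))
    img[:, 3] = 1.0                            # a thin vertical object
    print(motion_blur_horizontal(img, blur_px=4)[0])
    # the object is now streaked across 4 pixels at quarter intensity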
Diffraction
Diffraction is the deviation of waves from straight-line propagation, without any change in their energy, due to an obstacle or an aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Diffraction is the same physical effect as interference, but interference is typically applied to the superposition of a few waves, whereas the term diffraction is used when many waves are superposed. The Italian scientist Francesco Maria Grimaldi coined the word ''diffraction'' and was the first to record accurate observations of the phenomenon, in 1660.

In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle, which treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit or aperture that …
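For a concrete instance of the characteristic pattern described above, the single-slit (Fraunhofer) intensity follows I/I0 = sinc^2(a*sin(theta)/lambda), a standard consequence of the Huygens–Fresnel principle. The Python sketch below evaluates it with illustrative numbers (500 nm light, a 2 micrometre slit).

    import numpy as np

    wavelength = 500e-9                   # 500 nm light (illustrative)
    slit_width = 2e-6                     # 2 micrometre slit, a few wavelengths
    theta = np.linspace(-0.5, 0.5, 11)    # observation angles in radians

    # np.sinc(x) = sin(pi*x) / (pi*x), so this is the standard
    # single-slit Fraunhofer pattern I/I0 = sinc^2(a*sin(theta)/lambda).
    intensity = np.sinc(slit_width * np.sin(theta) / wavelength) ** 2

    for t, i in zip(theta, intensity):
        print(f"theta = {t:+.2f} rad   I/I0 = {i:.4f}")  # peak at theta = 0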
4D Light Field
A light field, or lightfield, is a vector function that describes the amount of light flowing in every direction through every point in a space. The space of all possible ''light rays'' is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The term ''light field'' was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space. The term "radiance field" may also be used to refer to similar or identical concepts, and appears in modern research such as neural radiance fields.

The plenoptic function
For geometric optics (i.e., for incoherent light and for objects larger than the wavelength of light), the fundamental carrier of light is a ray. The measure for the amount of light traveling along a ray is radiance, denoted …
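To make this concrete, here is a minimal Python sketch of a discretely sampled light field in the common two-plane ("4D") reduction of the plenoptic function, where each ray is indexed by its crossings (u, v) and (s, t) of two parallel planes; the array shape and the averaging used for a "conventional photograph" are illustrative assumptions.

    import numpy as np

    U, V, S, T = 8, 8, 64, 64                 # angular and spatial samples
    light_field = np.zeros((U, V, S, T))      # radiance of each discrete ray

    def radiance(u: int, v: int, s: int, t: int) -> float:
        """Radiance along the ray through (u, v) on one plane and (s, t) on the other."""
        return light_field[u, v, s, t]

    # A conventional photograph integrates over the angular coordinates,
    # averaging every ray that lands on the same spatial sample:
    photo = light_field.sum(axis=(0, 1)) / (U * V)
    print(photo.shape)                        # (64, 64)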
Plenoptic Camera
A light field camera, also known as a plenoptic camera, is a camera that captures information about the ''light field'' emanating from a scene; that is, the intensity of light in a scene, and also the precise direction in which the light rays are traveling through space. This contrasts with conventional cameras, which record only light intensity at various wavelengths. One type uses an array of micro-lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. Multi-camera arrays are another type. A holographic image is a type of film-based light field image.

History

Early research
The first light field camera was proposed by Gabriel Lippmann in 1908. He called his concept "integral photography". Lippmann's experimental results included crude integral photographs made by using a plastic sheet embossed with a regular array of microlenses, or by partially embedding small glass beads, closely packed in a random pattern, into …
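One standard use of the directional samples such a camera records is digital refocusing by "shift-and-add": shift each sub-aperture (angular) view in proportion to its angular offset, then average. The Python sketch below assumes the (U, V, S, T) light-field layout from the previous entry and an illustrative shift scale alpha; it is a sketch of the general idea, not any particular camera's pipeline.

    import numpy as np

    def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
        """Shift-and-add refocusing: light_field has shape (U, V, S, T)."""
        U, V, S, T = light_field.shape
        image = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                # shift each angular view in proportion to its offset from
                # the central sub-aperture; alpha picks the virtual focal plane
                ds = int(round(alpha * (u - U // 2)))
                dt = int(round(alpha * (v - V // 2)))
                image += np.roll(light_field[u, v], (ds, dt), axis=(0, 1))
        return image / (U * V)

    lf = np.random.rand(8, 8, 32, 32)         # stand-in for captured data
    print(refocus(lf, alpha=1.0).shape)       # (32, 32)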
Apodization
In signal processing, apodization (from Greek "removing the foot") is the modification of the shape of a mathematical function. The function may represent an electrical signal, an optical transmission, or a mechanical structure. In optics, it is primarily used to remove Airy disks caused by diffraction around an intensity peak, improving the focus.

Apodization in electronics

Apodization in signal processing
The term apodization is used frequently in publications on Fourier-transform infrared (FTIR) signal processing. An example of apodization is the use of the Hann window in the fast Fourier transform analyzer to smooth the discontinuities at the beginning and end of the sampled time record.

Apodization in digital audio
An apodizing filter can be used in digital audio processing instead of the more common brick-wall filters, in order to reduce the pre- and post-ringing that the latter introduce.

Apodization in mass spectrometry
During oscillation within an Orbitrap …
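The Hann-window example above fits in a few lines of Python: taper the sampled record so its ends go smoothly to zero before the FFT, which suppresses the spectral leakage caused by the discontinuity at the record's edges. The sample rate and tone frequency are illustrative.

    import numpy as np

    fs = 1000                                   # sample rate in Hz
    t = np.arange(1024) / fs
    signal = np.sin(2 * np.pi * 123.4 * t)      # a tone between FFT bins

    window = np.hanning(len(signal))            # Hann window taper
    spectrum_raw = np.abs(np.fft.rfft(signal))
    spectrum_apodized = np.abs(np.fft.rfft(signal * window))

    # Far from the tone, the apodized spectrum is orders of magnitude lower:
    print(spectrum_raw[300], spectrum_apodized[300])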
Wavefront Coding
In optics and signal processing, wavefront coding refers to the use of a phase-modulating element in conjunction with deconvolution to extend the depth of field of a digital imaging system such as a video camera. Wavefront coding falls under the broad category of computational photography as a technique to enhance depth of field.

Encoding
The wavefront of a light wave passing through the camera system is modulated using optical elements that introduce a spatially varying optical path length. The modulating elements must be placed at or near the plane of the aperture stop or pupil, so that the same modulation is introduced for all field angles across the field of view. This modulation corresponds to a change in the complex argument of the pupil function of such an imaging device, and it can be engineered with different goals in mind: e.g., extending the depth of focus.

Linear phase mask
Wavefront coding with linear phase masks works by creating an optical transfer function that e…
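As a sketch of a phase-modulated pupil, the Python snippet below builds a cubic phase mask, the classic wavefront-coding element from the literature (a cubic rather than the linear masks discussed above), and computes the resulting point spread function as the squared magnitude of the pupil's Fourier transform. The grid size and mask strength are illustrative assumptions.

    import numpy as np

    n = 256
    x = np.linspace(-1, 1, n)                 # normalized pupil coordinates
    X, Y = np.meshgrid(x, x)
    alpha = 20.0                              # mask strength in radians of phase

    aperture = (X**2 + Y**2 <= 1.0)           # circular aperture stop
    phase = alpha * (X**3 + Y**3)             # cubic, spatially varying path length
    pupil = aperture * np.exp(1j * phase)     # complex pupil function

    # For incoherent imaging, the point spread function is the squared
    # magnitude of the pupil's Fourier transform; with the cubic mask it
    # stays nearly constant over a wide range of defocus.
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    print(psf.shape)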
Light Scanning Photomacrography
Light Scanning Photomacrography (LSP), also known as Scanning Light Photomacrography (SLP) or Deep-Field Photomacrography, is a film-based photographic technique that allows high-magnification light imaging with exceptional depth of field (DOF). This method overcomes a limitation of conventional macro photography, which typically keeps only a portion of the subject in acceptable focus at high magnifications.

Historical background
The principles of LSP were first documented in the early 1960s by Dan McLachlan Jr., who highlighted its capability for extreme focal depth in microscopy (McLachlan, Dan Jr., "Extreme Focal Depth in Microscopy", Applied Optics, Vol. 3, No. 9, 1964, pp. 1009–1013) and who patented the process in 1968 (McLachlan, D., Jr., US Patent 3398634A, 27 Aug 1968). The technique was revived and further developed in the 1980s by photographers such as Darwin Dale and Nile Root, a faculty member at the Rochester Institute of Technology (Root, N., 1985, scanning photomacrograph…)
Depth Map
In 3D computer graphics and computer vision, a depth map is an image or image channel that contains information relating to the distance of the surfaces of scene objects from a viewpoint. The term is related (and may be analogous) to ''depth buffer'', ''Z-buffer'', ''Z-buffering'', and ''Z-depth'' (Computer Arts / 3D World Glossary, ftp://ftp.futurenet.co.uk/pub/arts/Glossary.pdf, retrieved 26 January 2011). The "Z" in these latter terms relates to a convention that the central axis of view of a camera is in the direction of the camera's Z axis, and not to the absolute Z axis of a scene.

Examples
(Image gallery: "Cubic Structure"; "Depth Map: nearer is darker"; "Depth Map: nearer the focal plane is darker".)
Two different depth maps can be seen …
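A minimal Python sketch of the "nearer is darker" convention shown in the gallery: normalize a camera-space Z buffer to [0, 1] so the nearest surface maps to black. The buffer values and function name are illustrative.

    import numpy as np

    def depth_to_gray(z: np.ndarray) -> np.ndarray:
        """Map camera-space Z to [0, 1]: nearest pixel -> 0 (darker)."""
        z_near, z_far = z.min(), z.max()
        return (z - z_near) / max(z_far - z_near, 1e-12)  # guard constant depth

    z_buffer = np.array([[1.0, 2.0],
                         [3.0, 5.0]])          # illustrative camera-space Z
    print(depth_to_gray(z_buffer))             # nearest value maps to 0.0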
3D Reconstruction
In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or by passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.

Motivation and applications
3D reconstruction has long been a difficult research goal. Using 3D reconstruction, one can determine any object's 3D profile, as well as the 3D coordinates of any point on the profile. The 3D reconstruction of objects is a general scientific problem and a core technology of a wide variety of fields, such as computer-aided geometric design (CAGD), computer graphics, computer animation, computer vision, medical imaging, computational science, virtual reality, digital media, etc. For instance, a patient's lesion information can be presented in 3D on a computer, which offers a new and accurate approach to diagnosis …