Radial distortion, whilst primarily dominated by low order radial components,^{[3]} can be corrected using Brown's distortion model,^{[4]} also known as the Brown–Conrady model based on earlier work by Conrady.^{[5]} The Brown–Conrady model corrects both for radial distortion and for tangential distortion caused by physical elements in a lens not being perfectly aligned. The latter is also known as decentering distortion. See Zhang^{[6]} for additional discussion of radial distortion.
Barrel distortion typically will have a negative term for K_1 whereas pincushion distortion will have a positive value. Moustache distortion will have a non-monotonic radial geometric series where for some r the sequence will change sign.
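The forward (distorting) direction of the Brown–Conrady model can be sketched as follows. This is a minimal illustration using the common OpenCV-style coefficient convention (k1, k2 radial; p1, p2 tangential); conventions and signs vary between sources, and the function name is illustrative:

```python
def brown_conrady_distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Map undistorted normalized coordinates (x, y) to distorted ones.

    Radial terms (k1, k2) model barrel or pincushion distortion;
    tangential terms (p1, p2) model decentering distortion caused by
    misaligned lens elements.
    """
    r2 = x * x + y * y                       # squared radius from the centre
    radial = 1.0 + k1 * r2 + k2 * r2 * r2    # even-order radial polynomial
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

With a negative k1, points far from the centre are pulled inward (barrel distortion); with a positive k1 they are pushed outward (pincushion).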
To model radial distortion, the division model^{[7]} typically provides a more accurate approximation than the Brown–Conrady even-order polynomial model:^{[8]}

x_u = x_c + (x_d − x_c) / (1 + K_1 r^2 + K_2 r^4 + ⋯)
y_u = y_c + (y_d − y_c) / (1 + K_1 r^2 + K_2 r^4 + ⋯)
using the same parameters previously defined. For radial distortion, this division model is often preferred over the Brown–Conrady model, as it requires fewer terms to describe severe distortion accurately.^{[8]} Using this model, a single term is usually sufficient to model most cameras.^{[9]}
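A single-parameter form of the division model can be sketched as follows. This is a minimal illustration, assuming the distortion centre lies at the origin of normalized coordinates; the function name is illustrative:

```python
def division_undistort(x_d, y_d, k1):
    """Single-parameter division model: recover undistorted normalized
    coordinates from distorted ones, with the distortion centre at the
    origin.  Note the correction is a simple division, so it is easy to
    invert analytically, unlike the polynomial model.
    """
    r2 = x_d * x_d + y_d * y_d   # squared distorted radius
    s = 1.0 + k1 * r2            # division-model denominator
    return x_d / s, y_d / s
```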
Software can correct those distortions by warping the image with a reverse distortion. This involves determining which distorted pixel corresponds to each undistorted pixel, which is non-trivial due to the non-linearity of the distortion equation.^{[3]} Lateral chromatic aberration (purple/green fringing) can be significantly reduced by applying such warping for red, green and blue separately.
Distorting or undistorting requires either both sets of coefficients or inverting the non-linear problem, which, in general, lacks an analytical solution. Standard approaches, such as approximation, local linearization, and iterative solvers, all apply. Which solver is preferable depends on the accuracy required and the computational resources available.
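One standard iterative approach, fixed-point iteration, can be sketched as follows for the purely radial part of the polynomial model. This is a sketch, not a production solver (no convergence check, and it assumes moderate distortion so the iteration converges); the function name is illustrative:

```python
def invert_radial(x_d, y_d, k1, k2=0.0, iters=20):
    """Invert x_d = x_u * (1 + k1*r^2 + k2*r^4) by fixed-point iteration.

    Starts from the distorted point as the initial guess and repeatedly
    divides out the radial scale evaluated at the current estimate.
    """
    x, y = x_d, y_d  # initial guess: undistorted ~= distorted
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = x_d / scale, y_d / scale
    return x, y
```

For typical lens coefficients the scale factor changes slowly with radius, so a handful of iterations usually recovers the undistorted point to sub-pixel accuracy.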
Calibrated systems work from a table of lens/camera transfer functions:
Manual systems allow manual adjustment of distortion parameters:
convert distorted_image.jpg -distort barrel "0.06335 -0.18432 -0.13009" corrected_image.jpg
Besides these systems that address images, there are some that also adjust distortion parameters for videos:
Radial distortion is a failure of a lens to be rectilinear: a failure to image lines into lines. If a photograph is not taken straight-on then, even with a perfect rectilinear lens, rectangles will appear as trapezoids: lines are imaged as lines, but the angles between them are not preserved (tilt is not a conformal map). This effect can be controlled by using a perspective control lens, or corrected in post-processing.
Due to perspective, cameras image a cube as a square frustum (a truncated pyramid, with trapezoidal sides)—the far end is smaller than the near end. This creates perspective, and the rate at which this scaling happens (how quickly more distant objects shrink) creates a sense of a scene being deep or shallow. This cannot be changed or corrected by a simple transform of the resulting image, because it requires 3D information, namely the depth of objects in the scene. This effect is known as perspective distortion; the image itself is not distorted, but is perceived as distorted when viewed from a normal viewing distance.
Note that if the center of the image is closer than the edges (for example, a straight-on shot of a face), then barrel distortion and wide-angle distortion (taking the shot from close) both increase the size of the center, while pincushion distortion and telephoto distortion (taking the shot from far) both decrease the size of the center. However, radial distortion bends straight lines (out or in), while perspective distortion does not bend lines, and these are distinct phenomena. Fisheye lenses are wide-angle lenses with heavy barrel distortion and thus exhibit both these phenomena, so objects in the center of the image (if shot from a short distance) are particularly enlarged: even if the barrel distortion is corrected, the resulting image is still from a wide-angle lens, and will still have a wide-angle perspective.