Computational Imaging
Computational imaging is the process of forming images indirectly from measurements, using algorithms that rely on a significant amount of computing. In contrast to traditional imaging, computational imaging systems involve a tight integration of the sensing system and the computation in order to form the images of interest. The ubiquitous availability of fast computing platforms (such as multi-core CPUs and GPUs), together with advances in algorithms and modern sensing hardware, is resulting in imaging systems with significantly enhanced capabilities. Computational imaging systems cover a broad range of applications, including computational microscopy, tomographic imaging, MRI, ultrasound imaging, computational photography, synthetic aperture radar (SAR), and seismic imaging. The integration of sensing and computation in computational imaging systems allows access to information that was otherwise not obtainable. For example, a single X-ray image does not reveal the precise location of a fracture, but a CT scan, which combines multiple X-ray projections of the same object, can determine its location in three dimensions.
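As a minimal illustration of forming an image indirectly from measurements (a sketch, not from the article; the one-dimensional scene, Gaussian blur model, and regularization weight are all assumptions), the following Python snippet simulates blurred, noisy measurements of a scene and recovers it with a Tikhonov-regularized inverse filter:

```python
import numpy as np

# Hypothetical 1D "scene" observed only through a blurring sensor.
rng = np.random.default_rng(0)
n = 256
scene = np.zeros(n)
scene[60:80] = 1.0           # a rectangular feature
scene[150] = 2.0             # a point source

# Forward model: circular convolution with a Gaussian kernel, plus noise.
x = np.arange(n)
kernel = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
kernel /= kernel.sum()
H = np.fft.fft(np.fft.ifftshift(kernel))
measured = np.fft.ifft(H * np.fft.fft(scene)).real
measured += 0.01 * rng.standard_normal(n)

# The "computation" half of the system: regularized inversion in Fourier space.
lam = 1e-3                   # arbitrary regularization weight
restored = np.fft.ifft(np.conj(H) * np.fft.fft(measured)
                       / (np.abs(H) ** 2 + lam)).real
print("reconstruction error:", np.linalg.norm(restored - scene))
```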
Multi-core Processor
A multi-core processor (MCP) is a microprocessor on a single integrated circuit (IC) with two or more separate central processing units (CPUs), called ''cores'' to emphasize their multiplicity (for example, ''dual-core'' or ''quad-core''). Each core reads and executes program instructions, specifically ordinary CPU instructions (such as add, move data, and branch). However, the MCP can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. Manufacturers typically integrate the cores onto a single IC die, known as a ''chip multiprocessor'' (CMP), or onto multiple dies in a single chip package. As of 2024, the microprocessors used in almost all new personal computers are multi-core. A multi-core processor implements multiprocessing in a single physical package.
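As a minimal sketch of exploiting multiple cores from software (all names and workloads here are illustrative assumptions), the following Python snippet spreads a CPU-bound task across worker processes, one per core by default; processes are used rather than threads because CPython's global interpreter lock serializes pure-Python bytecode within a single process:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def count_primes(bound):
    # Deliberately CPU-heavy: count primes below `bound` by trial division.
    return sum(1 for n in range(2, bound)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

if __name__ == "__main__":
    tasks = [30_000] * (os.cpu_count() or 4)   # one chunk of work per core
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, tasks))
    print(results)
```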
Compressed Sensing
Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than the Nyquist–Shannon sampling theorem requires. Recovery is possible under two conditions. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals. Compressed sensing has applications in, for example, magnetic resonance imaging (MRI), where the incoherence condition is typically satisfied.

Overview

A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements.
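To make the recovery idea concrete, here is a small sketch (the Gaussian sensing matrix, problem sizes, and the ISTA solver are assumptions chosen for brevity, not the article's method) that reconstructs a sparse signal from far fewer measurements than unknowns by solving an l1-regularized least-squares problem:

```python
import numpy as np

# min_x 0.5 * ||A x - y||^2 + lam * ||x||_1, solved by ISTA
# (iterative shrinkage-thresholding: gradient step + soft threshold).
rng = np.random.default_rng(1)
n, m, k = 400, 100, 8                     # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent measurements
y = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    g = x - (A.T @ (A @ x - y)) / L       # gradient step on the quadratic
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print("max abs error:", np.abs(x - x_true).max())
```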
3D Slicer
3D Slicer (Slicer) is a free and open-source software package for image analysis and scientific visualization. Slicer is used in a variety of medical applications, including studies of autism, multiple sclerosis, systemic lupus erythematosus, prostate cancer, lung cancer, breast cancer, schizophrenia, orthopedic biomechanics, COPD, cardiovascular disease, and neurosurgery.

About

3D Slicer is free, open-source software (BSD-style license) that provides a flexible, modular platform for image analysis and visualization. It can be extended to enable development of both interactive and batch processing tools for a variety of applications. 3D Slicer provides image registration, processing of DTI (diffusion tractography), an interface to external devices for image guidance support, and GPU-enabled volume rendering, among other capabilities. 3D Slicer has a modular organization that allows the addition of new functionality and provides a number of generic features.
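For a flavor of the scripting interface, here is a minimal sketch using Slicer's bundled Python console (it runs only inside a Slicer session; the file path is a placeholder assumption):

```python
import slicer  # available inside 3D Slicer's embedded Python environment

# Load a volume from disk; the path below is hypothetical.
volume_node = slicer.util.loadVolume("/path/to/ct_scan.nrrd")
print(volume_node.GetName())                       # volumes are MRML nodes

# View the voxel data as a NumPy array for custom batch processing.
voxels = slicer.util.arrayFromVolume(volume_node)
print(voxels.shape, voxels.min(), voxels.max())
```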
ImageJ
ImageJ is a Java-based image processing program developed at the National Institutes of Health and the Laboratory for Optical and Computational Instrumentation (LOCI, University of Wisconsin). Its first version, ImageJ 1.x, is developed in the public domain, while ImageJ2 and the related projects SciJava, ImgLib2, and SCIFIO are licensed under the permissive BSD-2 license. ImageJ was designed with an open architecture that provides extensibility via Java plugins and recordable macros. Custom acquisition, analysis, and processing plugins can be developed using ImageJ's built-in editor and a Java compiler. User-written plugins make it possible to solve many image processing and analysis problems, from three-dimensional live-cell imaging to radiological image processing, and from multiple imaging system data comparisons to automated hematology systems. ImageJ's plugin architecture and built-in development environment have made it a popular platform for teaching image processing.
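Besides Java plugins, ImageJ's Script Editor also runs Python-syntax (Jython) scripts against the same API; a minimal sketch (the filter choice and radius are arbitrary assumptions) might look like:

```python
from ij import IJ  # ImageJ's core API, importable from Jython scripts

imp = IJ.getImage()                          # the frontmost open image
IJ.run(imp, "Gaussian Blur...", "sigma=2")   # invoke a built-in filter by menu name
imp.updateAndDraw()                          # refresh the display
```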
Mathematical Optimization
Mathematical optimization (alternatively spelled ''optimisation'') or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines, from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics.
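As a small worked sketch (assuming SciPy; the objective and starting point are arbitrary choices), the following snippet minimizes a real function of two variables by systematically searching over input values:

```python
import numpy as np
from scipy.optimize import minimize

def f(v):
    # A smooth test function with its minimum at (1, 1).
    x, y = v
    return (x - 1.0) ** 2 + 10.0 * (y - x ** 2) ** 2

result = minimize(f, x0=np.array([-1.0, 2.0]), method="Nelder-Mead")
print(result.x)   # approximately [1, 1]
```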
Estimation Theory
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An ''estimator'' attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered:
* The probabilistic approach (described in this article) assumes that the measured data is random, with a probability distribution dependent on the parameters of interest.
* The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.

Examples

For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters.
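The voter example can be simulated directly; in this sketch (the true proportion and sample size are arbitrary assumptions) the estimator is the sample proportion, and its standard error shrinks with sample size:

```python
import numpy as np

rng = np.random.default_rng(42)
p_true = 0.37                            # unknown parameter in a real poll
sample = rng.random(1000) < p_true       # 1000 simulated polled voters
p_hat = sample.mean()                    # estimator: the sample proportion
std_err = np.sqrt(p_hat * (1 - p_hat) / sample.size)
print(f"estimate {p_hat:.3f} +/- {std_err:.3f} (true value {p_true})")
```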
Least Squares
The method of least squares is a mathematical optimization technique that aims to determine the best-fit function by minimizing the sum of the squares of the differences between the observed values and the predicted values of the model. The method is widely used in areas such as regression analysis, curve fitting, and data modeling. The least squares method can be categorized into linear and nonlinear forms, depending on the relationship between the model parameters and the observed data. The method was first proposed by Adrien-Marie Legendre in 1805 and further developed by Carl Friedrich Gauss.

History

Founding

The method of least squares grew out of the fields of astronomy and geodesy, as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Discovery. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation.
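A minimal numerical sketch (the synthetic slope, intercept, and noise level are arbitrary assumptions): fitting a line by minimizing the sum of squared residuals with NumPy's linear least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 50)
y = 2.5 * x - 1.0 + rng.normal(scale=1.5, size=x.size)  # noisy observations

# Design matrix: one column for the slope, one of ones for the intercept;
# lstsq solves min ||X @ beta - y||^2.
X = np.column_stack([x, np.ones_like(x)])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print("fitted slope and intercept:", beta)   # close to 2.5 and -1.0
```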
High-dynamic-range Imaging
High dynamic range (HDR), also known as wide dynamic range, extended dynamic range, or expanded dynamic range, is a signal with a higher dynamic range than usual. The term is often used in discussing the dynamic ranges of images, videos, audio, or radio. It may also apply to the means of recording, processing, and reproducing such signals, including analog and digitized signals.

Imaging

In this context, the term ''high dynamic range'' means there is a large amount of variation in light levels within a scene or an image. The ''dynamic range'' refers to the range of luminosity between the brightest area and the darkest area of that scene or image. High-dynamic-range imaging (HDRI) refers to the set of imaging technologies and techniques that allow the dynamic range of images or videos to be increased. It covers the acquisition, creation, storage, distribution, and display of images and videos. Modern films have often been shot with cameras featuring a higher dynamic range, and legacy films can be post-converted to HDR.
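One common HDRI acquisition idea can be sketched with synthetic data (the scene, exposure times, and clipping thresholds below are all assumptions): merge differently exposed captures, trusting each pixel only where it is neither underexposed nor clipped:

```python
import numpy as np

rng = np.random.default_rng(3)
radiance = rng.uniform(0.01, 50.0, size=(64, 64))  # "true" scene luminance
exposures = [0.01, 0.1, 1.0]                       # relative shutter times

# Each capture saturates at 1.0, so no single exposure sees the whole range.
captures = [np.clip(radiance * t, 0.0, 1.0) for t in exposures]

numer = np.zeros_like(radiance)
denom = np.zeros_like(radiance)
for img, t in zip(captures, exposures):
    w = ((img > 0.02) & (img < 0.98)).astype(float)  # keep mid-range pixels
    numer += w * img / t                             # back to radiance units
    denom += w
merged = numer / np.maximum(denom, 1e-9)
print("median relative error:",
      np.median(np.abs(merged - radiance) / radiance))
```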
Iterative Reconstruction
Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step (Herman, G. T., ''Fundamentals of Computerized Tomography: Image Reconstruction from Projections'', 2nd edition, Springer, 2009). In recent research, scientists have shown that extremely fast computation and massive parallelism are possible for iterative reconstruction, which makes iterative reconstruction practical for commercialization.

Basic concepts

The reconstruction of an image from the acquired data is an inverse problem. Often, it is not possible to solve the inverse problem exactly.
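A minimal sketch of the iterative idea (the random system below merely stands in for a real scanner geometry): reconstruct x from projection data y = A x by repeatedly correcting the estimate with the back-projected residual (Landweber iteration):

```python
import numpy as np

rng = np.random.default_rng(5)
n_pixels, n_rays = 100, 150
A = rng.standard_normal((n_rays, n_pixels))  # stand-in projection matrix
x_true = rng.random(n_pixels)
y = A @ x_true                               # simulated noiseless measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2       # small enough to converge
x = np.zeros(n_pixels)
for _ in range(5000):
    x += step * (A.T @ (y - A @ x))          # back-project the residual

print("relative error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```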
Inverse Problem
An inverse problem in science is the process of calculating, from a set of observations, the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects. Inverse problems are some of the most important mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They can be found in system identification, optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, slope stability analysis, and many other fields.
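A small numerical sketch of why inverse problems are delicate (the smoothing operator and regularization weight are illustrative assumptions): inverting an ill-conditioned forward model naively amplifies measurement noise, while Tikhonov regularization trades data fit against solution size:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50
i = np.arange(n)
# Ill-conditioned forward operator: blurring by a Gaussian kernel.
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
x_true = np.sin(2 * np.pi * i / n)              # the hidden cause
y = A @ x_true + 1e-3 * rng.standard_normal(n)  # observed effect + noise

x_naive = np.linalg.solve(A, y)                 # noise blows up
lam = 1e-2
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print("naive error:   ", np.linalg.norm(x_naive - x_true))
print("tikhonov error:", np.linalg.norm(x_tik - x_true))
```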
Parallel Computing
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling (S. V. Adve ''et al.'', "Parallel Computing Research at Illinois: The UPCRC Agenda" (PDF), Parallel@Illinois, University of Illinois at Urbana-Champaign, November 2008: "The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall.").
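A minimal sketch of data parallelism (the chunk count and workload are arbitrary assumptions): a large summation is divided into independent pieces that worker processes evaluate simultaneously, then the partial results are combined:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, chunks = 10_000_000, 8
    step = n // chunks
    ranges = [(c * step, (c + 1) * step) for c in range(chunks)]
    with Pool() as pool:                     # one worker per core by default
        total = sum(pool.map(partial_sum, ranges))
    print(total == sum(i * i for i in range(n)))   # True
```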
Fourier Ptychography
Fourier ptychography is a computational imaging technique based on optical microscopy that synthesizes a wider numerical aperture from a set of full-field images acquired at various coherent illumination angles, resulting in increased resolution compared to a conventional microscope. Each image is acquired under the illumination of a coherent light source at various angles of incidence (typically from an array of LEDs); the acquired image set is then combined using an iterative phase retrieval algorithm into a final high-resolution image that can contain up to a billion pixels (a gigapixel) with diffraction-limited resolution, resulting in a high space-bandwidth product. Fourier ptychography reconstructs the complex image of the object (with quantitative phase information), but contrary to holography, it is a non-interferometric imaging technique and thus often easier to implement. The name "ptychography" comes from the ancient Greek word πτυχή ("fold").
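A toy numerical sketch of the core loop (the grid sizes, 3x3 LED pattern, and binary circular pupil are all illustrative assumptions, and the update is a simple Gerchberg-Saxton-style replacement rather than any published algorithm): each simulated capture sees one pupil-limited, shifted patch of the object's spectrum, and the reconstruction stitches the patches together by enforcing the measured amplitudes:

```python
import numpy as np

F = lambda img: np.fft.fftshift(np.fft.fft2(img))       # centered spectrum
invF = lambda spec: np.fft.ifft2(np.fft.ifftshift(spec))

N, n = 128, 64                 # high-res grid vs. low-res camera grid
yy, xx = np.mgrid[:n, :n]
pupil = ((yy - n / 2) ** 2 + (xx - n / 2) ** 2 < 25 ** 2).astype(float)

rng = np.random.default_rng(9)
obj = rng.random((N, N)) * np.exp(1j * 0.5 * rng.random((N, N)))
spectrum = F(obj)

# Simulate captures: each LED angle selects a shifted sub-spectrum.
base = (N - n) // 2
starts = [(base + dy, base + dx) for dy in (-16, 0, 16) for dx in (-16, 0, 16)]
amps = [np.abs(invF(spectrum[y:y + n, x:x + n] * pupil)) for y, x in starts]

# Reconstruct by sweeping over angles, keeping the current phase estimate
# while replacing amplitudes with the measured ones.
est = np.zeros((N, N), dtype=complex)
for _ in range(20):
    for (y, x), a in zip(starts, amps):
        psi = invF(est[y:y + n, x:x + n] * pupil)
        psi = a * np.exp(1j * np.angle(psi))
        est[y:y + n, x:x + n] = (est[y:y + n, x:x + n] * (1 - pupil)
                                 + F(psi) * pupil)

resid = sum(np.abs(np.abs(invF(est[y:y + n, x:x + n] * pupil)) - a).sum()
            for (y, x), a in zip(starts, amps))
print("data-consistency residual:", resid)
```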