Audio analysis

Audio analysis refers to the extraction of information and meaning from audio signals for analysis, classification, storage, retrieval, synthesis, and other purposes. The observation media and interpretation methods vary: audio analysis can refer to the human ear and how people interpret the audible sound source, or to the use of technology such as an audio analyzer to evaluate other qualities of a sound source, such as amplitude, distortion, frequency response, and more. Once an audio source's information has been observed, it can then be processed for logical, emotional, descriptive, or otherwise relevant interpretation by the user.


Natural Analysis

The most prevalent form of audio analysis is derived from the sense of hearing, a type of sensory perception that occurs in much of the planet's fauna and a fundamental process for many living beings. Sounds made by the surrounding environment or by other living beings provide input to the hearing mechanism, from which the listener's brain interprets the sound and decides how to respond. Examples of such functions include speech, the startle response, music listening, and more. An inherent ability of humans, hearing is fundamental to communication across the globe, and the process of assigning meaning and value to speech is a complex but necessary function of the human body.

The study of the auditory system has centered largely on mathematics and the analysis of sinusoidal vibrations and sounds. The Fourier transform has been an essential tool for understanding how the human ear processes moving air into the audible frequency range of roughly 20 Hz to 20,000 Hz. The ear is able to take one complex waveform and separate it into varying frequency ranges thanks to structures in the inner ear that are tuned to specific frequency ranges. This initial sensory input is then analyzed further up in the neurological system, where the perception of sound takes place. The auditory system also works in tandem with the neural system so that the listener can spatially locate the direction from which a sound source originated. This is known as the Haas effect or precedence effect, and it is possible because there are two ears, or auditory receptors. The difference in the time it takes for a sound to reach each ear provides the information the brain needs to calculate the spatial position of the source.
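The localization step lends itself to a small numerical illustration. The sketch below is a minimal model of interaural time difference (ITD) based localization, not a description of the auditory system itself: it assumes a plane-wave source, an ear spacing of about 21.5 cm, and a speed of sound of 343 m/s, and the function names and constants are illustrative choices.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C (assumed)
EAR_SPACING = 0.215      # m, assumed distance between the two ears

def estimate_itd(left: np.ndarray, right: np.ndarray, sample_rate: int) -> float:
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals.
    Positive means the left-ear signal lags the right-ear signal."""
    correlation = np.correlate(left, right, mode="full")
    lag_samples = np.argmax(correlation) - (len(right) - 1)
    return lag_samples / sample_rate

def itd_to_azimuth(itd_seconds: float) -> float:
    """Convert an ITD to a source azimuth (degrees) under a simple
    plane-wave model: itd = (d / c) * sin(azimuth)."""
    max_itd = EAR_SPACING / SPEED_OF_SOUND
    ratio = np.clip(itd_seconds / max_itd, -1.0, 1.0)  # keep arcsin in range
    return float(np.degrees(np.arcsin(ratio)))

# Example: broadband noise reaching the right ear ~0.29 ms before the left
fs = 48_000
rng = np.random.default_rng(0)
right = rng.standard_normal(fs // 10)      # 100 ms of noise
left = np.roll(right, 14)                  # 14-sample (~0.29 ms) delay
print(itd_to_azimuth(estimate_itd(left, right, fs)))   # roughly 28 degrees
```

In practice, human localization also relies on interaural level differences and spectral cues, which this sketch ignores.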


Signal Analysis

Audio signals can be analyzed in several different ways, depending on the kind of information desired from the signal. Types of signal analysis include:

* Amplitude, level, and gain
* Frequency domain analysis
* Frequency response
* Total harmonic distortion plus noise (THD+N)
* Phase
* Crosstalk
* Intermodulation distortion (IMD)
* Stereo and surround analysis

Hardware analyzers have been the primary means of signal analysis since the earliest days of audio test equipment, beginning with the Hewlett-Packard HP200A. They are typically used in the engineering, testing, and manufacturing of professional and consumer-grade products. As computer technology progressed, integrated software found its way into these hardware systems, and eventually audio analysis tools appeared that required no hardware beyond the computer running the software. Software audio analyzers are regularly used in various stages of music production, such as live audio, mixing, and mastering. These products typically employ Fast Fourier Transform (FFT) algorithms to provide a visual representation of the signal being analyzed. Display and information types include the frequency spectrum, goniometer (stereo field), surround field, spectrogram, and more.
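As a rough illustration of the FFT-based processing such software analyzers rely on, here is a minimal sketch, assuming NumPy, a Hann window, and an illustrative dB scaling convention; the function name and parameters are assumptions for this example, not any particular product's behavior.

```python
import numpy as np

def magnitude_spectrum(signal: np.ndarray, sample_rate: int):
    """Return (frequencies in Hz, magnitude in dB) for a real-valued signal,
    using a Hann window and the one-sided (real) FFT."""
    n = len(signal)
    window = np.hanning(n)
    spectrum = np.fft.rfft(signal * window)
    # The factor 2 restores the amplitude split across positive/negative
    # frequencies; n * 0.5 accounts for FFT length and the Hann window's
    # coherent gain, so a full-scale sine reads near 0 dB.
    magnitude = 2.0 * np.abs(spectrum) / (n * 0.5)
    magnitude_db = 20.0 * np.log10(np.maximum(magnitude, 1e-12))
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs, magnitude_db

# Example: a 1 kHz tone at half of full scale plus a quiet 3 kHz component
fs = 48_000
t = np.arange(fs) / fs                                   # one second of audio
x = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 3000 * t)
freqs, mags = magnitude_spectrum(x, fs)
print(freqs[np.argmax(mags)])                            # ~1000.0 Hz
```

A spectrogram view, as mentioned above, is essentially this computation repeated over short, overlapping windows of the signal.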


See also

* Semantic audio
* Speech recognition
* Sound recognition

