Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.


History

The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music.

Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for perceptual coding and is widely used in speech coding, while MDCT coding is widely used in modern audio coding formats such as MP3 and Advanced Audio Coding (AAC).


Analog signals

An analog audio signal is a continuous signal represented by an electrical voltage or current that is ''analogous'' to the sound waves in the air. Analog signal processing then involves physically altering the continuous signal by changing the voltage or current or charge via electrical circuits. Historically, before the advent of widespread digital technology, analog was the only method by which to manipulate a signal. Since that time, as computers and software have become more capable and affordable, digital signal processing has become the method of choice. However, in music applications, analog technology is often still desirable as it often produces nonlinear responses that are difficult to replicate with digital filters.
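
For illustration, one common digital approximation of such a nonlinearity is a waveshaping function. The following minimal Python sketch (using NumPy; the 440 Hz tone and the tanh curve are arbitrary illustrative choices, not a standard method) applies soft clipping to a sine wave as a rough stand-in for analog saturation:

    import numpy as np

    # One second of a 440 Hz sine wave at a 48 kHz sample rate.
    sample_rate = 48000
    t = np.arange(sample_rate) / sample_rate
    x = np.sin(2 * np.pi * 440 * t)

    # A tanh waveshaper: a simple stand-in for the smooth saturation
    # an analog circuit exhibits when driven hard.
    drive = 5.0                              # how hard the stage is pushed
    y = np.tanh(drive * x) / np.tanh(drive)  # keep peaks at roughly +/-1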


Digital signals

A digital representation expresses the audio waveform as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as digital signal processors, microprocessors and general-purpose computers. Most modern audio systems use a digital approach as the techniques of digital signal processing are much more powerful and efficient than analog domain signal processing.
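
As a concrete illustration, the following minimal Python/NumPy sketch samples a sine function at discrete instants and quantizes each sample to a 16-bit signed integer, which is essentially what a PCM stream stores (the 44.1 kHz rate, 16-bit depth and 440 Hz tone are typical values chosen only for the example):

    import numpy as np

    sample_rate = 44100      # samples per second (typical CD rate)
    bit_depth = 16           # bits per sample
    duration = 0.01          # seconds

    # Sample a 440 Hz sine wave at discrete time instants.
    t = np.arange(int(sample_rate * duration)) / sample_rate
    signal = np.sin(2 * np.pi * 440 * t)          # values in [-1.0, 1.0]

    # Quantize each sample to a 16-bit signed integer (PCM-style).
    max_code = 2 ** (bit_depth - 1) - 1           # 32767
    pcm = np.round(signal * max_code).astype(np.int16)

    print(pcm[:8])   # the first few binary-coded samples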


Applications

Processing methods and application areas include storage, data compression, music information retrieval, speech processing, localization, acoustic detection, transmission, noise cancellation, acoustic fingerprinting, sound recognition, synthesis, and enhancement (e.g. equalization, filtering, level compression, echo and reverb removal or addition, etc.).


Audio broadcasting

Audio signal processing is used when broadcasting audio signals in order to enhance their fidelity or optimize for bandwidth or latency. In this domain, the most important audio processing takes place just before the transmitter. The audio processor here must prevent or minimize overmodulation, compensate for non-linear transmitters (a potential issue with medium wave and shortwave broadcasting), and adjust overall loudness to the desired level.
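
A minimal sketch of that last step might look like the following Python/NumPy fragment, which scales a block of samples toward a target peak level and clips anything that would overshoot; real broadcast processors use far more sophisticated multiband dynamics, so this is only illustrative and the parameter values are arbitrary:

    import numpy as np

    def broadcast_gain_stage(samples, target_peak=0.8):
        """Scale a block toward a target peak, then clip overshoots."""
        peak = np.max(np.abs(samples))
        if peak > 0:
            samples = samples * (target_peak / peak)  # adjust overall loudness
        return np.clip(samples, -1.0, 1.0)            # prevent overmodulation

    # Example: a quiet test tone brought up to the target level.
    t = np.arange(48000) / 48000
    quiet_tone = 0.1 * np.sin(2 * np.pi * 1000 * t)
    processed = broadcast_gain_stage(quiet_tone)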


Active noise control

Active noise control is a technique designed to reduce unwanted sound. By creating a signal that is identical to the unwanted noise but with the opposite polarity, the two signals cancel out due to destructive interference.
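
The principle can be demonstrated in a few lines of Python/NumPy by inverting the polarity of a noise signal and summing the two waveforms; in a real system the anti-noise must be estimated adaptively from microphone measurements, so this is only the idealized case (the 100 Hz hum is an arbitrary example):

    import numpy as np

    # Unwanted noise: a 100 Hz hum sampled at 48 kHz.
    t = np.arange(48000) / 48000
    noise = 0.5 * np.sin(2 * np.pi * 100 * t)

    # Anti-noise: the identical waveform with opposite polarity.
    anti_noise = -noise

    # Superposition of the two signals: destructive interference.
    residual = noise + anti_noise
    print(np.max(np.abs(residual)))   # 0.0 in this idealized case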


Audio synthesis

Audio synthesis is the electronic generation of audio signals. A musical instrument that accomplishes this is called a synthesizer. Synthesizers can either imitate sounds or generate new ones. Audio synthesis is also used to generate human speech using speech synthesis.
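
As a minimal illustration of direct digital synthesis, the Python/NumPy sketch below shapes a sine oscillator with a decaying envelope to produce a simple struck-bell-like tone; real synthesizers combine many oscillators, filters and envelopes, and the frequency and decay rate here are arbitrary choices:

    import numpy as np

    sample_rate = 44100
    duration = 1.0
    t = np.arange(int(sample_rate * duration)) / sample_rate

    # Oscillator: a 440 Hz sine wave (concert A).
    oscillator = np.sin(2 * np.pi * 440 * t)

    # Envelope: exponential decay, so the note fades out after the attack.
    envelope = np.exp(-4 * t)

    note = oscillator * envelope   # the synthesized audio signal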


Audio effects

Audio effects alter the sound of a musical instrument or other audio source. Common effects include distortion, often used with electric guitar in electric blues and rock music; dynamic effects such as volume pedals and compressors, which affect loudness; filters such as wah-wah pedals and graphic equalizers, which modify frequency ranges; modulation effects, such as chorus, flangers and phasers; pitch effects such as pitch shifters; and time effects, such as reverb and delay, which create echoing sounds and emulate the sound of different spaces. Musicians, audio engineers and record producers use effects units during live performances or in the studio, typically with electric guitar, bass guitar, electronic keyboard or electric piano. While effects are most frequently used with electric or electronic instruments, they can be used with any audio source, such as acoustic instruments, drums, and vocals.
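
As an example of a time-based effect, the following Python/NumPy sketch implements a simple feedback delay (echo), in which each output sample mixes the dry input with a delayed, attenuated copy of earlier output; the 300 ms delay time, 0.4 feedback and 0.5 mix are arbitrary illustrative values:

    import numpy as np

    def feedback_delay(x, sample_rate=44100, delay_s=0.3, feedback=0.4, mix=0.5):
        """Simple echo: y[n] = x[n] + feedback * y[n - delay]."""
        d = int(delay_s * sample_rate)        # delay length in samples
        y = np.zeros(len(x) + 4 * d)          # leave room for the echo tail
        y[:len(x)] = x
        for n in range(d, len(y)):
            y[n] += feedback * y[n - d]       # feed delayed output back in
        return (1 - mix) * np.pad(x, (0, 4 * d)) + mix * y

    # Example: a single impulse ("click") becomes a train of decaying echoes.
    click = np.zeros(44100)
    click[0] = 1.0
    echoed = feedback_delay(click)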


Computer audition

Computer audition (CA), or machine listening, is the general field of study of algorithms and systems for the automatic analysis and understanding of audio by machine.


See also

* Sound card
* Sound effect

