Phase Vocoder
A phase vocoder is a vocoder-type algorithm that can interpolate information present in the frequency and time domains of audio signals by using phase information extracted from a frequency transform. The algorithm allows frequency-domain modifications to a digital sound file (typically time expansion/compression and pitch shifting). At the heart of the phase vocoder is the short-time Fourier transform (STFT), typically coded using fast Fourier transforms. The STFT converts a time-domain representation of sound into a time-frequency representation (the "analysis" phase), allowing modifications to the amplitudes or phases of specific frequency components of the sound, before resynthesis of the time-frequency representation into the time domain by the inverse STFT. The time evolution of the resynthesized sound can be changed by modifying the time positions of the STFT frames prior to the resynthesis operation, allowing for time-scale modification of the original sound file.
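A minimal sketch of this analysis, modification, and resynthesis loop in Python/NumPy, with our own function and variable names (not taken from any particular library): frames are windowed and transformed with the FFT, analysis frames are read out at a new rate, bin phases are propagated so partials stay coherent, and the inverse FFT frames are overlap-added.

    # Minimal phase-vocoder time-stretch sketch (illustrative only).
    # `rate` > 1 reads analysis frames faster, giving a shorter output.
    import numpy as np

    def stft(x, win, hop):
        # Slice the signal into overlapping windowed frames and FFT each one.
        n = len(win)
        frames = [x[i:i + n] * win for i in range(0, len(x) - n, hop)]
        return np.array([np.fft.rfft(f) for f in frames])

    def istft(spectra, win, hop):
        # Overlap-add the inverse FFT of each frame back into a time signal.
        # (No window-overlap normalization here; a real implementation would
        # divide by the summed squared window.)
        n = len(win)
        out = np.zeros(hop * (len(spectra) - 1) + n)
        for i, spec in enumerate(spectra):
            out[i * hop:i * hop + n] += np.fft.irfft(spec, n) * win
        return out

    def time_stretch(x, rate, n_fft=2048, hop=512):
        win = np.hanning(n_fft)
        spectra = stft(x, win, hop)
        # Expected phase advance of each bin over one hop.
        omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
        phase = np.angle(spectra[0])
        out = [spectra[0]]
        # Step through the analysis frames at a non-integer rate and
        # accumulate phase so the resynthesized partials stay coherent.
        t = rate
        while t < len(spectra) - 1:
            lo, frac = int(t), t - int(t)
            mag = (1 - frac) * np.abs(spectra[lo]) + frac * np.abs(spectra[lo + 1])
            dphi = np.angle(spectra[lo + 1]) - np.angle(spectra[lo]) - omega
            dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))  # wrap to [-pi, pi]
            phase = phase + omega + dphi
            out.append(mag * np.exp(1j * phase))
            t += rate
        return istft(np.array(out), win, hop)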
Spectral Leakage
The Fourier transform of a function of time, s(t), is a complex-valued function of frequency, S(f), often referred to as a frequency spectrum. Any linear time-invariant (LTI) operation on s(t) produces a new spectrum of the form H(f)·S(f), which changes the relative magnitudes and/or phases of the non-zero values of S(f). Any other type of operation creates new frequency components that may be referred to as spectral leakage in the broadest sense. Sampling, for instance, produces leakage, which we call ''aliases'' of the original spectral component. For Fourier transform purposes, sampling is modeled as a product between s(t) and a Dirac comb function. The spectrum of a product is the convolution between S(f) and the transform of the other function, which inevitably creates the new frequency components. But the term 'leakage' usually refers to the effect of ''windowing'', which is the product of s(t) with a finite-duration window function.
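As a small illustration of the windowing effect just described, the following Python/NumPy sketch (illustrative values, our own variable names) compares the spectrum of a sinusoid that does not complete a whole number of cycles in the segment, first with no taper (a rectangular window) and then with a Hann taper; the tapered case leaves far less energy away from the true frequency.

    # Spectral leakage demo: 100.5 Hz falls between DFT bins, so its energy
    # spreads into neighboring bins; a Hann taper suppresses the far leakage.
    import numpy as np

    N = 1024                        # window length in samples
    fs = 1024.0                     # sample rate (Hz), so bin spacing is 1 Hz
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 100.5 * t)

    rect = np.abs(np.fft.rfft(x))                  # rectangular window (no taper)
    hann = np.abs(np.fft.rfft(x * np.hanning(N)))  # Hann-tapered segment

    # Compare the fraction of energy far from the true frequency, in dB.
    far = np.r_[0:80, 120:N // 2]   # bins well away from 100.5 Hz
    print("far-from-peak energy, rectangular: %.1f dB" %
          (10 * np.log10(np.sum(rect[far] ** 2) / np.sum(rect ** 2))))
    print("far-from-peak energy, Hann:        %.1f dB" %
          (10 * np.log10(np.sum(hann[far] ** 2) / np.sum(hann ** 2))))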
Audio Time Stretching And Pitch Scaling
Time stretching is the process of changing the speed or duration of an audio signal without affecting its pitch. Pitch scaling is the opposite: the process of changing the pitch without affecting the speed. Pitch shift is pitch scaling implemented in an effects unit and intended for live performance. Pitch control is a simpler process which affects pitch and speed simultaneously by slowing down or speeding up a recording. These processes are often used to match the pitches and tempos of two pre-recorded clips for mixing when the clips cannot be reperformed or resampled. Time stretching is often used to adjust radio commercials and the audio of television advertisements to fit exactly into the 30 or 60 seconds available. It can be used to conform longer material to a designated time slot, such as a 1-hour broadcast.
Resampling
The simplest way to change the duration or pitch of an audio recording is to change the playback speed. For a digital audio recording, this can be accomplished by resampling: reading the samples out at a different rate changes the pitch and the duration by the same factor.
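A sketch of that resampling idea in Python/NumPy, with hypothetical function names: reading the samples out at a different rate by linear interpolation shortens (or lengthens) the recording and shifts its pitch by the same factor.

    # Playback-speed change by linear-interpolation resampling.
    import numpy as np

    def change_speed(x, factor):
        """Play x back `factor` times faster: output is shorter and pitched up."""
        old_idx = np.arange(0, len(x) - 1, factor)    # fractional read positions
        lo = old_idx.astype(int)
        frac = old_idx - lo
        return (1 - frac) * x[lo] + frac * x[lo + 1]  # linear interpolation

    # Example: a 440 Hz tone sped up by 2x lasts half as long and sounds at 880 Hz.
    fs = 44100
    t = np.arange(fs) / fs                    # one second
    tone = np.sin(2 * np.pi * 440 * t)
    fast = change_speed(tone, 2.0)
    print(len(tone) / fs, "s ->", round(len(fast) / fs, 3), "s")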
Curtis Roads
Curtis Roads (born May 9, 1951) is an American composer, author and computer programmer. He composes electronic and electroacoustic music, specializing in granular and pulsar synthesis.
Career and music
Born in Cleveland, Ohio, Roads studied composition at the California Institute of the Arts and the University of California, San Diego. He is former chair and current vice chair of the Media Arts and Technology Program at the University of California, Santa Barbara ("MAT: Faculty and Researchers", ''Mat.UCSB.edu''). He has previously taught at the University of Naples "Federico II".
JoAnn Kuchera-Morin
JoAnn Kuchera-Morin (born 1951) is a professor of media arts & technology and of music. A composer and researcher specializing in multimodal interaction, she is the creator and director of the AlloSphere at the California NanoSystems Institute and the creator and director of the Center for Research in Electronic Art Technology (CREATE) at the University of California, Santa Barbara. Kuchera-Morin initiated and was chief scientist of the University of California Digital Media Innovation Program (DiMI) from 1998 to 2003. The culmination of Kuchera-Morin's creativity and research is the AlloSphere instrument, a 30-foot-diameter, 3-story-high metal sphere inside an echo-free cube, designed for immersive, interactive scientific and artistic investigation of multi-dimensional data sets. Scientifically, the AlloSphere is an instrument for gaining insight and developing bodily intuition about environments into which the body cannot venture: abstract higher-dimensional information spaces.
Roger Reynolds
Roger Lee Reynolds (born July 18, 1934) is an American composer. He is known for his capacity to integrate diverse ideas and resources, and for the seamless blending of traditional musical sounds with those newly enabled by technology. Beyond composition, his contributions to musical life include mentorship, algorithmic design, engagement with psychoacoustics, writing books and articles, and festival organization. During his early career, Reynolds worked in Europe and Asia, returning to the US in 1969 to accept an appointment in the music department at the University of California, San Diego. His leadership there helped establish it, in parallel with Stanford, IRCAM, and MIT, as a state-of-the-art center for composition and computer music exploration. Reynolds won early recognition with Fulbright, Guggenheim, National Endowment for the Arts, and National Institute of Arts and Letters awards. In 1989, he was awarded the Pulitzer Prize for a string orchestra composition, ''Whispers Out of Time''.
Transfigured Wind
Transfiguration(s) or The Transfiguration may refer to:
Religion
* Transfiguration of Jesus, an event in the Bible
* Feast of the Transfiguration, a Christian holiday celebrating the Transfiguration of Jesus
* Transfiguration (religion), a momentary transformation of a person into some aspect of the divine
Paintings
* ''Transfiguration'' (Bellini, Venice), c. 1454–1460
* ''Transfiguration of Christ'' (Bellini), c. 1480
* ''Transfiguration'' (Lotto), c. 1510–1512
* ''Transfiguration Altarpiece'' (Perugino), 1517
* ''Transfiguration'' (Pordenone), c. 1515–1516
* ''Transfiguration'' (Raphael), c. 1516–1520
* ''Transfiguration'' (Rubens), 1604–1605
* ''Transfiguration'' (Savoldo), c. 1530
Film and television
* ''The Transfiguration'' (film), a 2016 American film
* Transfiguration (Harry Potter), a subject taught at Hogwarts in ''Harry Potter'' media
* "Transfigurations", a 1990 episode of ''Star Trek: The Next Generation''
Literature
* ''Transfigurations'' (novel)
Vox Cycle
''Vox Cycle'' is a series of six electroacoustic compositions by Trevor Wishart. An independent movement cycle for four amplified voices, the works were composed between 1979 and 1988 and feature extended vocal techniques and contemporary vocal writing. ''Vox Cycle'' focuses on the relationship and interpolation between natural sounds and the human voice, the composer's main musical interest, which he has researched for a long time, beginning with the composition ''Red Bird'', released in 1978. The poetics underlying the work have linguistic and philosophical relevance, concerning the relationship between the creation and disintegration of man, natural developments, and the failure of western culture and society. ''The Raw and the Cooked'' by Claude Lévi-Strauss influenced the composer's central idea for these compositions. All the vocals in the movements, except ''Vox V'', which is based only on recordings of vocal sounds improvised by Wishart himself, are performed by the four vocalists.
Trevor Wishart
Trevor Wishart (born 11 October 1946) is an English composer, based in York. Wishart has contributed to composing with digital audio media, both fixed and interactive. He has also written extensively on the topic of what he terms "sonic art", and contributed to the design and implementation of software tools used in the creation of digital music, notably the Composers Desktop Project. Wishart was born in Leeds, West Riding of Yorkshire. He was educated at the University of Oxford (BA 1968), the University of Nottingham (MA 1969), and the University of York (PhD 1973). Although mainly a freelance composer, he holds an honorary position at the University of York. He was appointed as composer-in-residence at the University of Durham in 2006, and then at the University of Oxford Faculty of Music in 2010–11, supported by the Leverhulme Trust.
Music
Wishart's compositional interests deal mainly with the human voice, in particular with its transformation and its interpolation with natural sounds.
IRCAM
IRCAM (French: ''Institut de recherche et coordination acoustique/musique''; English: Institute for Research and Coordination in Acoustics/Music) is a French institute dedicated to the research of music and sound, especially in the fields of avant-garde and electroacoustic art music. It is situated next to, and is organisationally linked with, the Centre Pompidou in Paris. The extension of the building was designed by Renzo Piano and Richard Rogers. Much of the institute is located underground, beneath the fountain to the east of the buildings.
A centre for musical research
Several concepts for electronic music and audio processing have emerged at IRCAM. John Chowning pioneered work on FM synthesis at IRCAM, and Miller Puckette originally wrote Max at IRCAM in the mid-1980s, which would become the real-time audio processing graphical programming environment Max/MSP. Max/MSP has subsequently become a widely used tool in electroacoustic music.
IEEE Transactions On Speech And Audio Processing
The Institute of Electrical and Electronics Engineers (IEEE) is an American 501(c)(3) public charity professional organization for electrical engineering, electronics engineering, and other related disciplines. The IEEE has a corporate office in New York City and an operations center in Piscataway, New Jersey. The IEEE was formed in 1963 as an amalgamation of the American Institute of Electrical Engineers and the Institute of Radio Engineers.
History
The IEEE traces its founding to 1884 and the American Institute of Electrical Engineers. In 1912, the rival Institute of Radio Engineers was formed. Although the AIEE was initially larger, the IRE attracted more students and was larger by the mid-1950s. The AIEE and IRE merged in 1963. The IEEE is headquartered in New York City, but most business is done at the IEEE Operations Center in Piscataway, New Jersey, opened in 1975. The Australian Section of the IEEE existed between 1972 and 1985, after which it split into state- and territory-based sections.
Window Function
In signal processing and statistics, a window function (also known as an apodization function or tapering function) is a mathematical function that is zero-valued outside of some chosen interval. Typically, window functions are symmetric around the middle of the interval, approach a maximum in the middle, and taper away from the middle. Mathematically, when another function or waveform/data-sequence is "multiplied" by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the "view through the window". Equivalently, and in actual practice, the segment of data within the window is first isolated, and then only that data is multiplied by the window function values. Thus, tapering, not segmentation, is the main purpose of window functions. The reasons for examining segments of a longer function include detection of transient events and time-averaging of frequency spectra. The duration of the segments is determined in each application by requirements like time and frequency resolution.
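A brief Python/NumPy sketch of the practice described above, using a Hann window built from its cosine formula as an assumed example: a segment is first isolated from a longer signal and then multiplied by the window values before its spectrum is taken.

    # Build a Hann window from w[n] = 0.5 - 0.5*cos(2*pi*n/(N-1)) and use it
    # to taper a segment cut from a longer signal before taking its spectrum.
    import numpy as np

    def hann(N):
        n = np.arange(N)
        return 0.5 - 0.5 * np.cos(2 * np.pi * n / (N - 1))  # symmetric, zero at both ends

    signal = np.random.default_rng(0).standard_normal(10_000)  # stand-in for a longer recording
    N = 1024
    segment = signal[2000:2000 + N]   # first isolate the segment...
    tapered = segment * hann(N)       # ...then multiply by the window values
    spectrum = np.fft.rfft(tapered)   # averaging many such spectra gives a time-averaged spectrum

    assert np.allclose(hann(N), np.hanning(N))  # matches NumPy's built-in Hann window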