Structured Audio
MPEG-4 Structured Audio is an ISO/IEC standard for describing sound. It was published as subpart 5 of MPEG-4 Part 3 (ISO/IEC 14496-3:1999) in 1999. It allows the transmission of synthetic music and sound effects at very low bit rates (from 0.01 to 10 kbit/s) and the description of parametric sound post-production for mixing multiple streams and adding effects to audio scenes. It does not standardize a particular set of synthesis methods, but rather a method for describing them: the sound descriptions generate audio when compiled (or interpreted) by a compliant decoder. MPEG-4 Structured Audio consists of the following major elements (a sketch of the orchestra/score model follows this list):

* Structured Audio Orchestra Language (SAOL), an audio programming language. SAOL is historically related to Csound and other so-called Music-N languages. It was created at the MIT Media Lab by Eric Scheirer, then a graduate student studying under Barry Vercoe, during the 1990s.
* Structured Audio Score Language (SASL), which is used to d ...
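
To make the division of labor concrete, here is a minimal Python sketch of the orchestra/score idea — it is not SAOL or SASL syntax. The "orchestra" is code that knows how to generate sound; the "score" is pure data saying what to play and when. All names (beep, render, the parameter layout) are illustrative, not part of the standard.

    import math

    SR = 44100  # sample rate in Hz

    def beep(freq, dur, amp=0.3):
        """A trivial 'instrument': an exponentially decaying sine tone."""
        n = int(dur * SR)
        return [amp * math.exp(-3.0 * t / n) * math.sin(2 * math.pi * freq * t / SR)
                for t in range(n)]

    # The "score" is pure data: (start time in seconds, instrument, parameters).
    score = [
        (0.0, beep, (440.0, 0.5)),
        (0.5, beep, (660.0, 0.5)),
    ]

    def render(score, total=1.5):
        """Mix every score event into a single output buffer."""
        out = [0.0] * int(total * SR)
        for start, instr, params in score:
            offset = int(start * SR)
            for i, s in enumerate(instr(*params)):
                out[offset + i] += s
        return out

    samples = render(score)  # 1.5 s of audio as a list of floats in [-1, 1]

Because the instrument is an arbitrary program rather than a fixed patch, the same score can drive any synthesis method — which is the point the standard makes by specifying a language instead of a synthesizer.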


ISO/IEC
ISO/IEC JTC 1, entitled "Information technology", is a joint technical committee (JTC) of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Its purpose is to develop, maintain and promote standards in the fields of information and communications technology (ICT). JTC 1 has been responsible for many critical IT standards, ranging from the Joint Photographic Experts Group (JPEG) image formats and Moving Picture Experts Group (MPEG) audio and video formats to the C and C++ programming languages.

History

ISO/IEC JTC 1 was formed in 1987 as a merger between ISO/TC 97 (Information Technology) and IEC/TC 83, with IEC/SC 47B joining later. The intent was to bring together, in a single committee, the IT standardization activities of the two parent organizations in order to avoid duplicative or possibly incompatible standards. At the time of its formation, the mandate of JTC 1 was to develop base standards in information t ...


MIT Media Lab
The MIT Media Lab is a research laboratory at the Massachusetts Institute of Technology, which grew out of MIT's Architecture Machine Group in the School of Architecture. Its research is not restricted to fixed academic disciplines, but draws from technology, media, science, art, and design. The Media Lab's research groups include neurobiology, biologically inspired fabrication, socially engaging robots, emotive computing, bionics, and hyperinstruments. The Media Lab was founded in 1985 by Nicholas Negroponte and former MIT President Jerome Wiesner, and is housed in the Wiesner Building (designed by I. M. Pei), also known as Building E15. The Lab has been written about in the popular press since 1988, when Stewart Brand published ''The Media Lab: Inventing the Future at M.I.T.'', and its work was a regular feature of technology journals in the 1990s. In 2009, it expanded into a second building. The Media Lab came under scrutiny in 2019 due to its acceptance of donations fr ...


MIDI
MIDI (Musical Instrument Digital Interface) is a technical standard that describes a communications protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices for playing, editing, and recording music. The specification originates in the paper ''Universal Synthesizer Interface'' published by Dave Smith and Chet Wood of Sequential Circuits at the 1981 Audio Engineering Society conference in New York City. A single MIDI cable can carry up to sixteen channels of MIDI data, each of which can be routed to a separate device. Each interaction with a key, button, knob or slider is converted into a MIDI event, which specifies musical instructions, such as a note's pitch, timing and loudness. One common MIDI application is to play a MIDI keyboard or other controller and use it to trigger a digital sound module (which contains synthesized musical sounds) to generate sounds, wh ...
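
As an illustration of how compact these instructions are, the following Python sketch builds the raw bytes of the two most common channel messages. The 0x90/0x80 status nibbles, the sixteen channels, and the 7-bit data ranges are from the MIDI 1.0 specification; the function names are ours.

    def note_on(channel: int, note: int, velocity: int) -> bytes:
        """Build a 3-byte MIDI Note On message.

        Status byte 0x9n, where n is the channel (0-15); the note number
        and velocity data bytes are 7-bit values (0-127).
        """
        assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
        return bytes([0x90 | channel, note, velocity])

    def note_off(channel: int, note: int, velocity: int = 0) -> bytes:
        """Build a 3-byte MIDI Note Off message (status byte 0x8n)."""
        assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
        return bytes([0x80 | channel, note, velocity])

    # Middle C (note 60) at moderate velocity on channel 0:
    msg = note_on(0, 60, 64)   # b'\x90<@'

Three bytes per note event is why MIDI streams are tiny compared with recorded audio: the receiver, not the message, supplies the actual sound.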


Runtime (program lifecycle phase)
In computer science, runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program. A runtime error is detected after or during the execution (running state) of a program, whereas a compile-time error is detected by the compiler before the program is ever executed. Type checking, register allocation, code generation, and code optimization are typically done at compile time, but may be done at runtime depending on the particular language and compiler. Many other runtime errors exist and are handled differently by different programming languages, such as division by zero errors, domain errors, array subscript out of bounds errors, and several types of arithmetic underflow and overflow errors, as well as many other runtime errors generally considered software bugs which ma ...
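
The distinction is easy to demonstrate in Python, where a syntax error is reported when the source is compiled to bytecode, before anything runs, while errors such as division by zero or an out-of-bounds subscript surface only during execution:

    # A syntax error is caught when the source is compiled to bytecode,
    # before any of it executes:
    try:
        compile("def f(:", "<example>", "exec")
    except SyntaxError as e:
        print("compile time:", e.msg)

    # Runtime errors surface only when the offending statement executes:
    def divide(a, b):
        return a / b          # perfectly valid at compile time

    try:
        divide(1, 0)          # raises only now, during execution
    except ZeroDivisionError as e:
        print("run time:", e)

    try:
        [1, 2, 3][10]         # array subscript out of bounds
    except IndexError as e:
        print("run time:", e)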


Downloadable Sounds
A DLS (Downloadable Sounds) format is any of the standardized file formats for digital musical instrument sound banks (collections of virtual musical instrument programs). The DLS standards also include detailed specifications for how MIDI protocol-controlled music synthesizers should render the instruments in a DLS file. As a result, DLS can also be considered primarily a synthesizer specification and only secondarily a file format. The current DLS standards were developed first by the Interactive Audio Special Interest Group (IASIG), and then by the MIDI Manufacturers Association (MMA). Any future versions of DLS would be developed through the MMA working group process. The DLS specifications are published in English by the MMA and in Japanese by the Association of Musical Electronics Industry (AMEI). The DLS family is closely related to the proprietary SoundFont format from Creative Labs. All versions of DLS to date are based on sample-based synthesis; however, in principle th ...
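
As a rough illustration of the "synthesizer specification" side, the Python sketch below models how a DLS-style renderer might pick a sample for an incoming MIDI note: regions map key ranges to samples, and the chosen sample is repitched relative to the key at which it was recorded. The field and function names are ours, not DLS chunk names, and real DLS articulation (envelopes, LFOs, velocity ranges) is omitted.

    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class Region:
        key_low: int     # lowest MIDI note this region covers
        key_high: int    # highest MIDI note this region covers
        root_key: int    # note at which the sample plays back unaltered
        sample_id: int   # index into the bank's sample pool

    def playback_rate(region: Region, note: int) -> float:
        """Rate multiplier that repitches the sample to the requested note."""
        return 2.0 ** ((note - region.root_key) / 12.0)

    def find_region(regions: list[Region], note: int) -> Region | None:
        """Return the first region whose key range contains the note."""
        for r in regions:
            if r.key_low <= note <= r.key_high:
                return r
        return None

    bank = [Region(0, 63, 60, 0), Region(64, 127, 72, 1)]
    r = find_region(bank, 67)       # the second region
    rate = playback_rate(r, 67)     # ~0.749: the sample is slowed to pitch down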




SoundFont
SoundFont is a brand name that collectively refers to a file format and associated technology that uses sample-based synthesis to play MIDI files. It was first used on the Sound Blaster AWE32 sound card for its General MIDI support. SoundFont is a registered trademark of Creative Technology, Ltd., and the exclusive license for re-formatting and managing historical SoundFont content has been acquired by Digital Sound Factory.

Specification

The newest version of the SoundFont file format is 2.04 (or 2.4). It is based on the RIFF format.

History

The original SoundFont file format was developed in the early 1990s by E-mu Systems and Creative Labs. A specification for this version was never released to the public. The first and only major device to utilize this version was Creative's Sound Blaster AWE32 in 1994. Files in this format conventionally have the file extension of . SoundFont 2.0 was developed in 1996. This file format generalized the data representation usin ...
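
Because SoundFont 2 files are RIFF containers, their top-level structure can be walked with a generic RIFF chunk reader. The Python sketch below parses only the container layer (fourcc tags, little-endian sizes, word alignment), not SoundFont semantics; the file name in the usage comment is hypothetical.

    import struct

    def walk_riff(path):
        """Yield (fourcc, data_offset, size) for each top-level RIFF chunk.

        A RIFF file starts with b'RIFF', a little-endian 32-bit size, and a
        4-byte form type (b'sfbk' for SoundFont 2). Chunks follow as
        fourcc + size + data, padded to even byte boundaries.
        """
        with open(path, "rb") as f:
            magic, riff_size, form = struct.unpack("<4sI4s", f.read(12))
            if magic != b"RIFF":
                raise ValueError("not a RIFF file")
            end = 8 + riff_size                  # size counts bytes after it
            pos = 12                             # first chunk follows the form type
            while pos + 8 <= end:
                f.seek(pos)
                fourcc, size = struct.unpack("<4sI", f.read(8))
                yield fourcc, pos + 8, size
                pos += 8 + size + (size & 1)     # chunks are word-aligned

    # for fourcc, offset, size in walk_riff("example.sf2"):
    #     print(fourcc, size)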


Sample-based Synthesis
Sample-based synthesis is a form of audio synthesis that can be contrasted with either subtractive synthesis or additive synthesis. The principal difference with sample-based synthesis is that the seed waveforms are sampled sounds or instruments instead of fundamental waveforms such as the sine and saw waves used in other types of synthesis.

History

Before digital recording became practical, instruments such as the Welte (1930s), the phonogene (1950s) and the Mellotron (1960s) used analog optical disks or analog tape decks to play back sampled sounds. When sample-based synthesis was first developed, most affordable consumer synthesizers could not record arbitrary samples, but instead formed timbres by combining pre-recorded samples from ROM before routing the result through analog or digital filters. These synthesizers and their more complex descendants are often referred to as ROMplers. Sample-based instruments have been used since the Computer Music Melodian, the Fairlight ...
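
The core mechanic of a ROMpler voice is simple: one stored recording is replayed at a varying rate so a single sample can serve many notes. A minimal Python sketch, using linear interpolation and a rate of 2^(semitones/12); the function name is ours:

    def repitch(sample, rate):
        """Resample a recorded waveform by linear interpolation.

        rate > 1 plays the sample faster (higher pitch, shorter duration);
        rate < 1 plays it slower and lower.
        """
        out = []
        pos = 0.0
        while pos < len(sample) - 1:
            i = int(pos)
            frac = pos - i
            out.append(sample[i] * (1 - frac) + sample[i + 1] * frac)
            pos += rate
        return out

    # A note 3 semitones above the recording's original pitch:
    # shifted = repitch(recording, 2 ** (3 / 12))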


Table-lookup Synthesis
Wavetable synthesis is a sound synthesis technique used to create quasi-periodic waveforms often used in the production of musical tones or notes.

Development

Wavetable synthesis was invented by Max Mathews in 1958 as part of MUSIC II. MUSIC II “had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.” Hal Chamberlin discussed wavetable synthesis in Byte's September 1977 issue. Wolfgang Palm of Palm Products GmbH (PPG) developed his version in the late 1970s and published it in 1979. The technique has since been used as the primary synthesis method in synthesizers built by PPG and Waldorf Music and as an auxiliary synthesis method by Ensoniq and Access. It is currently used in hardware synthesizers from Waldorf Music and in software synthesizers for PCs and tablets, including apps offered by PPG and Waldorf, among others. It was also independently developed by Michael McNabb, who used it in his 1978 ...
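
The technique itself is compact: a phase accumulator steps through one stored cycle of a waveform at a rate proportional to the desired frequency, interpolating between adjacent table entries. A minimal Python sketch (the table size and sample rate are arbitrary choices):

    import math

    SR = 44100
    TABLE_SIZE = 1024

    # One cycle of the stored waveform; any single-cycle shape works.
    table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

    def wavetable_osc(freq, dur):
        """Table-lookup oscillator with a phase accumulator and
        linear interpolation between adjacent table entries."""
        out = []
        phase = 0.0
        step = freq * TABLE_SIZE / SR        # table positions per output sample
        for _ in range(int(dur * SR)):
            i = int(phase)
            frac = phase - i
            a = table[i % TABLE_SIZE]
            b = table[(i + 1) % TABLE_SIZE]
            out.append(a * (1 - frac) + b * frac)
            phase = (phase + step) % TABLE_SIZE
        return out

    tone = wavetable_osc(440.0, 0.25)        # a quarter second of A4

Swapping the table contents changes the timbre without touching the oscillator, which is why early hardware could offer many wave shapes from one simple lookup circuit.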


Barry Vercoe
Barry Lloyd Vercoe (born 1937) is a New Zealand-born computer scientist and composer. He is best known as the inventor of Csound, a music synthesis language with wide usage among computer music composers. SAOL, the underlying language for the MPEG-4 Structured Audio standard, is also historically derived from Csound. Born in Wellington, Vercoe received undergraduate degrees in music (1959) and mathematics (1962) from the University of Auckland before emigrating to the United States. While employed as an assistant professor at the Oberlin Conservatory of Music (1965-1967) and as the Contemporary Music Project's Seattle/Tacoma composer-in-residence (1967-1968), he earned his A.Mus.D. in composition from the University of Michigan (where he studied with Ross Lee Finney) in 1968. Prior to taking these positions, Vercoe supported his doctoral studies by working as a staff statistician at Michigan; it was in this capacity that he first acquired an aptitude for computer programming ...




Eric Scheirer
The given name Eric, Erich, Erikk, Erik, Erick, or Eirik is derived from the Old Norse name ''Eiríkr'' (or ''Eríkr'' in Old East Norse due to monophthongization). The first element, ''ei-'', may be derived from the older Proto-Norse ''*aina(z)'', meaning "one, alone, unique", as in the form ''Æinrikr'' explicitly, but it could also be from ''*aiwa(z)'' "everlasting, eternity", as in the Gothic form ''Euric''. The second element ''-ríkr'' stems either from Proto-Germanic ''*ríks'' "king, ruler" (cf. Gothic ''reiks'') or the therefrom derived ''*ríkijaz'' "kingly, powerful, rich, prince"; from the common Proto-Indo-European root ''*h₃rḗǵs''. The name is thus usually taken to mean "sole ruler, autocrat" or "eternal ruler, ever powerful". ''Eric'' used in the sense of a proper noun meaning "one ruler" may be the origin of ''Eriksgata'', and if so it would have meant "one ruler's journey". The tour was the medieval Swedish king's journey, when newly elected, to s ...


MPEG-4 Part 3
MPEG-4 Part 3 or MPEG-4 Audio (formally ISO/IEC 14496-3) is the third part of the ISO/IEC MPEG-4 international standard developed by the Moving Picture Experts Group. It specifies audio coding methods. The first version of ISO/IEC 14496-3 was published in 1999. MPEG-4 Part 3 consists of a variety of audio coding technologies – from lossy speech coding (HVXC, CELP), general audio coding (AAC, TwinVQ, BSAC), and lossless audio compression (MPEG-4 SLS, Audio Lossless Coding, MPEG-4 DST) to a Text-To-Speech Interface (TTSI), Structured Audio (using SAOL, SASL, and MIDI) and many additional audio synthesis and coding techniques. MPEG-4 Audio does not target a single application such as real-time telephony or high-quality audio compression. It applies to every application that requires the use of advanced sound compression, synthesis, manipulation, or playback. MPEG-4 Audio is a new type of audio standard that integrates numerous different types of audio coding: natural sound and s ...