American Sign Language phonology
Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to, yet dissimilar from, those of oral languages. Although there is a qualitative difference from oral languages in that sign-language phonemes are not based on sound, and are spatial in addition to being temporal, they fulfill the same role as phonemes in oral languages. Three types of signs are distinguished: one-handed signs, symmetric two-handed signs (i.e. signs in which both hands are active and perform the same or a similar action), and asymmetric two-handed signs (i.e. signs in which one hand [the 'dominant' or 'strong' hand] is active and one hand [the 'non-dominant' or 'weak' hand] is held static). The non-dominant hand in asymmetric signs often functions as the location of the sign. Almost all simple signs in ASL are monosyllabic.


Phonemes and features

Signs consist of units smaller than the sign. These are often subdivided into ''parameters'': handshapes with a particular orientation, that may perform some type of movement, in a particular location on the body or in the "signing space", and non-manual signals. These may include movement of the eyebrows, the cheeks, the nose, the head, the torso, and the eyes. Parameter values are often equated with spoken-language phonemes, although sign language phonemes allow more simultaneity in their realization than phonemes in spoken languages. Phonemes in signed languages, as in oral languages, consist of features. For instance, the /B/ and /G/ handshapes are distinguished by the number of selected fingers: [all] versus [one].

Most phonological research focuses on the handshape. A problem in most studies of handshape is the fact that elements of a manual alphabet are often borrowed into signs, although not all of these elements are part of the sign language's phoneme inventory. Also, allophones are sometimes considered separate phonemes. The first inventory of ASL handshapes contained 19 phonemes (or ''cheremes'').

In some phonological models, movement is a phonological prime. Other models consider movement redundant, as it is predictable from the locations, hand orientations, and handshape features at the start and end of a sign. Models in which movement is a prime usually distinguish ''path'' movement (i.e. movement of the hand through space) and ''internal'' movement (i.e. an opening or closing movement of the hand, a hand rotation, or finger wiggling).
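The parameter-and-feature view above can be sketched as a small data structure. This is an illustrative sketch only: the feature names, string values, and the idea of counting parameter differences are assumptions made for this example, not a standard notation for ASL phonology.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Optional

@dataclass(frozen=True)
class Handshape:
    name: str
    selected_fingers: FrozenSet[str]  # the "selected fingers" feature

# /B/ selects all four fingers; /G/ selects only the index finger.
B = Handshape("B", frozenset({"index", "middle", "ring", "pinky"}))
G = Handshape("G", frozenset({"index"}))

@dataclass
class Sign:
    handshape: Handshape
    orientation: str
    location: str
    movement: Optional[str] = None  # for models that treat movement as a prime
    nonmanual: List[str] = field(default_factory=list)

def minimal_pair(a: Sign, b: Sign) -> bool:
    """True if the two signs differ in exactly one manual parameter,
    the signed-language analogue of a spoken-language minimal pair."""
    diffs = sum([a.handshape != b.handshape,
                 a.orientation != b.orientation,
                 a.location != b.location,
                 a.movement != b.movement])
    return diffs == 1
```

Under this encoding, two signs identical except for the /B/-versus-/G/ handshape would count as a minimal pair, just as two spoken words differing in one phoneme do.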


Allophony and assimilation

Each phoneme may have multiple allophones, i.e. different realizations of the same phoneme. For example, in the /B/ handshape, the bending of the selected fingers may vary from straight to bent at the lowest joint, and the position of the thumb may vary from stretched at the side of the hand to folded in the palm of the hand. Allophony may be free, but is also often conditioned by the context of the phoneme. Thus, the /B/ handshape will be flexed in a sign in which the fingertips touch the body, and the thumb will be folded in the palm in signs where the radial side of the hand touches the body or the other hand.

Assimilation of sign phonemes to signs in the context is a common process in ASL. For example, the point of contact for signs like THINK, normally at the forehead, may be articulated at a lower location if the location in the following sign is below the cheek. Other assimilation processes concern the number of selected fingers in a sign, which may adapt to that of the previous or following sign. Also, it has been observed that one-handed signs are articulated with two hands when followed by two-handed signs.
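The context-conditioned allophony described above can be sketched as a simple selection rule. The context labels and allophone labels below are invented for illustration; they are not standard transcription symbols.

```python
def b_allophone(context: str) -> str:
    """Select a contextual realization (allophone) of the /B/ handshape phoneme,
    following the conditioning environments described in the text."""
    if context == "fingertips_touch_body":
        return "B-bent"           # selected fingers flexed at the lowest joint
    if context == "radial_side_contact":
        return "B-thumb-in-palm"  # thumb folded into the palm
    return "B-straight"           # elsewhere; free variation is also possible
```

The point of the sketch is that, as in spoken-language allophony, the choice among realizations is predictable from the phonetic environment rather than contrastive.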


Phonotactics

As yet, little is known about ASL phonotactic constraints (or those in other signed languages). The Symmetry and Dominance Conditions are sometimes assumed to be phonotactic constraints. The Symmetry Condition requires both hands in a symmetric two-handed sign to have the same or a mirrored configuration, orientation, and movement. The Dominance Condition requires that only one hand in a two-handed sign moves if the hands do not have the same handshape specifications, ''and'' that the non-dominant hand has an unmarked handshape. However, since these conditions seem to apply in more and more signed languages as cross-linguistic research increases, it is doubtful whether they should be considered specific to ASL phonotactics.
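The two conditions can be sketched as predicates over a simple record of a two-handed sign. The field names are illustrative assumptions, not a standard machine-readable encoding; the unmarked-handshape set follows the list commonly cited from Battison (1978).

```python
# Handshapes commonly cited as unmarked (Battison 1978): B, A, S, C, O, 1, 5.
UNMARKED_HANDSHAPES = {"B", "A", "S", "C", "O", "1", "5"}

def satisfies_symmetry(sign: dict) -> bool:
    """If both hands move, they must share handshape and movement,
    and be in the same or a mirrored orientation."""
    if not (sign["dominant_moves"] and sign["nondominant_moves"]):
        return True  # the condition only constrains signs where both hands move
    return (sign["dominant_shape"] == sign["nondominant_shape"]
            and sign["dominant_movement"] == sign["nondominant_movement"]
            and sign["orientation_relation"] in {"same", "mirrored"})

def satisfies_dominance(sign: dict) -> bool:
    """If the hands differ in handshape, only the dominant hand may move,
    and the non-dominant hand must bear an unmarked handshape."""
    if sign["dominant_shape"] == sign["nondominant_shape"]:
        return True  # the condition only constrains signs with unlike handshapes
    return (not sign["nondominant_moves"]
            and sign["nondominant_shape"] in UNMARKED_HANDSHAPES)
```

A well-formed symmetric sign (both hands moving with the same handshape and movement) passes both predicates; a sign in which a moving non-dominant hand differs in handshape from the dominant hand violates the Dominance Condition.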


Prosody

ASL conveys prosody through facial expression and upper-body position. Head position, eyebrows, eye gaze, blinks, and mouth positions all convey important linguistic information in sign languages. Some signs have required facial components that distinguish them from other signs. An example of this sort of lexical distinction is the sign translated 'not yet', which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features, it would be interpreted as 'late'. Though there are some non-manual signs that are used for a number of functions, proficient signers have no more difficulty decoding what raised eyebrows mean in a specific context than speakers of English have figuring out what the pitch contour of a sentence in context means. The use of similar facial changes, such as eyebrow height, to convey both prosody and grammatical distinctions is similar to the overlap of prosodic pitch and lexical or grammatical tone in a tone language.

Like most signed languages, ASL has an analogue to speaking loudly and whispering in oral language. "Loud" signs are larger and more separated, sometimes even with one-handed signs being produced with both hands. "Whispered" signs are smaller, off-center, and sometimes (partially) blocked from the sight of unintended onlookers by the signer's body or a piece of clothing. In fast signing, in particular in context, sign movements are smaller and there may be less repetition. Signs occurring at the end of a phrase may show repetition or may be held ("phrase-final lengthening").


Phonological processing in the brain

The brain processes language phonologically by first identifying the smallest units in an utterance, then combining them to make meaning. In spoken language, these smallest units are often referred to as phonemes: the smallest sounds we identify in a spoken word. In sign language, the smallest units are often referred to as the parameters of a sign (i.e. handshape, location, movement, and palm orientation), and we can identify these smallest parts within a produced sign. The cognitive method of phonological processing can be described as segmentation and categorization, in which the brain recognizes the individual parts within the sign and combines them to form meaning. This is similar to how spoken language combines sounds to form syllables and then words. Even though the modalities of these languages differ (spoken vs. signed), the brain still processes them similarly through segmentation and categorization.

Measuring brain activity while a person produces or perceives sign language reveals that the brain processes signs differently from other hand movements, much as it differentiates between spoken words and non-linguistic sounds. More specifically, the brain is able to differentiate actual signs from the transition movements between signs, similarly to how words in spoken language can be identified separately from sounds or breaths that occur between words and carry no linguistic meaning. Multiple studies have revealed enhanced brain activity during the processing of sign language compared to the processing of mere hand movements. For example, during a brain surgery performed on a deaf patient who was still awake, their neural activity was observed and analyzed while they were shown videos in American Sign Language. The results showed that greater brain activity occurred when the person was perceiving actual signs than during the transitions between signs. This means the brain is segmenting the units of the sign and identifying which units combine to form actual meaning.

An observed difference in the location of phonological processing between spoken language and sign language is the activation of areas of the brain specific to auditory versus visual stimuli. Because of the modality difference, cortical regions are stimulated differently depending on the type of language. Spoken language creates sounds, which engage the auditory cortices in the superior temporal lobes. Sign language creates visual stimuli, which engage the occipitotemporal regions. Yet both modes of language still activate many of the same regions known for language processing in the brain. For example, the left superior temporal gyrus is stimulated by language in both spoken and signed forms, even though it was once assumed to be affected only by auditory stimuli. No matter the mode of language being used, whether spoken or signed, the brain processes language by segmenting the smallest phonological units and combining them to make meaning.


References

* Battison, R. (1978). ''Lexical Borrowing in American Sign Language''. Silver Spring, MD: Linstok Press.
* Brentari, D. (1998). ''A Prosodic Model of Sign Language Phonology''. Cambridge, MA: MIT Press.
* Hulst, Harry van der (1993). Units in the analysis of signs. ''Phonology'' 10, 209–241.
* Liddell, Scott K. & Robert E. Johnson (1989). American Sign Language: The phonological base. ''Sign Language Studies'' 64, 197–277.
* Perlmutter, D. (1992). Sonority and syllable structure in American Sign Language. ''Linguistic Inquiry'' 23, 407–442.
* Sandler, W. (1989). ''Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign Language''. Dordrecht: Foris.
* Stokoe, W. (1960). ''Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf'' (1993 reprint ed.). Silver Spring, MD: Linstok Press.
* Van der Kooij, E. (2002). ''Phonological Categories in Sign Language of the Netherlands: The Role of Phonetic Implementation and Iconicity''. PhD thesis, Universiteit Leiden.