Character (computing)

In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language. Examples of characters include letters, numerical digits, common punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process the text. Examples of control characters include carriage return and tab, as well as other instructions to printers or other devices that display or otherwise process text. Characters are typically combined into strings.

Historically, the term ''character'' was used to denote a specific number of contiguous bits. While a character is most commonly assumed to refer to 8 bits (one byte) today, other options like the 6-bit character code were once popular, and the 5-bit Baudot code has been used in the past as well. The term has even been applied to 4 bits with only 16 possible values. All modern systems use a varying-size sequence of these fixed-sized pieces; for instance, UTF-8 uses a varying number of 8-bit code units to define a "code point" and Unicode uses a varying number of ''those'' to define a "character".
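
As an illustration of this layering, the short C sketch below (illustrative only; the byte values in the escape sequences are the standard UTF-8 encodings of the characters named in the comments) prints the individual 8-bit code units that make up a one-byte, a two-byte, and a three-byte code point. Several such code points may in turn combine into what a reader perceives as a single character (see Combining character below).

    #include <stdio.h>

    /* Print each 8-bit code unit (byte) of a UTF-8 encoded string in hex. */
    static void dump_utf8(const char *label, const char *s)
    {
        const unsigned char *p = (const unsigned char *)s;
        printf("%s:", label);
        while (*p) {
            printf(" %02X", (unsigned)*p);
            p++;
        }
        printf("\n");
    }

    int main(void)
    {
        dump_utf8("U+0041 'A' (1 byte)      ", "A");              /* 41       */
        dump_utf8("U+00E9 'e acute' (2 bytes)", "\xC3\xA9");      /* C3 A9    */
        dump_utf8("U+6C34 'water' (3 bytes) ", "\xE6\xB0\xB4");   /* E6 B0 B4 */
        return 0;
    }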


Encoding

Computers and communication equipment represent characters using a character encoding that assigns each character to something that can be stored or transmitted through a network, typically an integer quantity represented by a sequence of digits. Two examples of widely used encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
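
For example, ASCII assigns the integer 65 to 'A' and 48 to '0'. The minimal C sketch below prints these assigned numbers; it assumes an ASCII-compatible execution character set, which is the case on virtually all modern systems.

    #include <stdio.h>

    int main(void)
    {
        /* Character constants evaluate to the integer assigned by the
           encoding; on an ASCII-compatible system these are the ASCII codes. */
        printf("'A' -> %d\n", 'A');   /* 65 */
        printf("'0' -> %d\n", '0');   /* 48 */
        printf("'.' -> %d\n", '.');   /* 46 */
        return 0;
    }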


Terminology

Historically, the term ''character'' has been widely used by industry professionals to refer to an ''encoded character'', often as defined by the programming language or API. Likewise, ''character set'' has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.

With the advent and widespread acceptance of Unicode and bit-agnostic ''coded character sets'', a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines ''character'', or ''abstract character'', as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.

For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (ℵ), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. Nonetheless, in Unicode they are considered the same character, and share the same code point.

The Unicode standard also differentiates between these abstract characters and ''coded characters'' or ''encoded characters'' that have been paired with numeric codes that facilitate their representation in computers.
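
The aleph and water examples can be restated in terms of code points. The C sketch below is illustrative only; the code point values are those assigned by Unicode (U+05D0 HEBREW LETTER ALEF, U+2135 ALEF SYMBOL, U+6C34 for the water logogram), and the C11 ''char32_t'' type is used to hold them.

    #include <stdio.h>
    #include <uchar.h>   /* char32_t (C11) */

    int main(void)
    {
        /* Similar glyphs, but two distinct abstract characters: */
        char32_t hebrew_alef = 0x05D0;   /* א HEBREW LETTER ALEF */
        char32_t math_alef   = 0x2135;   /* ℵ ALEF SYMBOL        */

        /* One character; regional appearance is a matter of the font: */
        char32_t water       = 0x6C34;   /* 水 */

        printf("same character? %s\n",
               hebrew_alef == math_alef ? "yes" : "no");    /* no     */
        printf("water is always U+%04X\n", (unsigned)water); /* U+6C34 */
        return 0;
    }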


Combining character

Combining characters are also addressed by Unicode. For instance, Unicode allocates a code point to each of:

* 'i' (U+0069),
* the combining diaeresis (U+0308), and
* 'ï' (U+00EF).

This makes it possible to code the middle character of the word 'naïve' either as the single character 'ï' or as the combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); this is also rendered as 'ï'. The two representations are considered canonically equivalent by the Unicode standard.
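
In UTF-8, the two encodings are visible as different byte sequences even though they are canonically equivalent. The C sketch below is illustrative; the hexadecimal escapes are the standard UTF-8 byte sequences for U+00EF and for U+0069 followed by U+0308, and byte-level comparison with ''strcmp'' does not perform the Unicode normalization needed to treat the two as equal.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Precomposed: U+00EF LATIN SMALL LETTER I WITH DIAERESIS (C3 AF). */
        const char *precomposed = "na\xC3\xAFve";
        /* Decomposed: U+0069 LATIN SMALL LETTER I
                     + U+0308 COMBINING DIAERESIS (69 CC 88). */
        const char *decomposed  = "nai\xCC\x88ve";

        /* Both render as "naïve", but the byte counts and contents differ. */
        printf("precomposed: %zu bytes\n", strlen(precomposed));   /* 6  */
        printf("decomposed:  %zu bytes\n", strlen(decomposed));    /* 7  */
        printf("byte-equal?  %s\n",
               strcmp(precomposed, decomposed) == 0 ? "yes" : "no"); /* no */
        return 0;
    }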


char

A ''char'' in the C programming language is a data type with the size of exactly one byte, which in turn is defined to be large enough to contain any member of the "basic execution character set". The exact number of bits can be checked via the CHAR_BIT macro. By far the most common size is 8 bits, and the POSIX standard ''requires'' it to be 8 bits. In newer C standards ''char'' is required to hold UTF-8 code units, which requires a minimum size of 8 bits.

A Unicode code point may require as many as 21 bits. This will not fit in a ''char'' on most systems, so more than one is used for some of them, as in the variable-length encoding UTF-8 where each code point takes 1 to 4 bytes. Furthermore, a "character" may require more than one code point (for instance with combining characters), depending on what is meant by the word "character".

The fact that a character was historically stored in a single byte led to the two terms ("char" and "character") being used interchangeably in most documentation. This often makes the documentation confusing or misleading when multibyte encodings such as UTF-8 are used, and has led to inefficient and incorrect implementations of string manipulation functions (such as computing the "length" of a string as a count of bytes rather than characters). Modern POSIX documentation attempts to fix this, defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code, and attempting to use "byte" when referring to char data. However, it still contains errors such as defining an array of ''char'' as a ''character array'' (rather than a ''byte array'').

Unicode can also be stored in strings made up of code units that are larger than ''char''. These are called "wide characters". The original C type was called ''wchar_t''. Because some platforms define ''wchar_t'' as 16 bits and others define it as 32 bits, recent versions of the language have added ''char16_t'' and ''char32_t''. Even then the objects being stored might not be characters; for instance the variable-length UTF-16 is often stored in arrays of ''char16_t''.

Other languages also have a ''char'' type. Some, such as C++, use 8 bits like C. Others, such as Java, use 16 bits for ''char'' in order to represent UTF-16 values.
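
A minimal C sketch of these points follows; it assumes a hosted C11 implementation (for <uchar.h>) and that the ''char'' string literal is interpreted as the UTF-8 bytes written explicitly in the escape sequences.

    #include <stdio.h>
    #include <string.h>
    #include <limits.h>   /* CHAR_BIT */
    #include <uchar.h>    /* char16_t, char32_t (C11) */

    int main(void)
    {
        /* sizeof(char) is 1 by definition; CHAR_BIT gives the bits per byte. */
        printf("sizeof(char) = %zu, CHAR_BIT = %d\n", sizeof(char), CHAR_BIT);

        /* "naïve" is 5 user-perceived characters, but strlen() counts char
           units (bytes), and 'ï' occupies two bytes in UTF-8, so it reports 6. */
        const char *s = "na\xC3\xAFve";
        printf("strlen = %zu\n", strlen(s));   /* 6 */

        /* Fixed-width code units: char16_t holds one UTF-16 code unit
           (code points above U+FFFF need two), char32_t holds any code point. */
        char16_t water_utf16   = 0x6C34;    /* 水 fits in a single UTF-16 unit  */
        char32_t any_codepoint = 0x1F600;   /* 😀 needs a UTF-16 surrogate pair */
        printf("U+%04X, U+%04X\n",
               (unsigned)water_utf16, (unsigned)any_codepoint);
        return 0;
    }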


See also

* Character literal
* Character (symbol)
* Fill character
* Combining character
* Universal Character Set characters
* Homoglyph


References


External links


* Characters: A Brief Introduction, by The Linux Information Project (LINFO)
* ISO/IEC TR 15285:1998, which summarizes the ISO/IEC character model, focusing on terminology definitions and differentiating between characters and glyphs