
In computing and telecommunications, a character is the internal representation of a character (symbol) used within a computer or system.
Examples of characters include letters, numerical digits, punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process the text. Examples of control characters include carriage return and tab, as well as other instructions to printers or other devices that display or otherwise process text.
Characters are typically combined into ''strings''.
Historically, the term ''character'' was used to denote a specific number of contiguous bits. While a character is most commonly assumed to refer to 8 bits (one byte) today, other options like the 6-bit character code were once popular, and the 5-bit Baudot code has been used in the past as well. The term has even been applied to 4 bits, with only 16 possible values. All modern systems use a varying-size sequence of these fixed-sized pieces; for instance, UTF-8 uses a varying number of 8-bit code units to define a "code point", and Unicode uses a varying number of ''those'' to define a "character".
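The varying-size layering described above can be observed directly. The following sketch (Python is used here purely for illustration) shows that UTF-8 uses between one and four 8-bit code units per Unicode code point:

```python
# UTF-8 encodes each Unicode code point as 1 to 4 eight-bit code units.
samples = {
    "A": 1,    # U+0041, ASCII range: one code unit
    "é": 2,    # U+00E9: two code units
    "€": 3,    # U+20AC: three code units
    "😀": 4,   # U+1F600, outside the Basic Multilingual Plane: four code units
}
for ch, expected in samples.items():
    units = ch.encode("utf-8")
    print(f"U+{ord(ch):04X} -> {len(units)} code unit(s)")
    assert len(units) == expected
```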
Encoding
Computers and communication equipment represent characters using a character encoding that assigns each character to something (typically an integer quantity represented by a sequence of digits) that can be stored or transmitted through a network. Two examples of common encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
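As a small sketch of such a mapping, Python's built-in ord and chr expose the integer that Unicode assigns to each character; for the first 128 code points this coincides with ASCII, and UTF-8 stores those characters as a single byte with that same value:

```python
# A character encoding assigns each character an integer.
# ASCII assigns 'A' the number 65; Unicode keeps the same assignment.
assert ord("A") == 65
assert chr(65) == "A"

# The integer is what is actually stored or transmitted; in UTF-8,
# an ASCII character becomes a single byte holding that same value.
assert "A".encode("utf-8") == b"\x41"
print(ord("A"), "A".encode("utf-8"))
```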
Terminology
The dictionary Merriam-Webster defines a "character", in the relevant sense, as "a symbol (such as a letter or number) that represents information; ''also'': a representation of such a symbol that may be accepted by a computer".
Historically, the term ''character'' has been widely used by industry professionals to refer to an ''encoded character'', often as defined by the programming language or API. Likewise, ''character set'' has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.
With the advent and widespread acceptance of Unicode and bit-agnostic ''coded character sets'', a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines ''character'', or ''abstract character'', as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.
For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (ℵ), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.
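Both examples can be checked against the code points Unicode actually assigns; a short Python sketch:

```python
# HEBREW LETTER ALEF and ALEF SYMBOL are distinct characters with
# distinct code points, even though they may be rendered identically.
assert ord("א") == 0x05D0  # HEBREW LETTER ALEF
assert ord("ℵ") == 0x2135  # ALEF SYMBOL, used in mathematics
assert ord("א") != ord("ℵ")

# The logogram for water is a single code point shared by Chinese and
# Japanese text; any visual difference is left to the local typeface.
assert ord("水") == 0x6C34
```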
The Unicode standard also differentiates between these abstract characters and ''coded characters'' or ''encoded characters'' that have been paired with numeric codes that facilitate their representation in computers.
Combining character
Combining characters are also addressed by Unicode. For instance, Unicode allocates a code point to each of
* 'i' (U+0069),
* the combining diaeresis (U+0308), and
* 'ï' (U+00EF).
This makes it possible to code the middle character of the word 'naïve' either as a single character 'ï' or as a combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); this combination is also rendered as 'ï'.
These are considered canonically equivalent by the Unicode standard.
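This canonical equivalence can be sketched with Python's standard unicodedata module: NFC normalization composes the two-code-point sequence into the single character, and NFD decomposes it again.

```python
import unicodedata

precomposed = "\u00EF"  # 'ï' as a single code point
combined = "i\u0308"    # 'i' followed by COMBINING DIAERESIS

# The two spellings differ as code point sequences...
assert precomposed != combined
assert len(precomposed) == 1 and len(combined) == 2

# ...but are canonically equivalent: NFC composes, NFD decomposes.
assert unicodedata.normalize("NFC", combined) == precomposed
assert unicodedata.normalize("NFD", precomposed) == combined
```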
char
A ''char'' in the C programming language is a data type with the size of exactly one byte, which in turn is defined to be large enough to contain any member of the "basic execution character set". The exact number of bits can be checked via the CHAR_BIT macro. By far the most common size is 8 bits, and the POSIX standard ''requires'' it to be 8 bits. In newer C standards ''char'' is required to hold UTF-8 code units, which requires a minimum size of 8 bits.
A Unicode code point may require as many as 21 bits. This will not fit in a ''char'' on most systems, so more than one is used for some of them, as in the variable-length encoding UTF-8, where each code point takes 1 to 4 bytes. Furthermore, a "character" may require more than one code point (for instance with combining characters), depending on what is meant by the word "character".
The fact that a character was historically stored in a single byte led to the two terms ("char" and "character") being used interchangeably in most documentation. This often makes the documentation confusing or misleading when multibyte encodings such as UTF-8 are used, and has led to inefficient and incorrect implementations of string manipulation functions (such as computing the "length" of a string as a count of code units rather than bytes). Modern POSIX documentation attempts to fix this, defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code, and attempts to use "byte" when referring to char data. However, it still contains errors such as defining an array of ''char'' as a ''character array'' (rather than a ''byte array'').
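The length confusion is easy to reproduce: the word 'naïve' is five characters long, but six bytes long in UTF-8, so a byte-counting "length" function (such as C's strlen applied to UTF-8 data) overcounts. A Python sketch of the discrepancy:

```python
word = "naïve"

# Five characters (code points)...
assert len(word) == 5
# ...but six bytes once encoded in UTF-8, since 'ï' takes two bytes.
assert len(word.encode("utf-8")) == 6
```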
Unicode can also be stored in strings made up of code units that are larger than ''char''. These are called "wide characters". The original C type was called ''wchar_t''. Due to some platforms defining ''wchar_t'' as 16 bits and others defining it as 32 bits, recent versions have added ''char16_t'' and ''char32_t''. Even then the objects being stored might not be characters; for instance the variable-length UTF-16 is often stored in arrays of ''char16_t''.
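The UTF-16 point can be illustrated the same way: a code point outside the Basic Multilingual Plane occupies two 16-bit code units (a surrogate pair), so elements of a ''char16_t'' array do not correspond one-to-one with characters. A Python sketch of the code-unit counts (the helper name is ours):

```python
def utf16_units(s: str) -> int:
    """Number of 16-bit code units needed to store s in UTF-16."""
    return len(s.encode("utf-16-be")) // 2

assert utf16_units("A") == 1    # BMP character: one 16-bit unit
assert utf16_units("😀") == 2   # U+1F600: a surrogate pair
```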
Other languages also have a ''char'' type. Some, such as C++, use at least 8 bits like C. Others, such as Java, use 16 bits for ''char'' in order to represent UTF-16 values.
See also
* Character literal
* Character (symbol)
* Fill character
* Combining character
* Universal Character Set characters
* Homoglyph
References
External links
* Characters: A Brief Introduction, by The Linux Information Project (LINFO)
* ISO/IEC TR 15285:1998, which summarizes the ISO/IEC's character model, focusing on terminology definitions and differentiating between characters and glyphs
{{Authority control
Character encoding
Data types
Digital typography
Primitive types