This article compares Unicode encodings in two types of environments: 8-bit clean environments, and environments that forbid the use of byte values with the high bit set. Originally, such prohibitions allowed for links that used only seven data bits, but they remain in some standards, so some standard-conforming software must generate messages that comply with the restrictions. The Standard Compression Scheme for Unicode (SCSU) and the Binary Ordered Compression for Unicode (BOCU-1) are excluded from the comparison tables because it is difficult to simply quantify their size.


Compatibility issues

A UTF-8 file that contains only ASCII characters is identical to an ASCII file. Legacy programs can generally handle UTF-8-encoded files, even if they contain non-ASCII characters. For instance, the C printf function can print a UTF-8 string because it only looks for the ASCII '%' character to define a formatting string; all other bytes are printed unchanged. UTF-16 and UTF-32 are incompatible with ASCII files, and thus require Unicode-aware programs to display, print, and manipulate them, even if the file is known to contain only characters in the ASCII subset. Because they contain many zero bytes, character strings in these encodings cannot be manipulated by common null-terminated string handling logic. The prevalence of such string handling means that, even on UTF-16 systems such as Windows and Java, UTF-16 text files are not commonly used. Rather, older 8-bit encodings such as ASCII or ISO-8859-1 are still used, forgoing Unicode support entirely, or UTF-8 is used for Unicode. One rare counter-example is the "strings" file introduced in Mac OS X 10.3 Panther, which is used by applications to look up internationalized versions of messages. By default, this file is encoded in UTF-16, with "files encoded using UTF-8 ... not guaranteed to work."
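
A minimal Python sketch (illustrative, not from the original article) of both points: pure-ASCII text is byte-identical in UTF-8, while UTF-16 interleaves the zero bytes that defeat null-terminated string handling:

    text = "Hello"
    # Pure-ASCII text round-trips byte-for-byte through UTF-8.
    assert text.encode("utf-8") == text.encode("ascii")

    encoded = text.encode("utf-16-le")
    print(encoded)                       # b'H\x00e\x00l\x00l\x00o\x00'
    # A C-style null-terminated reader would stop at the first zero byte:
    print(encoded.split(b"\x00", 1)[0])  # b'H'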
XML is conventionally encoded as UTF-8, and all XML processors must at least support UTF-8 and UTF-16.


Efficiency

UTF-8 requires 8, 16, 24 or 32 bits (one to four bytes) to encode a Unicode character, UTF-16 requires either 16 or 32 bits to encode a character, and UTF-32 always requires 32 bits to encode a character. The first 128 Unicode code points, U+0000 to U+007F, which are used for the C0 Controls and Basic Latin characters and which correspond to ASCII, are encoded using 8 bits in UTF-8, 16 bits in UTF-16, and 32 bits in UTF-32. The next 1,920 characters, U+0080 to U+07FF, represent the rest of the characters used by almost all Latin-script alphabets as well as Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac, Thaana and N'Ko. Characters in this range require 16 bits to encode in both UTF-8 and UTF-16, and 32 bits in UTF-32. For U+0800 to U+FFFF, the remaining code points of the Basic Multilingual Plane, capable of representing the rest of the characters of most of the world's living languages, UTF-8 needs 24 bits to encode a character while UTF-16 needs 16 bits and UTF-32 needs 32. Code points U+010000 to U+10FFFF, which represent characters in the supplementary planes, require 32 bits in UTF-8, UTF-16 and UTF-32 alike.

A file is shorter in UTF-8 than in UTF-16 if there are more ASCII code points than there are code points in the range U+0800 to U+FFFF. Advocates of UTF-8 as the preferred form argue that real-world documents written in languages that use characters only in the high range are still often shorter in UTF-8 due to the extensive use of spaces, digits, punctuation, newlines, HTML markup, and embedded words and acronyms written with Latin letters. UTF-32, by contrast, is always longer unless there are no code points less than U+10000. All printable characters in UTF-EBCDIC use at least as many bytes as in UTF-8, and most use more, due to a decision made to allow encoding the C1 control codes as single bytes. For seven-bit environments, UTF-7 is more space-efficient than the combination of other Unicode encodings with quoted-printable or base64 for almost all types of text (see "Seven-bit environments" below).
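
These per-range figures can be checked directly; a short Python sketch (illustrative only; the "-le" codecs are used so that no byte-order mark is counted):

    # One sample code point per range discussed above: A, α, 中, 😀
    for cp in (0x0041, 0x03B1, 0x4E2D, 0x1F600):
        ch = chr(cp)
        sizes = {enc: len(ch.encode(enc))
                 for enc in ("utf-8", "utf-16-le", "utf-32-le")}
        print(f"U+{cp:04X}: {sizes}")
    # Prints 1/2/4, 2/2/4, 3/2/4 and 4/4/4 bytes respectively.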


Processing time

Text in a variable-length encoding such as UTF-8 or UTF-16 is harder to process if there is a need to work with individual code points, as opposed to sequences of code units, since a code point must first be decoded from one or more code units. Searching is unaffected by whether the characters are variably sized, since a search for a sequence of code units does not care about the divisions; it does, however, require that the encoding be self-synchronizing, which both UTF-8 and UTF-16 are. A common misconception is that there is a need to "find the ''n''th character" and that this requires a fixed-length encoding; however, in real use the number ''n'' is only derived from examining the characters, so sequential access is needed anyway. Efficiently using character sequences in one endian order loaded onto a machine with a different endian order requires extra processing: characters may either be converted before use or processed with two distinct systems. Byte-based encodings such as UTF-8 do not have this problem. UTF-16BE and UTF-32BE are big-endian; UTF-16LE and UTF-32LE are little-endian.
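
For example, because UTF-8 is self-synchronizing, a plain byte-level search locates an encoded substring without any decoding (a Python sketch, illustrative only):

    # One character's UTF-8 encoding can never appear inside another's,
    # so searching the raw bytes cannot produce a false match.
    haystack = "naïve café".encode("utf-8")
    needle = "café".encode("utf-8")
    print(haystack.find(needle))   # byte offset of the real match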


Processing issues

For processing, a format should be easy to search, truncate, and generally process safely. All normal Unicode encodings use some form of fixed-size code unit. Depending on the format and the code point to be encoded, one or more of these code units will represent a Unicode code point. To allow easy searching and truncation, a sequence must not occur within a longer sequence or across the boundary of two other sequences. UTF-8, UTF-16, UTF-32 and UTF-EBCDIC have these important properties, but UTF-7 and GB 18030 do not.

Fixed-size characters can be helpful, but even if there is a fixed byte count per code point (as in UTF-32), there is not a fixed byte count per displayed character due to combining characters. Considering these incompatibilities and other quirks among different encoding schemes, handling Unicode data with the same (or compatible) protocol throughout and across the interfaces (e.g. using an API/library, handling Unicode characters in a client/server model, etc.) can in general simplify the whole pipeline while eliminating a potential source of bugs.

UTF-16 is popular because many APIs date to the time when Unicode was 16-bit fixed width (referred to as UCS-2). However, using UTF-16 makes characters outside the Basic Multilingual Plane a special case, which increases the risk of oversights related to their handling. That said, programs that mishandle surrogate pairs probably also have problems with combining sequences, so using UTF-32 is unlikely to solve the more general problem of poor handling of multi-code-unit characters.

If any stored data is in UTF-8 (such as file contents or names), it is very difficult to write a system that uses UTF-16 or UTF-32 as an API. This is due to the oft-overlooked fact that the byte array used by UTF-8 can physically contain invalid sequences. For instance, it is impossible to fix an invalid UTF-8 filename using a UTF-16 API, as no possible UTF-16 string will translate to that invalid filename. The opposite is not true: it is trivial to translate invalid UTF-16 to a unique (though technically invalid) UTF-8 string, so a UTF-8 API can handle both UTF-8 and UTF-16 files and names, making UTF-8 preferred in any such mixed environment. An unfortunate but far more common workaround used by UTF-16 systems is to interpret the UTF-8 as some other encoding such as CP-1252 and ignore the mojibake for any non-ASCII data.
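
Both hazards can be demonstrated briefly in Python (a sketch; the surrogateescape error handler is one concrete instance of the "translate invalid bytes to a technically invalid string" trick described above):

    # One supplementary-plane code point becomes two UTF-16 code units
    # (a surrogate pair) -- the special case that invites oversights.
    clef = "\U0001D11E"                        # MUSICAL SYMBOL G CLEF
    print(len(clef.encode("utf-16-le")) // 2)  # 2 code units, 1 code point

    # An invalid UTF-8 filename survives a round trip only if the API
    # has an escape hatch for ill-formed sequences.
    bad = b"caf\xc3"                           # truncated UTF-8 sequence
    name = bad.decode("utf-8", "surrogateescape")
    assert name.encode("utf-8", "surrogateescape") == bad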


For communication and storage

UTF-16 and UTF-32 do not have endianness defined, so a byte order must be selected when receiving them over a byte-oriented network or reading them from byte-oriented storage. This may be achieved by using a byte-order mark at the start of the text or by assuming big-endian (RFC 2781). UTF-8, UTF-16BE, UTF-32BE, UTF-16LE and UTF-32LE are standardised on a single byte order and do not have this problem.

If the byte stream is subject to corruption, some encodings recover better than others. UTF-8 and UTF-EBCDIC are best in this regard, as they can always resynchronize at the start of the next code point after a corrupt or missing byte; GB 18030 is unable to recover until the next ASCII non-number. UTF-16 can handle ''altered'' bytes, but not an odd number of ''missing'' bytes, which will garble all the following text (though it will produce uncommon and/or unassigned characters). If individual ''bits'' can be lost, all encodings will garble the following text, though UTF-8 can be resynchronized, as incorrect byte boundaries produce invalid UTF-8 in almost all text longer than a few bytes.
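
A sketch of the byte-order decision in Python (the no-BOM fallback follows the RFC 2781 recommendation mentioned above; decode_utf16 is a hypothetical helper, not a standard function):

    import codecs

    def decode_utf16(data: bytes) -> str:
        # Honour a byte-order mark if present, else assume big-endian.
        if data.startswith(codecs.BOM_UTF16_LE):
            return data[len(codecs.BOM_UTF16_LE):].decode("utf-16-le")
        if data.startswith(codecs.BOM_UTF16_BE):
            return data[len(codecs.BOM_UTF16_BE):].decode("utf-16-be")
        return data.decode("utf-16-be")

    print(decode_utf16(codecs.BOM_UTF16_LE + "hi".encode("utf-16-le")))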


In detail

The tables below list the number of bytes per code point for different Unicode ranges. Any additional comments needed are included in the tables. The figures assume that overheads at the start and end of the block of text are negligible.
N.B. The tables below list numbers of bytes per ''code point'', ''not'' per user-visible "character" (or "grapheme cluster"). It can take multiple code points to describe a single grapheme cluster, so even in UTF-32, care must be taken when splitting or concatenating strings.


Eight-bit environments
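
The figures for UTF-8, UTF-16 and UTF-32 follow directly from the ranges described under "Efficiency" above:

    Code point range        UTF-8   UTF-16   UTF-32
    U+0000 – U+007F           1       2        4
    U+0080 – U+07FF           2       2        4
    U+0800 – U+FFFF           3       2        4
    U+010000 – U+10FFFF       4       4        4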


Seven-bit environments

This table may not cover every special case and so should be used for estimation and comparison only; to accurately determine the size of text in an encoding, see the actual specifications. Endianness does not affect sizes (UTF-16BE and UTF-32BE have the same size as UTF-16LE and UTF-32LE, respectively). The use of UTF-32 under quoted-printable is highly impractical, but if implemented will result in 8 to 12 bytes per code point (about 10 bytes on average); in particular, for the BMP each code point will occupy exactly 6 bytes more than the same code point in quoted-printable/UTF-16. Base64/UTF-32 takes 5⅓ bytes for ''any'' code point. An ASCII control character under quoted-printable or UTF-7 may be represented either directly or encoded (escaped). The need to escape a given control character depends on many circumstances, but newlines in text data are usually coded directly.
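
The fixed six-byte surplus of quoted-printable/UTF-32 over quoted-printable/UTF-16 for BMP code points comes from escaping the two always-zero high bytes. A rough Python estimator (the qp_len helper is hypothetical and ignores quoted-printable's special cases for spaces, tabs and line breaks):

    def qp_len(data: bytes) -> int:
        # 1 byte for printable ASCII that passes through, 3 bytes ("=XX")
        # for every byte quoted-printable must escape ('=' included).
        return sum(1 if 33 <= b <= 126 and b != 61 else 3 for b in data)

    for ch in "A\u4e2d":   # one ASCII and one CJK code point, both in the BMP
        u16 = qp_len(ch.encode("utf-16-be"))
        u32 = qp_len(ch.encode("utf-32-be"))
        print(f"U+{ord(ch):04X}: QP/UTF-16={u16}, QP/UTF-32={u32}")  # gap is 6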


Compression schemes

BOCU-1 and SCSU are two ways to compress Unicode data. Their encoding relies on the fact that most runs of text use code points from the same script, for example Latin, Cyrillic or Greek. This normal use allows many runs of text to compress down to about 1 byte per code point. These stateful encodings make it more difficult to randomly access text at an arbitrary position in a string. The two compression schemes are not as efficient as general-purpose compression schemes such as zip or bzip2, which can compress longer runs of bytes to just a few bytes. SCSU and BOCU-1 will not compress text to less than the theoretical 25% of its size encoded as UTF-8, UTF-16 or UTF-32, whereas general-purpose compression schemes can easily compress to 10% of the original text size. The general-purpose schemes, however, require more complicated algorithms and longer chunks of text for a good compression ratio. Unicode Technical Note #14 contains a more detailed comparison of compression schemes.
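
As a rough illustration of the last point, in Python (exact figures depend on the text and the compressor):

    import zlib

    # A general-purpose compressor largely erases the size differences
    # between the raw encodings on longer runs of text.
    text = "Пример текста на русском языке. " * 100
    for enc in ("utf-8", "utf-16-le", "utf-32-le"):
        raw = text.encode(enc)
        print(enc, len(raw), "->", len(zlib.compress(raw, 9)))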


Historical: UTF-5 and UTF-6

Proposals have been made for a UTF-5 and UTF-6 for the internationalization of domain names (IDN). The UTF-5 proposal used a base 32 encoding, where Punycode is (among other things, and not exactly) a base 36 encoding. The name ''UTF-5'' for a code unit of 5 bits is explained by the equation 2⁵ = 32. The UTF-6 proposal added a running-length encoding to UTF-5; here ''6'' simply stands for ''UTF-5 plus 1''. The IETF IDN WG later adopted the more efficient Punycode for this purpose.


Not being seriously pursued

UTF-1 never gained serious acceptance; UTF-8 is much more frequently used. The nonet encodings UTF-9 and UTF-18 are April Fools' Day RFC joke specifications, although UTF-9 is a functioning nonet Unicode transformation format, and UTF-18 is a functioning nonet encoding for all non-Private-Use code points in Unicode 12 and below (though not for the Supplementary Private Use Areas or portions of Unicode 13 and later).

