Han unification

Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters. Han characters are a feature shared in common by written Chinese (hanzi), Japanese (kanji), Korean (hanja) and Vietnamese (chữ Hán). Modern Chinese, Japanese and Korean typefaces typically use regional or historical variants of a given Han character. In the formulation of Unicode, an attempt was made to unify these variants by considering them different glyphs representing the same "grapheme", or orthographic unit, hence "Han unification", with the resulting character repertoire sometimes contracted to Unihan. Nevertheless, many characters have regional variants assigned to different code points, such as Traditional 個 (U+500B) versus Simplified 个 (U+4E2A).

Unihan can also refer to the Unihan Database maintained by the Unicode Consortium, which provides information about all of the unified Han characters encoded in the Unicode Standard, including mappings to various national and industry standards, indices into standard dictionaries, encoded variants, pronunciations in various languages, and an English definition. The database is available to the public as text files and via an interactive website. The latter also includes representative glyphs and definitions for compound words drawn from the free Japanese EDICT and Chinese CEDICT dictionary projects (which are provided for convenience and are not a formal part of the Unicode Standard).
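The distinction between unified and non-unified characters can be observed directly in any Unicode-aware language. A minimal Python sketch, using only the standard-library unicodedata module:

```python
import unicodedata

# Regional variants that were NOT unified receive distinct code points.
trad = "\u500B"  # 個, the Traditional Chinese form
simp = "\u4E2A"  # 个, the Simplified Chinese counterpart

# Unified Han ideographs carry algorithmic names keyed to the code point.
print(unicodedata.name(trad))  # CJK UNIFIED IDEOGRAPH-500B
print(unicodedata.name(simp))  # CJK UNIFIED IDEOGRAPH-4E2A
print(trad == simp)            # False: two separate abstract characters
```

Characters that were unified, by contrast, occupy a single code point regardless of which CJK language the surrounding text is in; only the font decides which regional glyph appears.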


Rationale and controversy

The Unicode Standard details the principles of Han unification. The Ideographic Research Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process. One rationale was the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideograms may approach or exceed 100,000 characters. Version 1 of Unicode was designed to fit into 16 bits, and only 20,940 characters (32%) out of the possible 65,536 were reserved for these CJK Unified Ideographs. Unicode was later extended to 21 bits, allowing many more CJK characters (97,058 are assigned, with room for more).

The article ''The secret life of Unicode'', located on IBM DeveloperWorks, attempts to illustrate part of the motivation for Han unification. In fact, the three ideographs for "one" are encoded separately in Unicode, as they are not considered national variants: the first is the common form in all three countries, while the second and third are used on financial instruments to prevent tampering (they may be considered variants). However, Han unification has also caused considerable controversy, particularly among the Japanese public, who, with the nation's literati, have a history of protesting the culling of historically and culturally significant variants. (Today, the list of characters officially recognized for use in proper names continues to expand at a modest pace.) In 1993, the Japan Electronic Industries Development Association (JEIDA) published a pamphlet whose title translates as "We are feeling anxious about the future character encoding system", summarizing major criticism of the Han unification approach adopted by Unicode.
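The code-space arithmetic behind the original 16-bit rationale is easy to check. A short Python sketch showing that CJK Extension B characters lie beyond the original 16-bit space and therefore cost a surrogate pair in UTF-16:

```python
# Unicode's code space now runs to U+10FFFF (21 bits). CJK Extension B
# and later blocks live outside the original 16-bit Basic Multilingual
# Plane, so UTF-16 must spend a surrogate pair (4 bytes) on them.
bmp_char = "\u4E00"        # CJK ideograph inside the BMP
ext_b_char = "\U00020000"  # first code point of CJK Extension B (plane 2)

print(len(bmp_char.encode("utf-16-be")))    # 2 bytes: one 16-bit unit
print(len(ext_b_char.encode("utf-16-be")))  # 4 bytes: surrogate pair
print(hex(ord(ext_b_char)))                 # 0x20000, beyond 16 bits
```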


Graphemes versus glyphs

A grapheme is the smallest abstract unit of meaning in a writing system. Any grapheme has many possible glyph expressions, but all are recognized as the same grapheme by those with reading and writing knowledge of a particular writing system. Although Unicode typically assigns characters to code points to express the graphemes within a system of writing, the Unicode Standard (section 3.4, definition D7) does caution:
An abstract character does not necessarily correspond to what a user thinks of as a "character" and should not be confused with a ''grapheme''.
However, this quote refers to the fact that some graphemes are composed of several characters. So, for example, the character "a" combined with the combining ring above (U+030A), i.e. the combination "å", might be understood by a user as a single grapheme while being composed of multiple Unicode abstract characters. In addition, Unicode also assigns some code points to a small number (other than for compatibility reasons) of formatting characters, whitespace characters, and other abstract characters that are not graphemes, but are instead used to control the breaks between lines, words, graphemes and grapheme clusters.

With the unified Han ideographs, the Unicode Standard makes a departure from prior practices in assigning abstract characters not as graphemes, but according to the underlying meaning of the grapheme: what linguists sometimes call sememes. This departure therefore is not simply explained by the oft-quoted distinction between an abstract character and a glyph, but is more rooted in the difference between an abstract character assigned as a grapheme and an abstract character assigned as a sememe. In contrast, consider ASCII's unification of punctuation and diacritics, where graphemes with widely different meanings (for example, an apostrophe and a single quotation mark) are unified because the glyphs are the same. For Unihan, the characters are not unified by their appearance, but by their definition or meaning.

For a grapheme to be represented by various glyphs means that the grapheme has glyph variations that are usually determined by selecting one font or another, or by using glyph-substitution features where multiple glyphs are included in a single font. Such glyph variations are considered by Unicode a feature of rich-text protocols, not properly handled by the plain-text goals of Unicode. However, when the change from one glyph to another constitutes a change from one grapheme to another (where a glyph cannot possibly still, for example, mean the same grapheme understood as the small letter "a"), Unicode separates those into separate code points. For Unihan the same thing is done whenever the abstract meaning changes; however, rather than speaking of the abstract meaning of a grapheme (the letter "a"), the unification of Han ideographs assigns a new code point for each different meaning, even if that meaning is expressed by distinct graphemes in different languages. Although a grapheme such as "ö" might mean something different in English (as used in the word "coördinated") than it does in German, it is still the same grapheme and can be easily unified so that English and German can share a common abstract Latin writing system (along with Latin itself).

This example also points to another reason that "abstract character" and grapheme as an abstract unit in a written language do not necessarily map one-to-one. In English the combining diaeresis, "¨", and the "o" it modifies may be seen as two separate graphemes, whereas in languages such as Swedish the letter "ö" may be seen as a single grapheme. Similarly, in English the dot on an "i" is understood as a part of the "i" grapheme, whereas in other languages, such as Turkish, the dot may be seen as a separate grapheme added to the dotless "ı".
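The composed-versus-precomposed point above can be demonstrated directly. A minimal Python sketch using the standard unicodedata module:

```python
import unicodedata

# "å" as one precomposed character vs. "a" + combining ring above:
precomposed = "\u00E5"   # å, a single code point
combining   = "a\u030A"  # "a" followed by U+030A COMBINING RING ABOVE

print(len(precomposed), len(combining))  # 1 2

# Both sequences encode the same grapheme; Unicode normalization
# converts between them, so they compare equal after normalizing.
print(unicodedata.normalize("NFC", combining) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == combining)  # True
```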
To deal with the use of different graphemes for the same Unihan sememe, Unicode has relied on several mechanisms, especially as they relate to rendering text. One has been to treat it as simply a font issue, so that different fonts might be used to render Chinese, Japanese or Korean. Also, font formats such as OpenType allow for the mapping of alternate glyphs according to language, so that a text-rendering system can look to the user's environment settings to determine which glyph to use. The problem with these approaches is that they fail to meet the goals of Unicode to define a consistent way of encoding multilingual text. So rather than treat the issue as a rich-text problem of glyph alternates, Unicode added the concept of variation selectors, first introduced in version 3.2 and supplemented in version 4.0. While variation selectors are treated as combining characters, they have no associated diacritic or mark. Instead, by combining with a base character, they signal that the two-character sequence selects a variation (typically in terms of grapheme, but also in terms of underlying meaning, as in the case of a location name or other proper noun) of the base character. This then is not a selection of an alternate glyph, but the selection of a grapheme variation or a variation of the base abstract character. Such a two-character sequence, however, can be easily mapped to a separate single glyph in modern fonts. Since Unicode has assigned 256 separate variation selectors, it is capable of assigning 256 variations for any Han ideograph. Such variations can be specific to one language or another and enable the encoding of plain text that includes such grapheme variations.
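Such a sequence is ordinary plain text: a base ideograph followed by a selector code point. The sketch below builds one in Python; the specific pairing of U+8FBB (辻) with VS17, a sequence registered in the Adobe-Japan1 collection of the Ideographic Variation Database, is cited here only as an illustrative assumption.

```python
# A variation sequence is two code points: a base Han ideograph plus one
# of the 256 selectors (U+FE00..U+FE0F and U+E0100..U+E01EF).
base = "\u8FBB"       # 辻 (example base character, assumed for illustration)
vs17 = "\U000E0100"   # VARIATION SELECTOR-17

sequence = base + vs17
print(len(sequence))          # 2 code points, yet a font with the
                              # registered sequence renders one glyph
print(hex(ord(sequence[1])))  # 0xe0100

# A renderer that lacks the sequence simply falls back to the base glyph,
# so the text degrades gracefully rather than becoming unreadable.
```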


Unihan "abstract characters"

Since the Unihan standard encodes "abstract characters", not "glyphs", the graphical artifacts produced by Unicode have been considered temporary technical hurdles, and at most, cosmetic. However, again, particularly in Japan, due in part to the way in which Chinese characters were incorporated into Japanese writing systems historically, the inability to specify a particular variant was considered a significant obstacle to the use of Unicode in scholarly work. For example, the unification of "grass" (explained above) means that a historical text cannot be encoded so as to preserve its peculiar orthography. Instead, for example, the scholar would be required to locate the desired glyph in a specific typeface in order to convey the text as written, defeating the purpose of a unified character set. Unicode has responded to these needs by assigning variation selectors so that authors can select grapheme variations of particular ideographs (or even other characters).

Small differences in graphical representation are also problematic when they affect legibility or belong to the wrong cultural tradition. Besides making some Unicode fonts unusable for texts involving multiple "Unihan languages", names or other orthographically sensitive terminology might be displayed incorrectly. (Proper names tend to be especially orthographically conservative; compare this to changing the spelling of one's name to suit a language reform in the US or UK.) While this may be considered primarily a graphical representation or rendering problem to be overcome by more artful fonts, the widespread use of Unicode would make it difficult to preserve such distinctions. The problem of one character representing semantically different concepts is also present in the Latin part of Unicode. The Unicode character for an apostrophe is the same as the character for a right single quote (’). On the other hand, the capital Latin letter A is not unified with the Greek letter Α or the Cyrillic letter А. This is, of course, desirable for reasons of compatibility, and deals with a much smaller alphabetic character set.

While the unification aspect of Unicode is controversial in some quarters for the reasons given above, Unicode itself does now encode a vast number of seldom-used characters of a more-or-less antiquarian nature. Some of the controversy stems from the fact that the very decision to perform Han unification was made by the initial Unicode Consortium, which at the time was a consortium of North American companies and organizations (most of them in California) but included no East Asian government representatives. The initial design goal was to create a 16-bit standard, and Han unification was therefore a critical step for avoiding tens of thousands of character duplications. This 16-bit requirement was later abandoned, making the size of the character set less of an issue today. The controversy later extended to the internationally representative ISO: the initial CJK Joint Research Group (CJK-JRG) favored a proposal (DIS 10646) for a non-unified character set, "which was thrown out in favor of unification with the Unicode Consortium's unified character set by the votes of American and European ISO members" (even though the Japanese position was unclear). Endorsing the Unicode Han unification was a necessary step for the heated ISO 10646/Unicode merger.

Much of the controversy surrounding Han unification is based on the distinction between glyphs, as defined in Unicode, and the related but distinct idea of graphemes. Unicode assigns abstract characters (graphemes), as opposed to glyphs, which are particular visual representations of a character in a specific typeface. One character may be represented by many distinct glyphs, for example a "g" or an "a", both of which may be written with one loop or with two. Yet for a reader of Latin-script-based languages the two variations of the "a" character are both recognized as the same grapheme. Graphemes present in national character-code standards have been added to Unicode, as required by Unicode's Source Separation rule, even where they can be composed of characters already available. The national character-code standards existing in CJK languages are considerably more involved, given the technological limitations under which they evolved, and so the official CJK participants in Han unification may well have been amenable to reform.

Unlike European versions, CJK Unicode fonts, due to Han unification, have large but irregular patterns of overlap, requiring language-specific fonts. Unfortunately, language-specific fonts also make it difficult to access a variant which, as with the "grass" example, happens to appear more typically in another language style. (That is to say, it would be difficult to access "grass" with the four-stroke radical more typical of Traditional Chinese in a Japanese environment, whose fonts would typically depict the three-stroke radical.) Unihan proponents tend to favor markup languages for defining language strings, but this would not ensure the use of a specific variant in the case given, only the language-specific font more likely to depict a character as that variant. (At this point, merely stylistic differences do enter in, as a selection of Japanese and Chinese fonts are not likely to be visually compatible.)

Chinese users seem to have fewer objections to Han unification, largely because Unicode did not attempt to unify Simplified Chinese characters with Traditional Chinese characters. (Simplified Chinese characters are used among Chinese speakers in the People's Republic of China, Singapore, and Malaysia. Traditional Chinese characters are used in Hong Kong and Taiwan (Big5), and they are, with some differences, more familiar to Korean and Japanese users.) Unicode is seen as neutral with regard to this politically charged issue, and has encoded Simplified and Traditional Chinese glyphs separately (e.g. the ideograph for "discard" is 丟, U+4E1F, for Traditional Chinese (Big5 #A5E1) and 丢, U+4E22, for Simplified Chinese (GB #2210)). It is also noted that Traditional and Simplified characters should be encoded separately according to Unicode Han unification rules, because they are distinguished in pre-existing PRC character sets. Furthermore, as with other variants, the relationship between Traditional and Simplified characters is not one-to-one.
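Both the apostrophe unification and the non-unification of the look-alike capitals can be seen in the code points themselves. A small Python sketch:

```python
# One code point serves two graphemes with different meanings:
# U+2019 RIGHT SINGLE QUOTATION MARK is also Unicode's preferred apostrophe.
apostrophe = "\u2019"
print(hex(ord(apostrophe)))  # 0x2019

# ...while three visually identical capitals are deliberately kept apart,
# so Latin, Greek and Cyrillic text round-trips without ambiguity.
latin_A, greek_A, cyrillic_A = "A", "\u0391", "\u0410"
print([hex(ord(c)) for c in (latin_A, greek_A, cyrillic_A)])
# ['0x41', '0x391', '0x410']
```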


Alternatives

There are several alternative character sets that do not encode according to the principle of Han unification, and are thus free from its restrictions:

* CNS character set
* CCCII character set
* TRON
* ''Mojikyō''

These region-dependent character sets are also seen as unaffected by Han unification because of their region-specific nature:

* ISO/IEC 2022 (based on escape-sequence codes to switch between Chinese, Japanese and Korean character sets, hence without unification)
* Big5 extensions
* GCCS and its successor HKSCS
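The ISO/IEC 2022 switching mechanism is visible in the encoded bytes. A Python sketch using the standard iso2022_jp codec (escape sequences announce which national set the following bytes belong to, so no unification across sets is needed):

```python
# ISO-2022-JP mixes ASCII and JIS X 0208 in one byte stream by switching
# sets in-band with escape sequences rather than unifying characters.
text = "Tokyo \u6771\u4EAC"          # ASCII plus the two kanji 東京
encoded = text.encode("iso2022_jp")

print(encoded.startswith(b"Tokyo \x1b$B"))  # True: ESC $ B enters JIS X 0208
print(encoded.endswith(b"\x1b(B"))          # True: ESC ( B returns to ASCII
print(encoded.decode("iso2022_jp") == text) # True: lossless round trip
```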
However, none of these alternative standards has been as widely adopted as Unicode, which is now the base character set for many new standards and protocols, is internationally adopted, and is built into the architecture of operating systems (Microsoft Windows, Apple macOS, and many Unix-like systems), programming languages (Perl, Python, C#, Java, Common Lisp, APL, C, C++), libraries (IBM's International Components for Unicode (ICU), along with the Pango, Graphite, Scribe, Uniscribe and ATSUI rendering engines), font formats (TrueType and OpenType), and so on.

In March 1989, a (B)TRON-based system was adopted by the Japanese government organization "Center for Educational Computing" as the system of choice for school education, including compulsory education. However, in April, a report titled "1989 National Trade Estimate Report on Foreign Trade Barriers" from the Office of the United States Trade Representative specifically listed the system as a trade barrier in Japan. The report claimed that the adoption of the TRON-based system by the Japanese government was advantageous to Japanese manufacturers and thus excluded US operating systems from the huge new market; specifically, the report listed MS-DOS, OS/2 and UNIX as examples. The Office of the USTR was allegedly under Microsoft's influence, as its former officer Tom Robertson was then offered a lucrative position by Microsoft. While the TRON system itself was subsequently removed from the list of sanctions under Section 301 of the Trade Act of 1974 after protests by the organization in May 1989, the trade dispute caused the Ministry of International Trade and Industry to accept a request from Masayoshi Son to cancel the Center for Educational Computing's selection of the TRON-based system for use in educational computers. The incident is regarded as a symbolic event for the loss of momentum and eventual demise of the BTRON system, which led to the widespread adoption of MS-DOS in Japan and the eventual adoption of Unicode with its successor Windows.


Merger of all equivalent characters

There has not been any push for full semantic unification of all semantically linked characters, though the idea would treat the respective users of East Asian languages the same, whether they write in Korean, Simplified Chinese, Traditional Chinese, Kyūjitai Japanese, Shinjitai Japanese or Vietnamese. Instead of some variants getting distinct code points while other groups of variants have to share single code points, all variants could be reliably expressed only with metadata tags (e.g., CSS formatting in webpages). The burden would fall on all those who use differing versions of a character, whether that difference be due to simplification, international variance or intra-national variance. However, for some platforms (e.g., smartphones), a device may come with only one font pre-installed. The system font must make a decision for the default glyph for each code point, and these glyphs can differ greatly, indicating different underlying graphemes.

Consequently, relying on language markup across the board as an approach is beset with two major issues. First, there are contexts where language markup is not available (code commits, plain text). Second, any solution would require every operating system to come pre-installed with many glyphs for semantically identical characters that have many variants. In addition to the standard character sets in Simplified Chinese, Traditional Chinese, Korean, Vietnamese, Kyūjitai Japanese and Shinjitai Japanese, there also exist "ancient" forms of characters that are of interest to historians, linguists and philologists.

Unicode's Unihan database has already drawn connections between many characters: it catalogs the connections between variant characters with distinct code points. However, for characters with a shared code point, the reference glyph image is usually biased toward the Traditional Chinese version. Also, the decision of whether to classify pairs as semantic variants or z-variants is not always consistent or clear, despite rationalizations in the handbook. For example, 丟 (U+4E1F) and 丢 (U+4E22) are a pair that Unicode gives as semantic variants, differing in a significant way in their abstract shapes, while Unicode lists other pairs as z-variants, differing only in font styling.
Paradoxically, Unicode considers and to be near-identical z-variants while at the same time classifying them as significantly different semantic variants. There are also cases of pairs of characters being simultaneously semantic variants, specialized semantic variants and simplified variants: 個 (U+500B) and 个 (U+4E2A). There are cases of non-mutual equivalence. For example, the Unihan database entry for 亀 (U+4E80) considers 龜 (U+9F9C) to be its z-variant, but the entry for 龜 does not list 亀 as a z-variant, even though 亀 was obviously already in the database at the time that the entry for 龜 was written.

Some clerical errors led to the doubling of completely identical characters, such as 﨣 (U+FA23) and 𧺯 (U+27EAF). If a single font maps glyphs to both code points, the two should appear identical. These cases are listed as z-variants despite having no variance at all. Other duplicate characters were added intentionally to facilitate bit-for-bit round-trip conversion. Because round-trip conversion was an early selling point of Unicode, this meant that if a national standard in use unnecessarily duplicated a character, Unicode had to do the same. Unicode calls these intentional duplications "compatibility variants", as with 漢 (U+FA9A), which names 漢 (U+6F22) as its compatibility variant. As long as an application uses the same font for both, they should appear identical.

Sometimes, as in the case of 車 with U+8ECA and U+F902, the added compatibility character lists the already present version of 車 as both its compatibility variant and its z-variant. The compatibility variant field overrides the z-variant field, forcing normalization under all forms, including canonical equivalence. Despite the name, compatibility variants are canonically equivalent and are unified under any Unicode normalization scheme, not only under compatibility normalization. This is similar to how is canonically equivalent to a pre-composed .
Much software (such as the MediaWiki software that hosts Wikipedia) will replace canonically equivalent characters that are discouraged (e.g., the angstrom symbol) with the recommended equivalent. Despite the name, CJK "compatibility variants" are canonically equivalent characters, not compatibility characters. 漢 (U+FA9A) was added to the database later than 漢 (U+6F22) was, and its entry informs the user of the compatibility relationship; the entry for 漢 (U+6F22), on the other hand, does not list this equivalence. Unicode demands that entries, once admitted, cannot change their compatibility or equivalence, so that normalization rules for already existing characters do not change.

Some pairs of Traditional and Simplified characters are also considered to be semantic variants. By Unicode's definitions, it makes sense that all simplifications (other than those that merge wholly different characters for their homophony) are a form of semantic variant. Unicode classifies and as each other's respective traditional and simplified variants and also as each other's semantic variants. However, while Unicode classifies 億 (U+5104) and 亿 (U+4EBF) as each other's respective traditional and simplified variants, it does not consider them to be semantic variants of each other.

Unicode claims that "Ideally, there would be no pairs of z-variants in the Unicode Standard." This would make it seem that the goal is to unify at least all minor variants, compatibility redundancies and accidental redundancies, leaving the differentiation to fonts and to language tags. This conflicts with the stated goal of Unicode to take away that overhead, and to allow any number of the world's scripts to appear in the same document under one encoding system.
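The canonical equivalence of the CJK "compatibility variants" can be observed directly: every Unicode normalization form, the canonical NFC/NFD forms included, folds a compatibility ideograph into its unified counterpart. A minimal Python sketch using the standard library:

```python
import unicodedata

# U+FA9A is the CJK compatibility ideograph whose canonical
# decomposition is the unified ideograph U+6F22 (漢).
compat = "\uFA9A"
unified = "\u6F22"

# Every normalization form, canonical (NFC/NFD) as well as
# compatibility (NFKC/NFKD), maps the compatibility ideograph
# to the unified code point.
for form in ("NFC", "NFD", "NFKC", "NFKD"):
    assert unicodedata.normalize(form, compat) == unified

# The same mechanism folds the discouraged angstrom sign (U+212B)
# into the preferred letter A with ring above (U+00C5).
assert unicodedata.normalize("NFC", "\u212B") == "\u00C5"
print("all normalization forms unify U+FA9A with U+6F22")
```

This is why software that normalizes its input, as MediaWiki does, silently replaces these discouraged code points.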
Chapter One of the handbook states that "With Unicode, the information technology industry has replaced proliferating character sets with data stability, global interoperability and data interchange, simplified software, and reduced development costs. While taking the ASCII character set as its starting point, the Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z. It provides the capacity to encode all characters used for the written languages of the world – more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility."

That leaves us with settling on one unified reference grapheme for all z-variants, which is contentious, since few outside of Japan would recognize and as equivalent. Even within Japan, the variants sit on different sides of a major simplification called Shinjitai. By comparison, Unicode would effectively make the PRC's simplification of 侣 (U+4FA3) and 侶 (U+4FB6) a monumental difference. Such a plan would also eliminate the very visually distinct variations for characters like 直 (U+76F4) and 雇 (U+96C7).

One would expect all simplified characters to be simultaneously z-variants or semantic variants with their traditional counterparts, but many are neither. The strange case of characters being both semantic variants and specialized semantic variants is easier to explain given Unicode's definition that specialized semantic variants have the same meaning only in certain contexts. Languages use them differently: a pair whose characters are 100% drop-in replacements for each other in Japanese may not be so flexible in Chinese.
Thus, any comprehensive merger of recommended code points would have to maintain some variants that differ only slightly in appearance even if the meaning is 100% the same for all contexts in one language, because in another language the two characters may not be 100% drop-in replacements.


Examples of language-dependent glyphs

In each row of the following table, the same character is repeated in all six columns. However, each column is marked (by the lang attribute) as being in a different language: Chinese (simplified and two types of
traditional
), Japanese, Korean, or Vietnamese. The browser should select, for each character, a
glyph
(from a
font
) suitable to the specified language. (Besides actual character variation, such as differences in stroke order, number, or direction, the typefaces may also reflect different typographical styles, as with serif and sans-serif alphabets.) This fallback glyph selection works only if you have CJK fonts installed on your system and the font selected to display this article does not itself include glyphs for these characters.

No character variant that is exclusive to Korean or Vietnamese has received its own code point, whereas almost all Shinjitai Japanese variants and Simplified Chinese variants have distinct code points and unambiguous reference glyphs in the Unicode standard. In the twentieth century, East Asian countries made their own respective encoding standards. Within each standard, variants with distinct code points coexisted, hence the distinct code points in Unicode for certain sets of variants. Taking Simplified Chinese as an example, the two character variants 內 (U+5167) and 内 (U+5185) differ in exactly the same way as do the Korean and non-Korean variants of 全 (U+5168): each character's variants contain either 入 (U+5165) or 人 (U+4EBA). Both variants of the first character got their own distinct code points, but the two variants of the second character had to share the same code point.

The justification Unicode gives is that the national standards body in the PRC made distinct code points for the two variations of the first character 內/内, whereas Korea never made separate code points for the different variants of 全. There is a reason for this that has nothing to do with how the domestic bodies view the characters themselves. China went through a process in the twentieth century that changed (if not simplified) several characters. During this transition, there was a need to be able to encode both variants within the same document.
Korean has always used the variant of 全 with the 入 (U+5165) radical on top; therefore, it had no reason to encode both variants, and Korean-language documents made in the twentieth century had little reason to represent both versions in the same document. Almost all of the variants that the PRC developed or standardized got distinct code points owing simply to the fortune of the Simplified Chinese transition carrying through into the computing age. This privilege, however, seems to apply inconsistently: most simplifications performed in Japan and mainland China with code points in national standards, including characters simplified differently in each country, did make it into Unicode as distinct code points, yet sixty-two Shinjitai "simplified" characters with distinct code points in Japan were merged with their Kyūjitai traditional equivalents, like 海.

This can cause problems for the language-tagging strategy. There is no universal tag for the traditional and "simplified" versions of Japanese as there is for Chinese. Thus, any Japanese writer wanting to display the Kyūjitai form of 海 may have to tag the character as "Traditional Chinese" or trust that the recipient's Japanese font uses only the Kyūjitai glyphs; tags for Traditional Chinese and Simplified Chinese may be necessary to show the two forms side by side in a Japanese textbook, but this would preclude using the same font for an entire document. There are two distinct code points for 海 in Unicode, but only for "compatibility reasons": any Unicode-conformant font must display the Kyūjitai and Shinjitai versions' equivalent code points as the same. Unofficially, a font may display them differently, with 海 (U+6D77) as the Shinjitai version and 海 (U+FA45) as the Kyūjitai version (which is identical to the traditional version in written Chinese and Korean).

The radical 糸 (U+7CF8) is used in characters like 紅/红, with two variants, the second form being simply the cursive form.
The radical components of 紅 (U+7D05) and 红 (U+7EA2) are semantically identical, and the glyphs differ only in the latter using a cursive version of the component. However, in mainland China, the standards bodies wanted to standardize the cursive form when it appears in characters like 红. Because this change happened relatively recently, there was a transition period: both 紅 (U+7D05) and 红 (U+7EA2) got separate code points in the PRC's text-encoding standards so that Chinese-language documents could use both versions, and the two variants received distinct code points in Unicode as well.

The case of the radical 艸 (U+8278) illustrates how arbitrary the state of affairs is. When used to compose characters like 草 (U+8349), the radical was placed at the top, but it had two different forms. Traditional Chinese and Korean use a four-stroke version: at the top of 草 should be something that looks like two plus signs. Simplified Chinese, Kyūjitai Japanese and Shinjitai Japanese use a three-stroke version, like two plus signs sharing their horizontal strokes. The PRC's text-encoding bodies did not encode the two variants differently. The fact that almost every other change brought about by the PRC, no matter how minor, warranted its own code point suggests that this exception may have been unintentional. Unicode copied the existing standards as is, preserving such irregularities.

The Unicode Consortium has recognized errors in other instances. The myriad Unicode blocks for CJK Han ideographs contain redundancies present in the original standards, redundancies brought about by flawed importation of those standards, and accidental mergers that were later corrected, providing precedent for dis-unifying characters. For native speakers, variants can be unintelligible or unacceptable in educated contexts.
English speakers may understand a handwritten note saying "4P5 kg" as "495 kg", but writing the nine backwards (so it looks like a "P") can be jarring and would be considered incorrect in any school. Likewise, to users of one CJK language reading a document with "foreign" glyphs: variants of can appear as mirror images, can be missing a stroke or have an extraneous stroke, and may be unreadable to non-Japanese people (in Japan, both variants are accepted).
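The code-point disparity described in this section, with 內/内 (U+5167/U+5185) disunified while the Korean and non-Korean variants of 全 (U+5168) share a single code point, can be verified directly. A minimal Python sketch:

```python
# The Traditional and PRC forms of "inside" occupy two distinct code points.
nei_traditional = "內"   # U+5167, written with 入 on top
nei_prc = "内"           # U+5185, written with 人 on top
assert ord(nei_traditional) == 0x5167
assert ord(nei_prc) == 0x5185
assert nei_traditional != nei_prc  # two separate characters to Unicode

# The Korean and non-Korean variants of "all/whole" share one code
# point, U+5168. The visual difference (入 vs 人 on top) is left
# entirely to the font, typically selected via language tagging.
quan = "全"
assert ord(quan) == 0x5168
print("內/内 are disunified; the variants of 全 are unified at U+5168")
```

No string comparison can distinguish the two glyph variants of 全; only font selection or metadata can.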


Examples of some non-unified Han ideographs

In some cases, often where the changes are the most striking, Unicode has encoded variant characters separately, making it unnecessary to switch between fonts or lang attributes. However, some variants with arguably minimal differences get distinct code points, and not every variant with arguably substantial changes gets a unique code point. As an example, take a character such as 入 (U+5165), for which the only way to display the variants is to change the font (or lang attribute), as described in the previous table. On the other hand, for 內 (U+5167), the variant 内 (U+5185) gets a unique code point. For some characters, like 兌/兑 (U+514C/U+5151), either method can be used to display the different glyphs. In the following table, each row compares variants that have been assigned different code points. For brevity, note that
shinjitai
variants with different components will usually (and unsurprisingly) take unique codepoints (e.g., 氣/気). They will not appear here nor will the simplified Chinese characters that take consistently simplified radical components (e.g., 紅/红, 語/语). This list is not exhaustive.


Ideographic Variation Database (IVD)

To resolve issues brought about by Han unification, a Unicode Technical Standard known as the Unicode Ideographic Variation Database was created to address the problem of specifying a specific glyph in a plain-text environment. By registering glyph collections in the Ideographic Variation Database (IVD), Ideographic Variation Selectors can be used to form an Ideographic Variation Sequence (IVS) that specifies or restricts the appropriate glyph during text processing in a Unicode environment.
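An IVS is simply a base ideograph immediately followed by one of the 240 Ideographic Variation Selectors (U+E0100 through U+E01EF). The Python sketch below builds such a sequence; the pairing of 葛 (U+845B) with the first selector is purely illustrative, since which selector corresponds to which glyph depends on the collection registered in the IVD (for example, Adobe-Japan1).

```python
BASE = "\u845B"          # 葛 (U+845B), a character with well-known glyph variants
VS17 = "\U000E0100"      # VARIATION SELECTOR-17, the first ideographic variation selector

# An Ideographic Variation Sequence is just the base character
# immediately followed by a selector; it remains valid plain text.
ivs = BASE + VS17
assert len(ivs) == 2     # two code points, one logical character

def is_ivs_selector(cp: int) -> bool:
    """True if the code point is one of the 240 Ideographic Variation Selectors."""
    return 0xE0100 <= cp <= 0xE01EF

assert is_ivs_selector(ord(ivs[1]))

# Software that does not understand variation selectors can simply
# ignore them and fall back to the base glyph.
assert ivs[0] == BASE
```

Because the selectors are default-ignorable, an IVS degrades gracefully: a renderer without IVD support shows the base glyph unchanged.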


Unicode ranges

Ideographic characters assigned by Unicode appear in the following blocks:

* CJK Unified Ideographs (4E00–9FFF) (also known as URO, for Unified Repertoire and Ordering)
* CJK Unified Ideographs Extension A (3400–4DBF)
* CJK Unified Ideographs Extension B (20000–2A6DF)
* CJK Unified Ideographs Extension C (2A700–2B73F)
* CJK Unified Ideographs Extension D (2B740–2B81F)
* CJK Unified Ideographs Extension E (2B820–2CEAF)
* CJK Unified Ideographs Extension F (2CEB0–2EBEF)
* CJK Unified Ideographs Extension G (30000–3134F)
* CJK Unified Ideographs Extension H (31350–323AF)
* CJK Compatibility Ideographs (F900–FAFF) (the twelve characters at FA0E, FA0F, FA11, FA13, FA14, FA1F, FA21, FA23, FA24, FA27, FA28 and FA29 are actually "unified ideographs", not "compatibility ideographs")

Unicode includes support for CJKV radicals, strokes, punctuation, marks and symbols in the following blocks:

* CJK Radicals Supplement (2E80–2EFF)
* CJK Strokes (31C0–31EF)
* CJK Symbols and Punctuation (3000–303F)
* Ideographic Description Characters (2FF0–2FFF)

Additional compatibility (discouraged-use) characters appear in these blocks:

* CJK Compatibility (3300–33FF)
* CJK Compatibility Forms (FE30–FE4F)
* CJK Compatibility Ideographs (F900–FAFF)
* CJK Compatibility Ideographs Supplement (2F800–2FA1F)
* Enclosed CJK Letters and Months (3200–32FF)
* Enclosed Ideographic Supplement (1F200–1F2FF)
* Kangxi Radicals (2F00–2FDF)

These compatibility characters (excluding the twelve unified ideographs in the CJK Compatibility Ideographs block) are included for compatibility with legacy text-handling systems and other legacy character sets. They include forms of characters for vertical text layout and rich-text characters that Unicode recommends handling through other means.


International Ideographs Core

The International Ideographs Core (IICore) is a subset of 9,810 ideographs derived from the CJK Unified Ideographs tables, designed to be implemented in devices with limited memory and input/output capability, and in applications where use of the complete ISO 10646 ideograph repertoire is not feasible.


Unihan database files

The Unihan project has always made an effort to make its build database available. The libUnihan project provides a normalized SQLite Unihan database and a corresponding C library. All tables in this database are in fifth normal form. libUnihan is released under the LGPL, while its database, UnihanDb, is released under the MIT License.
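As a rough illustration of how such a normalized database can be queried, the Python sketch below builds a tiny in-memory SQLite table in the spirit of the Unihan data (one row per code point, property, and value). The schema is invented for this example and is not libUnihan's actual layout; the sample rows are hand-copied in the style of Unihan's variant fields.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE unihan (codepoint INTEGER, property TEXT, value TEXT)")

# A few illustrative rows; the schema and row set here are
# hypothetical, chosen only for the demonstration.
rows = [
    (0x5104, "kSimplifiedVariant", "U+4EBF"),   # 億 -> 亿
    (0x4EBF, "kTraditionalVariant", "U+5104"),  # 亿 -> 億
    (0x4E80, "kZVariant", "U+9F9C"),            # 亀 -> 龜
]
conn.executemany("INSERT INTO unihan VALUES (?, ?, ?)", rows)

# Look up the simplified variant recorded for 億 (U+5104).
cur = conn.execute(
    "SELECT value FROM unihan WHERE codepoint = ? AND property = ?",
    (0x5104, "kSimplifiedVariant"),
)
print(cur.fetchone()[0])  # prints "U+4EBF"
```

A fully normalized layout like this makes the asymmetries discussed earlier (such as a z-variant listed in one direction but not the other) easy to detect with a single self-join.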


See also

* Chinese character encoding
* GB 18030
* Sinicization
* Z-variant
* List of CJK fonts
* Allography
* Variant Chinese characters


Notes


References
