UTF-32
UTF-32 stands for Unicode Transformation Format in 32 bits. It is an encoding of Unicode code points that uses exactly 32 bits (four bytes) per code point (though a number of leading bits must be zero, as there are fewer than 2²¹ Unicode code points). UTF-32 is a fixed-length encoding, in contrast to all other Unicode transformation formats, which are variable-length encodings. Each 32-bit value in UTF-32 represents one Unicode code point and is exactly equal to that code point's numerical value.
The main advantage of UTF-32 is that the Unicode code points are directly indexable. Finding the Nth code point in a sequence of code points is a constant time operation. In contrast, a variable-length code requires sequential access to find the Nth code point in a sequence. This makes UTF-32 a simple replacement in code that uses integers that are incremented by one to identify a character in a string, as was commonly done for ASCII.
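A minimal Python sketch of this difference (the sample string and index are arbitrary examples): with UTF-32 the Nth code point sits at a fixed byte offset, whereas with UTF-8 its position can only be found by scanning from the start.

```python
import struct

text = "na\u00efve \U0001F60A"   # "naïve 😊": one-, two-, and four-byte UTF-8 characters
n = 6                            # index of the emoji, counted in code points

# UTF-32 (little-endian, no BOM): the Nth code point sits at byte 4*N.
buf32 = text.encode("utf-32-le")
cp = struct.unpack_from("<I", buf32, 4 * n)[0]   # O(1) fixed-offset read
assert chr(cp) == text[n]

# UTF-8: the byte offset of the Nth code point is only found by scanning
# from the start and skipping continuation bytes (0b10xxxxxx): O(N).
buf8 = text.encode("utf-8")
i = seen = 0
while seen < n:
    i += 1
    while i < len(buf8) and buf8[i] & 0xC0 == 0x80:
        i += 1
    seen += 1
assert buf8[i:].decode("utf-8")[0] == text[n]
```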
The main disadvantage of UTF-32 is that it is space-inefficient, using four bytes per code point. Characters beyond the BMP are relatively rare in most texts, and can typically be ignored for sizing estimates. This makes UTF-32 close to twice the size of UTF-16. It can be up to four times the size of UTF-8 depending on how many of the characters are in the ASCII subset.
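The size relationships can be checked directly in Python; the sample strings below are arbitrary:

```python
for s in ("hello", "\U0001F600" * 3):          # ASCII text vs. three emoji
    print(len(s.encode("utf-8")),
          len(s.encode("utf-16-le")),
          len(s.encode("utf-32-le")))
# "hello":  5 / 10 / 20 bytes -> UTF-32 is 4x UTF-8 and 2x UTF-16
# emoji:   12 / 12 / 12 bytes -> the encodings converge for non-BMP text
```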
History
The original ISO 10646 standard defines a 32-bit encoding form called UCS-4, in which each encoded character in the Universal Character Set (UCS) is represented by a 31-bit value between 0 and 0x7FFFFFFF (the sign bit was unused and zero). In November 2003, Unicode was restricted by RFC 3629 to match the constraints of the UTF-16 character encoding: explicitly prohibiting code points greater than U+10FFFF (as well as the high and low surrogates U+D800 through U+DFFF).[1][2] Although the ISO standard had (as of 1998 in Unicode 2.1) "reserved for private use" the ranges 0xE00000 to 0xFFFFFF and 0x60000000 to 0x7FFFFFFF,[3] these areas were removed in later versions. Because the Principles and Procedures document of ISO/IEC JTC 1/SC 2 Working Group 2 states that all future assignments of characters will be constrained to the Unicode range, UTF-32 will be able to represent all UCS characters, and UTF-32 and UCS-4 are identical.
Analysis
Though a fixed number of bytes per code point appears convenient, it is not as useful as it seems. It makes truncation easier, but not significantly so compared to UTF-8 and UTF-16 (both of which can search backwards for the point to truncate by looking at 2–4 code units at most).
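For example, a sketch of safe truncation of UTF-8 in Python (the helper name is ours): backing up over at most three continuation bytes finds a character boundary, so fixed-width code units are not needed for this.

```python
def truncate_utf8(buf: bytes, max_bytes: int) -> bytes:
    """Cut UTF-8 data to at most max_bytes without splitting a character."""
    if len(buf) <= max_bytes:
        return buf
    end = max_bytes
    # Back up past continuation bytes (0b10xxxxxx); at most 3 iterations.
    while end > 0 and buf[end] & 0xC0 == 0x80:
        end -= 1
    return buf[:end]

assert truncate_utf8("naïve".encode("utf-8"), 3) == b"na"   # don't split "ï"
```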
It is extremely rare that code wishes to find the Nth code point without first examining the code points 0 to N−1,[4] so an integer index that is incremented by 1 for each character can be replaced with an integer offset, measured in code units and incremented by the number of code units as each character is examined. This removes the speed advantage that novice programmers may believe UTF-32 has.
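A sketch of that offset-based pattern over UTF-8 (validation omitted; the helper name is ours, not a standard API): the loop variable is a byte offset that advances by each character's code-unit count, rather than a character index that advances by one.

```python
def iter_code_points(buf: bytes):
    """Yield (byte_offset, code_point) pairs from (assumed valid) UTF-8."""
    i = 0
    while i < len(buf):
        b = buf[i]
        if b < 0x80:
            size, cp = 1, b
        elif b < 0xE0:
            size, cp = 2, b & 0x1F
        elif b < 0xF0:
            size, cp = 3, b & 0x0F
        else:
            size, cp = 4, b & 0x07
        for k in range(1, size):
            cp = (cp << 6) | (buf[i + k] & 0x3F)
        yield i, cp
        i += size            # advance by code units, not by one "character"

assert list(iter_code_points("A€".encode("utf-8"))) == [(0, 0x41), (1, 0x20AC)]
```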
UTF-32 does not make calculating the displayed width of a string easier, since even with a "fixed width" font there may be more than one code point per character position (combining marks) or more than one character position per code point (for example, double-width CJK ideographs). Editors that limit themselves to left-to-right languages and precomposed characters can take advantage of fixed-sized code units, but such editors are unlikely to support non-BMP characters and thus can work equally well with the 16-bit UTF-16 encoding.
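Both effects can be seen with Python's standard unicodedata module; the sample characters are arbitrary:

```python
import unicodedata

combined = "e\u0301"                        # 'e' + COMBINING ACUTE ACCENT
print(len(combined))                        # 2 code points, but one display cell

wide = "\u6F22"                             # the CJK ideograph 漢
print(unicodedata.east_asian_width(wide))   # 'W': one code point, two display cells
```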
Use
The main use of UTF-32 is in internal APIs where the data is single code points or glyphs, rather than strings of characters. For instance in modern text rendering it is common that the last step is to build a list of structures each containing coordinates (x,y), attributes, and a single UTF-32 character identifying the glyph to draw. Often non-Unicode information is stored in the "unused" 11 bits of each word.
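A hypothetical sketch of such a structure (the field names are ours, not from any particular renderer), including the trick of packing private flags into the bits a code point never uses:

```python
from dataclasses import dataclass

@dataclass
class GlyphRecord:
    x: float          # position at which to draw the glyph
    y: float
    attributes: int   # renderer-private flags (hypothetical)
    code_point: int   # the character to draw, as a single UTF-32 value

# Alternatively, since code points need at most 21 bits, private flags can
# be packed into the upper 11 bits of one 32-bit word:
BOLD = 1 << 31                        # hypothetical flag
word = BOLD | ord("A")
assert word & 0x1FFFFF == ord("A")    # masking recovers the code point
```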
On Unix systems, UTF-32 strings are sometimes used for storage, due to the type wchar_t being defined as 32-bit. Python versions up to 3.2 can be compiled to use them instead of UTF-16; from version 3.3 onward, UTF-16 support is dropped and a system is used whereby strings are stored in UTF-32 but with leading zero bytes optimized away "depending on the character with the largest Unicode ordinal (1, 2, or 4 bytes)" so that all characters are that size.[5] The Seed7 and Lasso programming languages encode all characters and strings with UTF-32, in the belief that direct indexing is important, whereas the Julia language had native UTF-8, UTF-16, and UTF-32 encodings in its standard library, but simplified to UTF-8 only (with all other encodings considered legacy and moved to packages outside the standard library, in accordance with the "UTF-8 Everywhere Manifesto"[6]). Use of UTF-32 strings on Windows (where wchar_t is 16 bits) is almost non-existent.
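The effect of this flexible representation can be observed with sys.getsizeof; the exact totals vary by CPython version and platform, but the growth in per-character width (1, then 2, then 4 bytes) is visible:

```python
import sys

latin  = "a" * 100                      # widest character fits in 1 byte
bmp    = "a" * 99 + "\u20ac"            # a single '€' forces 2 bytes per character
astral = "a" * 99 + "\U0001F600"        # a single emoji forces 4 bytes per character

print(sys.getsizeof(latin), sys.getsizeof(bmp), sys.getsizeof(astral))
```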
Though technically invalid, the surrogate halves are often encoded and allowed. This allows invalid UTF-16 (such as Windows filenames) to be translated to UTF-32, similar to how the WTF-8 variant of UTF-8 works. Sometimes paired surrogates are encoded instead of non-BMP characters, similar to CESU-8. Due to the large number of unused 32-bit values, it is also possible to preserve invalid UTF-8 by using non-Unicode values to encode UTF-8 errors, though there is no standard for this.
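Python's codecs illustrate this practice: the "surrogatepass" error handler lets an unpaired surrogate round-trip through UTF-32, even though a strict decode of the same bytes rejects it.

```python
lone = "\ud800"                                   # an unpaired high surrogate

data = lone.encode("utf-32-le", errors="surrogatepass")
assert data == b"\x00\xd8\x00\x00"                # encoded as the raw value 0xD800

assert data.decode("utf-32-le", errors="surrogatepass") == lone
# Decoding the same bytes without "surrogatepass" raises UnicodeDecodeError.
```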
See also
References
- ↑ ISO/IEC 10646:2014 Clause 9.4: "Because surrogate code points are not UCS scalar values, UTF-32 code units in the range 0000 D800-0000 DFFF are ill-formed". Clause 4.57: "[UCS codespace] consisting of the integers from 0 to 10 FFFF (hexadecimal)". Clause 4.58: "[UCS scalar value] any UCS code point except high-surrogate and low-surrogate code points".
- ↑ Mapping code points to Unicode encoding forms, § 1: UTF-32
- ↑ THE UNIVERSAL CHARACTER SET (UCS)
- ↑ http://www.ibm.com/developerworks/xml/library/x-utf8/
- ↑ Löwis, Martin. "PEP 393 -- Flexible String Representation". python.org. Python. Retrieved 26 October 2014.
- ↑ "UTF-8 Everywhere Manifesto".
External links
- The Unicode Standard 5.0.0, chapter 3 – formally defines UTF-32 in § 3.10, D99-D101
- Unicode Standard Annex #19 – formally defined UTF-32 for Unicode 3.x (March 2001; last updated March 2002)
- Registration of new charsets: UTF-32, UTF-32BE, UTF-32LE – announcement of UTF-32 being added to the IANA charset registry (April 2002)