UTF-8

UTF-8 is a variable-length character encoding, which in this context means that it uses one to four bytes per character. The first 128 code points are encoded as a single byte identical to ASCII, giving the encoding full backwards compatibility with ASCII. Text in ASCII and most Latin-based alphabets therefore grows little in size when encoded as UTF-8, because most of its characters still fit in one byte. Users of East Asian scripts such as Japanese are less fortunate: their characters, assigned to higher code-point ranges, take three bytes in UTF-8 instead of the two bytes of their legacy encodings, an increase of as much as 50% in data size.

What is a character encoding?
Computers do not understand printed text as a human would. To a computer, every character of text is represented by a number. Traditionally, each set of numbers used to represent an alphabet's characters (known as a coding system, encoding, or character set) was limited in size due to limitations in computer hardware.

The history of character encodings
The most common (or at least the most widely accepted) character set is ASCII (American Standard Code for Information Interchange). It is widely held that ASCII is the most successful software standard ever created. ASCII was first standardized in 1963 and last revised in 1986 as ANSI X3.4-1986 by the American National Standards Institute; equivalent specifications include RFC 20, ISO/IEC 646:1991, and ECMA-6.

ASCII is strictly seven-bit, meaning that it uses bit patterns representable with seven binary digits, which provides a range of 0 to 127 in decimal. This range includes 33 non-printing control characters: codes 0 through 31, plus the final control character, DEL or delete, at 127. Codes 32 through 126 are visible characters: the space, punctuation marks, Latin letters, and digits.

The eighth bit in ASCII was originally used as a parity bit for error checking. If error checking is not desired, it is left as 0. This means that, with ASCII, each character is represented by a single byte.
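This single-byte property is easy to observe from any POSIX shell; the following sketch uses the POSIX printf feature of printing a character's numeric code:

```shell
# Print the decimal code of the character 'A' (POSIX printf behaviour).
printf '%d\n' "'A"   # prints 65
# A pure-ASCII string occupies exactly one byte per character.
printf 'Hello' | wc -c
```

The second command reports 5 bytes for the five ASCII characters.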

Although ASCII was enough for communication in modern English, things were not so easy in other European languages that include accented characters. The ISO 8859 standards were developed to meet these needs. They were backwards compatible with ASCII, but instead of leaving the eighth bit blank, they used it to make another 128 code points available in each encoding. ISO 8859's limitations soon came to light: there are currently 15 published parts of the standard (ISO 8859-1 through 8859-16, with part 12 abandoned), and outside of the ASCII-compatible byte range these character sets often assign conflicting letters to the same byte. To complicate interoperability further, some versions of Microsoft Windows use Windows-1252 instead for Western European languages. It is a superset of the printable characters of ISO 8859-1, but it differs in several ways; all of these sets do retain ASCII compatibility.

The development of entirely different encodings for non-Latin alphabets, such as the multi-byte EUC (Extended Unix Code) family used for Japanese and Korean (and to a lesser extent Chinese), created more confusion, because other operating systems used different character sets for the same languages, for example Shift-JIS and ISO-2022-JP. Users wishing to view Cyrillic glyphs had to choose between KOI8-R for Russian and Bulgarian or KOI8-U for Ukrainian, alongside the other Cyrillic encodings such as the unsuccessful ISO 8859-5 and the common Windows-1251 set. These character sets were largely incompatible with one another. It should be mentioned, though, that the KOI8 encodings place Cyrillic characters in Latin alphabetical order, so if the eighth bit is stripped, the text remains decipherable on an ASCII terminal as a case-reversed transliteration.

All of this led to mass confusion and a near-total inability to communicate multilingually, especially across different alphabets. Enter Unicode.

What is Unicode?
Unicode throws away the traditional single-byte limit of character sets. It uses 17 "planes" of 65,536 code points each, for a maximum of 1,114,112 code points. The first plane, known as the Basic Multilingual Plane (BMP), contains almost every character a user will ever need, which has led many to the wrong assumption that Unicode is a 16-bit character set.

Unicode can be encoded in many different ways, but the two most common families are UTF (Unicode Transformation Format) and UCS (Universal Character Set). The number after UTF indicates the number of bits in one code unit, while the number after UCS indicates the number of bytes. UTF-8 has become the most widespread means for the interchange of Unicode text as a result of its eight-bit clean nature; it is therefore the subject of this document.
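The variable-length nature of UTF-8 can be observed directly in a shell by counting the bytes of single characters:

```shell
printf 'a' | wc -c   # 1 byte: ASCII
printf 'é' | wc -c   # 2 bytes: Latin-1 supplement
printf '日' | wc -c   # 3 bytes: BMP, CJK
printf '😀' | wc -c   # 4 bytes: outside the BMP
```

Each printf emits the raw UTF-8 bytes of the character, and wc -c counts them.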

What UTF-8 can do
UTF-8 allows users to work in a standards-compliant and internationally accepted multilingual environment with comparatively low data redundancy. It is the preferred way to transmit non-ASCII characters over the Internet, through email, IRC, or almost any other medium. Despite this, some people regard the use of UTF-8 in online communication as abusive. It is always best to gauge the attitude towards UTF-8 in a specific channel, mailing list, or Usenet group before using non-ASCII UTF-8.

Finding or creating UTF-8 locales
Now that the principles behind Unicode have been laid out, get ready to start using UTF-8 locally!

The preliminary requirement for UTF-8 is a version of glibc installed with national language support. The recommended way to achieve this on Gentoo is through the /etc/locale.gen file. It is beyond the scope of this document to explain the usage of this file; further explanation can be found in the Gentoo Localization Guide.

Next, the user needs to decide whether a UTF-8 locale is available for the language of choice, or whether one needs to be generated.
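The check can be done with locale -a, which lists every locale known to the system; a grep narrows the output down to UTF-8 variants:

```shell
# List all available locales and filter for UTF-8 variants.
locale -a | grep -i utf
```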

From the output of locale -a, look for a result with a suffix similar to .utf8 or .UTF-8. If there is no result with such a suffix, a UTF-8 compatible locale must be created.

Replace "en_GB" with the desired locale setting:
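The generation step is presumably glibc's localedef; a sketch, to be run as root with the desired locale substituted for en_GB:

```shell
# Compile a UTF-8 locale from the en_GB locale definition.
localedef -i en_GB -f UTF-8 en_GB.UTF-8
```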

Another way to make a UTF-8 locale available is to add it to the /etc/locale.gen file and generate the necessary locales using the locale-gen command.
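Assuming /etc/locale.gen's format of one locale/charset pair per line, the entry looks like this (en_GB.UTF-8 is an example):

```
en_GB.UTF-8 UTF-8
```

Afterwards, run locale-gen as root to generate every locale listed in the file.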

Setting the locale
There is one environment variable that needs to be set in order to use the new UTF-8 locales: LC_CTYPE (optionally modify the LANG variable to change the system language as well). There are many different ways to set it; some system administrators prefer to have a UTF-8 environment only for a specific user, in which case they set the variables in ~/.profile (for /bin/sh Bourne shell users), or in ~/.bash_profile or ~/.bashrc (for /bin/bash Bourne Again shell users). More details and best practices can be found in the Localization Guide.
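For the per-user approach, a minimal sketch of such a startup-file entry (the en_GB.UTF-8 value is an example):

```shell
# ~/.bashrc -- enable a UTF-8 locale for this user only
export LC_CTYPE="en_GB.UTF-8"
export LANG="en_GB.UTF-8"
```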

Still others prefer to set the locale globally. One specific circumstance where the author particularly recommends doing this is when xdm is in use, because its init script starts the display manager and desktop before any of the aforementioned shell startup files are sourced; in other words, before any of the variables are loaded into the environment.

Setting the locale globally should be done using the /etc/env.d/02locale file. This file should look something like the following:
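Assuming Gentoo's conventional /etc/env.d/02locale file, the content is plain variable assignments (the values here are examples):

```
LANG="en_GB.UTF-8"
LC_COLLATE="C"
```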

Next, the environment must be updated with the change.
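On Gentoo, the environment is refreshed with env-update, which rebuilds the profile environment from the /etc/env.d files, followed by re-sourcing the profile:

```shell
env-update && source /etc/profile
```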

Now, run locale with no arguments to see if the correct variables have been loaded in the environment:

Alternatively, using eselect to set locales
Although it is good to understand the manual configuration described above, the locale can also be verified and set using the eselect utility.

Use eselect to list the available locales on the system:

Setting the locale with eselect is as simple as listing the locales. Once the correct locale has been determined, invoke:
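A sketch of the command, assuming the locale name reported by the list command (the number from that list works as well):

```shell
eselect locale set en_GB.utf8
```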

Check the result:

If it is preferred to have the LC_ALL variable set instead of LANG, run the appropriate eselect command:

Running the following command will update the variables in the shell:

That is everything. The system is now using UTF-8 locales. The next hurdle is the configuration of the applications used from day to day.

Application support
When Unicode first started gaining momentum in the software world, multibyte character sets were not well suited to languages like C, the base language of most commonly used programs. Even today, some programs cannot handle UTF-8 properly. Fortunately, the majority of programs, especially the common ones, support it.

Filenames, NTFS, and FAT
There are several NLS options in the Linux kernel configuration menu, but it is important to not become confused. For the most part, the only thing that needs to be done is to build UTF-8 NLS support into the kernel, and change the default NLS option to utf8.

When planning to mount NTFS partitions, users may need to specify an nls option with mount. When planning to mount FAT partitions, users may need to specify an iocharset option with mount. Optionally, users can also set a default codepage for FAT in the kernel configuration.

Avoid setting iocharset to utf8; it is not recommended. Instead, pass the utf8 option when mounting FAT partitions. For further information, see man mount or the appropriate kernel documentation (Documentation/filesystems/vfat.txt).

For changing the encoding of filenames, the convmv utility can be used.

The format of the convmv command is as follows:

Substitute iso-8859-1 with the charset being converted from:
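A sketch of the invocation; convmv performs a dry run unless --notest is given, so the rename can be previewed first:

```shell
# Preview the rename, then repeat with --notest to apply it.
convmv -f iso-8859-1 -t utf-8 filename
convmv -f iso-8859-1 -t utf-8 --notest filename
```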

For changing the contents of files, use the iconv utility, which comes bundled with glibc and should be installed on all Gentoo systems. Substitute iso-8859-1 with the charset being converted from. After running the command, be sure to check for sane output:

To convert a file, iconv's output must be redirected to another file:
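A minimal self-contained sketch, creating a sample ISO-8859-1 file and converting it (the filenames are examples):

```shell
# Create a sample ISO-8859-1 file: 'café' with a single Latin-1 é byte (0xE9).
printf 'caf\351\n' > old.txt
# Convert its contents to UTF-8, writing to a new file.
iconv -f iso-8859-1 -t utf-8 old.txt > new.txt
cat new.txt   # café
```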

The recode package can also be used for this purpose.

The system console
To enable UTF-8 on the console, edit /etc/rc.conf, set unicode="yes", and read the comments in that file -- it is important to have a font with a good range of characters to make the most of Unicode. For this to work, make sure the Unicode locale has been properly created.

The keymap variable, set in /etc/conf.d/keymaps, should have a Unicode keymap specified.
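Assuming an OpenRC/baselayout-style configuration, the relevant settings are spread over three files; the values below are examples:

```
# /etc/rc.conf
unicode="yes"

# /etc/conf.d/consolefont -- a font with good Unicode coverage
consolefont="lat9w-16"

# /etc/conf.d/keymaps -- a Unicode-capable keymap
keymap="uk"
```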

Ncurses and Slang
It is wise to add unicode to the global USE flags in /etc/portage/make.conf, and then to re-emerge sys-libs/ncurses and sys-libs/slang. Portage will do this automatically when the --newuse option is used. Run the following command to pull in the packages:
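A sketch of the rebuild, assuming the make.conf location used by current Portage (run as root):

```shell
# After adding unicode to USE in /etc/portage/make.conf:
emerge --ask --oneshot --newuse sys-libs/ncurses sys-libs/slang
```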

We also need to rebuild packages that link against these libraries, now that the USE changes have been applied. The tool used here, revdep-rebuild, is part of the app-portage/gentoolkit package.
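A sketch of that step, assuming revdep-rebuild from gentoolkit (run as root):

```shell
# Install the toolkit if it is not present, then rebuild
# packages with broken links against the rebuilt libraries.
emerge --ask app-portage/gentoolkit
revdep-rebuild
```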

KDE, GNOME and Xfce
All of the major desktop environments have full Unicode support and require no further setup than what has already been covered in this guide. This is because the underlying graphical toolkits (Qt or GTK+2) are UTF-8 aware. Consequently, all applications running on top of these toolkits should be UTF-8 aware out of the box.

The exceptions to this rule are Xlib and GTK+1. GTK+1 requires an iso-10646-1 FontSpec in ~/.gtkrc. Applications using Xlib or Xaw also need to be given a similar FontSpec, otherwise they will not display Unicode correctly.
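A hedged example of such a ~/.gtkrc entry; the XLFD font name is illustrative, and any font available in an iso10646-1 encoding will do:

```
style "default" {
    fontset = "-misc-fixed-medium-r-normal--14-*-*-*-*-*-iso10646-1"
}
widget_class "*" style "default"
```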

If an application has support for both a Qt and GTK+2 GUI, the GTK+2 GUI will generally give better results with Unicode.

X11 and fonts
TrueType fonts support Unicode, and most of the fonts that ship with Xorg have impressive character coverage, although no single font provides a glyph for every character defined by Unicode. To build fonts (including the Bitstream Vera set) with support for East Asian scripts in X, make sure the cjk USE flag is set. Many other applications use this flag as well, so it may be worthwhile to add it as a permanent USE flag.

Also, many font packages in Portage are Unicode aware. See the Fontconfig page for more information on recommended fonts and configuration.

Window managers and terminal emulators
Window managers not built on GTK+ or Qt generally have very good Unicode support, as they often use the Xft library for handling fonts. If your window manager does not use Xft for fonts, you can still use the FontSpec mentioned in the previous section as a Unicode font.

Terminal emulators that use Xft and support Unicode are harder to come by. Aside from Konsole and gnome-terminal, the best options in Portage include rxvt-unicode and mlterm, or plain xterm when built with the unicode USE flag and invoked as uxterm. GNU Screen supports UTF-8 too, when invoked as screen -U or when the following is put into ~/.screenrc:
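For GNU Screen, the configuration-file equivalent of the -U flag is:

```
# ~/.screenrc
defutf8 on
```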

Vim, emacs, xemacs and nano
Vim provides full UTF-8 support and has built-in detection of UTF-8 files. For further information, use :help mbyte.txt within Vim.

GNU Emacs since version 23 and XEmacs version 21.5 have full UTF-8 support. GNU Emacs 24 also supports editing bidirectional text.

Nano has provided full UTF-8 support since version 1.3.6.

Shells
Currently, bash provides full Unicode support through the GNU readline library. The Z shell (zsh) offers Unicode support when built with the unicode USE flag.

The C shell, tcsh, and ksh do not provide UTF-8 support at all.

Irssi
Irssi has complete UTF-8 support, although it does require a user to set an option.
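The option in question is the terminal charset; assuming a UTF-8 terminal, the following is entered at the Irssi prompt:

```
/set term_charset utf-8
/save
```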

For channels where non-ASCII characters are often exchanged in non-UTF-8 charsets, the /recode command may be used to convert the characters. Type /help recode for more information.

Mutt
The Mutt mail user agent has very good Unicode support. To use UTF-8 with Mutt, nothing needs to be added to its configuration files: Mutt will work in a Unicode environment without modification, provided that all of its configuration files (signature included) are UTF-8 encoded.

Further information is available from the Mutt Wiki.

Man
Man pages are an integral part of any Linux machine. To ensure that any Unicode in man pages renders correctly, edit /etc/man.conf and replace a line as shown below.
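Assuming the classic man-1.x /etc/man.conf format, the change drops the -Tascii flag from the NROFF line so that nroff can emit multibyte output; a sketch:

```
# /etc/man.conf -- before:
# NROFF       /usr/bin/nroff -Tascii -c -mandoc
# after:
NROFF        /usr/bin/nroff -mandoc -c
```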

links and elinks
These are commonly used text-based browsers. On elinks and links there are two ways to enable UTF-8 support: through the Setup menu from within the browser, or by editing the configuration file. To set the option through the browser, open a site with elinks or links, press Alt+S to enter the Setup menu, then select Terminal options (or press T). Scroll down to the last option, UTF-8 I/O, and enable it by pressing Enter, then save and exit the menu. In links it may be necessary to enter the menu again with Alt+S and press S to save. The configuration file option is shown below.
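For the configuration-file route, elinks stores the setting in ~/.elinks/elinks.conf; the option name below is the one used by current elinks releases (the terminal name segment varies with the terminal in use):

```
set terminal.linux.utf_8_io = 1
```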

Samba
Samba is a software suite that implements the SMB (Server Message Block) protocol for UNIX-like systems such as macOS, Linux, and FreeBSD. The protocol is sometimes also referred to as the Common Internet File System (CIFS). Samba also includes the NetBIOS system, used for file sharing over Windows networks.

Add the following under the [global] section of smb.conf:
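A sketch of the smb.conf settings; the dos charset value depends on which legacy clients are in use and is only an example:

```
[global]
    unix charset = UTF-8
    dos charset = 850
```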

Testing it all out
There are numerous UTF-8 test websites around. All Mozilla-based browsers (including Firefox) support UTF-8, as do Konqueror and Opera, and the text-based browsers discussed earlier.

When using one of the text-only web browsers, make absolutely sure you are using a Unicode-aware terminal.

If certain characters are displayed as boxes with letters or numbers inside, it means that the font does not have a glyph for the character the UTF-8 text calls for. Instead, a box containing the hex code of the missing code point is displayed.


 * unicode-table.com
 * A W3C UTF-8 Test Page
 * A UTF-8 test page provided by the University of Frankfurt

Input methods
Dead keys may be used to input characters in X that are not included on the keyboard. They work by pressing AltGr (in some countries, the right Alt key) together with an optional key from the non-alphabetical section of the keyboard to the left of the Return key, releasing them, and then pressing a letter; the dead key modifies it. Input can be further modified by holding Shift at the same time as the AltGr and dead-key combination.

To enable dead keys in X, a layout that supports them is needed. Most European layouts already have dead keys in their default variant; however, this is not true of North American layouts. Although there is a degree of inconsistency between layouts, the easiest solution seems to be to use a layout of the form "en_US" rather than "us", for example. The layout is set in /etc/X11/xorg.conf like so:
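Assuming a classic xorg.conf keyboard section, the layout option looks like this (the identifier is an example):

```
Section "InputDevice"
    Identifier "Keyboard0"
    Driver     "kbd"
    Option     "XkbLayout" "en_US"
EndSection
```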

This change will come into effect when the X server is restarted. To apply the change immediately, use the setxkbmap tool, for example: setxkbmap en_US.

It is probably easiest to describe dead keys with examples. Although the results are locale dependent, the concepts should remain the same regardless of locale. The examples contain UTF-8, so to view them tell the browser to view the page as UTF-8, or have a UTF-8 locale already configured.

Pressing AltGr together with a dead key (for example, the key carrying the diaeresis accent), releasing both, and then pressing a letter produces an accented character: with a it yields 'ä', and with e it yields 'ë'. The acute-accent dead key works the same way, producing 'á' and 'é'.

Using the ring dead key and then pressing a produces a Scandinavian 'å'. Pressing the same dead key and then the space bar produces the accent on its own, '˚'. Although it looks like one, this ring above (U+02DA) is not the same as a degree symbol (U+00B0).

AltGr can also be used with alphabetical keys alone. For example, AltGr and m can produce a Greek lower-case letter mu 'µ', and AltGr and s a sharp s (eszett) 'ß'. As many European users would expect (because it is marked on their keyboard), AltGr and e (or AltGr and 5, depending on the keyboard layout) produces a Euro sign, '€'.

External resources

 * The Wikipedia entry for Unicode
 * The Wikipedia entry for UTF-8
 * Unicode.org
 * UTF-8.com
 * RFC 3629
 * RFC 2277
 * Characters vs. Bytes
 * Locales and Internationalization

System configuration files (in /etc)
Most system configuration files (such as /etc/fstab) do not support UTF-8. It is recommended to stick to the ASCII character set for these files.