![]()
"FACE WITH TEARS OF JOY" (U+1F602)
I've been fighting with character sets on several occasions throughout the years. Just recently, I had a bug in TransformTool related to character encoding and how errors are handled in the .NET framework. While writing about the bug I needed a reference to a basic introduction to character encoding — only to discover that most are very technically focused and dive right into the characters' hex codes. Here, I'll try to fill that gap and explain only the basics. I'll include pointers to more detailed resources in case you decide to dig deeper into the dark world of character encodings.
How encodings work
The Unicode Consortium has a great explanation of how it really works:
Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one.

The number assigned to a character is called a codepoint. An encoding defines how many codepoints there are, and which abstract letters they represent, e.g. "Latin Capital Letter A". Furthermore, an encoding defines how a codepoint can be represented as one or more bytes. We'll use one of the most prominent encodings as our first example: ASCII.
![]()
Capital A in the ASCII encoding
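Here's that mapping as a minimal sketch in Python 3 (any language with an encoding API would do just as well):

```python
# The capital letter A: its codepoint, its single ASCII byte, and that byte in binary.
print(ord('A'))                 # 65
print('A'.encode('ascii'))      # b'A' (one byte)
print(format(ord('A'), '08b'))  # 01000001
```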
There, that was the big picture in a few paragraphs! That's how it works! Now we'll go into more detail on how characters are encoded, because that's usually where things go wrong. We'll leave fonts aside; if you want to dig further into that part, see Understanding characters, keystrokes, codepoints and glyphs.
We've seen that ASCII assigns the number 65 to a capital A. But what about the other characters? Here's the uppercase characters in ASCII along with their (decimal) codepoints:
| A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 |
And here's the lowercase characters and their codepoints:
| a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q | r | s | t | u | v | w | x | y | z |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 97 | 98 | 99 | 100 | 101 | 102 | 103 | 104 | 105 | 106 | 107 | 108 | 109 | 110 | 111 | 112 | 113 | 114 | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 122 |
There you go, that's the English alphabet in both lower- and uppercase. You can have a look at the complete table of printable ASCII characters at Wikipedia, where you'll also find numbers, punctuation marks etc. Character encodings are often referred to as code pages or character sets as well.
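If you'd rather let the computer produce those tables, a quick Python 3 sketch:

```python
import string

# Print each letter of the English alphabet with its ASCII codepoint.
for letter in string.ascii_uppercase + string.ascii_lowercase:
    print(letter, ord(letter))
```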
There are (too) many encodings in common use around the world, each defining its own set of characters with corresponding numbers. Wikipedia lists over 50 common character encodings. The sheer number of encodings is one of the main reasons that things get messy.
How encodings differ
The Unicode Consortium summarizes the problems that arise due to all these different character encodings:
These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use different numbers for the same character. Any given computer (especially servers) needs to support many different encodings; yet whenever data is passed between different encodings or platforms, that data always runs the risk of corruption.
To show some of the conflicts, we'll discuss two more common encodings in addition to ASCII: Latin-1 (ISO-8859-1) and Latin-2 (ISO-8859-2). Here's how they line up with ASCII:
- ASCII is a seven-bit encoding. Seven bits let you count from 0 to 127. Consequently, you can represent 128 different characters.
- Latin-1 is an eight-bit encoding. Eight bits (a byte) let you count from 0 to 255. You could therefore theoretically represent 256 different characters, but 32 are unused, leaving 224 assigned. Latin-1 was defined to handle Western European languages.
- Latin-2 is also an eight-bit encoding, and also has 224 assigned characters. Latin-2 covers Eastern European languages.
- Although Latin-1 and Latin-2 contain more characters than ASCII, they are identical to ASCII for the first 128 codepoints, and are consequently backwards compatible for those characters (as demonstrated in the sketch after this list).
- Check out the links to have a look at what the tables of characters look like!
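That backwards compatibility is easy to check; here's a minimal sketch in Python 3 ('latin-1' and 'iso8859-2' are Python's names for the two character sets):

```python
# Every byte value from 0 to 127 decodes to the same character
# in ASCII, Latin-1 and Latin-2.
data = bytes(range(128))
assert data.decode('ascii') == data.decode('latin-1') == data.decode('iso8859-2')
print("Identical for the first 128 codepoints")
```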
The first obvious problem here is that the two Latin encodings define more characters than ASCII does, so they have characters that do not exist in the ASCII encoding. It's for example impossible for me to represent my name (André) using the ASCII encoding, but it's no problem with Latin-1 or Latin-2. The offending character is é, if you haven't already guessed it.
Moving on, the Latin-1 and Latin-2 encodings illustrate the problem of using the same number for two different characters. Here's a comparison for codepoints 192 through 199 for Latin-1 and Latin-2:
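Both character sets ship with practically every runtime, so you can generate the comparison yourself; a minimal sketch in Python 3:

```python
# Decode each byte value from 192 through 199 with Latin-1 and with Latin-2.
for codepoint in range(192, 200):
    byte = bytes([codepoint])
    print(codepoint, byte.decode('latin-1'), byte.decode('iso8859-2'))

# For example: 192 is À in Latin-1 but Ŕ in Latin-2, and 197 is Å versus Ĺ.
# Same number, two different characters.
```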
To summarize, if you write the word FÅRIKÅL to a text file using the Latin-1 encoding, here's how things can go wrong depending on your choice of encoding when reading the file:
- If you read the file using the ASCII encoding, the byte "11000101" cannot be decoded to a valid codepoint. You might get an error, or a replacement character such as � or □. Or even worse, you might get a ?. More on that in an upcoming blog post on how .NET handles errors.
- If you read the file using the Latin-2 encoding, "11000101" will be decoded to a valid codepoint, which is assigned to the letter Ĺ. FÅRIKÅL then becomes FĹRIKĹL.
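Both failure modes are easy to reproduce; here's a minimal sketch in Python 3:

```python
# Write FÅRIKÅL with Latin-1, then read it back with the wrong encodings.
data = 'FÅRIKÅL'.encode('latin-1')              # b'F\xc5RIK\xc5L'

print(data.decode('iso8859-2'))                 # FĹRIKĹL: wrong letters, but no error
print(data.decode('ascii', errors='replace'))   # F�RIK�L: replacement characters

try:
    data.decode('ascii')                        # the strict default raises instead
except UnicodeDecodeError as error:
    print(error)
```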
To further complicate things, there are encodings that use multiple bytes to store a character. I bet you can imagine that this can open yet another world of problems, since you need to keep track of several bytes. You're right, but it's also the only way to replace all the one-byte encodings, which limit a character set to 256 characters.
There must be some kind of way out of here
Unicode comes to the rescue. Quoting the consortium again:
Unicode provides a consistent way of encoding multilingual plain text and brings order to a chaotic state of affairs that has made it difficult to exchange text files internationally.

The Unicode standard defines more than 100 000 characters and their codepoints at the time of writing, but can potentially define more than one million. That means there's no need for several character sets anymore; Unicode can include all characters. The big players in the IT industry (Microsoft, Apple, Google and more) work together to develop the standard further, ensuring support across platforms.
There are three Unicode encoding forms: UTF-8, UTF-16 and UTF-32. All of them can represent all Unicode characters. The most common encoding on the web is UTF-8, which you've probably come across; the text you're reading now is, for example, served as UTF-8. UTF-16 is also in widespread use, for example in the .NET Framework and the Java runtime environment to represent strings in memory.

UTF-8 uses one, two, three or four bytes to encode a character. It's backwards compatible with ASCII, which means that all the one-byte characters are identical to ASCII. Other characters are stored using two, three or four bytes.
UTF-16 uses two or four bytes to encode a character, while UTF-32 uses four bytes per character. The figure shows how a capital A would be encoded.
![]()
Latin Capital Letter A encoded forms
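Here's the same comparison as a minimal Python 3 sketch, with a non-ASCII character thrown in (the big-endian variants are used only to keep the byte order mark out of the output):

```python
# How many bytes does each character need in the three Unicode encoding forms?
for text in ('A', 'é'):
    for encoding in ('utf-8', 'utf-16-be', 'utf-32-be'):
        encoded = text.encode(encoding)
        print(text, encoding, len(encoded), encoded)
```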
Since you've tagged along this far in the post, here's a fun fact. Unicode defines not just characters but also lots of symbols. The crying smiley depicted at the beginning of this post is actually a Unicode character. It's called "Face with tears of joy." You'll find it here, along with many others.
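You can poke at it yourself with a tiny Python 3 sketch:

```python
# "Face with tears of joy" is codepoint U+1F602 and takes four bytes in UTF-8.
smiley = '\N{FACE WITH TEARS OF JOY}'
print(hex(ord(smiley)))         # 0x1f602
print(smiley.encode('utf-8'))   # b'\xf0\x9f\x98\x82'
```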
I hope this post helped you grasp the overarching logic behind characters and their encoding in computers. If you really want to inflict more pain on your brain, I suggest you spend some time reading the references. You can also play with text encoding in TransformTool; it supports several encodings and can show you the bytes as decimal/hex/binary.
I've highlighted some common problems related to character encoding. When you're building new systems the advice is almost always: "Stick to UTF-8." It's also safe when communicating with legacy systems that use ASCII.
Note, however, that UTF-8 is NOT compatible with systems that use anything other than UTF-8 or ASCII, such as the Latin-(1, 2, ..., X) encodings. Then you either have to change the system to use UTF-8, or use the same encoding as the system when reading the data on your side. Knowing just that might help you figure things out a lot faster when things start to break.
Good luck. ☺
PS! If you're a .NET head, stay tuned for an upcoming post on some .NET encoding subtleties. You don't want to miss those.