
Python, Unicode and UnicodeDecodeError

by Dan Fairs last modified Sep 30, 2010 04:21 PM
In the years I've been developing in Python, Unicode seems to be the topic which causes the greatest amount of confusion amongst developers. Hopefully much of this confusion should go away in Python 3, for reasons I'll come to at the end; but until then, the UnicodeDecodeError is the bane of many developers' lives.

Unicode and Encodings

OK, let's take a step away from text for a moment. I want you to think of a number between one and ten. Got one? Great - now, grab a pen and paper, and write it down.

What number did you think of? Well, I thought of the number six. And when I wrote it down, it looks like this:

6

Of course, if I were an ancient Roman (or possibly a clockmaker), I could have written this:

VI


They all mean the same thing - the number six. But we've written them in different ways. In other words, we've 'encoded' our idea of the number six in our head in three different ways - three different encodings.

The separation of the idea of 'the number six' from its actual representation is basically all Unicode is. The Unicode Character Set (UCS) defines a set of things (loosely, a set of letters) that we can represent. How we represent each of those letters is called an encoding. There's only one Unicode, but there are many encodings. In Unicode parlance, each of those 'things' (letters) is known as a 'code point'. Unicode separates the characters' meaning from their representation.

For historical reasons, the most common encoding (in Western Europe and the US, anyway) is ASCII. This is also Python's default encoding.

Let's think about ASCII for a moment. It's an encoding that uses 7 bits, which limits it to 128 possible values. That's enough to represent the characters that US English uses (letters in both cases, the numbers, punctuation) - though notably no accented characters, so even Western European languages aren't fully served. Unicode strings that only include code points within those 128 ASCII characters can be encoded as ASCII. Conversely, any ASCII-encoded string can be decoded to Unicode.

It's worth reiterating that terminology, as you come across it a lot: the transformation from Unicode to an encoding like ASCII is called 'encoding'. The transformation from ASCII back to Unicode is called 'decoding'.

    Unicode  ---- encode ----> ASCII
    ASCII    ---- decode ----> Unicode

Non-ASCII encodings

Most people don't live in the US or Western Europe, and therefore have a requirement to store more characters than can be represented with ASCII. What those folk need to represent *is* part of the Unicode set (Unicode is massive!) - so a different encoding is required. Common encodings have familiar names: UTF-8 and UTF-16. UTF-8, for example, uses a single byte for encoding all the ASCII values, then variable numbers of bytes to encode further characters. (The ins and outs of these encodings are beyond the scope of this article - check out their respective Wikipedia entries for the gory details.)

The fact that UTF-8 encodes the first 128 characters exactly as ASCII does is important, since it means that the encoding is backwards-compatible with ASCII. However, it can mask problems in software. We'll come to this shortly.
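
Jumping ahead slightly to the encode() method we'll meet below, you can see this backwards-compatibility directly at the Python prompt:

>>> u'Hi!'.encode('utf-8') == u'Hi!'.encode('ascii')
True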

Some terminology

Unicode-related terminology can get confusing. Here's a quick glossary:

  • To encode
    • Encoding (the verb) means to take a Unicode string and produce a byte string
  • To decode
    • Decoding (the verb) means to take a byte string and produce a Unicode string
  • An encoding
    • An encoding (the noun) is a mapping that describes how to represent a Unicode character as a byte or series of bytes. Encodings are named (like 'ascii', or 'utf-8') and are used both when encoding (verb!) Unicode strings and decoding byte strings.


In other words, when you encode or decode, you need to specify the encoding that you're using. This will become clearer shortly.

Python, bytes and strings

You've probably noticed that there seem to be a couple of ways of writing down strings in Python. One looks like this:

  'this is a string'

Another looks like this:

  u'this is a string'

There's a good chance that you also know that the second one of those is a Unicode string. But what's the first one? And what does it actually mean to 'be a Unicode string'?

The first one is simply a sequence of bytes. This byte sequence is, by convention, an ASCII representation (ie. encoding) of a string. The whole Python standard library, and most third-party modules, happily deal with strings natively in this encoding. As long as you live in the US or Western Europe, then that's probably fine for you.

The second one is a representation of a Unicode string, and can therefore contain any of the Unicode code points. It's possible that whatever you're using to edit the Python code (or just view it) might not be able to display the entire Unicode character set - a terminal, for instance, usually assumes the data it's trying to display is in some particular encoding. There's a special notation, therefore, for representing arbitrary Unicode code points within a Python Unicode string: the \u and \U escapes. \u is followed by four hex digits and \U by eight; there's some subtlety here (see the Python string reference for further information), but you can simply think of the number after the \u (or \U) as the Unicode code point of the character. So, for example, the following Python string:

  u'\u0062'

represents LATIN SMALL LETTER B, or more simply:

  u'b'

To summarise then: the Unicode character set encompasses all characters that we may wish to represent. Individual encodings (ASCII, UTF-8, UTF-16, etc.) are representations of all or some of that full Unicode character set.
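
To make that concrete, here's a single code point - 0x00A3, the pound sign, which we'll meet again shortly - represented in three different encodings. (This peeks ahead at the encode() method covered in the next section; the leading '\xff\xfe' in the UTF-16 version is a byte order mark.)

>>> u'\u00a3'.encode('utf-8')
'\xc2\xa3'
>>> u'\u00a3'.encode('iso-8859-1')
'\xa3'
>>> u'\u00a3'.encode('utf-16')
'\xff\xfe\xa3\x00'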

Encoding and Decoding

Byte strings and Unicode strings provide methods to perform the encoding and decoding for you. Remembering that you *encode* from Unicode to an encoding, you might try the following:

>>> u'\u0064'.encode('ascii')
'd'

As you'd expect, the Unicode string has an encode() method. You tell Python which encoding you want ('ascii' in this case; there are lots more supported by Python - check the docs) using the first parameter to the encode() call.

Conversely, byte strings have a decode() method:

>>> 'b'.decode('ascii')
u'b'


Here, we're telling Python to take the byte string 'b', decode it using the ASCII codec and return a Unicode string.

Note that in both these previous cases, we didn't really need to specify 'ascii' manually, since Python uses that as a default.
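
You can check this at the interactive prompt: with no arguments, both methods fall back to the default encoding.

>>> u'\u0064'.encode()
'd'
>>> 'b'.decode()
u'b'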

UnicodeEncodeError

So, we've established that there are encodings which can represent Unicode, or more usually, a certain subset of the Unicode character set. We've already talked about how ASCII can only represent 128 characters. So, what happens if you have a Unicode string that contains code points that are outside that 128 characters? Let's try something all too familiar to UK users: the £ sign. The Unicode code point for this character is 0x00A3:

>>> u'\u00A3'.encode('ascii')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa3'
in position 0: ordinal not in range(128)

Boom. This is Python telling you that it encountered a character in the Unicode string which it can't represent in the requested encoding. There's a fair amount of information in the error: it's giving you the character that it's having problems with, what position it was at in the string, and (in the case of ASCII) it's telling you that the number it was expecting was in the range 0 - 127.

How do you fix a UnicodeEncodeError? Well, you've got a couple of options:

  • Pick an encoding that does have a representation for the problematic character
  • Use one of the error handling arguments to encode()

The first option is obviously ideal, although its practicality depends on what you're doing with the encoded data. If you're passing it to another system that (for example) requires its text files in ASCII format, you're stuck. In that case, you're left with the second option. You can pass 'ignore', 'replace', 'xmlcharrefreplace' or 'backslashreplace' to the encode call:

>>> u'\u0083'.encode('ascii', 'ignore')
''
>>> u'\u0083'.encode('ascii', 'replace')
'?'
>>> u'\u0083'.encode('ascii','xmlcharrefreplace')
'&#131;'
>>> u'\u0083'.encode('ascii','backslashreplace')
'\\x83'


If you choose one of those options, you'll have to let the eventual consumer of your encoded text know how to handle these.

UnicodeDecodeError

This one is probably more familiar to most developers. A UnicodeDecodeError occurs when you ask Python to decode a byte string using a specified encoding, but Python encounters a byte sequence in that string that isn't valid in the encoding you specified (phew!). This one probably benefits from an example.

Consider once more the ASCII encoding. Being a 7-bit representation, ASCII has only 128 characters, represented by the numbers 0 - 127. So let's imagine the ASCII-encoded string below:

'Hi!'


In terms of ASCII numbers, that is:

72 105 33

Or in actual Python:

>>> s = chr(72) + chr(105) + chr(33)
>>> s
'Hi!'
>>> s.decode('ascii')
u'Hi!'

That's all great. But what happens if we add a byte that's not in the ASCII range?

>>> s = s + chr(128)
>>> s.decode('ascii')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80
in position 3: ordinal not in range(128)

Boom. Python is saying that it encountered a byte 0x80 (that's 128 in decimal - the one we added) at position 3 (counting from zero) in the source byte string, and that this byte is not in the range 0 - 127.

This is normally caused by using the incorrect encoding to try to decode a byte string to Unicode. So, for example, if you were given a UTF-8 byte string, and tried to decode it as ASCII, then you might well see a UnicodeDecodeError.

But why only might?

Well, remember what I mentioned before - UTF-8 encodes the first 128 characters just as ASCII does. That means that you can take a UTF-8 byte sequence and decode it with the ASCII codec, and *as long as there are no characters outside the ASCII range* it will work. *Only* when that byte string starts featuring characters which don't exist within the ASCII encoding do errors start being thrown.
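
Here's the pound sign demonstrating both halves of that: its UTF-8 encoding is the two bytes 0xC2 0xA3, neither of which is valid ASCII.

>>> '\xc2\xa3'.decode('utf-8')
u'\xa3'
>>> '\xc2\xa3'.decode('ascii')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2
in position 0: ordinal not in range(128)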

ASCII - the default codec

Lots of Python programmers (well, US and Western European ones) can get quite a way into their Python careers converting byte strings to unicode like this:

>>> unicode('hi!')
u'hi!'

What's going on here? Well, Python uses the ascii codec by default. So, the above is equivalent to:

>>> 'hi!'.decode('ascii')
u'hi!'

And, because most US/European test data is composed of this byte string:

  'test'

... nobody notices the problem until the Japanese office complains the intranet is broken.

Unicode Coercion

If you try to interpolate a byte string with a Unicode string, or vice-versa, Python will try and convert the byte string to Unicode using the default (ie. ascii) codec. So:

>>> u'Hi' + ' there'
u'Hi there'
>>> u'Hi %s' % 'there'
u'Hi there'
>>> 'Hi %s' % u'there'
u'Hi there'

These all work fine, because all the strings that we're working with can be represented with ASCII. Look what happens when we try a character which can't be represented with ASCII though:

>>> u'Hi ' + chr(128)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80
in position 0: ordinal not in range(128)


Python sees we're trying to combine a Unicode string with a byte string, so tries to decode the byte string to Unicode using the ASCII codec. Since byte 128 (0x80, which happens to be the Euro sign in the Windows-1252 encoding) isn't valid ASCII, Python throws a UnicodeDecodeError.

In my experience, Unicode coercion is often where UnicodeDecodeErrors manifest themselves. The programmer has a Unicode string (probably a template) into which they're trying to put some data from a database. Relational databases tend to supply byte strings. Usually the encoding is a property on the database connection. Often, however, developers simply assume it's ASCII (or don't do anything special at all, which in Python amounts to the same thing). They try to stick the data from the database (perhaps in UTF-8 or ISO-8859-1) into a Unicode string using the %s format specifier, Python tries to decode the byte string using the ascii codec, and the whole thing falls flat on its face.
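
Here's a minimal reconstruction of that failure, and the fix - the byte string stands in for a UTF-8 value fetched from a database:

>>> name = '\xc3\x89mile'                  # UTF-8 bytes for 'Émile'
>>> u'Hello, %s!' % name
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3
in position 0: ordinal not in range(128)
>>> u'Hello, %s!' % name.decode('utf-8')   # decode explicitly first
u'Hello, \xc9mile!'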

Why do Python byte strings have an encode() method?

The sharp-eyed amongst you will have noticed that byte strings have an encode() method as well as a decode() method. What does this do? Quite simply, it does a decode-then-encode. The byte string is decoded to Unicode using the default (ascii) encoding, and is then encoded using the target encoding specified in the call to encode(). As you'd expect, fun and games ensue if the original byte string isn't actually encoded in ASCII at all.
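
You can see the implicit decode step in the resulting traceback: the encode() call below fails with a *decode* error, using the UTF-8 pound sign bytes from earlier.

>>> '\xc2\xa3'.encode('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2
in position 0: ordinal not in range(128)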

Avoiding Unicode Errors

So - this is really what you care about, right? How do you avoid these Unicode problems? Well, there are three simple rules:

  • Within your application, always use Unicode
  • When you're reading text into your application, decode it as soon as possible with the correct encoding
  • When you're outputting text from your application, encode at that point and do it explicitly

What does this mean in practice? Well, it means:

  • Whenever you're writing string literals in code, always use u''.
  • Whenever you read any text in, call .decode('encoding') on the byte string to obtain Unicode
  • Whenever you're writing text out, pick an appropriate encoding to handle whatever Unicode you're outputting - remember that ASCII can only represent a very limited subset
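
Put together, the pattern looks something like this - a minimal sketch, with made-up file names, assuming UTF-8 for both the input and output files:

raw = open('input.txt', 'rb').read()
text = raw.decode('utf-8')              # decode on the way in

report = u'File contents:\n%s' % text   # pure Unicode in the middle

out = open('output.txt', 'wb')
out.write(report.encode('utf-8'))       # encode explicitly on the way out
out.close()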


There are more places than you probably realise that text can get into your application. Here are some:

  • An incoming request from a web browser
  • Some text read in from a data file on disk
  • A template file read in from disk
  • A user's input from a form
  • Some data from a database
  • Data returned from a web services call


Frameworks help a lot here. Many frameworks handle the common encoding and decoding cases (usually the template encoding, and data encoding from a database) for you, and just pass you back Unicode strings. Watch out for web request variables - many of those may be plain byte strings. Also watch out for web service responses; you might need to inspect the response headers to find out the encoding. And even then be careful; I've come across situations with in-house apps where the declared encodings were simply wrong, leading to unexpected UnicodeDecodeErrors.

Figuring out which encoding to use

When you're faced with a byte string, how do you know which decoding to use? The answer is, unfortunately, simple: you don't. Some environments (such as the Web) may help you - HTTP requests and responses contain headers which specify the encoding used within them. You can inspect those, and if they're wrong - well, at least you've got someone else to blame.

If you're lucky, you know the byte string is encoding some XML. XML gets a lot of flak, but one of the things it does right is to explicitly specify a default encoding that's actually useful (UTF-8) and provide a mechanism to declare a different encoding. So with XML, you can scan the first few bytes of the file, decode using UTF-8, and look for the magic encoding declaration. If there isn't one, then you can safely decode the rest of the file using UTF-8. If there is one, then switch encoding. Of course, your XML library of choice will do all this for you, and should give you Unicode text back once you've read your XML in.
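
In practice, that means letting the library do the work. Here's a sketch using the standard library's ElementTree, which hands back a Unicode string for the non-ASCII text:

>>> from xml.etree import ElementTree
>>> doc = '<?xml version="1.0" encoding="utf-8"?><price>\xc2\xa35</price>'
>>> ElementTree.fromstring(doc).text
u'\xa35'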

If you're unlucky, then you've got two more options. First off, you can talk to the people who run your source (or destination) system - find out what encodings they're using, or accept, and use those.

The final, last-resort option is simply to have a range of common encodings to try. A list I often use is ASCII, UTF-8, UTF-16 and finally ISO-8859-1 - ISO-8859-1 has to go last, because every possible byte sequence decodes 'successfully' under it, so it will always appear to work. Keep trying to decode with each of those in turn until one succeeds. Which encodings you pick of course depends on what kind of files you're expecting to see. You may also run into problems if you have a byte string in encoding X which also happens to be valid when decoded using encoding Y - in this case, you'll just get garbage data. This is the cause of many of the 'funny character' bugs you see in web applications: byte strings being decoded using an encoding which happened to work, but was in fact not the original encoding used to create the byte string.
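
Here's what that last-resort loop might look like, as a minimal sketch:

def guess_decode(byte_string):
    # Try each candidate encoding in turn. ISO-8859-1 must come last:
    # every possible byte sequence decodes 'successfully' under it,
    # so anything after it would never be tried.
    for encoding in ('ascii', 'utf-8', 'utf-16', 'iso-8859-1'):
        try:
            return byte_string.decode(encoding)
        except UnicodeDecodeError:
            continue
    raise ValueError('no candidate encoding worked')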

Python 3

I'm not going to talk too much about Python 3, since I haven't actually used it yet.

But - you rarely hear .NET or Java programmers complaining about Unicode errors. This is simply because both .NET and Java define a string to *be* Unicode in the first place. Anything involving the String class (in either runtime) is Unicode anyway; the developer sees encoding problems much less frequently as it's much less common for unexpected byte data to creep into applications. This doesn't mean the problems don't exist, of course: at the end of the day, text is still being encoded to and from byte strings; it's just done explicitly. (The fact that the default encoding on MS Windows, the OS on which many of these systems run, is UTF-16 helps here too - many more characters can be encoded in UTF-16 than ASCII).

My understanding is that Python 3 takes this general approach. Python 2's byte-oriented 'str' type is gone: in Python 3, 'str' is a Unicode string type (equivalent to Java and .NET's String class), and a separate 'bytes' type holds raw bytes. String operations are done on 'str' instances.

Coding in a Unicode world

Unicode is here to stay. The days when software only needed to work in American universities, where the only language was US English written in Latin script, are long gone. There's no magic to Unicode and the various encodings, and once you understand what's going on, there's no reason to have that sick feeling in the pit of your stomach the next time you see a UnicodeDecodeError. Just remember these rules:

  • Decode on the way in
  • Unicode everywhere in your application
  • Encode on the way out

 

Joel Bernstein says:
Sep 30, 2010 04:15 PM
I think you may be technically incorrect about UTF-16; it is variable-width in a sense, but unlike UTF-8 it is not byte compatible with ASCII 0-127.

If you open a UTF-8 file in a text editor that was expecting ASCII, you get readable text. Try opening a UTF-16 file with the same editor, and you get your text with a bunch of null characters interspaced.

UTF-16 also potentially suffers from endianess ambiguity, which is *awesome*.

For another great article about all of this stuff, check out http://www.joelonsoftware.com/articles/Unicode.html
Dan Fairs says:
Sep 30, 2010 04:15 PM
Thanks for the headsup - I'll amend the article to remove the reference to UTF-16 being byte-compatible with ASCII.
Tom Dunham says:
Sep 30, 2010 04:15 PM
Python 3's approach is introduced here:

http://diveintopython3.org/strings.html


Nice article BTW, and you're being linked already:

https://secure.mysociety.org/[…]/005872.html
Keyton Weissinger says:
Sep 30, 2010 04:15 PM
Thank you for taking time to write this. Here's some more (older but still good) info: http://eric.themoritzfamily.com/[…]/

There was also a good article from Pyzine (http://www.pyzine.com/[…]/article_Encodings.html) but it appears to be (temporarily?) dead?

Keyton
Anonymous says:
Sep 30, 2010 04:15 PM
I'm having a really hard time coming up with a western european language that is well served by ASCII. Not even English is properly served by ASCII, as prior to ASCII there were quite a few words that were supposed to have diacritics!

ISO-8859-1 exists for a reason doncha know.

Hm... Italian, maybe... I don't know Italian that well.
Francesco says:
Oct 01, 2010 02:55 PM
Italian is not particularly well served by ASCII, actually. The fact is that accented vowels (which represent practically all occurrences of diacritics) have semi-decent two-character ASCII approximations, like "e'" and "a`". That does not make them pretty, though :)
Dan Fairs says:
Sep 30, 2010 04:15 PM
Very true! Perhaps I should have omitted Western European, and just said 'American English' :)
Dave R. says:
Sep 30, 2010 04:15 PM
Brilliant article! Now I know why I've been getting so many bloody errors (str.encode == str.decode + str.encode). Thank you so much for that: I work with non-ascii text a lot.

Btw, a Roman would surely write VI for 6.
Dan Fairs says:
Oct 01, 2010 12:26 PM
D'oh! Yes indeed :) Oh well...
Bart says:
Oct 01, 2010 12:56 PM
Thanks for this great article. The part about explaining the terminology (decode/encode) was very enlightening (I think I always confused the two before) and the section "Avoiding Unicode Errors" was very useful too!
Daniel R says:
Oct 01, 2010 01:10 PM
Nice article, Dan.

I find the number-one cause of problems in all this is that people don't realise that Unicode is not an encoding.

So you'll see people saying things like "I've got my text encoded in Unicode, but when I try and print it..." Of course, what they usually have is text encoded in utf-8, so when they try to use 'encode' they are actually doing an implicit decode-then-encode, as you note above.
dvine says:
Oct 02, 2010 01:35 PM
Very good roundup of the subject and entertaining to read. Thanks a lot!
Steve Wedig says:
Oct 02, 2010 05:25 PM
cool blog design
Vinay Sajip says:
Oct 02, 2010 05:33 PM
Nice article, Dan.

One clarification: In Python 3, the two types are 'str' and 'bytes'. The 'str' type is Unicode, just as in Java / .NET. There's no 'unicode' type in Python 3: using it will give you a NameError.
Hraban says:
Oct 04, 2010 10:10 PM
BTW, there's also composed and decomposed Unicode. Too many applications can't correctly handle decomposed Unicode strings, but e.g. OS X's filenames are encoded in this way.
See unicodedata.normalize in the Python docs!
jholster says:
Jan 18, 2011 12:05 PM
Nice article, but I would also like to point out that Western Europeans do NOT use ASCII (= American Standard Code for Information Interchange).

ASCII was created in 1960 when computer resources were extremely limited, thus the 7-bit limit. It makes no sense for modern applications to have such an arbitrary restriction.

Always (I mean ALWAYS) use Unicode in your program if it handles real-world text input, whether it's English or not. There are no excuses. Not even English text can be represented by the ancient ASCII encoding, because it can contain foreign names, brands, loan-words, etc. with non-ASCII characters.
Ashish says:
Jan 18, 2011 02:00 PM
Great article! Having dealt with various Unicode-related issues over the years myself, I'll add my 2 cents on the subject. When dealing with data with unknown encoding, try the Python chardet module - http://chardet.feedparser.org/, or the iconv module - http://pypi.python.org/pypi/iconv, for encoding detection.
AswadKannar says:
May 24, 2011 12:56 PM
Hi - I am really happy to find this. cool job!
Andreas says:
Jun 26, 2011 07:18 PM
The most common encoding nowadays is UTF-8. If you encounter a UnicodeEncodeError, try: unicode_string.encode('UTF-8'). This works, because every Unicode character can be encoded in UTF-8.

Converting Unicode to UTF-8 might sound weird at first, because... what's the difference between the two? Well, UTF-8 is one way to store Unicode characters (in 8-bit units), while internally, Python uses a 16- or 32-bit integer for each character.

The Python Unicode HOWTO is a good read:
http://docs.python.org/howto/unicode.html
Dan O says:
Aug 31, 2011 09:48 PM
Still a great article. :) Thanks!