Revision 10 as of 2007-07-13 03:35:32

Clear message

A UnicodeDecodeError normally happens when decoding a str string with a particular codec. Since codecs map only a limited set of byte sequences to unicode characters, an illegal sequence of str bytes will cause the codec-specific decode() to fail.

Paradoxically, a UnicodeDecodeError may also happen when _encoding_. The cause appears to be the codec-specific encode() functions, which normally expect a parameter of type unicode. On receiving a str parameter instead, encode() first "up-converts" it to unicode before converting it to the target encoding. This up-conversion makes no assumption about the str parameter's actual encoding and simply uses the default ascii decoder. Hence a decoding failure inside an encoder.
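The implicit up-conversion described above can be made explicit. The following is a hedged sketch in Python 3 terms (the function name py2_style_encode is hypothetical, invented here for illustration) of what Python 2's encode() appears to do when handed a byte string:

```python
def py2_style_encode(raw: bytes, encoding: str) -> bytes:
    # Implicit "up-conversion": the bytes are first decoded with the
    # default 'ascii' codec, regardless of their actual encoding.
    text = raw.decode("ascii")
    # Only then does the target codec's encode() run.
    return text.encode(encoding)

print(py2_style_encode(b"a", "utf-8"))       # ASCII bytes pass through
try:
    py2_style_encode(b"\xd0\x91", "utf-8")   # UTF-8 bytes for U+0411
except UnicodeDecodeError as e:
    print(e)  # the decode step fails before any encoding happens
```

The error is raised at the decode step, which is why an apparent encoding operation reports a UnicodeDecodeError.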

Unlike the similar case with UnicodeEncodeError, such a failure cannot always be avoided, because the str result of encode() must be a legal byte sequence in the target encoding. However, a more flexible treatment of an unexpected str argument might first validate it by attempting to decode it, then return it unmodified if the validation succeeds. As of Python 2.5, this is not implemented.

>>> u"a".encode("utf-8")
'a'
>>> u"\u0411".encode("utf-8")
'\xd0\x91'
>>> "a".encode("utf-8")
'a'
>>> "\xd0\x91".encode("utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
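The validate-then-return behavior suggested above can be sketched as follows, again in Python 3 terms. The helper tolerant_encode is hypothetical and not part of any Python release:

```python
def tolerant_encode(s, encoding):
    # Hypothetical helper illustrating the validation idea:
    # an already-encoded byte string is checked rather than re-encoded.
    if isinstance(s, bytes):
        # Validation: a decode failure means the bytes are not a
        # legal sequence in the target encoding.
        s.decode(encoding)
        return s  # valid as-is; return unmodified
    return s.encode(encoding)

print(tolerant_encode("\u0411", "utf-8"))     # normal encode path
print(tolerant_encode(b"\xd0\x91", "utf-8"))  # valid bytes pass through
```

Bytes that are not legal in the target encoding still fail, as they must, but with an error that correctly points at the invalid input rather than at an implicit ascii decode.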


CategoryUnicode
