If you try to print a unicode string to the console and get a message like this one:

>>> print u"\u03A9"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "C:\Python24\lib\encodings\cp866.py", line 18, in encode
    return codecs.charmap_encode(input,errors,encoding_map)
UnicodeEncodeError: 'charmap' codec can't encode character u'\u1234' in position
 0: character maps to <undefined>

That means you're using a legacy, limited, or misconfigured console. If you're just trying to play with unicode at the interactive prompt, move to a modern unicode-aware console. Most modern Python distributions come with IDLE, where you'll be able to print all unicode characters.

Standard Microsoft Windows console

By default, the console in Microsoft Windows can display only 256 characters. Python automatically detects which characters this console supports. If you try to print a character the console cannot display, you will get a UnicodeEncodeError.
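For example, on a Russian Windows console that uses code page 866 (a sketch; the encoding Python detects depends on your console's code page), you can check which encoding Python picked up:

  C:\> python -c "import sys; print sys.stdout.encoding"
  cp866

Printing a character that this code page does not cover raises the UnicodeEncodeError shown above.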

Various UNIX consoles

There is no standard way to query a UNIX console to find out which characters it supports, but fortunately there is a way to find out which characters are considered printable. The locale category LC_CTYPE defines which characters are printable. To find out its value, type at the Python prompt:

  >>> import locale
  >>> locale.getdefaultlocale()[1]
  'utf-8'

If you got any other value, you won't be able to print all unicode characters: as soon as you try to print an unprintable character you will get a UnicodeEncodeError. To fix this, set the environment variable LANG to one of the unicode locales supported by your system. To get the full list of locales, use the command "locale -a" and look for locales that end with ".utf-8". If you have set the LANG variable but now see garbage on your screen instead of a UnicodeEncodeError, you need to configure your terminal to use a unicode font. Consult your terminal's manual for how to do that.
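A rough sketch of the fix (the locale names listed depend on which locales are installed on your system; en_CA.utf8 is just the one used in the sessions below):

  $ locale -a | grep utf
  en_CA.utf8
  ...
  $ export LANG=en_CA.utf8
  $ echo $LANG
  en_CA.utf8

After restarting Python in this environment, locale.getdefaultlocale()[1] should report a UTF-8 codeset.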

print, write, read and Unicode in pre-3.0 Python

Because file operations are 8-bit clean, reading data from the original stdin returns str objects containing data in the input character set. Writing these str objects to stdout without any codecs produces output identical to the input.

  $ echo $LANG
  en_CA.utf8

  $ python -c 'import sys; line = sys.stdin.readline(); print str(type(line)), len(line); print line;'
  [TYPING: абв ENTER]
  <type 'str'> 7
  абв

  $ echo "абв" | python -c 'import sys; line = sys.stdin.readline(); print str(type(line)), len(line); print line;'
  <type 'str'> 7
  абв
  $ echo "абв" | python -c 'import sys; line = sys.stdin.readline(); print str(type(line)), len(line); print line;' | cat
  <type 'str'> 7
  абв

Since programmers need to convert 8-bit input streams to Unicode and write Unicode to 8-bit output streams, the designers of the print statement built the required transformation into the argument type coercion routine.

  • When Python detects that the output is a terminal, it sets the .encoding attributes of stdout and stderr, and the print statement's handler automatically converts unicode strings into str strings in the course of argument coercion.

    $ python -c 'import sys; print str(sys.stdout.encoding); print u"\u0411\n"'
    UTF-8
    Б
  • When Python cannot determine the desired character set of the output, it sets .encoding to None, and print's coercion falls back to the "ascii" codec.

    $ python -c 'import sys; print str(sys.stdout.encoding); print u"\u0411\n"' 2>&1 | cat
    None
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    UnicodeEncodeError: 'ascii' codec can't encode character u'\u0411' in position 0: ordinal not in range(128)

I (IL) believe reading from stdin does not involve coercion at all, because the existing ways to read from stdin, such as "for line in sys.stdin", do not convey the expected type of the returned value to the stdin handler. A function that would complement the print statement might look like this:

  uline = typed_read(unicode)   # Generally, a list of input data types along with an optional parsing format string.
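A minimal sketch of such a hypothetical typed_read helper (this function does not exist in any Python release; it simply decodes one line of stdin with the locale's preferred encoding when unicode is requested):

  import sys, locale

  def typed_read(result_type, stream=None):
      # Hypothetical complement to the print statement: read one line and
      # coerce it to the requested type, assuming the input stream uses the
      # locale's preferred encoding.
      stream = stream or sys.stdin
      line = stream.readline()
      if result_type is unicode:
          return line.decode(locale.getpreferredencoding())
      return result_type(line)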

The write and read methods do not invoke codecs internally. Python2.5's file open built-in sets the .encoding attribute of the resulting instance to None. To complement the print statement's automatic encoding with automatic decoding of input data into unicode strings in sys.stdin.read/readline, one can wrap the file in a StreamReader instance:

  $ python -c 'import sys, codecs, locale; print str(sys.stdin.encoding); sys.stdin = codecs.getreader(locale.getpreferredencoding())(sys.stdin); line = sys.stdin.readline(); print type(line), len(line)' 2>&1
  UTF-8
  [TYPING: абв ENTER]
  <type 'unicode'> 4
  $ echo "абв" | python -c 'import sys, codecs, locale; print str(sys.stdin.encoding); sys.stdin = codecs.getreader(locale.getpreferredencoding())(sys.stdin); line = sys.stdin.readline(); print type(line), len(line)'
  None
  <type 'unicode'> 4

Wrapping sys.stdout into an instance of StreamWriter will allow writing unicode data with sys.stdout.write() and print.

  $ python -c 'import sys, codecs, locale; print str(sys.stdout.encoding); sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout); line = u"\u0411\n"; print type(line), len(line); sys.stdout.write(line); print line'
  UTF-8
  <type 'unicode'> 2
  Б
  Б

  $ python -c 'import sys, codecs, locale; print str(sys.stdout.encoding); sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout); line = u"\u0411\n"; print type(line), len(line); sys.stdout.write(line); print line' | cat
  None
  <type 'unicode'> 2
  Б
  Б

The write call will execute StreamWriter.write, which in turn invokes the codec-specific encode and passes the result to the underlying file. It appears that the print statement will not fail due to argument type coercion when sys.stdout is wrapped.
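Putting the two wrappers together, here is a minimal Python 2 sketch (assuming the locale's preferred encoding matches the actual encoding of the attached streams) that makes both reading and printing unicode work whether the streams are terminals or pipes:

  import sys, codecs, locale

  # Wrap both standard streams once at startup: readline() then returns
  # unicode objects, and print/write encode unicode on the way out.
  encoding = locale.getpreferredencoding()
  sys.stdin = codecs.getreader(encoding)(sys.stdin)
  sys.stdout = codecs.getwriter(encoding)(sys.stdout)

  line = sys.stdin.readline()      # already a unicode object
  print type(line), len(line)
  sys.stdout.write(line)           # StreamWriter encodes before writing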

See also: ["Unicode"]


CategoryUnicode
