Is there any way to get a listing of the encodings registered? I don't see anything in [http://www.python.org/doc/current/lib/module-codecs.html codecs].
Sure you do: it's one of the subsections linked from that page:
[http://docs.python.org/lib/standard-encodings.html Standard Encodings].
-- FredDrake [[DateTime(2005-09-10T01:13:02Z)]]
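Besides the documentation page, there is a programmatic option: the standard library's codec alias table can be inspected. This is just a sketch (shown in Python 3 print-function syntax); note it only covers the built-in alias table, not codecs registered at runtime via `codecs.register`.

```python
# Enumerate the canonical codec names known to the stdlib alias table.
# Caveat: this lists built-in aliases only, not runtime-registered codecs.
from encodings.aliases import aliases

canonical = sorted(set(aliases.values()))
print(len(canonical))
print(canonical[:5])
```

Each key in `aliases` is an alternate spelling (like `u8`) mapping to a canonical codec name (like `utf_8`).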
Resources to help you learn how to handle Unicode in your Python programs:
General Unicode Resources
[http://www.joelonsoftware.com/articles/Unicode.html The Absolute Minimum Every Software Developer Must Know about Unicode] - short intro to Unicode
Search the Python reference for:
- unichr builtin
- string handling - example: u'Hello\u0020World !'
- the [http://www.python.org/doc/current/lib/module-unicodedata.html unicodedata] module
- regular expressions - see the (?u) flag, and the re.UNICODE constant
- exceptions - example: UnicodeEncodeError
[http://dalchemy.com/opensource/unicodedoc/ End to End Unicode Web Applications in Python]
[http://diveintopython.org/xml_processing/unicode.html Dive Into Python: Unicode]
[http://www.egenix.com/files/python/Unicode-EPC2002-Talk.pdf Python and Unicode] (PDF talk); he also has a [http://www.reportlab.com/i18n/python_unicode_tutorial.html brief tutorial].
[http://www.jorendorff.com/articles/unicode/index.html Unicode for Programmers] - Java and Python info
[http://effbot.org/zone/unicode-objects.htm Python Unicode Objects] - brief notes
[http://www.intertwingly.net/blog/1581.html HTMLifying and UnHTMLifying] - see atomef.py
[http://docs.python.org/lib/standard-encodings.html The standard encodings list] is for the current version of Python. [http://en.wikipedia.org/wiki/GB2312 GB2312] (PRC Chinese), for example, is in Python 2.4, but [http://www.xahlee.org/perl-python/charset_encoding.html not in Python 2.2, nor Python 2.3.]
Conversation between Lion and Bayle
That looks like 32 bits per character, so I'd say it's some form of little-endian UTF-32.
And for some strange reason, among the UTF codecs, Python only comes with "utf-8" and "utf-16" as valid "decode" values; there's no "utf-32".
>>> bytes = "H\x00i\x00\n\x00"
>>> unistring = bytes.decode('utf-16')
>>> unistring
u'Hi\n'
You can do that with either "utf-8" or "utf-16". But for some reason, I can't say "utf-32" or "utf-32LE" (LE = little endian). I have no idea why. I also don't know how it is that my Python programs are producing UTF-32 for you..!
I've been wanting to diagram how Python unicode works, like how I diagrammed its time use and regex use.
Basically, "encode" is meant to be called from unicode data, and "decode" is meant to be called from bytes data. Continuing from above:
>>> bytes
'H\x00i\x00\n\x00'
>>> unistring = bytes.decode('utf-16')
>>> unistring
u'Hi\n'
>>> unistring.encode('utf-8')
'Hi\n'
>>> unistring.encode('utf-16')
'\xff\xfeH\x00i\x00\n\x00'
>>> unistring.encode('utf-32')
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
LookupError: unknown encoding: utf-32
I'm guessing that the "\xff\xfe" at the beginning of the utf-16 encoding is a byte-order mark (BOM), saying "this is little endian."
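That guess can be checked: the generic "utf-16" codec prepends a byte-order mark, while the endian-specific codecs do not. A small sketch, in Python 3 syntax (where byte strings carry a b prefix):

```python
# The endian-specific codec writes raw code units, no BOM:
s = 'Hi\n'
print(s.encode('utf-16-le'))   # the bare little-endian bytes

# The generic codec prepends a BOM first; on little-endian builds
# that is the two bytes \xff\xfe.
print(s.encode('utf-16')[:2])
```

So "utf-16-le" and "utf-16-be" are handy when the byte order is already known and no marker is wanted.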
I learned about unicode stuff about 2-3 weeks ago. I kept notes about what I thought were the largest mental misconceptions, and what were the most revealing ways of thinking about it. Sadly, I've forgotten about all that. (Should'a documented it in the wiki!)
In Python, you can think of a unicode string and a byte string as carrying the same text; the difference is in how Python treats and presents that data. I found it super-helpful to not think about what the console said, or work with the console, because the console lies. That is, the characters go through conversions even being printed to the screen: your console has an understanding of encoding, and your fonts have an understanding of encoding, and I had a lot of difficulty separating it all out.
I had a lot easier time thinking about the concepts, instead of the concrete representations. (Which is opposite my usual course of thinking.)
"Decoded," to Python's mind, is data being treated as unicode data. "Encoded," to Python's mind, is data being treated as bytes. Conceptually, the text isn't changing form at all; it's just the treatment of the same text that is being changed. (The byte-level representation does depend on which codec you pick, but the characters themselves stay the same.)
So, you only ever run "decode" on a byte string. (Another thing: don't think of native Python strings as "strings." Think of them as "bytes." And indeed, the upcoming Python 3.0 calls them just that: byte strings are called "bytes" in Python 3, and unicode strings are just called "strings.")
So you can decode bytes, and encode unicode strings.
Don't think about decoding unicode strings, and don't think about encoding bytes. The bytes are already coded. Only unicode strings live in pure, abstract, heavenly, platonic form. There is no code there, only perfect clarity. (At least, that's how Python makes it seem for you.)
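The round trip above can be sketched compactly (again in Python 3 syntax, so the byte string gets a b prefix): decode bytes to get text, encode text to get bytes back.

```python
data = b'caf\xc3\xa9'        # UTF-8 bytes for "café"
text = data.decode('utf-8')  # bytes -> text (the "heavenly" form)
back = text.encode('utf-8')  # text -> bytes
assert back == data          # a lossless round trip
print(text)
```

Trying it the other way - decoding `text` or encoding `data` - is exactly the confusion the paragraph above warns against.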
Again, sadly, I have no idea how to get from UTF-32 to Python unicode. I don't see the path. I saw something somewhere about being able to compile something into your Python.
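One path that doesn't need a utf-32 codec at all is to unpack the 4-byte groups by hand. This is a sketch, in Python 3 syntax (on Python 2 it would use `unichr` and a plain string literal), and `decode_utf32le` is a hypothetical helper name, not a stdlib function:

```python
import struct

def decode_utf32le(data):
    # Each 4-byte little-endian group is one code point; '<' means
    # little-endian, 'I' means an unsigned 32-bit integer.
    count = len(data) // 4
    codepoints = struct.unpack('<%dI' % count, data)
    return ''.join(chr(cp) for cp in codepoints)

print(repr(decode_utf32le(b'H\x00\x00\x00i\x00\x00\x00\n\x00\x00\x00')))
```

No error checking is done here (surrogates and out-of-range values pass straight through), so it's a fallback, not a real codec.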
That said, if I'm actually serving UTF-32 to you somehow... then there's probably a way I just don't know.
On side-notes, I think the diagrams I've posted for WorkingWithTime and RegularExpressions were eaten up in the transition to MoinMoin 1.3; I'll repost them soon, after I get [http://wiki.taoriver.net/ my own wiki] upgraded to 1.3.