Things we think the Python community will like.
- Frame optimizations: once a function is called it retains the allocated frame for use in future calls, avoiding allocation and initialization overhead. Frame size has also been slightly reduced.
The PyStone score is over 10% higher on RichardJones' test machine than under Python 2.4 (from 20242 to 22935). PyBench reports an overall slowdown, attributed to a 150% slowdown in a piece of code that wasn't changed. Sigh.
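The overhead the frame reuse targets can be observed with a simple timeit micro-benchmark on a do-nothing function; the sketch below is illustrative only (function name and iteration counts are arbitrary), and absolute numbers will vary by machine and build.

    # Illustrative micro-benchmark: the per-call cost of a trivial function
    # is dominated by frame setup, which the frame-reuse change targets.
    import timeit

    def noop():
        pass

    timer = timeit.Timer("noop()", "from __main__ import noop")
    # Best of five runs of one million calls, in seconds.
    print(min(timer.repeat(repeat=5, number=1000000)))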
- Made gzip readline 30-40% faster (BobIppolito).
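For context, the improved path is GzipFile's line reading; a minimal usage sketch (the file name here is hypothetical):

    import gzip

    # Read a gzip-compressed text file line by line; readline() is the
    # method whose throughput improved. "example.log.gz" is a made-up name.
    f = gzip.open("example.log.gz")
    try:
        line = f.readline()
        while line:
            # ... process the line ...
            line = f.readline()
    finally:
        f.close()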
- Sped up Unicode operations (AndrewDalke, FredrikLundh). Most notably, repeat and most search operations (find, index, count, in) are now a lot faster. Also, rsplit is now as fast as split, and splitlines is nearly as fast as a plain split("\n"). Current stringbench results:
  str(ms)  uni(ms)  %      comment
  -----------------------------------------------------
  2374.13  3786.27   62.7  TOTAL 2.5a2
  2347.33  1335.65  175.7  TOTAL trunk
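A stringbench-style comparison can be approximated with timeit; this sketch times the same count() search on a byte string and a Unicode string (2.x-era syntax, since the table compares str against unicode; the haystack and timing parameters are arbitrary).

    # Rough stringbench-style measurement: the same search operation on a
    # byte string and a unicode string. Timings are machine-dependent.
    import timeit

    cases = [("str", "s = 'spam eggs ' * 1000"),
             ("uni", "s = u'spam eggs ' * 1000")]
    for label, setup in cases:
        timer = timeit.Timer("s.count('eggs')", setup)
        best = min(timer.repeat(repeat=3, number=10000))
        print("%s: %.1f ms" % (label, best * 1000.0))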
- Patch 1335972 was a combined bugfix and speedup for string->int conversion. These are the speedups measured on my Windows box for decimal strings of various lengths. Note that the difference between lengths 9 and 10 is the difference between short and long Python ints on a 32-bit box. The patch doesn't actually do anything to speed conversion to long directly; the speedup in those cases is solely due to detecting "unsigned long" overflow more quickly:
  length  speedup
  ------  -------
       1    12.4%
       2    15.7%
       3    20.6%
       4    28.1%
       5    33.2%
       6    37.5%
       7    41.9%
       8    46.3%
       9    51.2%
      10    19.5%
      11    19.9%
      12    23.9%
      13    23.7%
      14    23.3%
      15    24.9%
      16    25.3%
      17    28.3%
      18    27.9%
      19    35.7%
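The per-length timings behind a table like this can be roughly reproduced by timing int() on decimal strings of each length; the digits below are arbitrary and only the string length matters (the speedup column itself requires running the same loop under both interpreter versions).

    # Time int() conversion for decimal strings of lengths 1..19.
    # The digits are arbitrary; only the string length matters here.
    import timeit

    digits = "9876543210123456789"          # 19 arbitrary decimal digits
    for length in range(1, 20):
        s = digits[:length]
        timer = timeit.Timer("int(s)", "s = %r" % s)
        best = min(timer.repeat(repeat=3, number=100000))
        # best is seconds per 100000 conversions -> microseconds per call
        print("length %2d: %.3f usec per int()" % (length, best * 10.0))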
- The struct module has been rewritten to pre-compile struct descriptors (similar to the re module). On average, this leads to a 20% speedup (BobIppolito).
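The pre-compiled descriptors are exposed as struct.Struct objects (new in 2.5): compiling a format once and reusing it avoids re-parsing the format string on every call. A small sketch with an arbitrary format:

    import struct

    # Module-level call: convenient, but the format string has to be
    # looked up each time (2.5 caches compiled formats internally).
    packed = struct.pack("<IH", 1234, 56)

    # Pre-compiled descriptor: parse "<IH" once, then reuse it.
    header = struct.Struct("<IH")
    packed = header.pack(1234, 56)
    value, flags = header.unpack(packed)
    print("%d bytes -> %d, %d" % (header.size, value, flags))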