This page documents the time-complexity (aka "Big O" or "Big Oh") of various operations in current CPython. Other Python implementations (or older or still-under-development versions of CPython) may have slightly different performance characteristics. However, it is generally safe to assume that they are not slower by more than a factor of O(log n).

Generally, 'n' is the number of elements currently in the container. 'k' is either the value of a parameter or the number of elements in the parameter.

list

The Average Case assumes parameters generated uniformly at random.

Internally, a list is represented as an array; the largest costs come from growing beyond the current allocation size (because everything must move), or from inserting or deleting somewhere near the beginning (because everything after that must move). If you need to add/remove at both ends, consider using a collections.deque instead.
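
A quick way to see this cost in practice (a sketch added here for illustration, not a precise benchmark; absolute timings vary by machine and CPython version):

{{{#!python
# Building a sequence by adding at the front: list.insert(0, x) must shift
# every existing element, so the whole loop is O(n^2); deque.appendleft is
# O(1) per call, so its loop is O(n).
from collections import deque
from timeit import timeit

n = 10_000

def front_list():
    s = []
    for i in range(n):
        s.insert(0, i)   # O(n) each: shifts all existing elements

def front_deque():
    d = deque()
    for i in range(n):
        d.appendleft(i)  # O(1) each

print(timeit(front_list, number=10))   # grows quadratically in n
print(timeit(front_deque, number=10))  # grows linearly in n
}}}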

||'''Operation''' ||'''Average Case''' ||'''Amortized Worst Case''' ||
||Copy ||O(n) ||O(n) ||
||Append[1] ||O(1) ||O(1) ||
||Insert ||O(n) ||O(n) ||
||Get Item ||O(1) ||O(1) ||
||Set Item ||O(1) ||O(1) ||
||Delete Item ||O(n) ||O(n) ||
||Iteration ||O(n) ||O(n) ||
||Get Slice ||O(k) ||O(k) ||
||Del Slice ||O(n) ||O(n) ||
||Set Slice ||O(k+n) ||O(k+n) ||
||Extend[1] ||O(k) ||O(k) ||
||Sort ||O(n log n) ||O(n log n) ||
||Multiply ||O(nk) ||O(nk) ||
||x in s ||O(n) || ||
||min(s), max(s) ||O(n) || ||
||Get Length ||O(1) ||O(1) ||
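
The 'x in s' row is worth seeing in practice: a list is scanned element by element, while a set (covered below) hashes straight to the answer. A minimal sketch, with illustrative sizes:

{{{#!python
from timeit import timeit

data_list = list(range(100_000))
data_set = set(data_list)
missing = -1  # worst case for the list: every element is examined

print(timeit(lambda: missing in data_list, number=100))  # O(n) linear scan
print(timeit(lambda: missing in data_set, number=100))   # O(1) average hash lookup
}}}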

collections.deque

A deque (double-ended queue) is represented internally as a doubly linked list. (Well, a linked list of fixed-size arrays rather than of individual objects, for greater efficiency.) Both ends are accessible, but even looking at the middle is slow, and adding to or removing from the middle is slower still.
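
A sketch of the "middle is slow" point (timings illustrative; indexing an end touches one block, indexing the middle walks block to block):

{{{#!python
from collections import deque
from timeit import timeit

d = deque(range(1_000_000))
mid = len(d) // 2

print(timeit(lambda: d[0], number=1_000))    # O(1): at an end
print(timeit(lambda: d[mid], number=1_000))  # O(n): walks to the middle
}}}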

||'''Operation''' ||'''Average Case''' ||'''Amortized Worst Case''' ||
||Copy ||O(n) ||O(n) ||
||append ||O(1) ||O(1) ||
||appendleft ||O(1) ||O(1) ||
||pop ||O(1) ||O(1) ||
||popleft ||O(1) ||O(1) ||
||extend ||O(k) ||O(k) ||
||extendleft ||O(k) ||O(k) ||
||rotate ||O(k) ||O(k) ||
||remove ||O(n) ||O(n) ||

set

See dict -- the implementation is intentionally very similar.

||'''Operation''' ||'''Average case''' ||'''Worst Case''' ||
||x in s ||O(1) ||O(n) ||
||Union s|t ||[[http://wiki.python.org/moin/TimeComplexity_(SetCode)|O(len(s)+len(t))]] || ||
||Intersection s&t ||O(min(len(s), len(t))) ||O(len(s) * len(t)) ||
||Difference s-t ||O(len(s)) || ||
||s.difference_update(t) ||O(len(t)) || ||
||Symmetric Difference s^t ||O(len(s)) ||O(len(s) * len(t)) ||
||s.symmetric_difference_update(t) ||O(len(t)) ||O(len(t) * len(s)) ||

 * As seen in the [[http://svn.python.org/projects/python/trunk/Objects/setobject.c|source code]], the complexities for set difference s-t or s.difference(t) ({{{set_difference()}}}) and in-place set difference s.difference_update(t) ({{{set_difference_update_internal()}}}) are different! The first is O(len(s)): for every element in s, add it to the new set if it is not in t. The second is O(len(t)): for every element in t, remove it from s. So care must be taken as to which is preferred, depending on which set is larger and whether a new set is needed (see the sketch after this list).
 * To perform set operations like s-t with the operator form, both s and t need to be sets. However, the method equivalents accept any iterable as the argument, for example s.difference(l), where l is a list (also shown below).
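
A minimal sketch of both points (names and sizes are illustrative):

{{{#!python
big = set(range(1_000_000))
small = {1, 2, 3}

# s-t builds a new set by scanning s: O(len(big)) here; big is unchanged.
new = big - small

# difference_update scans t and mutates s in place: O(len(small)) here.
big.difference_update(small)

# The operator form needs two sets, but the methods take any iterable:
s = {1, 2, 3, 4}
print(s.difference([1, 2]))  # {3, 4} -- a list argument is fine
# s - [1, 2] would raise TypeError
}}}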

dict

The Average Case times listed for dict objects assume that the hash function for the objects is sufficiently robust to make collisions uncommon. The Average Case assumes the keys used in parameters are selected uniformly at random from the set of all keys.
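
What "sufficiently robust" buys you: if a hash function sends every key to the same bucket, lookups degrade from O(1) to O(n). A deliberately broken sketch (the class name is illustrative):

{{{#!python
class BadKey:
    """A key whose constant hash forces every entry to collide."""
    def __init__(self, x):
        self.x = x
    def __hash__(self):
        return 1  # all keys collide
    def __eq__(self, other):
        return isinstance(other, BadKey) and self.x == other.x

d = {BadKey(i): i for i in range(1_000)}
# Each lookup now probes past colliding entries one by one: O(n), not O(1).
print(d[BadKey(999)])
}}}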

Note that there is a fast-path for dicts that (in practice) only deal with str keys; this doesn't affect the algorithmic complexity, but it can significantly affect the constant factors: how quickly a typical program finishes.
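
A rough way to observe that constant factor (a hedged sketch: both lookups below are O(1) average, and the measured gap, if any, depends on the CPython version and machine):

{{{#!python
from timeit import timeit

str_keys = {str(i): i for i in range(10_000)}
tuple_keys = {(i,): i for i in range(10_000)}
ks, kt = "5000", (5000,)

print(timeit(lambda: str_keys[ks], number=1_000_000))    # str-only dict: fast path
print(timeit(lambda: tuple_keys[kt], number=1_000_000))  # general lookup
}}}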

||'''Operation''' ||'''Average Case''' ||'''Amortized Worst Case''' ||
||Copy[2] ||O(n) ||O(n) ||
||Get Item ||O(1) ||O(n) ||
||Set Item[1] ||O(1) ||O(n) ||
||Delete Item ||O(1) ||O(n) ||
||Iteration[2] ||O(n) ||O(n) ||

Notes

[1] = These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container.
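
The amortization is visible in how CPython over-allocates a list: getsizeof stays flat across many appends, then jumps at the occasional O(n) resize that the cheap appends have paid for. A small sketch:

{{{#!python
import sys

s, last = [], sys.getsizeof([])
for i in range(64):
    s.append(i)
    size = sys.getsizeof(s)
    if size != last:  # a resize -- the rare expensive step -- just happened
        print(f"len={len(s):2d}  size={size} bytes")
        last = size
}}}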

[2] = For these operations, the worst case n is the maximum size the container ever achieved, rather than just the current size. For example, if N objects are added to a dictionary, then N-1 are deleted, the dictionary will still be sized for N objects (at least) until another insertion is made.
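
A sketch of note [2] for dicts (sizes illustrative): storage is not released on deletion, so the table stays sized for the high-water mark until a later insertion triggers a resize.

{{{#!python
import sys

d = {i: i for i in range(10_000)}
peak = sys.getsizeof(d)
for i in range(9_999):
    del d[i]
print(len(d), sys.getsizeof(d) == peak)  # 1 True: still sized for ~10000 entries
}}}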
