## page was renamed from GIL
In CPython, the '''global interpreter lock''', or '''GIL''', is a mutex that protects access to Python objects, preventing multiple threads from executing Python bytecodes at once. By serializing bytecode execution, the GIL prevents race conditions on the interpreter's internal state and keeps the interpreter itself thread-safe. A nice explanation of how the GIL helps in these areas can be found [[https://python.land/python-concurrency/the-python-gil|here]]. In short, this mutex is necessary mainly because CPython's memory management is not thread-safe.
In hindsight, the GIL is not ideal, since it prevents multithreaded CPython programs from taking full advantage of multiprocessor systems in certain situations. Luckily, many potentially blocking or long-running operations, such as I/O, image processing, and NumPy number crunching, happen ''outside'' the GIL. The GIL therefore becomes a bottleneck only in multithreaded programs that spend a lot of time inside it, interpreting CPython bytecode.
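
As a minimal sketch of this bottleneck (assuming a standard GIL-enabled CPython build; timings are indicative only), two threads doing pure-Python CPU-bound work finish no faster than one thread doing the same work sequentially:

{{{#!python
import threading
import time

def countdown(n):
    # Pure-Python, CPU-bound work: the thread holds the GIL while it runs.
    while n > 0:
        n -= 1

N = 10_000_000

# Sequential: one thread does all the work.
start = time.perf_counter()
countdown(N)
countdown(N)
print("sequential:", time.perf_counter() - start)

# Two threads: roughly the same wall-clock time (often slightly worse,
# due to lock contention), because only one thread can execute Python
# bytecode at any moment.
t1 = threading.Thread(target=countdown, args=(N,))
t2 = threading.Thread(target=countdown, args=(N,))
start = time.perf_counter()
t1.start(); t2.start()
t1.join(); t2.join()
print("threaded:  ", time.perf_counter() - start)
}}}
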
Unfortunately, since the GIL exists, other features have grown to depend on the guarantees that it enforces. This makes it hard to remove the GIL without breaking many official and unofficial Python packages and modules.
[[http://www.dabeaz.com/python/GIL.pdf|The GIL can degrade performance]] even when it is not a bottleneck. Summarizing the linked slides: system-call overhead is significant, especially on multicore hardware; two threads calling a function may take twice as much time as a single thread calling the function twice; the GIL can cause I/O-bound threads to be scheduled ahead of CPU-bound threads; and it can prevent signals from being delivered.
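
The scheduling effect can be sketched roughly as follows (an illustrative approximation, not the slides' own benchmark; results depend on the platform, the CPython version, and the switch interval reported by `sys.getswitchinterval()`):

{{{#!python
import threading
import time

def cpu_bound(n):
    # Holds the GIL except when the interpreter forces a switch.
    while n > 0:
        n -= 1

def io_bound(iterations):
    # Many short waits; after each one the thread must reacquire the GIL.
    for _ in range(iterations):
        time.sleep(0.001)

start = time.perf_counter()
io_bound(100)
alone = time.perf_counter() - start

# Start a CPU-bound thread big enough to outlast the I/O loop.
cpu = threading.Thread(target=cpu_bound, args=(200_000_000,))
cpu.start()
start = time.perf_counter()
io_bound(100)
contended = time.perf_counter() - start
cpu.join()

# The contended run is typically noticeably slower: each GIL
# reacquisition can wait up to the switch interval (5 ms by default).
print(f"io_bound alone:     {alone:.2f}s")
print(f"io_bound contended: {contended:.2f}s")
}}}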

CPython extensions must be GIL-aware in order to avoid defeating threads: an extension that keeps holding the GIL during a long-running or blocking operation stalls every other Python thread until it finishes. For an explanation, see [[https://docs.python.org/3/c-api/init.html#thread-state-and-the-global-interpreter-lock|Global interpreter lock]].
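
As a Python-side sketch of why this matters (using `time.sleep`, a C-level call that does release the GIL while blocking, as a stand-in for any well-behaved extension call):

{{{#!python
import threading
import time

def blocking_call():
    # time.sleep is implemented in C and releases the GIL while it
    # blocks, so all four threads can wait simultaneously.
    time.sleep(1)

threads = [threading.Thread(target=blocking_call) for _ in range(4)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()

# Prints roughly 1.0, not 4.0, because the waits overlapped. An
# extension that did NOT release the GIL around its blocking
# operation would serialize these waits instead.
print("elapsed:", time.perf_counter() - start)
}}}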

== Non-CPython implementations ==
 * [[Jython]] and IronPython have no GIL and can fully exploit multiprocessor systems
 * PyPy currently has a GIL like CPython
 * In Cython the GIL exists, but it can be released temporarily using a `with nogil` statement

[Mention place of GIL in StacklessPython.]

== Eliminating the GIL ==
Getting rid of the GIL is an occasional topic on the python-dev mailing list. No one has managed it yet. The following properties are all highly desirable for any potential GIL replacement; some are hard requirements.
 * '''Simplicity.''' The proposal must be implementable and maintainable in the long run.
 * '''Concurrency.''' The point of eliminating the GIL would be to improve the performance of multithreaded programs. So any proposal must show that it actually does so in practice.
 * '''Speed.''' The [[BDFL]] has said he will reject any proposal in this direction that slows down single-threaded programs. (citation needed) Note that this is harder than it looks. The existing reference count mechanism is very fast in the non-concurrent case, but means that almost any reference to an object is a modification (at least to the refcount); many concurrent GC algorithms assume that modifications are rare.
 * '''Features.''' The proposal must support existing CPython features including `__del__` and weak references. (The `__del__` locking hazard discussed below is sketched in code after this list.)
  . `__del__` isn't thread-safe, which becomes a large problem if any sort of locking becomes commonplace. My CPython fork will provide a safer and more reliable replacement. --Rhamphoryncus
   . What's not thread-safe about `__del__`? --jorendorff
    . The language doesn't say anything about what sort of atomicity any operation has. That dict operations are often thread-safe in CPython today is mostly to avoid low-level crashes.
    This is normally solved in threads by using locks, but `__del__` may be executed while the thread already holds the very lock it wants, resulting in a deadlock.
    To fix this (without adding a memory model to the language) requires running `__del__` in a dedicated system thread and requiring you to use locks (such as those provided by a monitor). (Non-blocking algorithms are possible in assembly, but insanely overcomplicated from a Python perspective.) --Rhamphoryncus
 * '''API compatibility.''' The proposal should be source-compatible with the macros used by all existing CPython extensions (`Py_INCREF` and friends). See [[https://docs.python.org/3/c-api/refcounting.html|Python/C API Reference Manual: Reference Counting]].
 * '''Prompt destruction''' (nice to have?). The existing reference-counting scheme destroys objects as soon as they become unreachable, except for objects in reference cycles. Those are collected later by Python's cycle collector. Some CPython programs depend on this, e.g. to close `file`s promptly, so it would be nice to keep this feature.
 * '''Ordered destruction''' (nice to have?). Barring cycles, Python currently always destroys an unreachable object ''X'' before destroying any other objects referenced by ''X''. This means all the object's attributes are still there when `__del__` runs. (Many garbage collection schemes don't guarantee this.) A short sketch of prompt and ordered destruction appears after this list.
  . I'd say this is necessary for Python. There's very little you can usefully do with a half-destroyed object. That which you can do, you could also do without being exposed to half-destroyed objects. --Rhamphoryncus
   . The [[https://docs.python.org/3/reference/datamodel.html#object.__del__|language reference]] doesn't require this. I doubt Jython or IronPython provides it. --jorendorff
    . They seem deliberately vague. Java distinguishes finalized from non-finalized objects, and a single finalizer is ordered with regard to non-finalized objects. The catch is that it's not ordered with regard to other finalizers, so you need to program as if they may already be deleted. In practice this means avoiding finalizers unless absolutely necessary, and when they are necessary they must not depend on each other.
    CPython uses low-level finalizers extensively, so it must have stronger guarantees. Because of this, it refuses to delete cycles involving `__del__`. The interesting realization is that, although finalizers in Java are run even in the face of reference cycles, they cannot be correct unless you eliminate finalizer cycles. Finding ways to tell the GC implementation about this key distinction lets you have your cake and eat it too. --Rhamphoryncus
     . I don't subscribe to "it must have stronger guarantees". CPython uses C-level finalizers mostly for memory management and occasionally (as with `file` objects) for some non-order-dependent cleanup. --jorendorff
      . The implementation doesn't distinguish C-level finalizers from Python-level finalizers (except to refuse to delete cycles involving `__del__`), which is why it needs stronger guarantees. If you made `__del__` use a separate pass then you could loosen it to what Java provides. --Rhamphoryncus
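
As a sketch of the `__del__` locking hazard described in the discussion above (hypothetical code; it uses a non-blocking acquire so it demonstrates the problem without actually deadlocking):

{{{#!python
import threading

lock = threading.Lock()

class Resource:
    def __del__(self):
        # A finalizer that needs the same lock the surrounding code
        # uses. If __del__ fires while this thread already holds
        # `lock`, a blocking acquire() here would deadlock; a
        # non-blocking acquire merely makes the hazard visible.
        if lock.acquire(blocking=False):
            try:
                print("__del__ took the lock safely")
            finally:
                lock.release()
        else:
            print("__del__ could not take the lock; a blocking "
                  "acquire here would have deadlocked")

with lock:
    r = Resource()
    # Dropping the last reference runs __del__ immediately, on this
    # thread, while `lock` is still held.
    del r
}}}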
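
Both destruction properties can be seen in a few lines (a minimal sketch of current CPython reference-counting behavior; objects caught in reference cycles, and other Python implementations, behave differently):

{{{#!python
class Node:
    def __init__(self, name, child=None):
        self.name = name
        self.child = child

    def __del__(self):
        # Ordered destruction: self.child is still intact here.
        child_name = self.child.name if self.child else None
        print(f"destroying {self.name} (child still alive: {child_name})")

leaf = Node("leaf")
root = Node("root", leaf)
del leaf    # root still references it, so nothing is destroyed yet
del root    # prompt destruction: both __del__ calls run right here,
            # root first, then leaf
print("done")
}}}
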
=== API compatibility in detail ===
API compatibility is an especially difficult aspect of the problem. All concurrent memory management schemes we've found rely on one or more of the following techniques, all of which are incompatible with the existing Python/C API.
 * '''Tracing.''' Most garbage collectors need to be able to start with an object and enumerate all the objects that it points to. The builtin CPython pointer-containing types, like `PyList` and `PyDict`, all have a `tp_traverse` method that can do this, but not all extension types have that method. (A sketch of this traversal, as exposed to Python code, follows this list.)
 * '''Write barriers.''' A write barrier is a small piece of code that executes whenever a pointer variable is modified. Alas, no matter how you hack the `Py_INCREF` etc. macros, you can't make a write barrier hook out of them. Even if you could, many schemes require a different write barrier for stack variables vs. global variables vs. object fields that point to other objects; nothing in the Python/C API makes that distinction.
 * '''Exact stack information.''' Exact garbage collection schemes need to be able to mark all objects reachable from local C variables. To do this, some schemes need to know where such variables are located on the C stack (and/or registers)--something the Python/C API does not require extensions to track.
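
CPython exposes this traversal to Python code via the `gc` module, which makes the tracing requirement easy to see (a small sketch; `gc.get_referents` calls an object's `tp_traverse`, and the order of the returned objects is an implementation detail):

{{{#!python
import gc

# gc.get_referents lists the objects an object directly points to,
# which is exactly the enumeration a tracing collector needs.
d = {"key": [1, 2, 3]}
print(gc.get_referents(d))         # the list and the key string
print(gc.get_referents(d["key"]))  # the three ints

# Objects whose type provides no tp_traverse report nothing here and
# would be invisible to a tracing collector: the problem described
# above for extension types that lack the method.
print(gc.get_referents(42))        # []
}}}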

It is barely credible that CPython might someday make `tp_traverse` mandatory for pointer-carrying types; adding support for write barriers or stack bookkeeping to the Python/C API seems extremely unlikely.
Another issue in this area is that existing C extensions depend on the GIL guarantees. They assume that when extension code is called, all other threads are locked out. If an extension does need to deal with a threaded environment, it explicitly opts in (by releasing the GIL). Therefore any would-be GIL replacement must provide GIL-like guarantees by default. Threading must remain opt-in for extensions.

== Recent discussions ==
 * [[http://mail.python.org/pipermail/python-dev/2009-October/093321.html|message on 2009-10-25 by Antoine Pitrou]]: Reworking the GIL (for 3.2)
 * [[http://www.dabeaz.com/GIL/|Understanding the Python GIL]]: David Beazley at PyCon 2010
 * [[http://bugs.python.org/issue7753|issue #7753]]: Backport to 2.7 ''(rejected)''
 * [[http://bugs.python.org/issue7946|issue #7946]]: Convoy effect with I/O bound threads and New GIL
 * [[http://bugs.python.org/issue8299|issue #8299]]: Improve GIL in 2.7
