Differences between revisions 54 and 89 (spanning 35 versions)
Revision 54 as of 2010-10-22 17:04:23
Size: 11325
Editor: OAKESHOTT
Comment: Add link to jug
Revision 89 as of 2021-05-17 12:21:51
Size: 22404
Comment: Adding Pyston in JIT section
Line 2: Line 2:
Line 5: Line 4:
== Just In Time Compilation ==
Some Python libraries allow compiling Python functions at run time; this is called Just-In-Time (JIT) compilation.

 * [[https://nuitka.net/pages/overview.html|Nuitka]] - As the authors say: Nuitka is a Python compiler written in Python!
  * You feed Nuitka your Python app; it does a lot of clever things and spits out an executable or extension module.
  * Nuitka translates Python into a C program that is then linked against libpython to execute exactly like CPython.
  * For Nuitka 0.6 with Python 2.7, the reported speedup was 312%.

 * [[http://numba.pydata.org/|Numba]] - Numba is an open source JIT compiler that translates a subset of Python and NumPy code into fast machine code.
  * Numba can use vectorized instructions (SIMD - Single Instruction Multiple Data) like SSE/AVX
  * Numba can simplify multithreading
  * Numba can compile code for GPUs

 * [[https://pythran.readthedocs.io/en/latest/|Pythran]] - Pythran is an ahead of time compiler for a subset of the Python language, with a focus on scientific computing.
  * It takes a Python module annotated with a few interface descriptions and turns it into a native Python module with the same interface, but (hopefully) faster.
  * It is meant to efficiently compile scientific programs, and takes advantage of multi-cores and SIMD instruction units.

 
 * [[https://www.pyston.org/|Pyston]] - Pyston is a faster CPython implementation that uses C-level optimisations and the [[https://luajit.org/dynasm.html|DynASM]] Just-In-Time compiler
  * Development stopped in 2017, but a new version has been available again since Python 3.8.8.
  * It is claimed to be about 30% faster than CPython simply by replacing CPython with Pyston, without changing your code.
Line 6: Line 27:
Some libraries, often to preserve some similarity with more familiar concurrency models (such as Python's threading API), employ parallel processing techniques which limit their relevance to SMP-based hardware, mostly due to the usage of process creation functions such as the UNIX fork system call. However, a technique called process migration may permit such libraries to be useful in certain kinds of computational clusters as well, notably [[http://en.wikipedia.org/wiki/Single-system_image|single-system image]] cluster solutions ([[http://kerrighed.org/|Kerrighed]], [[http://sourceforge.net/projects/ssic-linux|OpenSSI]], [[http://openmosix.sourceforge.net/|OpenMosix]] being examples).
Line 7: Line 29:
Some libraries, often to preserve some similarity with more familiar concurrency models (such as Python's threading API), employ parallel processing techniques which limit their relevance to SMP-based hardware, mostly due to the usage of process creation functions such as the UNIX fork system call. However, a technique called process migration may permit such libraries to be useful in certain kinds of computational clusters as well, notably [[http://en.wikipedia.org/wiki/Single-system_image|single-system image]] cluster solutions ([[http://kerrighed.org|Kerrighed]], [[http://sourceforge.net/projects/ssic-linux|OpenSSI]], [[http://openmosix.sourceforge.net/|OpenMosix]] being examples).
 * [[http://dispy.sourceforge.net/|dispy]] - Python module for distributing computations (functions or programs) to computation processors (SMP or even distributed over a network) for parallel execution. The computations can be scheduled by supplying arguments in SIMD style of parallel processing. The computation units can be shared by multiple processes/users simultaneously if desired. dispy is implemented with asynchronous sockets, coroutines and efficient polling mechanisms for high performance and scalability.
Line 10: Line 31:
 * [[http://honeypot.net/multi-processing-map-python|forkmap]] - fork-based process creation using a function resembling Python's built-in map function (''Unix, Mac, Cygwin'')
 * [[http://honeypot.net/yet-another-python-map|ppmap]] - variant of forkmap using pp to manage the subprocesses (''Unix, Mac, Cygwin'')
 * [[http://poshmodule.sourceforge.net/|POSH]] Python Object Sharing is an extension module to Python that allows objects to be placed in shared memory.  POSH allows concurrent processes to communicate simply by assigning objects to shared container objects. (''POSIX/UNIX/Linux only'')
 * [[http://honeypot.net/2007/07/06/multi-processing-map-python/|forkmap (original)]] - fork-based process creation using a function resembling Python's built-in map function (''Unix, Mac, Cygwin''). (Original version)
 * [[https://github.com/larroy/mycelium/blob/master/dist/utils/forkfun.py|forkfun (modified)]] - fork-based process creation using a function resembling Python's built-in map function (''Unix, Mac, Cygwin''). (New version from July-2011 with modifications)
 * [[https://joblib.readthedocs.io/en/latest/|Joblib]] - Joblib is a set of tools to provide lightweight pipelining in Python. In particular:
  * transparent disk-caching of functions and lazy re-evaluation (memoize pattern)
  * easy, simple parallel computing (single computer)
 * [[http://honeypot.net/2008/04/16/yet-another-python-map/|ppmap]] - variant of forkmap using pp to manage the subprocesses (''Unix, Mac, Cygwin'')
 * [[http://poshmodule.sourceforge.net/|POSH]] Python Object Sharing is an extension module to Python that allows objects to be placed in shared memory. POSH allows concurrent processes to communicate simply by assigning objects to shared container objects. (''POSIX/UNIX/Linux only'')
Line 15: Line 40:
 * [[http://www.python.org/pypi/processing|processing]] - process-based using either fork on Unix or the subprocess module on Windows, implementing an API like the standard library's threading API and providing familiar objects such as queues and semaphores. Can use native semaphores, message queues etc. or a manager process for sharing objects (''Unix and Windows''). [[http://docs.python.org/dev/library/multiprocessing.html#module-multiprocessing|Included in Python 2.6/3.0 as multiprocessing]], and [[http://pypi.python.org/pypi/multiprocessing/|backported under the same name]].
 * [[http://code.google.com/p/pycsp|PyCSP]] Communicating Sequential Processes for Python allows easy construction of processes and synchronised communication.
 * [[https://github.com/classner/pymp|PyMP]] - OpenMP inspired, fork-based framework for conveniently creating parallel for-loops and sections. Supports Python 2 and 3. (''Unix only'')
 * [[https://github.com/ray-project/ray|Ray]] - Parallel (and distributed) process-based execution framework which uses a lightweight API based on dynamic task graphs and actors to flexibly express a wide range of applications. Uses shared-memory and zero-copy serialization for efficient data handling within a single machine. Supports low-latency and high-throughput task scheduling. Includes higher-level libraries for machine learning and AI applications. Supports Python 2 and 3. (''Linux, Mac'')
Line 17: Line 45:
 * [[https://github.com/IBM/torc_py|torcpy]] - a platform-agnostic adaptive load balancing library that orchestrates the scheduling of task parallelism on both shared and distributed memory platforms. It takes advantage of MPI and multithreading, supports parallel nested loops and map functions and task stealing at all levels of parallelism.
 * [[http://github.com/undefx/vecpy|VecPy]] (Vectorizing Python for concurrent SIMD execution) - Takes as input a Python function on scalars and outputs a semantically equivalent C++ function over vectors which leverages multi-threading and SIMD vector intrinsics. Generated code is compiled into a native, shared library that can be called from Python (as a module), Java (through JNI), and C++. (''Linux-only; requires Python 3, g++'')
Line 21: Line 51:
Line 26: Line 55:
 * [[http://discoproject.org/|disco]] - an implementation of map-reduce. Core written in Erlang, jobs in Python. Inspired by Google's mapreduce and Apache hadoop.
 * [[https://github.com/UIUC-PPL/charm4py|Charm4py]] - General-purpose parallel/distributed computing framework for the productive development of fast, parallel and scalable applications. Built on top of Charm++, a mature runtime system used in High-performance Computing, capable of scaling applications to supercomputers. It is based on an efficient actor model, allowing many actors per process, asynchronous method invocation, actor migration and load balancing. Seamlessly integrates modern concurrency features into the actor model. Supports ''Linux, Windows, macOS''.
 * [[https://dask.org/|Dask]] - Dask is a flexible library for parallel computing in Python. It offers
  * Dynamic task scheduling optimized for computation. This is similar to Airflow, Luigi, Celery, or Make, but optimized for interactive computational workloads.
  * “Big Data” collections like parallel arrays, dataframes, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of dynamic task schedulers.
  * It extends NumPy/Pandas data structures, allowing computation on many cores and many servers while managing data that does not fit in memory
 * [[http://code.google.com/p/deap/|Deap]] is an evolutionary algorithm library, which contains a parallelization module named '''DTM''', standing for Distributed Task Manager, which allows easy parallelization over a cluster of computers. This module can be used separately -- e.g. to compute something other than evolutionary algorithms -- and offers an interface similar to the multiprocessing.Pool module (map, apply, synchronous or asynchronous spawns, etc.), providing a complete abstraction of the startup process and the communication and load balancing layers. It currently works over MPI, with mpi4py or PyMPI, or directly over TCP. Its unique structure allows some interesting features, like nested parallel map (a parallel map calling another distributed operation, and so on).
 * [[http://discoproject.org/|disco]] - an implementation of map-reduce. Developed by [[http://research.nokia.com/|Nokia]]. Core written in Erlang, jobs in Python. Inspired by Google's mapreduce and Apache hadoop.
 * [[http://dispy.sourceforge.net/|dispy]] - Python module for distributing computations (functions or programs) along with any dependencies (files, other Python functions, classes, modules) to nodes connected via network. The computations can be scheduled by supplying arguments in SIMD style of parallel processing. The nodes can be shared by multiple processes/users simultaneously if desired. dispy is implemented with asynchronous sockets, coroutines and efficient polling mechanisms for high performance and scalability.
 * [[http://code.google.com/p/distributed-python-for-scripting/|DistributedPython]] - Very simple Python distributed computing framework, using ssh and the multiprocessing and subprocess modules. At the top level, you generate a list of command lines and simply request they be executed in parallel. Works in Python 2.6 and 3.
Line 29: Line 66:
 * [[http://ipython.scipy.org/moin/|IPython]] - the IPython shell supports [[http://ipython.org/ipython-doc/stable/parallel/index.html|interactive parallel computing]] across multiple IPython instances
 * [[https://wwoods.github.io/job_stream|job_stream]] - An MPI/multiprocessing-based library for easy, distributed pipeline processing, with an emphasis on running scientific simulations. Uses decorators in a way that allows users to organize their code similarly to a traditional, non-distributed application. Can be used to realize map/reduce or more complicated distributed frameworks. Python 3 and 2.7+ compatible.
Line 30: Line 69:
 * [[http://mpi4py.scipy.org/|mpi4py]] - MPI-based solution
Line 35: Line 74:
 * [[https://pypi.python.org/pypi/pycompss/|PyCOMPSs]] - A task-based programming model which aims to ease the development of parallel applications for distributed infrastructures, such as Clusters and Clouds. Offers a sequential interface, but at execution time the runtime system is able to exploit the inherent parallelism of applications at task level.
Line 36: Line 76:
 * [[http://pympi.sourceforge.net/|pyMPI]] - MPI-based solution
Line 38: Line 78:
 * [[http://pypi.python.org/pypi/pastset|pyPastSet]] - tuple-based structured distributed shared memory system in Python using the powerful Pyro distributed object framework for the core communication.
Line 40: Line 81:
 * [[http://irmen.home.xs4all.nl/pyro/|Pyro]] PYthon Remote Objects, distributed object system, takes care of network communication between your objects once you split them over different machines on the network
 * [[https://github.com/ray-project/ray|Ray]] - Parallel and distributed process-based execution framework which uses a lightweight API based on dynamic task graphs and actors to flexibly express a wide range of applications. Uses shared-memory and zero-copy serialization for efficient data handling within a single machine. Provides recovery from process and machine failures. Uses a bottom-up hierarchical scheduling scheme to support low-latency and high-throughput task scheduling. Includes higher-level libraries for machine learning and AI applications. Supports Python 2 and 3. (''Linux, Mac'')
Line 43: Line 85:
  * Scientific.Distributed``Computing.Master``Slave implements a master-slave model in which a master process requests computational tasks that are executed by an arbitrary number of slave processes. The strong points are ease of use and the possibility to work with a varying number of slave process. It is less suited for the construction of large, modular parallel applications. Ideal for parallel scripting. Uses [[http://pyro.sourceforge.net/|"Pyro"]]. (''works wherever Pyro works'')
  * Scientific.BSP is an object-oriented implementation of the [[http://www.bsp-worldwide.org/|"Bulk Synchronous Parallel (BSP)"]] model for parallel computing, whose main advantages over message passing are the impossibility of deadlocks and the possibility to evaluate the computational cost of an algorithm as a function of machine parameters. The Python implementation of BSP features parallel data objects, communication of arbitrary Python objects, and a framework for defining distributed data objects implementing parallelized methods. (''works on all platforms that have an MPI library or an implementation of BSPlib'')
   * Scientific.MPI is an interface to MPI that emphasizes the possibility to combine Python and C code, both using MPI. Contrary to pypar and pyMPI, it does not support the communication of arbitrary Python objects, being instead optimized for Numeric/NumPy arrays. (''works on all platforms that have an MPI library'')
  * Scientific.DistributedComputing.MasterSlave implements a master-slave model in which a master process requests computational tasks that are executed by an arbitrary number of slave processes. The strong points are ease of use and the possibility to work with a varying number of slave process. It is less suited for the construction of large, modular parallel applications. Ideal for parallel scripting. Uses [[http://pyro.sourceforge.net/|"Pyro"]]. (''works wherever Pyro works'')
  * Scientific.BSP is an object-oriented implementation of the [[http://www.bsp-worldwide.org/|"Bulk Synchronous Parallel (BSP)"]] model for parallel computing, whose main advantages over message passing are the impossibility of deadlocks and the possibility to evaluate the computational cost of an algorithm as a function of machine parameters. The Python implementation of BSP features parallel data objects, communication of arbitrary Python objects, and a framework for defining distributed data objects implementing parallelized methods. (''works on all platforms that have an MPI library or an implementation of BSPlib'')
  * Scientific.MPI is an interface to MPI that emphasizes the possibility to combine Python and C code, both using MPI. Contrary to pypar and pyMPI, it does not support the communication of arbitrary Python objects, being instead optimized for Numeric/NumPy arrays. (''works on all platforms that have an MPI library'')

 * [[http://scoop.googlecode.com/|SCOOP]] (Scalable COncurrent Operations in Python) is a distributed task module allowing concurrent parallel programming on various environments, from heterogeneous grids to supercomputers. It provides a parallel map function, among others.
Line 47: Line 91:
 * [[https://spark.apache.org/docs/0.9.0/python-programming-guide.html|PySpark]] - PySpark allows using a [[https://spark.apache.org|Spark]] cluster from Python
Line 49: Line 94:
   * Send tasks to remote servers or to same machine via XML RPC call
    * GUI to launch, monitor, and kill remote tasks
    * GUI can automatically launch tasks every day, hour, etc.
    * Works on the Microsoft Windows operating system
          * Can run as a windows service
          * Jobs submitted to windows can run as submitting user or as service user     * Inputs/outputs are python objects via python pickle
    * Pure python implementation
    * Supports simple load-balancing to send tasks to best servers 
  * Send tasks to remote servers or to same machine via XML RPC call
  * GUI to launch, monitor, and kill remote tasks
  * GUI can automatically launch tasks every day, hour, etc.
  * Works on the Microsoft Windows operating system
   * Can run as a windows service
   * Jobs submitted to windows can run as submitting user or as service user
  * Inputs/outputs are python objects via python pickle
  * Pure python implementation
  * Supports simple load-balancing to send tasks to best servers

 * [[https://github.com/IBM/torc_py|torcpy]] - a platform-agnostic adaptive load balancing library that orchestrates the scheduling of task parallelism on both shared and distributed memory platforms. It takes advantage of MPI and multithreading, supports parallel nested loops and map functions and task stealing at all levels of parallelism.
Line 60: Line 109:
Cloud computing is similar to cluster computing, except the developer's compute resources are owned and managed by a third party, the "cloud provider". By not having to purchase and set up hardware, the developer is able to run massively parallel workloads more cheaply and easily.
Line 61: Line 111:
Cloud computing is similar to cluster computing, except the developer's compute resources are owned and managed by a third party, the "cloud provider". By not having to purchase and set up hardware, the developer is able to run massively parallel workloads more cheaply and easily.
 * [[http://code.google.com/appengine/|Google App Engine]] - supports Python.
 * [[http://www.picloud.com/|PiCloud]] - is a cloud-computing platform that integrates into Python. It allows developers to leverage the computing power of [[http://aws.amazon.com/|Amazon Web Services]] (AWS) without having to manage, maintain, or configure their own virtual servers. [[http://www.picloud.com/|PiCloud]] integrates into a Python code base via its custom library, ''cloud''. Offloading the execution of a function to PiCloud's auto-scaling cluster (located on AWS) is as simple as passing the desired function into PiCloud's ''cloud'' library.
  . For example, invoking ''cloud.call(foo)'' results in ''foo()'' being executed on PiCloud. Invoking ''cloud.map(foo, range(10))'' results in 10 functions, ''foo(0)'', ''foo(1)'', etc. being executed on PiCloud.
Line 63: Line 115:
  * [[http://code.google.com/appengine/|Google App Engine]] - supports Python.
 * [[https://pypi.python.org/pypi/pycompss/|PyCOMPSs]] - A task-based programming model which aims to ease the development of parallel applications for distributed infrastructures, such as Clusters and Clouds. Offers a sequential interface, but at execution time the runtime system is able to exploit the inherent parallelism of applications at task level.
Line 65: Line 117:
  * [[http://www.picloud.com|PiCloud]] - is a cloud-computing platform that integrates into Python. It allows developers to leverage the computing power of [[http://aws.amazon.com|Amazon Web Services]] (AWS) without having to manage, maintain, or configure their own virtual servers. [[http://www.picloud.com|PiCloud]] integrates into a Python code base via its custom library, ''cloud''. Offloading the execution of a function to !PiCloud's auto-scaling cluster (located on AWS) is as simple as passing the desired function into !PiCloud's ''cloud'' library.

   For example, invoking ''cloud.call(foo)'' results in ''foo()'' being executed on !PiCloud.
   Invoking ''cloud.map(foo, range(10))'' results in 10 functions, ''foo(0)'', ''foo(1)'', etc. being executed on !PiCloud.

 * [[http://web.mit.edu/stardev/cluster|StarCluster]] - is a cluster-computing toolkit for the [[http://aws.amazon.com/|AWS cloud]]. [[http://web.mit.edu/stardev/cluster|StarCluster]] has been designed to simplify the process of building, configuring, and managing clusters of virtual machines on Amazon’s EC2 cloud. It allows you to easily create one or more clusters, add/remove nodes to a running cluster, easily build new AMIs, easily create and format new EBS volumes, write plugins in python to customize cluster configuration, and much more. See [[http://web.mit.edu/stardev/cluster/docs/latest|StarCluster's documentation]] for more details.
Line 73: Line 120:
Line 75: Line 121:
 * [[http://code.google.com/p/migrid/|Minimum intrusion Grid]] - a complete Grid middleware written in Python
Line 79: Line 126:
Line 85: Line 131:

[[http://pypi.python.org/pypi?:action=browse&show=all&c=450 | Topic :: System :: Distributed Computing ]]
[[http://pypi.python.org/pypi?:action=browse&show=all&c=450|Topic :: System :: Distributed Computing]]
Line 89: Line 134:
The above lists should be arranged in ascending alphabetical order - please respect this when adding new frameworks or tools.
Line 90: Line 136:
The above lists should be arranged in ascending alphabetical order - please respect this when adding new frameworks or tools.

Parallel Processing and Multiprocessing in Python

A number of Python-related libraries exist for the programming of solutions either employing multiple CPUs or multicore CPUs in a symmetric multiprocessing (SMP) or shared memory environment, or potentially huge numbers of computers in a cluster or grid environment. This page seeks to provide references to the different libraries and solutions available.

Just In Time Compilation

Some Python libraries allow compiling Python functions at run time; this is called Just-In-Time (JIT) compilation.

  • Nuitka - As the authors say: Nuitka is a Python compiler written in Python!

    • You feed Nuitka your Python app; it does a lot of clever things and spits out an executable or extension module.
    • Nuitka translates Python into a C program that is then linked against libpython to execute exactly like CPython.
    • For Nuitka 0.6 with Python 2.7, the reported speedup was 312%.
  • Numba - Numba is an open source JIT compiler that translates a subset of Python and NumPy code into fast machine code (see the sketch after this list).

    • Numba can use vectorized instructions (SIMD - Single Instruction Multiple Data) like SSE/AVX
    • Numba can simplify multithreading
    • Numba can compile code for GPUs
  • Pythran - Pythran is an ahead of time compiler for a subset of the Python language, with a focus on scientific computing.

    • It takes a Python module annotated with a few interface descriptions and turns it into a native Python module with the same interface, but (hopefully) faster (see the sketch after this list).
    • It is meant to efficiently compile scientific programs, and takes advantage of multi-cores and SIMD instruction units.
  • Pyston - Pyston is a faster CPython implementation that uses C-level optimisations and the DynASM Just-In-Time compiler

    • Development stopped in 2017, but a new version has been available again since Python 3.8.8.
    • It is claimed to be about 30% faster than CPython simply by replacing CPython with Pyston, without changing your code.
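
A minimal sketch of the JIT idea using Numba (assuming the numba and numpy packages are installed; the function and data below are invented for illustration):

{{{#!python
import numpy as np
from numba import njit

@njit  # the function is compiled to native machine code on its first call
def dot_sum(xs, ys):
    total = 0.0
    for i in range(xs.shape[0]):
        total += xs[i] * ys[i]
    return total

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
print(dot_sum(a, b))  # later calls reuse the compiled version
}}}

A Pythran module, by contrast, is an ordinary Python file carrying an export comment and compiled ahead of time (a sketch; the module name and signature are illustrative):

{{{#!python
# file: fastmod.py -- compile with:  pythran fastmod.py
#pythran export scale(float list, float)

def scale(values, factor):
    # becomes a native extension module exposing the same interface
    return [v * factor for v in values]
}}}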

Symmetric Multiprocessing

Some libraries, often to preserve some similarity with more familiar concurrency models (such as Python's threading API), employ parallel processing techniques which limit their relevance to SMP-based hardware, mostly due to the usage of process creation functions such as the UNIX fork system call. However, a technique called process migration may permit such libraries to be useful in certain kinds of computational clusters as well, notably single-system image cluster solutions (Kerrighed, OpenSSI, OpenMosix being examples).

  • dispy - Python module for distributing computations (functions or programs) to computation processors (SMP or even distributed over a network) for parallel execution. The computations can be scheduled by supplying arguments in SIMD style of parallel processing. The computation units can be shared by multiple processes/users simultaneously if desired. dispy is implemented with asynchronous sockets, coroutines and efficient polling mechanisms for high performance and scalability.

  • delegate - fork-based process creation with pickled data sent through pipes

  • forkmap (original) - fork-based process creation using a function resembling Python's built-in map function (Unix, Mac, Cygwin). (Original version)

  • forkfun (modified) - fork-based process creation using a function resembling Python's built-in map function (Unix, Mac, Cygwin). (New version from July-2011 with modifications)

  • Joblib - Joblib is a set of tools to provide lightweight pipelining in Python (see the sketch after this list). In particular:

    • transparent disk-caching of functions and lazy re-evaluation (memoize pattern)
    • easy, simple parallel computing (single computer)
  • ppmap - variant of forkmap using pp to manage the subprocesses (Unix, Mac, Cygwin)

  • POSH Python Object Sharing is an extension module to Python that allows objects to be placed in shared memory. POSH allows concurrent processes to communicate simply by assigning objects to shared container objects. (POSIX/UNIX/Linux only)

  • pp (Parallel Python) - process-based, job-oriented solution with cluster support (Windows, Linux, Unix, Mac)

  • pprocess (previously parallel/pprocess) - fork-based process creation with asynchronous channel-based communications employing pickled data (tutorial) (currently only POSIX/UNIX/Linux, perhaps Cygwin)

  • processing - process-based using either fork on Unix or the subprocess module on Windows, implementing an API like the standard library's threading API and providing familiar objects such as queues and semaphores. Can use native semaphores, message queues etc. or a manager process for sharing objects (Unix and Windows). Included in Python 2.6/3.0 as multiprocessing, and backported under the same name (see the multiprocessing sketch after this list).

  • PyCSP Communicating Sequential Processes for Python allows easy construction of processes and synchronised communication.

  • PyMP - OpenMP inspired, fork-based framework for conveniently creating parallel for-loops and sections. Supports Python 2 and 3. (Unix only)

  • Ray - Parallel (and distributed) process-based execution framework which uses a lightweight API based on dynamic task graphs and actors to flexibly express a wide range of applications. Uses shared-memory and zero-copy serialization for efficient data handling within a single machine. Supports low-latency and high-throughput task scheduling. Includes higher-level libraries for machine learning and AI applications. Supports Python 2 and 3. (Linux, Mac)

  • remoteD - fork-based process creation with a dictionary-based communications paradigm (platform independent, according to PyPI entry)

  • torcpy - a platform-agnostic adaptive load balancing library that orchestrates the scheduling of task parallelism on both shared and distributed memory platforms. It takes advantage of MPI and multithreading, supports parallel nested loops and map functions and task stealing at all levels of parallelism.

  • VecPy (Vectorizing Python for concurrent SIMD execution) - Takes as input a Python function on scalars and outputs a semantically equivalent C++ function over vectors which leverages multi-threading and SIMD vector intrinsics. Generated code is compiled into a native, shared library that can be called from Python (as a module), Java (through JNI), and C++. (Linux-only; requires Python 3, g++)

Advantages of such approaches include convenient process creation and the ability to share resources. Indeed, the fork system call permits efficient sharing of common read-only data structures on modern UNIX-like operating systems.
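
The process-based model above can be made concrete with two short sketches. First, the standard library's multiprocessing module (the descendant of the processing package listed above); the worker function is invented for the example:

{{{#!python
from multiprocessing import Pool

def square(x):
    # runs in a separate worker process
    return x * x

if __name__ == "__main__":  # guard needed on platforms that spawn rather than fork
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))  # [0, 1, 4, 9, ...]
}}}

Second, a similar sketch of Joblib's parallel map on a single computer (assuming the joblib package is installed; the function and inputs are illustrative only):

{{{#!python
from joblib import Parallel, delayed

def cube(x):
    return x ** 3

# n_jobs=2 starts two workers; n_jobs=-1 would use all available cores
results = Parallel(n_jobs=2)(delayed(cube)(i) for i in range(8))
print(results)  # [0, 1, 8, 27, ...]
}}}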

Cluster Computing

Unlike SMP architectures and especially in contrast to thread-based concurrency, cluster (and grid) architectures offer high scalability due to the relative absence of shared resources, although this can make the programming paradigms seem somewhat alien to uninitiated developers. In this domain, some overlap with other distributed computing technologies may be observed (see DistributedProgramming for more details).

  • batchlib - a distributed computation system with automatic selection of processing services (no longer developed)

  • Celery - a distributed task queue based on distributed message passing

  • Charm4py - General-purpose parallel/distributed computing framework for the productive development of fast, parallel and scalable applications. Built on top of Charm++, a mature runtime system used in High-performance Computing, capable of scaling applications to supercomputers. It is based on an efficient actor model, allowing many actors per process, asynchronous method invocation, actor migration and load balancing. Seamlessly integrates modern concurrency features into the actor model. Supports Linux, Windows, macOS.

  • Dask - Dask is a flexible library for parallel computing in Python (see the sketch after this list). It offers

    • Dynamic task scheduling optimized for computation. This is similar to Airflow, Luigi, Celery, or Make, but optimized for interactive computational workloads.
    • “Big Data” collections like parallel arrays, dataframes, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of dynamic task schedulers.
    • It extends NumPy/Pandas data structures, allowing computation on many cores and many servers while managing data that does not fit in memory
  • Deap is an evolutionary algorithm library, which contains a parallelization module named DTM, standing for Distributed Task Manager, which allows easy parallelization over a cluster of computers. This module can be used separately -- e.g. to compute something other than evolutionary algorithms -- and offers an interface similar to the multiprocessing.Pool module (map, apply, synchronous or asynchronous spawns, etc.), providing a complete abstraction of the startup process and the communication and load balancing layers. It currently works over MPI, with mpi4py or PyMPI, or directly over TCP. Its unique structure allows some interesting features, like nested parallel map (a parallel map calling another distributed operation, and so on).

  • disco - an implementation of map-reduce. Developed by Nokia. Core written in Erlang, jobs in Python. Inspired by Google's mapreduce and Apache hadoop.

  • dispy - Python module for distributing computations (functions or programs) along with any dependencies (files, other Python functions, classes, modules) to nodes connected via network. The computations can be scheduled by supplying arguments in SIMD style of parallel processing. The nodes can be shared by multiple processes/users simultaneously if desired. dispy is implemented with asynchronous sockets, coroutines and efficient polling mechanisms for high performance and scalability.

  • DistributedPython - Very simple Python distributed computing framework, using ssh and the multiprocessing and subprocess modules. At the top level, you generate a list of command lines and simply request they be executed in parallel. Works in Python 2.6 and 3.

  • exec_proxy - a system for executing arbitrary programs and transferring files (no longer developed)

  • execnet - asynchronous execution of client-provided code fragments (formerly py.execnet)

  • IPython - the IPython shell supports interactive parallel computing across multiple IPython instances

  • job_stream - An MPI/multiprocessing-based library for easy, distributed pipeline processing, with an emphasis on running scientific simulations. Uses decorators in a way that allows users to organize their code similarly to a traditional, non-distributed application. Can be used to realize map/reduce or more complicated distributed frameworks. Python 3 and 2.7+ compatible.

  • jug - A task based parallel framework

  • mpi4py - MPI-based solution (see the sketch after this list)

  • NetWorkSpaces appears to be a rebranding and rebinding of Lindaspaces for Python

  • PaPy - Parallel (uses multiprocessing) and distributed (uses RPyC) work-flow engine, with a distributed imap implementation.

  • papyros - lightweight master-slave based parallel processing. Clients submit jobs to a master object which is monitored by one or more slave objects that do the real work. Two main implementations are currently provided, one using multiple threads and one using multiple processes on one or more hosts through Pyro.

  • pp (Parallel Python) - "is a python module which provides mechanism for parallel execution of python code on SMP (systems with multiple processors or cores) and clusters (computers connected via network)."

  • PyCOMPSs - A task-based programming model which aims to ease the development of parallel applications for distributed infrastructures, such as Clusters and Clouds. Offers a sequential interface, but at execution time the runtime system is able to exploit the inherent parallelism of applications at task level.

  • PyLinda - distributed computing using tuple spaces

  • pyMPI - MPI-based solution

  • pypar - Numeric Python and MPI-based solution

  • pyPastSet - tuple-based structured distributed shared memory system in Python using the powerful Pyro distributed object framework for the core communication.

  • pypvm - PVM-based solution

  • pynpvm - PVM-based solution for NumPy

  • Pyro PYthon Remote Objects, distributed object system, takes care of network communication between your objects once you split them over different machines on the network

  • Ray - Parallel and distributed process-based execution framework which uses a lightweight API based on dynamic task graphs and actors to flexibly express a wide range of applications (see the sketch after this list). Uses shared-memory and zero-copy serialization for efficient data handling within a single machine. Provides recovery from process and machine failures. Uses a bottom-up hierarchical scheduling scheme to support low-latency and high-throughput task scheduling. Includes higher-level libraries for machine learning and AI applications. Supports Python 2 and 3. (Linux, Mac)

  • rthread - distributed execution of functions via SSH

  • ScientificPython contains three subpackages for parallel computing:

    • Scientific.DistributedComputing.MasterSlave implements a master-slave model in which a master process requests computational tasks that are executed by an arbitrary number of slave processes. The strong points are ease of use and the possibility to work with a varying number of slave process. It is less suited for the construction of large, modular parallel applications. Ideal for parallel scripting. Uses "Pyro". (works wherever Pyro works)

    • Scientific.BSP is an object-oriented implementation of the "Bulk Synchronous Parallel (BSP)" model for parallel computing, whose main advantages over message passing are the impossibility of deadlocks and the possibility to evaluate the computational cost of an algorithm as a function of machine parameters. The Python implementation of BSP features parallel data objects, communication of arbitrary Python objects, and a framework for defining distributed data objects implementing parallelized methods. (works on all platforms that have an MPI library or an implementation of BSPlib)

    • Scientific.MPI is an interface to MPI that emphasizes the possibility to combine Python and C code, both using MPI. Contrary to pypar and pyMPI, it does not support the communication of arbitrary Python objects, being instead optimized for Numeric/NumPy arrays. (works on all platforms that have an MPI library)

  • SCOOP (Scalable COncurrent Operations in Python) is a distributed task module allowing concurrent parallel programming on various environments, from heterogeneous grids to supercomputers. It provides a parallel map function, among others.

  • seppo - based on Pyro mobile code, providing a parallel map function which evaluates each iteration "in a different process, possibly in a different computer".

  • PySpark - PySpark allows using a Spark cluster from Python (see the sketch after this list)

  • "Star-P for Python is an interactive parallel computing platform ..."

  • superpy distributes python programs across a cluster of machines or across multiple processors on a single machine. Key features include:

    • Send tasks to remote servers or to same machine via XML RPC call
    • GUI to launch, monitor, and kill remote tasks
    • GUI can automatically launch tasks every day, hour, etc.
    • Works on the Microsoft Windows operating system
      • Can run as a Windows service
      • Jobs submitted to Windows can run as the submitting user or as the service user
    • Inputs/outputs are python objects via python pickle
    • Pure python implementation
    • Supports simple load-balancing to send tasks to best servers
  • torcpy - a platform-agnostic adaptive load balancing library that orchestrates the scheduling of task parallelism on both shared and distributed memory platforms. It takes advantage of MPI and multithreading, supports parallel nested loops and map functions and task stealing at all levels of parallelism.
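
Minimal, hedged sketches for a few of the frameworks above follow; function names, sizes and settings are invented for illustration, and each assumes the corresponding package is installed.

Dask, computing on a chunked array with its lazy task scheduler:

{{{#!python
import dask.array as da

# a 10000 x 10000 random array split into 1000 x 1000 chunks
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
print(x.mean().compute())  # compute() triggers the parallel execution
}}}

mpi4py, gathering per-process results (started under an MPI launcher such as mpirun):

{{{#!python
# run with e.g.:  mpirun -n 4 python this_script.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

partial = rank * rank                  # each process computes its own piece
results = comm.gather(partial, root=0)
if rank == 0:
    print(results)                     # [0, 1, 4, 9] with 4 processes
}}}

Ray, remote tasks returning futures:

{{{#!python
import ray

ray.init()  # starts a local Ray runtime; a cluster address could be passed instead

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
}}}

PySpark, a local RDD computation using the classic SparkContext API:

{{{#!python
from pyspark import SparkContext

sc = SparkContext("local[2]", "example")  # two local worker threads
rdd = sc.parallelize(range(10))
print(rdd.map(lambda x: x * x).sum())     # 285
sc.stop()
}}}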

Cloud Computing

Cloud computing is similar to cluster computing, except the developer's compute resources are owned and managed by a third party, the "cloud provider". By not having to purchase and set up hardware, the developer is able to run massively parallel workloads more cheaply and easily.

  • Google App Engine - supports Python.

  • PiCloud - a cloud-computing platform that integrates into Python (see the sketch after this list). It allows developers to leverage the computing power of Amazon Web Services (AWS) without having to manage, maintain, or configure their own virtual servers. PiCloud integrates into a Python code base via its custom library, cloud. Offloading the execution of a function to PiCloud's auto-scaling cluster (located on AWS) is as simple as passing the desired function into PiCloud's cloud library.

    • For example, invoking cloud.call(foo) results in foo() being executed on PiCloud. Invoking cloud.map(foo, range(10)) results in 10 functions, foo(0), foo(1), etc. being executed on PiCloud.

  • PyCOMPSs - A task-based programming model which aims to ease the development of parallel applications for distributed infrastructures, such as Clusters and Clouds. Offers a sequential interface, but at execution time the runtime system is able to exploit the inherent parallelism of applications at task level.

  • StarCluster - a cluster-computing toolkit for the AWS cloud. StarCluster has been designed to simplify the process of building, configuring, and managing clusters of virtual machines on Amazon’s EC2 cloud. It allows you to easily create one or more clusters, add or remove nodes from a running cluster, easily build new AMIs, easily create and format new EBS volumes, write plugins in python to customize cluster configuration, and much more. See StarCluster's documentation for more details.
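
A hedged sketch of the PiCloud usage described above, built around the cloud.call and cloud.map calls named in the entry (the service has since shut down, and cloud.result for retrieving output is an assumption beyond what the entry states):

{{{#!python
import cloud                      # PiCloud's client library, as described in the entry

def foo(x=0):
    return x * x

jid = cloud.call(foo)             # executes foo() on PiCloud's cluster, returns a job id
print(cloud.result(jid))          # cloud.result for fetching the output is assumed here

jids = cloud.map(foo, range(10))  # executes foo(0), foo(1), ... foo(9) on PiCloud
}}}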

Grid Computing

  • Ganga - an interface to the Grid that is being developed jointly by the ATLAS and LHCb experiments at CERN.

  • Minimum intrusion Grid - a complete Grid middleware written in Python

  • PEG - Python Extensions for the Grid

  • pyGlobus - see the Python Core project for related software

Trove classifiers

Topic :: System :: Distributed Computing

Editorial Notes

The above lists should be arranged in ascending alphabetical order - please respect this when adding new frameworks or tools.

