Parallel Processing and Multiprocessing in Python

A number of Python-related libraries exist for the programming of solutions either employing multiple CPUs or multicore CPUs in a symmetric multiprocessing (SMP) or shared memory environment, or potentially huge numbers of computers in a cluster or grid environment. This page seeks to provide references to the different libraries and solutions available.

Symmetric Multiprocessing

Some libraries, often to preserve some similarity with more familiar concurrency models (such as Python's threading API), employ parallel processing techniques which limit their relevance to SMP-based hardware, mostly due to the usage of process creation functions such as the UNIX fork system call. However, a technique called process migration may permit such libraries to be useful in certain kinds of computational clusters as well, notably single-system image cluster solutions (Kerrighed, OpenSSI, OpenMosix being examples).

  • delegate - fork-based process creation with pickled data sent through pipes

  • forkmap - fork-based process creation using a function resembling Python's built-in map function (Unix, Mac, Cygwin)

  • ppmap - variant of forkmap using pp to manage the subprocesses (Unix, Mac, Cygwin)

  • POSH (Python Object Sharing) - an extension module to Python that allows objects to be placed in shared memory. POSH allows concurrent processes to communicate simply by assigning objects to shared container objects. (POSIX/UNIX/Linux only)

  • pp (Parallel Python) - process-based, job-oriented solution with cluster support (Windows, Linux, Unix, Mac)

  • pprocess (previously parallel/pprocess) - fork-based process creation with asynchronous channel-based communications employing pickled data (tutorial) (currently only POSIX/UNIX/Linux, perhaps Cygwin)

  • processing - process-based using either fork on Unix or the subprocess module on Windows, implementing an API like the standard library's threading API and providing familiar objects such as queues and semaphores. Can use native semaphores, message queues etc., or can use a manager process for sharing objects (Unix and Windows). Included in Python 2.6/3.0 as multiprocessing, and backported under the same name.

  • remoteD - fork-based process creation with a dictionary-based communications paradigm (platform independent, according to PyPI entry)
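
Since processing was adopted into the standard library as multiprocessing, the threading-like API described above is available out of the box. A minimal sketch (shown in Python 3 syntax; the pool size of 4 is an arbitrary choice):

```python
from multiprocessing import Pool

def square(n):
    # Work shipped to worker processes must be picklable, so the task
    # is a plain module-level function rather than a lambda.
    return n * n

if __name__ == "__main__":
    # The Pool spreads the iterable across 4 worker processes,
    # much like a parallel version of the built-in map.
    with Pool(processes=4) as pool:
        print(pool.map(square, range(5)))  # [0, 1, 4, 9, 16]
```

Like the threading API it mirrors, the same program structure also works with queues, semaphores and shared values from the same module.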

Advantages of such approaches include convenient process creation and the ability to share resources. Indeed, the fork system call permits efficient sharing of common read-only data structures on modern UNIX-like operating systems.
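
The fork-and-pipe pattern underlying libraries such as delegate and pprocess can be sketched in a few lines (POSIX only; run_in_child is an illustrative name, not an API of either library):

```python
import os
import pickle
import struct

def run_in_child(func, arg):
    """Run func(arg) in a forked child and return the unpickled result."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: compute, send a length-prefixed pickle back, and exit.
        os.close(r)
        payload = pickle.dumps(func(arg))
        os.write(w, struct.pack("!I", len(payload)) + payload)
        os._exit(0)
    # Parent: read the length prefix, then the pickled result.
    os.close(w)
    size = struct.unpack("!I", os.read(r, 4))[0]
    data = b""
    while len(data) < size:
        data += os.read(r, size - len(data))
    os.close(r)
    os.waitpid(pid, 0)
    return pickle.loads(data)

if __name__ == "__main__":
    print(run_in_child(lambda n: sum(range(n)), 10))  # 45
```

Note that only the result crosses the pipe; the function and its argument are inherited by the child through fork, which is exactly the read-only sharing the paragraph above describes.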

Cluster Computing

Unlike SMP architectures and especially in contrast to thread-based concurrency, cluster (and grid) architectures offer high scalability due to the relative absence of shared resources, although this can make the programming paradigms seem somewhat alien to uninitiated developers. In this domain, some overlap with other distributed computing technologies may be observed (see DistributedProgramming for more details).

  • batchlib - a distributed computation system with automatic selection of processing services (no longer developed)

  • Celery - a distributed task queue based on distributed message passing

  • disco - an implementation of map-reduce. Core written in Erlang, jobs in Python. Inspired by Google's MapReduce and Apache Hadoop.

  • exec_proxy - a system for executing arbitrary programs and transferring files (no longer developed)

  • execnet - asynchronous execution of client-provided code fragments (formerly py.execnet)

  • mpi4py - MPI-based solution

  • NetWorkSpaces appears to be a rebranding and rebinding of Lindaspaces for Python

  • PaPy - a parallel (using multiprocessing) and distributed (using RPyC) workflow engine, with a distributed imap implementation.

  • papyros - lightweight master-slave based parallel processing. Clients submit jobs to a master object which is monitored by one or more slave objects that do the real work. Two main implementations are currently provided, one using multiple threads and one using multiple processes on one or more hosts through Pyro.

  • pp (Parallel Python) - "is a python module which provides mechanism for parallel execution of python code on SMP (systems with multiple processors or cores) and clusters (computers connected via network)."

  • PyLinda - distributed computing using tuple spaces

  • pyMPI - MPI-based solution

  • pypar - Numeric Python and MPI-based solution

  • pypvm - PVM-based solution

  • pynpvm - PVM-based solution for NumPy

  • Pyro (PYthon Remote Objects) - a distributed object system that takes care of network communication between your objects once you split them over different machines on the network

  • rthread - distributed execution of functions via SSH

  • ScientificPython contains three subpackages for parallel computing:

    • Scientific.DistributedComputing.MasterSlave implements a master-slave model in which a master process requests computational tasks that are executed by an arbitrary number of slave processes. The strong points are ease of use and the possibility to work with a varying number of slave processes. It is less suited for the construction of large, modular parallel applications. Ideal for parallel scripting. Uses "Pyro". (works wherever Pyro works)

    • Scientific.BSP is an object-oriented implementation of the "Bulk Synchronous Parallel (BSP)" model for parallel computing, whose main advantages over message passing are the impossibility of deadlocks and the possibility to evaluate the computational cost of an algorithm as a function of machine parameters. The Python implementation of BSP features parallel data objects, communication of arbitrary Python objects, and a framework for defining distributed data objects implementing parallelized methods. (works on all platforms that have an MPI library or an implementation of BSPlib)

    • Scientific.MPI is an interface to MPI that emphasizes the possibility to combine Python and C code, both using MPI. Contrary to pypar and pyMPI, it does not support the communication of arbitrary Python objects, being instead optimized for Numeric/NumPy arrays. (works on all platforms that have an MPI library)

  • seppo - based on Pyro mobile code, providing a parallel map function which evaluates each iteration "in a different process, possibly in a different computer".

  • "Star-P for Python is an interactive parallel computing platform ..."

  • superpy distributes Python programs across a cluster of machines or across multiple processors on a single machine. Key features include:

    • Send tasks to remote servers or to the same machine via an XML-RPC call
    • GUI to launch, monitor, and kill remote tasks
    • GUI can automatically launch tasks every day, hour, etc.
    • Works on the Microsoft Windows operating system
      • Can run as a Windows service
      • Jobs submitted to Windows can run as the submitting user or as the service user
    • Inputs/outputs are Python objects via Python pickle
    • Pure Python implementation
    • Supports simple load balancing to send tasks to the best servers
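
Transport details differ between these frameworks, but the remote-invocation core that a system like superpy layers scheduling and load balancing on top of can be sketched with the standard library's XML-RPC support (busy_sum and remote_sum are illustrative names; a real deployment would run the server on a different host rather than an ephemeral local port):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def busy_sum(n):
    # Stand-in for an expensive task we want executed elsewhere.
    return sum(range(n))

def remote_sum(n):
    # Serve busy_sum on an ephemeral local port for demonstration;
    # in a real cluster the server would run on another machine.
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_function(busy_sum)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        # The proxy marshals the call and its result over HTTP.
        return ServerProxy("http://127.0.0.1:%d" % port).busy_sum(n)
    finally:
        server.shutdown()
        server.server_close()

if __name__ == "__main__":
    print(remote_sum(1000))  # 499500
```

Because arguments and results travel as marshalled data rather than shared memory, this style scales out to many machines, which is the cluster trade-off described at the top of this section.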

Cloud Computing

Cloud computing is similar to cluster computing, except that the developer's compute resources are owned and managed by a third party, the "cloud provider". By not having to purchase and set up hardware, the developer is able to run massively parallel workloads more cheaply and easily.

  • PiCloud - a server-less cloud computing platform that integrates into the Python language. A developer can run any function on PiCloud's servers by simply passing it into PiCloud's "cloud" library.

Grid Computing

  • Ganga - an interface to the Grid that is being developed jointly by the ATLAS and LHCb experiments at CERN.

  • PEG - Python Extensions for the Grid

  • pyGlobus - see the Python Core project for related software

Distributed File Systems

  • Hydra File System - a distributed file system

  • Kosmos Distributed File System - has Python bindings

  • Tahoe - a secure, decentralized, fault-tolerant filesystem

Trove classifiers

Topic :: System :: Distributed Computing

Editorial Notes

The above lists should be arranged in ascending alphabetical order - please respect this when adding new frameworks or tools.


CategoryArchive

ParallelProcessing (last edited 2021-05-17 13:47:48 by MordicusEtCubitus)
