Parallel Processing and Multiprocessing in Python

A number of Python-related libraries exist for programming solutions that employ either multiple CPUs or multicore CPUs in a [http://en.wikipedia.org/wiki/Symmetric_multiprocessing symmetric multiprocessing (SMP)] or shared memory environment, or potentially huge numbers of computers in a cluster or grid environment. This page seeks to provide references to the different libraries and solutions available.

Symmetric Multiprocessing

Some libraries, often to preserve some similarity with more familiar concurrency models (such as Python's threading API), employ parallel processing techniques which limit their relevance to SMP-based hardware, mostly due to the usage of process creation functions such as the UNIX fork system call. However, a technique called process migration may permit such libraries to be useful in certain kinds of computational clusters as well, notably single-system image cluster solutions ([http://openmosix.sourceforge.net/ OpenMosix] being one such example).

  • [http://www.python.org/pypi/parallel parallel/pprocess] - fork-based process creation with asynchronous channel-based communications

  • [http://poshmodule.sourceforge.net/ POSH] (Python Object Sharing) - an extension module that allows objects to be placed in shared memory, so that concurrent processes can communicate simply by assigning objects to shared container objects

  • [http://www.parallelpython.com/ ppsmp] - process-based, job-oriented solution (source code not available, has restrictive licence)

  • [http://www.python.org/pypi/processing processing] - fork-based process creation (using threads on other platforms), implementing an API like the standard library's threading API and providing familiar objects such as queues and semaphores through the use of a manager process

  • [http://www.python.org/pypi/remoteD remoteD] - fork-based process creation with a dictionary-based communications paradigm

Advantages of such approaches include convenient process creation and the ability to share resources. Indeed, the fork system call permits efficient sharing of common read-only data structures on modern UNIX-like operating systems.
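To make the fork-and-channel technique concrete, here is a minimal, UNIX-only sketch that uses nothing but the standard os module. It is not the API of any of the libraries listed above, merely an illustration of the underlying mechanism they build on: each task runs in a forked child process and its result travels back to the parent over a pipe.

{{{
import os

def worker(task):
    # Stand-in for an expensive computation.
    return task * task

def spawn(task):
    # Fork a child that writes its result to a pipe; return (pid, read_fd).
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child process: compute, send the result back to the parent, exit.
        os.close(read_fd)
        os.write(write_fd, str(worker(task)).encode())
        os.close(write_fd)
        os._exit(0)
    # Parent process: keep only the read end of the channel.
    os.close(write_fd)
    return pid, read_fd

if __name__ == "__main__":
    # Start all the children first so that they run concurrently,
    # then collect each result from its pipe.
    children = [spawn(n) for n in range(5)]
    results = []
    for pid, read_fd in children:
        results.append(int(os.read(read_fd, 1024)))
        os.close(read_fd)
        os.waitpid(pid, 0)
    print(results)  # [0, 1, 4, 9, 16]
}}}

Libraries such as parallel/pprocess and processing wrap this pattern in the higher-level channel- and queue-based interfaces described above, and add the bookkeeping (scheduling, error handling, limiting the number of live children) that this sketch omits.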

Cluster Computing

Unlike SMP architectures and especially in contrast to thread-based concurrency, cluster (and grid) architectures offer high scalability due to the relative absence of shared resources, although this can make the programming paradigms seem somewhat alien to uninitiated developers. In this domain, some overlap with other distributed computing technologies may be observed (see DistributedProgramming for more details).

  • [http://seweb.se.wtb.tue.nl/~hat/batchlib.html batchlib] - a distributed computation system with automatic selection of processing services (no longer developed)

  • [http://seweb.se.wtb.tue.nl/~hat/execproxy.html exec_proxy] - a system for executing arbitrary programs and transferring files (no longer developed)

  • [http://codespeak.net/py/current/doc/execnet.html py.execnet] - asynchronous execution of client-provided code fragments

  • [http://pyro.sourceforge.net/ Pyro] (PYthon Remote Objects) - a distributed object system that takes care of the network communication between your objects once you split them over different machines on the network

  • [http://www.cs.tut.fi/~ask/rthread/index.html rthread] - distributed execution of functions via SSH

  • [http://dirac.cnrs-orleans.fr/ScientificPython/ ScientificPython] contains three subpackages for parallel computing:

    • Scientific.DistributedComputing.MasterSlave implements a master-slave model in which a master process requests computational tasks that are executed by an arbitrary number of slave processes (a generic sketch of this pattern follows this list). Its strong points are ease of use and the possibility of working with a varying number of slave processes; it is less suited to the construction of large, modular parallel applications, but ideal for parallel scripting. Uses [http://pyro.sourceforge.net/ Pyro].

    • Scientific.BSP is an object-oriented implementation of the [http://www.bsp-worldwide.org/ Bulk Synchronous Parallel (BSP)] model for parallel computing, whose main advantages over message passing are the impossibility of deadlocks and the possibility of evaluating the computational cost of an algorithm as a function of machine parameters. The Python implementation of BSP features parallel data objects, communication of arbitrary Python objects, and a framework for defining distributed data objects that implement parallelized methods.

    • Scientific.MPI is an interface to MPI that emphasizes the possibility of combining Python and C code, both using MPI. Contrary to pypar and pyMPI, it does not support the communication of arbitrary Python objects, being instead optimized for Numeric/NumPy arrays.

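The master/slave pattern mentioned above can be illustrated with a generic sketch built only on the standard library's xmlrpc modules; none of the frameworks listed here expose exactly this code, and the host names node1 and node2 and port 8000 are hypothetical placeholders. Each slave machine runs a small server that waits for compute requests, while the master owns the task list and decides which slave receives which task.

{{{
# Slave side (run one copy on each worker machine), e.g. slave.py:
from xmlrpc.server import SimpleXMLRPCServer

def compute(task):
    # Stand-in for the real computational task requested by the master.
    return task * task

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(compute, "compute")
server.serve_forever()
}}}

{{{
# Master side, e.g. master.py: hand tasks to the known slaves and
# collect the results.
from xmlrpc.client import ServerProxy

slaves = [ServerProxy("http://node1:8000"), ServerProxy("http://node2:8000")]
tasks = list(range(10))

results = []
for i, task in enumerate(tasks):
    # Simple round-robin distribution; real frameworks schedule tasks
    # dynamically and overlap the remote calls instead of waiting for
    # each one in turn.
    results.append(slaves[i % len(slaves)].compute(task))

print(results)
}}}

Frameworks such as Scientific.DistributedComputing.MasterSlave add what this sketch leaves out, notably dynamic scheduling and the ability to work with a varying number of slave processes.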

Grid Computing

Editorial Notes

The above lists should be arranged in ascending alphabetical order - please respect this when adding new frameworks or tools.
