Parallel computing in CIF#
Using threads#
The thread decorator#
The thread decorator automatically transforms a function so that it can be called
with multiple arguments, each handled in an individual thread.
- pycif.utils.parallel.thread(func: Callable[[V], T], processes: int | None = None, unpack: bool = False) → Callable[[Iterable[V]], Iterable[T]][source]#
Decorate a function to be called in threads with multiple arguments
Warning
If the unpack argument is False (default), the decorated function must take exactly one argument.
- Args:
- func (callable (V) -> T):
Function to be parallelised with threads.
- processes (int, optional):
Number of threads to use; if processes is None, the number returned by os.process_cpu_count() is used. Defaults to None.
- unpack (bool, optional):
Unpack the arguments when calling the decorated function. If unpack is False, func([a, b]) returns [func(a), func(b)]. If unpack is True, func([a, b]) returns [func(*a), func(*b)]. Defaults to False.
- Returns:
- callable (Iterable[V]) -> Iterable[T]:
Function parallelised on threads; the returned function also takes a chunksize argument (see the sketch after the examples below).
>>> @thread
... def foo(x):
...     return 2*x
...
>>> foo([2, 5, 4, 8])
[4, 10, 8, 16]
With unpack=True:
>>> @thread(unpack=True)
... def foo(a, b):
...     return a + b
...
>>> foo([(1, 2), (3, 4), (5, 6)])
[3, 7, 11]
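The processes and chunksize options can be combined in the same way. This is only a sketch: it assumes thread accepts processes the same way it accepts unpack above, and that the returned function takes chunksize as a keyword argument as described in the Returns section.
>>> import time
>>> @thread(processes=2)                       # assumed: processes passed like unpack above
... def slow_double(x):
...     time.sleep(0.1)                        # stands in for I/O-bound work, where threads help
...     return 2 * x
...
>>> slow_double([1, 2, 3, 4], chunksize=2)     # assumed chunksize keyword
[2, 4, 6, 8]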
The AutoThreads class#
The AutoThreads class can be used to automatically start and join threads.
- class pycif.utils.parallel.AutoThreads(timeout: float | None = None)[source]#
A group of threads that can be used as a context manager for automatic joining.
With a context manager:
>>> with AutoThreads() as threads:
...     threads.start(foo, 1)
...     threads.start(bar, 2, flag=True)
...     # Some other code
...     threads.start(bar, 24, flag=False)
...
>>> # Implicit join here
Without a context manager:
>>> threads = AutoThreads()
>>> threads.start(foo, 1)
>>> threads.start(bar, 2, flag=True)
>>> # Some other code
>>> threads.start(bar, 24, flag=False)
>>> threads.join()  # Explicit join here
- Args:
- timeout (float, optional):
Default timeout in seconds. Defaults to None.
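A self-contained variant of the examples above, with concrete worker functions in place of the placeholder foo and bar and an explicit timeout. This is only a sketch: the worker functions are made up for illustration, and it assumes the timeout is applied when the threads are joined.
>>> import time
>>> results = []
>>> def download(n):
...     time.sleep(0.1)                        # stands in for I/O-bound work
...     results.append(n)
...
>>> def cleanup(n, flag=False):
...     results.append((n, flag))
...
>>> with AutoThreads(timeout=5.0) as threads:  # assumed: timeout applies at the implicit join
...     threads.start(download, 1)
...     threads.start(cleanup, 2, flag=True)
...
>>> len(results)                               # both threads have finished by this point
2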
Using multiple cores#
The multicore decorator#
The multicore decorator automatically transforms a function so that it can be called
with multiple arguments, each handled in an individual process on a multicore CPU.
- pycif.utils.parallel.multicore(func: Callable[[V], T], processes: int | None = None, unpack: bool = False) → Callable[[Iterable[V]], Iterable[T]][source]#
Decorate a function to be called in separate processes with multiple arguments
Note
Due to pickling limitations, this decorator cannot be used at the decorated function definition and must instead be applied to the function afterward.
Warning
If the unpack argument is False (default), the decorated function must take exactly one argument.
- Args:
- func (callable (V) -> T):
Function to be parallelised on multiple cores.
- processes (int, optional):
Number of processes to use; if processes is None, the number returned by os.process_cpu_count() is used. Defaults to None.
- unpack (bool, optional):
Unpack the arguments when calling the decorated function. If unpack is False, func([a, b]) returns [func(a), func(b)]. If unpack is True, func([a, b]) returns [func(*a), func(*b)]. Defaults to False.
- Returns:
- callable (Iterable[V]) -> Iterable[T]:
Function parallelised on multiple cores; the returned function also takes a chunksize argument (see the sketch after the examples below).
>>> def foo(x):
...     return 2*x
...
>>> parallel_foo = multicore(foo)
>>> parallel_foo([2, 5, 4, 8])
[4, 10, 8, 16]
With unpack=True:
>>> def foo(a, b):
...     return a + b
...
>>> parallel_foo = multicore(foo, unpack=True)
>>> parallel_foo([(1, 2), (3, 4), (5, 6)])
[3, 7, 11]
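Because the workers run in separate processes, the usual multiprocessing caveats apply: the function to parallelise should be defined at module level so it can be pickled, and on platforms that spawn fresh interpreters the parallel call should sit behind an if __name__ == "__main__": guard. A minimal sketch along those lines, assuming the chunksize keyword and the processes behaviour described above:

from pycif.utils.parallel import multicore

def heavy(x):
    # CPU-bound work is where multiple processes pay off
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    # Wrap after the definition, as required by the note above
    parallel_heavy = multicore(heavy, processes=4)                 # assumed: caps the worker count
    print(parallel_heavy([10_000, 20_000, 30_000], chunksize=1))  # assumed chunksize keyword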