
I have an existing Python script that I'd like to modify to run more threads (subprocesses). For this example, say I want it modified to run 3 threads at the same time.

For context, the script just generates client requests to a web server and measures the response time.

#!/usr/bin/python26

from library.rpc.client import EllisClient

ec = EllisClient(ellis_user='fred', ellis_pass='flintstone')
params = {'domain_name': 'alestel.com', 'mig_name': 'terramexico2'}


def test_response():
    L = ec.get_full_domain(params)

if __name__ == '__main__':
    from timeit import Timer

    t = Timer("test_response()", "from __main__ import test_response")
    print t.timeit(number=10)

As a relative beginner, the documentation isn't very clear to me. Any suggestions would be appreciated.


1 Answer


If you want explicit control over the processes you're running, you want multiprocessing.Process:

def test_3_parallel_responses():
    # requires: import multiprocessing
    procs = [multiprocessing.Process(target=test_response) for _ in range(3)]
    for proc in procs:
        proc.start()
    for proc in procs:
        proc.join()

That's all there is to it.

There are various differences between threads and processes, but the big one is that you don't get to implicitly share values between processes; you have to pass them around (through the startup args and return value, through a Queue, or through some external means like a socket or pipe) or explicitly share them (through a Value or Array, or some external means like a file).

For a more realistic use case, you usually don't want to directly control what the processes are doing; you want to create a pool of processes, and just queue up jobs to get done by whichever process is free next. For that, you want either multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor. The latter is a bit simpler, but requires Python 3.2 or a third-party library, so I'll show the former:

def test_3_pooled_responses():
    pool = multiprocessing.Pool(3)
    for i in range(3):
        # apply() blocks until each call returns, which would run the
        # three calls one after another; apply_async queues them all
        # so they can run in parallel
        pool.apply_async(test_response)
    pool.close()
    pool.join()

More commonly, you want to actually pass parameters to the function. In the simplest case, this actually makes things even simpler—if you can write the sequential version as a list comprehension or map call, you can write the parallel version as a pool.map call. Let's say you had a test_response(host) call that returns some value, and you wanted to run it on host1, host2, and host3:

def test_3_pooled_responses():
    pool = multiprocessing.Pool(3)
    responses = pool.map(test_response, ['host1', 'host2', 'host3'])
    pool.close()
    pool.join()
answered 2013-02-01T22:58:20.757