Unable to use all processors when using multiprocessing in Python

python python-3.x python-multiprocessing

I am working with Python and trying to learn multiprocessing. When I run the following code, it should run in parallel, but it only uses one processor. I can't understand why. What could be the problem, and why isn't it using all 4 cores of my PC?

The code I tried is as follows:

import multiprocessing
import time
start = time.perf_counter()

def do_something():
    print("hello")
    time.sleep(1)
    print("done")

p1 = multiprocessing.Process(target=do_something())
p2 = multiprocessing.Process(target=do_something())
p1.start()
p2.start()
p1.join()
p2.join()
finish = time.perf_counter()
print(f'finished in {round(finish-start,1)} sec')

Result:

hello
done
hello
done
finished in 2.1 sec

It should finish in about 1 second.

The code I used to find the number of cores:

import multiprocessing
print("Number of cpu : ", multiprocessing.cpu_count())

Result:

Number of cpu :  4

Another piece of code I tried is:

from multiprocessing import Lock, Process, Queue, current_process
import time
import queue # imported for using queue.Empty exception


def do_job(tasks_to_accomplish, tasks_that_are_done):
    while True:
        try:
            '''
                try to get task from the queue. get_nowait() function will 
                raise queue.Empty exception if the queue is empty. 
                get(False) would do the same thing.
            '''
            task = tasks_to_accomplish.get_nowait()
        except queue.Empty:

            break
        else:
            '''
                if no exception has been raised, add the task completion 
                message to task_that_are_done queue
            '''
            print(task)
            tasks_that_are_done.put(task + ' is done by ' + current_process().name)
            time.sleep(.5)
    return True


def main():
    number_of_task = 10
    number_of_processes = 4
    tasks_to_accomplish = Queue()
    tasks_that_are_done = Queue()
    processes = []

    for i in range(number_of_task):
        tasks_to_accomplish.put("Task no " + str(i))

    # creating processes
    for w in range(number_of_processes):
        p = Process(target=do_job(tasks_to_accomplish, tasks_that_are_done))
        processes.append(p)
        p.start()

    # completing process
    for p in processes:
        p.join()

    # print the output
    while not tasks_that_are_done.empty():
        print(tasks_that_are_done.get())

    return True


if __name__ == '__main__':
    main()

Result:

Task no 0
Task no 1
Task no 2
Task no 3
Task no 4
Task no 5
Task no 6
Task no 7
Task no 8
Task no 9
Task no 0 is done by MainProcess
Task no 1 is done by MainProcess
Task no 2 is done by MainProcess
Task no 3 is done by MainProcess
Task no 4 is done by MainProcess
Task no 5 is done by MainProcess
Task no 6 is done by MainProcess
Task no 7 is done by MainProcess
Task no 8 is done by MainProcess
Task no 9 is done by MainProcess

After the suggestion, I made the following change:

import multiprocessing
import time
start = time.perf_counter()

def do_something():
    print("hello")
    time.sleep(1)
    print("done")

p1 = multiprocessing.Process(target=do_something)
p2 = multiprocessing.Process(target=do_something)
p1.start()
p2.start()
p1.join()
p2.join()
finish = time.perf_counter()
print(f'finished in {round(finish-start,1)} sec')

The result I get is:

finished in 0.2 sec

Dan*_*man 7

The target must be a callable.

p1 = multiprocessing.Process(target=do_something)

The way you were using it, you called the function in the main process and passed its return value to multiprocessing as the target.
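
A minimal corrected sketch of the first example, assuming CPython 3 with the default start method; the if __name__ == '__main__': guard is needed on platforms that spawn child processes by re-importing the module (Windows, and macOS on recent Python versions):

import multiprocessing
import time


def do_something():
    print("hello")
    time.sleep(1)
    print("done")


if __name__ == '__main__':
    start = time.perf_counter()

    # Pass the function object itself; multiprocessing calls it in the child process.
    p1 = multiprocessing.Process(target=do_something)
    p2 = multiprocessing.Process(target=do_something)
    p1.start()
    p2.start()
    p1.join()
    p2.join()

    finish = time.perf_counter()
    # Both 1-second sleeps overlap, so this should report roughly 1 sec.
    print(f'finished in {round(finish - start, 1)} sec')

The queue-based example presumably needs the same change: pass the callable and its arguments separately, i.e. Process(target=do_job, args=(tasks_to_accomplish, tasks_that_are_done)), instead of calling do_job in the main process.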