alb*_*uno 1210 python concurrency multithreading python-multithreading
I am trying to understand threading in Python. I've looked at the documentation and examples, but quite frankly, many examples are overly sophisticated and I'm having trouble understanding them.
How do you clearly show tasks being divided for multi-threading?
phi*_*hem 1352
Since this question was asked in 2010, there has been real simplification in how to do simple multithreading with Python with map and pool.
The code below comes from an article/blog post that you should definitely check out (no affiliation): Parallelism in one line: A Better Model for Day to Day Threading Tasks. I'll summarize below; it ends up being just a few lines of code:
from multiprocessing.dummy import Pool as ThreadPool
pool = ThreadPool(4)
results = pool.map(my_function, my_array)
Which is the multithreaded version of:
results = []
for item in my_array:
results.append(my_function(item))
Description
Map is a cool little function, and the key to easily injecting parallelism into your Python code. For those unfamiliar, map is something lifted from functional languages like Lisp. It is a function which maps another function over a sequence.
Map handles the iteration over the sequence for us, applies the function, and stores all of the results in a handy list at the end.
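For readers new to map, here is a two-line illustration of the built-in version (a minimal sketch, not part of the original post):

# map applies the given function to each item of the sequence
squared = list(map(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]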
Implementation
Parallel versions of the map function are provided by two libraries: multiprocessing, and its little-known but equally fantastic stepchild, multiprocessing.dummy.
multiprocessing.dummy
It is identical to the multiprocessing module, but uses threads instead (an important distinction: use multiple processes for CPU-intensive tasks, and threads for (and during) I/O):
multiprocessing.dummy replicates the API of multiprocessing, but is no more than a wrapper around the threading module.
import urllib2
from multiprocessing.dummy import Pool as ThreadPool
urls = [
'http://www.python.org',
'http://www.python.org/about/',
'http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html',
'http://www.python.org/doc/',
'http://www.python.org/download/',
'http://www.python.org/getit/',
'http://www.python.org/community/',
'https://wiki.python.org/moin/',
]
# make the Pool of workers
pool = ThreadPool(4)
# open the urls in their own threads
# and return the results
results = pool.map(urllib2.urlopen, urls)
# close the pool and wait for the work to finish
pool.close()
pool.join()
And the timing results:
Single thread: 14.4 seconds
4 Pool: 3.1 seconds
8 Pool: 1.4 seconds
13 Pool: 1.3 seconds
Passing multiple arguments (works like this only in Python 3.3 and later):
To pass multiple arrays:
results = pool.starmap(function, zip(list_a, list_b))
Or to pass a constant and an array:
results = pool.starmap(function, zip(itertools.repeat(constant), list_a))
If you are using an earlier version of Python, you can pass multiple arguments via this workaround.
(Thanks to user136036 for the helpful comment.)
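One common form of that workaround (a sketch; function_star is an illustrative name, not necessarily the exact code linked) wraps the function so that it accepts a single tuple:

def function_star(args):
    # Unpack the tuple and call the real multi-argument function
    return function(*args)

results = pool.map(function_star, zip(list_a, list_b))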
Ale*_*lli 705
Here's a simple example: you need to try a few alternative URLs and return the contents of the first one to respond.
import Queue
import threading
import urllib2
# called by each thread
def get_url(q, url):
q.put(urllib2.urlopen(url).read())
theurls = ["http://google.com", "http://yahoo.com"]
q = Queue.Queue()
for u in theurls:
t = threading.Thread(target=get_url, args = (q,u))
t.daemon = True
t.start()
s = q.get()
print s
This is a case where threading is used as a simple optimization: each subthread is waiting for a URL to resolve and respond, in order to put its contents on the queue; each thread is a daemon (it won't keep the process up if the main thread ends, which is more common than not); the main thread starts all subthreads, does a get on the queue to wait until one of them has done a put, then emits the results and terminates (which takes down any subthreads that might still be running, since they're daemon threads).
Proper use of threads in Python is invariably connected to I/O operations (since CPython doesn't use multiple cores to run CPU-bound tasks anyway, the only reason for threading is not blocking the process while there's a wait for some I/O). By the way, queues are almost invariably the best way to farm out work to threads and/or collect the work's results, and they're intrinsically thread-safe, so they save you from worrying about locks, conditions, events, semaphores, and other inter-thread coordination/communication concepts.
Mic*_*yan 253
Note: For actual parallelization in Python, you should use the multiprocessing module to fork multiple processes that execute in parallel (due to the global interpreter lock, Python threads provide interleaving, but they are in fact executed serially, not in parallel, and are only useful when interleaving I/O operations).
However, if you are merely looking for interleaving (or are doing I/O operations that can be parallelized despite the global interpreter lock), then the threading module is the place to start. As a really simple example, let's consider the problem of summing a big range by summing subranges in parallel:
import threading
class SummingThread(threading.Thread):
def __init__(self,low,high):
super(SummingThread, self).__init__()
self.low=low
self.high=high
self.total=0
def run(self):
for i in range(self.low,self.high):
self.total+=i
thread1 = SummingThread(0,500000)
thread2 = SummingThread(500000,1000000)
thread1.start() # This actually causes the thread to run
thread2.start()
thread1.join() # This waits until the thread has completed
thread2.join()
# At this point, both threads have completed
result = thread1.total + thread2.total
print result
Note that the above is a very silly example, as it does absolutely no I/O and, due to the global interpreter lock, will be executed serially in CPython, albeit interleaved (with the added overhead of context switching).
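For contrast, here is a minimal sketch (Python 3 syntax, not part of the original answer) of the same summation done with the multiprocessing module, which can actually use multiple cores:

import multiprocessing

def sum_range(bounds):
    # Sum one half-open subrange [low, high)
    low, high = bounds
    return sum(range(low, high))

if __name__ == '__main__':
    # Two worker processes, each summing half of the range
    with multiprocessing.Pool(2) as pool:
        partial_sums = pool.map(sum_range, [(0, 500000), (500000, 1000000)])
    print(sum(partial_sums))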
Kai*_*Kai 98
As others have mentioned, CPython can use threads only for I/O waits, due to the GIL. If you want to benefit from multiple cores for CPU-bound tasks, use multiprocessing:
from multiprocessing import Process
def f(name):
print 'hello', name
if __name__ == '__main__':
p = Process(target=f, args=('bob',))
p.start()
p.join()
Dou*_*ams 90
Just a note: a queue is not required for threading.
This is the simplest example I could imagine, showing ten threads running concurrently.
import threading
from random import randint
from time import sleep
def print_number(number):
# Sleeps a random 1 to 10 seconds
rand_int_var = randint(1, 10)
sleep(rand_int_var)
print "Thread " + str(number) + " slept for " + str(rand_int_var) + " seconds"
thread_list = []
for i in range(1, 11):
# Instantiates the thread
# (i) does not make a sequence, so (i,)
t = threading.Thread(target=print_number, args=(i,))
# Sticks the thread in a list so that it remains accessible
thread_list.append(t)
# Starts threads
for thread in thread_list:
thread.start()
# This blocks the calling thread until the thread whose join() method is called is terminated.
# From http://docs.python.org/2/library/threading.html#thread-objects
for thread in thread_list:
thread.join()
# Demonstrates that the main process waited for threads to complete
print "Done"
Jim*_*Jty 48
The answer from Alex Martelli helped me. However, here is a modified version that I found more useful (at least to me).
try:
# for python3
import queue
from urllib.request import urlopen
except:
# for python2
import Queue as queue
from urllib2 import urlopen
import threading
worker_data = ['http://google.com', 'http://yahoo.com', 'http://bing.com']
#load up a queue with your data, this will handle locking
q = queue.Queue()
for url in worker_data:
q.put(url)
#define a worker function
def worker(url_queue):
queue_full = True
while queue_full:
try:
#get your data off the queue, and do some work
url = url_queue.get(False)
data = urlopen(url).read()
print(len(data))
except queue.Empty:
queue_full = False
#create as many threads as you want
thread_count = 5
for i in range(thread_count):
t = threading.Thread(target=worker, args = (q,))
t.start()
dol*_*hin 24
I found this very useful: create as many threads as there are cores and let them execute a (large) number of tasks (in this case, calling a shell program):
import Queue
import threading
import multiprocessing
import subprocess
q = Queue.Queue()
for i in range(30): #put 30 tasks in the queue
q.put(i)
def worker():
while True:
item = q.get()
#execute a task: call a shell program and wait until it completes
subprocess.call("echo "+str(item), shell=True)
q.task_done()
cpus=multiprocessing.cpu_count() #detect number of cores
print("Creating %d threads" % cpus)
for i in range(cpus):
t = threading.Thread(target=worker)
t.daemon = True
t.start()
q.join() #block until all tasks are done
sta*_*fry 23
Given a function, f, thread it like this:
import threading
threading.Thread(target=f).start()
To pass arguments to f:
threading.Thread(target=f, args=(a,b,c)).start()
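Keyword arguments can be passed the same way through the kwargs parameter of threading.Thread (a small illustrative addition; x and y are assumed parameters of f, not from the original answer):

threading.Thread(target=f, kwargs={'x': 1, 'y': 2}).start()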
Jer*_*ril 21
Python 3 has the facility of launching parallel tasks. This makes our work easier.
Here is an insight:
ThreadPoolExecutor example
import concurrent.futures
import urllib.request
URLS = ['http://www.foxnews.com/',
'http://www.cnn.com/',
'http://europe.wsj.com/',
'http://www.bbc.co.uk/',
'http://some-made-up-domain.com/']
# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
with urllib.request.urlopen(url, timeout=timeout) as conn:
return conn.read()
# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
# Start the load operations and mark each future with its URL
future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
for future in concurrent.futures.as_completed(future_to_url):
url = future_to_url[future]
try:
data = future.result()
except Exception as exc:
print('%r generated an exception: %s' % (url, exc))
else:
print('%r page is %d bytes' % (url, len(data)))
ProcessPoolExecutor
import concurrent.futures
import math
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419]
def is_prime(n):
if n % 2 == 0:
return False
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
def main():
with concurrent.futures.ProcessPoolExecutor() as executor:
for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
print('%d is prime: %s' % (number, prime))
if __name__ == '__main__':
main()
dvr*_*d77 18
For me, the perfect example of threading is monitoring asynchronous events. Look at this code.
# thread_test.py
import threading
import time
class Monitor(threading.Thread):
def __init__(self, mon):
threading.Thread.__init__(self)
self.mon = mon
def run(self):
while True:
if self.mon[0] == 2:
print "Mon = 2"
self.mon[0] = 3
You can play with this code by opening an IPython session and doing something like:
>>>from thread_test import Monitor
>>>a = [0]
>>>mon = Monitor(a)
>>>mon.start()
>>>a[0] = 2
Mon = 2
>>>a[0] = 2
Mon = 2
Wait a few minutes:
>>>a[0] = 2
Mon = 2
Shu*_*ary 18
Using the blazing new concurrent.futures module:
def sqr(val):
import time
time.sleep(0.1)
return val * val
def process_result(result):
print(result)
def process_these_asap(tasks):
import concurrent.futures
with concurrent.futures.ProcessPoolExecutor() as executor:
futures = []
for task in tasks:
futures.append(executor.submit(sqr, task))
for future in concurrent.futures.as_completed(futures):
process_result(future.result())
# Or instead of all this just do:
# results = executor.map(sqr, tasks)
# list(map(process_result, results))
def main():
tasks = list(range(10))
print('Processing {} tasks'.format(len(tasks)))
process_these_asap(tasks)
print('Done')
return 0
if __name__ == '__main__':
import sys
sys.exit(main())
To anyone who has gotten their hands dirty with Java before, the executor approach may seem familiar.
Also on a side note: to keep the universe sane, don't forget to close your pools/executors if you don't use the with context (which is so awesome that it does it for you).
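If you do manage an executor by hand, shutting it down looks roughly like this (a minimal sketch reusing the sqr function from the code above):

from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)
future = executor.submit(sqr, 5)
print(future.result())
# Without a with-block you must shut down explicitly;
# wait=True blocks until all pending work has finished.
executor.shutdown(wait=True)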
Yib*_*ibo 15
Most documentation and tutorials use Python's Threading and Queue modules, and they can seem overwhelming for beginners.
Perhaps consider the concurrent.futures.ThreadPoolExecutor module of Python 3. Combined with the with clause and list comprehension, it can be a real charm.
from concurrent.futures import ThreadPoolExecutor, as_completed
def get_url(url):
# Your actual program here. Using threading.Lock() if necessary
return ""
# List of urls to fetch
urls = ["url1", "url2"]
with ThreadPoolExecutor(max_workers = 5) as executor:
# Create threads
futures = {executor.submit(get_url, url) for url in urls}
# as_completed() gives you the threads once finished
for f in as_completed(futures):
# Get the results
rs = f.result()
Pir*_*App 14
I have seen a lot of examples here with no real work being performed, and they were mostly CPU-bound. Here is an example of a CPU-bound task that computes all prime numbers between 10 million and 10.05 million. I have used all four methods here:
import math
import timeit
import threading
import multiprocessing
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
def time_stuff(fn):
"""
Measure time of execution of a function
"""
def wrapper(*args, **kwargs):
t0 = timeit.default_timer()
fn(*args, **kwargs)
t1 = timeit.default_timer()
print("{} seconds".format(t1 - t0))
return wrapper
def find_primes_in(nmin, nmax):
"""
Compute a list of prime numbers between the given minimum and maximum arguments
"""
primes = []
#Loop from minimum to maximum
for current in range(nmin, nmax + 1):
#Take the square root of the current number
sqrt_n = int(math.sqrt(current))
found = False
#Check if any number from 2 to the square root + 1 divides the current number under consideration
for number in range(2, sqrt_n + 1):
#If divisible we have found a factor, hence this is not a prime number, lets move to the next one
if current % number == 0:
found = True
break
#If not divisible, add this number to the list of primes that we have found so far
if not found:
primes.append(current)
#I am merely printing the length of the array containing all the primes but feel free to do what you want
print(len(primes))
@time_stuff
def sequential_prime_finder(nmin, nmax):
"""
Use the main process and main thread to compute everything in this case
"""
find_primes_in(nmin, nmax)
@time_stuff
def threading_prime_finder(nmin, nmax):
"""
If the minimum is 1000 and the maximum is 2000 and we have 4 workers
1000 - 1250 to worker 1
1250 - 1500 to worker 2
1500 - 1750 to worker 3
1750 - 2000 to worker 4
so lets split the min and max values according to the number of workers
"""
nrange = nmax - nmin
threads = []
for i in range(8):
start = int(nmin + i * nrange/8)
end = int(nmin + (i + 1) * nrange/8)
#Start the thread with the min and max split up to compute
#Parallel computation will not work here due to GIL since this is a CPU bound task
t = threading.Thread(target = find_primes_in, args = (start, end))
threads.append(t)
t.start()
#Don't forget to wait for the threads to finish
for t in threads:
t.join()
@time_stuff
def processing_prime_finder(nmin, nmax):
"""
Split the min, max interval similar to the threading method above but use processes this time
"""
nrange = nmax - nmin
processes = []
for i in range(8):
start = int(nmin + i * nrange/8)
end = int(nmin + (i + 1) * nrange/8)
p = multiprocessing.Process(target = find_primes_in, args = (start, end))
processes.append(p)
p.start()
for p in processes:
p.join()
@time_stuff
def thread_executor_prime_finder(nmin, nmax):
"""
Split the min max interval similar to the threading method but use thread pool executor this time
This method is slightly faster than using pure threading as the pools manage threads more efficiently
This method is still slow due to the GIL limitations since we are doing a CPU bound task
"""
nrange = nmax - nmin
with ThreadPoolExecutor(max_workers = 8) as e:
for i in range(8):
start = int(nmin + i * nrange/8)
end = int(nmin + (i + 1) * nrange/8)
e.submit(find_primes_in, start, end)
@time_stuff
def process_executor_prime_finder(nmin, nmax):
"""
Split the min max interval similar to the threading method but use the process pool executor
This is the fastest method recorded so far as it manages processes efficiently and overcomes GIL limitations
RECOMMENDED METHOD FOR CPU BOUND TASKS
"""
nrange = nmax - nmin
with ProcessPoolExecutor(max_workers = 8) as e:
for i in range(8):
start = int(nmin + i * nrange/8)
end = int(nmin + (i + 1) * nrange/8)
e.submit(find_primes_in, start, end)
def main():
nmin = int(1e7)
nmax = int(1.05e7)
print("Sequential Prime Finder Starting")
sequential_prime_finder(nmin, nmax)
print("Threading Prime Finder Starting")
threading_prime_finder(nmin, nmax)
print("Processing Prime Finder Starting")
processing_prime_finder(nmin, nmax)
print("Thread Executor Prime Finder Starting")
thread_executor_prime_finder(nmin, nmax)
print("Process Executor Finder Starting")
process_executor_prime_finder(nmin, nmax)
main()
Here are the results on my Mac OS X four-core machine:
Sequential Prime Finder Starting
9.708213827005238 seconds
Threading Prime Finder Starting
9.81836523200036 seconds
Processing Prime Finder Starting
3.2467174359990167 seconds
Thread Executor Prime Finder Starting
10.228896902000997 seconds
Process Executor Finder Starting
2.656402041000547 seconds
Pit*_*tto 13
I would like to contribute a simple example, together with the explanations I found useful when I had to tackle this problem myself.
In this answer you will find some information about Python's GIL (global interpreter lock), a simple day-to-day example written using multiprocessing.dummy, and some simple benchmarks.
Global Interpreter Lock (GIL)
Python doesn't allow multi-threading in the truest sense of the word. It has a multi-threading package, but if you want to multi-thread to speed your code up, then it's usually not a good idea to use it.
Python has a construct called the global interpreter lock (GIL). The GIL makes sure that only one of your 'threads' can execute at any one time. A thread acquires the GIL, does a little work, then passes the GIL on to the next thread.
This happens very quickly, so to the human eye it may seem like your threads are executing in parallel, but they are really just taking turns using the same CPU core.
All this GIL passing adds overhead to execution. This means that if you want to make your code run faster, then using the threading package often isn't a good idea.
There are reasons to use Python's threading package. If you want to run some things simultaneously, and efficiency is not a concern, then it's totally fine and convenient. Or if you are running code that needs to wait for something (like some I/O), then it could make a lot of sense. But the threading library won't let you use extra CPU cores.
Multi-threading can be outsourced to the operating system (by doing multi-processing), to some external application that calls your Python code (for example, Spark or Hadoop), or to some code that your Python code calls (for example: you could have your Python code call a C function that does the expensive multi-threaded stuff).
Why this matters
Because lots of people spend a lot of time trying to find bottlenecks in their fancy Python multi-threaded code before they learn what the GIL is.
Once this information is clear, here's my code:
#!/bin/python
from multiprocessing.dummy import Pool
from subprocess import PIPE,Popen
import time
import os
# In the variable pool_size we define the "parallelness".
# For CPU-bound tasks, it doesn't make sense to create more Pool processes
# than you have cores to run them on.
#
# On the other hand, if you are using I/O-bound tasks, it may make sense
# to create a quite a few more Pool processes than cores, since the processes
# will probably spend most their time blocked (waiting for I/O to complete).
pool_size = 8
def do_ping(ip):
if os.name == 'nt':
print ("Using Windows Ping to " + ip)
proc = Popen(['ping', ip], stdout=PIPE)
return proc.communicate()[0]
else:
print ("Using Linux / Unix Ping to " + ip)
proc = Popen(['ping', ip, '-c', '4'], stdout=PIPE)
return proc.communicate()[0]
os.system('cls' if os.name=='nt' else 'clear')
print ("Running using threads\n")
start_time = time.time()
pool = Pool(pool_size)
website_names = ["www.google.com","www.facebook.com","www.pinterest.com","www.microsoft.com"]
result = {}
for website_name in website_names:
result[website_name] = pool.apply_async(do_ping, args=(website_name,))
pool.close()
pool.join()
print ("\n--- Execution took {} seconds ---".format((time.time() - start_time)))
# Now we do the same without threading, just to compare time
print ("\nRunning NOT using threads\n")
start_time = time.time()
for website_name in website_names:
do_ping(website_name)
print ("\n--- Execution took {} seconds ---".format((time.time() - start_time)))
# Here's one way to print the final output from the threads
output = {}
for key, value in result.items():
output[key] = value.get()
print ("\nOutput aggregated in a Dictionary:")
print (output)
print ("\n")
print ("\nPretty printed output: ")
for key, value in output.items():
print (key + "\n")
print (value)
Chi*_*ora 12
Here is a very simple example of a CSV import using threading. (Library inclusions may differ for different purposes.)
Helper function:
from threading import Thread
from project import app
import csv
def import_handler(csv_file_name):
thr = Thread(target=dump_async_csv_data, args=[csv_file_name])
thr.start()
def dump_async_csv_data(csv_file_name):
with app.app_context():
with open(csv_file_name) as File:
reader = csv.DictReader(File)
for row in reader:
#DB operation/query
Driver function:
import_handler(csv_file_name)
Ben*_*ari 10
Borrowing from this article, we learn how to choose between multi-threading, multi-processing, and async/asyncio, and their usage.
Python 3 has a new built-in library for concurrency and parallelism: concurrent.futures.
So I'll demonstrate, through an experiment, running four tasks (i.e., the .sleep() method) with a thread pool:
from concurrent.futures import ThreadPoolExecutor, as_completed
from time import sleep, time
def concurrent(max_worker):
futures = []
tic = time()
with ThreadPoolExecutor(max_workers=max_worker) as executor:
futures.append(executor.submit(sleep, 2)) # Two seconds sleep
futures.append(executor.submit(sleep, 1))
futures.append(executor.submit(sleep, 7))
futures.append(executor.submit(sleep, 3))
for future in as_completed(futures):
if future.result() is not None:
print(future.result())
print(f'Total elapsed time by {max_worker} workers:', time()-tic)
concurrent(5)
concurrent(4)
concurrent(3)
concurrent(2)
concurrent(1)
Output:
Total elapsed time by 5 workers: 7.007831811904907
Total elapsed time by 4 workers: 7.007944107055664
Total elapsed time by 3 workers: 7.003149509429932
Total elapsed time by 2 workers: 8.004627466201782
Total elapsed time by 1 workers: 13.013478994369507
[NOTE]: If you have a CPU-bound task and want processes instead of threads (multiprocessing rather than threading), you can replace ThreadPoolExecutor with ProcessPoolExecutor.
Multi-threading with a simple example will be helpful. You can run it and easily understand how multi-threading works in Python. I used a lock to prevent access by other threads until the previous threads finished their work. With this line of code:
tLock = threading.BoundedSemaphore(value = 4)
you can allow a few threads at a time; the rest are held back and will run later, after the earlier ones have finished.
import threading
import time
#tLock = threading.Lock()
tLock = threading.BoundedSemaphore(value=4)
def timer(name, delay, repeat):
print "\r\nTimer: ", name, " Started"
tLock.acquire()
print "\r\n", name, " has the acquired the lock"
while repeat > 0:
time.sleep(delay)
print "\r\n", name, ": ", str(time.ctime(time.time()))
repeat -= 1
print "\r\n", name, " is releasing the lock"
tLock.release()
print "\r\nTimer: ", name, " Completed"
def Main():
t1 = threading.Thread(target=timer, args=("Timer1", 2, 5))
t2 = threading.Thread(target=timer, args=("Timer2", 3, 5))
t3 = threading.Thread(target=timer, args=("Timer3", 4, 5))
t4 = threading.Thread(target=timer, args=("Timer4", 5, 5))
t5 = threading.Thread(target=timer, args=("Timer5", 0.1, 5))
t1.start()
t2.start()
t3.start()
t4.start()
t5.start()
print "\r\nMain Complete"
if __name__ == "__main__":
Main()
None of the previous solutions actually used multiple cores on my GNU/Linux server (where I don't have administrator rights). They just ran on a single core.
I used the lower-level os.fork interface to spawn multiple processes. This is the code that worked for me:
from os import fork
values = ['different', 'values', 'for', 'threads']
for i in range(len(values)):
p = fork()
if p == 0:
my_function(values[i])
break
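One caveat with raw os.fork: the parent should also wait for (reap) its children, or they are left behind as zombie processes, and each child should exit once its work is done. A hedged variant of the same idea (my_function is the same placeholder as in the answer above):

import os

values = ['different', 'values', 'for', 'threads']
children = []
for value in values:
    pid = os.fork()
    if pid == 0:
        # Child: fork() returned 0, so do this child's work and exit
        # immediately instead of falling through to the parent's code
        my_function(value)
        os._exit(0)
    children.append(pid)

# Parent: wait for every child so none is left as a zombie
for pid in children:
    os.waitpid(pid, 0)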