The f-string (f'str') string formatting was introduced recently, in Python 3.6 (link). I am trying to compare the .format() method with the f'{expr}' syntax.
f ' <text> { <expression> <optional !s, !r, or !a> <optional : format specifier> } <text> ... '
Below is a list comprehension that converts Fahrenheit temperatures to Celsius.
Using the .format() method, each result is printed as a float to two decimal places, with the string "Celsius" appended:
Fahrenheit = [32, 60, 102]
F_to_C = ['{:.2f} Celsius'.format((x - 32) * (5/9)) for x in Fahrenheit]
print(F_to_C)
# output ['0.00 Celsius', '15.56 Celsius', '38.89 Celsius']
I am trying to replicate the above using the f'{expr}' syntax:
print(f'{[((x - 32) * (5/9)) for x in Fahrenheit]}') # This prints the float numbers without formatting
# output: …
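The format specifier can go directly inside the f-string's braces, after the expression. A minimal sketch (assuming Python 3.6+) that reproduces the .format() output by applying the f-string to each element rather than to the whole list:

```python
Fahrenheit = [32, 60, 102]

# The :.2f specifier sits inside the braces, after the expression,
# so each element is formatted individually.
F_to_C = [f'{(x - 32) * (5 / 9):.2f} Celsius' for x in Fahrenheit]
print(F_to_C)
# output: ['0.00 Celsius', '15.56 Celsius', '38.89 Celsius']
```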
Why does the code below work only with multiprocessing.dummy, and not with plain multiprocessing?
import urllib.request
#from multiprocessing.dummy import Pool #this works
from multiprocessing import Pool
urls = ['http://www.python.org', 'http://www.yahoo.com','http://www.scala.org', 'http://www.google.com']
if __name__ == '__main__':
    with Pool(5) as p:
        results = p.map(urllib.request.urlopen, urls)
Error:
Traceback (most recent call last):
  File "urlthreads.py", line 31, in <module>
    results = p.map(urllib.request.urlopen, urls)
  File "C:\Users\patri\Anaconda3\lib\multiprocessing\pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\patri\Anaconda3\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '[<http.client.HTTPResponse object at 0x0000016AEF204198>]'. Reason: …
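A likely cause: a real process pool returns results to the parent by pickling them, and an http.client.HTTPResponse holds an open socket, which cannot be pickled; multiprocessing.dummy uses threads, so nothing is pickled and it works. A hedged sketch of a workaround (the fetch helper name is my own): read the body inside the worker and return picklable bytes instead of the response object.

```python
import urllib.request
from multiprocessing import Pool

def fetch(url):
    # Read inside the worker: bytes pickle cleanly,
    # an open HTTPResponse does not.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

urls = ['http://www.python.org', 'http://www.yahoo.com']

if __name__ == '__main__':
    with Pool(2) as p:
        results = p.map(fetch, urls)
    print([len(body) for body in results])
```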
I am trying to run two Python functions on two cores at exactly the same time. Each process runs a very long (theoretically infinite) loop. It is important that they stay synchronized; even the smallest delay can cause problems in the long run.
I think my problem is that I start them sequentially, like this:
# define the processes and assign them functions
first_process = multiprocessing.Process(name='p1', target=first_function)
second_process = multiprocessing.Process(name='p2', target=second_function)
# start the processes
first_process.start()
second_process.start()
I print time.time() at the start of each function to measure the time difference. The output is:
first function time: 1553812298.9244068
second function time: 1553812298.9254067
The difference is 0.0009999275207519531 seconds. As mentioned, this difference will have a significant impact in the long run.
To sum up: how can I run two functions on two different cores at exactly the same time? If Python cannot do this, what other options should I look into?
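A general-purpose OS cannot guarantee truly simultaneous execution, but the start-up skew from calling .start() sequentially can be removed with a multiprocessing.Barrier: both processes block at the barrier and are released together. A minimal sketch (the worker function is a hypothetical stand-in for the real functions):

```python
import time
import multiprocessing

def worker(name, barrier):
    # Each process blocks here until all parties have arrived,
    # so both leave the barrier at (almost) the same instant.
    barrier.wait()
    print(name, 'released at', time.time())

if __name__ == '__main__':
    barrier = multiprocessing.Barrier(2)
    p1 = multiprocessing.Process(name='p1', target=worker, args=('p1', barrier))
    p2 = multiprocessing.Process(name='p2', target=worker, args=('p2', barrier))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
```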
python parallel-processing multiprocessing python-3.x python-multiprocessing
I am getting familiar with Python's multiprocessing module. The following code works as expected:
#outputs 0 1 2 3
from multiprocessing import Pool
def run_one(x):
    print x
    return

pool = Pool(processes=12)
for i in range(4):
    pool.apply_async(run_one, (i,))
pool.close()
pool.join()
However, if I now wrap the above code in a function, the print statements are not executed (or at least the output is redirected):
#outputs nothing
def run():
    def run_one(x):
        print x
        return

    pool = Pool(processes=12)
    for i in range(4):
        pool.apply_async(run_one, (i,))
    pool.close()
    pool.join()
If I move the run_one definition outside of run, the output is again as expected when I call run():
#outputs 0 1 2 3
def run_one(x):
    print x
    return

def run():
    pool = Pool(processes=12)
    for i in range(4):
        pool.apply_async(run_one, …

I just came across this wonderful iterator method, __length_hint__(), from PEP 424 (https://www.python.org/dev/peps/pep-0424/). Wow! A way to get the length of an iterator without exhausting it.
My question:
Edit: BTW, I see that __length_hint__() counts from the current position to the end, i.e. a partially consumed iterator reports the remaining length. Interesting.
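That remaining-length behavior is easy to check; operator.length_hint() is the documented way to invoke the hint. A minimal sketch against a list iterator:

```python
import operator

it = iter([10, 20, 30, 40])
print(operator.length_hint(it))  # 4

next(it)  # consume one element
print(operator.length_hint(it))  # 3 -- only the remaining elements are counted
```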
I'm trying to parallelize a script, but for an unknown reason the kernel just freezes without any error being thrown.
Minimal working example:
from multiprocessing import Pool
def f(x):
    return x*x

p = Pool(6)
print(p.map(f, range(10)))
Interestingly, all works fine if I define my function in another file then import it. How can I make it work without the need of another file?
I work with spyder (anaconda) and I have the same result if I run my code from the …
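On Windows (and with the spawn start method generally), child processes re-import the main script; without an if __name__ == '__main__' guard, each child tries to create its own Pool, which is what hangs the kernel, and why moving the function into a separate imported file helps. A hedged sketch of the usual single-file fix:

```python
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    # Guarded: children re-importing this module will only
    # re-define f, not re-create the pool.
    with Pool(6) as p:
        print(p.map(f, range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```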
The code below puts three numbers into a queue, then tries to get them back out. But it never does. How can I get the data out of the queue?
import multiprocessing
queue = multiprocessing.Queue()
for i in range(3):
    queue.put(i)

while not queue.empty():
    print queue.get()
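multiprocessing.Queue hands items to a background feeder thread, and the documentation notes that empty() is not reliable: items put() just beforehand may not be visible yet, so the while loop can exit immediately. A hedged sketch (Python 3 syntax) that drains the queue with a timeout instead of trusting empty():

```python
import multiprocessing
import queue  # only for the Empty exception

q = multiprocessing.Queue()
for i in range(3):
    q.put(i)

# get() with a timeout waits long enough for the feeder
# thread to flush; Empty then signals the real end.
drained = []
while True:
    try:
        drained.append(q.get(timeout=0.1))
    except queue.Empty:
        break
print(drained)  # [0, 1, 2]
```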
The original problem: using the celery task queue, I want the processes in the process pool to share a CUDA array (i.e., all processes access a single array instead of each holding its own copy; this is safe because only reads are performed). PyTorch's torch.multiprocessing library allows this and, according to its documentation, is a drop-in replacement for multiprocessing.
billiard and multiprocessing seem to be the two viable options for creating a process pool. Currently, the celery Python task-queue library uses billiard rather than multiprocessing because of some feature improvements. Someone asked a question about this here, but the answer is not specific. The billiard project describes itself as follows:
It backports changes from the Python 2.7 and 3.x.
The current version is compatible with Py2.4 - 2.7 and falls back to multiprocessing for 3.x,
the next version will only support 2.6, 2.7 and 3.x.
I need to replace billiard with multiprocessing in the celery source code (so that PyTorch's torch.multiprocessing library can be used), but is that OK? What are the differences between multiprocessing and billiard?