I am running the following code, which makes 5 requests via aiohttp:
import aiohttp
import asyncio

def fetch_page(url, idx):
    try:
        url = 'http://google.com'
        response = yield from aiohttp.request('GET', url)
        print(response.status)
    except Exception as e:
        print(e)

def main():
    try:
        url = 'http://google.com'
        urls = [url] * 5
        coros = []
        for idx, url in enumerate(urls):
            coros.append(asyncio.Task(fetch_page(url, idx)))
        yield from asyncio.gather(*coros)
    except Exception as e:
        print(e)

if __name__ == '__main__':
    try:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())
    except Exception as e:
        print(e)
Output:
200
200
200
200
200
Exception ignored in: Exception ignored in: Exception …

I am using Python 3.5 on Ubuntu 16.
I am trying to write a simple client using aiohttp.
Here is my code. I took it from here; it is the first code example, with SSL checking disabled:
import aiohttp
import asyncio
import async_timeout

async def fetch(session, url):
    with async_timeout.timeout(10):
        async with session.get(url) as response:
            return await response.text()

async def main(loop):
    conn = aiohttp.TCPConnector(verify_ssl=False)
    async with aiohttp.ClientSession(loop=loop, connector=conn) as session:
        html = await fetch(session, 'http://www.google.com')
        print(html)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
For some websites this code works. For others, including http://python.org or http://google.com, it does not. Instead, the code produces this error:
aiohttp.errors.ClientOSError: [Errno 101] Cannot connect to host google.com:80 ssl:False [Can not connect to google.com:80 [Network is unreachable]]
I tried a simple requests script, like this:
import requests …

I have a REST API wrapper that is supposed to run in an interactive Python session. HTTP requests can be made either by an automated background thread (using the API wrapper) or manually by the end user from the interactive session. I am trying to migrate all HTTP request management from the previous request-per-thread approach to asyncio, but since I cannot run the asyncio loop in the main thread (it must stay free for ad-hoc Python commands/requests), I wrote the following to run it in a background thread:
import aiohttp
import asyncio
from concurrent.futures import ThreadPoolExecutor

def start_thread_loop(pool=None):
    """Starts thread with running loop, binding the loop to the thread"""
    def init_loop(loop):
        asyncio.set_event_loop(loop)  # bind loop to thread
        loop.run_forever()
    _pool = ThreadPoolExecutor() if pool is None else pool
    loop = asyncio.new_event_loop()
    future = _pool.submit(init_loop, loop)
    return future, loop

def send_to_loop(coro, loop):
    """Wraps coroutine in a Task object and sends it to the given loop"""
    return asyncio.run_coroutine_threadsafe(coro, loop=loop)
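The two helpers above can be smoke-tested without aiohttp or a real API. A minimal self-contained sketch (restating the helpers so it runs standalone, with a dummy ping coroutine standing in for real requests):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def start_thread_loop(pool=None):
    """Start a background thread whose only job is to run an event loop."""
    def init_loop(loop):
        asyncio.set_event_loop(loop)
        loop.run_forever()
    _pool = ThreadPoolExecutor() if pool is None else pool
    loop = asyncio.new_event_loop()
    future = _pool.submit(init_loop, loop)
    return future, loop

def send_to_loop(coro, loop):
    """Schedule coro on the background loop; returns a concurrent Future."""
    return asyncio.run_coroutine_threadsafe(coro, loop=loop)

async def ping():
    await asyncio.sleep(0.01)  # placeholder for real async work
    return "pong"

_, loop = start_thread_loop()
# The main thread blocks on .result() while the loop thread runs the coroutine.
result = send_to_loop(ping(), loop).result(timeout=5)
print(result)  # → pong
loop.call_soon_threadsafe(loop.stop)  # shut the background loop down cleanly
```

Note that `run_coroutine_threadsafe` returns a `concurrent.futures.Future`, not an asyncio one, so the interactive main thread can block on it without touching the loop.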
The actual API wrapper looks something like this:
class Foo:
    def __init__(self):
        _, self.loop = start_thread_loop()
        self.session = …

I am running a one-off Fargate task that runs a small Python script. The task definition is configured to use awslogs to send logs to CloudWatch, but I am facing a very strange intermittent problem.
The logs sometimes appear in the newly created CloudWatch stream and sometimes do not. I tried removing parts of the code, and for now this is what I have.
When I remove the asyncio/aiohttp fetch logic, the print statements usually appear in the CloudWatch logs, although, since the problem is intermittent, I can't be 100% sure they always do.
However, with the fetch logic included, I sometimes get completely empty log streams after the Fargate task exits. There are no logs saying "Job starting", "Job complete", or "Putting file into S3", and no error logs either. Yet when I check the S3 bucket, a file with the corresponding timestamp was created, showing that the script did run to completion. I can't understand how this is possible.
#!/usr/bin/env python3.6
import asyncio
import datetime
import time

from aiohttp import ClientSession
import boto3

def s3_put(bucket, key, body):
    try:
        print(f"Putting file into {bucket}/{key}")
        client = boto3.client("s3")
        client.put_object(Bucket=bucket, Key=key, Body=body)
    except Exception:
        print(f"Error putting object into S3 Bucket: {bucket}/{key}")
        raise

async def fetch(session, number):
    url = f'https://jsonplaceholder.typicode.com/todos/{number}'
    try:
        async with session.get(url) as response:
            return await response.json()
    except Exception as e:
        print(f"Failed to fetch {url}")
        print(e)
        return None

async def fetch_all():
    tasks = []
    async …

Here is my code:
import asyncio
from aiohttp import ClientSession

async def main():
    url = "https://stackoverflow.com/"
    async with ClientSession() as session:
        async with session.get(url) as resp:
            print(resp.status)

asyncio.run(main())
If I run it on my own computer everything works fine, but if I run it on PythonAnywhere I get this error:
Traceback (most recent call last):
  File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 936, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore # noqa
  File "/usr/lib/python3.8/asyncio/base_events.py", line 1017, in create_connection
    raise exceptions[0]
  File "/usr/lib/python3.8/asyncio/base_events.py", line 1002, in create_connection
    sock = await self._connect_sock(
  File "/usr/lib/python3.8/asyncio/base_events.py", line 916, in _connect_sock
    await self.sock_connect(sock, address)
  File "/usr/lib/python3.8/asyncio/selector_events.py", line 485, in …

Imagine an asynchronous aiohttp web application, backed by a Postgresql database connected via asyncpg, that performs no other I/O. How can I have a middle layer hosting the application logic that is not async? (I know I could simply make everything async, but imagine my application has huge amounts of application logic, bound only by database I/O, and I cannot touch all of it.)
Pseudo code:
async def handler(request):
    # call into layers over layers of application code, that simply emits SQL
    ...

def application_logic():
    ...
    # This doesn't work, obviously, as await is a syntax
    # error inside synchronous code.
    data = await asyncpg_conn.execute("SQL")
    ...
    # What I want is this:
    data = asyncpg_facade.execute("SQL")
    ...

How can a synchronous façade for asyncpg be built that allows the application logic to make database calls? Recipes floating around, such as using asyncio.run() or asyncio.run_coroutine_threadsafe(), do not work here, because we are coming from an already-async context. I don't think this can be impossible, since there is already an event loop that could in principle run the coro…
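One commonly suggested shape for such a façade is a second, dedicated event loop in a background thread, bridged with run_coroutine_threadsafe. This only works under the assumption that the synchronous application logic itself runs off the main loop's thread (e.g. the handler dispatches it via run_in_executor); calling .result() from a coroutine on the same loop would deadlock. A minimal sketch, with the asyncpg call faked by a plain coroutine since no database is assumed:

```python
import asyncio
import threading

class AsyncpgFacade:
    """Synchronous façade bridging into a dedicated background event loop."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def execute(self, sql):
        # Blocks the *calling* thread (not the background loop) until the
        # coroutine completes; hence the caller must not itself be running
        # on self._loop.
        fut = asyncio.run_coroutine_threadsafe(self._query(sql), self._loop)
        return fut.result()

    async def _query(self, sql):
        # Placeholder for a real 'await asyncpg_conn.execute(sql)'.
        await asyncio.sleep(0)
        return f"result of {sql}"

facade = AsyncpgFacade()
print(facade.execute("SELECT 1"))  # → result of SELECT 1
```

In the aiohttp handler the synchronous layer would then be entered with something like `await loop.run_in_executor(None, application_logic)`, keeping the main loop responsive while application_logic blocks on the façade.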
I am happily using asyncio with aiohttp. The main idea is that I make a request to a server (it returns links), and then I want to download files from all the links in parallel (similar to an example).
Code:
import aiohttp
import asyncio

@asyncio.coroutine
def downloader(file):
    print('Download', file['title'])
    yield from asyncio.sleep(1.0)  # some actions to download
    print('OK', file['title'])

def run():
    r = yield from aiohttp.request('get', 'my_url.com', True)
    raw = yield from r.json()
    tasks = []
    for file in raw['files']:
        tasks.append(asyncio.async(downloader(file)))
    asyncio.wait(tasks)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run())
However, when I try to run it, I get many "Download ..." outputs and
Task was destroyed but it is pending!
and no 'OK + filename'.
How can I fix this?
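The 'OK' lines never appear because asyncio.wait(tasks) in run() is only called, never yielded from, so run() returns before the downloads finish and the loop shuts down with the tasks still pending. A minimal sketch of the fix in modern async/await syntax (the question's 3.4-style code would instead use `yield from asyncio.wait(tasks)`; asyncio.sleep stands in for the real download here):

```python
import asyncio

done = []

async def downloader(title):
    print('Download', title)
    await asyncio.sleep(0.01)  # stands in for the actual download
    print('OK', title)
    done.append(title)

async def run():
    tasks = [asyncio.ensure_future(downloader(t)) for t in ('a', 'b')]
    # The crucial difference: asyncio.wait(tasks) is awaited. Without the
    # await, run() finishes immediately, the loop closes, and the pending
    # tasks are destroyed -- hence "Task was destroyed but it is pending!".
    await asyncio.wait(tasks)

asyncio.run(run())
```

With the await in place, every task reaches its 'OK' print before the loop exits.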
I want to use tornado with asyncio libraries such as aiohttp, and with native Python 3.5 coroutines, which seem to be supported in the latest tornado version (4.3). However, when used in the tornado event loop, the request handler hangs indefinitely. When aiohttp is not used (i.e., without the lines r = await aiohttp.get('http://google.com/') and text = await r.text() below), the request handler proceeds normally.
My test code is as follows:
from tornado.ioloop import IOLoop
import tornado.web
import tornado.httpserver
import aiohttp

IOLoop.configure('tornado.platform.asyncio.AsyncIOLoop')

class MainHandler(tornado.web.RequestHandler):
    async def get(self):
        r = await aiohttp.get('http://google.com/')
        text = await r.text()
        self.write("Hello, world, text is: {}".format(text))

if __name__ == "__main__":
    app = tornado.web.Application([
        (r"/", MainHandler),
    ])
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8888, '127.0.0.1')
    server.start()
    IOLoop.current().start()
在下面的脚本中,我connector = aiohttp.connector.TCPConnector(limit=25, limit_per_host=5)转到aiohttp.ClientSession,然后打开2个请求到docs.aiohttp.org和3到github.com.
结果session.request是一个实例aiohttp.ClientResponse,在这个例子中我故意不.close()通过.close()或者调用它__aexit__.我认为这会使连接池保持打开状态,并减少与(host,ssl,port)三倍的可用连接数-1.
下表表示._available_connections()每个请求之后. 即使在完成对docs.aiohttp.org的第二次请求后,为什么数字仍为4? 这两个连接可能仍然是开放的,尚未访问._content或已关闭.可用连接不应减少1吗?
After Request Num.    To                  _available_connections
1                     docs.aiohttp.org    4
2                     docs.aiohttp.org    4    <--- Why?
3                     github.com          4
4                     github.com          3
5                     github.com          2
Also, why does ._acquired_per_host contain only 1 key? I may be misunderstanding TCPConnector's methods; what explains the behavior above?
Full script:
import aiohttp

async def main():
    connector = aiohttp.connector.TCPConnector(limit=25, limit_per_host=5)
    print("Connector arguments:")
    print("_limit:", connector._limit)
    print("_limit_per_host:", connector._limit_per_host)
    print("-" …

I have a test.py file and an AsyncioCurl.py file.
I am already using a session rather than just aiohttp.request,
but it still gives me this error:
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x000001FAEFEA7DA0>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x000001FAF10AC648>, 119890.906)]']
connector: <aiohttp.connector.TCPConnector object at 0x000001FAF0F702B0>
test.py
import asyncio
from AsyncioCurl import AsyncioCurl

async def a():
    payload = {}
    url = "https://awebsiteisthere.com"
    data = await AsyncioCurl().get(url, payload)
    print(data)

task = [
    a()
]

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(task))
AsyncioCurl.py
import asyncio
import aiohttp
from Log import Log
from Base import sign
from config import config

class AsyncioCurl: …
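The "Unclosed client session" / "Unclosed connector" warnings usually mean a ClientSession was garbage-collected without `await session.close()`; here a fresh `AsyncioCurl()` (and presumably its session) is created per call and never closed. Since AsyncioCurl isn't shown in full, the class below is only an illustrative stand-in showing the lifetime fix: the wrapper owns one session and closes it deterministically through an async context manager.

```python
import asyncio
import aiohttp

class AsyncioCurlSketch:
    """Illustrative stand-in for AsyncioCurl: one session, closed explicitly."""

    async def __aenter__(self):
        # One session for the object's whole lifetime, instead of creating
        # (and leaking) a new one per request.
        self._session = aiohttp.ClientSession()
        return self

    async def __aexit__(self, *exc):
        await self._session.close()  # this is what silences the warning

    async def get(self, url, payload=None):
        async with self._session.get(url, params=payload) as resp:
            return await resp.text()

async def main():
    async with AsyncioCurlSketch() as curl:
        pass  # e.g. data = await curl.get("https://example.com")
    return curl

curl = asyncio.run(main())
print(curl._session.closed)  # → True
```

The same effect can be had without the context manager by calling `await session.close()` in an explicit teardown coroutine before the loop shuts down.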
python-asyncio ×10
python ×9
python-3.x ×6
amazon-ecs ×1
asyncpg ×1
docker ×1
tornado ×1