I am using the cleanrl library, specifically the script dqn_atari.py, and I followed the instructions for saving and loading the target and Q networks.
I am running it locally in a conda environment.
I have not loaded anything before, so the error may be due to my wandb configuration. The error is "wandb: ERROR Permission denied when accessing wandb_entity/wandb_project_name/project_id" and it occurs on the line:
model = run.file("agent.pt")

The full output is:
wandb: Currently logged in as: elena (use `wandb login --relogin` to force relogin)
wandb: Tracking run with wandb version 0.12.15
wandb: Run data is saved locally in /home/elena/workspace/playground/cleanrl/wandb/run-20220424_180429-2moec0qp
wandb: Run `wandb offline` to turn off syncing.
wandb: Resuming run BreakoutNoFrameskip-v4__dqn-save__1__1650816268
wandb: ⭐ View project at https://wandb.ai/elena/test
wandb: View run at https://wandb.ai/elena/test/runs/2moec0qp
A.L.E: Arcade Learning Environment (version 0.7.4+069f8bd)
[Powered by Stella]
/home/elena/anaconda3/envs/cleanrl/lib/python3.8/site-packages/stable_baselines3/common/buffers.py:219: UserWarning: …
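For reference, the path in the error, wandb_entity/wandb_project_name/project_id, looks like the unfilled placeholders from the docs rather than my real values, so I suspect the run path was never substituted. A minimal sketch of the lookup using the values visible in the log above (entity elena, project test, run id 2moec0qp), assuming the standard wandb public API:

import wandb

# Sketch, not the exact cleanrl loading code: pull agent.pt back down from
# the run shown in the log. "elena/test/2moec0qp" is taken from the log URLs.
api = wandb.Api()
run = api.run("elena/test/2moec0qp")
model_file = run.file("agent.pt")
model_file.download(replace=True)  # saves ./agent.pt locally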
I have the following Python code, which only extracts the introduction of the "Artificial intelligence" article, whereas I want to extract all the subsections (History, Goals, ...) as well:

import requests
def get_wikipedia_page(page_title):
    endpoint = "https://en.wikipedia.org/w/api.php"
    # "exintro" restricts the extract to the lead section; "explaintext"
    # returns plain text instead of HTML
    params = {
        "format": "json",
        "action": "query",
        "prop": "extracts",
        "exintro": "",
        "explaintext": "",
        "titles": page_title,
    }
    response = requests.get(endpoint, params=params)
    data = response.json()
    pages = data["query"]["pages"]
    # the pages dict is keyed by page id, so grab the first (only) key
    page_id = list(pages.keys())[0]
    return pages[page_id]["extract"]
page_title = "Artificial intelligence"
wikipedia_page = get_wikipedia_page(page_title)
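As far as I understand the API, the "exintro" flag is exactly what limits the extract to the introduction. A minimal sketch of the variant I would expect to return every section (History, Goals, ...) as plain text, simply dropping "exintro" from the params; get_full_wikipedia_page is just an illustrative name:

def get_full_wikipedia_page(page_title):
    endpoint = "https://en.wikipedia.org/w/api.php"
    # same query as above, minus "exintro", so prop=extracts should return
    # the plain text of the whole article rather than only the introduction
    params = {
        "format": "json",
        "action": "query",
        "prop": "extracts",
        "explaintext": "",
        "titles": page_title,
    }
    response = requests.get(endpoint, params=params)
    pages = response.json()["query"]["pages"]
    page_id = list(pages.keys())[0]
    return pages[page_id]["extract"]

print(get_full_wikipedia_page("Artificial intelligence")[:300])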
Someone suggested another approach: parse the HTML and convert it to text with BeautifulSoup:
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "https://en.wikipedia.org/wiki/Artificial_intelligence"
html = urlopen(url).read()
soup = BeautifulSoup(html, features="html.parser")

# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract()
…
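The snippet was cut off at that point; a plausible continuation (a sketch, assuming the goal is a single plain-text string; the whitespace cleanup is my own addition, not part of the original suggestion):

# collapse the remaining markup into plain text
text = soup.get_text()
# strip each line and drop the blank ones left behind by the removed tags
lines = (line.strip() for line in text.splitlines())
text = "\n".join(line for line in lines if line)
print(text[:300])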