Posts by bli*_*yes

Wandb throws a permission-denied error even though I am logged in

I am using the cleanrl library, specifically the dqn_atari.py script, and I followed the instructions for saving and loading the target and Q networks.


I am running it locally in a conda environment.


I have not loaded anything before, so the error may be caused by my wandb configuration. The error is "wandb: ERROR permission denied accessing wandb_entity/wandb_project_name/project_id" and occurs on the line:

model = run.file("agent.pt")

The full output is:

wandb: Currently logged in as: elena (use `wandb login --relogin` to force relogin)
wandb: Tracking run with wandb version 0.12.15
wandb: Run data is saved locally in /home/elena/workspace/playground/cleanrl/wandb/run-20220424_180429-2moec0qp
wandb: Run `wandb offline` to turn off syncing.
wandb: Resuming run BreakoutNoFrameskip-v4__dqn-save__1__1650816268
wandb: ⭐ View project at https://wandb.ai/elena/test
wandb: View run at https://wandb.ai/elena/test/runs/2moec0qp
A.L.E: Arcade Learning Environment (version 0.7.4+069f8bd)
[Powered by Stella]
/home/elena/anaconda3/envs/cleanrl/lib/python3.8/site-packages/stable_baselines3/common/buffers.py:219: UserWarning: …
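The log shows the run actually lives at entity elena, project test, run ID 2moec0qp. A minimal sketch (my assumption, not the poster's code) of how wandb's public API addresses that run; a mismatched entity/project/run path is a common cause of this permission-denied error:

```python
# Sketch (assumption): wandb's public API addresses a run as
# "<entity>/<project>/<run_id>"; the values below come from the log above.
entity = "elena"
project = "test"
run_id = "2moec0qp"
run_path = f"{entity}/{project}/{run_id}"  # "elena/test/2moec0qp"

# With wandb installed and an authorized API key this would fetch the file
# (commented out here because it needs network access and project permissions):
#
#   import wandb
#   api = wandb.Api()
#   run = api.run(run_path)
#   run.file("agent.pt").download(replace=True)
```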

python wandb

6
Votes
1
Answer
4543
Views

How do I extract all sections of a Wikipedia page as plain text?

I have the following Python code, which extracts only the introduction of the article on "Artificial intelligence", whereas I want to extract all subsections (History, Goals, ...):

import requests

def get_wikipedia_page(page_title):
  endpoint = "https://en.wikipedia.org/w/api.php"
  params = {
    "format": "json",
    "action": "query",
    "prop": "extracts",
    "exintro": "",      # restricts the extract to the intro section
    "explaintext": "",  # return plain text instead of HTML
    "titles": page_title
  }
  response = requests.get(endpoint, params=params)
  data = response.json()
  pages = data["query"]["pages"]
  page_id = list(pages.keys())[0]
  return pages[page_id]["extract"]

page_title = "Artificial intelligence"
wikipedia_page = get_wikipedia_page(page_title)
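For reference, a sketch of the same MediaWiki query with the exintro flag removed; to my understanding, exintro is what limits the extract to the lead section, so omitting it should return the plain text of every section (a hedged sketch, not the accepted answer):

```python
import requests

def build_params(page_title):
    # Omitting "exintro" makes prop=extracts return the whole page;
    # "explaintext" strips HTML markup from the extract.
    return {
        "format": "json",
        "action": "query",
        "prop": "extracts",
        "explaintext": "",
        "titles": page_title,
    }

def get_wikipedia_page_full(page_title):
    endpoint = "https://en.wikipedia.org/w/api.php"
    response = requests.get(endpoint, params=build_params(page_title))
    response.raise_for_status()
    pages = response.json()["query"]["pages"]
    page_id = next(iter(pages))
    return pages[page_id]["extract"]
```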

Someone suggested another approach: parsing the HTML and converting it to text with BeautifulSoup:

from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "https://en.wikipedia.org/wiki/Artificial_intelligence"
html = urlopen(url).read()
soup = BeautifulSoup(html, features="html.parser")

# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract() …
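The snippet above is cut off after removing the script and style elements; a plausible completion (my sketch of the usual get_text-and-strip-blank-lines pattern, not the original answer) would be:

```python
from bs4 import BeautifulSoup

def html_to_text(html):
    soup = BeautifulSoup(html, features="html.parser")
    # Kill all script and style elements, as in the snippet above.
    for tag in soup(["script", "style"]):
        tag.extract()
    # Join text nodes with newlines, then drop blank lines.
    lines = (line.strip() for line in soup.get_text("\n").splitlines())
    return "\n".join(line for line in lines if line)
```

Applied to the page fetched above, html_to_text(html) would return the visible text of the whole article, though it also includes navigation chrome that the API-based extract avoids.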

python wikipedia-api

0
Votes
1
Answer
1904
Views

Tag statistics

python ×2

wandb ×1

wikipedia-api ×1