I wrote a Python script that uses an API to request some data, but the API only allows 20 requests per minute. I am requesting the data with urllib, and I am using a for loop because the hashes are stored in a file:
import json
import urllib.request

for i in hashfile:
    hash = i.strip()  # drop the trailing newline so it does not end up in the URL
    url1 = "https://hashes.org/api.php?act=REQUEST&key=" + key + "&hash=" + hash
    print(url1)
    response = urllib.request.urlopen(url1).read()
    strr = str(response)
    if "plain" in strr:
        parsed_json = json.loads(response.decode("UTF-8"))
        print(parsed_json['739c5b1cd5681e668f689aa66bcc254c']['plain'])
        # append the recovered plain text to the hash and save it
        writehash = hash + parsed_json['739c5b1cd5681e668f689aa66bcc254c']['plain']
        hashfile.write(writehash + "\n")
    elif "INVALID HASH" in strr:
        print("You have entered an invalid hash.")
    elif "NOT FOUND" in strr:
        print("The hash is not found.")
    elif "LIMIT REACHED" in strr:
        print("You have reached the max requests per minute, please try again in one minute.")
    elif "INVALID KEY!" in strr:
print("You have entered …Run Code Online (Sandbox Code Playgroud) 我是Python的新手,但直到现在我还不知道。我在for循环中有一个基本程序,该程序从站点请求数据并将其保存到文本文件中,但是当我在任务管理器中检查时,发现内存使用量只会增加吗?长时间运行时,这可能对我来说是个问题。这是Python的标准做法,还是可以更改?这是程序的基本内容
savefile = open("file.txt", "r+")
for i in savefile:
#My code goes here
savefile.write(i)
#end of loop
savefile.close()
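It is hard to say what is holding the memory without the rest of the loop, but one pattern that keeps memory flat while saving results is to stream line by line from the input file into a separate output file, instead of writing back into the same handle that is being iterated. The sketch below only illustrates that shape; results.txt and the processing step are placeholders, not part of the program above.

# Minimal sketch (assumed file names): process one line at a time and append the
# result to a separate file, so only the current line is held in memory.
with open("file.txt", "r") as infile, open("results.txt", "a") as outfile:
    for line in infile:
        # ... request/process the data for this line here ...
        outfile.write(line)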