I want to use the GPT-4 model to translate the text in a CSV file into English, but I keep getting the error below. Even after updating the library version, I still get the same error.
import openai
import pandas as pd
import os
from tqdm import tqdm


openai.api_key = os.getenv("API")

def translate_text(text):
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-4 model
        prompt=f"Translate the following Turkish text to English: '{text}'",
        max_tokens=60
    )
    # Get the response according to the new API structure
    return response.choices[0].text.strip()

df = pd.read_excel('/content/3500-turkish-dataset-column-name.xlsx')

column_to_translate = 'review'

df[column_to_translate + '_en'] = ''

for index, row in tqdm(df.iterrows(), total=df.shape[0]):
    translated_text = translate_text(row[column_to_translate])
    df.at[index, column_to_translate + '_en'] = translated_text

df.to_csv('path/to/your/translated_csvfile.csv', index=False)

  0%|          | 0/3500 [00:00<?, ?it/s]
---------------------------------------------------------------------------
APIRemovedInV1                            Traceback (most recent call last)
<ipython-input-27-337b5b6f4d32> in …
I am using the OpenAI API with the simple code below, but I am getting a deprecation warning for the stream_to_file method.
Code:
import os
from pathlib import Path

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
)
speech_file_path = Path(__file__).parent / "speech.mp3"
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input='''I see skies of blue and clouds of white
The bright blessed days, the dark sacred nights
And I think to myself
What a wonderful world
'''
)
response.stream_to_file(speech_file_path)
IDE: Visual Studio Code
The warning is as follows:
DeprecationWarning: Due to a bug, this method doesn't actually stream the response content, `.with_streaming_response.method()` should be used instead
    response.stream_to_file("song.mp3")
Can someone help?
I tried checking different forums, but I could not find anything related to this stream_to_file error.
I am using Python 3.12.
I am trying to install the OpenAI package using Python 3.11 on Windows, with pip fully upgraded, but I get this error.
Here is the full error message:
Collecting openai
  Using cached openai-0.26.0.tar.gz (54 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [21 lines of output]
      Traceback (most recent call last):
        File "C:\Users\vocal\AppData\Local\Programs\Python\Python311\Lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 351, in <module>
          main()
        File "C:\Users\vocal\AppData\Local\Programs\Python\Python311\Lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 333, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\vocal\AppData\Local\Programs\Python\Python311\Lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
      …
I tried the following code, but I only get a partial result, for example:
[{"light_id": 0, "color
I was expecting the full JSON suggested by this page:
https://medium.com/@richardhayes777/using-chatgpt-to-control-hue-lights-37729959d94f
import json
import os
import time
from json import JSONDecodeError
from typing import List
import openai
openai.api_key = "xxx"
HEADER = """
I have a hue scale from 0 to 65535.
red is 0.0
orange is 7281
yellow is 14563
purple is 50971
pink is 54612
green is 23665
blue is 43690
Saturation is from 0 to 254
Brightness is from 0 to 254
Two JSONs should be returned in a list. Each …
There are quite a few tutorials on OpenAI embeddings. I cannot understand how they work.
Referring to https://platform.openai.com/docs/guides/embeddings/what-are-embeddings, an embedding is a vector, i.e. a list. You pass a string to the embedding model, and the model returns a list of numbers (in the simplest terms). I can then use those numbers.
If I get the embeddings for a simple string, I get a huge list:
result = get_embedding("I live in space", engine = "textsearchcuriedoc001mc")
result, when printed, is:
[5.4967957112239674e-05,
-0.01301578339189291,
-0.002223075833171606,
0.013594076968729496,
-0.027540158480405807,
0.008867159485816956,
0.009403547272086143,
-0.010987567715346813,
0.01919262297451496,
0.022209804505109787,
-0.01397960539907217,
-0.012806257233023643,
-0.027908924967050552,
0.013074451126158237,
0.024942029267549515,
0.0200139675289392, ... -> truncated; the full list is much, much longer
Question 1 - How does this huge list relate to my 4-word text?
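One way to make Question 1 concrete: the list is a fixed-length vector, and its relation to the text is that texts with similar meaning get vectors pointing in similar directions. Similarity is usually measured with cosine similarity, sketched here with made-up 3-dimensional vectors (real embeddings have thousands of dimensions):

```python
import math


def cosine_similarity(a, b):
    # cos(angle between a and b) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy "embeddings"; identical text produces an identical vector
doc_vec = [0.1, 0.3, 0.5]
query_vec = [0.1, 0.3, 0.5]
other_vec = [0.9, -0.2, 0.0]

print(cosine_similarity(doc_vec, query_vec))  # ≈ 1.0 (same direction)
print(cosine_similarity(doc_vec, other_vec))  # smaller: different direction
```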
Question 2 -
I created embeddings for the text I want to use in a query. Note that the wording is exactly the same as the original content, I live in space.
queryembedding = get_embedding(
'I live in space',
engine="textsearchcuriequery001mc"
)
queryembedding …
I am trying to create a load_summarize_chain for Langchain using a prompt I created myself.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(llm, chain_type="refine", verbose=True, prompt=PROMPT)
However, I can only create the chain successfully when chain_type is set to "stuff". When I try to specify "map_reduce" or "refine", I get an error message.
What is going on?
I think this may be because "map_reduce" and "refine" cannot be used directly with a custom prompt in load_summarize_chain, or there is some other reason.
I am testing different OpenAI models, and I have noticed that not all of them are developed or trained enough to give reliable responses.
The models I tested are the following:
model_engine = "text-davinci-003"
model_engine = "davinci"
model_engine = "curie"
model_engine = "babbage"
model_engine = "ada"
I need to understand the difference between davinci and text-davinci-003, and how to improve the responses so they match what I get when using ChatGPT.
I am exploring the capabilities of the Whisper API and wondering whether it can be used to generate an .SRT file with the transcription. As I understand it, transcription to .SRT is possible when running the model locally with the Whisper package. Unfortunately, I do not have the computing resources to run the model locally, so I am leaning towards using the API directly.
Does anyone have experience with this, or can anyone provide guidance on how to handle it via the API?
import os
import openai
openai.api_key = API_KEY
audio_file = open("audio.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript.text)
I have a list of PDF files, and I want to analyze the first page of each document to extract information. I have tried many free and paid OCRs, but in my case the results are not good enough.
So I want to try using the ChatGPT API in Python. How should I go about it?
Also, I saw in the OpenAI Vision documentation that there is a detail parameter, but no example is provided. How do I use this parameter?
I am trying to call AzureChatOpenAI() from langchain. Normally I do this:
model = AzureChatOpenAI(
    openai_api_base=os.getenv("OPENAI_API_BASE"),
    openai_api_version="2023-03-15-preview",
    deployment_name=os.getenv("GPT_DEPLOYMENT_NAME"),
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    openai_api_type="azure",
)
But I get these warnings:
python3.9/site-packages/langchain/chat_models/azure_openai.py:155: UserWarning: As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). Updating `openai_api_base` from https://xxxx.openai.azure.com/ to https://xxxx.openai.azure.com/openai.
warnings.warn(
python3.9/site-packages/langchain/chat_models/azure_openai.py:162: UserWarning: As of openai>=1.0.0, if `deployment_name` (or alias `azure_deployment`) is specified then `openai_api_base` (or alias `base_url`) should not be. Instead use `deployment_name` (or alias `azure_deployment`) and `azure_endpoint`.
warnings.warn(
python3.9/site-packages/langchain/chat_models/azure_openai.py:170: UserWarning: As of openai>=1.0.0, if `openai_api_base` (or alias `base_url`) …