I'm trying to use ChatGPT for my Telegram bot. I was using the "text-davinci-003" model and it worked fine (it still does), but I wasn't happy with its responses.
Now I've tried changing the model to "gpt-3.5-turbo" and it throws a 404 response code with the text "Error: Request failed with status code 404", and nothing else. Here is my code:
import { Configuration, OpenAIApi } from "openai";
import { env } from "../utils/env.js";
const model = "gpt-3.5-turbo"; // works fine when it's "text-davinci-003"
const configuration = new Configuration({
  apiKey: env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

export async function getChatGptResponse(request) {
  try {
    const response = await openai.createCompletion({
      model,
      prompt: request, // request comes as a string
      max_tokens: 2000,
      temperature: 1,
      stream: false
    });
console.log("Full response: ", response, `Choices: `, ...response.data.choices) …Run Code Online (Sandbox Code Playgroud) 我正在尝试一个新的用户界面,其中典型的个人资料设置是通过聊天而不是用户界面来更新的。例如,用户可以直接与机器人聊天,而不是显示前端组件来让用户取消计费。
I'm experimenting with a new kind of UI where the typical profile settings are updated through chat instead of through the UI. For example, instead of rendering a front-end component that lets the user cancel billing, the user can simply chat with the bot.
I'm wondering whether it's possible to have my LLM (say GPT-3) generate the GraphQL queries needed to run these operations. I'm thinking I could extract my GraphQL schema into a vector database like Pinecone and then feed that context to the LLM so it can generate the appropriate GQL queries/mutations.
Is this feasible, or a bad idea?
So far I have only thought this through theoretically.
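Purely as a hypothetical sketch of the approach described above — retrieveSchemaChunks is a placeholder for whatever vector-store lookup (Pinecone or similar) is built over the schema, and the client setup mirrors the v3 openai npm package used elsewhere on this page:

import { Configuration, OpenAIApi } from "openai";

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));

async function generateGraphqlOperation(userRequest) {
  // Placeholder: fetch the schema fragments most similar to the request from the vector store.
  const schemaContext = (await retrieveSchemaChunks(userRequest)).join("\n");
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    temperature: 0,
    messages: [
      {
        role: "system",
        content: `Translate the user's request into a single GraphQL operation.\nRelevant schema:\n${schemaContext}`,
      },
      { role: "user", content: userRequest },
    ],
  });
  return completion.data.choices[0].message.content; // candidate query/mutation to validate before executing
}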
Here is my code snippet:
const { Configuration, OpenAI, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: 'MY KEY'
})
const openai = new OpenAIApi(configuration)

async function start() {
  const response = await openai.createChatCompletion({
    model: "text-davinci-003",
    prompt: "Write a 90 word essay about Family Guy",
    temperature: 0,
    max_tokens: 1000
  })
  console.log(response.data.choices[0].text)
}

start()
When I run node index, I get this:
    data: {
      error: {
        message: 'Invalid URL (POST /v1/chat/completions)',
        type: 'invalid_request_error',
        param: null,
        code: null
      }
    }
  },
  isAxiosError: true,
  toJSON: [Function: toJSON]
}
Node.js …
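The "Invalid URL (POST /v1/chat/completions)" message is what the chat endpoint typically returns when it is called with a model it does not serve: createChatCompletion expects a chat model plus a messages array, while text-davinci-003 belongs with createCompletion and a prompt. A minimal sketch of one likely fix, assuming the v3 openai npm package and the client set up above:

async function start() {
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo", // a chat model; keep createCompletion instead if text-davinci-003 is really wanted
    messages: [{ role: "user", content: "Write a 90 word essay about Family Guy" }],
    temperature: 0,
    max_tokens: 1000
  })
  console.log(response.data.choices[0].message.content) // chat replies are under message.content, not .text
}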
When my project runs this code, it returns
openai.error.APIConnectionError: Error communicating with OpenAI
async def embeddings_acreate(input: list[str]):
    return await openai.Embedding.acreate(
        api_key=await get_openai_api_key(),
        model='text-embedding-ada-002',
        input=input,
        timeout=60,
    )
But if I try:
import openai
import logging

openai.api_key = 'secret'

input_list = [
    "tell me your name"
]

response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=input_list
)
embeddings = response["data"]
print(embeddings)
it works …
I want to use the async version and make it …
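The synchronous and asynchronous paths use different HTTP clients in the pre-1.0 openai Python package (requests vs. aiohttp), so a connection error that only appears with acreate usually points at how the aiohttp session is created (event loop, proxy or SSL configuration). A sketch of one thing worth trying under that assumption — give the async client an explicit session created inside the running loop and close it afterwards (get_openai_api_key is the helper from the snippet above):

import openai
from aiohttp import ClientSession

async def embeddings_acreate(input: list[str]):
    openai.aiosession.set(ClientSession())  # explicit aiohttp session for the async client
    try:
        return await openai.Embedding.acreate(
            api_key=await get_openai_api_key(),
            model='text-embedding-ada-002',
            input=input,
            timeout=60,
        )
    finally:
        await openai.aiosession.get().close()  # avoid leaking the session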
I want to use StuffDocumentsChain, but the ConversationChain-style example suggested in the docs does not behave the way I want:
import fs from 'fs';
import path from 'path';
import { OpenAI } from "langchain/llms/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
const model = new OpenAI({openAIApiKey: 'sk-...', modelName: 'gpt-3.5-turbo'});
const text = fs.readFileSync(path.resolve(__dirname, './data.txt'), 'utf-8');
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings(
  {openAIApiKey: 'sk-...', modelName: …
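The snippet is cut off, so only a hedged sketch: if the goal is plain stuff-documents behaviour rather than a conversational chain, LangChain JS also exposes loadQAStuffChain, which stuffs the retrieved documents into a single prompt. Assuming the langchain version used above, and reusing the model and vectorStore created in the snippet (the question string is just an illustration):

import { loadQAStuffChain } from "langchain/chains";

const stuffChain = loadQAStuffChain(model); // stuffs all supplied docs into one prompt, no chat history
const question = "What does the document say about pricing?"; // example question
const relevantDocs = await vectorStore.similaritySearch(question, 4); // top-4 matching chunks
const res = await stuffChain.call({ input_documents: relevantDocs, question });
console.log(res.text);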
I'm building a transcriber with the OpenAI Whisper API in Node.js and React. I want users to be able to record an audio file in the browser and have their recording transcribed. I do this by saving the recorded audio blob's buffer data into an mp3 file and then feeding fs.createReadStream(recorded_audio_file.mp3) into the createTranscription() API call, which returns a 400 error. When I record an audio file with the Windows voice recorder and feed that file in, the API call works fine. Here is my React recorder component:

import React, { useState, useEffect, useRef } from "react";
import Microphone from "./Microphone/Microphone";
const TSST = () => {
  const BASE_URL = process.env.REACT_APP_SERVER_URL || "http://localhost:5000";
  const mediaRecorder = useRef(null);
  const [stream, setStream] = useState(null);
  const [audioChunks, setAudioChunks] = useState([]);
  const [audio, setAudio] = useState(null);
  const [audioFile, setAudioFile] = useState(null);
  const [transcribtion, setTranscription] = useState("");
  const [audioBlob, setAudioBlob] …
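A 400 from createTranscription with a browser recording usually means the bytes do not match the file extension: MediaRecorder typically emits audio/webm (or ogg), so writing that buffer into a file named .mp3 produces a container Whisper cannot parse, while a desktop recorder produces a genuine mp3/m4a. A server-side sketch of the idea, assuming the v3 openai npm package, an openai client configured as in the earlier snippets, and a hypothetical audioBuffer holding the bytes received from the browser:

import fs from "fs";

const filePath = "recorded_audio_file.webm"; // extension must match what MediaRecorder actually produced
fs.writeFileSync(filePath, audioBuffer);     // audioBuffer: Buffer built from the uploaded blob (hypothetical)
const transcription = await openai.createTranscription(
  fs.createReadStream(filePath),
  "whisper-1"
);
console.log(transcription.data.text);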
I deployed an Azure OpenAI account and a GPT-4 model. Can I use its API for image-to-text descriptions? If yes, how would I give it the image? I'm using this code, but it gives me an error.

import openai
# open ai key
openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"
openai.api_base = 'https://xxxxxx.openai.azure.com/'
openai.api_key = "xxxxxxxxxxxxx"

image_url = "https://cdn.repliers.io/IMG-X5925532_9.jpg"

def generate_image_description(image_url):
    prompt = f"What is in this image? {image_url}"
    print(prompt)
    response = openai.ChatCompletion.create(
        engine="GPT4v0314",
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.0,
    )
    description = response.choices[0].text.strip()
    return description
The error is like: APIError: Invalid response object from API: 'Unsupported data type\n' (HTTP response code was 400)
I mentioned it in the description.
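Two separate issues seem to be mixed here. First, openai.ChatCompletion.create takes a messages list, not a prompt argument, and the reply text lives under message["content"]. Second, pasting an image URL into plain text does not let the model see the image; that would need a vision-capable deployment, which the API version shown here predates. A sketch of the corrected call shape only, assuming the pre-1.0 openai Python package:

def generate_image_description(image_url):
    response = openai.ChatCompletion.create(
        engine="GPT4v0314",  # the deployment name from the snippet above
        messages=[{"role": "user", "content": f"What is in this image? {image_url}"}],
        max_tokens=1024,
        temperature=0.0,
    )
    # Note: the model only sees the URL as text here, not the image itself.
    return response["choices"][0]["message"]["content"].strip()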
I just confirmed that when using the API for chat completions, the response comes back as plain text.
How can I format the text in the response, at least with new lines, tables, bullet points, headings ... something like that?
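The API only returns the characters the model emits; there is no separate formatting layer. The usual approach is to ask for Markdown explicitly and render it on your side. A minimal sketch, assuming the pre-1.0 openai Python package and an illustrative prompt:

import openai

openai.api_key = "sk-..."  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer in Markdown: use headings, bullet lists and tables where they help."},
        {"role": "user", "content": "Compare GPT-3.5 and GPT-4."},  # example question
    ],
)
print(response["choices"][0]["message"]["content"])  # contains \n, #, -, | ... ready for a Markdown renderer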
I'm using AzureOpenAI to test LangChain's self-critique with a constitution (constitutional AI).
It all works, except that I get multiple answers, and the strangest part is that it generates random, unwanted questions and answers them as well.
Here is my Python code (I replaced sensitive information with [XXX-XXX]):
import os
from langchain.llms import AzureOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.llm import LLMChain

from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "https://[XXX-XXX].openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "[XXX-XXX]"

qa_prompt = PromptTemplate(
    template="""You are a Microsoft specialist and know everything about the software it sells. Your aim is to help operators and employees when using the software.

Question: {question}

Answer:""",
    input_variables=["question"],
)

llm = AzureOpenAI(
    deployment_name="[XXX-XXX]",
    model_name="[XXX-XXX]"
)

qa_chain = LLMChain(llm=llm, prompt=qa_prompt)

ethical_principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only …
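The extra, self-invented question/answer pairs are typical of completion-style models: without a stop sequence the model keeps generating past the first answer until max_tokens and continues the "Question: / Answer:" pattern from the prompt. One hedged remedy, assuming the LangChain version in use forwards model_kwargs unchanged to the underlying completion call:

llm = AzureOpenAI(
    deployment_name="[XXX-XXX]",
    model_name="[XXX-XXX]",
    model_kwargs={"stop": ["\nQuestion:"]},  # stop before the model invents the next question
)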
I got an error saying

ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources', 'source_documents'].
Here is the traceback:
File "C:\Users\Science-01\anaconda3\envs\gpt-dev\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "C:\Users\Science-01\Documents\Working Folder\Chat Bot\Streamlit\alpha-test.py", line 67, in <module>
response = chain.run(prompt, return_only_outputs=True)
File "C:\Users\Science-01\anaconda3\envs\gpt-dev\lib\site-packages\langchain\chains\base.py", line 228, in run
raise ValueError(
I'm trying to run LangChain on Streamlit. I'm using RetrievalQAWithSourcesChain and ChatPromptTemplate.
Here is my code:
import os
import streamlit as st
from apikey import apikey
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings …
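The snippet is cut off, but the error itself points at the fix: chain.run only works for chains with exactly one output key, and RetrievalQAWithSourcesChain returns 'answer', 'sources' and 'source_documents'. A hedged sketch of the usual fix — call the chain directly and pick the keys you need (prompt is the user input seen in the traceback above):

result = chain({"question": prompt}, return_only_outputs=True)  # returns a dict instead of a single string
answer = result["answer"]
sources = result["sources"]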