I'm using a conversational agent with several tools, one of which is a calculator tool (as an example).
The agent is initialized as follows:
from langchain.agents import initialize_agent

conversational_agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=[CalculatorTool()],
    llm=llm_gpt4,
    verbose=True,
    max_iterations=2,
    early_stopping_method="generate",
    memory=memory,
    # agent_kwargs=dict(output_parser=output_parser),
)
When the CalculatorTool fires, it returns a string output, which the agent picks up and processes further to produce a "final answer", changing the format of the CalculatorTool's output.
For example, for the input 10*10, the tool's run() function returns 100, which is propagated back to the agent; the agent calls self._take_next_step() and keeps processing the output.
It ends up producing a final output like the result of your prompt of 10x10 is 100.
I don't want the formatting added by the LLM, just 100.
I want to break the chain as soon as the CalculatorTool finishes and return its output to the client as-is.
I also have tools that return serialized data for charts, and having that data reprocessed by the agent's next iteration would invalidate it.
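A minimal sketch of one common way to get this behavior, assuming CalculatorTool subclasses LangChain's BaseTool: setting return_direct=True on the tool makes the agent stop and return the tool's raw output instead of passing it through the LLM again.

from langchain.tools import BaseTool

class CalculatorTool(BaseTool):
    name = "calculator"
    description = "Evaluates arithmetic expressions such as 10*10."
    # return_direct=True ends the agent loop after this tool runs and
    # hands the raw string output straight back to the caller.
    return_direct = True

    def _run(self, expression: str) -> str:
        # eval() is for illustration only; use a safe arithmetic parser in practice.
        return str(eval(expression))

    async def _arun(self, expression: str) -> str:
        return self._run(expression)

The same flag should also cover the serialized-data tools, since the agent never gets a second iteration in which to rewrite their output.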
I'm brand new to the Chroma database (and the associated Python library).
When I call get on a collection, the embeddings are always None, even though the embeddings were explicitly set/defined when the documents were added to the collection (so generating the embeddings shouldn't be the issue, I don't think).
For the following code (Python 3.10, chromadb 0.3.26), I would expect to see the list of embeddings in the returned dictionary, but it is None.
import chromadb
chroma_client = chromadb.Client()
collection = chroma_client.create_collection(name="my_collection")
collection.add(
    embeddings=[[1.2, 2.3, 4.5], [6.7, 8.2, 9.2]],
    documents=["This is a document", "This is another document"],
    metadatas=[{"source": "my_source"}, {"source": "my_source"}],
    ids=["id1", "id2"]
)
print(collection.get())
Output:
{'ids': ['id1', 'id2'], 'embeddings': None, 'documents': ['This is a document', 'This is another document'], 'metadatas': [{'source': 'my_source'}, {'source': 'my_source'}]}
The same problem does not occur when using query instead of get:
print(collection.query(query_embeddings=[[1.2, 2.3, 4.4]], include=["embeddings"]))
Output:
{'ids': …
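For reference, a minimal check under the assumption that get() behaves like query() here and simply excludes embeddings unless they are requested:

# Request embeddings explicitly; chromadb leaves them out of get() results by default.
print(collection.get(include=["embeddings", "documents", "metadatas"]))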
I've been running into an ImportError with the llama-index library for a while now. I have the latest version of llama-index installed and am trying to run it on Python 3.9, where importing GPTSimpleVectorIndex fails:

from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext
ImportError: cannot import name 'GPTSimpleVectorIndex' from 'llama_index' (E:\Experiments\OpenAI\data anaysis\llama-index-main\venv\lib\site-packages\llama_index\__init__.py)
The source code is given below:
import os, streamlit as st
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext
from langchain.llms.openai import OpenAI
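As a hedged aside, one check worth running, assuming the installed release is llama-index 0.6 or later (where GPTSimpleVectorIndex was removed in favor of GPTVectorStoreIndex):

import llama_index

# 0.6+ releases no longer export GPTSimpleVectorIndex at all.
print(llama_index.__version__)

# The renamed class is the closest drop-in replacement for the old name.
from llama_index import GPTVectorStoreIndex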
I'm trying to run a chain in LangChain with memory and multiple inputs. The closest issue I could find is posted here, but in that one they only pass a single input.
Here is the setup:
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
llm = OpenAI(
    model="text-davinci-003",
    openai_api_key=environment_values["OPEN_AI_KEY"],  # Used dotenv to store the API key
    temperature=0.9,
    client="",
)

memory = ConversationBufferMemory(memory_key="chat_history")

prompt = PromptTemplate(
    input_variables=[
        "text_one",
        "text_two",
        "chat_history"
    ],
    template=(
        """You are an AI talking to a human. Here is the chat
        history so far:

        {chat_history}

        Here is some more text:

        {text_one}

        and here is even more text:

        {text_two}
""" …Run Code Online (Sandbox Code Playgroud) 我一直在尝试使用
I've been trying to use Chromadb version 0.4.8 and Langchain version 0.0.276 with the SentenceTransformerEmbeddingFunction, as shown in the code snippet below.
from langchain.vectorstores import Chroma
from chromadb.utils import embedding_functions
# other imports
embedding = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="all-MiniLM-L6-v2")
However, it raises the following error:
RuntimeError: Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.
Interestingly, I do have the required sqlite3 (3.43.0) available, which I can verify with the sqlite3 --version command.
Any help would be appreciated. Thanks.
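For what it's worth, Chroma's own troubleshooting docs point out that the check is against Python's bundled sqlite3 module, not the system binary that sqlite3 --version reports, and suggest swapping in pysqlite3; a sketch of that documented workaround:

# pip install pysqlite3-binary
# Swap the stdlib sqlite3 module for the newer pysqlite3 build
# before chromadb is imported anywhere.
__import__("pysqlite3")
import sys
sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")

import chromadb  # now sees a sqlite3 >= 3.35.0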
I tried using UnstructuredURLLoader as follows:
from langchain.document_loaders import UnstructuredURLLoader

loaders = UnstructuredURLLoader(urls=urls)
data = loaders.load()

but for some pages it reports:
libmagic is unavailable but assists in filetype detection on file-like objects. Please consider installing libmagic for better results.
Error fetching or processing https://wellfound.com/company/chorus-one, exception: Invalid file. The FileType.UNK file type is not supported in partition.

whereas in my conda environment I seem to have it:
%pip list | grep libmagic
libmagic 1.0

But I don't have python-libmagic. When I try to install it:
pip install python-libmagic
I keep getting the error:
Collecting python-libmagic
  Using cached python_libmagic-0.4.0-py3-none-any.whl
Collecting cffi==1.7.0 (from python-libmagic)
  Using …
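A hedged guess at the cause: unstructured looks for the python-magic bindings (package name python-magic, imported as magic) plus the libmagic shared library, which is not the same thing as the python-libmagic package; a minimal check under that assumption:

# %pip install python-magic      # the bindings unstructured actually imports
# %pip install python-magic-bin  # on Windows/macOS, also bundles the libmagic binary
import magic

# If the bindings and the shared library are wired up, detection works on raw bytes.
print(magic.from_buffer(b"%PDF-1.4", mime=True))  # e.g. 'application/pdf'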
Here is the full code. It runs perfectly fine in the https://learn.deeplearning.ai/ notebooks, but when I run it on my local machine I get the following error:

ImportError: Could not import docarray python package
I have tried reinstalling/force-installing langchain and langchain[docarray] (with both pip and pip3). I'm using a miniconda virtual environment, Python version 3.11.4.
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.schema import Document
from langchain.indexes import VectorstoreIndexCreator
import openai
import os
os.environ['OPENAI_API_KEY'] = "xxxxxx" #not needed in DLAI
docs = [
    Document(
        page_content="""[{"API_Name":"get_invoice_transactions","API_Description":"This API when called will provide the list of transactions","API_Inputs":[],"API_Outputs":[]}]"""
    ),
    Document(
        page_content="""[{"API_Name":"get_invoice_summary_year","API_Description":"this api summarizes the invoices by vendor, product and year","API_Inputs":[{"API_Input":"Year","API_Input_Type":"Text"}],"API_Outputs":[{"API_Output":"Purchase Volume","API_Output_Type":"Float"},{"API_Output":"Vendor Name","API_Output_Type":"Text"},{"API_Output":"Year","API_Output_Type":"Text"},{"API_Output":"Item","API_Output_Type":"Text"}]}]"""
    ),
    Document(
        page_content="""[{"API_Name":"loan_payment","API_Description":"This API calculates the monthly payment for a loan","API_Inputs":[{"API_Input":"Loan_Amount","API_Input_Type":"Float"},{"API_Input":"Interest_Rate","API_Input_Type":"Float"},{"API_Input":"Loan_Term","API_Input_Type":"Integer"}],"API_Outputs":[{"API_Output":"Monthly_Payment","API_Output_Type":"Float"},{"API_Output":"Total_Interest","API_Output_Type":"Float"}]}]"""
    ),
    Document( …
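As a hedged check, DocArrayInMemorySearch needs the docarray package itself, and it has to be installed into the exact environment the notebook kernel runs in (the miniconda env, not some other interpreter's pip):

# %pip install docarray   # run inside the same kernel/venv as the imports above
import docarray

print(docarray.__version__)  # confirms the package is visible to this interpreter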
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
local_path = './models/gpt4all-converted.bin'
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
template = """Question: {question}
Answer: Let's think step by step.
"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = GPT4All(model=local_path,
              callback_manager=callback_manager, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
# question = input("Enter your question: ")
llm_chain.run(question)
I'm trying to test langchain with gpt4all locally and am getting this error. It looks like a versioning thing. I've done a lot of searching online but haven't found anything.
Exception ignored in: <function …
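A hedged checkpoint rather than a fix: the GPT4All wrapper changed considerably across early langchain releases (both its backend package and the callback_manager argument moved), so pinning down the installed version is the first step; the callbacks form below assumes a newer langchain is in use.

import langchain

print(langchain.__version__)  # the GPT4All wrapper's backend changed across releases

# On newer langchain releases, a callbacks list replaces callback_manager:
llm = GPT4All(
    model=local_path,  # same local model path as above
    callbacks=[StreamingStdOutCallbackHandler()],
    verbose=True,
)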
I'm building an application with the help of Langchain and OpenAI. I'm loading data with JSONLoader and want to store it in a vector store so I can retrieve it on user requests and answer questions specific to my data. The Langchain documentation describes HNSWLib as a store available only to Node.js applications. As I understand it, Next.js is built on top of Node.js and can run server-side JavaScript, so I should be able to use it. I should also mention that JSONLoader likewise only runs on Node.js, and it works fine, so I think that part is already set up.
Following the documentation for the new route handlers, I created an API route in app/api/llm/route.ts and installed the hnswlib-node package.
import { NextRequest } from 'next/server';
import { OpenAI } from 'langchain/llms/openai';
import { RetrievalQAChain } from 'langchain/chains';
import { JSONLoader } from 'langchain/document_loaders/fs/json';
import { HNSWLib } from 'langchain/vectorstores/hnswlib';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import path from 'path';
// eslint-disable-next-line @typescript-eslint/no-unused-vars, no-unused-vars
export const GET = async (req: …
I want to pass a similarity threshold to the retriever. So far I've only been able to figure out how to pass a k value, but that's not what I want. How can I pass a threshold?

from langchain.document_loaders import PyPDFLoader
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
def get_conversation_chain(vectorstore):
    llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')
    qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(search_kwargs={'k': 2}), return_source_documents=True, verbose=True)
    return qa
loader = PyPDFLoader("sample.pdf")
# get pdf raw text
pages = loader.load_and_split()
faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())  # index the pages loaded above
# create conversation chain
chat_history = []
qa = get_conversation_chain(faiss_index)
query = "What is a sunflower?"
result = qa({"question": query, "chat_history": chat_history})
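A minimal sketch of one way to do this, assuming a langchain version whose as_retriever supports the similarity_score_threshold search type:

# Return only documents whose similarity score clears the threshold,
# rather than a fixed top-k; k still caps the maximum number returned.
retriever = faiss_index.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8, "k": 2},
)

Passing this retriever to ConversationalRetrievalChain.from_llm in place of the plain search_kwargs={'k': 2} one would then filter results by score.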