Hud*_*kin 6 python openai-api py-langchain
I'm trying to build a chatbot that can discuss a PDF, and I have it working with memory using ConversationBufferMemory and ConversationalRetrievalChain, as in this example: https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html
Now I'm trying to give the AI some special instructions to make it talk like a pirate (just to test whether it actually receives the instructions). I think this should be a SystemMessage, or maybe something with a prompt template?
I've tried everything I could find, but all the examples in the documentation are for ConversationChain, and I keep running into problems. So far the only thing that hasn't raised any errors is this:
template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
PROMPT = PromptTemplate(
input_variables=["chat_history", "question"], template=template
)
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), PROMPT, memory=memory, return_source_documents=True)
It still has no effect on the results, so I don't know whether it's doing anything at all. I also suspect this is the wrong approach and that I should be using SystemMessages (maybe on the memory, not on the qa), but nothing I've tried from the documentation works and I don't know what to do.
Your code raises this exception: document_variable_name context was not found in llm_chain input_variables: ['chat_history', 'question'] (type=value_error)
This is caused by the missing context variable in the prompt. Try injecting the context like so:
template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey
Context: {context}
Chat History: {chat_history}
Follow Up Input: {question}
Standalone question:"""
PROMPT = PromptTemplate(
input_variables=["context", "chat_history", "question"],
template=template
)
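To see where that error comes from: the default combine-docs chain ("stuff") validates that the prompt declares the variable retrieved documents are injected into, which defaults to "context". A minimal sketch of that check (mirroring LangChain's validation, not the library code itself):

```python
# Sketch of the validation that produces the error above. In LangChain,
# StuffDocumentsChain checks that its document_variable_name (default
# "context") appears among the prompt's input_variables.
def validate_prompt(input_variables, document_variable_name="context"):
    if document_variable_name not in input_variables:
        raise ValueError(
            f"document_variable_name {document_variable_name} was not found "
            f"in llm_chain input_variables: {input_variables}"
        )

# The original prompt only declares these two variables, so it fails:
try:
    validate_prompt(["chat_history", "question"])
except ValueError as e:
    print(e)

# Adding "context" satisfies the check:
validate_prompt(["context", "chat_history", "question"])
```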
You cannot pass PROMPT directly as a positional parameter to ConversationalRetrievalChain.from_llm(). Try using the combine_docs_chain_kwargs parameter to pass your PROMPT instead. See the following example, based on the sample code you provided:
template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
PROMPT = PromptTemplate(
input_variables=["chat_history", "question"],
template=template
)
memory = ConversationBufferMemory(
memory_key='chat_history',
return_messages=True,
output_key='answer'
)
qa = ConversationalRetrievalChain.from_llm(
llm=OpenAI(temperature=0),
retriever=vectorstore.as_retriever(),
memory=memory,
return_source_documents=True,
combine_docs_chain_kwargs={"prompt": PROMPT}
)
Then get the result:
result = qa({"question": query})
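Because return_source_documents=True is set, the returned dict carries the retrieved documents alongside the answer. An illustrative sketch of the result's shape (values hard-coded here for illustration, not a live call):

```python
# Illustrative shape of the dict returned by qa({"question": query})
# when return_source_documents=True. A real call fills in the values;
# the keys are what the chain produces.
result = {
    "question": "What be in this here PDF?",
    "chat_history": [],      # messages accumulated by the memory
    "answer": "It be about retrieval chains, Ay Ay Matey",
    "source_documents": [],  # the Documents the retriever returned
}

print(result["answer"])
for doc in result["source_documents"]:
    # each Document carries page_content and metadata (e.g. source/page)
    print(doc.metadata)
```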