Is there a way to train a large language model (LLM) to store a specific context? For example, I have a long story I want to ask questions about, but I don't want to put the whole story in every prompt. How can I get the LLM to "remember the story"?
Since GPT-3 models have no parameters that can memorize past conversations, at the moment the only way to "remember" a past conversation is to include it in the prompt.
If we look at the following example:
You are a friendly support person. The customer will ask you questions, and you will provide polite responses
Q: My phone won't start. What do I do? <-- This is a past question
A: Try plugging your phone into the charger for an hour and then turn it on. The most common cause for a phone not starting is that the battery is dead.
Q: I've tried that. What else can I try? <-- This is a past question
A: Hold the button in for 15 seconds. It may need a reset.
Q: I did that. It worked, but the screen is blank. <-- This is a current question
A:
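To make this concrete, here is a minimal sketch (not from the original answer) of how such a prompt can be rebuilt on every request. It assumes the legacy openai Python package (pre-1.0) and the Completions endpoint; the ask() helper, the variable names, and the parameter values are illustrative.

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: key set inline for brevity

INSTRUCTION = (
    "You are a friendly support person. The customer will ask you questions, "
    "and you will provide polite responses"
)

history = []  # past (question, answer) pairs, oldest first

def ask(question):
    # Rebuild the whole prompt: instruction, then every past Q/A pair, then
    # the current question with a trailing "A:" for the model to complete.
    lines = [INSTRUCTION, ""]
    for q, a in history:
        lines += [f"Q: {q}", f"A: {a}"]
    lines += [f"Q: {question}", "A:"]
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="\n".join(lines),
        max_tokens=256,
        temperature=0,
        stop=["Q:"],  # stop before the model writes the customer's next question
    )
    answer = response["choices"][0]["text"].strip()
    history.append((question, answer))  # this pair becomes "past conversation"
    return answer

print(ask("My phone won't start. What do I do?"))
print(ask("I've tried that. What else can I try?"))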
Rules to follow:
The problem you will face:
Prompts are limited by the model's maximum number of tokens; for text-davinci-003, that is 4,096 tokens. When this limit is exceeded, the OpenAI API throws an error. When that happens, you need to reduce the number of past prompt-completion pairs you include (for example, keep only the 4 most recent pairs), as sketched after the pros and cons below.
Pros:
Cons:
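Here is a hedged sketch of the trimming strategy described above: drop the oldest pairs until the prompt fits a token budget. It assumes the tiktoken package, which maps text-davinci-003 to an encoding it can count with; the 4,096-token limit comes from the answer's text, while the 256-token reply reserve and the helper names are illustrative.

import tiktoken

MODEL_LIMIT = 4096     # maximum context size of text-davinci-003
REPLY_RESERVE = 256    # tokens kept free for the model's completion
enc = tiktoken.encoding_for_model("text-davinci-003")

def build_prompt(instruction, history, question):
    lines = [instruction, ""]
    for q, a in history:
        lines += [f"Q: {q}", f"A: {a}"]
    lines += [f"Q: {question}", "A:"]
    return "\n".join(lines)

def trim_history(instruction, history, question):
    # Drop the oldest prompt-completion pairs until the assembled prompt,
    # plus the reserved completion tokens, fits within the model limit.
    history = list(history)
    while history:
        prompt = build_prompt(instruction, history, question)
        if len(enc.encode(prompt)) + REPLY_RESERVE <= MODEL_LIMIT:
            break
        history.pop(0)  # forget the oldest pair first
    return history

Counting tokens before sending the request avoids the API error entirely, rather than reacting to it after the fact.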