ConversationalRetrievalChain vs. LLMChain


I developed a script that works well, shown below:

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

def get_response_from_query(db, query, k=4):
    docs = db.similarity_search(query, k=k)
    docs_page_content = " ".join([d.page_content for d in docs])

    chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2)

    # Template to use for the system message prompt (redacted here; the real template also references {docs})
    template = """
      this is a custom prompt template
        """

    system_message_prompt = SystemMessagePromptTemplate.from_template(template)

    # Human question prompt
    human_template = "Answer the following question: {question}"
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

    chat_prompt = ChatPromptTemplate.from_messages(
        [system_message_prompt, human_message_prompt]
    )

    chain = LLMChain(llm=chat, prompt=chat_prompt)

    response = chain.run(question=query, docs=docs_page_content)
    response = response.replace("\n", "")
    return response, docs
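
For reference, a minimal sketch of how a db like this can be built and how the function is called (FAISS and OpenAIEmbeddings are only placeholders here, not necessarily what the app actually uses):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Hypothetical vector store; the real one is built from the app's own documents.
embeddings = OpenAIEmbeddings()
db = FAISS.from_texts(["some source text", "more source text"], embeddings)

response, docs = get_response_from_query(db, "What does the source say about X?")
print(response)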

This worked very well until I tried to use it in a new app built with Streamlit.

First, the first two lines of code, which perform the similarity search, break the app with an error.

After removing those lines, the rest of the code returns nothing to the interface.

So I tried a different approach, as follows:

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

def get_conversation_chain(vectorstore):
    llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2)
    memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm=llm, retriever=vectorstore.as_retriever(), memory=memory
    )
    return conversation_chain
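
For reference, a chain built this way is then invoked with just the new question, since the attached memory supplies the chat history. A minimal usage sketch (vectorstore is assumed to exist already):

conversation_chain = get_conversation_chain(vectorstore)
result = conversation_chain({"question": "What does the document say about X?"})
print(result["answer"])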

This works fine, until I try to add a fourth argument to ConversationalRetrievalChain, namely combine_docs_chain_kwargs={"prompt": prompt} (using the code here). There I build the prompt the same way as in the first snippet, but I keep getting errors saying that context is missing for the {docs} or {user_question} placeholders:

ValidationError: 1 validation error for StuffDocumentsChain
__root__
  document_variable_name context was not found in llm_chain input_variables: [] (type=value_error)

Since the similarity search fails, I can't pass anything to {docs}, and even removing it doesn't help. Where does {context} come from?
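
For reference, {context} is the default document_variable_name of the StuffDocumentsChain that ConversationalRetrievalChain builds internally: the retrieved documents are injected under that name, so a prompt passed via combine_docs_chain_kwargs has to declare context and question rather than docs. A minimal sketch under those defaults (llm, memory and vectorstore as in the function above), which also tells the model not to answer when it is unsure:

from langchain.prompts import PromptTemplate

# {context} receives the retrieved documents, {question} the (condensed) user question.
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Use the following context to answer the question. "
        "If you are not sure of the answer, say that you don't know.\n\n"
        "Context: {context}\n\n"
        "Question: {question}"
    ),
)

conversation_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": prompt},
)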

What am I missing here? Which approach should I use, ConversationalRetrievalChain or LLMChain? And why doesn't the similarity search work?

The only thing I really need is to tell the LLM not to answer when it isn't sure, and for that I need to send a custom template/prompt along with the user's question.

I'm new to AI, so please help.

As described above, I have tried both approaches.

python artificial-intelligence openai-api langchain
1 Answer

I created a conversation using conversation memory and then asked the LLM to summarize that conversation. The same idea applies if you use a vector database to initialize the conversation and then inspect the chat-history messages held in memory.

import os

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.memory import ConversationBufferMemory
# LangChainChatOpenAI is assumed to be an alias for the langchain_openai ChatOpenAI class.
from langchain_openai import ChatOpenAI as LangChainChatOpenAI

# The answer does not show where `key` comes from; reading it from the environment is an assumption.
key = os.environ["OPENAI_API_KEY"]

conversation = [
    {"user": "Hey Louise, can you help me with something?"},
    {"ai": "Of course! What do you need help with today?"},
    {"user": "I’m trying to organize a surprise birthday party for my friend. Any ideas?"},
    {"ai": "That sounds fun! What does your friend enjoy? We can brainstorm a theme based on their interests."},
    {"user": "They love the outdoors and hiking. I was thinking of doing something in a park."},
    {"ai": "Great idea! How about a picnic party? You could set up a cozy area with blankets, snacks, and decorations that reflect nature. Maybe even some outdoor games!"},
    {"user": "I love that! What kind of snacks should I include?"},
    {"ai": "Consider packing finger foods that are easy to eat outdoors. Sandwiches, fruit skewers, veggie platters, and some homemade cookies would be perfect. Don’t forget refreshing drinks!"},
    {"user": "Good call! How about decorations? I want it to feel special."},
    {"ai": "You could use biodegradable decorations like paper lanterns, wildflower bouquets, and string lights. A nature-themed cake would also be a nice centerpiece!"},
    {"user": "These are awesome suggestions! How many people should I invite?"},
    {"ai": "It depends on the size of the park area you’ll be using. A smaller group of close friends might make it feel more intimate, while a larger group could bring more energy. What do you think?"},
    {"user": "Maybe around 10-15 people. That feels manageable."},
    {"ai": "Perfect! Just remember to send out the invites soon so everyone can mark their calendars. Do you need help with the invitation wording?"},
    {"user": "Yes, please! I want it to be fun and casual."},
    {"ai": "How about this: “Join us for a surprise outdoor adventure to celebrate [Friend’s Name]’s birthday! Bring your favorite snacks and your love for nature. Let’s make some unforgettable memories!”"},
    {"user": "I love it! Thanks, Louise. You’ve been a huge help."},
    {"ai": "Anytime! Have a blast planning the party, and let me know if you need anything else."}
]

def example_tool(input_text):
    system_prompt = "You are a Louise AI agent. Louise, you will be fair and reasonable in your responses to subjective statements. Logic puzzle the facts or theorize future events or optimize facts providing resulting inferences. Think"
    return f"{system_prompt} Processed input: {input_text}"

# Initialize the LLM
llm = LangChainChatOpenAI(model="gpt-4o-mini", temperature=0, openai_api_key=key)

# Define tools
tools = [
    Tool(
        name="ExampleTool",
        func=example_tool,
        description="A simple tool that processes input text."
    )
]

# Initialize memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Loop through the conversation and add messages to memory
for message in conversation:
    if "user" in message:
        memory.chat_memory.add_user_message(message["user"])
    elif "ai" in message:
        memory.chat_memory.add_ai_message(message["ai"])

# Initialize the agent with memory
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    memory=memory
)

# Query to recall previous discussion
query = "Tell me in detail about our previous discussion about the party. Louise enumerate the foods that will be served at the party."
response = agent.run(query)

# Print the response
print(response)


print(memory.chat_memory.messages)