ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources', 'source_documents']. (langchain/Streamlit)


I'm getting the following error:

ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources', 'source_documents'].

Here is the traceback:

File "C:\Users\Science-01\anaconda3\envs\gpt-dev\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "C:\Users\Science-01\Documents\Working Folder\Chat Bot\Streamlit\alpha-test.py", line 67, in <module>
    response = chain.run(prompt, return_only_outputs=True)
File "C:\Users\Science-01\anaconda3\envs\gpt-dev\lib\site-packages\langchain\chains\base.py", line 228, in run
    raise ValueError(

I'm trying to run LangChain on Streamlit, using RetrievalQAWithSourcesChain together with ChatPromptTemplate.

Here is my code:

import os

import streamlit as st

from apikey import apikey

from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI

from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

from langchain.chat_models import ChatOpenAI

os.environ['OPENAI_API_KEY'] = apikey

st.title('🐔 OpenAI Testing')
prompt = st.text_input('Put your prompt here')

loader = DirectoryLoader('./',glob='./*.pdf', loader_cls=PyPDFLoader)
pages = loader.load_and_split()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 1000,
    chunk_overlap  = 200,
    length_function = len,
)

docs = text_splitter.split_documents(pages)
embeddings = OpenAIEmbeddings()

docsearch = Chroma.from_documents(docs, embeddings)

system_template = """
Use the following pieces of context to answer the users question.
If you don't know the answer, just say that "I don't know", don't try to make up an answer.
----------------
{summaries}"""

messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{question}")
]
prompt = ChatPromptTemplate.from_messages(messages)

chain_type_kwargs = {"prompt": prompt}
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, max_tokens=256)  # Modify model_name if you have access to GPT-4
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(search_kwargs={'k':2}),
    return_source_documents=True,
    chain_type_kwargs=chain_type_kwargs
)

if prompt:
    response = chain.run(prompt, return_only_outputs=True)
    st.write(response)

The error seems to come from chain.run(). Does anyone know how to fix it? Thanks!

python openai-api langchain py-langchain
2 Answers

3 votes

I found the solution. Change this code:

if prompt:
    response = chain.run(prompt, return_only_outputs=True)
    st.write(response)

to this:

if st.button('Generate'):
    if prompt:
        with st.spinner('Generating response...'):
            response = chain({"question": prompt}, return_only_outputs=True)
            answer = response['answer']
            st.write(answer)
    else:
        st.warning('Please enter your prompt')

The chain has three output keys (answer, sources, source_documents) because return_source_documents=True, and run() only supports chains with exactly one output key; calling the chain object directly returns a dict, from which you take the answer key. I also added st.button, st.spinner, and st.warning (optional).

Note that the question's code reassigns prompt to the ChatPromptTemplate, shadowing the text-input value, so rename one of the two (e.g. chat_prompt) to make sure the user's question is what reaches the chain.
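For illustration only, the check behind this ValueError can be sketched as follows. This is a simplified toy stand-in with hard-coded placeholder values, not LangChain's actual implementation:

```python
# Toy sketch of why run() rejects multi-output chains.
# Simplified stand-in for illustration, NOT LangChain's actual code.

class ToyChain:
    output_keys = ["answer", "sources", "source_documents"]

    def __call__(self, inputs, return_only_outputs=False):
        # Calling the chain directly always returns a dict of all outputs.
        # Values here are hard-coded placeholders.
        return {"answer": "42", "sources": "doc.pdf", "source_documents": []}

    def run(self, *args, **kwargs):
        # run() is a convenience wrapper that unwraps the single output value,
        # so it refuses to guess when there is more than one output key.
        if len(self.output_keys) != 1:
            raise ValueError(
                f"`run` not supported when there is not exactly one "
                f"output key. Got {self.output_keys}."
            )
        return self(*args, **kwargs)[self.output_keys[0]]

chain = ToyChain()
response = chain({"question": "What is the answer?"})
print(response["answer"])  # picking a single key from the returned dict
```

Calling chain.run(...) on this toy object raises the same ValueError as in the question, while chain({...})["answer"] succeeds.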


0 votes

It works if you simply remove run. Example:

parent_chain = SequentialChain(
    chains=[chain, chain2],
    input_variables=['name'],
    output_variables=['person', 'dob'],
    verbose=True,
)

if input_text: 
    st.write(parent_chain.run({'name':input_text}))

The script above gives me an error when I call run on parent_chain, but when I remove run, as below, it produces output.

Working example:

parent_chain = SequentialChain(
    chains=[chain, chain2],
    input_variables=['name'],
    output_variables=['person', 'dob'],
    verbose=True,
)

if input_text: 
    st.write(parent_chain({'name':input_text}))
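The same pattern applies here: calling the chain object returns a dict keyed by its output names, so you index the keys you need. A minimal sketch with made-up values (the key names follow the SequentialChain example above):

```python
# Hypothetical response dict, as returned by calling a multi-output chain
# directly; the values are invented for illustration.
response = {"person": "Ada Lovelace", "dob": "1815-12-10"}

# Pick individual outputs instead of relying on run()'s single-value unwrap.
person = response["person"]
dob = response["dob"]
print(f"{person} was born on {dob}")
```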