My LINE chatbot using a LlamaIndex TS agent cannot remember previous questions

Question · Votes: 0 · Answers: 1

I created a LINE chatbot with LlamaIndex TypeScript, and it answers questions well. I want to use an agent that can remember previous questions and respond with that context. The agent works when used in a loop, as in the code below.

import fs from "node:fs/promises";
import { Document, VectorStoreIndex, QueryEngineTool, OpenAIAgent } from "llamaindex";
import readlineSync from "readline-sync"; // synchronous prompt for the console loop

const path = "data/car.json";

async function main() {
  // Read the JSON file and build an in-memory vector index over it
  const essay = await fs.readFile(path, "utf-8");
  const document = new Document({ text: essay, id_: path });
  const index = await VectorStoreIndex.fromDocuments([document]);

  const individual_query_engine_tools = [
    new QueryEngineTool({
      queryEngine: index.asQueryEngine(),
      metadata: {
        name: "vector_index",
        description: "Useful when you want to answer questions about information about cars assembled in Thailand.",
      },
    }),
  ];

  const agent = new OpenAIAgent({
    tools: [...individual_query_engine_tools],
    verbose: true
  });

  // Console loop: the same agent instance is reused on every iteration,
  // so it keeps the chat history across questions
  while (true) {
    const userInput = readlineSync.question("Enter your question (type 'exit' to quit): ");
    if (userInput.toLowerCase() === 'exit') {
      console.log("Exiting...");
      break;
    }
    else if (userInput !== '') {
      const response = await agent.chat({
        message: userInput,
      });
      console.log(response.message.content);
    }
  }
}

main().catch(console.error);

Output


Enter your question (type 'exit' to quit): toyota yaris price?
The price of the Toyota Yaris ranges from 559,000 to 709,000 baht.

Enter your question (type 'exit' to quit): color?
The Toyota Yaris is available in 6 colors: white, black, red, blue, gray, and yellow.

Enter your question (type 'exit' to quit): sound system?
The Toyota Yaris is equipped with a JBL sound system featuring 8 speakers, providing a high-quality audio experience.

But when I use it in my LINE chatbot project, the results are not as expected. The code is below.

const line = require('@line/bot-sdk');
const express = require('express');
const dotenv = require('dotenv');
const fs = require("fs").promises;
const { Document, VectorStoreIndex, QueryEngineTool, OpenAIAgent } = require("llamaindex");

dotenv.config();

const lineConfig = {
    channelAccessToken: process.env.ACESS_TOKEN,
    channelSecret: process.env.SECRET_TOKEN
};

const client = new line.Client(lineConfig);
const app = express();

app.post('/webhook', line.middleware(lineConfig), async(req, res) => {
    try {
        const events = req.body.events;
        console.log('events =>>>>', events);

        if (events && events.length > 0) {
            await Promise.all(events.map(async(event) => {
                await handleEvent(event);
            }));
        }

        res.status(200).send('OK');
    } catch (error) {
        console.error('Error processing events:', error);
        res.status(500).end();
    }
});

const handleEvent = async (event) => {
    if (event.type !== 'message' || event.message.type !== 'text') {
        return null;
    } else {

        const userQuery = event.message.text;

        // (Disabled experiment: push a sticker after 2 seconds)
        // setTimeout(function() {
        //     client.pushMessage(event.source.userId, {
        //         "type": "sticker",
        //         "packageId": "11538",
        //         "stickerId": "51626518"
        //     });
        // }, 2000);

        // Load the data and build the index and agent for the user query.
        // Note: this runs on EVERY webhook event, so the index and the agent
        // below are rebuilt from scratch for each incoming message.
        const path = "data/car.json";
        const essay = await fs.readFile(path, "utf-8");
        const document = new Document({ text: essay, id_: path });
        const index = await VectorStoreIndex.fromDocuments([document]);

        const individual_query_engine_tools = [
            new QueryEngineTool({
                queryEngine: index.asQueryEngine(),
                metadata: {
                    name: "vector_index",
                    description: "Useful when you want to answer questions about information about cars assembled in Thailand.",
                },
            }),
        ];

        const agent = new OpenAIAgent({
            tools: [...individual_query_engine_tools],
            verbose: true
        });

        try {
            const { response } = await agent.chat({ message: userQuery });

            // Push the agent's answer back to the user
            await client.pushMessage(event.source.userId, {
                type: 'text',
                text: response
            });

        } catch (error) {
            console.error("Error handling query:", error);
            return client.pushMessage(event.source.userId, {
                type: 'text',
                text: 'Sorry, there was an error processing your request.'
            });
        }
    }
};

app.listen(4000, () => {
    console.log('listening on 4000');
});

With the LINE chatbot, the bot answers my first question through LINE correctly, but when I ask a follow-up that depends on the context of the previous question, it does not remember it and always treats it as the first question.

The answers in the LINE chat look like this (screenshot omitted): the bot answers the first message, but each follow-up is treated as a brand-new question.

My personal view is that this happens because the code starts over every time an event occurs, beginning at line 38 of the code (I don't know whether I'm right):

const handleEvent = async(event) => {
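
If that guess is right, and the working console version suggests it is (there, one long-lived agent is reused on every loop iteration), I think one way to test it would be to build the index once at startup and keep one agent per LINE user, so the chat history survives across webhook events. Below is a minimal sketch of what I mean, reusing the same llamaindex calls as my code above; agentsByUser, init, and getAgentFor are just names I made up, not library APIs.

// Build the index and the query-engine tool once at startup, not per event
const agentsByUser = new Map(); // LINE userId -> OpenAIAgent
let queryEngineTool;

async function init() {
    const essay = await fs.readFile("data/car.json", "utf-8");
    const document = new Document({ text: essay, id_: "data/car.json" });
    const index = await VectorStoreIndex.fromDocuments([document]);
    queryEngineTool = new QueryEngineTool({
        queryEngine: index.asQueryEngine(),
        metadata: {
            name: "vector_index",
            description: "Useful when you want to answer questions about information about cars assembled in Thailand.",
        },
    });
}

function getAgentFor(userId) {
    // Reusing the same agent across events is what should preserve chat history,
    // just like the long-lived agent in the console loop above
    if (!agentsByUser.has(userId)) {
        agentsByUser.set(userId, new OpenAIAgent({ tools: [queryEngineTool], verbose: true }));
    }
    return agentsByUser.get(userId);
}

handleEvent would then call getAgentFor(event.source.userId) instead of constructing a new OpenAIAgent, and init() would run once before app.listen. (An in-memory Map loses history on restart and grows per user, so a real deployment would need eviction or external storage, but it should be enough to test the hypothesis.)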

I am a fourth-year university student in Thailand, trying to learn how to build a chatbot with LlamaIndex. So if I have made a mistake anywhere in the code, I hope the readers of this post can teach me. Thank you for reading; I hope to solve this soon. Thank you!

javascript typescript line chatbot llama-index
1 Answer
0 votes

I strongly suggest you use LangChain, so that you can pass the history into the prompt each time.

import streamlit as st  # added: the original snippet assumes a Streamlit app
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI  # added: a model is needed to run the prompt

contextualize_q_system_prompt = """Given a chat history and the latest user question \
which might reference context in the chat history, formulate a standalone question \
which can be understood without the chat history. Do NOT answer the question, \
just reformulate it if needed and otherwise return it as is."""

contextualize_q_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_q_system_prompt),
        MessagesPlaceholder(variable_name="chat_history"),
        ("human", "{question}"),
    ]
)

llm = ChatOpenAI()

if 'history' not in st.session_state:
    st.session_state['history'] = []
if 'costs' not in st.session_state:
    st.session_state['costs'] = []

# This uses Streamlit for the session, but you can substitute whatever you want.
# `question` is assumed to come from the app's input, e.g. st.chat_input().
with get_openai_callback() as cb:
    ai_msg = (contextualize_q_prompt | llm).invoke(
        {"question": question, "chat_history": st.session_state['history']}
    )
st.session_state['costs'].append(cb.total_cost)

with st.chat_message('Question:'):
    st.markdown(question)

with st.chat_message('Answer'):
    st.markdown(ai_msg.content)

for message in st.session_state['history']:
    if isinstance(message, AIMessage):
        with st.chat_message('Answer'):
            st.markdown(message.content)
    elif isinstance(message, HumanMessage):
        with st.chat_message('Question'):
            st.markdown(message.content)

st.session_state['history'].extend([HumanMessage(content=question), ai_msg])

costs = st.session_state.get('costs', [])
st.sidebar.markdown("## Costs")
st.sidebar.markdown(f"**Total cost: ${sum(costs):.5f}**")
for cost in costs:
    st.sidebar.markdown(f"- ${cost:.5f}")