I'm building a chat application with Streamlit that connects to an LLM to respond to the user. While the LLM is generating its reply, I want to display a spinner until the response is ready to be printed. For now I'm mocking the LLM's response generation with a simple `time.sleep(5)`. However, the spinner never appears during those 5 seconds; the UI just updates with the response afterwards.

The Streamlit app:
```python
import streamlit as st

from sensei.ui import text, utils

st.chat_input("Your response...", key="disabled_chat_input", disabled=True)

if "messages" not in st.session_state:
    st.session_state["messages"] = [
        {"name": "Sensei", "avatar": "🥷", "content": message, "translated": True, "printed": False}
        for message in text.ONBOARDING_START_MESSAGES[st.session_state.source_language]
    ]

for message in st.session_state.messages:
    with st.chat_message(name=message["name"], avatar=message["avatar"]):
        if message["name"] == "Sensei" and not message["printed"]:
            utils.stream_message(message=message["content"])
            message["printed"] = True
        else:
            st.markdown(body=message["content"])

if st.session_state.messages[-1]["name"] == "user":
    with st.spinner("Thinking..."):
        sensei_response = utils.temp_get_response()
    st.session_state.messages.append(
        {"name": "Sensei", "avatar": "🥷", "content": sensei_response, "translated": True, "printed": False}
    )
    st.rerun()

if user_response := st.chat_input(placeholder="Your response...", key="enabled_chat_input"):
    st.session_state.messages.append({"name": "user", "avatar": "user", "content": user_response})
    st.rerun()
```
The `temp_get_response` function:
```python
import time


def temp_get_response() -> str:
    """Get a response from the user."""
    time.sleep(5)  # mock the LLM's generation latency
    return "Well isn't that just wonderful!"
The `stream_message` function (this isn't the problem, because the behaviour is the same if I write the message normally without streaming):
```python
import time

import streamlit as st


def stream_message(message: str) -> None:
    """Stream a message to the chat."""
    message_placeholder = st.empty()
    full_response = ""
    for chunk in message.split():
        full_response += chunk + " "
        time.sleep(0.1)
        message_placeholder.markdown(body=full_response + "▌")
    message_placeholder.markdown(body=full_response)
```
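Incidentally, the accumulation logic in `stream_message` can be exercised without Streamlit by collecting the intermediate frames it would render; `collect_frames` is a standalone, hypothetical sketch:

```python
def collect_frames(message: str, cursor: str = "▌") -> list[str]:
    """Return the successive strings stream_message would write to its placeholder."""
    frames = []
    full_response = ""
    for chunk in message.split():
        full_response += chunk + " "
        frames.append(full_response + cursor)  # partial text with a blinking cursor
    frames.append(full_response)  # final frame drops the cursor
    return frames


frames = collect_frames("hello wonderful world")
```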
I seem to have fixed this simply by reordering the last two `if` statements and removing the `st.rerun()` calls. I honestly don't fully understand why this fixes it, but it does!
```python
import streamlit as st

from sensei.ui import text, utils

st.chat_input("Your response...", key="disabled_chat_input", disabled=True)

if "messages" not in st.session_state:
    st.session_state["messages"] = [
        {"name": "Sensei", "avatar": "🥷", "content": message, "printed": False}
        for message in text.ONBOARDING_START_MESSAGES[st.session_state.source_language]
    ]

for message in st.session_state.messages:
    with st.chat_message(name=message["name"], avatar=message["avatar"]):
        if message["name"] == "Sensei" and not message["printed"]:
            utils.stream_message(message=message["content"])
            message["printed"] = True
        else:
            st.markdown(body=message["content"])

if user_response := st.chat_input(placeholder="Your response...", key="enabled_chat_input"):
    with st.chat_message(name="user", avatar="user"):
        st.markdown(body=user_response)
    st.session_state.messages.append({"name": "user", "avatar": "user", "content": user_response})

if st.session_state.messages[-1]["name"] == "user":
    with st.spinner("Thinking..."):
        sensei_response = utils.temp_get_response()
    # Render the reply as Sensei explicitly; the loop variable `message` still
    # holds the last (user) message here, so reusing it would show the wrong avatar.
    with st.chat_message(name="Sensei", avatar="🥷"):
        utils.stream_message(message=sensei_response)
    st.session_state.messages.append({"name": "Sensei", "avatar": "🥷", "content": sensei_response, "printed": True})
```