Currently I can send a grammar to `HuggingFaceEndpoint` as a constructor argument:
```python
class QuestionValidator(BaseModel):
    label: str

llm_base = HuggingFaceEndpoint(
    endpoint_url=ENDPOINT_URL,
    max_new_tokens=1024,
    seed=60,
    model_kwargs={"grammar": {"type": "json", "value": QuestionValidator.schema()}},
)

llm_with_grammar = prompt | llm_base | JsonOutputParser()
res = llm_with_grammar.invoke(
    {"question": "tell me a joke?", "table_schemas": schema_info}
)
print(res)
```
However, I want to send the grammar dynamically. For example, can I pass a specific grammar to `HuggingFaceEndpoint` at invocation time (via `invoke`) instead of through the constructor?
I also found that `with_config` in LangChain seems relevant, but I don't know how to use it: https://python.langchain.com/v0.1/docs/expression_language/primitives/configure/

Something like the following:
```python
llm_base = HuggingFaceEndpoint(
    endpoint_url=ENDPOINT_URL,
    max_new_tokens=1024,
    seed=60,
)

grammar1 = {"grammar": {"type": "json", "value": QuestionValidator.schema()}}
grammar2 = {"grammar": {"type": "json", "value": AnswerValidator.schema()}}

llm_with_grammar1 = prompt_template_1 | llm_base.with_config(grammar1) | StrOutputParser()
llm_with_grammar2 = prompt_template_2 | llm_base.with_config(grammar2) | StrOutputParser()
```
After searching LangChain, I didn't find anything that solves the problem above. I did find an alternative approach using a `ChatModel`:
```python
chat_model = ChatHuggingFace(llm=llm_base)
grammar1 = {"type": "json", "value": QuestionValidator.schema()}

res = chat_model.invoke(
    [
        ("system", system_prompt.format(table_schemas=schema_info)),
        ("user", "How many employees do we have in total?"),
    ],
    response_format=grammar1,
)
```
I also found another option:
```python
import functools

def llm_with_grammar(inputs, llm, grammar):
    return llm.invoke(inputs, response_format=grammar)

chain = prompt | functools.partial(llm_with_grammar, llm=chat_model, grammar=grammar1) | JsonOutputParser()
```
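For context on why the `functools.partial` option works: piping a plain callable into an LCEL chain coerces it into a runnable, and `partial` simply pre-binds the `llm` and `grammar` keyword arguments so that only the runtime inputs flow through the chain. Here is a minimal, LangChain-free sketch of that pre-binding idea; `fake_invoke` and its return shape are made up for illustration and only mimic an LLM's `invoke`:

```python
import functools

# Stand-in for an LLM's invoke(): records the inputs and which
# response_format (grammar) it was called with.
def fake_invoke(inputs, response_format=None):
    return {"inputs": inputs, "response_format": response_format}

grammar1 = {"type": "json", "value": {"title": "QuestionValidator"}}

# Pre-bind the grammar so the resulting callable takes only the runtime
# inputs, mirroring functools.partial(llm_with_grammar, llm=..., grammar=grammar1).
invoke_with_grammar = functools.partial(fake_invoke, response_format=grammar1)

print(invoke_with_grammar("tell me a joke?"))
```

In LangChain itself, `chat_model.bind(response_format=grammar1)` should achieve the same pre-binding without a helper function, since `bind` attaches keyword arguments that are forwarded to the underlying `invoke` call.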