I built a RAG pipeline with GPT4All using LangChain. The API works fine when I use it for other purposes, but as soon as a request hits the RAG pipeline it fails with an access-denied error.
The error I get: PermissionError
Root cause: the model download directory cannot be created under the system profile. Access is denied to 'C:\Windows\system32\config\systemprofile\.cache'
Environment:
Python version: 3.12
OS: Windows
Key packages: langchain (langchain-core, langchain-community), pydantic, gpt4all
Full error:
{
"error":"Error in Database Chain: Failed to create model download directory",
"traceback":"Traceback (most recent call last):
File "C:\Python312\Lib\site-packages\gpt4all\gpt4all.py", line 323, in retrieve_model
os.makedirs(DEFAULT_MODEL_DIRECTORY, exist_ok=True)
File "", line 215, in makedirs
File "", line 225, in makedirs
PermissionError: [WinError 5] Access is denied:
'C:\\Windows\\system32\\config\\systemprofile\\.cache'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\inetpub\wwwroot\hashmove-ai\HMAI\Controllers\ConversationalAIv2_Controller.py", line 143, in post
hmgpt_response = query_response(input_query, query_intent)
File "C:\inetpub\wwwroot\hashmove-ai\HMAI_Business\ConversationalAIv2_Business.py", line 281, in query_response
return response_functions.get(intent_response, irrelevant)(query)
File "C:\inetpub\wwwroot\hashmove-ai\HMAI_Business\ConversationalAIv2_Business.py", line 251, in logistics
chain = log_chain(load_llm(), vector_db(), memory(), log_prompt())
File "C:\inetpub\wwwroot\hashmove-ai\HMAI_Business\ConversationalAIv2_Business.py", line 90, in load_llm
loaded_llm = GPT4All(
File "C:\Python312\Lib\site-packages\langchain_core\load\serializable.py", line 125, in init
super().init(*args, **kwargs)
File "C:\Python312\Lib\site-packages\pydantic\main.py", line 212, in init
validated_self = self.pydantic_validator.validate_python(data, self_instance=self)
File "C:\Python312\Lib\site-packages\pydantic\_internal\_decorators_v1.py", line 148, in _wrapper1
return validator(values)
File "C:\Python312\Lib\site-packages\langchain_core\utils\pydantic.py", line 208, in wrapper
return func(cls, values)
File "C:\Python312\Lib\site-packages\langchain_community\llms\gpt4all.py", line 145, in validate_environment
values["client"] = GPT4AllModel(
File "C:\Python312\Lib\site-packages\gpt4all\gpt4all.py", line 235, in init
self.config: ConfigType = self.retrieve_model(model_name, model_path=model_path, allow_download=allow_download, verbose=verbose)
File "C:\Python312\Lib\site-packages\gpt4all\gpt4all.py", line 325, in retrieve_model
raise RuntimeError("Failed to create model download directory") from e
RuntimeError: Failed to create model download directory
"}
I tried setting the cache path manually:
os.environ["TIKTOKEN_CACHE_DIR"] = "embeddings-cache"
but it had no effect.
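If I read langchain_community/llms/gpt4all.py correctly, the wrapper only derives a model_path when the model string contains forward slashes; otherwise gpt4all falls back to that default .cache directory. So I suspect I need to point it at a directory the worker process can write to, something like this (untested sketch; the path and file name are placeholders):

```python
from langchain_community.llms import GPT4All

# Guess: give gpt4all an explicit, writable model directory instead of the
# default one under the service account's profile.
MODEL_DIR = "C:/inetpub/wwwroot/hashmove-ai/models"  # placeholder path

llm = GPT4All(
    # forward slashes so the wrapper can split off model_path from the file name
    model=f"{MODEL_DIR}/mistral-7b-instruct-v0.1.Q4_0.gguf",
    allow_download=True,
)
```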
Note: I have deployed this on a server using IIS.
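Under the IIS app pool, gpt4all derives its default cache location from the process home directory, which from the error appears to be the system profile. A quick check like this inside the hosted app shows what it resolves to:

```python
import os
from pathlib import Path

# Diagnostic: log the "home" the worker process sees, since gpt4all builds its
# default model directory from it.
print("home:", Path.home())  # C:\Windows\system32\config\systemprofile under the app-pool identity
print("USERPROFILE:", os.environ.get("USERPROFILE"))
```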