I am trying to run a Python script from a PHP script using proc_open. The Python script works fine when run directly from the command line, but it raises a UnicodeEncodeError when launched from PHP. I suspect the problem has something to do with how the output is captured and handled on the PHP side, but I'm not sure how to fix it.
I am working on a project where a PHP script needs to call a Python script. The Python script processes some data and prints the results. Here is the PHP code I am using:
<?php
function my_shell_exec($cmd, &$stdout = null, &$stderr = null) {
    // Capture the child's stdout (1) and stderr (2) through pipes.
    $proc = proc_open($cmd, [
        1 => ['pipe', 'w'],
        2 => ['pipe', 'w'],
    ], $pipes);
    $stdout = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[2]);
    // proc_close() returns the child's exit code.
    return proc_close($proc);
}

$output = my_shell_exec('.\\.venv\\Scripts\\activate && .\\.venv\\Scripts\\python.exe infer.py', $stdout, $stderr);
var_dump($output); // exit code
var_dump($stdout);
var_dump($stderr);
exit();
Output from PHP:
int(1)
string(183) "Init model
Model already exists in keras-model/model-transventricular-v3.keras, skipping model init.
Infer image
(1, 256, 256, 1)
" keras-model\model-transventricular-v3.keras "
"
string(2416) "2024-06-09 22:57:14.361024: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
...
Traceback (most recent call last):
  File "D:\path\to\infer.py", line 220, in <module>
    infer_image("input.dat")
  File "D:\path\to\infer.py", line 201, in infer_image
    prediction = model.predict(images)
  File "C:\path\to\python\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 19-38: character maps to <undefined>
Direct command-line execution (works fine):
(.venv) D:\path\to\project>.\.venv\Scripts\activate && .\.venv\Scripts\python.exe infer.py
2024-06-09 22:56:04.593392: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on.
...
Init model
Model already exists in keras-model/model-transventricular-v3.keras, skipping model init.
Infer image
(1, 256, 256, 1)
I0000 00:00:1717948579.475089 18516 service.cc:153] StreamExecutor device (0): Host, Default Version
2024-06-09 22:56:19.530820: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
I0000 00:00:1717948580.392302 18516 device_compiler.h:188] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
1/1 ━━━━━━━━━━━━━━━━━━━━ 2s 2s/step
The Python script (infer.py):
import os
from keras.models import load_model  # type: ignore
# from keras.optimizers import Adam  # type: ignore
from dotenv import load_dotenv
import cv2
import numpy as np
import gdown

# init_model() and preproces() are defined elsewhere in the script (omitted here)

def infer_image(image_filepath):
    # Preprocess image
    images = preproces(image_filepath)
    print(images.shape)
    path = os.path.join('keras-model', 'model-transventricular-v3.keras')
    print('"', path, '"')
    # Load model
    model = load_model(path)
    # Predict image
    prediction = model.predict(images)
    predictions = np.argmax(prediction, axis=1)
    # further processing

if __name__ == '__main__':
    if not os.path.exists('input.dat'):
        print("input.dat not exists")
        exit(0)
    print("Init model")
    init_model()
    print("Infer image")
    infer_image("input.dat")
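To narrow the problem down, a minimal check script (just a sketch; check_encoding.py is a hypothetical name, not part of my project) can be run through the same my_shell_exec call to see which encoding the child Python process picks for its standard streams. When those streams are pipes rather than a console, Python on Windows typically falls back to the ANSI code page:

import sys

# Report the encodings Python chose for the standard streams.
# Run from a console this is usually utf-8; run through proc_open
# the streams are pipes, and the fallback is the ANSI code page
# (cp1252 here) unless UTF-8 mode or PYTHONIOENCODING overrides it.
print("stdin: ", sys.stdin.encoding)
print("stdout:", sys.stdout.encoding)
print("stderr:", sys.stderr.encoding)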
I don't know exactly what was going on, but applying this solution at the top of infer.py fixed my problem:
import sys

if __name__ == '__main__':
    sys.stdin.reconfigure(encoding='utf-8')
    sys.stdout.reconfigure(encoding='utf-8')
    # ...
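What is most likely happening: when infer.py is launched through proc_open, its standard streams are pipes rather than a console, so Python on Windows falls back to the ANSI code page (cp1252 in the traceback), and the progress bar printed by model.predict contains '━' box-drawing characters that cp1252 cannot encode. Reconfiguring the streams to UTF-8 removes that limitation. A slightly more defensive variant of the same idea (only a sketch; the errors='replace' fallback and the stderr handling are extra assumptions beyond the snippet above):

import sys

# Put all three standard streams on UTF-8 so that the Unicode
# progress bar can be written even when the streams are pipes.
# errors='replace' substitutes a placeholder for anything that
# still cannot be encoded instead of raising UnicodeEncodeError.
for stream in (sys.stdin, sys.stdout, sys.stderr):
    try:
        stream.reconfigure(encoding='utf-8', errors='replace')
    except AttributeError:
        # reconfigure() needs Python 3.7+ and a regular TextIOWrapper
        pass

Alternatively, the same effect should be achievable without touching infer.py by setting PYTHONIOENCODING=utf-8 (or PYTHONUTF8=1) in the environment that proc_open passes to the child process.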
Output:
int(0)
string(407) "Init model
Model already exists in keras-model/model-transventricular-v3.keras, skipping model init.
Infer image
(1, 256, 256, 1)
" keras-model\model-transventricular-v3.keras "
[1m1/1[0m [32m━━━━━━━━━━━━━━━━━━━━[0m[37m[0m [1m0s[0m 2s/step
[1m1/1[0m [32m━━━━━━━━━━━━━━━━━━━━[0m[37m[0m [1m2s[0m 2s/step
"
string(1601) "2024-06-09 23:19:26.963207: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-06-09 23:19:27.937896: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-06-09 23:19:29.885933: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1717949982.644210 22100 service.cc:145] XLA service 0x2097abd9cb0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1717949982.644533 22100 service.cc:153] StreamExecutor device (0): Host, Default Version
2024-06-09 23:19:42.697126: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
I0000 00:00:1717949983.623732 22100 device_compiler.h:188] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
"