I'm using VS Code over SSH to remotely develop and debug an OpenCV program on a Raspberry Pi 5. The program is supposed to capture the camera's video stream. It correctly detects that I have a camera, but when I try to display a frame, the Mat variable is empty. Here are the details:
Code:
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) {
        std::cerr << "Could not open or find the camera!" << std::endl;
        return -1;
    }
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    cap.set(cv::CAP_PROP_FPS, 25);
    cv::Mat frame;
    while (true) {
        cap >> frame;
        if (frame.empty()) {
            std::cerr << "Could not grab a frame!" << std::endl;
            break;
        }
        cv::imshow("Video Stream", frame);
        if (cv::waitKey(30) >= 0) {
            break;
        }
    }
    cap.release();
    cv::destroyAllWindows();
    return 0;
}
/*
The result of the code is:
Could not grab a frame!
[1] + Done "/usr/bin/gdb" --interpreter=mi --tty=${DbgTerm} 0<"/tmp/Microsoft-MIEngine-In-0kexp0y1.dep" 1>"/tmp/Microsoft-MIEngine-Out-31cvy4ts.g3d"
*/
Other information:
lytslwy@raspberrypi:~/Project/OpenCV_Pro $ libcamera-hello --list-cameras
Available cameras
-----------------
0 : ov5647 [2592x1944 10-bit GBRG] (/base/axi/pcie@120000/rp1/i2c@80000/ov5647@36)
    Modes: 'SGBRG10_CSI2P' : 640x480 [58.92 fps - (16, 0)/2560x1920 crop]
                             1296x972 [43.25 fps - (0, 0)/2592x1944 crop]
                             1920x1080 [30.62 fps - (348, 434)/1928x1080 crop]
By the way, I tried a few other tools to test the camera:
Like cheese:
(cheese:2840): GStreamer-CRITICAL **: 16:18:55.218: gst_value_set_int_range_step: assertion 'end % step == 0' failed
(cheese:2840): GStreamer-CRITICAL **: 16:18:55.218: gst_value_set_int_range_step: assertion 'end % step == 0' failed
(cheese:2840): cheese-WARNING **: 16:18:55.466: Description Failed to allocate the requested memory.: ../sys/v4l2/gstv4l2src.c(976): gst_v4l2src_decide_allocation (): /GstCameraBin:camerabin/GstWrapperCameraBinSrc:camera_source/GstBin:bin28/GstV4l2Src:v4l2src1:
Buffer pool activation failed
Like gst-launch:
lytslwy@raspberrypi:~/Project/OpenCV_Pro $ gst-launch-1.0 v4l2src ! videoconvert ! autovideosink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Failed to allocate required memory.
Additional debug info:
../sys/v4l2/gstv4l2src.c(976): gst_v4l2src_decide_allocation (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Buffer pool activation failed
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbasesrc.c(3132): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.095578868
Setting pipeline to NULL ...
Freeing pipeline ...
Like guvcview:
V4L2_CORE: (get_v4l2_frame) video stream must be started first
V4L2_CORE: (get_v4l2_frame) video stream must be started first
V4L2_CORE: (get_v4l2_frame) video stream must be started first
However, I can display what the camera captures with `libcamera-hello --qt-preview`. If you can tell what I'm doing wrong, whether it's the code, the camera, or anything else, please give me some advice.
I was in the same boat as you about four days ago. I found that using Picamera2 was the only way I could get it to work.
from picamera2 import Picamera2
from ultralytics import YOLO
import cv2
import numpy as np

# Load the YOLO model (the original post omitted this line;
# "yolov8n.pt" is an example, substitute your own weights)
model = YOLO("yolov8n.pt")

# Initialize the Raspberry Pi camera
picam2 = Picamera2()
config = picam2.create_preview_configuration(main={"size": (640, 480)})
picam2.configure(config)
picam2.start()

# Function to process frames with YOLO
def process_frame(frame):
    results = model(frame)
    return results[0].plot()  # frame with detections drawn on it

# Display video feed
try:
    print("Press 'q' to quit.")
    while True:
        # Capture frame from the Raspberry Pi camera
        frame = picam2.capture_array()
        frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)  # Convert to OpenCV's BGR format
        # Process frame with YOLO
        processed_frame = process_frame(frame)
        # Show the video feed
        cv2.imshow("Video Feed", processed_frame)
        # Exit if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    picam2.stop()
    cv2.destroyAllWindows()
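If you'd rather keep using `cv2.VideoCapture`, a commonly suggested alternative is to feed it a GStreamer pipeline built on `libcamerasrc`, since on the Pi 5 the camera is driven by the libcamera stack rather than a plain V4L2 capture device. This is a minimal sketch, assuming your OpenCV build has GStreamer support and the libcamera GStreamer plugin is installed; the helper function name is mine, not from any library:

```python
# Hypothetical helper: builds a GStreamer pipeline string that routes the
# libcamera stack into OpenCV. The caps (size, framerate, BGR output for
# OpenCV) are illustrative assumptions, not taken from the question.
def libcamera_pipeline(width=640, height=480, fps=30):
    return (
        f"libcamerasrc ! video/x-raw,width={width},height={height},"
        f"framerate={fps}/1 ! videoconvert ! video/x-raw,format=BGR "
        f"! appsink drop=true"
    )

# Usage on the Pi (requires an OpenCV build with GStreamer enabled):
#   cap = cv2.VideoCapture(libcamera_pipeline(fps=25), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```

You can sanity-check the pipeline string itself with `gst-launch-1.0 libcamerasrc ! videoconvert ! autovideosink` before handing it to OpenCV.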