I want to extract frames from an RTSP camera stream and store one frame per minute, stamped with the date and time, for 24 hours. How can I do this? My code:
imagesFolder = "C:/Users/<user>/documents"
cap = cv2.VideoCapture("rtsp://username:password@cameraIP/axis-media/media.amp")
frameRate = cap.get(5) #frame rate
count = 0
while cap.isOpened():
frameId = cap.get(1) # current frame number
ret, frame = cap.read()
if (ret != True):
break
if (frameId % math.floor(frameRate) == 0):
filename = imagesFolder + "/image_" + str(datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p")) + ".jpg"
cv2.imwrite(filename, frame)
cap.release()
print ("Done!")
cv2.destroyAllWindows()
You can simply wait 60 seconds between frame captures, and break the loop after 24*60 cycles.
I tried to test my code using a public RTSP stream, but I am getting black frames, so I cannot test my code.
Here is the code:
import cv2
import time
from datetime import datetime
import getpass

#imagesFolder = "C:/Users/<user>/documents"
# https://stackoverflow.com/questions/842059/is-there-a-portable-way-to-get-the-current-username-in-python
imagesFolder = "C:/Users/" + getpass.getuser() + "/documents"

#cap = cv2.VideoCapture("rtsp://username:password@cameraIP/axis-media/media.amp")
# Use a public RTSP stream for testing, but I am getting black frames!
cap = cv2.VideoCapture("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov")

frameRate = cap.get(cv2.CAP_PROP_FPS)  # frame rate
count = 0

while cap.isOpened():
    start_time = time.time()
    frameId = cap.get(cv2.CAP_PROP_POS_FRAMES)  # current frame number
    ret, frame = cap.read()
    if not ret:
        break
    filename = imagesFolder + "/image_" + datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p") + ".jpg"
    cv2.imwrite(filename, frame)
    # Show frame for testing
    cv2.imshow('frame', frame)
    cv2.waitKey(1)
    count += 1
    # Break the loop after 24*60 minutes
    if count > 24*60:
        break
    elapsed_time = time.time() - start_time
    # Wait for 60 seconds (subtract elapsed_time in order to be accurate).
    time.sleep(max(0, 60 - elapsed_time))

cap.release()
print("Done!")
cv2.destroyAllWindows()
The code sample above does not work - the first frame repeats every minute.
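A likely cause is that VideoCapture buffers incoming frames, so after a 60-second sleep, read() returns a stale frame from the buffer instead of a live one. One common workaround (a minimal sketch of mine, not the suggested solution below; same public test stream) is to keep draining the stream with grab(), which receives a frame without decoding it, and only decode the frame you actually save with retrieve():

import time
import cv2
from datetime import datetime

cap = cv2.VideoCapture("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov")
last_save = time.time()

while cap.isOpened():
    # grab() pulls the next frame off the stream without decoding it,
    # so stale frames never pile up in the internal buffer.
    if not cap.grab():
        break
    if time.time() - last_save >= 60:
        last_save = time.time()
        ret, frame = cap.retrieve()  # Decode only the frame we keep.
        if ret:
            cv2.imwrite("image_" + datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p") + ".jpg", frame)

cap.release()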
Suggested solution:
Here is the updated code (reading from the public RTSP stream):
import cv2
import time
from datetime import datetime
import getpass

imagesFolder = "C:/Users/" + getpass.getuser() + "/documents"

#cap = cv2.VideoCapture("rtsp://username:password@cameraIP/axis-media/media.amp")
# Use a public RTSP stream for testing:
cap = cv2.VideoCapture("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov")
#cap = cv2.VideoCapture("test2.mp4")

frameRate = cap.get(cv2.CAP_PROP_FPS)  # frame rate

cur_time = time.time()  # Get current time

# start_time_24h measures 24 hours
start_time_24h = cur_time

# start_time_1min measures 1 minute
start_time_1min = cur_time - 59  # Subtract 59 seconds, so the first frame is grabbed after one second (instead of waiting a minute for the first frame).

while cap.isOpened():
    frameId = cap.get(cv2.CAP_PROP_POS_FRAMES)  # current frame number
    ret, frame = cap.read()
    if not ret:
        break

    cur_time = time.time()  # Get current time
    elapsed_time_1min = cur_time - start_time_1min  # Time elapsed since the previous image was saved.

    # If 60 seconds have passed, reset the timer and store the image.
    if elapsed_time_1min >= 60:
        # Reset the timer that is used for measuring 60 seconds
        start_time_1min = cur_time

        filename = imagesFolder + "/image_" + datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p") + ".jpg"
        #filename = "image_" + datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p") + ".jpg"
        cv2.imwrite(filename, frame)

        # Show frame for testing
        cv2.imshow('frame', frame)
        cv2.waitKey(1)

    elapsed_time_24h = time.time() - start_time_24h

    # Break the loop after 24*60*60 seconds
    if elapsed_time_24h > 24*60*60:
        break

    #time.sleep(60 - elapsed_time)  # Sleeping is a bad idea - we need to grab all the frames.

cap.release()
print("Done!")
cv2.destroyAllWindows()
Now the images from the public RTSP stream look fine:
You may try capturing the video stream using FFmpeg (instead of OpenCV).
Read the following blog: Read and Write Video Frames in Python Using FFMPEG
If you are using the Windows OS, download the latest stable 64-bit static build from here (currently 4.2.2).
Extract the zip file, and place ffmpeg.exe in the same folder as your Python script.
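Before running the capture script, it may help to verify that the ffmpeg executable is actually reachable. A minimal check (my own addition, using the standard subprocess module):

import subprocess as sp

# Ask FFmpeg for its version string; a FileNotFoundError here means
# ffmpeg.exe is neither next to the script nor on the PATH.
try:
    out = sp.run(["ffmpeg", "-version"], capture_output=True, text=True, check=True)
    print(out.stdout.splitlines()[0])
except FileNotFoundError:
    print("ffmpeg not found - place ffmpeg.exe next to this script or add it to the PATH.")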
Here is the code (capturing with FFmpeg as a sub-process, using stdout as an output PIPE):
import cv2
import time
from datetime import datetime
import getpass
import numpy as np
import subprocess as sp

imagesFolder = "C:/Users/" + getpass.getuser() + "/documents"

#in_stream = "rtsp://username:password@cameraIP/axis-media/media.amp"
#in_stream = "rtsp://xxx.xxx.xxx.xxx:xxx/Streaming/Channels/101?transportmode=multicast"
# Use a public RTSP stream for testing:
in_stream = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"

cap = cv2.VideoCapture(in_stream)
#cap = cv2.VideoCapture("test2.mp4")

frameRate = cap.get(cv2.CAP_PROP_FPS)  # frame rate

# Get the resolution of the input video
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Release the VideoCapture - it was used just for getting the video resolution
cap.release()

# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
FFMPEG_BIN = "ffmpeg"  # on Linux and Mac OS (also works on Windows when ffmpeg.exe is in the path)
#FFMPEG_BIN = "ffmpeg.exe"  # on Windows

command = [FFMPEG_BIN,
           '-i', in_stream,
           '-f', 'image2pipe',
           '-pix_fmt', 'bgr24',
           '-vcodec', 'rawvideo', '-an', '-']

# Open a sub-process that gets in_stream as input and uses stdout as an output PIPE.
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

cur_time = time.time()  # Get current time

# start_time_24h measures 24 hours
start_time_24h = cur_time

# start_time_1min measures 1 minute
start_time_1min = cur_time - 30  # Subtract 30 seconds, so the first frame is grabbed after 30 seconds (instead of waiting a minute for the first frame).

while True:
    # Read width*height*3 bytes from stdout (= 1 frame)
    raw_frame = pipe.stdout.read(width*height*3)

    if len(raw_frame) != (width*height*3):
        print('Error reading frame!!!')  # Break the loop in case of an error (too few bytes were read).
        break

    cur_time = time.time()  # Get current time
    elapsed_time_1min = cur_time - start_time_1min  # Time elapsed since the previous image was saved.

    # If 60 seconds have passed, reset the timer and store the image.
    if elapsed_time_1min >= 60:
        # Reset the timer that is used for measuring 60 seconds
        start_time_1min = cur_time

        # Transform the bytes read into a numpy array, and reshape it to the video frame dimensions
        frame = np.frombuffer(raw_frame, np.uint8)
        frame = frame.reshape((height, width, 3))

        filename = imagesFolder + "/image_" + datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p") + ".jpg"
        cv2.imwrite(filename, frame)

        # Show frame for testing
        cv2.imshow('frame', frame)
        cv2.waitKey(1)

    elapsed_time_24h = time.time() - start_time_24h

    # Break the loop after 24*60*60 seconds
    if elapsed_time_24h > 24*60*60:
        break

print("Done!")
pipe.kill()  # Kill the sub-process after 24 hours
cv2.destroyAllWindows()
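As a side note, pipe.kill() ends FFmpeg abruptly. A slightly gentler shutdown (a small sketch of mine, reusing the pipe and sp names from the code above) is to close the pipe first and then wait for the process to exit:

# Cleaner shutdown: stop reading, signal FFmpeg to quit, then reap the process.
pipe.stdout.close()
pipe.terminate()
try:
    pipe.wait(timeout=10)
except sp.TimeoutExpired:
    pipe.kill()  # Fall back to a hard kill if FFmpeg does not exit in time.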
I tried using the apscheduler API but without success. Maybe someone can look at this problem differently and make it work.
import cv2
import math
import getpass
from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

imagesFolder = "C:/Users/" + getpass.getuser() + "/documents"
cap = cv2.VideoCapture("rtsp://username:password@CameraIP/axis-media/media.amp")
frameRate = cap.get(cv2.CAP_PROP_FPS)  # frame rate
count = 0

def some_job():
    while cap.isOpened():
        frameId = cap.get(cv2.CAP_PROP_POS_FRAMES)
        ret, frame = cap.read()
        if not ret:
            break
        if frameId % math.floor(frameRate) == 0:
            filename = imagesFolder + "/image_" + datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p") + ".jpg"
            cv2.imwrite(filename, frame)

scheduler = BlockingScheduler()
scheduler.add_job(some_job, 'interval', seconds=60, start_date='2020-04-07 16:23:00', end_date='2020-04-08 16:23:00')
scheduler.start()  # Blocks until the scheduler is shut down

cap.release()
print("Done!")

# Closes all the frames
cv2.destroyAllWindows()
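For what it is worth, here is one different way to structure the apscheduler version (an untested sketch of mine; the camera URL is a placeholder): a background thread keeps reading every frame, so the buffer never goes stale, and the scheduled job only saves the most recent frame once a minute:

import threading
import cv2
from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

cap = cv2.VideoCapture("rtsp://username:password@CameraIP/axis-media/media.amp")  # placeholder URL
latest = {"frame": None}
lock = threading.Lock()

def reader():
    # Continuously drain the stream so read() never returns stale buffered frames.
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        with lock:
            latest["frame"] = frame

def save_job():
    # Runs once a minute: save whatever frame the reader grabbed last.
    with lock:
        frame = latest["frame"]
    if frame is not None:
        cv2.imwrite("image_" + datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p") + ".jpg", frame)

threading.Thread(target=reader, daemon=True).start()
scheduler = BlockingScheduler()
scheduler.add_job(save_job, 'interval', seconds=60)
scheduler.start()  # Blocks; stop with Ctrl+C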
This works better than the other solutions. The only challenge here was that the photos stopped saving after the first 3 minutes (the first 3 photos were captured within a few seconds of each other, but were then saved about a minute apart). The solution now needs to make sure it keeps saving one image every minute, for up to 24 hours, before stopping.
import cv2
import time
import getpass
import numpy as np
import subprocess as sp
from datetime import datetime

imagesFolder = "C:/Users/" + getpass.getuser() + "/documents"

# RTSP stream:
in_stream = "rtsp://username:password@cameraIP/axis-media/media.amp"

cap = cv2.VideoCapture(in_stream)
frameRate = cap.get(cv2.CAP_PROP_FPS)  # frame rate

# Get the resolution of the input video
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Release the VideoCapture - it was used just for getting the video resolution
cap.release()

FFMPEG_BIN = "ffmpeg.exe"  # on Windows

# Suspecting the camera supports the TCP protocol, hence added: '-rtsp_transport', 'tcp'
command = [FFMPEG_BIN,
           '-rtsp_transport', 'tcp',
           '-i', in_stream,
           '-f', 'image2pipe',
           '-pix_fmt', 'bgr24',
           '-vcodec', 'rawvideo', '-an', '-']

# Open a sub-process that gets in_stream as input and uses stdout as an output PIPE.
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

cur_time = time.time()  # Get current time

# start_time_24h measures 24 hours
start_time_24h = cur_time

# start_time_1min measures 1 minute
start_time_1min = cur_time - 30  # Subtract 30 seconds, so the first frame is grabbed after 30 seconds (instead of waiting a minute for the first frame).

while True:
    # Read width*height*3 bytes from stdout (= 1 frame)
    raw_frame = pipe.stdout.read(width*height*3)

    if len(raw_frame) != (width*height*3):
        print('Error reading frame!!!')  # Break the loop in case of an error (too few bytes were read).
        break

    cur_time = time.time()  # Get current time
    elapsed_time_1min = cur_time - start_time_1min  # Time elapsed since the previous image was saved.

    # If 60 seconds have passed, reset the timer and store the image.
    if elapsed_time_1min >= 60:
        # Reset the timer that is used for measuring 60 seconds
        start_time_1min = cur_time

        # Transform the bytes read into a numpy array, and reshape it to the video frame dimensions
        frame = np.frombuffer(raw_frame, np.uint8)
        frame = frame.reshape((height, width, 3))

        filename = imagesFolder + "/image_" + datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p") + ".jpg"
        cv2.imwrite(filename, frame)

        # Show frame for testing
        cv2.imshow('frame', frame)
        cv2.waitKey(1)

    elapsed_time_24h = time.time() - start_time_24h

    # Break the loop after 24*60*60 seconds
    if elapsed_time_24h > 24*60*60:
        break

print("Done!")
pipe.kill()  # Kill the sub-process after 24 hours
cv2.destroyAllWindows()
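Since the grab has to survive a full 24 hours, it may also be worth restarting the FFmpeg sub-process on a short read instead of breaking out of the loop. This is my own suggestion, not part of the solution above; it reuses the command, width and height variables defined there:

import time
import subprocess as sp

def open_pipe():
    # Re-create the FFmpeg sub-process ('command' is the list defined above).
    return sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

pipe = open_pipe()
start_time_24h = time.time()

while time.time() - start_time_24h <= 24*60*60:
    raw_frame = pipe.stdout.read(width*height*3)
    if len(raw_frame) != width*height*3:
        # Short read - the stream probably dropped. Restart FFmpeg and keep going.
        pipe.kill()
        time.sleep(5)  # Brief back-off before reconnecting.
        pipe = open_pipe()
        continue
    # ... same one-minute saving logic as above ...

pipe.kill()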