I'm building a video-analysis application in Rust using GStreamer, where the user can choose a processing scale (e.g. analyze every frame, every 2nd frame, every 10th frame, and so on). I tried to implement frame dropping with the videorate element, but it only seems to affect the reported progress, not the actual frame processing.
For example, with a 2-minute, 30 fps video (~3600 frames):

With 10x selected (process every 10th frame), the progress shows "360/360 frames". When it reaches "360/360" it reports completion, yet processing continues in the background. The actual processing time is the same as when processing every frame, so the backend apparently still decodes and processes all 3600 frames. This is my current pipeline setup:
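As a side note on the numbers above, the expected-frame arithmetic can be checked in isolation. A minimal sketch mirroring the same calculation as in the pipeline code below (the free-standing function name is hypothetical):

```rust
// Mirrors the target-fps / estimated-frame calculation from the pipeline setup.
// A `processing_scale` of 0.1 means "process every 10th frame".
fn estimated_frames(input_fps: f64, processing_scale: f64, duration_secs: f64) -> u64 {
    let target_fps = (input_fps * processing_scale).max(1.0);
    ((target_fps * duration_secs).floor() as u64).max(1)
}

fn main() {
    // 2-minute 30 fps clip, every 10th frame: 30 * 0.1 * 120 = 360
    let frames = estimated_frames(30.0, 0.1, 120.0);
    println!("{frames}"); // 360
}
```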
fn create_single_pipeline(
    path: &PathBuf,
    data_tx: &Sender<FrameCmd>,
    video_index: usize,
    orientation: Orientation,
    is_muxed: bool,
    processing_scale: f32, // fraction of input frames to process (e.g. 0.1 = every 10th frame)
) -> Result<gst::Pipeline> {
    let file_info = MediaInfo::from(path)?;
    let duration = file_info.duration;
    let video = file_info
        .video
        .first()
        .ok_or(anyhow!("No video streams found"))?;
    let input_fps = f64::from(video.framerate.numer()) / f64::from(video.framerate.denom());
    let target_fps = (input_fps * processing_scale as f64).max(1.0);
    let video_estimated_total_frames: u64 =
        ((target_fps * duration.as_secs_f64()).floor() as u64).max(1);

    let pipeline = gst::Pipeline::new();
    let src = gst::ElementFactory::make("filesrc")
        .property("location", path.to_str().ok_or(anyhow!("Invalid path"))?)
        .build()?;
    let decodebin = gst::ElementFactory::make("decodebin").build()?;
    pipeline.add_many([&src, &decodebin])?;
    gst::Element::link_many([&src, &decodebin])?;

    let queue = gst::ElementFactory::make("queue")
        .property("max-size-buffers", 2u32)
        .property("max-size-bytes", 0u32)
        .property("max-size-time", 0u64)
        .build()?;
    let videorate = gst::ElementFactory::make("videorate")
        .property("drop-only", true)
        .property("max-rate", target_fps.ceil() as i32)
        .build()?;
    let rate_caps = gst::Caps::builder("video/x-raw")
        .field(
            "framerate",
            gst::Fraction::new((target_fps * 1000.0) as i32, 1000),
        )
        .build();
    let rate_filter = gst::ElementFactory::make("capsfilter")
        .property("caps", &rate_caps)
        .build()?;
    let convert = gst::ElementFactory::make("videoconvert").build()?;
    let scale = gst::ElementFactory::make("videoscale").build()?;
    pipeline.add_many([&queue, &videorate, &rate_filter, &convert, &scale])?;
    gst::Element::link_many([&queue, &videorate, &rate_filter, &convert, &scale])?;
And this is how I handle the video sink:
fn setup_video_sink(
    sink: gst_app::AppSink,
    width: u32,
    height: u32,
    estimated_total_frames: u64,
    side: VideoFrameSide,
    video_index: usize,
    data_tx: Sender<FrameCmd>,
) -> Result<(), Error> {
    sink.set_callbacks(
        gst_app::AppSinkCallbacks::builder()
            .new_sample(move |appsink| {
                let sample = appsink.pull_sample().map_err(|_| gst::FlowError::Eos)?;
                let buffer = sample.buffer().ok_or_else(|| {
                    element_error!(
                        appsink,
                        gst::ResourceError::Failed,
                        ("Failed to get buffer from appsink")
                    );
                    gst::FlowError::Error
                })?;
                let map = buffer.map_readable().map_err(|_| gst::FlowError::Error)?;
                let samples = map.as_slice();
                let frame = match side {
                    VideoFrameSide::Left => VideoFrameData::left(
                        width,
                        height,
                        estimated_total_frames,
                        samples,
                        video_index,
                    ),
                    VideoFrameSide::Right => VideoFrameData::right(
                        width,
                        height,
                        estimated_total_frames,
                        samples,
                        video_index,
                    ),
                }?;
                data_tx
                    .send(FrameCmd::video_frame(frame))
                    .map_err(|_| gst::FlowError::Error)?;
                Ok(gst::FlowSuccess::Ok)
            })
            .build(),
    );
    Ok(())
}
Is my use of videorate for frame dropping correct? Despite the rate cap, the pipeline still seems to process all frames.
Should I consider a different approach, or different elements, to achieve actual frame dropping at decode time?
Is there something specific about using AppSink that might bypass the frame dropping?
Environment:
GStreamer version: latest
OS: macOS
Rust GStreamer bindings version: latest

Any guidance would be appreciated.
First, check outside your application whether your setup works as expected.

For example:
% gst-launch-1.0 -v videotestsrc num-buffers=100 ! video/x-raw,framerate=30/1 ! videorate max-rate=3 ! identity silent=false ! fakesink
[..]
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:00.000000000, duration: 0:00:00.333333333, offset: 0, offset_end: 1, flags: 00000040 discont , meta: GstParentBufferMeta) 0x156f15ec0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:00.333333333, duration: 0:00:00.333333333, offset: 1, offset_end: 2, flags: 00000000 , meta: GstParentBufferMeta) 0x146f0c5e0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:00.666666666, duration: 0:00:00.333333334, offset: 2, offset_end: 3, flags: 00000000 , meta: GstParentBufferMeta) 0x156f162a0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:01.000000000, duration: 0:00:00.333333333, offset: 3, offset_end: 4, flags: 00000000 , meta: GstParentBufferMeta) 0x156f15ec0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:01.333333333, duration: 0:00:00.333333333, offset: 4, offset_end: 5, flags: 00000000 , meta: GstParentBufferMeta) 0x156f162a0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:01.666666666, duration: 0:00:00.333333334, offset: 5, offset_end: 6, flags: 00000000 , meta: GstParentBufferMeta) 0x156f15ec0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:02.000000000, duration: 0:00:00.333333333, offset: 6, offset_end: 7, flags: 00000000 , meta: GstParentBufferMeta) 0x156f162a0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:02.333333333, duration: 0:00:00.333333333, offset: 7, offset_end: 8, flags: 00000000 , meta: GstParentBufferMeta) 0x156f15ec0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:02.666666666, duration: 0:00:00.333333334, offset: 8, offset_end: 9, flags: 00000000 , meta: GstParentBufferMeta) 0x156f162a0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:03.000000000, duration: 0:00:00.333333333, offset: 9, offset_end: 10, flags: 00000000 , meta: GstParentBufferMeta) 0x156f15ec0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (614400 bytes, dts: none, pts: 0:00:03.333333333, duration: 0:00:00.333333333, offset: 10, offset_end: 11, flags: 00000000 , meta: GstParentBufferMeta) 0x156f162a0
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = event ******* (identity0:sink) E (type: eos (28174), ) 0x60000347a290
Getting roughly 10 frames out of the 100 is just what you would expect.
Alternatively, you can install a PadProbe on one of the pads in the pipeline and decide yourself when to drop frames.
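For illustration, the decimation decision inside such a probe might look like the sketch below. The `EveryNth` helper is hypothetical (not part of gstreamer-rs), and the commented-out probe installation assumes the gstreamer-rs `Pad::add_probe` API:

```rust
/// Hypothetical helper: keep every `n`-th buffer, drop the rest.
struct EveryNth {
    n: u64,
    count: u64,
}

impl EveryNth {
    fn new(n: u64) -> Self {
        Self { n: n.max(1), count: 0 }
    }

    /// Returns true if the current buffer should be kept.
    fn keep(&mut self) -> bool {
        let keep = self.count % self.n == 0;
        self.count += 1;
        keep
    }
}

// Inside a pad probe it could be used roughly like this (sketch, untested;
// `add_probe` callbacks must be Fn + Send + Sync, hence the atomic counter):
//
// let counter = std::sync::atomic::AtomicU64::new(0);
// pad.add_probe(gst::PadProbeType::BUFFER, move |_pad, _info| {
//     let i = counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
//     if i % 10 == 0 {
//         gst::PadProbeReturn::Ok
//     } else {
//         gst::PadProbeReturn::Drop // buffer never reaches downstream
//     }
// });

fn main() {
    let mut dec = EveryNth::new(10);
    let kept = (0..100).filter(|_| dec.keep()).count();
    println!("kept {kept} of 100"); // kept 10 of 100
}
```

Note that dropping via a probe on the decoder's src pad saves the downstream conversion and analysis work, but not the decoding itself.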
Note that the decoder will still process the full stream: unless your use case involves a very particular codec that has no temporal dependency on preceding frames, the decoder cannot simply skip them.