Why does muxing to an MP4 file skip one of the frames I provide?


Background

In the past, based on here and here, I created and even shared a sample of how to create an MP4 file out of a series of bitmaps, and I also published the code on GitHub, here.

It seems to work fine for a single image, for example:

@WorkerThread
private fun testImage() {
    Log.d("AppLog", "testImage")
    val startTime = System.currentTimeMillis()
    Log.d("AppLog", "start")
    val videoFile = File(ContextCompat.getExternalFilesDirs(this, null)[0], "image.mp4")
    if (videoFile.exists())
        videoFile.delete()
    videoFile.parentFile!!.mkdirs()
    val timeLapseEncoder = TimeLapseEncoder()
    val bitmap = BitmapFactory.decodeResource(resources, R.drawable.test)
    val width = bitmap.width
    val height = bitmap.height
    timeLapseEncoder.prepareForEncoding(videoFile.absolutePath, width, height)
    val frameDurationInMs = 1000
    timeLapseEncoder.encodeFrame(bitmap, frameDurationInMs)
    timeLapseEncoder.finishEncoding()
    val endTime = System.currentTimeMillis()
    Log.d("AppLog", "it took ${endTime - startTime} ms to convert a single image ($width x $height) to mp4")
}

The problem

When I try to handle multiple frames, even just 2, I can see that it sometimes skips frames, making the video shorter as well.

For example, this scenario should produce 2 frames of 5 seconds each, but the output is 5 seconds instead of 10, and the entire second frame is ignored:

@WorkerThread
private fun testImages() {
    Log.d("AppLog", "testImages")
    val startTime = System.currentTimeMillis()
    Log.d("AppLog", "start")
    val videoFile = File(ContextCompat.getExternalFilesDirs(this, null)[0], "images.mp4")
    if (videoFile.exists())
        videoFile.delete()
    videoFile.parentFile!!.mkdirs()
//        Log.d("AppLog", "success creating parent?${videoFile.parentFile.exists()}")
    val timeLapseEncoder = TimeLapseEncoder()
    val bitmap = BitmapFactory.decodeResource(resources, R.drawable.frame1)
    val width = bitmap.width
    val height = bitmap.height
    timeLapseEncoder.prepareForEncoding(videoFile.absolutePath, width, height)
    val delay = 5000
    timeLapseEncoder.encodeFrame(bitmap, delay)
    val bitmap2 = BitmapFactory.decodeResource(resources, R.drawable.frame2)
    timeLapseEncoder.encodeFrame(bitmap2, delay)
    timeLapseEncoder.finishEncoding()
    val endTime = System.currentTimeMillis()
    Log.d("AppLog", "it took ${endTime - startTime} ms to convert a single image ($width x $height) to ${videoFile.absolutePath} ${videoFile.exists()} ${videoFile.length()}")
}

What I've tried

I tried inspecting the code and debugging it, but it looks fine...

Strangely, if I change the durations and add more frames, it seems fine: one such variation produced a 12-second video, in which the first 6 seconds show one image and the remaining 6 seconds show the other.

I also tried the equivalent of my original scenario, just split into more frames:

for (i in 0 until 500)
    timeLapseEncoder.encodeFrame(bitmap, 10)
val bitmap2 = BitmapFactory.decodeResource(resources, R.drawable.frame2)
for (i in 0 until 500)
    timeLapseEncoder.encodeFrame(bitmap2, 10)

This didn't give each image its 5 seconds (500 × 10 ms = 5 s) at all...

I thought it might be an FPS issue, but the code already sets it to a sane value of 30, which is reasonable and presumably above whatever minimum the MP4 format allows.
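
To make the expected timing concrete, here is a minimal sketch (these helpers are purely illustrative, not part of TimeLapseEncoder) of how per-frame durations should accumulate into presentation timestamps and a total length:

// Illustration only: the expected total length is the sum of the per-frame durations,
// and each frame's PTS should start where the previous frame's duration ended.
fun expectedTotalDurationMs(frameDurationsMs: List<Long>): Long =
    frameDurationsMs.sum()

fun expectedPresentationTimesUs(frameDurationsMs: List<Long>): List<Long> {
    var ptsUs = 0L
    return frameDurationsMs.map { durationMs ->
        val current = ptsUs
        ptsUs += durationMs * 1000 // advance by this frame's duration, in microseconds
        current
    }
}

// For the two 5000 ms frames above: PTS 0 and 5_000_000 µs, total 10 s.
// A 5-second output means the second frame's duration never reached the muxer.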

The questions

  1. Is there something wrong with how I'm using it? Why does it skip frames, making the video shorter than expected?

  2. Is there perhaps a better way to create an MP4 file out of images, where I can set each frame's duration individually? Preferably a solution that doesn't require a huge library and doesn't have a problematic license?

android mp4 video-encoding mediamuxer
1 Answer

I think I've found a way to solve this, using a more simplified approach for muxing into MP4:

class BitmapToVideoEncoder(outputPath: String?, width: Int, height: Int, bitRate: Int, frameRate: Int) {
    private var encoder: MediaCodec?
    private val inputSurface: Surface
    private var mediaMuxer: MediaMuxer?
    private var videoTrackIndex = 0
    private var isMuxerStarted: Boolean
    private var presentationTimeUs: Long

    init {
        val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height)
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
        format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate)
        format.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate)
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
        encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
        encoder!!.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        inputSurface = encoder!!.createInputSurface()
        encoder!!.start()
        mediaMuxer = MediaMuxer(outputPath!!, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
        isMuxerStarted = false
        presentationTimeUs = 0
    }

    @Throws(IOException::class)
    fun encodeFrame(bitmap: Bitmap, durationInMs: Long) {
        val frameDurationUs = durationInMs * 1000
        drawBitmapToSurface(bitmap)
        // Wait for a short period to ensure frame is submitted
        try {
            Thread.sleep(durationInMs / 10)
        } catch (e: InterruptedException) {
            e.printStackTrace()
        }
        drainEncoder(false)
        presentationTimeUs += frameDurationUs
    }

    @Throws(IOException::class)
    fun finishEncoding() {
        drainEncoder(true)
        release()
    }

    private fun drawBitmapToSurface(bitmap: Bitmap) {
        val canvas = inputSurface.lockCanvas(null)
        canvas.drawBitmap(bitmap, 0f, 0f, null)
        inputSurface.unlockCanvasAndPost(canvas)
    }

    @Throws(IOException::class)
    private fun drainEncoder(endOfStream: Boolean) {
        if (endOfStream) {
            // Signal end of stream to the encoder
            encoder!!.signalEndOfInputStream()
        }

        val bufferInfo = MediaCodec.BufferInfo()
        while (true) {
            val encoderStatus = encoder!!.dequeueOutputBuffer(bufferInfo, 10000)
            @Suppress("DEPRECATION")
            when {
                encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER -> {
                    if (!endOfStream) {
                        break
                    }
                }
                encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED -> {
                    //Output buffers changed
                }
                encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED -> {
                    if (isMuxerStarted) {
                        throw RuntimeException("format changed twice")
                    }
                    val newFormat = encoder!!.outputFormat
                    videoTrackIndex = mediaMuxer!!.addTrack(newFormat)
                    mediaMuxer!!.start()
                    isMuxerStarted = true
                }
                encoderStatus < 0 -> {
                    // Unexpected result from encoder; ignore and keep draining
                }
                else -> {
                    val encodedData = encoder!!.getOutputBuffer(encoderStatus)
                        ?: throw RuntimeException("encoderOutputBuffer $encoderStatus was null")
                    if (bufferInfo.size != 0) {
                        if (!isMuxerStarted) {
                            throw RuntimeException("muxer hasn't started")
                        }
                        // Adjust the bufferInfo to have the correct presentation time
                        bufferInfo.presentationTimeUs = presentationTimeUs
                        encodedData.position(bufferInfo.offset)
                        encodedData.limit(bufferInfo.offset + bufferInfo.size)
                        mediaMuxer!!.writeSampleData(videoTrackIndex, encodedData, bufferInfo)
                    }
                    encoder!!.releaseOutputBuffer(encoderStatus, false)
                    if ((bufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                        //End of stream reached
                        break
                    }
                }
            }
        }
    }

    private fun release() {
        if (encoder != null) {
            encoder!!.stop()
            encoder!!.release()
            encoder = null
        }
        if (mediaMuxer != null) {
            // stop() would throw if the muxer was never started (no frame was ever written)
            if (isMuxerStarted)
                mediaMuxer!!.stop()
            mediaMuxer!!.release()
            mediaMuxer = null
        }
    }

}
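
For reference, a minimal usage sketch of the class above (the bitrate and frame rate are placeholder values, and videoFile, bitmap and bitmap2 are assumed to exist as in the question's code):

val encoder = BitmapToVideoEncoder(videoFile.absolutePath, width, height, 2_000_000, 30)
encoder.encodeFrame(bitmap, 5000L)
encoder.encodeFrame(bitmap2, 5000L)
encoder.finishEncoding()

Note that encodeFrame takes the duration as a Long here, unlike the Int used with TimeLapseEncoder.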

This handles the case where all input bitmaps are opaque and have the same resolution as the constructor parameters. In addition, the input resolution should be one that the device can actually encode.

To overcome these restrictions, there are two options:

  1. Switch to WebM, which supports transparency, and always center the content.
  2. Set some background and always fit-center the content (see the sketch after this list).
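
For option 2, a rough sketch of what the fit-center step could look like before handing a bitmap to the encoder (this helper is hypothetical, not part of the class above):

// Hypothetical helper: draws src centered on an opaque background,
// scaled to fit the video frame while preserving aspect ratio.
fun fitCenterOnBackground(src: Bitmap, videoWidth: Int, videoHeight: Int, backgroundColor: Int): Bitmap {
    val result = Bitmap.createBitmap(videoWidth, videoHeight, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(result)
    canvas.drawColor(backgroundColor) // opaque background, so src transparency is flattened
    val scale = minOf(videoWidth.toFloat() / src.width, videoHeight.toFloat() / src.height)
    val dstWidth = src.width * scale
    val dstHeight = src.height * scale
    val left = (videoWidth - dstWidth) / 2f
    val top = (videoHeight - dstHeight) / 2f
    canvas.drawBitmap(src, null, RectF(left, top, left + dstWidth, top + dstHeight), null)
    return result
}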

As for which resolutions you can handle, you'd need to check what's supported, using something like this:

val codec = MediaCodec.createEncoderByType(mimeType)
val codecInfo = codec.codecInfo
val capabilities = codecInfo.getCapabilitiesForType(mimeType)
val videoCapabilities = capabilities.videoCapabilities
codec.release()

I didn't add this here because it gets more complicated. I might add it to the repository at some point.
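
Still, a hedged sketch of how such a check could be wrapped (isSizeSupported and the codec-info accessors are standard MediaCodecInfo.VideoCapabilities APIs):

// Sketch: ask the AVC encoder whether it supports a given resolution.
fun isResolutionSupported(width: Int, height: Int): Boolean {
    val mimeType = MediaFormat.MIMETYPE_VIDEO_AVC
    val codec = MediaCodec.createEncoderByType(mimeType)
    return try {
        codec.codecInfo
            .getCapabilitiesForType(mimeType)
            .videoCapabilities
            .isSizeSupported(width, height)
    } finally {
        codec.release()
    }
}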

From my testing, this should work fine.
