How to sync AVPlayer and MTKView

Question · 2 votes · 2 answers

I have a project where the user can record a video and later add filters to it or change basic settings such as brightness and contrast. For this I use BBMetalImage, which basically returns the video in an MTKView (named BBMetalView in the project).

Everything works great: I can play the video and add the filters and effects I want, but there is no audio. I asked the author about this, and he recommended using an AVPlayer (or AVAudioPlayer) for the audio. So I did. However, the video and the audio are out of sync. Probably because of different bitrates in the first place, and the author of the library also mentioned that the frame rate can differ because of the filtering process (the time this takes is variable):

"The FPS of the render view is not exactly the same as the actual rate, because the video source output frames are processed by the filters, and the filters' processing time is variable."

First, I crop my video to the desired aspect ratio (4:5). I save this file (480x600) locally, using AVVideoProfileLevelH264HighAutoLevel as the AVVideoProfileLevelKey. The audio configuration, using NextLevelSessionExporter, has the following settings: AVEncoderBitRateKey: 128000, AVNumberOfChannelsKey: 2, AVSampleRateKey: 44100.
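
Concretely, the settings correspond roughly to the following dictionaries (a sketch of the configuration described above; the audio format key is my assumption, and the wiring into NextLevelSessionExporter is omitted):

import AVFoundation

// Video output settings: 480x600, H.264, High AutoLevel profile
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 480,
    AVVideoHeightKey: 600,
    AVVideoCompressionPropertiesKey: [
        AVVideoProfileLevelKey: AVVideoProfileLevelH264HighAutoLevel
    ]
]

// Audio output settings (AAC is assumed here)
let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVEncoderBitRateKey: 128000,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100
]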

The BBMetalImage library then takes this saved file and provides an MTKView (BBMetalView) to display the video, allowing me to add filters and effects in real time. The setup looks like this:

self.metalView = BBMetalView(frame: CGRect(x: 0, y: self.view.center.y - ((UIScreen.main.bounds.width * 1.25) / 2), width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.width * 1.25))
self.view.addSubview(self.metalView)
self.videoSource = BBMetalVideoSource(url: outputURL)
self.videoSource.playWithVideoRate = true
self.videoSource.audioConsumer = self.metalAudio
self.videoSource.add(consumer: self.metalView)
self.videoSource.add(consumer: self.videoWriter)
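// The AVPlayer below is only used for the audio track; its layer stays zero-sized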
self.audioItem = AVPlayerItem(url: outputURL)                            
self.audioPlayer = AVPlayer(playerItem: self.audioItem)
self.playerLayer = AVPlayerLayer(player: self.audioPlayer)
self.videoPreview.layer.addSublayer(self.playerLayer!)
self.playerLayer?.frame = CGRect(x: 0, y: 0, width: 0, height: 0)
self.playerLayer?.backgroundColor = UIColor.black.cgColor
self.startVideo()

startVideo() looks like this:

audioPlayer.seek(to: .zero)
audioPlayer.play()
videoSource.start(progress: { (frameTime) in
    print(frameTime)
}) { [weak self] (finish) in
    guard let self = self else { return }
    self.startVideo()
}
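
For illustration, the kind of drift correction I have in mind would look roughly like this (assuming frameTime in the progress callback is the CMTime of the frame just rendered):

videoSource.start(progress: { [weak self] (frameTime) in
    guard let self = self else { return }
    // Compare the video clock (frameTime) with the audio player's clock
    let videoSeconds = CMTimeGetSeconds(frameTime)
    let audioSeconds = CMTimeGetSeconds(self.audioPlayer.currentTime())
    let drift = audioSeconds - videoSeconds

    // If the audio has drifted more than ~100 ms from the video,
    // snap it back to the video clock
    if abs(drift) > 0.1 {
        self.audioPlayer.seek(to: frameTime,
                              toleranceBefore: .zero,
                              toleranceAfter: .zero)
    }
}) { [weak self] (finish) in
    guard let self = self else { return }
    self.startVideo()
}

If seeking on every drift turns out to be too choppy, a gentler variant could nudge audioPlayer.rate up or down slightly instead of seeking.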

This might all be quite vague because of the external libraries involved. However, my question is simple: is there any way to sync the MTKView and the AVPlayer? It would help me a lot, and I believe Silence-GitHub would also implement this feature in the library to help many other users. Any ideas on how to approach this are welcome!

ios swift synchronization avplayer metal
2 Answers

0 votes

Given your situation, it seems you need to try one of two approaches:

1) Try applying some kind of overlay that produces the desired effect on your video. I could try putting something like that together, but I personally haven't done it.

2) Spend more time up front, in the sense that the program will take a while (how long depends on your filters) to re-render a new video with the desired effects baked in. You can try this and see if it works for you.

I made my own VideoCreator based on source code from somewhere on SO.

//Recreates a new video with applied filter
    public static func createFilteredVideo(asset: AVAsset, completionHandler: @escaping (_ asset: AVAsset) -> Void) {
        guard let url = (asset as? AVURLAsset)?.url,
              let image = url.videoSnapshot() else { return }
        let fps = Int32(asset.tracks(withMediaType: .video)[0].nominalFrameRate)
        let writer = VideoCreator(fps: Int32(fps), width: image.size.width, height: image.size.height, audioSettings: nil)

        let timeScale = asset.duration.timescale
        let timeValue = asset.duration.value
        let frameTime = 1/Double(fps) * Double(timeScale)
        let numberOfImages = Int(Double(timeValue)/Double(frameTime))
        let queue = DispatchQueue(label: "com.queue.queue", qos: .utility)
        let composition = AVVideoComposition(asset: asset) { (request) in
            let source = request.sourceImage.clampedToExtent()
            //This is where you create your filter and get your filtered result. 
            //Here is an example
            // NOTE: maskImage is a placeholder you must supply yourself
            let filter = CIFilter(name: "CIBlendWithMask")!
            filter.setValue(maskImage, forKey: "inputMaskImage")
            filter.setValue(source, forKey: "inputImage")
            let filteredImage = filter.outputImage!.cropped(to: request.sourceImage.extent)
            request.finish(with: filteredImage, context: nil)
        }

        var i = 0
        getAudioFromURL(url: url) { (buffer) in
            writer.addAudio(audio: buffer, time: .zero)
            if i == 0 { writer.startCreatingVideo(initialBuffer: buffer, completion: {}) }
            i += 1
        }

        let group = DispatchGroup()
        for i in 0..<numberOfImages {
            group.enter()
            autoreleasepool {
                let time = CMTime(seconds: Double(Double(i) * frameTime / Double(timeScale)), preferredTimescale: timeScale)
                let image = url.videoSnapshot(time: time, composition: composition)
                queue.async {

                    writer.addImageAndAudio(image: image!, audio: nil, time: time.seconds)
                    group.leave()
                }
            }
        }
        group.notify(queue: queue) {
            writer.finishWriting()
            let url = writer.getURL()

            //Now create exporter to add audio then do completion handler
            completionHandler(AVAsset(url: url))

        }
    }

    static func getAudioFromURL(url: URL, completionHandlerPerBuffer: @escaping ((_ buffer:CMSampleBuffer) -> Void)) {
        let asset = AVURLAsset(url: url, options: [AVURLAssetPreferPreciseDurationAndTimingKey: NSNumber(value: true as Bool)])

        guard let assetTrack = asset.tracks(withMediaType: AVMediaType.audio).first else {
            fatalError("Couldn't load AVAssetTrack")
        }


        guard let reader = try? AVAssetReader(asset: asset)
            else {
                fatalError("Couldn't initialize the AVAssetReader")
        }
        reader.timeRange = CMTimeRange(start: .zero, duration: asset.duration)

        let outputSettingsDict: [String : Any] = [
            AVFormatIDKey: Int(kAudioFormatLinearPCM),
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsNonInterleaved: false
        ]
        let readerOutput = AVAssetReaderTrackOutput(track: assetTrack,
                                                    outputSettings: outputSettingsDict)
        readerOutput.alwaysCopiesSampleData = false
        reader.add(readerOutput)
        reader.startReading()

        while reader.status == .reading {
            guard let readSampleBuffer = readerOutput.copyNextSampleBuffer() else { break }
            completionHandlerPerBuffer(readSampleBuffer)

        }
    }

extension URL {
    func videoSnapshot(time:CMTime? = nil, composition:AVVideoComposition? = nil) -> UIImage? {
        let asset = AVURLAsset(url: self)
        let generator = AVAssetImageGenerator(asset: asset)
        generator.appliesPreferredTrackTransform = true
        generator.requestedTimeToleranceBefore = .zero
        generator.requestedTimeToleranceAfter = .zero
        generator.videoComposition = composition

        let timestamp = time ?? CMTime(seconds: 1, preferredTimescale: 60)

        do {
            let imageRef = try generator.copyCGImage(at: timestamp, actualTime: nil)
            return UIImage(cgImage: imageRef)
        }
        catch let error as NSError
        {
            print("Image generation failed with error \(error)")
            return nil
        }
    }
}
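
To give an idea of the call site, here is a minimal usage sketch (it assumes createFilteredVideo lives on the VideoCreator type shown below, that outputURL is your saved video from the question, and that you run this from a view controller):

let originalAsset = AVAsset(url: outputURL)
VideoCreator.createFilteredVideo(asset: originalAsset) { filteredAsset in
    DispatchQueue.main.async {
        // Play the re-encoded result; once the audio has been merged in,
        // both tracks live in one file, so they stay in sync.
        let item = AVPlayerItem(asset: filteredAsset)
        let player = AVPlayer(playerItem: item)
        let layer = AVPlayerLayer(player: player)
        layer.frame = self.view.bounds
        self.view.layer.addSublayer(layer)
        player.play()
    }
}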

Below is the VideoCreator:

//
//  VideoCreator.swift
//  AKPickerView-Swift
//
//  Created by Impression7vx on 7/16/19.
//

import AVFoundation
import UIKit
import Photos

@available(iOS 11.0, *)
public class VideoCreator: NSObject {

    private var settings:RenderSettings!
    private var imageAnimator:ImageAnimator!

    public override init() {
        self.settings = RenderSettings()
        self.imageAnimator = ImageAnimator(renderSettings: self.settings)
    }

    public convenience init(fps: Int32, width: CGFloat, height: CGFloat, audioSettings: [String:Any]?) {
        self.init()
        self.settings = RenderSettings(fps: fps, width: width, height: height)
        self.imageAnimator = ImageAnimator(renderSettings: self.settings, audioSettings: audioSettings)
    }

    public convenience init(width: CGFloat, height: CGFloat) {
        self.init()
        self.settings = RenderSettings(width: width, height: height)
        self.imageAnimator = ImageAnimator(renderSettings: self.settings)
    }

    func startCreatingVideo(initialBuffer: CMSampleBuffer?, completion: @escaping (() -> Void)) {
        self.imageAnimator.render(initialBuffer: initialBuffer) {
            completion()
        }
    }

    func finishWriting() {
        self.imageAnimator.isDone = true
    }

    func addImageAndAudio(image:UIImage, audio:CMSampleBuffer?, time:CFAbsoluteTime) {
        self.imageAnimator.addImageAndAudio(image: image, audio: audio, time: time)
    }

    func getURL() -> URL {
        return settings!.outputURL
    }

    func addAudio(audio: CMSampleBuffer, time: CMTime) {
        self.imageAnimator.videoWriter.addAudio(buffer: audio, time: time)
    }
}


@available(iOS 11.0, *)
public struct RenderSettings {

    var width: CGFloat = 1280
    var height: CGFloat = 720
    var fps: Int32 = 2   // 2 frames per second
    var avCodecKey = AVVideoCodecType.h264
    var videoFilename = "video"
    var videoFilenameExt = "mov"

    init() { }

    init(width: CGFloat, height: CGFloat) {
        self.width = width
        self.height = height
    }

    init(fps: Int32) {
        self.fps = fps
    }

    init(fps: Int32, width: CGFloat, height: CGFloat) {
        self.fps = fps
        self.width = width
        self.height = height
    }

    var size: CGSize {
        return CGSize(width: width, height: height)
    }

    var outputURL: URL {
        // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
        // Using the CachesDirectory ensures the file won't be included in a backup of the app.
        let fileManager = FileManager.default
        if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
            return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
        }
        fatalError("URLForDirectory() failed")
    }
}

@available(iOS 11.0, *)
public class ImageAnimator {

    // Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
    static let kTimescale: Int32 = 600

    let settings: RenderSettings
    let videoWriter: VideoWriter
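    // SynchronizedArray is a thread-safe array wrapper (custom type, not shown in this answer)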
    var imagesAndAudio:SynchronizedArray<(UIImage, CMSampleBuffer?, CFAbsoluteTime)> = SynchronizedArray<(UIImage, CMSampleBuffer?, CFAbsoluteTime)>()
    var isDone:Bool = false
    let semaphore = DispatchSemaphore(value: 1)

    var frameNum = 0

    class func removeFileAtURL(fileURL: URL) {
        do {
            try FileManager.default.removeItem(atPath: fileURL.path)
        }
        catch _ as NSError {
            // Assume file doesn't exist.
        }
    }

    init(renderSettings: RenderSettings, audioSettings:[String:Any]? = nil) {
        settings = renderSettings
        videoWriter = VideoWriter(renderSettings: settings, audioSettings: audioSettings)
    }

    func addImageAndAudio(image: UIImage, audio: CMSampleBuffer?, time:CFAbsoluteTime) {
        self.imagesAndAudio.append((image, audio, time))
//        print("Adding to array -- \(self.imagesAndAudio.count)")
    }

    func render(initialBuffer: CMSampleBuffer?, completion: @escaping ()->Void) {

        // The VideoWriter will fail if a file exists at the URL, so clear it out first.
        ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)

        videoWriter.start(initialBuffer: initialBuffer)
        videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
            //ImageAnimator.saveToLibrary(self.settings.outputURL)
            completion()
        }

    }

    // This is the callback function for VideoWriter.render()
    func appendPixelBuffers(writer: VideoWriter) -> Bool {

        //Don't stop while images are NOT empty
        while !imagesAndAudio.isEmpty || !isDone {

            if(!imagesAndAudio.isEmpty) {
                let date = Date()

                if writer.isReadyForVideoData == false {
                    // Inform writer we have more buffers to write.
//                    print("Writer is not ready for more data")
                    return false
                }

                autoreleasepool {
                    //This should help but truly doesn't suffice - still need a mutex/lock
                    if(!imagesAndAudio.isEmpty) {
                        semaphore.wait() // requesting resource
                        let imageAndAudio = imagesAndAudio.first()!
                        let image = imageAndAudio.0
//                        let audio = imageAndAudio.1
                        let time = imageAndAudio.2
                        self.imagesAndAudio.removeAtIndex(index: 0)
                        semaphore.signal() // releasing resource
                        let presentationTime = CMTime(seconds: time, preferredTimescale: 600)

//                        if(audio != nil) { videoWriter.addAudio(buffer: audio!) }
                        let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
                        if success == false {
                            fatalError("addImage() failed")
                        }
                        else {
//                            print("Added image @ frame \(frameNum) with presTime: \(presentationTime)")
                        }

                        frameNum += 1
                        let final = Date()
                        let timeDiff = final.timeIntervalSince(date)
//                        print("Time: \(timeDiff)")
                    }
                    else {
//                        print("Images was empty")
                    }
                }
            }
        }

        print("Done writing")
        // Inform writer all buffers have been written.
        return true
    }

}

@available(iOS 11.0, *)
public class VideoWriter {

    let renderSettings: RenderSettings
    var audioSettings: [String:Any]?
    var videoWriter: AVAssetWriter!
    var videoWriterInput: AVAssetWriterInput!
    var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
    var audioWriterInput: AVAssetWriterInput!
    static var ci:Int = 0
    var initialTime:CMTime!

    var isReadyForVideoData: Bool {
        return (videoWriterInput == nil ? false : videoWriterInput!.isReadyForMoreMediaData )
    }

    var isReadyForAudioData: Bool {
        return (audioWriterInput == nil ? false : audioWriterInput!.isReadyForMoreMediaData)
    }

    class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize, alpha:CGImageAlphaInfo) -> CVPixelBuffer? {

        var pixelBufferOut: CVPixelBuffer?

        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
        if status != kCVReturnSuccess {
            fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
        }

        let pixelBuffer = pixelBufferOut!

        CVPixelBufferLockBaseAddress(pixelBuffer, [])

        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: alpha.rawValue)

        context!.clear(CGRect(x: 0, y: 0, width: size.width, height: size.height))

        let horizontalRatio = size.width / image.size.width
        let verticalRatio = size.height / image.size.height
        //aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
        let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit

        let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)

        let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
        let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0

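        // convertCIImageToCGImage() is a custom CIImage helper extension (not shown here)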
        let cgImage = image.cgImage != nil ? image.cgImage! : image.ciImage!.convertCIImageToCGImage()

        context!.draw(cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))

        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
        return pixelBuffer
    }

    @available(iOS 11.0, *)
    init(renderSettings: RenderSettings, audioSettings:[String:Any]? = nil) {
        self.renderSettings = renderSettings
        self.audioSettings = audioSettings
    }

    func start(initialBuffer: CMSampleBuffer?) {

        let avOutputSettings: [String: AnyObject] = [
            AVVideoCodecKey: renderSettings.avCodecKey as AnyObject,
            AVVideoWidthKey: NSNumber(value: Float(renderSettings.width)),
            AVVideoHeightKey: NSNumber(value: Float(renderSettings.height))
        ]

        let avAudioSettings = audioSettings

        func createPixelBufferAdaptor() {
            let sourcePixelBufferAttributesDictionary = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
                kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.width)),
                kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.height))
            ]
            pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                      sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
        }

        func createAssetWriter(outputURL: URL) -> AVAssetWriter {
            guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mov) else {
                fatalError("AVAssetWriter() failed")
            }

            guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaType.video) else {
                fatalError("canApplyOutputSettings() failed")
            }

            return assetWriter
        }

        videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
        videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: avOutputSettings)
//        if(audioSettings != nil) {
        audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
        audioWriterInput.expectsMediaDataInRealTime = true
//        }

        if videoWriter.canAdd(videoWriterInput) {
            videoWriter.add(videoWriterInput)
        }
        else {
            fatalError("canAddInput() returned false")
        }

//        if(audioSettings != nil) {
            if videoWriter.canAdd(audioWriterInput) {
                videoWriter.add(audioWriterInput)
            }
            else {
                fatalError("canAddInput() returned false")
            }
//        }

        // The pixel buffer adaptor must be created before we start writing.
        createPixelBufferAdaptor()

        if videoWriter.startWriting() == false {
            fatalError("startWriting() failed")
        }


        self.initialTime = initialBuffer != nil ? CMSampleBufferGetPresentationTimeStamp(initialBuffer!) : CMTime.zero
        videoWriter.startSession(atSourceTime: self.initialTime)

        precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
    }

    func render(appendPixelBuffers: @escaping (VideoWriter)->Bool, completion: @escaping ()->Void) {

        precondition(videoWriter != nil, "Call start() to initialze the writer")

        let queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: queue) {
            let isFinished = appendPixelBuffers(self)
            if isFinished {
                self.videoWriterInput.markAsFinished()
                self.videoWriter.finishWriting() {
                    DispatchQueue.main.async {
                        print("Done Creating Video")
                        completion()
                    }
                }
            }
            else {
                // Fall through. The closure will be called again when the writer is ready.
            }
        }
    }

    func addAudio(buffer: CMSampleBuffer, time: CMTime) {
        if(isReadyForAudioData) {
            print("Writing audio \(VideoWriter.ci) of a time of \(CMSampleBufferGetPresentationTimeStamp(buffer))")
            let duration = CMSampleBufferGetDuration(buffer)
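            // createSampleBuffer(fromSampleBuffer:withTimeOffset:duration:) is a custom CMSampleBuffer helper (not shown here)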
            let offsetBuffer = CMSampleBuffer.createSampleBuffer(fromSampleBuffer: buffer, withTimeOffset: time, duration: duration)
            if(offsetBuffer != nil) {
                print("Added audio")
                self.audioWriterInput.append(offsetBuffer!)
            }
            else {
                print("Not adding audio")
            }
        }

        VideoWriter.ci += 1
    }

    func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {

        precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")
        //1
        let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size, alpha: CGImageAlphaInfo.premultipliedFirst)!

        return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime + self.initialTime)
    }
}

0 votes

I was looking into this further. Although I could update my other answer, I'd rather open this tangent in a new answer to keep the ideas separate. Apple states that we can use AVVideoComposition: "To use the created video composition for playback, create an AVPlayerItem object from the same asset used as the composition's source, then assign the composition to the player item's videoComposition property. To export the composition to a new movie file, create an AVAssetExportSession object from the same source asset, then assign the composition to the export session's videoComposition property."

https://developer.apple.com/documentation/avfoundation/avasynchronousciimagefilteringrequest

So you could try using an AVPlayer with the original URL and then apply your filter:

let asset = AVAsset(url: originalURL)

let filter = CIFilter(name: "CIGaussianBlur")!
let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampedToExtent()
    filter.setValue(source, forKey: kCIInputImageKey)

    // Vary filter parameters based on video timing
    let seconds = CMTimeGetSeconds(request.compositionTime)
    filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

    // Crop the blurred output to the bounds of the original image
    let output = filter.outputImage!.cropped(to: request.sourceImage.extent)

    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})

let item = AVPlayerItem(asset: asset)
item.videoComposition = composition
let player = AVPlayer(playerItem: item)

I'm sure you know what to do from here. This may allow you to do the filtering "in real time". The potential issue I can see is that it runs into the same problem you had originally, since each frame still takes some time to process, which could cause a delay between the audio and the video. However, that may not happen. If you do get this working, then once the user has selected their filter, you can use an AVAssetExportSession to export that specific videoComposition.
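
For the export step, a minimal sketch could look like the following (the preset, file type, and output location below are placeholder choices):

// Export the composition once the user has committed to a filter
let exportURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("filtered").appendingPathExtension("mov")
try? FileManager.default.removeItem(at: exportURL)

if let exporter = AVAssetExportSession(asset: asset,
                                       presetName: AVAssetExportPresetHighestQuality) {
    exporter.videoComposition = composition
    exporter.outputFileType = .mov
    exporter.outputURL = exportURL
    exporter.exportAsynchronously {
        if exporter.status == .completed {
            print("Exported filtered video to \(exportURL)")
        } else {
            print("Export failed: \(String(describing: exporter.error))")
        }
    }
}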

More here if you need help!
