I'm trying to convert the AVAudioPCMBuffers I get from an AVAudioNode's tap block into CMSampleBuffers so I can append them to an AVAssetWriter's audio input. I also create a new sample buffer with corrected timing: the delta between the buffer's time and the time the asset writer started writing. The code works fine when I feed it buffers from the camera's AVCaptureAudioDataOutput, but it fails when the PCM data comes from an AVAudioEngine tap.
I'm injecting the correct audio settings, taken from the output format of the very node I install the tap on, i.e.

engine.mainMixerNode.outputFormat(forBus: 0).settings
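For context, this is roughly how the tap is wired up (a sketch, not my exact code; the bufferSize and property names are illustrative), with the conversion helper shown below doing the PCM-to-sample-buffer step:

import AVFoundation

let tapFormat = engine.mainMixerNode.outputFormat(forBus: 0)
audioSettings = tapFormat.settings // later injected into the writer's audio input

engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: tapFormat) { [weak self] pcmBuffer, _ in
    // Convert each tap buffer and hand it to the asset writer path.
    guard let sample = pcmBuffer.asCMSampleBuffer() else { return }
    self?.appendAudio(buffer: sample)
}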
This is the code I found on GitHub for converting between PCM buffers and sample buffers.
public extension AVAudioPCMBuffer {
    /// Converts `AVAudioPCMBuffer` to `CMSampleBuffer`
    /// - Returns: `CMSampleBuffer`
    func asCMSampleBuffer() -> CMSampleBuffer? {
        let audioBufferList = mutableAudioBufferList
        let asbd = format.streamDescription

        var sampleBuffer: CMSampleBuffer? = nil
        // Named so it doesn't shadow the `format` property used above.
        var formatDescription: CMFormatDescription? = nil

        var status = CMAudioFormatDescriptionCreate(
            allocator: kCFAllocatorDefault,
            asbd: asbd,
            layoutSize: 0,
            layout: nil,
            magicCookieSize: 0,
            magicCookie: nil,
            extensions: nil,
            formatDescriptionOut: &formatDescription
        )
        guard status == noErr else { return nil }

        var timing = CMSampleTimingInfo(
            duration: CMTime(value: 1, timescale: Int32(asbd.pointee.mSampleRate)),
            presentationTimeStamp: CMClockGetTime(CMClockGetHostTimeClock()),
            decodeTimeStamp: .invalid
        )
        status = CMSampleBufferCreate(
            allocator: kCFAllocatorDefault,
            dataBuffer: nil,
            dataReady: false,
            makeDataReadyCallback: nil,
            refcon: nil,
            formatDescription: formatDescription,
            sampleCount: CMItemCount(frameLength),
            sampleTimingEntryCount: 1,
            sampleTimingArray: &timing,
            sampleSizeEntryCount: 0,
            sampleSizeArray: nil,
            sampleBufferOut: &sampleBuffer
        )
        guard status == noErr else {
            Log.error(
                category: "AVAudioPCMBuffer to CMSampleBuffer",
                message: "CMSampleBufferCreate returned error",
                metadata: ["Error": status]
            )
            return nil
        }
        guard let sampleBuffer else {
            Log.error(
                category: "AVAudioPCMBuffer to CMSampleBuffer",
                message: "Sample buffer not found"
            )
            return nil
        }

        status = CMSampleBufferSetDataBufferFromAudioBufferList(
            sampleBuffer,
            blockBufferAllocator: kCFAllocatorDefault,
            blockBufferMemoryAllocator: kCFAllocatorDefault,
            flags: 0,
            bufferList: audioBufferList
        )
        guard status == noErr else {
            Log.error(
                category: "AVAudioPCMBuffer to CMSampleBuffer",
                message: "CMSampleBufferSetDataBufferFromAudioBufferList returned error",
                metadata: ["Error": status]
            )
            return nil
        }
        return sampleBuffer
    }
}
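While debugging, I also dump the ASBD the converted buffer ends up carrying, so I can compare it against the writer input's outputSettings. This is a sketch with a made-up helper name, not my exact code:

import CoreMedia

func dumpFormat(of sampleBuffer: CMSampleBuffer) {
    // Pull the AudioStreamBasicDescription back out of the sample buffer.
    guard let description = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(description)?.pointee
    else { return }
    print("rate:", asbd.mSampleRate,
          "channels:", asbd.mChannelsPerFrame,
          "formatFlags:", asbd.mFormatFlags,
          "bytesPerFrame:", asbd.mBytesPerFrame)
}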
This is how I create the new sample buffer with the new timing. time is the moment the AVAssetWriter started writing.
func getCorrectTimeStampedBuffer(from buffer: CMSampleBuffer) -> CMSampleBuffer? {
    let timeStamp = CMSampleBufferGetPresentationTimeStamp(buffer).seconds
    let newTime = CMTime(seconds: timeStamp - time, preferredTimescale: timeScale)

    // Fetch the existing timing for the first sample, then override its stamps.
    var info = CMSampleTimingInfo()
    CMSampleBufferGetSampleTimingInfo(buffer, at: 0, timingInfoOut: &info)
    info.presentationTimeStamp = newTime
    info.decodeTimeStamp = newTime

    var newBuffer: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(
        allocator: kCFAllocatorDefault,
        sampleBuffer: buffer,
        sampleTimingEntryCount: 1,
        sampleTimingArray: &info,
        sampleBufferOut: &newBuffer
    )
    return newBuffer
}
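For context, this is roughly how time and timeScale get captured when recording starts. It's a sketch of the flow rather than my exact code, and it assumes the writer session starts at .zero:

func startRecording() {
    guard assetWriter?.startWriting() == true else { return }
    assetWriter?.startSession(atSourceTime: .zero)
    // The converted buffers are stamped with the host clock,
    // so the start time is taken from the same clock.
    time = CMClockGetTime(CMClockGetHostTimeClock()).seconds
    timeScale = 44_100 // assumption: matches the tap format's sample rate
}

With that in place, a buffer stamped at, say, 12345.678 s on the host clock when the writer started at 12340.000 s gets retimed to a PTS of 5.678 s, which lines up with the session's zero start.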
I've double-checked that I'm using the correct audio settings, since I set them before writing starts.
func getAudioInput() -> AVAssetWriterInput {
    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    input.expectsMediaDataInRealTime = true
    return input
}
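The input is attached to the writer with the usual canAdd/add flow; a sketch, where assetWriter and audioInput are my own properties:

let input = getAudioInput()
// Only attach the input if the writer accepts its settings.
if assetWriter?.canAdd(input) == true {
    assetWriter?.add(input)
}
audioInput = input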
I've installed an observer on the writer's status.
writer.publisher(for: \.status)
    .filter { $0 == .failed }
    .sink { [writer] _ in
        print("failed", writer.error!)
    }
    .store(in: &cancellables)
I append the audio like this.
func appendAudio(buffer: CMSampleBuffer) {
    guard audioInput?.isReadyForMoreMediaData == true else {
        Log.debug(
            category: "VideoRecorder",
            message: "Audio input can't be appended as it's not ready for more media data"
        )
        return
    }
    let newBuffer = getCorrectTimeStampedBuffer(from: buffer)
    let hasAppended = audioInput!.append(newBuffer ?? buffer)
    if hasAppended {
        Log.debug(category: "VideoRecorder appendAudio", message: "Buffer appended")
    } else {
        Log.error(
            category: "VideoRecorder appendAudio",
            message: "Could not append audio",
            metadata: [
                "Asset writer status": String(describing: assetWriter?.status.rawValue),
                "Asset writer error": String(describing: assetWriter?.error),
                "Asset writer error description": String(describing: assetWriter?.error?.localizedDescription)
            ]
        )
    }
}
The error occurs immediately after calling audioInput!.append(newBuffer ?? buffer). Right before the call the status is still .writing, and hasAppended comes back false.
Worst of all, the error says nothing. I can't even find any information on the -12780 code. Does AVAssetWriter provide any contextual errors at all?
This is the error that comes back:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x283895a40 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}