How to scale/resize a CVPixelBufferRef in Objective-C, iOS

Question · Votes: 0 · Answers: 5

I am trying to resize an image from a CVPixelBufferRef to 299×299. Ideally it would also crop the image. The original pixel buffer is 640×320; the goal is to scale/crop to 299×299 without losing the aspect ratio (i.e., crop to the center).

I have found code for resizing a UIImage in Objective-C, but none for resizing a CVPixelBufferRef. I have found various very complicated Objective-C examples for many different image types, but nothing specifically for resizing a CVPixelBufferRef.

What is the simplest/best way to do this? Please include exact code.

...I tried selton's answer, but it did not work, because the resulting type of the scaled buffer is incorrect (it trips this assertion code):

OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
  int doReverseChannels;
  if (kCVPixelFormatType_32ARGB == sourcePixelFormat) {
    doReverseChannels = 1;
  } else if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
    doReverseChannels = 0;
  } else {
    assert(false);  // Unknown source format
  }
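For what it's worth, the assertion most likely fires because camera frames arrive in a bi-planar YCbCr format (e.g. '420f') by default rather than BGRA, so neither branch matches; logging the offending format makes that obvious. A small plain-C sketch of a FourCC decoder for debugging (`formatName` is a hypothetical helper, not part of CoreVideo):

```c
#include <stdint.h>
#include <stdio.h>

// Hypothetical debugging helper: render a CoreVideo OSType pixel-format
// code as text. Most formats are four-character codes ('BGRA', '420f'),
// but a few legacy ones, e.g. kCVPixelFormatType_32ARGB (= 32), are
// plain integers.
static void formatName(uint32_t fmt, char out[16]) {
    char c[4] = { (char)(fmt >> 24), (char)(fmt >> 16),
                  (char)(fmt >> 8),  (char)fmt };
    int printable = 1;
    for (int i = 0; i < 4; i++) {
        if (c[i] < 0x21 || c[i] > 0x7e) { printable = 0; break; }
    }
    if (printable)
        snprintf(out, 16, "%c%c%c%c", c[0], c[1], c[2], c[3]);
    else
        snprintf(out, 16, "%u", (unsigned)fmt);
}
```

If the logged format turns out to be a YCbCr variant, requesting kCVPixelFormatType_32BGRA in the capture output's videoSettings is the usual fix.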
Tags: ios objective-c iphone image
5 Answers

23 votes

Using CoreMLHelpers as inspiration, we can create a C function to do what you need. Based on your pixel-format requirements, I believe this solution will be the most efficient option. I tested with an AVCaptureVideoDataOutput.

I hope this helps!

Below is an AVCaptureVideoDataOutputSampleBufferDelegate implementation. Most of the work here is creating the centered cropping rectangle; using AVMakeRectWithAspectRatioInsideRect is the key (it does exactly what you want).

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }

    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);

    CGRect videoRect = CGRectMake(0, 0, width, height);
    CGSize scaledSize = CGSizeMake(299, 299);

    // Create a rectangle that meets the output size's aspect ratio, centered in the original video frame
    CGRect centerCroppingRect = AVMakeRectWithAspectRatioInsideRect(scaledSize, videoRect);

    CVPixelBufferRef croppedAndScaled = createCroppedPixelBuffer(pixelBuffer, centerCroppingRect, scaledSize);
    if (croppedAndScaled == NULL) { return; } // creation can fail (lock or allocation errors)

    // Do other things here
    // For example
    CIImage *image = [CIImage imageWithCVImageBuffer:croppedAndScaled];
    // End example

    CVPixelBufferRelease(croppedAndScaled);
}
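The centered crop rect above comes from AVMakeRectWithAspectRatioInsideRect. Its aspect-fit math can be sketched in plain C (an assumption-level sketch, not AVFoundation's actual implementation):

```c
#include <math.h>

// Sketch (assumption: not AVFoundation's implementation) of the
// aspect-fit math behind AVMakeRectWithAspectRatioInsideRect: the
// largest rect with the requested aspect ratio that fits inside the
// bounding rect, centered.
typedef struct { double x, y, w, h; } FitRect;

static FitRect aspectFitRect(double aspectW, double aspectH, FitRect bounds) {
    double scale = bounds.w / aspectW;       // try filling the width
    if (aspectH * scale > bounds.h)          // too tall? fill the height instead
        scale = bounds.h / aspectH;
    FitRect r;
    r.w = aspectW * scale;
    r.h = aspectH * scale;
    r.x = bounds.x + (bounds.w - r.w) / 2.0; // center horizontally
    r.y = bounds.y + (bounds.h - r.h) / 2.0; // center vertically
    return r;
}
```

For the question's 640×320 frame and a 299×299 target, this produces the centered 320×320 square at x = 160, which the scaling step then shrinks to 299×299.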

Method 1: Data manipulation with Accelerate

The basic premise of this function is that it first crops to the specified rectangle, then scales to the final desired size. Cropping is achieved by simply ignoring the data outside the rectangle. Scaling is achieved with Accelerate's vImageScale_ARGB8888 function. Again, thanks to CoreMLHelpers for the insight.
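The "ignoring the data" part is plain pointer arithmetic: a sketch of the offset computation the function below uses (remember that bytesPerRow may exceed width × 4 because of row padding):

```c
#include <stddef.h>

// Sketch of the cropping arithmetic: for a packed 4-byte-per-pixel
// buffer, "cropping" just means pointing vImage at the first pixel of
// the crop rect and keeping the source's bytesPerRow as the stride, so
// everything outside the rect is skipped.
static size_t cropOriginOffset(size_t cropX, size_t cropY,
                               size_t bytesPerRow, size_t bytesPerPixel) {
    return cropY * bytesPerRow + cropX * bytesPerPixel;
}
```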

void assertCropAndScaleValid(CVPixelBufferRef pixelBuffer, CGRect cropRect, CGSize scaleSize) {
    CGFloat originalWidth = (CGFloat)CVPixelBufferGetWidth(pixelBuffer);
    CGFloat originalHeight = (CGFloat)CVPixelBufferGetHeight(pixelBuffer);

    assert(CGRectContainsRect(CGRectMake(0, 0, originalWidth, originalHeight), cropRect));
    assert(scaleSize.width > 0 && scaleSize.height > 0);
}

void pixelBufferReleaseCallBack(void *releaseRefCon, const void *baseAddress) {
    if (baseAddress != NULL) {
        free((void *)baseAddress);
    }
}

// Returns a CVPixelBufferRef with +1 retain count
CVPixelBufferRef createCroppedPixelBuffer(CVPixelBufferRef sourcePixelBuffer, CGRect croppingRect, CGSize scaledSize) {

    OSType inputPixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    assert(inputPixelFormat == kCVPixelFormatType_32BGRA
           || inputPixelFormat == kCVPixelFormatType_32ABGR
           || inputPixelFormat == kCVPixelFormatType_32ARGB
           || inputPixelFormat == kCVPixelFormatType_32RGBA);

    assertCropAndScaleValid(sourcePixelBuffer, croppingRect, scaledSize);

    if (CVPixelBufferLockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess) {
        NSLog(@"Could not lock base address");
        return nil;
    }

    void *sourceData = CVPixelBufferGetBaseAddress(sourcePixelBuffer);
    if (sourceData == NULL) {
        NSLog(@"Error: could not get pixel buffer base address");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    size_t sourceBytesPerRow = CVPixelBufferGetBytesPerRow(sourcePixelBuffer);
    size_t offset = CGRectGetMinY(croppingRect) * sourceBytesPerRow + CGRectGetMinX(croppingRect) * 4;

    vImage_Buffer croppedvImageBuffer = {
        .data = ((char *)sourceData) + offset,
        .height = (vImagePixelCount)CGRectGetHeight(croppingRect),
        .width = (vImagePixelCount)CGRectGetWidth(croppingRect),
        .rowBytes = sourceBytesPerRow
    };

    size_t scaledBytesPerRow = scaledSize.width * 4;
    void *scaledData = malloc(scaledSize.height * scaledBytesPerRow);
    if (scaledData == NULL) {
        NSLog(@"Error: out of memory");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    vImage_Buffer scaledvImageBuffer = {
        .data = scaledData,
        .height = (vImagePixelCount)scaledSize.height,
        .width = (vImagePixelCount)scaledSize.width,
        .rowBytes = scaledBytesPerRow
    };

    /* The ARGB8888, ARGB16U, ARGB16S and ARGBFFFF functions work equally well on
     * other channel orderings of 4-channel images, such as RGBA or BGRA.*/
    vImage_Error error = vImageScale_ARGB8888(&croppedvImageBuffer, &scaledvImageBuffer, nil, 0);
    CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);

    if (error != kvImageNoError) {
        NSLog(@"Error: %ld", error);
        free(scaledData);
        return nil;
    }

    OSType pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    CVPixelBufferRef outputPixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(nil, scaledSize.width, scaledSize.height, pixelFormat, scaledData, scaledBytesPerRow, pixelBufferReleaseCallBack, nil, nil, &outputPixelBuffer);

    if (status != kCVReturnSuccess) {
        NSLog(@"Error: could not create new pixel buffer");
        free(scaledData);
        return nil;
    }

    return outputPixelBuffer;
}

Method 2: CoreImage

This method is easier to read and has the advantage of being fairly agnostic about the incoming pixel buffer format, which can be a plus for some use cases. You are, of course, limited to the formats CoreImage supports.

CVPixelBufferRef createCroppedPixelBufferCoreImage(CVPixelBufferRef pixelBuffer,
                                                   CGRect cropRect,
                                                   CGSize scaleSize,
                                                   CIContext *context) {

    assertCropAndScaleValid(pixelBuffer, cropRect, scaleSize);

    CIImage *image = [CIImage imageWithCVImageBuffer:pixelBuffer];
    image = [image imageByCroppingToRect:cropRect];

    CGFloat scaleX = scaleSize.width / CGRectGetWidth(image.extent);
    CGFloat scaleY = scaleSize.height / CGRectGetHeight(image.extent);

    image = [image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

    // Due to the way [CIContext:render:toCVPixelBuffer] works, we need to translate the image so the cropped section is at the origin
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-image.extent.origin.x, -image.extent.origin.y)];

    CVPixelBufferRef output = NULL;

    CVPixelBufferCreate(nil,
                        CGRectGetWidth(image.extent),
                        CGRectGetHeight(image.extent),
                        CVPixelBufferGetPixelFormatType(pixelBuffer),
                        nil,
                        &output);

    if (output != NULL) {
        [context render:image toCVPixelBuffer:output];
    }

    return output;
}

Creating the CIContext can be done at the call site, or it can be created once and stored in a property. See the documentation for information on the options.

// Create a CIContext using default settings, this will
// typically use the GPU and Metal by default if supported
if (self.context == nil) {
    self.context = [CIContext context];
}

4 votes
    func assertCropAndScaleValid(_ pixelBuffer: CVPixelBuffer, _ cropRect: CGRect, _ scaleSize: CGSize) {
        let originalWidth: CGFloat = CGFloat(CVPixelBufferGetWidth(pixelBuffer))
        let originalHeight: CGFloat = CGFloat(CVPixelBufferGetHeight(pixelBuffer))

        assert(CGRect(x: 0, y: 0, width: originalWidth, height: originalHeight).contains(cropRect))
        assert(scaleSize.width > 0 && scaleSize.height > 0)
    }

    func createCroppedPixelBufferCoreImage(pixelBuffer: CVPixelBuffer,
                                           cropRect: CGRect,
                                           scaleSize: CGSize,
                                           context: inout CIContext
    ) -> CVPixelBuffer {
        assertCropAndScaleValid(pixelBuffer, cropRect, scaleSize)
        var image = CIImage(cvImageBuffer: pixelBuffer)
        image = image.cropped(to: cropRect)

        let scaleX = scaleSize.width / image.extent.width
        let scaleY = scaleSize.height / image.extent.height

        image = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
        image = image.transformed(by: CGAffineTransform(translationX: -image.extent.origin.x, y: -image.extent.origin.y))

        var output: CVPixelBuffer? = nil

        CVPixelBufferCreate(nil, Int(image.extent.width), Int(image.extent.height), CVPixelBufferGetPixelFormatType(pixelBuffer), nil, &output)

        if output != nil {
            context.render(image, to: output!)
        } else {
            fatalError("Error")
        }
        return output!
    }

Swift version of @allenh's answer.


0 votes

Step 1

Start with [CIImage imageWithCVPixelBuffer:] to convert the CVPixelBuffer to a CIImage, then use the standard methods to convert that CIImage to a CGImage, and that CGImage to a UIImage.

CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgimage = [context
                   createCGImage:ciimage
                   fromRect:CGRectMake(0, 0, 
                          CVPixelBufferGetWidth(pixelBuffer),
                          CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiimage = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);

Step 2

Place the image in a UIImageView, scaled/cropped to the desired size:

UIImageView *imageView = [[UIImageView alloc] initWithFrame:/*CGRect with new dimensions*/];
imageView.contentMode = /*UIViewContentMode with desired scaling/clipping style*/;
imageView.image = uiimage;

Step 3

Snapshot the CALayer of that imageView with something like this:

#define snapshotOfView(__view) (\
(^UIImage *(void) {\
CGRect __rect = [__view bounds];\
UIGraphicsBeginImageContextWithOptions(__rect.size, /*(BOOL)Opaque*/, /*(float)scaleResolution*/);\
CGContextRef __context = UIGraphicsGetCurrentContext();\
[__view.layer renderInContext:__context];\
UIImage *__image = UIGraphicsGetImageFromCurrentImageContext();\
UIGraphicsEndImageContext();\
return __image;\
})()\
)

In use:

uiimage = snapshotOfView(imageView);

Step 4

Convert that snapshot UIImage (now cropped/scaled) back into a CVPixelBuffer using a method like this one: https://stackoverflow.com/a/34990820/2057171

That is:

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
    NSDictionary *options = @{
                              (NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
                              (NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
                              };

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                        CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                        &pxbuffer);
    if (status!=kCVReturnSuccess) {
        NSLog(@"Operation failed");
    }
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGAffineTransform flipVertical = CGAffineTransformMake( 1, 0, 0, -1, 0, CGImageGetHeight(image) );
    CGContextConcatCTM(context, flipVertical);
    CGAffineTransform flipHorizontal = CGAffineTransformMake( -1.0, 0.0, 0.0, 1.0, CGImageGetWidth(image), 0.0 );
    CGContextConcatCTM(context, flipHorizontal);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}

In use:

pixelBuffer = [self pixelBufferFromCGImage:uiimage];

0 votes

A CVPixelBuffer utility for cropping/scaling/rotating:

https://gist.github.com/lich4/d977986b92245aaf0f83aa0e1e0317de


-1 votes

You could consider using CIImage:

CIImage *image = [CIImage imageWithCVPixelBuffer:pxbuffer];
CIImage *scaledImage = [image imageByApplyingTransform:(CGAffineTransformMakeScale(0.1, 0.1))];
CVPixelBufferRef scaledBuf = [scaledImage pixelBuffer];

You should change the scale to fit your target size. Note, however, that a CIImage's pixelBuffer property only returns the buffer the image was originally created from; after applying a transform it returns nil, so this approach will not actually give you a scaled buffer. You would need to render the CIImage into a new CVPixelBuffer, as the CoreImage approach in the top answer does.
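For reference, the factors that would map the question's 640×320 frame onto 299×299 can be computed like this (plain-C sketch with a hypothetical helper; note the axes scale unevenly, so the aspect ratio is not preserved):

```c
// Compute the per-axis scale factors that map a source extent onto a
// target size. sx and sy differ when the aspect ratios differ, which
// is why a center crop is needed first to avoid distortion.
static void scaleFactors(double srcW, double srcH,
                         double dstW, double dstH,
                         double *sx, double *sy) {
    *sx = dstW / srcW;
    *sy = dstH / srcH;
}
```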
