EXC_BAD_ACCESS in VNSequenceRequestHandler


The following code uses the Vision and AVFoundation frameworks to enable face tracking with the built-in camera on macOS. Under certain circumstances the code crashes with EXC_BAD_ACCESS (code=2) on a worker thread on the queue com.apple.VN.trackersCollectionManagementQueue (serial). The app works as expected as long as no face is detected, but as soon as a face is detected and the app tries to track it, it crashes in the call

[_sequenceRequestHandler performRequests:requests onCVPixelBuffer:pixelBuffer orientation:exifOrientation error:&error]

which is made inside the AVCaptureVideoDataOutputSampleBufferDelegate method

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection

As far as I understand it, EXC_BAD_ACCESS means that memory cannot be accessed [1], and the error code (2), KERN_PROTECTION_FAILURE, means that the specified memory is valid but does not permit the required form of access [2]. A (possibly outdated) technote explains that this is caused by a thread attempting to write to read-only memory [3]. From this I gather that the problem is not caused by premature deallocation or memory corruption, but by memory access control across threads.
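
For intuition, the textbook way to hit this exact exception is a write to a read-only page, for example into a string literal (an illustration on my part, not taken from the crash):

// Minimal sketch: writing to read-only memory raises
// EXC_BAD_ACCESS (code=2, KERN_PROTECTION_FAILURE).
char *literal = "read-only"; // string literals live in a read-only segment
literal[0] = 'X';            // the address is valid but the page is not writable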

I believe the problem appeared after an update. The crash occurs when Debug Executable is checked in the scheme editor of the Game template projects (Metal, SceneKit and SpriteKit), but not when the code is used in the App or Document App templates. The code also works as expected when adapted for iOS on a physical device. I have tried to isolate the problem by deleting as much code as possible, and the following files can be added to any of the templates.

Header file

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

NS_ASSUME_NONNULL_BEGIN

@interface LSPVision : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>

@property (nonatomic) AVCaptureVideoPreviewLayer *previewLayer;

- (void)captureFrame;

@end

NS_ASSUME_NONNULL_END

Implementation file

#import "LSPVision.h"
#import <Vision/Vision.h>

@implementation LSPVision
{
    // AVCapture stuff
    AVCaptureSession *_session;
    AVCaptureVideoDataOutput *_videoDataOutput;
    
    dispatch_queue_t _videoDataOutputQueue;
    CGSize _captureDeviceResolution;
    
    // Vision requests
    NSMutableArray *_detectionRequests; // Array of VNDetectFaceRectanglesRequest
    NSMutableArray *_trackingRequests; // Array of VNTrackObjectRequest
    VNSequenceRequestHandler *_sequenceRequestHandler;
    
    BOOL _frameCapture;
}

- (nonnull instancetype)init
{
    self = [super init];
    if(self)
    {
        _session = [self _setupAVCaptureSession];
        
        [self designatePreviewLayerForCaptureSession:_session];
        
        [self _prepareVisionRequest];
        _frameCapture = YES;
        
        if (_session) {
            [_session startRunning];
        }
    }
    return self;
}

# pragma mark Setup AVSession

- (AVCaptureSession *)_setupAVCaptureSession {
    
    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *device;
    
    #if defined(TARGET_MACOS)
    if (@available(macOS 10.15, *)) {
        AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionFront];
        
        device = discoverySession.devices.firstObject;
    }
    
    #endif
    
    if (device != nil) {
        AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        
        if ([captureSession canAddInput:deviceInput]) {
            [captureSession addInput:deviceInput];
        }
        
        AVCaptureDeviceFormat *lowestResolution = [self _lowestResolution420Format:device];
        
        if (lowestResolution != nil) {
            if ([device lockForConfiguration:nil]) {
                
                device.activeFormat = lowestResolution;
                [device unlockForConfiguration];
            }
        }
    }
    
    if (device != nil) {
        [self _configureVideoDataOutput:device captureSession:captureSession];
        return captureSession;
    }
    
    NSLog(@"Hold up, something went wrong with AVCaptureSession");
    [self _tearDownAVCapture];
    return nil;
}

- (AVCaptureDeviceFormat *)_lowestResolution420Format:(AVCaptureDevice *)device {
    
    AVCaptureDeviceFormat *lowestResolutionFormat = nil;
    CMVideoDimensions lowestResolutionDimensions = { .height = (int32_t)10000, .width = (int32_t)10000 };
    
    for (AVCaptureDeviceFormat *deviceFormat in device.formats) {
        
        CMFormatDescriptionRef deviceFormatDescription = deviceFormat.formatDescription;
        
        FourCharCode mediaSubType = CMFormatDescriptionGetMediaSubType(deviceFormatDescription);
        // Compare each pixel format separately; ORing the two fourCC constants
        // together produces a value that never matches either format.
        if (mediaSubType == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange ||
            mediaSubType == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
            CMVideoDimensions candidateDimensions = CMVideoFormatDescriptionGetDimensions(deviceFormatDescription);
            
            if ((lowestResolutionFormat == nil) || candidateDimensions.width < lowestResolutionDimensions.width) { // < keeps the lowest resolution, matching the method name
                lowestResolutionFormat = deviceFormat;
                lowestResolutionDimensions = candidateDimensions;
                NSLog(@"Device Format: Width: %d, Height: %d", candidateDimensions.width, candidateDimensions.height);
                _captureDeviceResolution.width =  candidateDimensions.width;
                _captureDeviceResolution.height =  candidateDimensions.height;
            }
        }
    }
    
    return lowestResolutionFormat;
}

- (void)designatePreviewLayerForCaptureSession:(AVCaptureSession *)session {
    
    AVCaptureVideoPreviewLayer *videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    self.previewLayer = videoPreviewLayer;
    
    videoPreviewLayer.name = @"Camera Preview";
}

- (void)_configureVideoDataOutput:(AVCaptureDevice *)inputDevice captureSession:(AVCaptureSession *)captureSession {
    
    AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
    videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
    
    // Create a serial dispatch queue used for the sample buffer delegate as well as when a still image is captured.
    // A serial dispatch queue must be used to guarantee that video frames will be delivered in order.
    dispatch_queue_t videoDataOutputQueue = dispatch_queue_create("com.example.apple-samplecode.VisionFaceTrack", NULL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
    
    if ([captureSession canAddOutput:videoDataOutput]) {
        [captureSession addOutput:videoDataOutput];
    }
    
    [videoDataOutput connectionWithMediaType:AVMediaTypeVideo].enabled = YES;
    
    _videoDataOutput = videoDataOutput;
    _videoDataOutputQueue = videoDataOutputQueue;
}

# pragma mark Vision Request

- (void)_prepareVisionRequest {
    
    NSMutableArray *requests = [NSMutableArray array];
    
    VNRequestCompletionHandler handlerBlock =  ^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        
        if (error) {
            NSLog(@"Handler error: %@", error);
        }
        
        VNDetectFaceRectanglesRequest *faceDetectionRequest = (VNDetectFaceRectanglesRequest *)request;
        
        dispatch_async(dispatch_get_main_queue(), ^{
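            // NOTE: _trackingRequests is written here on the main queue but read on the
            // video data output queue in the capture callback, without synchronization.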
            
            for (VNFaceObservation *observation in faceDetectionRequest.results) {
                
                VNTrackObjectRequest *faceTrackingRequest = [[VNTrackObjectRequest alloc] initWithDetectedObjectObservation:observation];
                [requests addObject:faceTrackingRequest];
            }
            
            self->_trackingRequests = [requests copy];
        });
    };
    
    VNDetectFaceRectanglesRequest *faceDetectionRequest = [[VNDetectFaceRectanglesRequest alloc] initWithCompletionHandler:handlerBlock];
    
    _detectionRequests = [NSMutableArray arrayWithObject:faceDetectionRequest];
    _sequenceRequestHandler = [[VNSequenceRequestHandler alloc] init];
}

# pragma mark Delegate functions

// AVCaptureVideoDataOutputSampleBufferDelegate
// Handle delegate method callback on receiving a sample buffer.
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    
    if (_frameCapture == YES) {
        
        NSMutableDictionary *requestHandlerOptions = [NSMutableDictionary dictionary];
        
        CFTypeRef cameraIntrinsicData = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil);
        if (cameraIntrinsicData != nil) {
            // CMGetAttachment follows the Get rule (no +1 reference), so bridge without
            // transferring ownership; CFBridgingRelease here would over-release.
            [requestHandlerOptions setObject:(__bridge id)cameraIntrinsicData forKey:VNImageOptionCameraIntrinsics];
        }
        
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        if (!pixelBuffer) {
            NSLog(@"Failed to obtain a CVPixelBuffer for the current output frame.");
            return;
        }
        
        #if defined(TARGET_MACOS)
        CGImagePropertyOrientation exifOrientation = kCGImagePropertyOrientationLeftMirrored;
        #endif
        
        NSError *error;
        NSArray *requests;
        
        if (_trackingRequests.count > 0) {
            
            requests = _trackingRequests;
            
        } else {
            
            // No tracking object detected, so perform initial detection
            VNImageRequestHandler *imageRequestHandler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer orientation:exifOrientation options:requestHandlerOptions];
            
            NSArray *detectionRequests = _detectionRequests;
            if (detectionRequests == nil) {
                return;
            }
            
            [imageRequestHandler performRequests:detectionRequests error:&error];
            if (error) {
                NSLog(@"Failed to perform FaceRectangleRequest:  %@", error);
            }
            
            return;
        }
        
        // The sequence request handler results in 10-20% CPU utilization
        [_sequenceRequestHandler performRequests:requests onCVPixelBuffer:pixelBuffer orientation:exifOrientation error:&error];
        
        if (error) {
            NSLog(@"Failed to perform SequenceRequest:  %@", error);
            return;
        }
        
        // Setup the next round of tracking
        NSMutableArray *newTrackingRequests = [NSMutableArray array];
        
        for (VNTrackObjectRequest *trackingRequest in requests) {
            
            NSArray *results = trackingRequest.results;
            
            trackingRequest.trackingLevel = VNRequestTrackingLevelFast;
            
            // Guard before indexing: results[0] on an empty array would throw.
            if (results.count == 0) {
                return;
            }
            
            VNDetectedObjectObservation *observation = results[0];
            
            if (![observation isKindOfClass:[VNDetectedObjectObservation class]]) {
                return;
            }
            
            if (!trackingRequest.isLastFrame) {
                if (observation.confidence > 0.3f ) {
                    trackingRequest.inputObservation = observation;
                } else {
                    trackingRequest.lastFrame = true;
                }
                [newTrackingRequests addObject:trackingRequest];
            }
        }
        
        _trackingRequests = newTrackingRequests;
        
        if (newTrackingRequests.count == 0) {
            // Nothing to track, so abort.
            return;
        }
        
        NSMutableArray *faceLandmarksRequests = [NSMutableArray array];
        
        for (VNTrackObjectRequest* trackingRequest in newTrackingRequests) {
            
            VNRequestCompletionHandler handlerBlock = ^(VNRequest * _Nonnull request, NSError * _Nullable error) {
                if (error != nil) {
                    NSLog(@"Facelandmarks error: %@", error);
                }
                
                VNDetectFaceLandmarksRequest *landmarksRequest = (VNDetectFaceLandmarksRequest *)request;
                NSArray *results = landmarksRequest.results;
                if (results == nil) {
                    return;
                }
                
                // Perform all UI updates (drawing) on the main queue, not the background queue on which this handler is being called.
                dispatch_async(dispatch_get_main_queue(), ^{
                    
                    for (VNFaceObservation *faceObservation in results) {
                        [self _setEyePositionsForFace:faceObservation];
                        //NSLog(@"seeing face");
                    }
                });
            };
            
            VNDetectFaceLandmarksRequest *faceLandmarksRequest = [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:handlerBlock];
            
            NSArray *trackingResults = trackingRequest.results;
            if (trackingResults == nil) {
                return;
            }
            
            // Guard before indexing; a nil check after subscripting could never fire.
            if (trackingResults.count == 0) {
                return;
            }
            
            VNDetectedObjectObservation *observation = trackingResults[0];
            
            VNFaceObservation *faceObservation = [VNFaceObservation observationWithBoundingBox:observation.boundingBox];
            faceLandmarksRequest.inputFaceObservations = @[faceObservation];
            
            // Continue to track detected facial landmarks.
            [faceLandmarksRequests addObject:faceLandmarksRequest];
            
            VNImageRequestHandler *imageRequestHandler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer orientation:exifOrientation options:requestHandlerOptions];
            
            [imageRequestHandler performRequests:faceLandmarksRequests error:&error];
            
            if (error != nil) {
                NSLog(@"Failed to perform FaceLandmarkRequest: %@", error);
            }
        }
    }
    //_frameCapture = NO;
}

# pragma mark Helper Functions

- (void)captureFrame {
    _frameCapture = YES;
}

- (void)_tearDownAVCapture {
    
    _videoDataOutput = nil;
    _videoDataOutputQueue = nil;
    
}

@end
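
For context, nothing beyond instantiating the class is needed to drive it from a template project. A minimal sketch (the view controller wiring below is my assumption, not part of the original files):

#import <Cocoa/Cocoa.h>
#import "LSPVision.h"

@interface ViewController : NSViewController
@property (nonatomic, strong) LSPVision *vision;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.vision = [[LSPVision alloc] init]; // init also starts the capture session
    self.view.wantsLayer = YES;
    self.vision.previewLayer.frame = self.view.bounds;
    [self.view.layer addSublayer:self.vision.previewLayer];
}

@end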

Debugging

The crash seems to be related to Metal, possibly across multiple threads. It happens when the Vision framework (from a worker thread) executes Metal Performance Shaders from the private neural-network framework (Espresso). Before the crash there is a deadlock related to command buffers. This eventually leads Address Sanitizer to report BUS on unknown address, which I suppose is why I get KERN_PROTECTION_FAILURE. The other threads are either executing Metal work or simply waiting. I do not know whether the semaphores are related to Metal CPU/GPU synchronization or something else. When the code is used with the App template, the Vision framework runs on the main thread and no crash occurs. Short of filing a bug report, I do not know how to tackle this in any meaningful way. That said, my debugging skills leave a lot to be desired, so any help is greatly appreciated, not only with solving the problem but also with understanding it. Address Sanitizer and Thread Sanitizer were enabled for the output below. Because of length restrictions the crash report can be read here. The crashing project (it crashes on my machine) can be viewed and downloaded from Dropbox. My machine is a 2019 MacBook Pro 16.
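
A cheap way to confirm on which queue and thread the Vision work actually runs in each template is to log from inside the capture callback (a small sketch using public dispatch API):

// Sketch: log the current queue label and thread from
// captureOutput:didOutputSampleBuffer:fromConnection:.
NSLog(@"Vision callback on queue '%s' (main thread: %d)",
      dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL),
      [NSThread isMainThread]);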

Console output

ErrorTest1(13661,0x107776e00) malloc: nano zone abandoned due to inability to preallocate reserved vm space.
2020-12-24 09:48:35.709965+0100 ErrorTest1[13661:811227] Metal GPU Frame Capture Enabled
2020-12-24 09:48:36.675326+0100 ErrorTest1[13661:811227] [plugin] AddInstanceForFactory: No factory registered for id <CFUUID 0x6030000b7b50> F8BB1C28-BAE8-11D6-9C31-00039315CD46
2020-12-24 09:48:36.707535+0100 ErrorTest1[13661:811227] [plugin] AddInstanceForFactory: No factory registered for id <CFUUID 0x6030000bb5a0> 30010C1C-93BF-11D8-8B5B-000A95AF9C6A
2020-12-24 09:48:36.845641+0100 ErrorTest1[13661:811227] [] CMIOHardware.cpp:379:CMIOObjectGetPropertyData Error: 2003332927, failed
2020-12-24 09:48:38.717546+0100 ErrorTest1[13661:811794] [logging-persist] cannot open file at line 44580 of [02c344acea]
2020-12-24 09:48:38.717648+0100 ErrorTest1[13661:811794] [logging-persist] os_unix.c:44580: (0) open(/var/db/DetachedSignatures) - Undefined error: 0
2020-12-24 09:48:38.778975+0100 ErrorTest1[13661:811761] [Metal Compiler Warning] Warning: Compilation succeeded with: 

program_source:61:16: warning: unused variable 'input_slice_count'
    const uint input_slice_count = (INPUT_FEATURE_CHANNELS + 3) / 4;
               ^
2020-12-24 09:48:38.779198+0100 ErrorTest1[13661:811812] [Metal Compiler Warning] Warning: Compilation succeeded with: 

program_source:121:24: warning: comparison of integers of different signs: 'int' and 'const constant uint' (aka 'const constant unsigned int')
    for(int kd = 0; kd < params.inputFeatureChannels; kd++)  // _ID = 3, RGB
                    ~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
2020-12-24 09:48:38.779441+0100 ErrorTest1[13661:811838] [Metal Compiler Warning] Warning: Compilation succeeded with: 

program_source:121:24: warning: comparison of integers of different signs: 'int' and 'const constant uint' (aka 'const constant unsigned int')
    for(int kd = 0; kd < params.inputFeatureChannels; kd++)  // _ID = 3, RGB
                    ~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
2020-12-24 09:48:39.072518+0100 ErrorTest1[13661:811838] [Metal Compiler Warning] Warning: Compilation succeeded with: 

program_source:61:16: warning: unused variable 'input_slice_count'
    const uint input_slice_count = (INPUT_FEATURE_CHANNELS + 3) / 4;
               ^
2020-12-24 09:48:39.073210+0100 ErrorTest1[13661:811842] [Metal Compiler Warning] Warning: Compilation succeeded with: 

program_source:98:16: warning: unused variable 'fm_group'
    const uint fm_group = threadgroup_id.z - splitId * params.simdsPerGroupData;
               ^
program_source:121:24: warning: comparison of integers of different signs: 'int' and 'const constant uint' (aka 'const constant unsigned int')
    for(int kd = 0; kd < params.inputFeatureChannels; kd++)  // _ID = 3, RGB
                    ~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
2020-12-24 09:48:39.073538+0100 ErrorTest1[13661:811812] [Metal Compiler Warning] Warning: Compilation succeeded with: 

program_source:98:16: warning: unused variable 'fm_group'
    const uint fm_group = threadgroup_id.z - splitId * params.simdsPerGroupData;
               ^
program_source:121:24: warning: comparison of integers of different signs: 'int' and 'const constant uint' (aka 'const constant unsigned int')
    for(int kd = 0; kd < params.inputFeatureChannels; kd++)  // _ID = 3, RGB
                    ~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~

LLDB bt

* thread #5, queue = 'com.apple.VN.trackersCollectionManagementQueue', stop reason = EXC_BAD_ACCESS (code=2, address=0x70000deb1ff8)
    frame #0: 0x000000010739db33 libsystem_pthread.dylib`___chkstk_darwin + 55
    frame #1: 0x000000010739dafc libsystem_pthread.dylib`thread_start + 20
    frame #2: 0x000000010724277b libMTLCapture.dylib`___lldb_unnamed_symbol2507$$libMTLCapture.dylib + 585
    frame #3: 0x00007fff29f597be MPSNeuralNetwork`___lldb_unnamed_symbol4427$$MPSNeuralNetwork + 1907
    frame #4: 0x00007fff29f5a3c2 MPSNeuralNetwork`___lldb_unnamed_symbol4432$$MPSNeuralNetwork + 756
    frame #5: 0x00007fff29f5aa39 MPSNeuralNetwork`___lldb_unnamed_symbol4435$$MPSNeuralNetwork + 83
    frame #6: 0x00007fff339e50e8 Espresso`Espresso::MPSEngine::mps_convolution_kernel::recreate_kernel() + 230
    frame #7: 0x00007fff339e3c95 Espresso`Espresso::MPSEngine::convolution_kernel_base<Espresso::generic_convolution_kernel>::set_biases(std::__1::shared_ptr<Espresso::blob<float, 1> >) + 455
    frame #8: 0x00007fff339e724b Espresso`Espresso::MPSEngine::convolution_kernel_proxy::set_biases(std::__1::shared_ptr<Espresso::blob<float, 1> >) + 103
    frame #9: 0x00007fff338b3a8f Espresso`Espresso::generic_convolution_kernel::set_biases(std::__1::shared_ptr<Espresso::blob<float, 1> >, std::__1::shared_ptr<Espresso::abstract_batch>) + 49
    frame #10: 0x00007fff338bdee1 Espresso`Espresso::load_network_layers_post_dispatch(std::__1::shared_ptr<Espresso::net> const&, std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object> const&, std::__1::shared_ptr<Espresso::cpu_context_transfer_algo_t> const&, std::__1::shared_ptr<Espresso::net_info_ir_t> const&, bool, Espresso::network_shape const&, Espresso::compute_path, bool, std::__1::shared_ptr<Espresso::blob_storage_abstract> const&) + 5940
    frame #11: 0x00007fff338ba6ee Espresso`Espresso::load_network_layers_internal(std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<Espresso::abstract_context> const&, Espresso::network_shape const&, std::__1::basic_istream<char, std::__1::char_traits<char> >*, Espresso::compute_path, bool, std::__1::shared_ptr<Espresso::blob_storage_abstract> const&) + 793
    frame #12: 0x00007fff338c9294 Espresso`Espresso::load_and_shape_network(std::__1::shared_ptr<Espresso::SerDes::generic_serdes_object> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<Espresso::abstract_context> const&, Espresso::network_shape const&, Espresso::compute_path, std::__1::shared_ptr<Espresso::blob_storage_abstract> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 576
    frame #13: 0x00007fff338cb715 Espresso`Espresso::load_network(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<Espresso::abstract_context> const&, Espresso::compute_path, bool) + 2496
    frame #14: 0x00007fff33d9603c Espresso`EspressoLight::espresso_plan::add_network(char const*, espresso_storage_type_t) + 350
    frame #15: 0x00007fff33daa817 Espresso`espresso_plan_add_network + 294
    frame #16: 0x00007fff30479b9d Vision`+[VNEspressoHelpers createSingleNetworkPlanFromResourceName:usingProcessingDevice:lowPriorityMode:inputBlobNames:outputBlobNames:explicitNetworkLayersStorageType:espressoResources:error:] + 517
    frame #17: 0x00007fff3047992d Vision`+[VNEspressoHelpers createSingleNetworkPlanFromResourceName:usingProcessingDevice:lowPriorityMode:inputBlobNames:outputBlobNames:espressoResources:error:] + 151
    frame #18: 0x00007fff303ce123 Vision`-[VNRPNTrackerEspressoModelCacheManager espressoResourcesFromOptions:error:] + 417
    frame #19: 0x00007fff303ce8c8 Vision`-[VNObjectTrackerRevision2 initWithOptions:error:] + 262
    frame #20: 0x00007fff304152df Vision`__54-[VNTrackerManager _createTracker:type:options:error:]_block_invoke + 207
    frame #21: 0x00000001072fc0b0 libdispatch.dylib`_dispatch_client_callout + 8
    frame #22: 0x000000010730d3b2 libdispatch.dylib`_dispatch_lane_barrier_sync_invoke_and_complete + 135
    frame #23: 0x00007fff30414f01 Vision`-[VNTrackerManager _createTracker:type:options:error:] + 261
    frame #24: 0x00007fff30414b52 Vision`-[VNTrackerManager trackerWithOptions:error:] + 509
    frame #25: 0x00007fff304dda4a Vision`-[VNRequestPerformer trackerWithOptions:error:] + 85
    frame #26: 0x00007fff30343ac4 Vision`-[VNTrackingRequest internalPerformRevision:inContext:error:] + 436
    frame #27: 0x00007fff3037fb08 Vision`-[VNRequest performInContext:error:] + 885
    frame #28: 0x00007fff303cd9a1 Vision`VNExecuteBlock + 58
    frame #29: 0x00007fff304dd105 Vision`-[VNRequestPerformer _performOrderedRequests:inContext:error:] + 674
    frame #30: 0x00007fff304dd482 Vision`-[VNRequestPerformer performRequests:inContext:onBehalfOfRequest:error:] + 352
    frame #31: 0x00007fff304dd586 Vision`-[VNRequestPerformer performRequests:inContext:error:] + 60
    frame #32: 0x00007fff304cbf1a Vision`-[VNSequenceRequestHandler _performRequests:onImageBuffer:gatheredForensics:error:] + 293
    frame #33: 0x00007fff304cc122 Vision`-[VNSequenceRequestHandler performRequests:onCVPixelBuffer:orientation:gatheredForensics:error:] + 111
    frame #34: 0x00007fff304cc0aa Vision`-[VNSequenceRequestHandler performRequests:onCVPixelBuffer:orientation:error:] + 28
  * frame #35: 0x0000000106fc5a97 ErrorTest1`-[LSPVision captureOutput:didOutputSampleBuffer:fromConnection:](self=0x0000608000047c20, _cmd="captureOutput:didOutputSampleBuffer:fromConnection:", output=0x00006030000ce770, sampleBuffer=0x0000614000091240, connection=0x00006030000d0c30) at LSPVision.m:246:9
    frame #36: 0x00007fff3786b2e0 AVFCapture`__56-[AVCaptureVideoDataOutput_Tundra _render:sampleBuffer:]_block_invoke + 213
    frame #37: 0x00000001077ff3bb libclang_rt.asan_osx_dynamic.dylib`__wrap_dispatch_async_block_invoke + 203
    frame #38: 0x00000001072fae78 libdispatch.dylib`_dispatch_call_block_and_release + 12
    frame #39: 0x00000001072fc0b0 libdispatch.dylib`_dispatch_client_callout + 8
    frame #40: 0x00000001073036b7 libdispatch.dylib`_dispatch_lane_serial_drain + 776
    frame #41: 0x0000000107304594 libdispatch.dylib`_dispatch_lane_invoke + 449
    frame #42: 0x0000000107312217 libdispatch.dylib`_dispatch_workloop_worker_thread + 1675
    frame #43: 0x000000010739eb15 libsystem_pthread.dylib`_pthread_wqthread + 314
    frame #44: 0x000000010739dae3 libsystem_pthread.dylib`start_wqthread + 15

Update

The bug appears to be fixed as of macOS Monterey 12.1.

objective-c macos avfoundation metal apple-vision
2 Answers
1 vote

More of a comment here: I am trying to reproduce this. I took your code as is, but had to comment out [self _setEyePositionsForFace:faceObservation]; in

                    for (VNFaceObservation *faceObservation in results) {
                        //[self _setEyePositionsForFace:faceObservation];
                        //NSLog(@"seeing face");
                    }
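
(For anyone reproducing this: a hypothetical minimal stand-in for the missing method could look like the following; the pupil landmark properties are public Vision API, everything else is guesswork.)

// Hypothetical stand-in for the missing _setEyePositionsForFace: (not the asker's code).
- (void)_setEyePositionsForFace:(VNFaceObservation *)face {
    VNFaceLandmarkRegion2D *leftPupil = face.landmarks.leftPupil;
    VNFaceLandmarkRegion2D *rightPupil = face.landmarks.rightPupil;
    if (leftPupil.pointCount > 0 && rightPupil.pointCount > 0) {
        // normalizedPoints are relative to the face bounding box.
        CGPoint left = leftPupil.normalizedPoints[0];
        CGPoint right = rightPupil.normalizedPoints[0];
        NSLog(@"left eye %@, right eye %@", NSStringFromPoint(left), NSStringFromPoint(right));
    }
}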

because you did not give its implementation. With that done, however, I can run the code without any problem. To test further I added logs like this:

        // The sequence request handler results in 10-20% CPU utilization
        NSLog(@"aaa");
        [_sequenceRequestHandler performRequests:requests onCVPixelBuffer:pixelBuffer orientation:exifOrientation error:&error];
        NSLog(@"bbb");

As far as I understand it, your problem is specifically with

[_sequenceRequestHandler performRequests:requests onCVPixelBuffer:pixelBuffer orientation:exifOrientation error:&error];

but I ran into no trouble there, and the log shows many repetitions of aaa followed by bbb. To test further I also added an ok log, like this:

            if (error != nil) {
                NSLog(@"Failed to perform FaceLandmarkRequest: %@", error);
            } else {
                NSLog(@"ok");
            }

and it happily prints together with the aaa and bbb.

I also hooked up a button, like this:

- (IBAction)buttonAction:(id)sender {
    NSLog( @"Button" );
    [self.v captureFrame];
}

where self.v is an instance of (my) LSPVision, and I can press the button as much as I like without any difficulty.

I think the problem is either somewhere else, perhaps even in the _setEyePositionsForFace I commented out, or maybe you could provide more code so that I can reproduce it over here?

FWIW here is a sample log:

2020-12-27 09:14:54.147536+0200 MetalCaptureTest[11392:317094] aaa
2020-12-27 09:14:54.184167+0200 MetalCaptureTest[11392:317094] bbb
2020-12-27 09:14:54.268926+0200 MetalCaptureTest[11392:317094] ok
2020-12-27 09:14:54.269374+0200 MetalCaptureTest[11392:317094] aaa
2020-12-27 09:14:54.314135+0200 MetalCaptureTest[11392:316676] Button
2020-12-27 09:14:54.316025+0200 MetalCaptureTest[11392:317094] bbb
2020-12-27 09:14:54.393732+0200 MetalCaptureTest[11392:317094] ok
2020-12-27 09:14:54.394171+0200 MetalCaptureTest[11392:317094] aaa
2020-12-27 09:14:54.432979+0200 MetalCaptureTest[11392:317094] bbb
2020-12-27 09:14:54.496887+0200 MetalCaptureTest[11392:317094] ok
2020-12-27 09:14:54.497389+0200 MetalCaptureTest[11392:317094] aaa
2020-12-27 09:14:54.533118+0200 MetalCaptureTest[11392:317094] bbb
2020-12-27 09:14:54.614813+0200 MetalCaptureTest[11392:317094] ok
2020-12-27 09:14:54.615394+0200 MetalCaptureTest[11392:317094] aaa
2020-12-27 09:14:54.663343+0200 MetalCaptureTest[11392:317094] bbb
2020-12-27 09:14:54.747860+0200 MetalCaptureTest[11392:317094] ok

EDIT

Thanks, I got the Dropbox project and it is working over here. No crash at all. Here is the log:

ErrorTest1(11743,0x10900ce00) malloc: nano zone abandoned due to inability to preallocate reserved vm space.
2020-12-27 10:55:10.445333+0200 ErrorTest1[11743:344803] Metal GPU Frame Capture Enabled
2020-12-27 10:55:10.471650+0200 ErrorTest1[11743:344803] [plugin] AddInstanceForFactory: No factory registered for id <CFUUID 0x6030000aabc0> F8BB1C28-BAE8-11D6-9C31-00039315CD46
2020-12-27 10:55:10.528628+0200 ErrorTest1[11743:344803] [plugin] AddInstanceForFactory: No factory registered for id <CFUUID 0x6030000ae130> 30010C1C-93BF-11D8-8B5B-000A95AF9C6A
2020-12-27 10:55:10.608753+0200 ErrorTest1[11743:344803] [] CMIOHardware.cpp:379:CMIOObjectGetPropertyData Error: 2003332927, failed
2020-12-27 10:55:11.408594+0200 ErrorTest1[11743:344873] [logging-persist] cannot open file at line 44580 of [02c344acea]
2020-12-27 10:55:11.408806+0200 ErrorTest1[11743:344873] [logging-persist] os_unix.c:44580: (0) open(/var/db/DetachedSignatures) - Undefined error: 0
2020-12-27 10:55:17.637382+0200 ErrorTest1[11743:344803] seeing face
2020-12-27 10:55:17.838354+0200 ErrorTest1[11743:344803] seeing face
2020-12-27 10:55:17.987583+0200 ErrorTest1[11743:344803] seeing face
2020-12-27 10:55:18.171168+0200 ErrorTest1[11743:344803] seeing face
2020-12-27 10:55:18.320957+0200 ErrorTest1[11743:344803] seeing face

FWIW I am on the latest OS, Big Sur 11.1, with the latest Xcode 12.3, and I run it on a 2017 MacBook Air. Based on your description I suspect multithreading could be an issue, but right now my focus is on reproducing it over here.


1 vote

I also ran into inexplicable crashes when using Vision in one app, but not in another one I created for testing. It turned out that the problem went away once I enabled "Metal API Validation". My app does have several custom Metal kernels, but I still do not understand what the underlying problem was.
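
For reference, the relevant switches live in the scheme editor; the exact locations below are from a current Xcode and are my assumption, not part of the original answer:

// Product > Scheme > Edit Scheme… > Run
//   Diagnostics tab: Metal API Validation  (the setting toggled here)
//   Options tab:     GPU Frame Capture     (cf. "Metal GPU Frame Capture Enabled"
//                                           in the console output above)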

