Passing video frames to Core Image on OS X

Hello all you awesome programmers! Over the past few weeks I've been piecing this together from various helpful sources (including plenty of posts on Whosebug), trying to build something that takes the webcam feed and detects smiles as they appear (might as well draw boxes around the faces and smiles too; that doesn't look too hard once they've been detected). Please cut me some slack if the code is messy, I'm still learning. Right now I'm stuck on passing the image into a CIImage so it can be analysed for faces (I plan to deal with smiles once I've cleared the face hurdle). If I comment out the block after (5), it compiles fine and brings up a simple AVCaptureVideoPreviewLayer in a window. I think that's what I've called the "rootLayer", so it's effectively the first layer of the displayed output, and once I've detected a face in a video frame I'll show a rectangle following the "bounds" of any detected face in a new layer overlaid on top of that one, which I've called "previewLayer"... is that right?
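
(To make that layer question a bit more concrete, this is roughly the arrangement I'm picturing. It's only a sketch, and faceRectLayer and faceBoundsInLayerCoords are names I've invented for the example; they don't exist in my code below.)

// Sketch only: previewView.layer is the "rootLayer", the AVCaptureVideoPreviewLayer sits on it,
// and a separate shape layer on top would hold the face/smile rectangles.
CALayer *rootLayer = self.previewView.layer;

CAShapeLayer *faceRectLayer = [CAShapeLayer layer];        // hypothetical overlay layer
faceRectLayer.frame = rootLayer.bounds;
faceRectLayer.strokeColor = [NSColor redColor].CGColor;
faceRectLayer.fillColor = [NSColor clearColor].CGColor;
faceRectLayer.lineWidth = 2.0;
[rootLayer addSublayer:faceRectLayer];                     // added after previewLayer so it draws on top

// When a face is found, its bounds (converted into this layer's coordinates) would become the path:
CGPathRef facePath = CGPathCreateWithRect(faceBoundsInLayerCoords, NULL);   // placeholder CGRect
faceRectLayer.path = facePath;
CGPathRelease(facePath);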

But with the block after (5) included, the build fails with these errors -

    Undefined symbols for architecture x86_64:
      "_CMCopyDictionaryOfAttachments", referenced from:
          -[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
      "_CMSampleBufferGetImageBuffer", referenced from:
          -[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

Can anyone tell me where I'm going wrong and what my next step should be?

Any help is much appreciated; I've been stuck at this point for a few days and can't figure it out, and all the examples I can find are for iOS and don't work on OS X.

- (id)init
{
    self = [super init];
    if (self) {

        // Create a capture session
        session = [[AVCaptureSession alloc] init];

        // Set a session preset (resolution)
        self.session.sessionPreset = AVCaptureSessionPreset640x480;

        // Output setup moved to another function (the session has to exist before this is called)
        [self addVideoDataOutput];

        // Select devices if any exist
        AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (videoDevice) {
            [self setSelectedVideoDevice:videoDevice];
        } else {
            [self setSelectedVideoDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeMuxed]];
        }
        NSError *error = nil;
        //  Add an input
        videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        [self.session addInput:self.videoDeviceInput];

        // Start the session (app opens slower if it is here but I think it is needed in order to send the frames for processing)
        [[self session] startRunning];


          // Initial refresh of device list
         [self refreshDevices];

    }
    return self;
}

-(void) addVideoDataOutput {
    // (1) Instantiate a new video data output object
    AVCaptureVideoDataOutput * captureOutput = [[AVCaptureVideoDataOutput alloc] init];

    // discard if the data output queue is blocked (while CI processes the still image)
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // (2) The sample buffer delegate requires a serial dispatch queue
    dispatch_queue_t captureOutputQueue;
    captureOutputQueue = dispatch_queue_create("CaptureOutputQueue", DISPATCH_QUEUE_SERIAL);
    [captureOutput setSampleBufferDelegate:self queue:captureOutputQueue];
    dispatch_release(captureOutputQueue);  //what does this do and should it be here or after we receive the processed image back?

    // (3) Define the pixel format for the video data output 
    NSString * key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
    NSNumber * value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary * settings = @{key:value};
    [captureOutput setVideoSettings:settings];

    // (4) Configure the output port on the captureSession property
    if ([self.session canAddOutput:captureOutput]) {
        [self.session addOutput:captureOutput];
    }

}
// Implement the Sample Buffer Delegate Method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

// I *think* I have a video frame now in some sort of image format... so have to convert it into a CIImage before I can process it:

    // (5) Convert CMSampleBufferRef to CVImageBufferRef, then to a CI Image (per weichsel's answer in July '13)
    CVImageBufferRef cvFrameImage = CMSampleBufferGetImageBuffer(sampleBuffer);  // Having trouble here, prog. stops and won't recognise CMSampleBufferGetImageBuffer.
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage options:(__bridge NSDictionary *)attachments];
    if (attachments) {
        CFRelease(attachments); // CMCopyDictionaryOfAttachments returns a +1 reference, so release it here
    }
    //self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage];

    //OK so it is a CIImage. Find some way to send it to a separate CIImage function to find the faces, then smiles.  Then send it somewhere else to be displayed on top of AVCaptureVideoPreviewLayer
    //TBW

}


- (NSString *)windowNibName
{
    return @"AVRecorderDocument";
}


- (void)windowControllerDidLoadNib:(NSWindowController *) aController
{
    [super windowControllerDidLoadNib:aController];

    // Attach preview to session
    CALayer *rootLayer = self.previewView.layer;
    [rootLayer setMasksToBounds:YES]; //aaron added
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    [self.previewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    [self.previewLayer setFrame:[rootLayer bounds]];
    //[newPreviewLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];  //don't think I need this for OSX?
    [rootLayer addSublayer:previewLayer];
//  [newPreviewLayer release];  //what's this for?


}
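
In case it helps show where I'm trying to go after step (5), this is the rough, untested kind of thing I was planning for the face (and later smile) analysis, based on the CIDetector documentation. faceDetector and findFacesInImage: are just names I've invented here, not methods or properties that exist in the code above:

// Untested sketch: run Core Image face detection over the CIImage built in (5).
// Needs QuartzCore (Core Image); the detector is expensive, so create it once and reuse it.
- (void)findFacesInImage:(CIImage *)image
{
    if (!self.faceDetector) {   // hypothetical property that would hold a reusable CIDetector *
        self.faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow }];
    }

    // CIDetectorSmile (OS X 10.9+) asks the detector to report whether each face is smiling
    NSArray *features = [self.faceDetector featuresInImage:image
                                                   options:@{ CIDetectorSmile : @YES }];

    // The sample buffer delegate runs on the capture queue, so hop onto the main queue before touching any layers
    dispatch_async(dispatch_get_main_queue(), ^{
        for (CIFaceFeature *face in features) {
            CGRect faceBounds = face.bounds;   // image coordinates (origin bottom-left); still needs
                                               // converting into previewLayer coordinates before drawing
            NSLog(@"face at %@, smiling: %d", NSStringFromRect(NSRectFromCGRect(faceBounds)), face.hasSmile);
        }
    });
}

The idea would be to call [self findFacesInImage:self.ciFrameImage]; at the end of captureOutput:didOutputSampleBuffer:fromConnection:, and eventually feed the converted bounds into an overlay layer (like the faceRectLayer sketch near the top) instead of just logging them.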

(Moved over from the comments)

Wow. I guess all it took was two days and a Whosebug post to figure out that I hadn't added CoreMedia.framework to my project.
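
In case anyone else hits this: both undefined symbols in the error (_CMSampleBufferGetImageBuffer and _CMCopyDictionaryOfAttachments) live in CoreMedia, so the fix was simply linking CoreMedia.framework (target > Build Phases > Link Binary With Libraries > +) and keeping the matching import at the top of the class:

#import <CoreMedia/CoreMedia.h>   // declares CMSampleBufferGetImageBuffer and CMCopyDictionaryOfAttachments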