
I'm trying to use Apple's new face-detection capability, AVMetadataFaceObject, in an iOS 6 app. Basically, the way they want you to do it is to create an AVCaptureMetadataOutput object and add it as an output to an existing AVCaptureSession. So I grabbed Apple's SquareCam sample code from this link.

I tried creating the object like this:

    CaptureObject = [[AVCaptureMetadataOutput alloc] init];
    objectQueue = dispatch_queue_create("VideoDataOutputQueue", NULL);
    [CaptureObject setMetadataObjectsDelegate:self queue:objectQueue];

And here is where I add the inputs and outputs to the session:

    - (void)setupAVCapture
    {
        NSError *error = nil;

        AVCaptureSession *session = [AVCaptureSession new];
        if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone)
            [session setSessionPreset:AVCaptureSessionPreset640x480];
        else
            [session setSessionPreset:AVCaptureSessionPresetPhoto];

        // Select a video device, make an input
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
        require( error == nil, bail ); // 'require' comes from <AssertMacros.h>

        isUsingFrontFacingCamera = NO;
        if ( [session canAddInput:deviceInput] )
            [session addInput:deviceInput];

        // Make a still image output
        stillImageOutput = [AVCaptureStillImageOutput new];
        [stillImageOutput addObserver:self forKeyPath:@"capturingStillImage" options:NSKeyValueObservingOptionNew context:AVCaptureStillImageIsCapturingStillImageContext];
        if ( [session canAddOutput:stillImageOutput] )
            [session addOutput:stillImageOutput];
        [session addOutput:CaptureObject]; ////// HERE //////

        // Make a video data output
        videoDataOutput = [AVCaptureVideoDataOutput new];

        // we want BGRA, both CoreGraphics and OpenGL work well with 'BGRA'
        NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
                                           [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
        [videoDataOutput setVideoSettings:rgbOutputSettings];
        [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES]; // discard if the data output queue is blocked (as we process the still image)

        // create a serial dispatch queue used for the sample buffer delegate as well as when a still image is captured
        // a serial dispatch queue must be used to guarantee that video frames will be delivered in order
        // see the header doc for setSampleBufferDelegate:queue: for more information
        videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
        [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];

        if ( [session canAddOutput:videoDataOutput] )
            [session addOutput:videoDataOutput];
        [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:NO];

        effectiveScale = 1.0;
        previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
        [previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]];
        [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
        CALayer *rootLayer = [previewView layer];
        [rootLayer setMasksToBounds:YES];
        [previewLayer setFrame:[rootLayer bounds]];
        [rootLayer addSublayer:previewLayer];
        [session startRunning];

    bail:
        if ( error ) {
            // setup failed; the SquareCam sample shows an alert and tears down here
            NSLog(@"setupAVCapture failed: %@", error);
        }
    }

So the delegate should then be calling this method:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
    {

    }

But it never gets called.
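
For completeness, here is roughly what I plan to do inside that callback once it fires. This is just a sketch on my part, assuming AVMetadataObjectTypeFace objects are being delivered and using the previewLayer ivar from setupAVCapture to convert coordinates:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
    {
        for ( AVMetadataObject *object in metadataObjects ) {
            if ( [[object type] isEqual:AVMetadataObjectTypeFace] ) {
                AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
                // bounds come back in metadata space (0..1); convert them into
                // preview-layer coordinates before drawing anything on screen
                AVMetadataFaceObject *faceInLayer = (AVMetadataFaceObject *)
                    [previewLayer transformedMetadataObjectForMetadataObject:face];
                NSLog(@"face %ld at %@", (long)[face faceID], NSStringFromCGRect([faceInLayer bounds]));
            }
        }
    }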

Any ideas?


1 Answer


I'd suggest using the main queue for the metadata output. That's the only thing I can spot, and I could be wrong:

    AVCaptureMetadataOutput *metadataOutput = [AVCaptureMetadataOutput new];
    [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [session addOutput:metadataOutput];
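
Also, if I remember the AVFoundation docs correctly, the delegate won't fire until you opt into specific metadata types, and availableMetadataObjectTypes is only populated once the output has been added to the session. Something along these lines:

    // assumption: no metadata is delivered until metadataObjectTypes is set,
    // and it must be set after the output is attached to the session
    if ([[metadataOutput availableMetadataObjectTypes] containsObject:AVMetadataObjectTypeFace])
        [metadataOutput setMetadataObjectTypes:[NSArray arrayWithObject:AVMetadataObjectTypeFace]];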