
I have an iOS app that captures the device's camera input using AVCaptureSession, AVCaptureVideoPreviewLayer, CALayer, and UIImageView.

The problem is that I need to display one AVCapture session in two different views.

Right now the first AVCapture "view" works and the video displays fine, but the second one shows up for a few milliseconds and then freezes (a full frame is never rendered).

I'm not sure whether this can work at all, since only one AVCaptureSession can capture the device's camera input at a time (as far as I know; if that's not the cause, it must be a memory issue).

How can I display the same AVCapture session in two different views?

Here is the code I'm using:

//CameraControl.h

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreGraphics/CoreGraphics.h>
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>

@interface CameraControl : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
{
    AVCaptureSession *_captureSession;
    UIImageView *_imageView;
    CALayer *_customLayer;
    AVCaptureVideoPreviewLayer *_prevLayer;
}

// The capture session takes the input from the camera and captures it
@property (nonatomic, retain) AVCaptureSession *captureSession;

// The UIImageView we use to display the image generated from the imageBuffer
@property (nonatomic, retain) UIImageView *imageView;
// The CALayer we use to display the CGImageRef generated from the imageBuffer
@property (nonatomic, retain) CALayer *customLayer;
// The CALayer customized by Apple to display the video corresponding to a capture session
@property (nonatomic, retain) AVCaptureVideoPreviewLayer *prevLayer;

// This method initializes the capture session
- (void)initCapture;
@end

And here is the implementation:

//CameraControl.m

#import "CameraControl.h"
#import <MobileCoreServices/MobileCoreServices.h>

@interface CameraControl ()

@end

@implementation CameraControl
@synthesize captureSession = _captureSession;
@synthesize imageView = _imageView;
@synthesize customLayer = _customLayer;
@synthesize prevLayer = _prevLayer;

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
        return (interfaceOrientation == UIInterfaceOrientationPortrait);
    } else {
        return YES;
    }
}
- (id)init {
    self = [super init];
    if (self) {
        /*We initialize some variables (they might not be initialized, depending on what is commented out)*/
        self.imageView = nil;
        self.prevLayer = nil;
        self.customLayer = nil;
    }
    return self;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    /*We initialize the capture*/
    [self initCapture];
}

- (void)initCapture {
    /*We setup the input*/
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                    deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] 
                                      error:nil];
   /*We set up the output*/
   AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
   /*While a frame is being processed in the -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, no other frames are added to the queue. If you don't want this behaviour, set the property to NO */
   captureOutput.alwaysDiscardsLateVideoFrames = YES; 
   /*We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting in the queue, because that can cause memory issues). It is similar to the inverse of the maximum framerate. In this example we set a min frame duration of 1/10 second, so a maximum framerate of 10fps. We say that we are not able to process more than 10 frames per second.*/
   //captureOutput.minFrameDuration = CMTimeMake(1, 10);
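   /* Note: minFrameDuration on AVCaptureVideoDataOutput was deprecated in iOS 5.
      A sketch of the replacement, assuming iOS 5+ (it has to run after the output
      has been added to the session, so that the connection exists):
      AVCaptureConnection *conn = [captureOutput connectionWithMediaType:AVMediaTypeVideo];
      conn.videoMinFrameDuration = CMTimeMake(1, 10); // cap at ~10 fps
   */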

    /*We create a serial queue to handle the processing of our frames*/
    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    // Set the video output to store frame in BGRA (It is supposed to be faster)
    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
    [captureOutput setVideoSettings:videoSettings]; 
    /*And we create a capture session*/
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    self.captureSession = session;
    [session release]; // the retain property keeps it alive; avoids an alloc+retain leak
    /*We add input and output*/
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    [captureOutput release]; // the session retains the output
    /*We use medium quality; on the iPhone 4 this demo would lag too much, because the conversion to UIImage and CGImage demands too many resources at 720p resolution.*/
    [self.captureSession setSessionPreset:AVCaptureSessionPresetMedium];
    /*We add the Custom Layer (We need to change the orientation of the layer so that the video is displayed correctly)*/
    self.customLayer = [CALayer layer];
    self.customLayer.frame = self.view.bounds;
    //self.customLayer.transform = CATransform3DRotate(CATransform3DIdentity, M_PI/2.0f, 0, 0, 1);
    self.customLayer.contentsGravity = kCAGravityResizeAspectFill;
    [self.view.layer addSublayer:self.customLayer];
    /*We add the imageView (it has to be created first, otherwise we would be adding nil)*/
    self.imageView = [[[UIImageView alloc] initWithFrame:self.view.bounds] autorelease];
    [self.view addSubview:self.imageView];
    /*We add the preview layer (it has to be created from the session, otherwise prevLayer is still nil here)*/
    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.prevLayer.frame = self.view.bounds;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.prevLayer];
    /*We start the capture*/
    [self.captureSession startRunning];

}

#pragma mark -
#pragma mark AVCaptureSession delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 
{ 
    /*We create an autorelease pool because, as we are not on the main_queue, our code is
     not executed on the main thread. So we have to create an autorelease pool for the thread we are in.*/
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    /*Lock the image buffer*/
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 
    /*Get information about the image*/
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer);  

    /*Create a CGImageRef from the CVImageBufferRef*/
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext); 

    /*We release some components*/
    CGContextRelease(newContext); 
    CGColorSpaceRelease(colorSpace);

    /*We display the result on the custom layer. All the display stuff must be done on the main thread because
     UIKit is not thread safe, and as we are not on the main thread (remember we didn't use the main_queue)
     we use performSelectorOnMainThread to call our CALayer and tell it to display the CGImage.*/
    [self.customLayer performSelectorOnMainThread:@selector(setContents:) withObject:(id)newImage waitUntilDone:YES];

    /*We display the result on the image view (we need to change the orientation of the image so that the video is displayed correctly).
     Same thing as for the CALayer: we are not on the main thread, so ...*/
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];

    /*We release the CGImageRef*/
    CGImageRelease(newImage);

    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];

    /*We unlock the image buffer*/
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];
} 

#pragma mark -
#pragma mark Memory management

- (void)viewDidUnload {
    [super viewDidUnload];
    self.imageView = nil;
    self.customLayer = nil;
    self.prevLayer = nil;
}

- (void)dealloc {
    [_captureSession stopRunning];
    [_captureSession release];
    [super dealloc];
}


@end

2 Answers


Why do you need two sessions? If you say what you're trying to achieve, it's easier to give a recommendation.

You normally need only one session, because there is only one camera (you can't use the back and the front one simultaneously) and one microphone. Having two sessions would mean sending two image buffers from the camera, putting unnecessary stress on the device.

If you want to change parameters within the session, you can change them dynamically while it is running; wrapping the changes in beginConfiguration/commitConfiguration applies them as one atomic update:

- (IBAction)switchHD:(id)sender {
  [self.captureSession beginConfiguration];
  if (((UISegmentedControl *)sender).selectedSegmentIndex) {
    self.captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspect;
  } else {
    self.captureSession.sessionPreset = AVCaptureSessionPreset640x480;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
  }
  [self.captureSession commitConfiguration]; 
}
answered 2012-11-13 11:46

To do this, you want to attach multiple outputs to a single session rather than use multiple sessions. To quote the AV Foundation Programming Guide:

To get output from a capture session, you add one or more outputs.
...
You add outputs to a capture session using addOutput:.
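
For simply showing the same feed in two views, you don't even need a second data output: a single session can generally drive more than one AVCaptureVideoPreviewLayer at the same time. A minimal sketch, assuming firstView and secondView are two hypothetical views already in your hierarchy:

// One session, two preview layers: both layers render the same camera feed,
// so the device still captures only once.
AVCaptureVideoPreviewLayer *firstPreview =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
firstPreview.frame = firstView.bounds;
firstPreview.videoGravity = AVLayerVideoGravityResizeAspectFill;
[firstView.layer addSublayer:firstPreview];

AVCaptureVideoPreviewLayer *secondPreview =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
secondPreview.frame = secondView.bounds;
secondPreview.videoGravity = AVLayerVideoGravityResizeAspectFill;
[secondView.layer addSublayer:secondPreview];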

answered 2012-11-13 08:51