I want to show the built-in camera in a dual (split-screen) view on iOS.
I tried the code below, but it only shows a single view.
I understand that this is the expected result.
Here is the code I am using:
- (void)prepareCameraView:(UIView *)window
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    CALayer *viewLayer = window.layer;
    NSLog(@"viewLayer = %@", viewLayer);

    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    captureVideoPreviewLayer.frame = window.bounds;
    [window.layer addSublayer:captureVideoPreviewLayer];

    AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (!input)
    {
        NSLog(@"ERROR : trying to open camera : %@", error);
    }
    [session addInput:input];

    [session startRunning];
}
How can I get a dual (split-screen) camera preview on iOS?
Use this code:

AVCaptureSession *session = [AVCaptureSession new];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput])
{
    [session addInput:deviceInput];
}

// One session and one preview layer are enough; CAReplicatorLayer draws the extra copies.
NSUInteger replicatorInstances = 2;
CGFloat replicatorViewHeight = (self.view.bounds.size.height - 64) / replicatorInstances;

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
// Size the preview to one replicated slot so the stacked copies do not overlap.
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, replicatorViewHeight)];

CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0.0, 0.0, self.view.bounds.size.width, replicatorViewHeight);
replicatorLayer.instanceCount = replicatorInstances;
// Each copy is shifted down by one preview height, stacking the previews vertically.
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, replicatorViewHeight, 0.0);

[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];

[session startRunning];
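The nice thing about this approach is that only one capture session and one camera are involved: CAReplicatorLayer simply draws instanceCount copies of its sublayer, applying instanceTransform between them. You will probably also want a camera-permission check before starting the session. Below is a minimal sketch, assuming the code above is wrapped in a hypothetical -setUpReplicatedPreview method on a plain UIViewController (that method name is mine, not part of the answer):

#import <AVFoundation/AVFoundation.h>

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Ask for camera access first; starting a session without it just shows a black preview.
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        dispatch_async(dispatch_get_main_queue(), ^{
            if (granted)
            {
                [self setUpReplicatedPreview]; // hypothetical wrapper around the code above
            }
            else
            {
                NSLog(@"Camera access was denied");
            }
        });
    }];
}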
Try this:
- (void)prepareCameraView:(UIView *)window
{
    NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    // First camera in the left half of the window
    {
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        session.sessionPreset = AVCaptureSessionPresetMedium;

        CALayer *viewLayer = window.layer;
        NSLog(@"viewLayer = %@", viewLayer);

        AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
        captureVideoPreviewLayer.frame = CGRectMake(0.0f, 0.0f, window.bounds.size.width / 2.0f, window.bounds.size.height);
        [window.layer addSublayer:captureVideoPreviewLayer];

        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:[captureDevices objectAtIndex:0] error:&error];
        if (!input)
        {
            NSLog(@"ERROR : trying to open camera : %@", error);
        }
        [session addInput:input];

        [session startRunning];
    }

    // Second camera in the right half of the window
    {
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        session.sessionPreset = AVCaptureSessionPresetMedium;

        CALayer *viewLayer = window.layer;
        NSLog(@"viewLayer = %@", viewLayer);

        AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
        captureVideoPreviewLayer.frame = CGRectMake(window.bounds.size.width / 2.0f, 0.0f, window.bounds.size.width / 2.0f, window.bounds.size.height);
        [window.layer addSublayer:captureVideoPreviewLayer];

        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:[captureDevices objectAtIndex:1] error:&error];
        if (!input)
        {
            NSLog(@"ERROR : trying to open camera : %@", error);
        }
        [session addInput:input];

        [session startRunning];
    }
}
Note that this makes absolutely no check that there actually are two cameras, and it splits the view down the middle, so it is probably best viewed in landscape. You will want to add some checks to this code, and work out exactly how you want each camera's layer laid out, before using it.
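For the missing check mentioned above, a minimal sketch (my addition, not part of the answer) could bail out before touching a second camera:

NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
if ([captureDevices count] < 2)
{
    NSLog(@"Only %lu camera(s) available, falling back to a single full-width preview",
          (unsigned long)[captureDevices count]);
    // Set up just one preview layer covering the whole window here instead.
    return;
}
// Otherwise continue with the two-session code above.

Also be aware that, as far as I know, most devices will not run the front and back cameras at the same time from two separate AVCaptureSessions (starting the second session tends to interrupt the first); true simultaneous capture needs AVCaptureMultiCamSession on supported hardware, so test this on a real device before relying on it.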