Applying a visual effect pixel by pixel in Swift

I have a university assignment to create a visual effect and apply it to video frames captured through the device camera. I can currently get the frames and display them, but I cannot change the pixel color values.

I convert the sample buffer into the imageRef variable, and if I turn that into a UIImage everything works fine.

But now I want to take imageRef and change its color values pixel by pixel, in this example inverting the colors (I have to do more complex things later, so I cannot use CIFilters). When I run it, it crashes with a bad access at the commented-out section.

    import UIKit
    import AVFoundation

    class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

        let captureSession = AVCaptureSession()
        var previewLayer : AVCaptureVideoPreviewLayer?
        var captureDevice : AVCaptureDevice?

        @IBOutlet weak var cameraView: UIImageView!

        override func viewDidLoad() {
            super.viewDidLoad()

            captureSession.sessionPreset = AVCaptureSessionPresetMedium
            let devices = AVCaptureDevice.devices()
            for device in devices {
                if device.hasMediaType(AVMediaTypeVideo) && device.position == AVCaptureDevicePosition.Back {
                    if let device = device as? AVCaptureDevice {
                        captureDevice = device
                        beginSession()
                        break
                    }
                }
            }
        }

        func focusTo(value : Float) {
            if let device = captureDevice {
                if(device.lockForConfiguration(nil)) {
                    device.setFocusModeLockedWithLensPosition(value) { (time) in }
                    device.unlockForConfiguration()
                }
            }
        }

        override func touchesBegan(touches: NSSet!, withEvent event: UIEvent!) {
            var touchPercent = Float(touches.anyObject().locationInView(view).x / 320)
            focusTo(touchPercent)
        }

        override func touchesMoved(touches: NSSet!, withEvent event: UIEvent!) {
            var touchPercent = Float(touches.anyObject().locationInView(view).x / 320)
            focusTo(touchPercent)
        }

        func beginSession() {
            configureDevice()

            var error : NSError?
            captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))
            if error != nil {
                println("error: \(error?.localizedDescription)")
            }

            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer?.frame = view.layer.frame
            //view.layer.addSublayer(previewLayer)

            let output = AVCaptureVideoDataOutput()
            let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
            output.setSampleBufferDelegate(self, queue: cameraQueue)
            output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
            captureSession.addOutput(output)

            captureSession.startRunning()
        }

        func configureDevice() {
            if let device = captureDevice {
                device.lockForConfiguration(nil)
                device.focusMode = .Locked
                device.unlockForConfiguration()
            }
        }

        // MARK: - AVCaptureVideoDataOutputSampleBufferDelegate

        func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
            let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
            CVPixelBufferLockBaseAddress(imageBuffer, 0)

            let baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
            let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
            let width = CVPixelBufferGetWidth(imageBuffer)
            let height = CVPixelBufferGetHeight(imageBuffer)

            let colorSpace = CGColorSpaceCreateDeviceRGB()
            var bitmapInfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedFirst.toRaw())! | CGBitmapInfo.ByteOrder32Little
            let context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo)
            let imageRef = CGBitmapContextCreateImage(context)

            CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

            let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData
            let pixels = data.bytes
            var newPixels = UnsafeMutablePointer<UInt8>()

            //for index in stride(from: 0, to: data.length, by: 4) {
                /*newPixels[index] = 255 - pixels[index]
                newPixels[index + 1] = 255 - pixels[index + 1]
                newPixels[index + 2] = 255 - pixels[index + 2]
                newPixels[index + 3] = 255 - pixels[index + 3]*/
            //}

            bitmapInfo = CGImageGetBitmapInfo(imageRef)
            let provider = CGDataProviderCreateWithData(nil, newPixels, UInt(data.length), nil)

            let newImageRef = CGImageCreate(width, height, CGImageGetBitsPerComponent(imageRef), CGImageGetBitsPerPixel(imageRef), bytesPerRow, colorSpace, bitmapInfo, provider, nil, false, kCGRenderingIntentDefault)
            let image = UIImage(CGImage: newImageRef, scale: 1, orientation: .Right)

            dispatch_async(dispatch_get_main_queue()) {
                self.cameraView.image = image
            }
        }
    }

You get the bad access in the pixel manipulation loop because the newPixels UnsafeMutablePointer is created with the default initializer, so it points to 0x0000 in memory. I believe that is unallocated memory that you have no right to write data into.
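In other words, that line gives you a null pointer, and any write through it crashes. A minimal sketch of the difference, in the same Swift 1.x era syntax as the question (the alloc/dealloc calls are only shown to illustrate that the memory must be owned; the actual fix below avoids the raw pointer entirely):

    // The original line produces a pointer to address 0x0, so any write through it crashes:
    var newPixels = UnsafeMutablePointer<UInt8>()
    // newPixels[0] = 0   // EXC_BAD_ACCESS

    // A raw pointer would have to own allocated memory instead:
    let allocatedPixels = UnsafeMutablePointer<UInt8>.alloc(data.length)
    // ... write to allocatedPixels[index] here ...
    allocatedPixels.dealloc(data.length)   // the caller must free it again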

For a longer explanation and a "solution", I made a few changes.

First, Swift has changed since the OP was posted, so this line had to be modified to use the rawValue initializer:

    //var bitmapInfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedFirst.toRaw())! | CGBitmapInfo.ByteOrder32Little
    var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue) | CGBitmapInfo.ByteOrder32Little

Some of the pointer handling also had to change, so here are all of the modifications (I left the original lines in place, marked as comments).

    let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData

    //let pixels = data.bytes
    let pixels = UnsafePointer<UInt8>(data.bytes)

    let imageSize : Int = Int(width) * Int(height) * 4

    //var newPixels = UnsafeMutablePointer<UInt8>()
    var newPixelArray = [UInt8](count: imageSize, repeatedValue: 0)

    for index in stride(from: 0, to: data.length, by: 4) {
        newPixelArray[index] = 255 - pixels[index]
        newPixelArray[index + 1] = 255 - pixels[index + 1]
        newPixelArray[index + 2] = 255 - pixels[index + 2]
        newPixelArray[index + 3] = pixels[index + 3]
    }

    bitmapInfo = CGImageGetBitmapInfo(imageRef)
    //let provider = CGDataProviderCreateWithData(nil, newPixels, UInt(data.length), nil)
    let provider = CGDataProviderCreateWithData(nil, &newPixelArray, UInt(data.length), nil)

Some explanations: all the old pixel bytes have to be read as UInt8, so instead of casting inside the loop I changed pixels to an UnsafePointer<UInt8>. Then I created an array for the new pixels, dropped the newPixels pointer, and worked with the array directly. Finally I passed a pointer to the new array to the data provider to create the image, and I removed the modification of the alpha bytes.
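As a side note, copying every frame out through CGDataProviderCopyData and rebuilding a CGImage is fairly expensive. An untested sketch of a more direct variant, in the same Swift 1.x era syntax, would be to invert the bytes in place while the pixel buffer is locked (this assumes the 32BGRA format requested in videoSettings):

    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)

    let bytesPerRow = Int(CVPixelBufferGetBytesPerRow(imageBuffer))
    let height = Int(CVPixelBufferGetHeight(imageBuffer))
    let byteCount = bytesPerRow * height   // bytesPerRow may include row padding; inverting it is harmless

    // Treat the locked buffer as raw BGRA bytes and invert B, G and R, leaving alpha untouched.
    let buffer = UnsafeMutablePointer<UInt8>(CVPixelBufferGetBaseAddress(imageBuffer))
    for index in stride(from: 0, to: byteCount, by: 4) {
        buffer[index]     = 255 - buffer[index]       // blue
        buffer[index + 1] = 255 - buffer[index + 1]   // green
        buffer[index + 2] = 255 - buffer[index + 2]   // red
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
    // The modified buffer can then be drawn into the CGBitmapContext exactly as before.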

With the array-based version I was able to get negative images into my view, but at very low performance, roughly one image every ten seconds (iPhone 5, running through Xcode). It also takes a long time before the first frame shows up in the image view.

Responsiveness improved somewhat when I added captureSession.stopRunning() at the beginning of the didOutputSampleBuffer function and called captureSession.startRunning() again once processing was done. With that I got close to 1 fps.
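The throttling described above could look roughly like this (a sketch rather than verbatim code; processFrame is a hypothetical helper standing in for the conversion code shown earlier):

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        // Stop delivering new frames while this one is processed, so the queue does not back up.
        captureSession.stopRunning()

        let image = processFrame(sampleBuffer)   // hypothetical helper wrapping the conversion code above

        dispatch_async(dispatch_get_main_queue()) {
            self.cameraView.image = image
            // Resume capture once the processed frame is on screen.
            self.captureSession.startRunning()
        }
    }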

Thanks for the interesting challenge!