AVFoundation: Implementing Barcode Scanning in iOS 8 With Swift

January 24, 2015
Shrikar Archak

<Image alt="Implementing Barcode Scanning in iOS8 With Swift" objectFit="contain" src="/static/images/barcode_inventory.png" height={350} width={1000} placeholder="blur" quality={100} />

In iOS 7, Apple introduced support for reading machine-readable codes (barcodes). As of today the framework supports the formats listed below, and Core Image also provides filters to generate some of these barcodes. In this post we will implement barcode scanning for iOS 8 in Swift.

  • UPCE
  • Code39
  • Code39Mod43
  • EAN13
  • EAN8
  • Code93
  • Code128
  • PDF417
  • QR
  • Aztec
  • Interleaved2of5
  • ITF14
  • DataMatrix
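
Each of these formats maps to an `AVMetadataObjectType...Code` string constant in AVFoundation. The code later in this post asks the output for everything it supports, but if you only care about a few formats you can restrict detection to them, which should also reduce the work done per frame. A minimal sketch, assuming the `AVCaptureMetadataOutput` named `output` that we set up below:

```swift
// Restrict detection to a subset of formats (sketch; `output` is the
// AVCaptureMetadataOutput that gets added to the session later in this post).
output.metadataObjectTypes = [AVMetadataObjectTypeQRCode,
                              AVMetadataObjectTypeEAN13Code,
                              AVMetadataObjectTypeCode128Code]
```

For generation, Core Image exposes filters such as `CIQRCodeGenerator`. A rough sketch (the message string is just an example, and the output image is tiny, so scale it up before display):

```swift
import CoreImage

// Sketch: generate a QR code image with the CIQRCodeGenerator filter.
let qrFilter = CIFilter(name: "CIQRCodeGenerator")
let message = ("https://shrikar.com" as NSString).dataUsingEncoding(NSISOLatin1StringEncoding)
qrFilter.setValue(message, forKey: "inputMessage")
qrFilter.setValue("M", forKey: "inputCorrectionLevel") // error correction: L, M, Q or H
let qrImage = qrFilter.outputImage
```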

This is how the final example will work.

<iframe src="//www.youtube.com/embed/I2V6NE3ojFw" width="420" height="315" frameborder="0" allowfullscreen="allowfullscreen" ></iframe>

To implement barcode scanning in our app, we first need some idea of how AVFoundation works.

AVCaptureSession

AVCaptureSession is the key object that manages the flow of data from capture inputs, such as the camera or microphone, to outputs such as a movie file. We can also set a session preset to control the quality/bitrate of the output.
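
Creating a session and choosing a preset takes only a couple of lines. A minimal sketch, assuming the high-quality preset fits our use case:

```swift
let session = AVCaptureSession()
// Pick the preset that matches the quality/bitrate you need.
session.sessionPreset = AVCaptureSessionPresetHigh
```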

AVCaptureDevice

An AVCaptureDevice object represents a physical capture device and the properties associated with that device. You use a capture device to configure the properties of the underlying hardware. A capture device also provides input data (such as audio or video) to an AVCaptureSession object. We also have the flexibility to set properties on the device (focus, exposure, etc.), but this should only be done while holding a lock on that particular device object via lockForConfiguration.
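
For example, switching the camera to continuous autofocus looks roughly like this. A sketch using the Swift 1.x error-pointer API; always check that the device supports a mode before setting it:

```swift
let device = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
var lockError: NSError?
if device.lockForConfiguration(&lockError) {
    // Only change the focus mode if the hardware supports it.
    if device.isFocusModeSupported(.ContinuousAutoFocus) {
        device.focusMode = .ContinuousAutoFocus
    }
    device.unlockForConfiguration()
} else {
    println(lockError)
}
```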

AVCaptureDeviceInput

AVCaptureDeviceInput wraps a capture device so that the data it captures can be added to a session as input.

AVCaptureVideoPreviewLayer

AVCaptureVideoPreviewLayer is a special CALayer that displays the video as it is captured from our input device.

Here is the flow for capturing input from the device.

- Get the session object: `let session = AVCaptureSession()`
- Add the input and the metadata output objects to the session:

```swift
let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
var error : NSError?
let inputDevice = AVCaptureDeviceInput(device: captureDevice, error: &error)

if let inp = inputDevice {
    session.addInput(inp)
} else {
    println(error)
}

let output = AVCaptureMetadataOutput()
session.addOutput(output)
output.metadataObjectTypes = output.availableMetadataObjectTypes
```
- Add the preview layer to display the captured data. We set the `videoGravity` to `AVLayerVideoGravityResizeAspectFill` so that it covers the full screen:

```swift
func addPreviewLayer() {
    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    previewLayer?.bounds = self.view.bounds
    previewLayer?.position = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds))
    self.view.layer.addSublayer(previewLayer)
}
```
- Implement the `AVCaptureMetadataOutputObjectsDelegate` protocol so that we are called back when a barcode is detected. For this to work we need to call `setMetadataObjectsDelegate` on the output that was added to the session, and this must be done before we call `startRunning` on the session:

```swift
output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
session.startRunning()
```

The queue is the dispatch queue on which the delegate's methods are executed. It must be a serial queue to ensure that metadata objects are delivered in the order in which they were received; the main queue qualifies.
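
If you would rather keep the delegate work off the main thread, a custom serial queue also works. A sketch (the queue label is just an illustrative name; any UI updates in the callback must then be dispatched back to the main queue):

```swift
// A nil attribute gives a serial queue; the label is an illustrative name.
let metadataQueue = dispatch_queue_create("com.example.barcode-metadata", nil)
output.setMetadataObjectsDelegate(self, queue: metadataQueue)
```

The code in this post updates the UI directly from the delegate callback, which is why it sticks with the main queue.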

- Transform the coordinates. We use the `convertPoint` function on `UIView` to translate the corner points detected by the framework into our view's coordinate space, so we can outline the identified barcode:

```swift
func translatePoints(points : [AnyObject], fromView : UIView, toView: UIView) -> [CGPoint] {
    var translatedPoints : [CGPoint] = []
    for point in points {
        // Each corner arrives as an NSDictionary with "X" and "Y" keys.
        let dict = point as NSDictionary
        let x = CGFloat((dict.objectForKey("X") as NSNumber).floatValue)
        let y = CGFloat((dict.objectForKey("Y") as NSNumber).floatValue)
        let curr = CGPointMake(x, y)
        let currFinal = fromView.convertPoint(curr, toView: toView)
        translatedPoints.append(currFinal)
    }
    return translatedPoints
}
```
- Draw the outline around the barcode. The translated points from the function above are used to build a `UIBezierPath`.
- Remove the outline once the barcode moves off screen. We restart a short timer on every detection; when detections stop, the timer fires and hides the outline. The `DiscoveredBarCodeView` below handles the drawing:
```swift
//
// DiscoveredBarCodeView.swift
// BarcodeInventory
//
// Created by Shrikar Archak on 1/22/15.
// Copyright (c) 2015 Shrikar Archak. All rights reserved.
//

import UIKit

class DiscoveredBarCodeView: UIView {

    var borderLayer : CAShapeLayer?
    var corners : [CGPoint]?

    override init(frame: CGRect) {
        super.init(frame: frame)
        self.setMyView()
    }

    required init(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.setMyView()
    }

    /* Draw a closed path through the detected corner points. */
    func drawBorder(points : [CGPoint]) {
        self.corners = points
        let path = UIBezierPath()

        path.moveToPoint(points.first!)
        for (var i = 1; i < points.count; i++) {
            path.addLineToPoint(points[i])
        }
        path.addLineToPoint(points.first!)
        borderLayer?.path = path.CGPath
    }

    /* Configure the shape layer that renders the red outline. */
    func setMyView() {
        borderLayer = CAShapeLayer()
        borderLayer?.strokeColor = UIColor.redColor().CGColor
        borderLayer?.lineWidth = 2.0
        borderLayer?.fillColor = UIColor.clearColor().CGColor
        self.layer.addSublayer(borderLayer)
    }
}
```

Here is the full code for the `ViewController`:

```swift
//
//  ViewController.swift
//  BarcodeInventory
//
//  Created by Shrikar Archak on 1/20/15.
//  Copyright (c) 2015 Shrikar Archak. All rights reserved.
//

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    let session = AVCaptureSession()
    var previewLayer : AVCaptureVideoPreviewLayer?
    var identifiedBorder : DiscoveredBarCodeView?
    var timer : NSTimer?

    /* Add the preview layer here */
    func addPreviewLayer() {
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer?.bounds = self.view.bounds
        previewLayer?.position = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds))
        self.view.layer.addSublayer(previewLayer)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
        var error : NSError?
        let inputDevice = AVCaptureDeviceInput(device: captureDevice, error: &error)

        if let inp = inputDevice {
            session.addInput(inp)
        } else {
            println(error)
        }
        addPreviewLayer()

        identifiedBorder = DiscoveredBarCodeView(frame: self.view.bounds)
        identifiedBorder?.backgroundColor = UIColor.clearColor()
        identifiedBorder?.hidden = true
        self.view.addSubview(identifiedBorder!)


        /* Check for metadata */
        let output = AVCaptureMetadataOutput()
        session.addOutput(output)
        output.metadataObjectTypes = output.availableMetadataObjectTypes
        println(output.availableMetadataObjectTypes)
        output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
        session.startRunning()
    }

    override func viewWillAppear(animated: Bool) {
        super.viewWillAppear(animated)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    override func viewWillDisappear(animated: Bool) {
        super.viewWillDisappear(animated)
        session.stopRunning()
    }

    func translatePoints(points : [AnyObject], fromView : UIView, toView: UIView) -> [CGPoint] {
        var translatedPoints : [CGPoint] = []
        for point in points {
            var dict = point as NSDictionary
            let x = CGFloat((dict.objectForKey("X") as NSNumber).floatValue)
            let y = CGFloat((dict.objectForKey("Y") as NSNumber).floatValue)
            let curr = CGPointMake(x, y)
            let currFinal = fromView.convertPoint(curr, toView: toView)
            translatedPoints.append(currFinal)
        }
        return translatedPoints
    }

    /* Keep the border visible while detections continue; once they stop, a pending timer fires removeBorder. */
    func startTimer() {
        if timer?.valid != true {
            timer = NSTimer.scheduledTimerWithTimeInterval(0.2, target: self, selector: "removeBorder", userInfo: nil, repeats: false)
        } else {
            timer?.invalidate()
        }
    }

    func removeBorder() {
        /* Remove the identified border */
        self.identifiedBorder?.hidden = true
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
        for data in metadataObjects {
            let metaData = data as AVMetadataObject
            /* Convert from capture-device coordinates to preview-layer coordinates. */
            let transformed = previewLayer?.transformedMetadataObjectForMetadataObject(metaData) as? AVMetadataMachineReadableCodeObject
            if let unwrapped = transformed {
                identifiedBorder?.frame = unwrapped.bounds
                identifiedBorder?.hidden = false
                let identifiedCorners = self.translatePoints(unwrapped.corners, fromView: self.view, toView: self.identifiedBorder!)
                identifiedBorder?.drawBorder(identifiedCorners)
                self.startTimer()
            }
        }
    }
}
```

Please let me know if you have any questions/comments.
