A SUCCESSFUL DEBUT AND A LOOK AT THE NEXT STEPS

We were thrilled to recently demonstrate our technology platform at JDA Focus 2017. Over 2.5 days, approximately 150 conference attendees joined our "temporary workforce" and experienced Augmented Reality (AR) - many for the first time. Using a workflow configuration named "Pick-to-Sight," our hands-on workers filled orders while our wearable technology visually guided them through the process.

To get a glimpse of what they were seeing, check out this video:

With attendee feedback still fresh in our minds, it's a good time to reflect on where we stand in today's technology landscape, and where we see this road going.
State of the Art

For starters, we were extremely pleased with how quickly the "temps" mastered the equipment. Even those who remarked that it "took a little getting used to" had to concede that they were fully operational within a minute or two. Compare that to some other workplace technologies and we think we're on the right track.

Secondly, we reaffirmed our position that head-worn AR devices, or VIEW devices, offer the best human experience available to the hands-on workforce today, at a price per unit comparable to the alternatives.

Compared to an RF scanner:
  • VIEW devices require no holstering, aiming, screen positioning, or typing on tiny keyboards. All of the labor required to facilitate data entry goes away; the operator keeps both hands productive at all times.

  • VIEW devices capture more visual data than a traditional barcode scanner. To capture the same data as a VIEW device, the RF device must embed a camera rather than a standard scanner, and even then, the camera must be aimed (while a VIEW device sees whatever the human is seeing).

  • VIEW devices can capture meaningful data even when not directly facilitating the human's task execution. The camera can be activated based on any number of events, and will always capture something within the line of sight of the human operator.

Compared to Voice recognition:
  • VIEW devices present instructional information visually as long as required by the human operator. A voice instruction is spoken once, and the intended recipient must be both listening and ready to retain the information (or ask for a repeat). Instructions must therefore be broken down into relatively small chunks.

  • VIEW devices use AR tags, such as the icons in the video above, to communicate information. Icons can transcend language, dialect and accent. Across the dozen countries represented by our ad hoc workforce at the show, the red "X" icon was immediately and universally recognized as a negative marker.

  • VIEW devices capture data at the speed of light. Complex data entry and simultaneous field capture are both possible with our image decoding technology (in the video above, this technology is on display as we "find" all the barcodes in the frame). By comparison, voice data entry is spoken and linear, and must be limited in complexity to minimize human error. While we do use some voice confirmations in our workflows, we typically design them as simple boolean (Yes / No) or single-digit numeric entries. To achieve complex data entry with voice, paired scanners are typically added to the solution (adding to both the integration complexity and the price tag).

VIEW devices flip some traditional paradigms around. Other devices instruct the user to "go find something" (a bin label, a UPC code) and to tell the device when it's found. VIEW devices actually help guide the human in, often spotting the target before the human does. What is your scanner doing to help your productivity once that instruction is given? Nothing. A VIEW device is working to spot the objective and help, or augment, its human operator's abilities.
 
Looking Ahead

Our final observation from the show was how many attendees are ready to push this technology even further. While we have the sci-fi industry to thank for some of this, we do agree that AR technology is the ideal platform for future human experience innovations.

Today, we are most confident in our ability to detect 1-D and 2-D barcodes in the field of view.
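Our image decoding technology itself is proprietary, but one small, standardized piece of working with a decoded 1-D barcode can be sketched. Below is a minimal Python illustration (not our production code) of validating the check digit of an EAN-13 symbol, one of the common 1-D formats such a device reads, following the GS1 check-digit rule:

```python
def ean13_check_digit(digits: str) -> int:
    """Compute the EAN-13 check digit from the first 12 digits.

    Per the GS1 specification, digits in odd positions (1st, 3rd, ...)
    are weighted 1 and digits in even positions are weighted 3; the
    check digit brings the weighted sum up to a multiple of 10.
    """
    if len(digits) != 12 or not digits.isdigit():
        raise ValueError("expected exactly 12 digits")
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return (10 - total % 10) % 10


def is_valid_ean13(code: str) -> bool:
    """Return True if a 13-digit code carries a correct check digit."""
    return (
        len(code) == 13
        and code.isdigit()
        and ean13_check_digit(code[:12]) == int(code[12])
    )


if __name__ == "__main__":
    # 4006381333931 is a commonly cited valid EAN-13 example.
    print(is_valid_ean13("4006381333931"))  # True
    print(is_valid_ean13("4006381333932"))  # False
```

In a real pipeline, a check like this would run after the decoder reports a symbol, rejecting misreads before they ever reach the warehouse management system.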

Looking forward, and in close partnership with Vuzix, we will push for workday reliability in the areas of Optical Character Recognition (OCR), object recognition, gesture recognition and holography. We continue to evaluate and build in some of these areas, while binocular devices will be the catalyst for others.

Until then, we remain confident that the M300, along with our commitments to the Effortless Human Interface and the effective hands-on workforce, delivers significant ROI to AR adopters today.