Google Reveals Incredible Mini Camera

When Google gets in on an industry, it is loath to do things halfway, with social media and Google Plus being the notable exception. The company has been pushing deeper into hardware with laptops and phones, and now it is branching out with a specialized camera. Google’s latest hardware event revealed Clips, developed by Juston Payne: a new line of small, stand-alone cameras driven by on-device artificial intelligence. The camera records up to three hours of video and still images, then automatically sifts through the footage and picks out your ‘best moments’. The concept alone is enough to raise an eyebrow, so a closer look was definitely warranted.
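To make that idea concrete, here is a minimal sketch, in Python, of what “record everything, then sift for the best moments” could look like. The names (Clip, interestingness, pick_best_moments) and the random scoring are placeholders invented for illustration, not anything Google has published.

```python
from dataclasses import dataclass
import random

@dataclass
class Clip:
    timestamp: float   # seconds since the recording session started
    frames: int        # number of frames captured in this short clip

def interestingness(clip: Clip) -> float:
    """Stand-in for the on-device score (faces, pets, motion, framing)."""
    return random.random()  # placeholder: a real scorer would inspect the frames

def pick_best_moments(session: list[Clip], keep: int = 5) -> list[Clip]:
    """Sift through everything recorded and keep only the highest-scoring clips."""
    return sorted(session, key=interestingness, reverse=True)[:keep]

# Simulate roughly three hours of short clips captured every 30 seconds.
session = [Clip(timestamp=float(t), frames=7 * 15) for t in range(0, 3 * 3600, 30)]
best = pick_best_moments(session)
print(f"kept {len(best)} of {len(session)} clips")
```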

When we spoke with Payne, the first thing he wanted to make clear was that Clips is not just a device created to accompany the new Google Pixel phone. Payne said, “It’s an accessory to anything, I’d say. It’s a stand-alone camera.” He went on to explain that Clips is trying to be much more than just another small, simple camera. The AI function, which selects your greatest moments, “is built into the device to decide when to take these shots, which is really important because it gives users total control over it.” Payne added that the Clips unit is self-contained and doesn’t require any other hardware to run.

The Clips concept relies on technology that has, until very recently, been too demanding to run on a mobile device. Payne explained that, only a few years ago, the semantic analysis that picks out your images and videos would have required a server farm and a desktop computer running it all. Advances in the underlying hardware made it possible for Google to condense the process into a tiny, hand-held camera. Most of the weight inside the camera is the battery; the rest belongs to the low-power VPU, or vision processing unit. Google worked with Intel’s Movidius to develop the VPU.
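As a rough illustration of why this now fits inside a camera, the sketch below scores a single frame with a deliberately tiny model, entirely on the device. The model, features, and weights are invented for the example; the point is only that the whole loop runs locally, with no server in sight.

```python
import math
import random

def tiny_model(features: list[float], weights: list[float]) -> float:
    """A deliberately small scorer: one dot product plus a logistic squash.
    A real VPU would run a compact neural network in the same spirit."""
    score = sum(f * w for f, w in zip(features, weights))
    return 1.0 / (1.0 + math.exp(-score))

def extract_features(frame: list[list[float]]) -> list[float]:
    """Crude per-frame features (mean brightness of a few regions) as placeholders."""
    def mean(region: list[list[float]]) -> float:
        total = sum(sum(row) for row in region)
        count = sum(len(row) for row in region)
        return total / count if count else 0.0
    half = len(frame) // 2
    return [mean(frame[:half]), mean(frame[half:]), mean(frame)]

# Everything below happens on the device itself: capture (simulated here),
# featurize, score, decide. No network call, no server farm.
weights = [0.8, -0.2, 0.5]
frame = [[random.random() for _ in range(160)] for _ in range(120)]  # fake 160x120 frame
score = tiny_model(extract_features(frame), weights)
print(f"keep this frame: {score > 0.6} (score={score:.2f})")
```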

Right now, Google is open about the fact that Clips works best with humans and animals. Quality testers helped train the model to recognize these common subjects and keep them in focus. However, the Clips smart camera isn’t yet ready to pick out more ambiguous imagery, such as a skyline or a view of the ocean.
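A hypothetical filter makes that bias easy to picture: keep a clip only when the detector is confident it saw one of the subjects the model was trained to favor. The labels and threshold below are illustrative assumptions, not Google’s actual classes.

```python
FAVORED_LABELS = {"person", "dog", "cat"}  # illustrative, not Google's actual classes

def should_keep(detections: dict[str, float], threshold: float = 0.7) -> bool:
    """detections maps a label to the detector's confidence for a given clip."""
    return any(detections.get(label, 0.0) >= threshold for label in FAVORED_LABELS)

print(should_keep({"person": 0.91, "ocean": 0.40}))  # True: a person was detected
print(should_keep({"skyline": 0.88}))                # False: only ambiguous scenery
```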