The what, why and how of Google Glass
2013 will be remembered for the NSA PRISM program, now synonymous with its revealer Edward Snowden, the rise of Bitcoin and the death of Nelson Mandela. On the technological front it will also be recalled as the year that Google Glass, the first optically driven wearable computer aimed at general consumers, saw the limelight.
2014 might be remembered as the year that Google Glass, or ‘Glass’, hits the consumer market. BI Intelligence, the research branch of Business Insider, believes that Glass will indeed hit the market in 2014 but take a few years to become mainstream, selling in the millions by 2018.
Though Glass is state-of-the-art technology, its computational capabilities are not as advanced as those of today’s smartphones. Last year a ‘rudimentary’ beta version was released to gather feedback from developers on hardware and software improvements and, most importantly, on use cases. This feedback will be used to improve or even transform Glass into something more applicable and valuable in everyday life than its current form.
At Node1 we are excited about Glass because it opens up a new UX frontier. The key cutting-edge feature of Glass is that, as a wearable computer, it provides users advanced computational capabilities without hindering their physical abilities. Whether Glass will keep the form factor of glasses or transform into something else remains to be seen. It is, however, already a hands-free wearable computer and hopefully will become even more so. This provides tremendous opportunity and a new way of thinking about mobile computing.
As mentioned before, Glass launched in beta and is still very much a work in progress. For developers and innovative companies this provides an opportunity to take the lead in developing services for the next evolutionary step in (mobile) computing.
In this blog post we have therefore outlined what you should think about when developing for Glass, or when your company – large or small – is thinking about innovating with Glass. Here are the answers to the what, why and how of Glass.
Google Glass components
Google Glass is a wearable computer that provides a different interaction interface than a touch-screen device. In 2013 it was launched in beta and made available to a ‘handful’ of early adopters and developers, the so-called ‘explorers.’ If you decide to develop for Glass, be sure to take into account upcoming hardware and software updates.
Glass is not a mobile phone. It does not have built-in GPS or SMS messaging. Connection to the Internet and GPS is provided by Bluetooth tethering to your mobile phone, or via Wi-Fi.
- Bone conduction speaker
- No GPS
- No Radio
Obviously this is the most important question that needs to be answered. If smartphones are about mobile computing, then one could argue that Glass is about contextual computing.
It’s about computing at a specific moment in time and place, and having the ability to do physically what you want with the advanced computational power to support it.
For instance having the ability to find and share your favorite recipes — even when your hands are covered in marinade.
Other interesting use cases popping up right now are:
Announced just a few days ago, Hyundai is developing a Google Glass app to control your car’s features. Owners of both Glass and a 2015 Genesis will be able to use the headset to find their vehicle, automatically start it, send addresses to its navigation system, and lock and unlock its doors.
Healthcare professionals such as surgeons are using and researching the application of Glass in the operating room. In August 2013, surgeon Dr. Christopher Kaeding used Glass to consult with a colleague in a distant part of Columbus, Ohio, while performing surgery at the Wexner Medical Center at Ohio State University. A group of students at The Ohio State University College of Medicine also observed the operation on their laptop computers.
Sports apps like the Strava Cycling app and the translation app Word Lens for Glass.
Other interesting use cases one could think of, as mentioned by our friends at Apigee in this great webcast, are (see also below):
- Get data for repairs as overlay blueprints
- Checking colors and materials just by looking at them
- Retail (being able to track where objects are)
The key and common denominator in all these examples is, again and again, the ability of Glass to deliver very context-specific, value-added information or computational power at moments when the user can’t use or hold a device. In our opinion, any Google Glass powered service or app should start from this notion.
In our opinion the most important thing to consider when designing and developing for Glass is this: it’s not a mobile phone.
The UX is fundamentally different and therefore calls for a different view on what kinds of apps could be useful and add value. From the coding perspective, on the other hand, it’s not that different from developing apps for Android.
Key considerations you should take into account when developing for Glass are:
- The calls you send and receive go over Bluetooth, tethered to your cell phone.
- Battery life is short.
- Eye fatigue can set in, so interactions should be short.
- If your app requires a lot of processing power, the battery might get hot. It’s important to take this into consideration because the battery sits near the user’s ear.
- There is no store or formal distribution for apps yet.
You can use the web-based RESTful Mirror API if you need:
- Platform independence
- Common infrastructure
- Built-in functionality
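To make the Mirror API side concrete, here is a minimal sketch of what a call looks like: inserting a plain-text timeline card is a single authenticated POST of a JSON body to the timeline endpoint. The access token and card text below are placeholders; a real Glassware obtains the token via OAuth 2.0 with the Glass timeline scope.

```python
import json
import urllib.request

MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

# Placeholder token -- in a real app this comes from an OAuth 2.0 flow.
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"


def build_timeline_card_request(text, access_token=ACCESS_TOKEN):
    """Build (but do not send) a Mirror API request that pushes a
    simple text card onto the wearer's Glass timeline."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        MIRROR_TIMELINE_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + access_token,
        },
        method="POST",
    )


req = build_timeline_card_request("Hello from our Glassware!")
print(req.get_method())   # POST
print(req.data.decode())  # {"text": "Hello from our Glassware!"}
# urllib.request.urlopen(req) would actually send the card.
```

Because the Mirror API is plain HTTPS and JSON, your server-side stack can be anything; Glass itself never talks to your servers directly, Google’s cloud does.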
You can use the GDK if you need:
- Real-time user interaction
- Offline functionality
- Access to hardware
Or you can use them both.
All network calls go through the phone, so latency, performance and API behavior are mostly the same either way.