The Next Big Thing in Wearables: The Android Wear SDK
Based on our recent hackathons and blog traffic, we know our community is very interested in wearables. The announcements today from Google (hinted at last week at SXSW) are definitely interesting. This blog post has no information or hints about any future watches that AT&T may offer, but based on the demos I saw, I definitely want to get my hands on one of them.
What about those demos? I was impressed by how well the voice commands worked. It will be impressive if the watch can pick up commands from a couple of feet away on a crowded train. I liked the jellyfish alert and other use cases. It was also interesting how the user opened the garage door at the end, a hint at what Google might do with its recent Nest Labs acquisition.
Android Wear is a platform specifically aimed at smartwatches with a variety of form factors (including round displays) and is currently in developer preview, with a focus on app notifications in this first release. Developers can prototype apps now on an emulator, without a physical wearable. Preview apps are for development and testing purposes only and cannot be distributed (Google actually warns that preview apps will likely break when the production SDK is available later this year). Developers can sign up for the SDK now. We signed up and are waiting for access.
Details About Android Wear: How Developer Friendly Is It?
The following Android Wear details are based on the available documentation. Google designed this around speech and textual information with a focus on providing information the user needs most urgently while minimizing the need for user actions. The documentation seems very straightforward: Google has made this developer friendly.
The UI uses a card-based metaphor similar in principle to Glass. The design principles are intentionally contextual, glanceable, and low interaction. This is clearly meant to steer developers to differentiate the Android Wear experience from the smartphone experience, augmenting the rich feature set available on the phone.
Cards represent a Context Stream for general notifications, which can be used as a starting point for further actions on the notification. Cards can also take a Cue Card format for voice commands that are tied to applications. Speaking “OK Google” will start the interaction.
Notifications have a defined structure. A notification should avoid long sentences, but it may include images, an app icon, and additional contextual information accessible to the right of the notification. For example, up to three Actions may be added per notification. An Action can have its own icon and caption. Voice is also integrated into actions; for example, you can reply to a text message by voice (the reply needs to be short or it may be truncated).
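As a rough sketch of what an action-bearing notification could look like in code (the resource names, text, and PendingIntent here are placeholders, and the preview SDK may differ from the support-library style shown):

```java
import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class WearNotificationDemo {
    // Sketch: post a wearable-friendly notification with one Action.
    // R.drawable.* resources and mapIntent are hypothetical placeholders.
    public static void show(Context context, PendingIntent mapIntent) {
        NotificationCompat.Action mapAction =
                new NotificationCompat.Action.Builder(
                        R.drawable.ic_map, "Open map", mapIntent).build();

        Notification notification = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_event)
                .setContentTitle("Flight AA123 boarding")   // short, glanceable text
                .setContentText("Gate B7 in 20 minutes")
                .extend(new NotificationCompat.WearableExtender()
                        .addAction(mapAction))  // surfaced to the right on the watch
                .build();

        NotificationManagerCompat.from(context).notify(1, notification);
    }
}
```

The same builder call posts the notification on the phone; the wearable-specific action only shows up on the watch.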
Google seems to be very focused on the user experience. They were guiding developers to help make Android Wear the ideal personal assistant (e.g. encouraging devs to only require input when absolutely necessary).
Additional contextual information can also be provided through Pages further to the right. Pages provide more detail, text, or any additional information that may be relevant to the notification.
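A Page might be attached like the fragment below (a sketch, assuming the support library's `NotificationCompat.WearableExtender`; the titles and text are made up):

```java
// Sketch: attach a detail Page that appears to the right of the main card.
// A Page is itself built as a Notification.
Notification secondPage = new NotificationCompat.Builder(context)
        .setStyle(new NotificationCompat.BigTextStyle()
                .setBigContentTitle("Directions")
                .bigText("Take the second exit at the roundabout, then "
                        + "continue straight for half a mile."))
        .build();

Notification notification = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_event)
        .setContentTitle("Meeting at 3 PM")
        .extend(new NotificationCompat.WearableExtender()
                .addPage(secondPage))  // extra detail the user swipes to
        .build();
```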
When multiple related notifications appear from a single app, those notifications can be placed in Stacks. This prevents overwhelming the user's stream with events from a single source by compressing them. All related notifications remain accessible without interfering with other events in the stream.
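In code, stacking amounts to giving related notifications a shared group key, roughly like this (a sketch with placeholder content; the group-key mechanism assumes the support-library style):

```java
// Sketch: stack related notifications under one group key.
final String GROUP_KEY_EMAILS = "group_emails";

Notification first = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_mail)
        .setContentTitle("New mail from Alice")
        .setGroup(GROUP_KEY_EMAILS)   // same key = same stack
        .build();

Notification second = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_mail)
        .setContentTitle("New mail from Bob")
        .setGroup(GROUP_KEY_EMAILS)
        .build();

NotificationManagerCompat manager = NotificationManagerCompat.from(context);
manager.notify(1, first);
manager.notify(2, second);  // collapses into the stack on the wearable
```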
It is no surprise that voice plays a major role in the user experience. The big difference for devs is the concept of remote input with pre-defined text responses, which improves the experience.
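Remote input with canned replies might look something like the fragment below (a sketch: the key string, labels, choices, and `replyPendingIntent` are hypothetical, and the final API may differ):

```java
// Sketch: a voice-reply Action with pre-defined responses.
RemoteInput remoteInput = new RemoteInput.Builder("extra_voice_reply")
        .setLabel("Reply")
        // Canned choices the user can tap instead of speaking.
        .setChoices(new String[] {"Yes", "No", "Running late"})
        .build();

NotificationCompat.Action replyAction =
        new NotificationCompat.Action.Builder(
                R.drawable.ic_reply, "Reply", replyPendingIntent)
                .addRemoteInput(remoteInput)  // attaches voice/choice input
                .build();
```

The spoken or chosen text would arrive in the Intent delivered by `replyPendingIntent`, keyed by the string passed to the builder.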
From a developer standpoint, notifications in Android Wear are created using the same methodology as on the standard Android platform. This familiarity will lower the barrier to building Android Wear applications.
We encourage developers to give this a look—and we will post more if we learn more.
Photo via Google.