Google has given another demonstration of its Glass project in use, telling developers it doesn't want the wearable technology to dominate people's lives.

Google Glass
Google's wearable technology experiment Glass won't dominate your life, giving you only the information you need. (Credit: Google)

Last month Mark Hurst roused some interest when he wrote a blog post entitled The Google Glass feature no one is talking about. In the piece Hurst makes the case that constantly having news, messages and updates feed into our eye line will signal the end of social interaction as we know it.

Hurst believes that the problem is not one for the user, but for everyone else: "The key experiential question of Google Glass isn't what it's like to wear them, it's what it's like to be around someone else who's wearing them."

And it may be that Google was paying attention to these concerns. During its latest demonstration of just how Glass will work, Google highlighted the fact that apps running on the platform will have to abide by strict rules so as not to show too much information or too many updates.

Glass is Google's wearable technology experiment which aims to marry the features of a smartphone with augmented reality, projecting text, images and data onto the world around you via a heads-up display.

Google's Timothy Jordan demonstrated the interface at the SXSW festival taking place in Austin, Texas this week. As well as showing off Google's own apps such as Gmail, Google+ and Google Now, he demonstrated some third-party apps from the likes of Evernote, The New York Times and Path.

Jordan also gave developers in attendance their first look at the Google Glass Mirror API, which will be the main interface between Google Glass, Google's own servers and the apps developers are writing for the platform.


Jordan told developers that because Glass is a unique platform, Google would be enforcing some rules for apps built for it. Essentially what Google doesn't want is to overload the user with information and updates.

For example, news apps should not present the entire article to users' eyes, instead just offering the headline. If a user wants to learn more, they can use the built-in voice control and text-to-speech functions to get Glass to read the article back to them.
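Based on Jordan's description, a headline-only news card might be built with the Mirror API's timeline-item fields, using the API's built-in READ_ALOUD menu action for text-to-speech. The sketch below is illustrative only: the headline and body text are invented placeholders, and the JSON is shown without the authenticated request that would actually deliver it to Glass.

```python
import json

# Sketch of a Mirror API timeline item for a news app: the wearer sees
# only the headline, and the built-in READ_ALOUD menu action speaks the
# longer body text on request. Headline and body are invented examples.
headline_card = {
    "text": "Glass demoed at SXSW",  # the headline shown on the card
    "speakableText": "Full article body goes here, read aloud only "
                     "when the wearer picks the read-aloud option.",
    "menuItems": [
        {"action": "READ_ALOUD"},  # built-in text-to-speech action
        {"action": "DELETE"},      # built-in dismiss action
    ],
}

print(json.dumps(headline_card, indent=2))
```

This keeps the card itself glanceable while leaving the full story one voice command away.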

The Mirror API includes a number of these capabilities built in. Along with voice recognition and text-to-speech, all apps will be able to access the camera and automatically share content with Google+. However, Jordan pointed out that developers can add other sharing options - a good thing considering the current dearth of Google+ users compared to the likes of Facebook and Twitter.

Developers will be able to create apps which present data on what Google calls "timeline cards", which can display text, images, video and rich HTML. Developers will also have the option of using what Google calls bundles, which are essentially sets of timeline cards.

Jordan showed off the Google Now app running on Glass which showed the weather in his location. An icon in the corner of the card indicated there was more information available and a quick swipe displayed the entire "bundle" showing the weather for the coming week.
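A bundle like the weather example can be sketched with the Mirror API's timeline-item fields: cards that share a `bundleId` are grouped, and the card flagged `isBundleCover` is shown first with the more-information icon. The identifier and forecast text below are invented for illustration.

```python
import json

# Hypothetical week-of-weather bundle: Glass groups timeline cards that
# share a bundleId, and displays the card marked isBundleCover on top
# with an icon indicating more cards sit behind it.
BUNDLE_ID = "weather-week-001"  # any app-chosen identifier

cards = []
for day in ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]:
    cards.append({
        "text": f"{day}: forecast placeholder",  # invented card body
        "bundleId": BUNDLE_ID,
        "isBundleCover": day == "Mon",  # first card acts as the cover
    })

print(json.dumps(cards, indent=2))
```

In practice each card would be sent to the Mirror API's timeline endpoint over an authenticated HTTP request; the JSON above only illustrates the card structure a weather app might generate.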

Google Glass is controlled in three ways. You can swipe the touchpad on the side of the frame to navigate back and forward through your timeline; you can use voice commands to take a picture, send a text, or look something up on the web; and you can use your eyes to control what happens on-screen.

Google has begun giving select journalists previews of Glass in recent weeks, and last month it announced that it was extending its pre-order program to let "creative individuals" get their hands on one of the most sought-after technologies around.

Google said it was "overwhelmed" with applications and has since closed the pre-order program. Those who are selected for the program, however, will still have to stump up $1,500 for the pleasure of being an early adopter.