Imagine a world where taking pictures and posting them to your social media accounts requires nothing more than a thought - a world where the true power of your brain is set free, and you are no longer tied to your smartphone or tablet in order to communicate with the world.

Well, that world is here today.

This may read like the opening of a (really cheesy) science fiction novel, but on Wednesday I took a picture just by concentrating and posted it to Twitter without opening my mouth or pressing a button.

Granted, I was wearing two headsets and looked like a bit of a berk, but the potential unlocked by MindRDR (pronounced Mind Reader), a new app created by a company based in central London, is obvious.

This Place is a user-experience company that works across multiple platforms, from smartphones and tablets to wearables like Google Glass.

Glass Elbow

While looking at Glass as a new platform for clients, the company noticed that interacting with the headset required repeated tapping and swiping of the headset's touchpad, leading to a complaint it has dubbed 'Glass Elbow'.

Controlling Google Glass with your mind: you may look ridiculous, but the technology's potential is huge (IBTimes UK)

The company asked how it could move beyond voice and touch interfaces, and that is when it turned to devices like the NeuroSky Mindwave Mobile - an EEG headset that can read your brain's activity.

Reading brain activity is nothing new, and the NeuroSky device This Place is using is not especially advanced. It is four years old and costs less than £70 from Amazon.

It can read just two states - concentrated and relaxed - but that is enough to allow you to convert your thoughts into actions through Glass.

Mind control

By connecting the NeuroSky device to Google Glass over Bluetooth, the developers took those two outputs and assigned values to them in the MindRDR app.

Opening the app, you are presented with a typical camera interface, but with a horizontal line indicating whether or not you are concentrating.

To take a picture all you need to do is concentrate enough for the horizontal line to reach the top of the screen. Once the image is captured, the next screen asks if you want to keep the photo and post it on social media, or discard it.

Concentrating again will automatically post it online, while relaxing will delete the photo.
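
In essence, the control scheme is a simple two-state loop: crossing a concentration threshold takes the photo, and the same signal is then reused to confirm or discard it. The sketch below illustrates that flow in Python; the thresholds, function names and simulated read_attention() reading are illustrative assumptions rather than MindRDR's actual code, which This Place is releasing as open source.

```python
import random
import time

# Assumed thresholds on a 0-100 attention reading; the real MindRDR mapping of
# NeuroSky's concentrated/relaxed outputs may differ.
CONCENTRATION_THRESHOLD = 80
RELAXATION_THRESHOLD = 20


def read_attention():
    """Stand-in for a headset reading (0 = fully relaxed, 100 = fully focused)."""
    return random.randint(0, 100)


def take_picture():
    print("Picture captured")


def post_to_twitter():
    print("Photo posted to Twitter")


def discard_photo():
    print("Photo discarded")


def run_demo(readings=20):
    state = "PREVIEW"  # camera preview with the horizontal concentration line
    for _ in range(readings):
        level = read_attention()
        if state == "PREVIEW" and level >= CONCENTRATION_THRESHOLD:
            take_picture()         # the line has reached the top of the screen
            state = "CONFIRM"      # next screen: share or discard?
        elif state == "CONFIRM":
            if level >= CONCENTRATION_THRESHOLD:
                post_to_twitter()  # concentrating again shares the photo
                state = "PREVIEW"
            elif level <= RELAXATION_THRESHOLD:
                discard_photo()    # relaxing deletes it
                state = "PREVIEW"
        time.sleep(0.1)


if __name__ == "__main__":
    run_demo()
```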

The first time I tried the system it worked flawlessly, allowing me to post this image to Twitter within 30 seconds of putting the headsets on.

However, trying to repeat that success was difficult, highlighting the fact that this is far from an exact science in its current state.

Open source possibilities

But This Place knows this, and that is why it is making the code for the app available to all. It hopes that more developers will build on what it has created to improve the functionality and performance of the technology.

As well as improvements to the software and hardware, the system will also require users to train their minds to work in a specific way in order to make use of the technology.

MindRDR reads just two of the 18 signals we can currently map from the brain, indicating that there is huge potential here for a properly useful technology to be created.

Stephen Hawking interest

Looking at practical applications of the device, Chloe Kirton, creative director at This Place, says one of the world's most famous physicists is interested in seeing where this technology goes:

"While MindRDR's current capabilities are limited to taking and sharing an image, the possibilities of Google Glass telekinesis are vast. In the future, MindRDR could give those with conditions like locked-in syndrome, severe multiple sclerosis or quadriplegia the opportunity to interact with the wider world through wearable technology like Google Glass. This Place is already in conversations with Professor Stephen Hawking, amongst others, about the possibilities MindRDR could bring as the product evolves."

Many companies see voice as the natural successor to touch in how we will interact with technology in the coming years. The problem with that hypothesis is that people have so far shown an unwillingness to talk to their devices in public, with statements like "OK Google, what does this rash mean?" or "Siri, how do I get to the pub?" remaining unsaid.

Moving to a world beyond that, where we can simply think about something to make it happen, is a much more appealing option.