Elon Musk
Elon Musk is now actively pursuing research into brain-computer interfaces. But isn't he meant to be fearful of artificial intelligence? Reuters

It's difficult to put a finger on Elon Musk – one minute he's worried about the rise of artificial intelligence (AI) and robots taking over our lives, and the next he's starting a brand new company.

The technology entrepreneur, already well known for his work on Tesla and SpaceX, has announced a new business called Neuralink that aims to develop brain-computer interfaces enabling humans to control machines using their minds.

We'll still have to wait about a week for an illustrated explainer from the site Wait But Why to lay out the details, but we already know that Musk has a vested interest in the democratisation of AI and an experimental technology called neural lace.

But what's it all about? Here's a quick look at the issues that Musk is concerned about.

What is neural lace?

In 2015, scientists from Harvard University in the US and the National Center for Nanoscience and Technology in Beijing, China pioneered a way to inject a tiny electronic mesh sensor into the brain, where it fully integrated with the cerebral matter of mice.

The mesh, known as neural lace, did not cause the mice any harm. Once injected into the brain of a mouse, the mesh unfurled to 30 times its original size and the mouse's brain cells grew around the mesh, forming connections with the wires in the flexible mesh circuit.

At the moment, the mice still have to be connected to a computer by a physical wire, but the idea is eventually to integrate an electronic mesh with a human brain that can be monitored wirelessly by a computer. The end game would be to enable humans to send information to computers with their minds, and perhaps even control machines telepathically.

An electronic mesh that integrates with brains
This tiny electronic mesh sensor is thin and flexible enough to be injected into the brain and gentle enough to integrate fully with brain cells, making human cyborgs a possibility Lieber Research Group, Harvard University

Cool, but I thought Elon Musk was afraid of AI?

True, that's what he's been saying since 2014, when he spoke at MIT and called AI humanity's "biggest existential threat". He's also expressed the fear that scientists are so focused on innovating that they may inadvertently create machines that could rise up and take over the world.

Musk has even found a way to tie cybersecurity to AI, warning that AI could one day be used by cybercriminals to bring down the internet if they found a way to automate hacking techniques – remember the Mirai botnet of hacked IoT cameras and routers?

Musk is, however, very interested in neural lace. IBTimes UK covered the research when it was first published, but the topic only went viral a year later, when Musk began talking about it on social media.

In February 2017, he told Vanity Fair that the only way to stop humans from becoming obsolete was to have "some sort of merger of biological intelligence and machine intelligence", and he felt a viable brain-computer interface could be achievable within the next five years.

So what is OpenAI and how does it fit in with this?

Artificial intelligence
Will artificial intelligence one day overwhelm humans? iStock

Make no mistake, Musk is definitely interested in AI, to the extent that he was one of London-based machine learning startup DeepMind's first investors before it was acquired by Google in 2014.

So in December 2015, he co-founded OpenAI, a $1bn (£800m) non-profit dedicated to advancing AI in ways that benefit humanity as a whole, unconstrained by the need to deliver financial returns.

Progress at OpenAI has been slow, but in November 2016 it formed a useful partnership with Microsoft to use the Azure platform for large-scale deep learning experiments tackling some of the field's most challenging issues, such as ensuring that machine learning systems are safe and operate as intended.

The startup is also looking at how to prevent attackers from hijacking machine learning systems to alter the results they produce.

It's not clear how OpenAI will be involved in the development of neural lace, but we assume it will be along the lines of ensuring that the technology is built and safeguarded in an ethical way.