Forget about worrying that robots will one day take over our jobs – how about if robots were to take over the world? This is the prospect that "Stop the Robots" protesters at the South by Southwest (SXSW) festival in Austin, Texas are most worried about.
A group of 24 protesters descended on the entrance to the festival on Sunday 15 March to protest against artificial intelligence, all wearing matching T-shirts quoting Elon Musk's words of doom from an October 2014 speech: "With artificial intelligence we're summoning the demon."
They also carried signs with slogans like "Humans are the future", "AI could spell the end" and "Enhance life, don't replace it".
You might think that protesters against robots would be technophobes, but the group is led by a computer engineer and several of the members are from the University of Texas, which is renowned for its strong engineering degree programme.
Rage against the machine
"We stick with a pretty controversial message of 'Stop the Robots', even though we ourselves are technologists," Stop the Robots protest organiser Adam Mason told BBC Radio Five Live.
"It's when you take artificial intelligence and you put it in charge of a system or an entity that is not human where it can grow and learn and make decisions without a moral guideline. Humans make mistakes.
"If we make something that is as smart as humans or smarter, why won't it make mistakes and how will it be beholden to us?"
The protesters are not seeking to halt the progress of technology – rather they are deeply concerned about the uncontrolled growth of artificial intelligence and robots, and feel that more legislation and checks are required to stop AI from getting out of control.
"We have planes that can take off, land and fly through the sky, yet when we put 50 people into the air, we still put a human behind the control," stressed Mason.
"I think we need to continue to consider solutions like that in the future, where we tie decisions like that to human morality, rather than the morality of a computer."
AI needs to be properly researched
Fears about what artificial intelligence and robots mean for the future of mankind have been rising in the last 12 months, with killer robots being debated for the first time at the UN Convention on Certain Conventional Weapons (CCW) in May 2014.
A OnePoll survey of 2,000 people in the UK found that respondents were most concerned about being replaced by artificial intelligence in the workplace, and 46% felt that technology was evolving too quickly and undermining traditional ways of life.
Prominent scientist Stephen Hawking, entrepreneur Elon Musk and many other senior industry figures from Microsoft, Google and leading academic institutions like MIT and Oxford University also chimed in to stress the importance of having laws and ethics to govern AI in an open letter in January.
"The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable," the signatories wrote in the letter.
"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."