Could robots take over our jobs or harm society? A UK standards body has compiled a standard on ethical robot design to ensure this doesn't happen

A UK standards body has released a set of guidelines on the ethical design of robots and robotic devices, intended to ensure that humans are protected as artificially intelligent machines arrive, much like the fictional laws set out by science fiction author Isaac Asimov back in 1942.

The new standard, BS 8611, has been compiled by the British Standards Institution (BSI), the body that sets technical and quality standards for goods sold in the UK. You might recognise its famous "Kitemark", the quality mark awarded to products that meet its requirements.

The guidelines are intended for designers and manufacturers of robots and robotic devices, and were drawn up with the assistance of scientists, academics, ethicists, philosophers and users. Among other things, they advise that robots should not be capable of deception, and that a human must always be responsible for a robot's behaviour.

BS 8611 is available to purchase from the BSI online shop for £158 ($205), with a discounted price of £79 for BSI members.

Robots could exacerbate social problems

The standard also states that robots should not be designed solely or primarily to kill or harm humans, and that if a robot is a product, it must be designed to be "safe, secure and fit for purpose". Although there are concerns about the physical hazards posed by robots, as well as psychological hazards such as fear and stress, the BSI is far more concerned with the ethical hazards that using robots can present.

These ethical hazards include a lack of transparency – that is, how easy it is to work out who is responsible for a robot and its behaviour. The guidelines' authors are also concerned about whether a robot's actions could become racist or sexist, and whether it is a good idea for a robot to form an emotional bond with its owners.

The guidelines warn that if these ethical hazards are not taken seriously, robots could drive up unemployment and social displacement, among other undesirable outcomes.

The Three Laws of Robotics

Isaac Asimov, one of the best-known science fiction writers, set out the Three Laws of Robotics in his 1942 short story "Runaround" (later collected in the 1950 book I, Robot).

The first law of robotics states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second law states that a robot must obey all orders given to it by human beings, except where those orders conflict with the first law, while the third law mandates that a robot must protect its own existence as long as doing so does not conflict with the first or second laws.

Asimov also added a final "zeroth law" at the end of his book Foundation and Earth, which states that a robot must not injure humanity or, through inaction, allow humanity to come to harm.

The Three Laws of Robotics have been explored and referenced in numerous TV shows and films, including Doctor Who, Alien, RoboCop, Star Trek: The Next Generation, The Simpsons, Bicentennial Man, I, Robot, The Big Bang Theory, Babylon 5 and Automata.