A pledge against the use of autonomous weapons has been signed by over 2,400 individuals working in artificial intelligence (AI) and robotics, representing 150 companies from 90 countries.

The pledge, signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and organised by the Future of Life Institute, calls on governments, academia, and industry to "create a future with strong international norms, regulations, and laws against lethal autonomous weapons".

"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," said Max Tegmark, president of the Future of Life Institute and physics professor at the Massachusetts Institute of Technology.

"AI has huge potential to help the world -- if we stigmatise and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way."

Signatories of the pledge include organisations such as Google DeepMind, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, and the European Association for AI, as well as individuals such as Elon Musk.


The institute defines lethal autonomous weapons systems -- also known as "killer robots" -- as weapons that can identify, target, and kill a person, without a human "in-the-loop".

"That is, no person makes the final decision to authorise lethal force: The decision and authorisation about whether or not someone will die is left to the autonomous weapons system," the institute explains.

It said, however, that this definition does not include today's drones, which are under human control, nor autonomous systems that merely defend against other weapons.
