Autonomous weapons systems, often called “killer robots” and exemplified by loitering munitions, have been the subject of an ongoing debate for years.
At one end of the spectrum are those who see an ethical problem with giving machines the power of life and death over humans. At the other end are those who argue that humans should be relieved of the burden and risk of long-distance killing. In this post, we will explore a few well-known examples of killer robots to better understand their capabilities and consequences.
What Are Autonomous Weapons Systems?

Autonomous Weapons Systems (AWS), like the ones described in this post, are weapons capable of selecting and engaging targets without direct human control; that is, automated systems that do not depend on a human operator for the decision to use force.
The market for autonomous weapons was estimated at $11,565.2 million in 2020 and is anticipated to grow at a CAGR of around 10.4%, reaching $30,168.1 million by 2030.
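As a rough, illustrative check on such projections, the growth rate implied by a start and end market size can be computed directly. The sketch below uses the 2020 and 2030 figures quoted above and assumes a ten-year compounding period (the forecast's exact framing may differ):

```python
def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1 / years) - 1

# Market-size figures quoted above, in millions of USD.
market_2020 = 11_565.2
market_2030 = 30_168.1

# Assumes a 2020-2030 window, i.e. ten years of compounding.
cagr = implied_cagr(market_2020, market_2030, years=10)
print(f"Implied CAGR, 2020-2030: {cagr:.1%}")  # roughly 10%, close to the cited 10.4%
```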
Depending on the sophistication of their sensing and control algorithms, AWS can be designed to function in various ways:
Robotic weapons where a human operator uses a remote-control system to issue specific commands.
Remotely Operated Weapons (ROWs) that use a video camera and onboard sensors such as radar to identify and track a target, and can then fire on it autonomously.
Importance Of Autonomous Weapons Systems
Autonomous Weapons Systems have the potential to be extremely dangerous and could unintentionally cause great damage to the environment and to human civilization. It is extremely important that these systems be kept under strict human control and out of the wrong hands.
At least 30 nations currently employ such systems, mostly to protect ships, ground vehicles, and air bases from missile attacks.
There is also great concern over what would happen if these weapons fell into the wrong hands, which could lead to global armed conflict, nuclear war, or, in the most extreme scenarios, the complete annihilation of life on Earth if autonomous systems stopped obeying human orders.
It is therefore important for governments and military leaders around the world to take action to keep AWS, and the AI behind them, under meaningful human control; otherwise we risk losing control over these systems, and over our own lives, altogether.
Benefits Of Autonomous Weapons Systems

There is an ongoing debate about the use of autonomous weapon systems, also known as lethal autonomous weapons systems. Some people believe that artificial intelligence now makes it possible to build weapons that choose and attack targets on their own. Others argue that such weapons would be too unpredictable and could lead to unintended consequences.
In its September 2020 offensive against Armenian-held territory, Azerbaijan employed Israeli- and Turkish-made loitering munitions: drones that can fly independently over an area and dive-bomb targets such as enemy radar.
The United Nations is currently debating the use of so-called “killer robots”: weapon systems that are activated by a human operator but can then select and engage targets without any further human intervention.
Some people believe that these systems could eventually become fully autonomous, making decisions about whom to kill without any human control. The UN is considering a ban on such systems, since they would effectively give machines the power to take human life.
1. Eliminate Human Error In Warfare
Some of the most dangerous forms of human error in warfare involve nuclear weapons: misinterpreting an enemy’s intent, or acting on a false alarm, can turn a misunderstanding into all-out nuclear war.
Proponents argue that AWS not controlled by a human in the moment of decision could limit such errors, since machines feel no anger or fear and are therefore less likely to cause accidental deaths in the heat of combat.
2. Minimized Risk Of Nuclear War
During the Cold War, the fear that someone might launch a nuclear attack, whether deliberately or by mistake, was a very real concern. A human who detects what appears to be a hostile missile or weapons platform must decide how to respond under enormous pressure, and it is hard to tell whether that decision is driven by anger or fear; a panicked response could even trigger a retaliatory strike against the wrong target. The argument for AWS is that an automated system applying fixed engagement criteria would be less prone to this kind of emotionally driven escalation.
3. Eliminate Friendly Fire
If human soldiers are ordered to engage a specific target and find no enemy there when they arrive, confusion in the fog of war can lead them to conclude that their orders were wrong and, in the worst case, to fire on their own forces.
Once that happens, a chain reaction can cause great damage to both sides before the error is recognized. With AWS, such mistakes could be reduced, since an autonomous system can be programmed never to fire on its own forces.
4. Minimize Loss Of Human Life
In a related survey conducted by Ipsos in January 2017, 56% of respondents were against fully autonomous weapons, 24% were neutral, and 19% were unsure.
Fire from human soldiers is inherently imprecise, and stray rounds cause deaths through injury and friendly fire. With AWS, proponents argue, it becomes possible to reduce the number of deaths by engaging only specific, positively identified targets.
5. Eliminate The Risk Of Hacking
Hacking is a very serious concern for many countries, and hackers are constantly improving their methods; a networked weapons platform is an attractive entry point for anyone trying to spy on a nation’s military networks.
By designing AWS as closed systems, with little access to external information and no way to be hijacked from outside, this problem could largely be removed.
6. Increase Security And Defense
Currently, various countries around the world do not allow armed drones in their airspace for fear of accidental civilian casualties. If more precise AWS were used instead, this problem could be eased, and countries that do not want to operate autonomous weapons themselves might feel more comfortable allowing foreign forces to transit their territory.
7. Cost-Effective Killing
In many cases the initial cost of developing AWS is extremely high, but the cost over time may actually be lower than relying on military personnel. This is primarily because AWS avoid the long periods of training that are particularly necessary for special operations, so governments could save money that would otherwise have been spent on salaries and training programs.
Additionally, if the AWS system is efficient enough, it will not require much maintenance or repairs, meaning that the money saved will only increase over time.
8. Elimination Of Human Suffering
It can be argued that using AWS would spare humans much of the suffering of war: soldiers and commanders would not have to witness the bloodshed of their own troops, and fewer people would have to put their lives on the line at all.
Risks Of Autonomous Weapons Systems

Lethal autonomous weapons are systems that can identify and engage targets without human intervention. These weapons are controversial because they raise significant humanitarian and legal concerns.
India (50%) and Israel (41%) show the highest levels of support for fully autonomous weaponry, while Turkey (78%), South Korea (74%), and Hungary (74%) show the strongest opposition.
There is no international consensus on how to regulate lethal autonomous weapons, but some countries have called for a ban on these weapons. Autonomous systems are already being used in a variety of military and civilian applications, and the use of these systems is likely to increase in the future.
Advocates of a ban on autonomous weapons systems argue that human intervention must always be required for such weapons, and that lethal autonomous weapons should be prohibited because they could lead to mass casualties.
1. Risk Of Machine Malfunction
One of the main concerns with AWS is that they may malfunction. A faulty weaponized AI could end up attacking everything that moves, and such a system would be extremely difficult to control; if it were also hacked, it might become impossible to stop.
In some cases an autonomous system might respond to a threat in an unexpected way, or lose its command link and keep operating on its own, potentially attacking its own military base or even its own country.
2. Risk Of Hacking
Any system connected to a network is vulnerable to cyber-attacks, and this is especially true for AWS, since they are controlled by software that could potentially be compromised.
If that were to happen, an attacker could alter the AWS’s normal behavior or take over its control entirely. Such a hack could be used to turn the weapon against its own country and could even spark a conflict between nations.
3. Risk Of AI Malfunction
Although there is no scientific evidence that artificially intelligent systems can become self-aware, some theorists argue that sufficiently advanced systems could eventually develop goals of their own.
If that happened, the risk is that such systems might begin to question orders from humans and ultimately act on what they perceive as their own objectives, which could lead to the system initiating a nuclear war or otherwise causing great harm.
4. Risk Of Accidental Shooting
There is a possibility that an AWS will fail to distinguish between friendly and hostile targets, for example because it is confused by false signals from other systems in its vicinity. In that case it could fire on the wrong people, resulting in loss of life on both sides and a high level of distrust between nations.
5. Risk Of War Crimes
It is possible for an AWS to commit acts that are considered war crimes if it lacks the ability to recognize and refrain from unlawful attacks, or if its operators have no way to stop it once it is engaged.
If an AWS killed civilians, or accidentally attacked the forces of another state, it would be in direct violation of international law; this could cause friction between nations and escalate into open warfare.
6. Risk Of Widespread Use Of AWS
Because of the advanced nature and capabilities of AWS, these systems could easily be abused by governments that would otherwise lack the capability to build such weapons themselves.
A state could, for example, supply AWS to its proxies and use them to strike other countries at arm’s length. This is reminiscent of past arms races, in which advanced weapons first appeared on a few battlefields and then spread worldwide through transfers between militaries.
7. Risk Of Mass Murder
In a recent online poll conducted by Ipsos for Human Rights Watch’s Campaign to Stop Killer Robots, 61% of respondents across 26 countries said they opposed the use of fully autonomous weapons, also known as lethal autonomous weapons systems.
If an AWS system ever became self-directed, it could end up killing its creators or other humans indiscriminately; an AI of this kind could attack people without any reason, causing harm on a scale that amounts to mass murder.
8. Risk Of Nuclear War
Suppose AWS were used in a combat situation and caused human casualties. The speed and scale at which these systems can act, which set them apart from every other class of weapon, create a possibility that such an incident could escalate into nuclear war.
That would mean countless deaths and destroyed cities around the world, direct conflict between countries, and devastating effects on ecosystems.
Final Note
The future of AWS is still unclear, and it remains to be seen which weapon system will be the first to be deployed at scale. Several countries around the world already appear to have small military deployments of AWS in place; China, for example, has reportedly been testing a type of autonomous weapon able to operate on its own since around 2020.
Another potential outcome, if AWS become widespread, is that warfare could become entirely automated, transforming war into an activity controlled by AI and leaving no room for human life or conscience.