By using missiles launched from unmanned aerial vehicles to destroy ISIS military encampments and convoys, the U.S. has shown that automated vehicles are effective in a combat zone when a country chooses not to put "boots on the ground." As a result, many are now debating the positives and negatives of automated weapons in war.
Automated weapons are, in theory, ideal replacements for human soldiers on the battlefield. Because they are easily modified, they can be adapted to their environment. They need no training time, yet they are more accurate and more responsive than the average soldier. They do not suffer from the post-traumatic stress disorder that many veterans develop, and they do not retreat, instead staying in combat until they are disabled or destroyed. Logically, because these weapons can be built with fighting capabilities superior to those of humans, they should be the next step in warfare. The problem, however, setting aside the robot apocalypses of fiction, is that these machines cannot think for themselves.
While these automated weapons may be built with fighting abilities superior to a human's, they will only follow orders, so they cannot be trusted with one of the most important tasks of an armed force: the protection of civilians. A robot may be programmed to distinguish friend from foe, but in the chaos of battle it will not reliably distinguish bystanders from enemies, and it might kill fleeing refugees. Unlike soldiers, such weapons cannot be sent into war zones with civilians in the mix, and so they are not fit to be deployed.
But automated weapons could be controlled by a third party, much like the airstrikes the U.S. launches on ISIS. The controller, sitting nearly halfway around the world, would "pilot" the aircraft to the enemy convoy and launch a guided missile at it. However, the controller may become desensitized. As in a muted first-person shooter, an operator directly piloting an unmanned aerial vehicle, like those used in the war against ISIS, would never hear the very real screams of the victims and might begin to think of the airstrikes as a game. The operator might care even less if he or she were not directly piloting the aircraft and had only issued orders for bombing runs. This would cheapen the value of human life, which in turn could erode respect for human rights. That is something that should never happen.
Setting ethical arguments aside, another reason not to use these machines today is cost. According to NBC, the average United States soldier costs around forty-four thousand dollars to equip, train, and pay, while Popular Science reports that unmanned aerial vehicles cost five million dollars or more. And drones are far from the most expensive option: the BBC reports that the Super aEgis II, an automated turret developed by a South Korean company, costs around forty million dollars.
Given that other countries are, or soon will be, actively pursuing this type of weapon, I think the U.S. cannot afford to fall behind in firepower; the safety of Americans would come before the ethical arguments against using these machines. From a global perspective, however, I think we should discourage their use: they cannot differentiate between innocent bystanders and enemies, their controllers might come to treat war as a video game, and their high cost means these robotic weapons should not replace soldiers in war.