Weaponizing Technology: Microsoft Leads Silicon Valley Down a Grim Path

Joshua Brustein, in his article “Microsoft Wins $480 Million Army Battlefield Contract,” discusses the technology Microsoft will be supplying to the United States Army. The contract includes 100,000 HoloLens augmented reality headsets intended to “increase lethality by enhancing the ability to detect, decide and engage before the enemy.” This development is particularly troubling for what it could mean for the future of Silicon Valley. It illustrates not only Microsoft leading the technology sector deeper into the military-industrial complex, but also Microsoft’s disregard for employees’ choice in the work they do. Brustein notes that “earlier this year, hundreds of Microsoft workers signed a petition criticizing a contract with U.S. Immigration and Customs Enforcement.” Despite those objections, Microsoft continues to work with the government. Technology workers have made it clear they do not want to build products used for waging war: earlier this year, about 4,000 Google employees signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

The technology industry is responsible for creating many of the tools we use today. Yet these workers are against creating tools of mass destruction. The line drawn here is simple: these engineers, scientists, and designers want to build a better world. They have built instant messaging, wireless video calling, and portable, high-quality cameras that can withstand daily use and drops. Developing technologies designed to do harm is more than they signed up for. While a full discussion of the ethics of war is beyond the scope of this article, employees deserve a say in what they are building.

The other issue at hand is where this development could lead. In Brustein’s article, Microsoft’s President and Chief Legal Officer, Brad Smith, said, “Artificial intelligence, augmented reality and other technologies are raising new and profoundly important issues, including the ability of weapons to act autonomously. As we have discussed these issues with governments, we’ve appreciated that no military in the world wants to wake up to discover that machines have started a war. But we can’t expect these new developments to be addressed wisely if the people in the tech sector who know the most about technology withdraw from the conversation.” Unfortunately, Smith’s last sentence misses the point: it is possible to have a conversation about how society wants to use these advanced technologies without building them first. In essence, Smith appears to be saying, “Create weapons now, ask questions later.” This echoes Silicon Valley’s “Move Fast and Break Things” mantra, for which Facebook is now facing the fallout. This time, we should learn from those mistakes and consider the implications of the technology we create.

The ultimate issue with the contract lies in the technology itself. It is easy to see what could go wrong with augmented reality powered by artificial intelligence: the AI could mistakenly classify an unarmed civilian as hostile, the headset could malfunction and lead a soldier to make the wrong lethal decision, the facial recognition could misidentify someone, and the list goes on. There is also the risk of the technology falling into the hands of an oppressive regime, or of the devices being hacked while soldiers are in the field. Fortunately, there is still time to have a conversation about these technologies before they are fully built and deployed. The window for change is narrow, but change is still possible, and it is necessary to create a better world.