The New Conscience Gatekeeper at Google, Amazon and Microsoft for AI Ethics

While the ethics debate has consistently accompanied the progress of Artificial Intelligence (AI) technology, of late the debate has grown markedly more intense. Interestingly, for some of the world’s leading AI players, the dissent around AI ethics is coming from very close quarters – their employees.

In the last two to three months, Google, Amazon and Microsoft – the trio at the leading edge of AI work – have witnessed rebellion from their employees against specific AI projects the companies were working on for the U.S. government and military, over ethical questions involving war and the violation of human rights.


It started in April, when a section of Google employees protested against the company’s participation in Project Maven, the Pentagon’s drone AI program. The program involves using AI to analyze video imagery, which can help improve the targeting of drone strikes. Google’s role in the project was to help military drones gain the ability to track objects, including by interpreting camera footage from the drones.

As per a report published in The New York Times in April, around 3,000 Google employees, including some very senior engineers, signed a petition asking CEO Sundar Pichai to pull out of the contract. A few of them even refused to build a security tool required for the project.

The objection to the project was made on ethical and moral grounds, as the petition read: “We believe that Google should not be in the business of war.” The petition further asked for the drafting and enforcement of a clear policy that Google will never build warfare technology. “The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google,” the letter continues.

The possibility of using AI to create autonomous weapons has been one of the most fundamental arguments raised against AI, alongside the argument around job losses. Some of the most renowned leaders and scientists, including Stephen Hawking and Elon Musk, have been very vocal about it, with the latter tweeting that a global arms race for AI could cause a third world war. In fact, a group of 116 global technology leaders signed an open letter in August last year calling on the UN to ban the development and use of lethal autonomous weapons, dubbed ‘killer robots’.


Coming back to the present-day unrest over AI brewing among the world’s technology giants, Amazon is the latest at the receiving end of employee ire. The company’s employees called for an end to the sale of its facial recognition service, AWS Rekognition, to U.S. law enforcement agencies over its contribution to the violation of human rights.

The employees launched their protest in a letter addressed to CEO Jeff Bezos, stating their refusal to contribute to tools that violate human rights.

An excerpt from the letter: “We are troubled by the recent report from the ACLU exposing our company’s practice of selling AWS Rekognition, a powerful facial recognition technology, to police departments and government agencies. We don’t have to wait to find out how these technologies will be used. We already know that in the midst of historic militarization of police, renewed targeting of Black activists, and the growth of a federal deportation force currently engaged in human rights abuses — this will be another powerful tool for the surveillance state, and ultimately serve to harm the most marginalized.”


Microsoft too has faced similar dissent from its employees over its work for the U.S. Immigration and Customs Enforcement (ICE) agency, a contract that includes providing cloud services along with data processing and AI capabilities.

In an open letter addressed to CEO Satya Nadella, over 100 employees asked the company to stop working with the agency over its separation of migrant parents and their children at the border with Mexico, following the Trump administration’s ‘zero tolerance’ policy. “We believe that Microsoft must take an ethical stand, and put children and families above profits,” the letter read.

An excerpt from the letter, which was published in The New York Times: “We request that Microsoft cancel its contracts with ICE, and with other clients who directly enable ICE. As the people who build the technologies that Microsoft profits from, we refuse to be complicit. We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm.”

Principles of AI Ethics

In Google’s case, even though the company maintained that its products would not create an autonomous weapons system and that its part in Project Maven was ‘specifically scoped to be for non-offensive purposes’, the powerful protest by its employees has reportedly pushed Google not to renew the contract once it expires in 2019.

To stem any further dissent from employees, in early June Google announced a set of principles it will use as a code of ethics when developing future AI.

The memo sent out by Sundar Pichai, which was also published as a blog post on Google’s website, read: “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

The seven principles listed are: Be socially beneficial; Avoid creating or reinforcing unfair bias; Be built and tested for safety; Be accountable to people; Incorporate privacy design principles; Uphold high standards of scientific excellence; Be made available for uses that accord with these principles.

The blog further stated the AI applications that Google will not pursue. These include:

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Meanwhile, Microsoft lists its AI principles on its website as follows:

  • Fairness: AI must maximize efficiencies without destroying dignity and guard against bias.
  • Accountability: AI must have algorithmic accountability.
  • Transparency: AI must be transparent.
  • Ethics: AI must assist humanity and be designed for intelligent privacy.

While these principles and codes of ethics exemplify good intent, the real test lies in ensuring that these words are followed to the letter and do not serve as mere lip service. And the employees across these tech giants are making sure their companies don’t err, emerging as the biggest hope for holding companies to AI ethics. Employees are well on their way to becoming the new conscience gatekeepers for the growing AI ambitions of the technology community.

(Image Courtesy: www.ttac21.net)
