Google to continue working with military, but will avoid weapons and spying

Pichai explained that Google would not use its AI for "weapons or other technologies whose principal goal or implementation is to cause or directly facilitate injury to people" nor to support "technologies that gather or use information for surveillance violating internationally accepted norms of human rights".

This commitment follows protests from staff over the US military's research into using Google's vision-recognition systems to help guide drones.

Last month, some of the company's employees resigned over Project Maven, a Google contract with the Pentagon that involves analyzing drone footage, Gizmodo reported. Pichai said the company will "continue our work with governments and the military in many other areas".

Google is one of the leading technology companies in artificial intelligence, a position that landed it a lucrative government contract a year ago to work on Project Maven.

Google today released guidelines for its development of artificial intelligence, which include a ban on creating autonomous weaponry and on most applications of AI with the potential to harm people. After several employees quit and thousands of others signed a petition against the program, Google said it will end its participation in Project Maven when its current contract expires next year.

Another Googler who spoke with Gizmodo said that the principles were a good start, mitigating some of the risks that employees who protested Maven were concerned about. The employees' petition began: "We believe that Google should not be in the business of war".

In addition to these ethical guidelines, Google published a starter guide for building responsible AI, including advice on testing for bias and understanding the limitations of the data used to train an algorithm. The guide also says AI systems should only be made available for purposes that fall in line with the principles above.
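As a rough illustration of the kind of bias testing the guide describes, the sketch below compares a model's accuracy across demographic groups; the dataset, group labels, and model_predict function are hypothetical placeholders invented for this example, not anything taken from Google's guide.

```python
from collections import defaultdict

def model_predict(features):
    # Hypothetical stand-in for a trained classifier: predicts 1 when the
    # feature values sum above a fixed threshold.
    return int(sum(features) > 1.0)

# Each example is (features, true_label, demographic_group); all values
# here are made up for illustration.
examples = [
    ([0.9, 0.4], 1, "group_a"),
    ([0.2, 0.1], 0, "group_a"),
    ([0.8, 0.5], 1, "group_b"),
    ([0.7, 0.6], 0, "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)
for features, label, group in examples:
    total[group] += 1
    if model_predict(features) == label:
        correct[group] += 1

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy {accuracy:.2f} over {total[group]} examples")
# A large gap in per-group accuracy is one signal of the kind of bias
# the guide warns about.
```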

CNBC also noted that Pichai's vow to "work to limit potentially harmful or abusive applications" is less explicit than previous Google guidelines on AI. Several employees said that they did not think the principles went far enough to hold Google accountable. For instance, Google's AI guidelines include a nod to following "principles of worldwide law" but do not explicitly commit to following international human rights law.

The company also recommended that developers avoid launching AI programs likely to cause significant damage if attacked by hackers, because existing security mechanisms are unreliable.

Google will instead seek government contracts in areas such as cybersecurity, military recruitment, and search and rescue, Chief Executive Sundar Pichai said in a blog post on Thursday. "These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe".
