Google has promised not to use AI for weapons, following protests over its partnership with the US military.
A decision to provide machine-learning tools to analyse drone footage caused some employees to resign.
Google told employees last week it would not renew its contract with the US Department of Defense when it expires next year.
It has now said it will not use AI for technology that causes injury to people.
The new guidelines for AI use were outlined in a blog post from chief executive Sundar Pichai.
He said the firm would not design AI for:
- technologies that cause or are likely to cause overall harm
- weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people
- technology that gathers or uses information for surveillance violating internationally accepted norms
- technologies whose purpose contravenes widely accepted principles of international law and human rights
He also laid out seven more principles which he said would guide the design of AI systems in future:
- Be socially beneficial
- Avoid creating or reinforcing bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles
When Google revealed that it had signed a contract to share its AI technology with the Pentagon, a number of employees resigned and thousands of others signed a protest petition.
That contract, for work on Project Maven, involves using machine learning to distinguish between people and objects in drone videos.
The Electronic Frontier Foundation welcomed the change of heart, calling it a “big win for ethical AI principles”.