It has become increasingly difficult for the average American to trust large technology companies like Facebook and Google. A new report claims that Google moved during 2020 to tighten control over its in-house researchers and the papers they publish. According to the report, Google launched a “sensitive topics” review and, in at least three cases, asked authors to adopt a positive tone in their research.
Google allegedly told at least three authors not to cast its technology in a negative light. Word of the practice surfaced in internal Google communications and in interviews with researchers involved in the work. The new review procedure asked researchers to consult with legal, policy, and public relations teams before pursuing topics such as face and sentiment analysis and the categorization of race, gender, or political affiliation.
Those details came from an internal Google webpage explaining the policy. Google said advances in technology increasingly lead to situations in which seemingly inoffensive projects raise ethical, reputational, regulatory, or legal issues. While the webpage carried no date, Reuters reports that current employees said the policy began in June.
Google’s “sensitive topics” process added another round of inspection to its standard review of papers for potential issues such as disclosure of trade secrets. In some instances, Google reportedly intervened at later stages of the research. In one example, officials told researchers working on a study of content recommendation technology to “take great care to strike a positive tone.”
In that instance, the manager reportedly added that the request didn’t mean “we should hide from the real challenges” the software raised. The researchers nonetheless allegedly updated their study to remove all references to Google products; an original draft mentioned YouTube. Google’s policies came under scrutiny after the company abruptly fired Timnit Gebru, a scientist who led a 12-person team focused on ethics in artificial intelligence software. Gebru claimed she was fired after questioning an order not to publish research arguing that AI capable of mimicking speech could disadvantage marginalized populations.