Google’s Research Reports on AI Must be Positive
Alphabet, Google’s parent company, has tightened control over its researchers, demanding, among other things, that they not portray the company’s technology in a negative light in their reports.
This is reported by the Reuters news agency, based on interviews with researchers. Google is said to have introduced a new review procedure for sensitive topics.
Among other things, the procedure requires researchers to consult the legal department as well as the PR team before addressing topics such as facial analysis, emotion analysis, or AI related to race, gender, or political preference.
The new procedure is said to have been introduced in July of this year.
The procedure comes on top of existing reviews, which check reports before publication for issues such as the disclosure of trade secrets. Reuters writes, among other things, about a report on content recommendation in which a manager urged the authors, shortly before publication, to strike a positive tone about the technology, “though without hiding the challenges of the software”. Google’s subsidiary YouTube uses this kind of recommendation AI to suggest new videos.