Microsoft Is Making It Harder to Use Its Facial Recognition


Microsoft is tightening the conditions for access to its facial recognition software. The move is part of the tech giant's effort to handle its own technology responsibly.


According to Microsoft, the AI behind facial recognition can have far-reaching consequences for society if it ends up in the wrong hands.

This is set out in a document called the ‘Responsible AI Standard’, in which the company lays out its own rules and norms. In practice, access to the Azure Face API, Computer Vision and Video Indexer tools is becoming considerably harder to obtain. Microsoft is also retiring Azure capabilities that attempt to analyze emotions or infer identity attributes such as age and gender.
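For context (this sketch is not part of Microsoft's announcement), the kind of request being phased out looked roughly like the following, using the Azure Face Python SDK; the endpoint, key and image URL below are placeholders:

```python
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

# Placeholder credentials; real access now goes through Microsoft's intake review.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
KEY = "<your-face-api-key>"

face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# A detection call asking for emotion, age and gender attributes --
# the sort of capability Microsoft says it is retiring.
faces = face_client.face.detect_with_url(
    url="https://example.com/portrait.jpg",
    return_face_attributes=[
        FaceAttributeType.age,
        FaceAttributeType.gender,
        FaceAttributeType.emotion,
    ],
)

for face in faces:
    attrs = face.face_attributes
    print(attrs.age, attrs.gender, attrs.emotion)
```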

Facial recognition has been one of AI's most controversial capabilities for years. While Microsoft itself offers several facial recognition services, the company is also one of the more outspoken players when it comes to using the technology responsibly. It called for facial recognition legislation years ago and in 2019 refused a contract to supply the technology to a U.S. police force.

Automated facial recognition can have far-reaching consequences for human rights, privacy in particular. Following user protests and a series of lawsuits, other companies, such as Facebook, have already scaled back their (public) use of facial recognition considerably.

However, most jurisdictions still have little legal framework for the relatively new technology, so Microsoft is relying on self-imposed standards for the time being. Customers who want to use the Face API now have to fill in an application form, which Microsoft will vet on several points before granting access.
