Microsoft doesn’t want AI to recognize your feelings anymore – mostly
Microsoft is updating its responsible AI standard and revealed that it is retiring (for the most part) the emotional and facial recognition capabilities of Azure Face.
The Responsible AI Standard is Microsoft’s internal rulebook for building AI systems. The company wants AI to be a positive force in the world, not a tool for bad actors to misuse. The standard had never been shared with the public before, but with this change, Microsoft decided the time had come.
Emotion and facial recognition software has been controversial, to say the least. Many organizations are calling for a ban on the technology. Fight for the Future, for example, wrote an open letter in May asking Zoom to halt its development of emotion tracking software, calling it “offensive” and “violating privacy and human rights”.
Policy change
As mentioned, Microsoft is reworking its Azure Face service to meet the requirements of its new Responsible AI Standard. First, the company is removing public access to the AI’s emotion-scanning capability. Second, Azure Face will no longer be able to identify a person’s facial attributes, including “gender, age, smile, facial hair, hair, and makeup.”
The reason for the retirement is that the global scientific community still has no clear consensus on the definition of “emotions”. Natasha Crampton, Chief Responsible AI Officer at Microsoft, said experts from inside and outside the company have voiced their concerns. The problem is that “there are challenges in how that generalizes across use cases, regions, and demographics, and increased privacy concerns…”
Apart from Azure Face, Microsoft’s Custom Neural Voice will see similar restrictions. Custom Neural Voice is a text-to-speech service that sounds shockingly lifelike. It will now be limited to a select few “managed customers and partners” — customers who work directly with Microsoft’s account teams. The company says that while the technology has great potential, it can also be used for impersonation. To retain access to Neural Voice, all existing customers must submit an intake form and be approved by Microsoft before June 30, 2023; those who are not approved will lose access to the service.
Still in the works
Despite all that, Microsoft isn’t giving up on its facial recognition technology entirely; the announcement pertains only to public access. Sarah Bird, Principal Group Product Manager at Azure AI, wrote a post about responsible facial recognition, in which she says, “Microsoft recognizes that these capabilities can be valuable when used for a set of controlled accessibility scenarios.” One of these scenarios is Seeing AI, an iOS app that helps visually impaired users identify the people and objects around them.
It’s good to see another tech giant recognizing the problems with facial recognition and its potential for abuse. IBM did something similar in 2020, though its approach was more absolute.
Back in 2020, IBM announced it was exiting the facial recognition business because the company feared the technology could be misused for mass surveillance. Seeing two giants of the industry walk away from this technology is a victory for facial recognition’s critics. If you’re interested in learning more about AI, TechRadar recently published an article on what it can do for cybersecurity.