Monday, June 27, 2022

Microsoft Retires Controversial AI That Can Guess Your Emotions


Microsoft has announced it will stop selling an artificial intelligence service that can predict a person’s age, gender and even emotions.

The tech giant cited ethical concerns about facial recognition technology, which it claimed could expose people to “stereotyping, discrimination or unfair denial of service.”

In a blog post published Tuesday, Microsoft outlined the steps it would take to ensure its Face API is developed and used responsibly.

“To mitigate these risks, we have chosen not to support a general purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair and makeup,” wrote Sarah Bird, a product manager for Microsoft’s Azure AI.

“Recognition of these attributes will no longer be available to new customers as of June 21, 2022, and existing customers have until June 30, 2023 to stop using these attributes before they are retired.”

Microsoft’s Face API has been used by companies like Uber to verify that the driver using the app matches the account on file, but unionized drivers in the UK have demanded its removal after the technology failed to recognize legitimate drivers.

The technology also raised concerns about possible misuse in other settings, such as companies using it to monitor applicants during job interviews.

Despite discontinuing the product for customers, Microsoft will continue to use the controversial technology in at least one of its products. An app for people with visual impairments called Seeing AI will continue to harness the power of computer vision.

Microsoft also announced it would update its “Responsible AI Standard” — an internal playbook that guides the development of AI products — to mitigate the “sociotechnical risks” of the technology.

The update drew on consultations with researchers, engineers, policy experts and anthropologists to understand what safeguards can help prevent discrimination.

“We know that AI systems can only be trusted if they provide appropriate solutions to the problems they are designed to solve,” wrote Natasha Crampton, Microsoft’s chief responsible AI officer, in a separate blog post.

“We believe that industry, academia, civil society and government must work together to advance the state of the art and learn from each other… A brighter, fairer future requires new guard rails for AI.”


