Monday, January 17, 2022

Principles for Better AI

A former Google scientist has some ideas.

Translation: The goal for algorithms should not be to keep us constantly clicking and buying but to make us better people and our society more just. 

Cowen said he even envisions social media companies using Hume’s platform to gauge a user’s mood — then algorithmically adjusting served posts to improve it.

“Is it increasing people’s sadness when they see a post? Is it making them sad a day or two later even when the post is gone?” Cowen said. “Companies have been lacking objective measures of emotions, multifaceted and nuanced measures of people’s negative and positive experiences. And now they can’t say that anymore.”
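To make the mechanism Cowen describes a little more concrete, here is a minimal sketch of what "gauging a user's mood and adjusting served posts to improve it" could look like: a feed ranker that blends the usual engagement score with a predicted well-being signal. Every name, parameter, and the scoring formula below is a hypothetical illustration of the concept, not Hume's actual platform or API.

```python
# Hypothetical sketch only: re-ranking a feed by predicted emotional impact
# rather than raw engagement. None of this reflects Hume's real API.
from dataclasses import dataclass
from typing import List


@dataclass
class Post:
    post_id: str
    engagement_score: float      # what today's feeds typically optimize for
    predicted_wellbeing: float   # hypothetical model output in [-1, 1]


def rank_feed(posts: List[Post], wellbeing_weight: float = 0.7) -> List[Post]:
    """Blend engagement with a predicted well-being signal.

    wellbeing_weight = 0 reproduces a pure engagement ranking;
    wellbeing_weight = 1 ranks purely by predicted emotional impact.
    """
    def score(p: Post) -> float:
        return ((1 - wellbeing_weight) * p.engagement_score
                + wellbeing_weight * p.predicted_wellbeing)

    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("outrage-bait", engagement_score=0.9, predicted_wellbeing=-0.6),
        Post("friend-update", engagement_score=0.4, predicted_wellbeing=0.5),
    ]
    for post in rank_feed(feed):
        print(post.post_id)
```

With the default weighting, the low-engagement but mood-improving post outranks the outrage bait, which is the trade-off Cowen is arguing platforms should be willing to make.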

(In the guidelines, Cowen says Hume’s goal is that AI be used to “strengthen humanity’s greatest qualities of belonging, compassion, and well-being” while also reducing “the risks of misuse.” The guidelines ask signees to take such pledges as “Empathic AI should never be used to develop more cruel forms of psychological warfare by optimizing for negative emotional responses.”)

“The right technology can help a lot. But if you’re just looking at technology to create a safety culture, it’s not going to work,” he said. He cited government regulation and hard standards set by the likes of insurance companies and auditors as essential.

Hume’s adoption headwinds could be fierce. A Pew Research Center study published in June found that more than two-thirds of AI experts did not believe that artificial intelligence would be used mostly for social good by 2030.

“If we continue to improve these algorithms to optimize for engagement without projects like empathic AI, then soon kids will be spending 10 hours a day on social media,” he said. “And I don’t think that’s good for anyone.”

