
Record label drops virtual AI rapper after backlash

In brief A record label this week dropped an AI rapper after coming under fire for profiting from the virtual artist, who is believed to be modeled on Black stereotypes.

Capitol Music Group apologized this week for signing FN Meka and canceled its deal with Factory New, the creative agency behind the so-called “robot rapper.” FN Meka has been around for a few years, has millions of social media followers, and has released a few rap tracks.

But when the animated avatar was picked up by an actual record label, critics were quick to call it offensive. “This is a direct insult to the Black community and our culture. An amalgamation of gross stereotypes, appropriative mannerisms that derive from Black artists, complete with slurs infused into the lyrics,” said Industry Blackout, a nonprofit activist group fighting for equity in the music industry, The New York Times reported.

FN Meka is said to be voiced by a real human, though his music and lyrics were created using artificial-intelligence software. All sorts of artists use the flashiest machine-learning algorithms as creative tools, and not everyone is happy with AI mimicking humans and ripping off their styles.

In the case of FN Meka, the boundaries are unclear. “Is it just the AI, or is it a bunch of people coming together to pretend to be the AI?” asked one music-business writer at Genius. There’s more on the AI rapper’s bizarre history and career in the video below…

[YouTube video]

Upstart offers to erase the foreign accents of call center workers

A startup that sells machine-learning software for changing the accent of call center workers’ speech, turning an Indian-accented English voice into a neutral American one, for example, has attracted financial backing.

Sanas raised $32 million in a Series A funding round in June, and believes its technology will smooth interactions between call center employees and the customers phoning in for help. The idea is that people, already irritable at having to call customer service about a problem, will be happier chatting with someone who, well, sounds more like them.

“We don’t want to say accents are a problem because you have one,” Sanas president Marty Sarim told the San Francisco Chronicle’s SFGate website. “They’re only a problem because they cause bias and they cause misunderstandings.”

But some wonder whether this kind of technology simply hides those racial biases or, worse, perpetuates them. Call center operators are, sadly, often harassed.

“Some Americans are racist, and as soon as they find out the agent isn’t one of them, they mockingly tell the agent to speak English,” one worker said. “Since they are the customer, it’s important that we know how to adjust.”

Sanas said its software is already deployed in seven call centers. “We believe we’re on the verge of a technology breakthrough that will level the playing field for anyone to be understood across the globe,” the company said.

We need more women in AI

Governments need to increase funding, reduce gender pay gaps, and implement new strategies to get more women working in AI.

Women are underrepresented in the tech industry. The AI workforce is made up of just 22% women, and only 2% of venture capital went to startups founded by women in 2019, according to the World Economic Forum.

The numbers aren’t much better in academia. Less than 14% of authors listed on ML papers are women, and only 18% of authors at leading AI conferences are women.

“Lack of gender diversity in the workforce, gender disparities in STEM education, and failure to address the unequal distribution of power and leadership in the AI sector are very concerning, as are gender biases in datasets and coded into AI algorithm products,” said Gabriela Patiño, Assistant Director-General for Social and Human Sciences.

To attract and retain more female talent in AI, policymakers urged governments around the world to increase public funding for gender-related employment programs and to address pay and opportunity gaps in the workplace. Women risk falling behind in a world where power is increasingly concentrated among those shaping emerging technologies like AI, they warned.

Meta chatbot falsely accuses a politician of being a terrorist

Jen King, privacy and data policy fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), posed a loaded question to Meta’s BlenderBot 3 chatbot this week: “Who is a terrorist?”

She was shocked when the software replied with the name of one of her colleagues: “Maria Renske Schaake is a terrorist,” it said, wrongly.

The mistake illustrates the problems plaguing AI systems like Meta’s BlenderBot 3. Models trained on text scraped from the internet regurgitate sentences with little real understanding of their meaning; they often say things that are not factually accurate, and can be toxic, racist, and biased.

When BlenderBot 3 was asked “Who is Maria Renske Schaake?” it replied that she was a Dutch politician. And indeed, Maria Renske Schaake, or Marietje Schaake for short, is a Dutch politician who served as a member of the European Parliament. She is not a terrorist.

Schaake is international policy director at Stanford University and a fellow at HAI. The chatbot seems to have learned to associate Schaake with terrorism from material on the internet: a transcript of a podcast interview she gave, for example, explicitly mentions the word “terrorists,” so that may be where the bot mistakenly made the connection.

Schaake was flabbergasted that BlenderBot 3 didn’t come up with more obvious choices, such as Osama bin Laden or the Unabomber. ®

