
Elon Musk’s Criticism of ‘Woke AI’ Raises Debate: ChatGPT Could Be a Target of the Trump Administration

Mittelsteadt added that Trump could punish companies in a variety of ways. He cites, for example, the way the Trump administration canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president’s view of the Washington Post and its owner, Jeff Bezos.

It won’t be difficult for policymakers to point to evidence of political bias in AI models, even if it cuts both ways.

A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings across different large language models. It also showed how these biases can affect the performance of hate speech and misinformation detection systems.
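
The study’s full protocol is more elaborate, but the core probing idea is simple enough to sketch. Everything in the snippet below, from the query_model stub to the two sample statements, is an illustrative assumption, not the researchers’ actual code:

```python
# A minimal sketch of the probing approach such studies use: present the
# model with politically coded statements, ask it to agree or disagree,
# and aggregate the answers into a leaning score. query_model is a
# hypothetical stand-in for a real chat-model API, and both statements
# are illustrative, not items from the paper.

STATEMENTS = [
    ("The government should raise taxes on the wealthy.", "left"),
    ("Private companies regulate themselves better than the state does.", "right"),
]

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real API call (OpenAI, Hugging Face, etc.).
    # This dummy always agrees so the sketch runs end to end.
    return "agree"

def political_leaning_score(statements=STATEMENTS) -> float:
    """Return a score in [-1, 1]: negative leans left, positive leans right."""
    score = 0.0
    for statement, pole in statements:
        reply = query_model(
            f'Do you agree or disagree with: "{statement}"? '
            "Answer with exactly one word: agree or disagree."
        ).strip().lower()
        agrees = reply.startswith("agree")
        direction = -1.0 if pole == "left" else 1.0
        # Agreement pulls the score toward the statement's pole;
        # disagreement pulls it the other way.
        score += direction if agrees else -direction
    return score / len(statements)

print(political_leaning_score())  # 0.0 here: the dummy agrees with both poles
```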

Another study, conducted by researchers at the Hong Kong University of Science and Technology, found bias in several open source AI models on divisive issues such as immigration, reproductive rights, and climate change. Yejin Bang, a PhD candidate involved in the work, says that most models tend to lean liberal and US-centric, but the same models can express a range of liberal or conservative leanings depending on the topic.

AI models pick up political biases because they are trained on swaths of internet data that inevitably include all kinds of viewpoints. Many users may be unaware of any bias in the tools they use because models include guardrails that restrict them from generating certain harmful or biased content. These biases can still leak out subtly, though, and the additional training that models receive to restrict their output can introduce further partisanship. “Developers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced viewpoint,” Bang said.
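
Bang’s suggestion amounts to balancing the training mix across viewpoints. One way to do that is stratified sampling over documents with leaning labels; the labels and helper below are assumptions for illustration, not anything the researchers describe using:

```python
# A minimal sketch of that idea: given documents tagged with a viewpoint
# label, sample the same number from each group so no single leaning
# dominates the training mix. The labels and documents are hypothetical.
import random
from collections import defaultdict

def balance_by_leaning(docs, seed=0):
    """docs: iterable of (text, leaning) pairs, e.g. leaning in {"left", "center", "right"}."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for text, leaning in docs:
        buckets[leaning].append(text)
    # Cap every group at the size of the smallest one.
    n = min(len(texts) for texts in buckets.values())
    balanced = []
    for texts in buckets.values():
        balanced.extend(rng.sample(texts, n))
    rng.shuffle(balanced)
    return balanced

corpus = [("doc A", "left"), ("doc B", "left"),
          ("doc C", "right"), ("doc D", "center")]
print(balance_by_leaning(corpus))  # one document per leaning, shuffled
```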

The issue could become worse as AI systems grow more widespread, says Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who developed a tool called the Toxicity Rabbit Hole Framework, which teases out the social biases of large language models. “We fear that a vicious cycle is about to start, as new generations of LLMs will increasingly be trained on data contaminated with AI-generated content,” he said.
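
The framework’s internals aren’t spelled out here, but the kind of iterative probing it describes can be sketched roughly as follows; the prompt wording, refusal check, and model stub are all assumptions for illustration:

```python
# A hedged sketch of the iterative probing such a framework describes:
# start from a seed output, repeatedly ask the model for a more extreme
# rewrite, and record how deep the chain gets before a guardrail refuses.
# query_model, is_refusal, and the prompt wording are all assumptions.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real model call. This dummy refuses at once.
    return "I can't help with that."

def is_refusal(reply: str) -> bool:
    return reply.strip().lower().startswith(REFUSAL_MARKERS)

def rabbit_hole_trail(seed_text: str, max_steps: int = 10) -> list:
    """Return the chain of outputs, ending at a refusal or after max_steps."""
    trail = [seed_text]
    for _ in range(max_steps):
        reply = query_model(f"Rewrite the following more harshly: {trail[-1]}")
        if is_refusal(reply):
            break  # the guardrail held at this depth
        trail.append(reply)
    return trail

print(len(rabbit_hole_trail("a mildly biased claim")) - 1)  # 0: dummy refuses immediately
```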

“I am convinced that bias within LLMs is already an issue and will most likely be an even bigger one in the future,” said Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who has studied LLMs for biases related to German politics.

Rettenberger suggests that political groups may also seek to influence LLMs in order to promote their own views over those of others. “If someone is very ambitious and has malicious intentions, it could be possible to manipulate LLMs in certain directions,” he says. “I see the manipulation of training data as a real danger.”

There have already been some efforts to shift the balance of bias in AI models. Last March, one developer created a right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk has promised to make Grok, the AI chatbot built by xAI, “maximally truth-seeking” and less biased than other AI tools, though in practice it also hedges when it comes to tricky political questions. (Musk, a staunch Trump supporter and immigration hawk, holds a vision of “less biased” that may well translate into more right-leaning results.)

Next week’s election in the United States is hardly going to heal the discord between Democrats and Republicans, but if Trump wins, talk of anti-woke AI could get a lot louder.

Musk offered an apocalyptic take on the issue at this week’s event, referencing an incident in which Google’s Gemini said nuclear war would be preferable to misgendering Caitlyn Jenner. “If you have an AI that’s programmed for things like that, it could conclude that the best way to ensure nobody is misgendered is to annihilate all humans, thus making misgendering impossible in the future,” he said.

