The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of "AI safety," "responsible AI," and "AI fairness" in the skills it expects of members, and introduce a request to prioritize reducing ideological bias in the interest of human flourishing and economic competitiveness.
The instructions are part of an updated cooperative research and development agreement for members of the AI Safety Institute consortium, sent in early March. Previously, the agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequities. Such biases matter enormously because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.
The new agreement removes mention of developing tools "for authenticating content and tracking its provenance" as well as "labeling synthetic content," signaling less interest in tracking misinformation and deepfakes. It also adds an emphasis on putting America first, asking one working group to develop testing tools "to expand America's global AI position."
"The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself," says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.
The researcher believes that ignoring these issues could harm regular users by allowing algorithms that discriminate based on income or other demographics to go unchecked. "Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly," the researcher says.
"It's wild," says another researcher who has worked with the AI Safety Institute in the past. "What does it even mean for humans to flourish?"
Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled "racist" and "woke." He has often cited an incident in which one of Google's models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse — a highly improbable scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for possibly altering the political leanings of large language models, as reported by WIRED.
A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter's recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.
Since January, Musk's so-called Department of Government Efficiency (DOGE) has swept through the US government, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration's aims. Some government agencies, such as the Department of Education, have archived and deleted documents that mention DEI. In recent weeks, DOGE has also targeted NIST, the parent organization of AISI. Dozens of employees have been fired.