Democrats on the House Oversight Committee fired off two dozen requests on Wednesday morning pressing federal agency leaders for information about plans to install AI software throughout federal agencies amid the ongoing purge of the government’s workforce.
The barrage of inquiries follows recent reporting by Wired and The Washington Post on efforts by Elon Musk’s so-called Department of Government Efficiency (DOGE) to automate tasks with a variety of proprietary AI tools and to access sensitive data.
“The American people entrust the federal government with sensitive personal information related to their health, finances, and other biographical information on the basis that this information will not be disclosed or misused without their consent,” the requests read, “including through the use of unauthorized and unaccountable third-party AI software.”
The requests, first obtained by Wired, are signed by Gerald Connolly, a Democratic congressman from Virginia.
The central aim of the requests is to press agencies to demonstrate that any potential use of AI is lawful and that steps are being taken to protect Americans’ private data. Democrats also want to know whether any use of AI will financially benefit Musk, who founded xAI and whose struggling electric car company, Tesla, is working to pivot toward robotics and AI. Democrats are further concerned, Connolly says, that Musk could use his access to sensitive government data for personal enrichment, leveraging the data to “supercharge” his own proprietary AI model, known as Grok.
In the requests, Connolly notes that federal agencies are “bound by multiple statutory requirements in their use of AI software,” chiefly highlighting the Federal Risk and Authorization Management Program, which works to standardize the government’s approach to cloud services and to ensure that AI-based tools are properly assessed for security risks. He also points to the Advancing American AI Act, which requires federal agencies to “prepare and maintain an inventory of the artificial intelligence use cases of the agency,” as well as “make agency inventories available to the public.”
Documents obtained by Wired last week show that DOGE operatives have deployed a proprietary chatbot called GSAi to around 1,500 federal workers at the General Services Administration (GSA). The GSA oversees federal government properties and provides information technology services to many agencies.
A memo obtained by Wired reporters shows that employees have been warned not to feed the software any controlled unclassified information. Other agencies, including the Treasury Department and the Department of Health and Human Services, have considered using a chatbot, though not necessarily GSAi, according to documents viewed by Wired.
Wired has also reported that the US Army is currently using software dubbed CamoGPT to scan its records systems for any references to diversity, equity, inclusion, and accessibility. An Army spokesperson confirmed the existence of the tool but declined to provide additional information about how the Army plans to use it.
In the requests, Connolly writes that the Department of Education holds personally identifiable information on more than 43 million people tied to federal student aid programs. “Due to the opaque and frenetic pace at which DOGE seems to be operating,” he writes, “I am deeply concerned that students’, parents’, family members’, and all others’ sensitive information is being handled by secretive members of the DOGE team for unclear purposes and with no safeguards to prevent disclosure or improper use.” The Washington Post previously reported that DOGE had begun feeding sensitive federal data from Department of Education record systems into AI software to analyze the agency’s spending.
Education Secretary Linda McMahon said on Tuesday that she was moving forward with plans to fire more than a thousand workers at the department, who join hundreds of others who accepted DOGE “buyouts” last month. The Department of Education has now lost nearly half of its workforce, the first step, McMahon says, toward fully abolishing the agency.
“The use of AI to evaluate sensitive data is fraught with serious hazards beyond improper disclosure,” Connolly writes, warning that “inputs used and the parameters selected for analysis may be flawed, errors may be introduced through the design of the AI software, and staff may misinterpret AI recommendations, among other concerns.”
He adds: “Without a clear purpose behind the use of AI, guardrails to ensure appropriate data handling, and adequate oversight and transparency, the application of AI is dangerous and potentially violates federal law.”