Zscaler CEO Jay Chaudhry details how hackers tried to impersonate him using voice cloning to carry out a scam, on “The Claman Countdown.”
Americans seeking to settle a divorce and obtain custody of their children could rack up unforeseen legal costs trying to refute artificial intelligence (AI) deepfake videos, photographs and documents, according to a family law attorney.
Michelle O’Neill, co-founder of the Dallas-based law firm OWLawyers, told Fox Business that courts are seeing a “real uptick” in fake evidence, frequently created with AI. The problem, she said, is becoming increasingly common, and judges are being taught in schools and at conferences to remain vigilant.
One type of fake evidence is AI-generated revenge porn – including fake photos and videos of people engaging in intimate acts. O’Neill notes that even though deepfakes have mostly made headlines when they target celebrities, the issue also impacts ordinary citizens going through breakups or litigating divorces in family court.
The use of artificial intelligence to generate fake images and videos could drive up costs for clients going through a divorce. (iStock / Kirill Kudryavtsev / AFP via Getty Images / Getty Images)
O’Neill’s assertion that these types of AI-generated content have “exploded onto the scene” is backed by statistics showing that the prevalence of deepfake videos, not counting photographs, has increased 900% annually since 2019.
“When a client brings me evidence, I have to question my own clients like never before: Where did you get it? How did you get it? You know, where does it come from?” O’Neill said.
The problem also disproportionately impacts women. AI research firm Sensity has consistently found that between 90% and 95% of all deepfakes online are nonconsensual porn. Roughly 90% of that figure is nonconsensual porn of women.
Despite the staggering numbers, O’Neill says social media platforms have been slow to act.
First lady Melania Trump spoke on Capitol Hill in early March for the first time since her return to the White House, participating in a roundtable with lawmakers and victims of revenge porn and AI-generated deepfakes.
Congress is currently weighing legislation to punish internet abuse involving nonconsensual, explicit images.
A green wireframe model covers an actor’s lower face during the creation of a synthetic facial reanimation video, otherwise known as a deepfake, in London, Britain, Feb. 12, 2019. (Reuters TV / Reuters)
The TAKE IT DOWN Act is a bill introduced in the Senate by Sens. Ted Cruz, R-Texas, and Amy Klobuchar, D-Minn., that would make it a federal crime to publish, or threaten to publish, nonconsensual intimate images, including “digital forgeries” crafted by artificial intelligence. The bill passed the Senate unanimously earlier in 2025, with Cruz saying Monday that he believes it will be passed by the House before becoming law.
While the government pushes for new laws, O’Neill said AI being used to create fraudulent and explicit content remains a “real threat” to the justice system.
“The integrity of our very judicial system depends on the integrity of the evidence you can go in and present. If you can’t even trust the integrity of the evidence that’s presented to a judge – if a judge can’t even rely on the integrity of the evidence they receive – our judicial system can be completely endangered by the existence of artificial intelligence,” she told Fox.
AI, O’Neill notes, also takes a toll on economically disadvantaged Americans who fall prey to fraudulent court evidence. Now, a person contesting the authenticity of admitted evidence may have to pay a forensic expert to conduct an examination and verification testing.
Artificial intelligence deepfakes pose a serious risk to the judicial system, according to family law attorney Michelle O’Neill. (Getty Images / Getty Images)
Fraudulent evidence may even extend to videos purporting to show the abuse of a child when two parties are fighting over custody. If a party lacks the financial means to prove that the evidence of abuse is AI-generated, judges must now decide whether to take the word of the alleged victim or believe the footage that has come into court.
“What happens to people who don’t have the money [to disprove that]? So not only do we have a threat to the integrity of the judicial system, but we also have an access-to-justice problem,” O’Neill said.
The family law attorney noted that judges have mostly seen AI misused to create fake documents, such as falsified bank records or drug tests.
A judge also told O’Neill that they came across falsified audio that cast the other party in a negative light. The recording quality was not credible enough; the judge reprimanded the individual, and the evidence was excluded.
However, with this technology advancing rapidly, O’Neill worries that the gap between what is real and what is AI-generated will only shrink.
“I think it’s a problem on multiple levels of our society. And, you know, drawing attention to this topic is something that is very important,” she said.