Deepfake evidence concerns hit family law

February 19, 2020 (updated November 9, 2021)

He said, she said…or did they really? The rise of deepfakes poses new challenges to the legal profession, and family law isn’t immune. We take a look at the new dangers of deepfake evidence forgery.

A UK family lawyer has related his direct experience of this emerging issue, after his client’s voice was manipulated using voice-forging software to create a fake audio recording of an apparent threat to an ex-spouse during court custody proceedings. Family lawyer Byron James at international law firm Expatriate Law says his case raises the question: can the legal system continue to take video and audio evidence at face value?

The mainstream media has ensured we’ve pretty much all heard of deepfakes by now. Whether it’s a “Muslim” Barack Obama, Kim Jong-un doing the Macarena, or Matt Damon apologising to Matt Damon on Saturday Night Live, plenty of hoax deepfakes of celebrities and world leaders have gone viral over the past few years.

But the danger is that in the real world of the law, “it is now possible, with sufficient content, to create an audio or video file of anyone saying anything”, warns lawyer Mr James. The Telegraph is calling his matter “the first reported case of its kind in the UK courts”.

Mr James’ client was accused of making threats over the phone to his ex-spouse, but insisted he had never said the words attributed to him. The lawyer was then given an audio file which appeared to be a recording of the client saying exactly those words:

“This is always a difficult position to be in as a lawyer, where you put corroborating contrary evidence to your client and ask them if they would like to comment. My client remained, however, adamant that it was not him despite him agreeing it sounded precisely like him, using words he might otherwise use, with his intonations and accent unmistakably him. Was my client simply lying?”

“In the end, perhaps for one of the first times in the family court, we managed to prove that the audio file had not been a faithful recording of a conversation between the parties but rather a deepfake manufacture.”

Deepfake technology (the name derives from “deep learning” and “fake”) uses the power of AI and its deep-learning algorithms to create sophisticated, plausible forged footage. The technology mimics patterns of appearance and behaviour to map movements and words onto other people’s faces and voices, putting words into people’s mouths and potentially making them do far worse things than the Macarena.

You don’t need a computer science degree to use the freely available, user-friendly software, and step-by-step instructions are easily accessible online. Just as deepfakes are becoming easier to create, they are also becoming more convincing as the technology advances.

While results improve with more source material, researchers have recently shown that convincing deepfake videos can be created from a set of just eight original images.

With this digital manipulation of the truth, the fear is that ordinary people may no longer be able to discern what is true and what is not. So far, however, fake and real videos are not completely indistinguishable.

Proving deepfake evidence is a forgery

In practical terms, combating deepfakes in legal settings means proving the evidence has been manufactured. The technology, while advanced, is still not perfect, and there remain “tells” that give away its falseness. Deepfakes suffer from warping and other side effects of their digital transformation, which “leaves a kind of watermark that exposes them as not genuine”. Audio deepfakes, however, may not have such obvious “tells”.

Metadata, too, can be revealing. In the UK family lawyer’s situation, demanding to see the original file of the purported audio evidence resulted in damning proof of its alteration. Unfortunately, it’s possible that metadata can also be manipulated.
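As a rough illustration of the kind of first-pass comparison an examiner might run, here is a minimal Python sketch that fingerprints two copies of a recording using filesystem metadata and a SHA-256 hash. The file names are hypothetical, and this is not the analysis actually performed in Mr James’ case; a hash mismatch shows only that the files differ at the byte level, and working out which copy was edited still requires deeper analysis of container metadata and encoding history.

```python
import hashlib
import os
from datetime import datetime, timezone

def fingerprint(path: str) -> dict:
    """Collect basic filesystem metadata and a SHA-256 hash for an evidence file."""
    stat = os.stat(path)
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "size_bytes": stat.st_size,
        "modified_utc": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
        "sha256": digest.hexdigest(),
    }

# Hypothetical file names: the recording tendered in court vs. the "original"
# the other party was ordered to produce.
tendered = fingerprint("tendered_recording.m4a")
produced = fingerprint("produced_original.m4a")

if tendered["sha256"] != produced["sha256"]:
    print("Files differ at the byte level: the tendered recording is not a faithful copy.")
print(tendered)
print(produced)
```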

Another avenue being explored for authenticating videos is blockchain. Blockchain creates a digital ledger in which, each time a piece of digital content is created or altered, the change is documented in a way that cannot be manipulated.

Researchers in this area say:

“Our solution can help combat deepfake videos and audios by helping users to determine if a video or digital content is traceable to a trusted and reputable source…If a video or digital content is not traceable, then the digital content cannot be trusted.”
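To make the idea concrete, here is a toy Python sketch of the hash-chaining at the heart of such a ledger. This is an in-memory illustration of the general technique, not any particular research group’s system: each entry commits to the content’s hash and to the previous entry’s hash, so retroactively altering any record breaks every link that follows it.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy hash chain: each entry commits to a content hash and the previous entry."""

    def __init__(self):
        self.entries = []

    def record(self, content: bytes, note: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "content_hash": sha256(content),
            "note": note,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # The entry's own hash covers everything above, chaining it to its predecessor.
        entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            if sha256(json.dumps(body, sort_keys=True).encode()) != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True

ledger = ProvenanceLedger()
ledger.record(b"...video bytes...", "original recording uploaded")
ledger.record(b"...edited video bytes...", "trimmed for length")
print(ledger.verify())  # True: the chain is intact

ledger.entries[0]["note"] = "tampered"  # any retroactive edit...
print(ledger.verify())  # ...False: breaks every later link
```

A real blockchain distributes this ledger across many parties so that no single participant can rewrite it, but the tamper-evidence comes from the same chained hashes.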

The fact that unscrupulous individuals will attempt to distort and manipulate the truth in legal proceedings is not new. Handwritten and printed documents have always been vulnerable to manipulation, which is why handwriting experts and forensic computer examiners appear in court to help determine a document’s source or whether someone is its original author.

And of course, there’s Photoshop for the distortion of still images (one common screening technique for doctored photos is sketched below). But now, doctored audio and video evidence is in the mix as well.
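On the still-image side, a widely used first-pass check is error level analysis (ELA): re-save a JPEG at a known quality and look for regions whose recompression error differs from the rest of the image, which can indicate pasted-in or edited areas. Below is a minimal Python sketch using the Pillow imaging library; the file name is hypothetical, and ELA is only a screening aid that flags areas for closer forensic examination, not proof of forgery.

```python
import io
from PIL import Image, ImageChops  # pip install Pillow

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and return an amplified difference image.

    Edited regions often recompress differently and stand out as brighter areas.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Amplify so subtle compression differences become visible.
    return diff.point(lambda px: min(255, px * 15))

# Hypothetical file name for a photo tendered as evidence.
ela_image = error_level_analysis("tendered_photo.jpg")
ela_image.save("tendered_photo_ela.png")
```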

There is a danger that time and resource pressures in the family court system, which push hearings through quickly, could limit courts’ ability to take the time to properly identify deepfake evidence. Instead, courts have typically taken this kind of evidence at face value.

There’s a clear need for the law to catch up with technology in relation to deepfakes potentially being used to manipulate the justice system.

So far there have been limited efforts in the US to combat the dangers of deepfakes. For example, in 2019 US Rep. Yvette Clarke introduced legislation to “require deepfake producers to include a digital watermark indicating that the video contains ‘altered audio or visual elements’”. Obviously, though, this isn’t going to happen voluntarily where individuals have gone to the trouble of doctoring evidence for use in their legal proceedings.

In the Senate, Ben Sasse introduced the Malicious Deep Fake Prohibition Act in 2018, but it expired without any co-sponsors.

Other legal experts suggest that a way to combat deepfakes could be through revenge porn and cybercrime legislation, so that individuals who create deepfake pornographic videos, for instance, can be prosecuted. This could provide a disincentive against their creation and distribution. Some US states, such as Virginia, are taking the lead in this regard.

Currently, there are no federal or state laws in Australia that deal specifically with deepfakes. However, our revenge porn laws “already use language broad enough to cover videos that ‘appear to depict’ the victim of deepfake revenge porn”, according to the government.

The UK family lawyer says the judge in his deepfake case was “really shocked. It would never have occurred to him to look into that.” The lawyer implies the evidence doctoring backfired on the parent faking the evidence, but it would be interesting to know what the actual consequences (if any) were for them.

What were the repercussions? Did the deepfaking parent lose all credibility before the judge, revealing an underlying malevolence and a level of criminality that could affect an assessment of their parental capacity? Perjury charges? I guess we will have to wait until March to find out, when the article is to be published in the International Family Law Journal.

Sources: The Telegraph, Legal Cheek

Do you need assistance with a family law matter? Please contact Canberra family lawyer Cristina Huesch or one of our other experienced solicitors here at Alliance Legal Services on (02) 6223 2400.

Please note our blogs are not legal advice. For information on how to obtain the correct legal advice, please contact Alliance Legal Services.
