Lecturer and author Mike Reilley offers tips for protecting yourself against misleading artificial intelligence
By Ryan Pacheco, Granite State News Collaborative
With New Hampshire’s general election on Tuesday, Nov. 5, just days away, knowing which information to trust is critical for voters who want to make an informed decision at the ballot box.
The rise of artificial intelligence over the past few years has led to concerns about the budding technology’s potential for malicious applications, such as “deepfaking” (using artificial intelligence to create lifelike videos that use a real person’s image and voice in order to impersonate them).
When used with ill intent, AI-generated content can be weaponized to misinform voters.
New Hampshire came face to face with these challenges two days before the state’s presidential primary earlier this year, when robocalls using AI voice-mimicking technology to impersonate President Joe Biden encouraged voters not to vote in the state’s primary election.
Alarmed by that incident, bipartisan members of the New Hampshire Legislature introduced House Bill 1596, requiring disclosure of any AI use in election communications. Both the House and Senate approved the bill, and it was signed into law in August by Gov. Chris Sununu.
Mike Reilley, a senior lecturer in data and digital journalism at the University of Illinois Chicago, has been offering tips and insights on navigating AI-powered misinformation and disinformation. (Misinformation is the broader term for untrue information; disinformation is misinformation that is deliberately and consciously deceptive.) He responded by email to questions from the Granite State News Collaborative.
Reilley is also the lead trainer in the Radio Television Digital News Association/Google News Initiative Election Fact-Checking Program.
Q: You've built a successful career as a journalist and digital media expert. How important is accuracy with regard to the onslaught of information people are consuming?
Mike Reilley: Accuracy is in the DNA of journalism, so it's paramount that we're sharing information that has been vetted before publication or airing. At a time when public trust in journalism is at record lows, we can't lose more credibility due to sloppy fact-checking. With the web, we live in a world where everything is true and nothing is true. It's up to journalists to give readers accuracy and context so they better understand the world around them.
Q: AI has the potential to revolutionize our approaches to research (a clear example obviously being your Journalist's Toolbox), while also placing us in positions where it can be hard to tell whether you're watching a politician or a deepfake. What are your thoughts on striking the right balance between embracing this new era of AI, while limiting its potential to be used maliciously?
Mike Reilley: Technology marches one direction: forward, never backward. We'll never go back to the analog days. Developers aren't going to slow down because journalists want them to. In my training, I often use this example: AI is the big yacht cruising through the harbor. Journalists and fact-checkers are the people on the jet skis chasing after it. We'll always be chasing the technology, whether we like it or not. That's a very pragmatic statement, but it's true.
Q: What advice do you have for people who are worried that AI will make it impossible to authenticate the information they're presented with?
Mike Reilley: I've trained more than 2,500 journalists in detecting deepfakes and fact-checking over the past eight months. It can be done. The fact-checking tools will always lag a bit behind the AI deepfake creation tools, but we'll always have resources to check. It's a matter of knowing what to look for.
Q: What can people do right now to become more savvy about identifying AI misinformation?
Mike Reilley: Start with the source of the information. Is it something or someone you know and trust? Can you confirm it elsewhere? Is there an origin source, and when and where was the information first posted? Look closely at photos before sharing and retweeting things. Just take that pause and remember: if something looks too good to be true, it likely isn't true.
Q: Before the 2024 presidential primary, our state was targeted by a robocall scheme involving an AI voice impersonating President Biden. Now, a new New Hampshire law requires political communications to disclose the use of potentially deceptive AI. What can states do to mitigate these threats before they ramp up?
Mike Reilley: I use Biden's robocall audio in my trainings to show how deepfake detection tools can pick up the tells. I think disclosure is a good idea. I require my students to disclose any AI use in their homework, and they are given training and guardrails on how to use it and how not to. I think limiting tools so they can't replicate people doing and saying things they didn't actually do is a good step. YouTube now has "altered content" warnings on its videos to let you know if a video has been AI-generated.
Like any technology, AI in the hands of bad actors can create chaos. But remember, when you invent the ship, you also invent the shipwreck.
These articles are being shared by partners in the Granite State News Collaborative and the Know Your Vote youth voter guide. The Know Your Vote youth voter guide project was designed, reported and produced by student and young professional journalists from The Clock, The Concord Monitor, The Equinox, Granite State News Collaborative, Keene State College, The Laconia Daily Sun, The Monadnock Ledger-Transcript, Nashua Ink Link and The Presidency and the Press program at Franklin Pierce University. You can see the full guide at www.collaborativenh.org/know-your-vote.