Your daughter is calling you in distress. While she was on vacation, her wallet and passport were stolen. She desperately needs you to wire her $5,000. You’re a little surprised because you didn’t know she was on vacation, and she has never asked you for a wire transfer. But it’s your child; how could you say no?

You send her the money and call her to check if she received it. She has no idea what you’re talking about! She’s not on vacation and says she didn’t call you. What just happened?

You were the victim of a high-tech scam.

It has long been possible to spoof other people’s phone numbers, and thanks to artificial intelligence (AI) researchers at Microsoft, it is now possible to impersonate someone’s voice with just a three-second clip of their speech. It could be as simple as calling your target, recording them saying, “Hello, who am I speaking with?” and hanging up.

Thankfully, this tool has not been released to the public, but it is only a matter of time until the method is replicated by another organization that may release it. A hacker could also gain access to the tool and use it themselves, or sell it on the black market.

Just the start

Here’s another example: A scandal is in the news. Photos show a cabinet minister receiving an envelope full of cash from a lobbyist, counting the cash and shaking the lobbyist’s hand. The minister insists the photos must be forged. The prime minister asks the minister to step down because the public outcry has become too great, and the minister’s party ends up losing the next election.

Was it a forgery? As of 2022, we may never know for sure. Thanks to newly released tools, you can now convincingly place anyone’s face on any image. All you need is some real photos of them. For public figures such as a cabinet minister or a well-known lobbyist, these photos are easy to find on the internet. You could then hire two actors to play out the scene to make it more realistic, and superimpose the fake faces over the real ones. This could be done by the opposition research arms of political parties to discredit opponents, or by foreign intelligence services seeking to cause political instability.

Time to worry

While publicly available technology is not quite advanced enough to pull off the first example I gave, it is getting very close, and I believe we should all be worried now. The second example is already feasible.

What solutions will there be once these situations arise? Should we never answer our phones again? Should we assume all photos we see could be fake?

What does it mean for our society when anyone’s voice or image can be manipulated to have them saying or doing anything? Will evidence in criminal cases still be valid?

Imagine a future where it is very difficult to convict anyone accused of a criminal offence based on photos, videos or audio recordings, because any of these could be easily fabricated. Now imagine the opposite: a future where it’s incredibly easy to convict someone, because anyone who wants to see them locked away could fabricate the evidence. Who would want to do such a thing? A scorned lover, a detective or prosecutor eager to close a case, or a malicious hacker half a world away who enjoys sending strangers to prison for their own amusement. At some point in the near future, anyone, anywhere, at any time could do it.

That is only the beginning. AI will continue to become more advanced, year by year, and once the genie is out of the bottle, it will be difficult to get it back in. Once open-source software is released on the internet, someone out there has already saved a copy for themselves, even if the original copy is taken down.

Why are AI researchers even developing such tools? Because it’s a fun intellectual challenge, consequences be damned, and because they are paid to do the work.

But there are consequences. J. Robert Oppenheimer, the scientific head of the Manhattan Project, later regretted his creation when he saw the destructive effects of the nuclear arms race, and he became an activist against nuclear proliferation. I believe AI research will be the 21st-century equivalent of the Manhattan Project, except that instead of a handful of nations having access to the technology, every person in the world will have access.

Is there any way these negative outcomes can be prevented? It would be difficult without buy-in from the U.S., international co-operation, and a sense of urgency seen only at the start of the COVID-19 pandemic and during the Second World War. Here are some possible ways to minimize the damage.

1. Before any government agency funds or provides tax breaks to any kind of AI research, the funder should assess the tool’s potential for criminal misuse, regardless of its intended use.

2. These tools and their source code should not be released publicly. They could still be used nefariously by anyone who licenses them, but this barrier would at least stop these tools from becoming an app on everyone’s phone. Consider the nuclear weapons analogy: the fewer actors who have them, the better. The government would need to co-ordinate with source code repositories such as GitHub to figure out how best to achieve this.

3. Legislation should be passed to prevent certain types of AI tools from being developed in the first place, or at least to limit who can access those tools (much like the national security export restrictions that the U.S. places on some cryptographic software). This measure would face opposition from some software development companies.

4. The sale and use of the hardware necessary to run the most resource-intensive AI applications (for example, data centre-focused graphics processing units) should be restricted to approved researchers (as some chemicals currently are). If computers are too slow to run these tools, they stop being practical. This measure would face opposition from some hardware manufacturers and data centre providers.

The Chinese and Russian governments would likely never agree to ending all AI research, which could create an arms race, although it is possible they would be willing to restrict their citizens’ access to these tools.

This may seem like a restriction on freedom for something that so far has not done any major harm. But imagine if two years from now, tens of thousands of retirees have lost all their retirement savings due to fake phone calls such as the first example I cited. How much would that cost the government in social assistance payments? How much would it cost the victims and their families in stress?

Do you know anyone who opens email attachments or clicks links from unknown senders, despite warnings from their IT department, children or grandchildren not to do so? Imagine telling these same people to ask their family members personal verification questions every time they want to speak with them. It wouldn’t be realistic. Why should that burden be shifted to the public rather than to the originators of this new problem?

Governments around the world have a reputation for moving slowly and not understanding new technology. AI could be an existential risk to our species as serious as climate change and nuclear weapons. We need to address it with proportionate urgency.

Andrew Cichocki
Andrew Cichocki is a software engineer living in Toronto. He studied political science at Toronto Metropolitan University and is interested in how public policy can positively impact society. Twitter: @AndrewCichocki.
