Expert warns of dark side of artificial intelligence
As developments in artificial intelligence (AI) continue to evolve at an exponential rate, local stakeholder Godiva Golding is calling for local regulation of the technology.
Golding is the founder and CEO of Steamhouse, an education hub focused on making students digitally literate and technically skilled.
She explained that while developments in AI are fast expanding, the technology used to create harmful products like deepfakes is not ubiquitous. Deepfakes are synthetic media that have been digitally manipulated to replace one person's likeness convincingly with that of another.
"The power of AI is supposed to be able to, like, save us, generate artificial human intelligence. We're still not at the point of AGI (artificial general intelligence) as yet. Most of the things that we do is programmatic and that means that, when it comes to even generating deepfakes, actually there are persons with ill intent who are often generating deepfakes," said Golding. "And it goes back to those core skills that any digital society needs to have. We need to be able to think critically, do your research, and validate the things that you're seeing. It's not as simple as googling this thing to see if it's right."
But she added that people also have a responsibility not to be gullible.
"What it is now in terms of like a deepfake in an election, the truth of the matter is, a lot of the social media companies and the content that we consume is consumed in an echo chamber. And so that means that persons aren't really sitting down to evaluate the sources. We can go back to some of the most recent elections, 2020 elections, where information would have been circulated online and persons just simply pass it on. Why? Because they buy into their biases very quickly," she said.
Speaking on the risks of AI and more specifically deepfakes, the country's de facto Information Minister Robert Nesta Morgan told THE STAR that the risks posed by deepfake technology are grave, particularly in the context of cyberbullying and subversive political actors who may spread false information and sow discord.
"To address these risks, we must prioritise educating the public about the dangers of deepfakes and promoting media literacy to help individuals identify and avoid them," he said.
He continued, "One of the key challenges with deepfakes is that they can be incredibly convincing, making it difficult for individuals to discern what is real and what is not. This can lead to cyberbullying and harassment, particularly of women who may be targeted with sexualised or demeaning content. By educating the public about deepfakes and their potential harms, we can help individuals be more discerning consumers of information and better equipped to protect themselves online."
Golding explained, "We're gonna have to draw some lines, not in the sand. We're gonna have to draw some very strict lines in terms of what is ethical, what isn't ethical, what is the use that we allow and the uses that we don't allow. And make sure these things are put down in legislation."
Though Golding admits the uses of AI can be very dangerous, she is adamant that this should not prohibit progress.
"We have to strike this balance between not letting our fears prevent us from progressing. Beyond generative AI, there are some ... that can help in things like cancer detection or we can listen to somebody who is wheezing and tell if they have a lung infection, is it pneumonia? So all of these things are new possibilities and it will allow for early detection in things like health," she said. "But, if we shut down and say 'hey we shouldn't do anything in AI' we never get to benefit from those incremental progresses, and the progress isn't even incremental because this isn't something we would even be talking about 10 years ago when I started messing around with it. We are on the brink of something exponential, so we're gonna have to be asking all the questions."