Artificial Intelligence (AI) is developing rapidly. It can make certain tasks more efficient, including content creation and research. But it is also creating new, extremely difficult problems, from intellectual property disputes to the exploitation of children.
The Pope, the White House, and a coalition of 18 countries have all pushed for guidelines and mandates on the future of AI. But the technology is evolving much faster than policy and government can respond.
One of the most terrifying of these developments is deepfake technology, which uses deep learning to create fake voices, images, videos, and conversations. The results are so realistic that they earned their own name: “deepfake,” a term first coined on Reddit in 2017.
And deepfakes are quickly getting out of control. Alibaba researchers, for example, created “Animate Anyone,” an AI tool that turns static images into dancing videos. One of the top comments on the project’s GitHub page reads, “…can’t wait to use this for porn.”
Or consider the New Jersey teens who created deepfake nude images of their female classmates. We’re afraid this is just the beginning. Revenge porn and child sexual abuse material (CSAM) will both be taken to another, more sinister level.
The challenge with deepfakes is that there isn’t a strong technical solution, like a filter, to prevent them. There are multiple steps parents can take to mitigate risk, and we share those below. But we also desperately need a regulatory solution to rein in the technology, and governments seem woefully caught off-guard by these advances.