Australian Researchers Develop AI Tool to Tackle Harmful Deepfakes on Social Media

3 days ago | Cyber Security


Jakarta, INTI – The rapid spread of AI-based image and video manipulation is raising serious concerns worldwide, including in Australia. Criminals have misused deepfake technology to produce fake content that looks real, including severe abuses such as child exploitation, extortion, and identity fraud. To address this threat, Monash University, together with the Australian Federal Police (AFP), has announced a new artificial intelligence tool that takes a preventive, technology-driven approach and offers the public a way to protect their digital privacy.

“Data Poisoning” Technique Becomes a New Weapon

The research team explained that the technology uses a method called "data poisoning," which makes it significantly harder to manipulate images or generate deepfake content from them. The tool makes subtle pixel-level changes to an image, causing malicious AI models that ingest it to produce distorted, biased, blurred, or even unrecognizable output.
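The researchers have not published Silverer's underlying algorithm, but the general mechanism of pixel-level data poisoning can be illustrated. The minimal Python sketch below adds a small, bounded perturbation to every pixel of an image before it is shared. The function name `poison_image`, the filenames, and the random-noise scheme are illustrative assumptions, not Silverer's actual method.

```python
# A minimal sketch of pixel-level "data poisoning": add a small, visually
# subtle perturbation to an image before sharing it. This is NOT Silverer's
# published algorithm; uniform random noise stands in for the crafted,
# model-targeted perturbation a real protection tool would compute.
import numpy as np
from PIL import Image

def poison_image(path_in: str, path_out: str, epsilon: int = 4) -> None:
    """Add a bounded, low-amplitude perturbation to every pixel channel.

    `epsilon` caps the per-channel change (in 0-255 units), keeping the
    edit largely imperceptible to humans while shifting the pixel
    statistics a generative model would condition on.
    """
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng()
    # Uniform noise in [-epsilon, +epsilon], drawn per pixel channel.
    noise = rng.integers(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    # Save losslessly (PNG) so the perturbation survives re-encoding.
    Image.fromarray(poisoned).save(path_out)

# Hypothetical filenames, for illustration only.
poison_image("photo.jpg", "photo_protected.png")
```

In practice, tools of this kind optimize the perturbation adversarially against a target model's feature extractor rather than sampling it at random, which is what makes the resulting AI output distorted or unusable; the sketch only conveys the "subtle pixel changes" idea described above.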

According to AiLECS, the joint project between the AFP and Monash University, this approach will support investigators by reducing the volume of fake material that needs to be reviewed and by making manipulation easier to trace.

Silverer: A Prototype Tool to Protect Personal Images

The AI scrambler tool is named Silverer. Currently at the prototype stage, it is designed to let everyday Australians protect their privacy when uploading photos or other visual content to social media.

“Images can be modified first using Silverer, and when malicious AI tries to manipulate them, the result becomes distorted, blurred, and unclear,” said Elizabeth Perry, researcher and project lead for AiLECS.

Warning: AI-Based Child Exploitation Deepfakes Increasing

The AFP warns that AI-generated child abuse material is increasing significantly. Open-source tools with very low barriers to entry make it easy for offenders to produce manipulated images or videos in a short time. Silverer is therefore seen as an important preventive step that narrows the opportunities for future digital crime.

Conclusion

The joint efforts of Monash University and AFP demonstrate that AI is not only a tool that can be weaponized, but can also be developed into a smart defense system. Silverer is a real example of how technology can reduce the potential abuse of deepfake content, while protecting the public and assisting investigators in combating visual cybercrime.

