Maritime247.com
Technology

New Technique Can Protect Images from AI

August 12, 2025

New Technique Developed by Australian Researchers to Prevent Unauthorized AI Learning from Images

A new technique created by Australian researchers could stop unauthorized artificial intelligence (AI) systems from learning from photos, artwork, and other visual content. The method, developed by CSIRO, Australia’s national science agency, in collaboration with the Cyber Security Cooperative Research Centre (CSCRC) and the University of Chicago, subtly alters content to make it unreadable to AI models while leaving it visually unchanged to the human eye.
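The idea of an imperceptible protective layer can be sketched as a bounded per-pixel perturbation. The snippet below is a generic illustration, not the researchers' actual method; the function name, the epsilon budget, and the use of random noise are all assumptions. It caps each pixel change at a small epsilon so the image looks unchanged to a human, whereas a real protection scheme would optimize that noise to disrupt model training:

```python
import numpy as np

def add_bounded_perturbation(image, epsilon=4 / 255, seed=0):
    """Add a small pixel-level perturbation bounded by epsilon (L-infinity).

    Illustrative only: random noise stands in for the carefully
    optimized perturbation a real unlearnability scheme would compute.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Keep pixel values in the valid [0, 1] range after perturbation.
    return np.clip(image + noise, 0.0, 1.0)

# A dummy 8x8 grayscale "image" with mid-gray pixels.
img = np.full((8, 8), 0.5)
protected = add_bounded_perturbation(img)
# Every pixel moved by at most epsilon, so the change is imperceptible.
print(np.max(np.abs(protected - img)) <= 4 / 255)
```

The key property is the hard cap on per-pixel change: the protected copy is interchangeable with the original for human viewers, which is what lets such a layer be applied automatically at upload time.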

One key application is protecting sensitive data, such as satellite imagery or cyber threat information, from being absorbed by AI models, particularly within defense organizations. The technique could also help artists, organizations, and social media users safeguard their work and personal data from being used to train AI systems or create deepfakes. For instance, a social media user could automatically apply a protective layer to their images before posting, preventing AI systems from learning facial features for deepfake manipulation.

This technique establishes a boundary on what an AI system can learn from the protected content, providing a mathematical assurance that this protection remains intact even against adaptive attacks or retraining efforts. Dr. Derui Wang, a scientist at CSIRO, emphasized that this method offers a heightened level of certainty for individuals sharing content online.

“Our approach is distinct in that we can mathematically ensure that unauthorized machine learning models are unable to learn beyond a specified threshold from the content. This offers a robust safeguard for social media users, content creators, and organizations,” Wang explained.
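In rough notation (illustrative only; the symbols and formalism here are assumptions, not taken from the paper), a guarantee of this kind caps the best performance any model trained on the protected data can achieve:

```latex
\max_{\theta}\; \mathrm{Acc}\!\left(f_{\theta};\, \tilde{D}\right) \;\le\; \tau
```

where $f_{\theta}$ ranges over models trained on the protected dataset $\tilde{D}$ and $\tau$ is the certified learning threshold; the claim quoted above is that this bound holds even under adaptive attacks or retraining.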


Moreover, the application of this technique can be automated on a large scale. Wang stated, “A social media platform or website could integrate this protective layer into every uploaded image, potentially mitigating the proliferation of deepfakes, reducing instances of intellectual property theft, and empowering users to maintain control over their content.”

While the current implementation of this method is focused on images, there are plans to extend it to text, music, and videos in the future. Although the technology is still in the theoretical stage, with results validated in a controlled laboratory environment, the code is publicly accessible on GitHub for academic purposes. The research team is actively seeking partnerships with various sectors, including AI safety and ethics, defense, cybersecurity, and academia.

The paper detailing this technique, titled “Provably Unlearnable Data Examples,” was presented at the 2025 Network and Distributed System Security Symposium (NDSS) and was honored with the Distinguished Paper Award.
