Maritime247.com
New Technique Can Protect Images from AI

August 12, 2025

New Technique Developed by Australian Researchers to Prevent Unauthorized AI Learning from Images

A new technique created by Australian researchers could prevent unauthorized artificial intelligence (AI) systems from learning from images, photos, artwork, and other visual content. The method, developed by CSIRO, Australia's national science agency, in collaboration with the Cyber Security Cooperative Research Centre (CSCRC) and the University of Chicago, subtly alters content so that it becomes unreadable to AI models while remaining visually unchanged to the human eye.
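The paper's actual construction is not reproduced here, but the general idea of an imperceptible alteration can be sketched: a perturbation is kept within a small per-pixel bound so the image looks unchanged while its pixel values differ from the original. A minimal illustration, assuming NumPy and images normalized to the [0, 1] range (the function name, the bound of 8/255, and the random perturbation are illustrative choices, not the paper's method):

```python
import numpy as np

def apply_protective_perturbation(image, perturbation, epsilon=8 / 255):
    """Add a perturbation to an image while keeping the change imperceptible.

    Illustrative sketch only: real protection schemes optimize the
    perturbation against surrogate models. Here we only show the
    L-infinity clipping that bounds how far any pixel can move.
    """
    # Restrict every element of the perturbation to [-epsilon, epsilon].
    delta = np.clip(perturbation, -epsilon, epsilon)
    # Apply the perturbation and keep pixels in the valid [0, 1] range.
    return np.clip(image + delta, 0.0, 1.0)

# Example on a dummy 2x2 grayscale "image" with random noise.
rng = np.random.default_rng(0)
img = rng.random((2, 2))
noise = rng.normal(scale=0.1, size=(2, 2))
out = apply_protective_perturbation(img, noise)
```

With a bound of 8/255 (about 3% of the pixel range), no pixel changes enough for the eye to notice, which is the property the article describes as "visually unchanged."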

One key application is protecting sensitive data, such as satellite imagery or cyber threat information, from being absorbed by AI models, particularly within defense organizations. The technique could also help artists, organizations, and social media users safeguard their work and personal data from being used to train AI systems or create deepfakes. For instance, a social media user could automatically apply a protective layer to their images before posting, preventing AI systems from learning facial features for deepfake manipulation.

The technique sets a limit on what an AI system can learn from protected content, with a mathematical assurance that the protection holds even against adaptive attacks or retraining. Dr. Derui Wang, a scientist at CSIRO, said the method offers a higher level of certainty for people sharing content online.

“Our approach is distinct in that we can mathematically ensure that unauthorized machine learning models are unable to learn beyond a specified threshold from the content. This offers a robust safeguard for social media users, content creators, and organizations,” Wang explained.


Moreover, the application of this technique can be automated on a large scale. Wang stated, “A social media platform or website could integrate this protective layer into every uploaded image, potentially mitigating the proliferation of deepfakes, reducing instances of intellectual property theft, and empowering users to maintain control over their content.”

While the current implementation focuses on images, the researchers plan to extend the method to text, music, and video. The results have so far been validated in a controlled laboratory environment, and the code is publicly available on GitHub for academic use. The research team is actively seeking partnerships across AI safety and ethics, defense, cybersecurity, and academia.

The paper detailing this technique, titled “Provably Unlearnable Data Examples,” was presented at the 2025 Network and Distributed System Security Symposium (NDSS) and was honored with the Distinguished Paper Award.


© 2025 maritime247.com - All rights reserved.