• RisingAttacK quietly alters key features, tricking AI without changing the image’s appearance
  • Vision systems in self-driving cars could be blinded by nearly invisible image modifications
  • The attack fools top AI models used in cars, cameras, and healthcare diagnostics

Artificial intelligence is becoming more deeply integrated into technologies that rely on visual recognition, from autonomous vehicles to medical imaging – but this growing reliance also brings security risks, experts have warned.

A new method called RisingAttacK could threaten the reliability of these systems by silently manipulating what AI sees.

In theory, this could cause an AI system to miss or misidentify objects, even when the images appear unchanged to human observers.

Targeted deception through minimal image alteration

Developed by researchers at North Carolina State University, RisingAttacK is a form of adversarial attack that subtly alters visual input to deceive AI models.

The technique does not require large or obvious image changes; instead, it targets specific features within an image that are essential for recognition.

“This requires some computational power, but allows us to make very small, targeted changes to the key features that make the attack successful,” said Tianfu Wu, associate professor of electrical and computer engineering and co-corresponding author of the study.

These carefully engineered changes are imperceptible to the naked eye, so the manipulated images look entirely normal to human observers.

“The end result is that two images may look identical to human eyes, and we might clearly see a car in both images,” Wu explained.

“But due to RisingAttacK, the AI would see a car in the first image but would not see a car in the second image.”

This can compromise the safety of critical systems like those found in self-driving cars, which rely on vision models to detect traffic signs, pedestrians, and other vehicles.

If AI is manipulated into not seeing a stop sign or another car, the consequences could be severe.

The team tested the method against four widely used vision architectures: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. All four were successfully manipulated.
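
The article does not spell out RisingAttacK's exact optimisation, but it belongs to the broad family of gradient-based adversarial attacks. The sketch below shows a generic targeted projected-gradient-descent perturbation against a pretrained ResNet-50 (one of the tested architectures) in PyTorch; the perturbation budget, step size, and iteration count are illustrative assumptions, not figures from the study, and the code is not the researchers' RisingAttacK implementation.

```python
# Generic targeted adversarial perturbation (projected gradient descent).
# NOT the RisingAttacK algorithm; it only illustrates the core idea of tiny,
# optimised pixel changes that flip a model's prediction while staying
# visually imperceptible. Assumes PyTorch and torchvision are installed.
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def targeted_pgd(image, target_class, eps=2/255, step=0.5/255, iters=40):
    """Nudge `image` (a [0,1] tensor of shape 1x3xHxW) toward being classified
    as `target_class`, keeping every pixel within +/- eps of the original."""
    normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                            std=[0.229, 0.224, 0.225])
    x_adv = image.clone().detach()
    target = torch.tensor([target_class])
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logits = model(normalize(x_adv))
        loss = torch.nn.functional.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Step against the gradient to raise the target-class score,
        # then project back into the eps-ball around the original image.
        x_adv = x_adv.detach() - step * grad.sign()
        x_adv = image + (x_adv - image).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

Even with a per-pixel budget of 2/255, far smaller than the eye can notice, perturbations of this kind are typically enough to flip a model's top prediction toward the chosen target.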

“We can influence the AI’s ability to see any of the top 20 or 30 targets it was trained to identify,” Wu said, citing common examples like cars, bicycles, pedestrians, and stop signs.

While the current focus is on computer vision, the researchers are already looking at broader implications.

“We are now in the process of determining how effective the technique is at attacking other AI systems, such as large language models,” Wu noted.

The long-term aim, he added, is not simply to expose vulnerabilities but to guide the development of more secure systems.

“Moving forward, the goal is to develop techniques that can successfully defend against such attacks.”

As attackers continue to discover new methods to interfere with AI behavior, the need for stronger digital safeguards becomes more urgent.

Via Techxplore
