LLMs’ Data-Control Path Insecurity – Schneier on Security

https://www.schneier.com/blog/archives/2024/05/llms-data-control-path-insecurity.html

Any LLM application that interacts with untrusted users—think of a chatbot embedded in a website—will be vulnerable to attack. It’s hard to think of an LLM application that isn’t vulnerable in some way.

Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data—whether it be training data, text prompts, or other input into the LLM—is mixed up with the commands that tell the LLM what to do, the system will be vulnerable.

But unlike the phone system, we can’t separate an LLM’s data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it’s the very thing that enables prompt injection.

Yacht sinks after being rammed by orcas in Strait of Gibraltar – BBC

https://www.bbc.com/news/articles/cmm330y6d2qo

A sailing yacht has sunk in Moroccan waters in the Strait of Gibraltar after being rammed by an unknown number of orcas, Spain’s maritime rescue services said.

Two people onboard the vessel, Alboran Cognac, were rescued by a passing oil tanker, after the incident at 0900 local time (0800 BST) on Sunday.

It is the latest in a series of orca rammings of vessels around the Strait of Gibraltar over the past four years.

Scientists are unsure about the exact causes of the behaviour, but believe the highly intelligent mammals could be displaying “copycat” or “playful” behaviour.

Solar storms made GPS tractors miss their mark at the worst time for farmers - The Verge

https://www.theverge.com/2024/5/12/24154779/solar-storms-farmer-gps-john-deer

Farmers had to stop planting their crops over the weekend as the strongest solar storms since 2003 battered the GPS satellites used by self-driving tractors

LandMark Implement, which owns John Deere dealerships in Kansas and Nebraska, warned farmers on Friday to turn off a feature that uses a fixed receiver to correct tractors’ paths. LandMark updated its post Saturday, saying it expects that when farmers tend crops later, “rows won’t be where the AutoPath lines think they are” and that it would be “difficult - if not impossible” for the self-driving tractor feature to work in fields planted while the GPS systems were hampered.

New Attack Against Self-Driving Car AI - Schneier on Security

https://www.schneier.com/blog/archives/2024/05/new-attack-against-self-driving-car-ai.html

This is another attack that convinces the AI to ignore road signs:

Please Rate Your Experience • Robb Knight

https://rknight.me/blog/please-rate-your-experience/

The UK Bans Default Passwords - Schneier on Security

https://www.schneier.com/blog/archives/2024/05/the-uk-bans-default-passwords.html

The UK is the first country to ban default passwords on IoT devices.

Facebook snooped on users’ Snapchat traffic in secret project, documents reveal | TechCrunch

https://techcrunch.com/2024/03/26/facebook-secret-project-snooped-snapchat-user-traffic/?guccounter=1

In 2016, Facebook launched a secret project designed to intercept and decrypt the network traffic between people using Snapchat’s app and its servers.

Hardware Vulnerability in Apple’s M-Series Chips - Schneier on Security

https://www.schneier.com/blog/archives/2024/03/hardware-vulnerability-in-apples-m-series-chips.html

Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse. On the other hand, like many of these hardware side-channel attacks, it’s not possible to patch.

ASCII art elicits harmful responses from 5 major AI chatbots | Ars Technica

https://arstechnica.com/security/2024/03/researchers-use-ascii-art-to-elicit-harmful-responses-from-5-major-ai-chatbots/

Researchers have discovered a new way to hack AI assistants that uses a surprisingly old-school method: ASCII art. It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.

Wi-Fi jamming to knock out cameras suspected in nine Minnesota burglaries -- smart security systems vulnerable as tech becomes cheaper and easier to acquire | Tom's Hardware

https://www.tomshardware.com/networking/wi-fi-jamming-to-knock-out-cameras-suspected-in-nine-minnesota-burglaries-smart-security-systems-vulnerable-as-tech-becomes-cheaper-and-easier-to-acquire

Edina police suspect that nine burglaries in the last six months have been undertaken with Wi-Fi jammer(s) deployed to ensure incriminating video evidence wasn’t available to investigators.

Worryingly, Wi-Fi jamming is almost a trivial activity for potential thieves in 2024. KARE11 notes that it could buy jammers online very easily and cheaply, with prices ranging from $40 to $1,000. Jammers are not legal to use in the U.S. but they are very easy to buy online.