The 8 worst technology failures of 2024 | MIT Technology Review
https://www.technologyreview.com/2024/12/17/1108883/the-8-worst-technology-failures-of-2024/
Vertical farms, woke AI, and 23andMe made our annual list of failed tech.
https://www.technologyreview.com/2024/12/17/1108883/the-8-worst-technology-failures-of-2024/
Vertical farms, woke AI, and 23andMe made our annual list of failed tech.
https://spectrum.ieee.org/jailbreak-llm
Researchers induced bots to ignore their safeguards without exception.
AI chatbots such as ChatGPT and other applications powered by large language models (LLMs) have exploded in popularity, leading a number of companies to explore LLM-driven robots. However, a new study now reveals an automated way to hack into such machines with 100 percent success. By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs.
https://pxlnv.com/linklog/siri-invented-calendar-event/
I saw a suggestion from Siri that I turn on Do Not Disturb until the end of an event in my calendar -- a reservation at a restaurant from 8:30 until 10:00 this morning. No such matching event was in Fantastical. It was, however, shown in the Calendar app as a Siri Suggestion.
Amid an unprecedented cyberattack on telecommunications companies such as AT&T and Verizon, U.S. officials have recommended that Americans use encrypted messaging apps to ensure their communications stay hidden from foreign hackers.
The hacking campaign, nicknamed Salt Typhoon by Microsoft, is one of the largest intelligence compromises in U.S. history, and it has not yet been fully remediated. Officials on a news call Tuesday refused to set a timetable for declaring the country's telecommunications systems free of interlopers. Officials had told NBC News that China hacked AT&T, Verizon and Lumen Technologies to spy on customers.
Mange tusen kinesiske biler ruller nå på norske veier. Sikkerhetsekspert advarer om potensialet for overvåkning som finnes i disse bilene.
...
I prosjektet de kaller «Lion Cage», som har fått omtale både internasjonalt og i Norge, har de gått grundig gjennom hvordan bilen fungerer, hva slags data den samler inn og hvor den sender dem.
– Vi finner forbausende mye datatrafikk mellom bilen og Kina. Det var en overraskelse. Vi hadde ikke forventa det, sier han.
Prosjektet har funnet ut at bilen kommuniserer med USA, Canada, Kina, men også Russland og Australia.
– Og så ser vi også hvor mye data som sendes. Det er ganske interessant. Selv om bilen er slått av, så vil bilen kommunisere.
...
Årsaken til at sikkerhetsekspertene har sett så grundig på de kinesiske bilene er den kinesiske etterretningsloven.
Den tolkes av mange som at ethvert kinesisk selskap må samarbeide med myndighetene når de blir bedt om det.
https://www.wired.com/story/phone-data-us-soldiers-spies-nuclear-germany/
More than 3 billion phone coordinates collected by a US data broker expose the detailed movements of US military and intelligence workers in Germany—and the Pentagon is powerless to stop it.
https://www.abc.net.au/news/2024-10-05/robot-vacuum-deebot-ecovacs-photos-ai/104416632
Ecovacs robot vacuums, which have been found to suffer from critical cybersecurity flaws, are collecting photos, videos and voice recordings – taken inside customers' houses – to train the company's AI models.
https://therecord.media/ford-patent-application-in-vehicle-listening-advertising
Ford Motor Company is seeking a patent for technology that would allow it to tailor in-car advertising by listening to conversations among vehicle occupants, as well as by analyzing a car’s historical location and other data, according to a patent application published late last month.
...
Ford quietly walked away from another controversial patent application last October after a firestorm of criticism for its plans for a system that would commandeer vehicles whose owners were late to pay and allow them to repossess themselves.
https://www.abc.net.au/news/2024-10-04/robot-vacuum-hacked-photos-camera-audio/104414020
The largest home robotics company in the world has failed to fix security issues with its robot vacuums despite being warned about them last year.
Without even entering the building, we were able to silently take photos of the (consenting) owner of a device made by Chinese giant Ecovacs.
...
Ecovacs initially said its users “do not need to worry excessively” about Giese’s findings.
After he first revealed the vulnerability in public, the company’s security committee downplayed the issue, saying it requires “specialised hacking tools and physical access to the device”.
It’s hard to square their statement with the reality. All it had taken was my $300 smartphone, and I hadn’t even laid eyes on Sean’s robot until after hacking into it.
Ecovacs eventually said it would fix this security issue. At the time of publication, only some models have been updated to prevent this attack.
Several models — including the latest flagship model released in July this year — remain vulnerable.
Proposed guidelines aim to inject badly needed common sense into password hygiene.
- Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.
- Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.
- Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.
- Verifiers and CSPs SHOULD accept Unicode [ISO/ISC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.
- Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
- Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
- Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.
- Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., “What was the name of your first pet?”) or security questions when choosing passwords.
- Verifiers SHALL verify the entire submitted password (i.e., not truncate it).
https://www.theguardian.com/technology/2024/sep/19/social-media-companies-surveillance-ftc
Social media and online video companies are collecting huge troves of your personal information on and off their websites or apps and sharing it with a wide range of third-party entities, a new Federal Trade Commission (FTC) staff report on nine tech companies confirms.
The TV business isn't just about selling TVs anymore. Companies are increasingly seeing viewers, not TV sets, as their most lucrative asset.
Over the past few years, TV makers have seen rising financial success from TV operating systems that can show viewers ads and analyze their responses. Rather than selling as many TVs as possible, brands like LG, Samsung, Roku, and Vizio are increasingly, if not primarily, seeking recurring revenue from already-sold TVs via ad sales and tracking.
How did we get here? And what implications does an ad- and data-obsessed industry have for the future of TVs and the people watching them?
This post is inspired by the recent and concerning news that Telegram’s CEO Pavel Durov has been arrested by French authorities for its failure to sufficiently moderate content. While I don’t know the details, the use of criminal charges to coerce social media companies is a pretty worrying escalation, and I hope there’s more to the story.
But this arrest is not what I want to talk about today.
What I do want to talk about is one specific detail of the reporting. Specifically: the fact that nearly every news report about the arrest refers to Telegram as an “encrypted messaging app.” Here are just a few examples:
This phrasing drives me nuts because in a very limited technical sense it’s not wrong. Yet in every sense that matters, it fundamentally misrepresents what Telegram is and how it works in practice. And this misrepresentation is bad for both journalists and particularly for Telegram’s users, many of whom could be badly hurt as a result.
Facing time constraints, Sakana's "AI Scientist" attempted to change limits placed by researchers.
All five systems tested were found to be ‘highly vulnerable’ to attempts to elicit harmful responses
https://www.bbc.com/news/articles/cmm330y6d2qo
A sailing yacht has sunk in Moroccan waters in the Strait of Gibraltar after being rammed by an unknown number of orcas, Spain's maritime rescue services said.
Two people onboard the vessel, Alboran Cognac, were rescued by a passing oil tanker, after the incident at 0900 local time (0800 BST) on Sunday.
It is the latest in a series of orca rammings of vessels around the Strait of Gibraltar over the past four years.
Scientists are unsure about the exact causes of the behaviour, but believe the highly intelligent mammals could be displaying "copycat" or "playful" behaviour.
https://www.schneier.com/blog/archives/2024/05/llms-data-control-path-insecurity.html
Any LLM application that interacts with untrusted users—think of a chatbot embedded in a website—will be vulnerable to attack. It’s hard to think of an LLM application that isn’t vulnerable in some way.
Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data—whether it be training data, text prompts, or other input into the LLM—is mixed up with the commands that tell the LLM what to do, the system will be vulnerable.
But unlike the phone system, we can’t separate an LLM’s data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it’s the very thing that enables prompt injection.
https://www.theverge.com/2024/5/12/24154779/solar-storms-farmer-gps-john-deer
Farmers had to stop planting their crops over the weekend as the strongest solar storms since 2003 battered the GPS satellites used by self-driving tractors
…
LandMark Implement, which owns John Deere dealerships in Kansas and Nebraska, warned farmers on Friday to turn off a feature that uses a fixed receiver to correct tractors’ paths. LandMark updated its post Saturday, saying it expects that when farmers tend crops later, “rows won’t be where the AutoPath lines think they are” and that it would be “difficult - if not impossible” for the self-driving tractor feature to work in fields planted while the GPS systems were hampered.
https://www.schneier.com/blog/archives/2024/05/new-attack-against-self-driving-car-ai.html
This is another attack that convinces the AI to ignore road signs: