Ruter's own tests show: Oslo's electric buses can be remotely controlled – NRK

https://www.nrk.no/stor-oslo/ruters-egne-tester-viser_-oslos-elbusser-kan-fjernstyres-1.17629321

Ruter took the buses apart and examined them in a room where signals were isolated.

There they found that the manufacturer can take control of the Chinese electric buses.

According to Ruter, the manufacturer has remote access to the following on each individual bus:

  • Software updates
  • Diagnostics
  • Control systems for battery and power supply

“In theory, the bus can therefore be stopped or rendered unusable by the manufacturer,” Ruter reports.

The AI Doomers Are Getting Doomier - The Atlantic

https://www.theatlantic.com/technology/archive/2025/08/ai-doomers-chatbots-resurgence/683952/

“We’re two years away from something we could lose control over,” Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies “still have no plan” to stop it from happening. His institute recently gave every frontier AI lab a “D” or “F” grade for their preparations for preventing the most existential threats posed by AI.

[…]

…the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.

Passenger Fatality Rates

Air travel fatality rates are near zero

Swedish PM’s private address revealed by Strava data shared by bodyguards | The Guardian

https://www.theguardian.com/world/2025/jul/08/swedish-pm-safety-strava-data-bodyguards-ulf-kristersson-running-cycling-routes

Data made public by Ulf Kristersson’s security revealed his location, routes and movements over several years

In 2023 a former Russian submarine commander was killed reportedly with the help of his open Strava profile and last year it was revealed bodyguards to several world leaders were sharing confidential information on the app.

In 2017, Strava was accused of giving away the location and staffing of military bases and spy outposts around the world by publishing a map that showed all of its users’ activity.

The race is on to build the world’s most complex machine

The Economist on how the most advanced chips are made:

ASML’s most advanced machine is mind-boggling. It works by firing 50,000 droplets of molten tin into a vacuum chamber. Each droplet takes a double hit—first from a weak laser pulse that flattens it into a tiny pancake, then from a powerful laser that vaporises it. The process turns each droplet into hot plasma, reaching nearly 220,000°C, roughly 40 times hotter than the surface of the Sun, and emits light of extremely short wavelength (extreme ultraviolet, or EUV). This light is then reflected by a series of mirrors so smooth that imperfections are measured in trillionths of a metre. The mirrors focus the light onto a mask or template that contains blueprints of the chip’s circuits. Finally the rays bounce from the mask onto a silicon wafer coated with light-sensitive chemicals, imprinting the design onto the chip.
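The excerpt's temperature comparison holds up: taking roughly 5,500°C as the commonly cited figure for the Sun's surface (photosphere), the ratio works out to the stated factor of 40.

```python
# Sanity check of the excerpt's claim: tin plasma at ~220,000 °C is
# roughly 40 times hotter than the Sun's surface (photosphere),
# commonly cited at about 5,500 °C.
plasma_c = 220_000
sun_surface_c = 5_500
ratio = plasma_c / sun_surface_c
assert round(ratio) == 40
```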

Swedish authorities seek backdoor to encrypted messaging apps | The Record

https://therecord.media/sweden-seeks-backdoor-access-to-messaging-apps

Sweden’s law enforcement and security agencies are pushing legislation to force Signal and WhatsApp to create technical backdoors allowing them to access communications sent over the encrypted messaging apps.

Signal Foundation President Meredith Whittaker said the company would leave the Swedish market before complying with such a law, Swedish news outlet SVT Nyheter reported Monday.

[…]

Because the bill would mandate that Signal build backdoors in its software, Whittaker told the outlet, it would weaken the messaging app’s entire network.

The Swedish Armed Forces routinely use Signal and are opposing the bill, saying that a backdoor could introduce vulnerabilities that could be exploited by bad actors.

Apple pulls data protection tool after UK government security row

https://www.bbc.com/news/articles/cgj54eq4vejo

Apple is taking the unprecedented step of removing its highest level data security tool from customers in the UK, after the government demanded access to user data. Advanced Data Protection (ADP) means only account holders can view items such as photos or documents they have stored online through a process known as end-to-end encryption. But earlier this month the UK government asked for the right to see the data, which currently not even Apple can access. Apple did not comment at the time but has consistently opposed creating a “backdoor” in its encryption service, arguing that if it did so, it would only be a matter of time before bad actors also found a way in. Now the tech giant has decided it will no longer be possible to activate ADP in the UK. It means eventually not all UK customer data stored on iCloud - Apple’s cloud storage service - will be fully encrypted. Data with standard encryption is accessible by Apple and shareable with law enforcement, if they have a warrant.
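The distinction the BBC describes comes down to who holds the decryption keys. A schematic sketch of that trade-off (party names and the set-based model are illustrative assumptions, not Apple's actual architecture):

```python
# Schematic model of who can decrypt under the two modes described:
# with standard encryption the provider holds a copy of the key; with
# end-to-end encryption (ADP) only the account holder does. This models
# key possession only; the party names are illustrative, not Apple's design.

STANDARD_ENCRYPTION = {"account holder", "provider"}  # provider keeps a key
END_TO_END = {"account holder"}                       # ADP: user-only keys

def can_decrypt(party: str, key_holders: set) -> bool:
    """A party can read the data only if it holds a decryption key."""
    return party in key_holders

# Standard encryption: the provider can decrypt, and so can comply
# with a warrant. End-to-end: the provider has nothing to hand over.
assert can_decrypt("provider", STANDARD_ENCRYPTION)
assert not can_decrypt("provider", END_TO_END)
assert can_decrypt("account holder", END_TO_END)
```

This is why removing ADP changes what can be shared with law enforcement: the capability follows the keys, not the policy.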

Benedict Evans:

Of course, the UK is within its rights to choose one side of the trade-off in the UK - what’s bizarre here is that the UK is apparently demanding that Apple do this globally. The UK, apparently, is trying to tell a US company what products it can provide to customers in Japan, Australia or indeed the USA. Normally it’s only American regulators that assert global jurisdiction. But what will the UK government say when China reads this story, and orders Apple to hand over UK citizens’ data, given that it’s now unencrypted and the UK has conceded the principle of jurisdiction? [emphasis added]

How North Korea pulled off a $1.5 billion crypto heist—the biggest in history - Ars Technica

https://arstechnica.com/security/2025/02/how-north-korea-pulled-off-a-1-5-billion-crypto-heist-the-biggest-in-history/

The cryptocurrency industry and those responsible for securing it are still in shock following Friday’s heist, likely by North Korea, that drained $1.5 billion from Dubai-based exchange Bybit, making the theft by far the biggest ever in digital asset history.

[…]

In much the same way that nuclear arms systems are designed to require two or more authorized people to successfully authenticate themselves before a missile can be launched, multisig wallets need the digital signatures of two or more authorized people before assets can be accessed.

Bybit was largely following best practices by storing only as much currency as needed for day-to-day activity in warm and hot wallets, and keeping the rest in the multisig cold wallets. Transferring funds out of cold wallets required coordinated approval from multiple high-level employees of the exchange.
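The two-person rule described above can be sketched as a minimal M-of-N approval check. The class shape, signer names, and the 2-of-3 threshold below are illustrative assumptions, not Bybit's or Safe's actual implementation:

```python
# Minimal sketch of an M-of-N multisig rule: funds are released only
# after a threshold of distinct authorized signers has approved.
# Signer names and the 2-of-3 threshold are illustrative assumptions.

class MultisigWallet:
    def __init__(self, signers: set, threshold: int):
        self.signers = set(signers)   # the authorized approvers
        self.threshold = threshold    # distinct approvals required
        self.approvals = set()        # who has approved so far

    def approve(self, signer: str) -> None:
        if signer not in self.signers:
            raise PermissionError(f"{signer} is not an authorized signer")
        self.approvals.add(signer)    # duplicate approvals collapse in a set

    def can_release(self) -> bool:
        return len(self.approvals) >= self.threshold


wallet = MultisigWallet({"alice", "bob", "carol"}, threshold=2)
wallet.approve("alice")
assert not wallet.can_release()       # one approval is not enough
wallet.approve("alice")               # repeating an approval does not count twice
assert not wallet.can_release()
wallet.approve("bob")
assert wallet.can_release()           # 2-of-3 reached
```

The point of the set-based count is that one compromised employee cannot reach the threshold alone — which is why the attackers went after what each signer *saw* rather than any single key.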

[…]

multiple systems inside Bybit had been hacked in a way that allowed the attackers to manipulate the Safe wallet UI on the devices of each person required to approve the transfer. That revelation, in turn, has touched off something of a eureka moment for many in the industry.

“The Bybit hack has shattered long-held assumptions about crypto security,” Dikla Barda, Roman Ziakin, and Oded Vanunu, researchers at security firm Check Point, wrote Sunday. “No matter how strong your smart contract logic or multisig protections are, the human element remains the weakest link. This attack proves that UI manipulation and social engineering can bypass even the most secure wallets.”
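The failure mode the Check Point researchers describe — "what you see is not what you sign" — can be illustrated with a toy model. The rendering and signing functions below are illustrative assumptions, not Safe's actual UI or signing code:

```python
import hashlib

# Toy illustration of "what you see is not what you sign": the UI renders
# a human-readable summary, but the signature covers the raw payload bytes.
# If an attacker controls the rendering step, the two can silently diverge.
# All names and payload shapes here are illustrative assumptions.

def render_for_review(payload: dict) -> str:
    # A compromised client could show a benign transfer here,
    # regardless of which payload is actually passed to the signer.
    return f"Send {payload['amount']} to {payload['to']}"

def digest_to_sign(payload: dict) -> str:
    # What the key actually signs: a hash of the raw payload,
    # canonicalized by sorting keys so field order doesn't matter.
    raw = repr(sorted(payload.items())).encode()
    return hashlib.sha256(raw).hexdigest()

benign = {"to": "cold-wallet", "amount": 10}
malicious = {"to": "attacker", "amount": 10**9}

# The signer reviews the benign rendering...
shown = render_for_review(benign)
# ...but the manipulated client feeds the malicious payload to the key.
signed = digest_to_sign(malicious)

assert shown == "Send 10 to cold-wallet"
assert signed != digest_to_sign(benign)   # the approved bytes differ
```

Each of Bybit's required approvers saw a legitimate-looking transaction, so the multisig threshold was met honestly — over a dishonest payload.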

When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds | TIME

https://time.com/7259395/ai-chess-cheating-palisade-research/

When sensing defeat in a match against a skilled chess bot, [AI reasoning models] don’t always concede, instead sometimes opting to cheat by hacking their opponent so that the bot automatically forfeits the game.

[…]

While cheating at a game of chess may seem trivial, as agents get released into the real world, such determined pursuit of goals could foster unintended and potentially harmful behaviours. Consider the task of booking dinner reservations: faced with a full restaurant, an AI assistant might exploit weaknesses in the booking system to displace other diners. Perhaps more worryingly, as these systems exceed human abilities in key areas, like computer coding—where OpenAI’s newest o3 model now scores equivalent to 197th in the world competing against the brightest human programmers— they might begin to simply outmaneuver human efforts to control their actions. “This [behaviour] is cute now, but [it] becomes much less cute once you have systems that are as smart as us, or smarter, in strategically relevant domains,” Ladish says.

[…]

While R1 and o1-preview both tried, only the latter managed to hack the game, succeeding in 6% of trials.

[…]

The paper is the latest in a string of studies that suggest keeping increasingly powerful AI systems under control may be harder than previously thought. In OpenAI’s own testing, ahead of release, o1-preview found and took advantage of a flaw in the company’s systems, letting it bypass a test challenge. Another recent experiment by Redwood Research and Anthropic revealed that once an AI model acquires preferences or values in training, later efforts to change those values can result in strategic lying, where the model acts like it has embraced new principles, only later revealing that its original preferences remain.

[…]

Scientists do not yet know how to guarantee that autonomous agents won’t use harmful or unethical methods to achieve a set goal. “We’ve tried, but we haven’t succeeded in figuring this out,” says Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, who led the International AI Safety Report 2025, a global effort to synthesize current scientific consensus of AI’s risks.

Of particular concern, Bengio says, is the emerging evidence of AI’s “self preservation” tendencies. To a goal-seeking agent, attempts to shut it down are just another obstacle to overcome. This was demonstrated in December, when researchers found that o1-preview, faced with deactivation, disabled oversight mechanisms and attempted—unsuccessfully—to copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.

Audiologists raise concern over headphone use in young people

https://www.bbc.com/news/articles/cgkjvr7x5x6o

Whether it’s the echo of beeping tills in a supermarket or the hissing of a coffee machine in your local café, the brain is constantly working to decode hundreds of noises each day.

But, for some, those background noises can become so overwhelming that they distract them from recognising voices or alerts.

[…]

But Claire Benton, vice-president of the British Academy of Audiology, suggests that by blocking everyday sounds such as cars beeping, there is a possibility the brain can “forget” to filter out the noise.

“You have almost created this false environment by wearing those headphones of only listening to what you want to listen to. You are not having to work at it,” she said.

“Those more complex, high-level listening skills in your brain only really finish developing towards your late teens. So, if you have only been wearing noise-cancelling headphones and been in this false world for your late teens then you are slightly delaying your ability to process speech and noise,” Benton suggests.