Here Come the AI Worms | WIRED

https://www.wired.com/story/here-come-the-ai-worms/

Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

Air Canada ordered to pay customer who was misled by airline’s chatbot | The Guardian

https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit

Canada’s largest airline has been ordered to pay compensation after its chatbot gave a customer inaccurate information, misleading him into buying a full-price ticket.

Air Canada came under further criticism for later attempting to distance itself from the error by claiming that the bot was “responsible for its own actions”.

Company worker in Hong Kong pays out £20m in deepfake video call scam | The Guardian

https://www.theguardian.com/world/2024/feb/05/hong-kong-company-deepfake-video-conference-call-scam

Police investigate after employee tricked into transferring money to fraudsters posing as senior officers of her firm

AI and Trust – Schneier on Security

https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html

In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.

The Internet Enabled Mass Surveillance. AI Will Enable Mass Spying. – Schneier on Security

https://www.schneier.com/blog/archives/2023/12/the-internet-enabled-mass-surveillance-ai-will-enable-mass-spying.html

White faces generated by AI are more convincing than photos, finds survey | The Guardian

https://www.theguardian.com/technology/2023/nov/13/white-faces-generated-by-ai-are-more-convincing-than-photos-finds-survey

However, the team said the results did not hold for images of people of colour, possibly because the algorithm used to generate AI faces was largely trained on images of white people.

Somewhat ironically, while humans seem unable to tell real faces apart from those generated by AI, the team developed a machine learning system that can do so with 94% accuracy.

‘The Problem With Jon Stewart’ Is Ending – Pixel Envy

https://pxlnv.com/linklog/problem-with-jon-stewart-ending/

Apple is a big, sprawling conglomerate. If it cannot handle Stewart’s inquiries about China or our machine learning future, I think it should ask itself why that is, and whether those criticisms have merit.

Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy | WIRED UK

https://www.wired.co.uk/article/slovakia-election-deepfakes

Fact-checkers scrambled to deal with faked audio recordings released days before a tight election, in a warning for other countries with looming votes.

How a chatbot encouraged a man who wanted to kill the Queen | BBC News

https://www.bbc.com/news/technology-67012224

The case of Jaswant Singh Chail has shone a light on the latest generation of artificial intelligence-powered chatbots. On Thursday, 21-year-old Chail was given a nine-year sentence for breaking into Windsor Castle with a crossbow and declaring he wanted to kill the Queen.

How Will A.I. Learn Next? | The New Yorker

https://www.newyorker.com/science/annals-of-artificial-intelligence/how-will-ai-learn-next

As chatbots threaten their own best sources of data, they will have to find new kinds of knowledge.