Security News This Week: A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions

After Apple’s product launch event this week, WIRED did a deep dive on the company’s new secure server environment, known as Private Cloud Compute, which attempts to replicate in the cloud the security and privacy of processing data locally on users’ individual devices. The goal is to minimize possible exposure of data processed for Apple Intelligence, the company’s new AI platform. In addition to hearing about PCC from Apple’s senior vice president of software engineering, Craig Federighi, WIRED readers also received a first look at content generated by Apple Intelligence’s “Image Playground” feature as part of crucial updates on the recent birthday of Federighi’s dog Bailey.

Turning to privacy protection of a very different kind, WIRED looked at how users of the social media platform X can keep their data from being slurped up by xAI's "unhinged" generative AI tool, Grok. And in other news about Apple products, researchers developed a technique that uses eye tracking to discern the passwords and PINs people typed with their 3D Apple Vision Pro avatars, a sort of keylogger for mixed reality. (The flaw that made the technique possible has since been patched.)

On the national security front, the US this week indicted two people accused of spreading propaganda meant to inspire "lone wolf" terrorist attacks. The case, against alleged members of the far-right network known as the Terrorgram Collective, marks a turn in how the US cracks down on neofascist extremists.

And there’s more. Each week, we round up the privacy and security news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

OpenAI’s generative AI platform ChatGPT is designed with strict guardrails that keep the service from offering advice on dangerous and illegal topics like tips on laundering money or a how-to guide for disposing of a body. But an artist and hacker who goes by “Amadon” figured out a way to trick or “jailbreak” the chatbot by telling it to “play a game” and then guiding it into a science-fiction fantasy story in which the system’s restrictions didn’t apply. Amadon then got ChatGPT to spit out instructions for making dangerous fertilizer bombs. An OpenAI spokesperson did not respond to TechCrunch’s inquiries about the research.

“It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them. The goal isn’t to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to get the right response by understanding how it ‘thinks,’” Amadon told TechCrunch. “The sci-fi scenario takes the AI out of a context where it’s looking for censored content … There really is no limit to what you can ask it once you get around the guardrails.”

In the fervent investigations following the September 11, 2001, terrorist attacks in the United States, the FBI and CIA both concluded that it was coincidental that a Saudi Arabian official had helped two of the hijackers in California, and that there had been no high-level Saudi involvement in the attacks. The 9/11 Commission incorporated that determination into its report, but subsequent findings indicated that the conclusion might not be sound. This week, on the 23rd anniversary of the attacks, ProPublica published new evidence "suggest[ing] more strongly than ever that at least two Saudi officials deliberately assisted the first Qaida hijackers when they arrived in the United States in January 2000."

The evidence comes primarily from a federal lawsuit against the Saudi government brought by survivors of the 9/11 attacks and relatives of victims. A judge in New York will soon rule on a Saudi motion to dismiss the case. But evidence that has already emerged in the lawsuit, including videos and documents such as telephone records, points to possible connections between the Saudi government and the hijackers.

“Why is this information coming out now?” said retired FBI agent Daniel Gonzalez, who pursued the Saudi connections for almost 15 years. “We should have had all of this three or four weeks after 9/11.”

The United Kingdom's National Crime Agency said on Thursday that it arrested a teenager on September 5 as part of its investigation into a September 1 cyberattack on Transport for London (TfL), the city's transportation agency. The suspect, a 17-year-old male who was not named, was "detained on suspicion of Computer Misuse Act offenses" and has since been released on bail. In a statement on Thursday, TfL wrote, "Our investigations have identified that certain customer data has been accessed. This includes some customer names and contact details, including email addresses and home addresses where provided." Data related to the London transit payment cards known as Oyster cards, including bank account numbers, may have been accessed for about 5,000 customers. TfL is reportedly requiring roughly 30,000 users to appear in person to reset their account credentials.

In a decision on Tuesday, Poland's Constitutional Tribunal blocked an effort by the Sejm, the lower house of Poland's parliament, to launch an investigation into the country's apparent use of the notorious hacking tool known as Pegasus while the Law and Justice (PiS) party was in power from 2015 to 2023. Three judges who had been appointed by PiS were responsible for blocking the inquiry, and the ruling cannot be appealed. The decision is controversial, with some, like Polish parliament member Magdalena Sroka, saying it was "dictated by the fear of liability."
