AI agents are an ‘existential threat’ to secure messaging, Signal’s president Whittaker says
AI agents promise convenience, but Signal’s president warns their deep access to personal data could power a wave of security failures for which the tech industry is unprepared.

As the majority of tech companies lean into AI features and AI agents, Signal’s president, Meredith Whittaker, is pushing back. The Signal boss told Fortune that the rise of AI agents poses an “existential threat” not just to secure messaging apps like Signal but to anyone who builds apps for phones or computers.
To be able to fulfill their purpose of performing tasks on a user’s behalf, AI agents need access to large amounts of sensitive information, including things like bank details and passwords. However, this creates a large new “attack surface,” that cybercriminals or spy agencies could use to steal sensitive personal or company information.
AI agents are especially vulnerable to prompt injection attacks, where malicious websites hide instructions that trick the AI into executing harmful actions. Because products like AI web browsers can read and act on web content, attackers could potentially steal emails, access accounts, exfiltrate data, overwrite clipboards, or redirect users to phishing sites.
“The way an agent works is that it completes complex tasks on your behalf, and it does that by accessing many sources of data,” she said in an interview on the sidelines of the Slush technology conference in Helsinki, Finland, last week. “It would need access to your Signal contacts and your Signal messages…that access is an attack vector and that really nullifies our reason for being.”
Signal is often used by journalists and politicians due to its strong focus on privacy and security. The platform promises end-to-end encryption by default and minimizes data collection to protect user communications. If AI agents have unfiltered access to these communications through the operating system Signal is running on, attackers could exploit this new vulnerability.
“The integration of agents at the operating system level is being done in ways that are very reckless and insensitive to cybersecurity and privacy basics,” Whittaker said. “It is a very, very dangerous architectural decision that threatens not only Signal, but the ability to develop safely at the application layer and the ability to have safe infrastructure that operates with integrity.”
AI agents risk undermining the internet’s security foundations
Rival messaging apps, such as Meta’s WhatsApp and Facebook Messenger, are leaning into AI features—something that Whittaker sees as unnecessary and unwanted by users.
“No one wants AI in their messaging app. It’s really annoying,” she said. “If we look at what they’re useful for at a consumer level, it’s really not clear to me that that trade-off is worth it…What are we actually optimizing for with these yawn-inducing conveniences?”
Consumer appetite for AI in messaging apps is mixed, although there is some interest in features such as translation and summarization. Companies have made efforts to mitigate some of the security risks posed by these features and reassure users that their privacy is intact.
Meta, for its part, has framed some of its new AI tools as safety-enhancing rather than privacy-eroding, pointing to features such as scam-detection and automated help functions. The company also stresses that its AI features are only activated when users choose to engage with them, and that the assistant cannot read messages unless they are explicitly sent to it.
Whittaker says Big Tech’s rush to introduce AI, especially agentic AI, is raising security risks across the entire internet that far outweigh its potential use cases.
“Part of what we’re seeing is that there is a bit of nervousness around the amount of [capital expenditure] that has been expended to support this scale at all costs…the infrastructure spend is eye-watering,” she said. “There’s a need to continually float these valuations in this market to investors and shareholders quarterly, leading to what I’m seeing as a lot of reckless deployments that bypass security teams…That is very dangerous.”