> On thinking about this further there’s one aspect of the Rule of Two model that doesn’t work for me: the Venn diagram above marks the combination of untrustworthy inputs and the ability to change state as “safe”, but that’s not right. Even without access to private systems or sensitive data that pairing can still produce harmful results. Unfortunately adding an exception for that pair undermines the simplicity of the “Rule of Two” framing!
Thanks for the feedback! One small bit of clarification, the framework would describe access to any sensitive system as part of the [B] circle, not only private systems or private data.
The intention is that an agent that has removed [B] can write state and communicate freely, but not with any systems that matter (wrt critical security outcomes for its user). An example of an agent in this state would be one that can take actions in a tight sandbox or is isolated from production.
Also in the context of LLMs I think model weights themselves could be considered an untrusted input, because who knows what was in the training dataset. Even an innocent looking prompt could potentially trigger a harmful outcome.
In that regard it reminds me of the CAP theorem, which also has three parts. However, in practice partitioning in distributed systems is given, so the choice is just between availability or consistency.
So in the case of lethal trifecta it is either private data or external communication, but the leg between these two will always have some risk.
Good point. Few thoughts I would add from my perspective:
- The model is untrusted. Even if prompt injection is solved, we probably still would not be able to trust the model, because of possible backdoors or hallucinations. Anthropic recently showed that it takes only a few hundred documents to have trigger words trained into a model.
- Data Integrity. We also need to talk about data integrity and availability (full CIA triad, not not just confidentiality), e.g. private data being modified during inference. Which leads us to the third....
- Prompt injection which is aimed to have the AI produce output that makes humans take certain actions (not tool invocations)
Generally, I call the deviation from don't trust the model, the "Normalization of Deviance in AI" where seem to start trusting the model more and more over time - and I'm not sure if that is the right thing in the long term.
Yeah, there remains a very real problem where a prompt injection against a system without external communication / ability to trigger harmful tools can still influence the model's output in a way that misleads the human operator.
I love to see this. As much as we try for simple security principles, the damn things have a way to become complicated quickly.
Perhaps the diagram highlights the common risky parts of these apps and we gain more risk as we keep increasing the scope? Maybe we can do some handovers and protocols to separate these concerns?
Hey folks, one of the authors of the original post here.
First, I want to thank simonw for coming up with the lethal trifecta (our direct inspiration for this work) as well as all of the great feedback we’ve received from Simon and others! Our goal with publishing this framework was to inspire precisely these types of discussions so our industry can move our understanding of these risks forward.
Regarding the concerns over the venn diagram labeling certain intersections sections as “safe”, this is 100% valid and we’ve updated it to be more clear. The goal of the Rule of Two is not to describe a sufficient level of security for agents, but rather a minimum bar that’s needed to deterministically prevent the highest security impacts of prompt injection. The earlier framing of “safe” did not make this clear.
Beyond prompt injection there are other risks that have to be considered, which we briefly describe in the Limitations section of the post. That said, we do see value in having the Rule of Two to frame some of the discussions around what unambiguous constraints exist today because of the unsolved risk of prompt injection.
I am confused this article does not talk about taint tracking. If state was mutated by an agent with untrustworthy input the taint would transfer to the state, making it untrustworthy input too, so the reasoning of the original trifecta with taint tracking is more general and practical. I am also also investigating the direction of tracking taints as scores rather than binary as most use cases would otherwise be impossible to do at all autonomous. Eg. with sensitivity scores to data, trust scores to inputs (that can be improved by eg. human review). One important limit that needs way more research is how to transfer the minimal needed information from a tainted context into an untainted fresh context without transferring all the taints. The only solution i currently have is by compaction and human review, if possible aided with schema enforcement and optimised UI for the use case. This unfortunately cannot solve encoded information that humans cannot see, but it seems that issue will never be solvable outside alignment research.
PS: An example how scores are helpful: Using browser tab titles in the context would by definition have the worst trust score possible. But truncating titles to only the user-visible parts could lower this to acceptable for autonomous execution if the data was just mildly sensitive.
Have you seen the DeepMind CaMeL paper? It describes a taint tracking system that works by generating executable code that can have the source of data tracked as it moves through the program: https://simonwillison.net/2025/Apr/11/camel/
Of course. CaMel was a breakthrough and especially promising as similar execution architectures were discovered from the reliability angle too (eg. cloudflare code-mode)
I would consider the runtime and capabilities part of CaMel an implementation exploration on top of the trifecta + taint tracking as general reasoning abstraction.
My hope was that there would be an evolution of the the more general reasoning abstraction that would either simplify or empower implementation architectures, but instead I do not see how metas rule of two adds much here over what we already had in April. Would have loved for you to add one sentence why you thought this was a step forward over taint tracking, maybe i am just missing something.
Totally. I think the original "Lethal trifecta" post by OP only pertained to data exfiltration and never included changing state (maybe was implied by sensitive data access).
I actually want prompt injection to remain possible. So many lazy academic paper reviewers nowadays delegate the review process to AI. It'd be cool if we could inject prompts in the paper that would stop the AI from aiding in such situations. In my experience, prompt injection techniques work for non-reasoning models but gpt-5-high easily ignores them...
"Research papers from 14 academic institutions in eight countries -- including Japan, South Korea and China -- contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found."
Amusingly I tried an experiment with some of those papers with hidden text against frontier models at the time and found that the trick didn't actually work! The models spotted the tricks and didn't fall for them.
"Submitting a paper with a "hidden" prompt is scientific misconduct if that prompt is intended to obtain a favorable review from an LLM. The inclusion of such a prompt is an attempt to subvert the peer-review process. Although ICML 2025 reviewers are forbidden from using LLMs to produce their reviews of paper submissions, this fact does not excuse the attempted subversion."
I don’t know if it’s just me but doesn’t a huge value of LLMs for the general population necessitate all 3 of the circles?
Having just 2 circles requires a person in the loop, and that person will still need knowledge and experience and a low enough throughput to meaningfully action the workload otherwise they would just rubber stamp everything (which is essentially the 3rd circle with extra steps)
Most current consumer LLM uses are run only once or a few times, before changing prompt and task. This causes the attacker to have to move first: they put malicious injected documents onto the internet, which are then ingested by ephemeral systems, the details of which the attacker doesn't observe.
On the other hand, something like an AI mcdonalds drive through order taker runs over and over again. This property of running repeatedly is what allows the attacker to move second and gain the advantage.
Yeah that seems likely. But still even in that dystopian scenario, the incentives of the human will lead them to go through the back log very thoroughly, which IMO defeats the productivity gains.
Maybe there will still be some productivity gains even with the human being the bottleneck? Or the humans can be scaled out and parallelized more easily?
wouldn't that still add a lot of value, where the person in the loop (sadly, usually) becomes little more than the verifier, but can process a lot more work?
Anecdotally what I'm hearing is that this is pretty much how LLMs are helping programmers get more done, including the work being less enjoyable because it involves more verification and rubber-stamping.
For the business owner, it doesn't matter that the nature of the work has changed, as long as that one person can get more work done. Even worse, the business owner probably doesn't care as much about the quality of the resulting work, as long as it works.
I'm reminded of how much of my work has involved implementing solutions that took less careful thought, where even when I outlined the drawbacks, the owner wanted it done the quick way. And if the problems arose, often quite a bit later, it was as if they hadn't made that initial decision in the first place.
For my personal tinkering, I've all but defaulted to the LLMs returning suggested actions at logical points in the workflow, leaving me to confirm or cancel whatever it came up with. this definitely still makes the process faster, just not as magically automatic.
> [B] An agent can have access to sensitive systems or private data
> [C] An agent can change state or communicate externally
Somewhat reminds me of the CAP theorem, where you can pick two of three, but one is effectively required for something useful. It seems more like the choice is really between "untrustworthy inputs" and "sensitive systems", which makes sense.
It kind of sounds like a weak version of airgapping. If you cant persist state, access private data, or exfiltrate data, there is not much point to jailbreaking the llm.
However, its deeply unsatisying in the same way that securing your laptop by not turning it on, is.
Yeah it's nonsense, because the author has described the standard "read, process, write" flow of computation and decided that if you remove one of these three, then everything is safe.
The correct solution is to have the system prompt be mechanically decoupled from untrustworthy data, the same it was done with CSP (content security policy) against XSS and named parameters for SQL.
I'm sorry, but the rule of two is just not enough, not even as a rule of thumb.
We know how to work with security risks, the issue is they depend both on the business and the technicalities.
This can actually do a lot of harm as security now needs to dispel this "great approach" to ignoring security that is supported by a "research paper they read".
Please don't try to reinvent the wheel and if you do, please learn about the current state (Chesterton's fence and all that).
Can you explain what you mean? How is Chesterton's fence applied to AI security helpful here? Are you just talking about not removing the "Non-AI" security architecture of the software itself? I think no one ever proposed that?
Right, what got me going is the reduction of plenty cyber security concepts into a simple "safe" label in the diagram.
So what I meant is that before you discard all of the current security practices, it's better to learn about the current approach.
From another angle, maybe the diagram could be fixed with changing "safe" to "danger" and "danger" to "OMG stop". But that also discards the business perspective and the nature of the protected asset.
I am also happy to see the edit in the article, props to the author for that!
And to address the last question, no one proposed that right now, yes. But I was in plenty of discussions about security approaches. And let me tell you, sometimes it only takes one sentence that the leadership likes to hear to detail the whole approach (especially if it results in cost savings). So I might be extra sensitive to such ideas and I try to uproot them before they bloom fully.
Hmm, what do you mean by current approach? This is new territory and agent safety is an unsolved problem, there is no current approach, except you mean not doing agent systems and using humans. The trifecta is just a tool on the level of physics saying "ignore friction", we assume the model itself is trustworthy and not poisoned most of the time too, but of course when designing a real world system you need to factor that in too.
Yes, by current approach I mean security best practices for non-LLM apps. Plenty of those are directly applicable.
And yes, LLMs have some challenges. But discarding all of the lessons and principles we've discovered over the years is not the way. And if we need to discard some of them, we should understand exactly why they are no longer applicable.
EDIT: I know that models need to omit stuff to be useful. But this model omits too much - claiming that something is "safe" should be a red flag to all security workers.
If I have a web page that says somewhere on it "and don't forget to contact your senator!" and an LLM agent reads that page and gets confused and emails a senator should I go to jail?
Nice, why don't we apply the same principles to our regular applications?
Ooh, right, cause we couldn't use them and a whole industry got created that's called cybersecurity and it's supposed to be consulted BEFORE releasing privacy nightmares and using them. But hey, regular applications can't come up with cool poems.
Yeah, IT tried so hard to teach us something as basic as "don't click on links in suspicious emails" yet so many people fail that after multiple trainings and tests.
But guess what? AI! Agents! <company name> Copilot! Just let them do things for you! Who would have thought there might possibly be a giant security hole?
I added this section to my post just now: https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...
> On thinking about this further there’s one aspect of the Rule of Two model that doesn’t work for me: the Venn diagram above marks the combination of untrustworthy inputs and the ability to change state as “safe”, but that’s not right. Even without access to private systems or sensitive data that pairing can still produce harmful results. Unfortunately adding an exception for that pair undermines the simplicity of the “Rule of Two” framing!
Thanks for the feedback! One small bit of clarification, the framework would describe access to any sensitive system as part of the [B] circle, not only private systems or private data.
The intention is that an agent that has removed [B] can write state and communicate freely, but not with any systems that matter (wrt critical security outcomes for its user). An example of an agent in this state would be one that can take actions in a tight sandbox or is isolated from production.
Thanks for that! I've updated my post to link to this clarification and updated my screenshots of your diagram to catch the new "lower risk" text as well: https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...
Also in the context of LLMs I think model weights themselves could be considered an untrusted input, because who knows what was in the training dataset. Even an innocent looking prompt could potentially trigger a harmful outcome.
In that regard it reminds me of the CAP theorem, which also has three parts. However, in practice partitioning in distributed systems is given, so the choice is just between availability or consistency.
So in the case of lethal trifecta it is either private data or external communication, but the leg between these two will always have some risk.
Good point. Few thoughts I would add from my perspective:
- The model is untrusted. Even if prompt injection is solved, we probably still would not be able to trust the model, because of possible backdoors or hallucinations. Anthropic recently showed that it takes only a few hundred documents to have trigger words trained into a model.
- Data Integrity. We also need to talk about data integrity and availability (full CIA triad, not not just confidentiality), e.g. private data being modified during inference. Which leads us to the third....
- Prompt injection which is aimed to have the AI produce output that makes humans take certain actions (not tool invocations)
Generally, I call the deviation from don't trust the model, the "Normalization of Deviance in AI" where seem to start trusting the model more and more over time - and I'm not sure if that is the right thing in the long term.
Yeah, there remains a very real problem where a prompt injection against a system without external communication / ability to trigger harmful tools can still influence the model's output in a way that misleads the human operator.
I think the rule of 2 would work if it kept the 3 from your lethal trifecta. "Change state" should be not be paired with "communicate externally".
And even then that's just to avoid data exfiltration- if you can't communicate externally but can change state, damage can still be done.
I love to see this. As much as we try for simple security principles, the damn things have a way to become complicated quickly.
Perhaps the diagram highlights the common risky parts of these apps and we gain more risk as we keep increasing the scope? Maybe we can do some handovers and protocols to separate these concerns?
Hey folks, one of the authors of the original post here.
First, I want to thank simonw for coming up with the lethal trifecta (our direct inspiration for this work) as well as all of the great feedback we’ve received from Simon and others! Our goal with publishing this framework was to inspire precisely these types of discussions so our industry can move our understanding of these risks forward.
Regarding the concerns over the venn diagram labeling certain intersections sections as “safe”, this is 100% valid and we’ve updated it to be more clear. The goal of the Rule of Two is not to describe a sufficient level of security for agents, but rather a minimum bar that’s needed to deterministically prevent the highest security impacts of prompt injection. The earlier framing of “safe” did not make this clear.
Beyond prompt injection there are other risks that have to be considered, which we briefly describe in the Limitations section of the post. That said, we do see value in having the Rule of Two to frame some of the discussions around what unambiguous constraints exist today because of the unsolved risk of prompt injection.
Looking forward to further discussion!
I am confused this article does not talk about taint tracking. If state was mutated by an agent with untrustworthy input the taint would transfer to the state, making it untrustworthy input too, so the reasoning of the original trifecta with taint tracking is more general and practical. I am also also investigating the direction of tracking taints as scores rather than binary as most use cases would otherwise be impossible to do at all autonomous. Eg. with sensitivity scores to data, trust scores to inputs (that can be improved by eg. human review). One important limit that needs way more research is how to transfer the minimal needed information from a tainted context into an untainted fresh context without transferring all the taints. The only solution i currently have is by compaction and human review, if possible aided with schema enforcement and optimised UI for the use case. This unfortunately cannot solve encoded information that humans cannot see, but it seems that issue will never be solvable outside alignment research.
PS: An example how scores are helpful: Using browser tab titles in the context would by definition have the worst trust score possible. But truncating titles to only the user-visible parts could lower this to acceptable for autonomous execution if the data was just mildly sensitive.
Have you seen the DeepMind CaMeL paper? It describes a taint tracking system that works by generating executable code that can have the source of data tracked as it moves through the program: https://simonwillison.net/2025/Apr/11/camel/
Of course. CaMel was a breakthrough and especially promising as similar execution architectures were discovered from the reliability angle too (eg. cloudflare code-mode)
I would consider the runtime and capabilities part of CaMel an implementation exploration on top of the trifecta + taint tracking as general reasoning abstraction.
My hope was that there would be an evolution of the the more general reasoning abstraction that would either simplify or empower implementation architectures, but instead I do not see how metas rule of two adds much here over what we already had in April. Would have loved for you to add one sentence why you thought this was a step forward over taint tracking, maybe i am just missing something.
I think its a step forward purely as a communication tool to help people understand the problem.
Totally. I think the original "Lethal trifecta" post by OP only pertained to data exfiltration and never included changing state (maybe was implied by sensitive data access).
Rule of 2 model has holes.
I actually want prompt injection to remain possible. So many lazy academic paper reviewers nowadays delegate the review process to AI. It'd be cool if we could inject prompts in the paper that would stop the AI from aiding in such situations. In my experience, prompt injection techniques work for non-reasoning models but gpt-5-high easily ignores them...
There was a minor scandal about exactly that a few months ago: https://asia.nikkei.com/business/technology/artificial-intel...
"Research papers from 14 academic institutions in eight countries -- including Japan, South Korea and China -- contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found."
Amusingly I tried an experiment with some of those papers with hidden text against frontier models at the time and found that the trick didn't actually work! The models spotted the tricks and didn't fall for them.
At least one conference has an ethics policy saying you shouldn't attempt this though: https://icml.cc/Conferences/2025/PublicationEthics
"Submitting a paper with a "hidden" prompt is scientific misconduct if that prompt is intended to obtain a favorable review from an LLM. The inclusion of such a prompt is an attempt to subvert the peer-review process. Although ICML 2025 reviewers are forbidden from using LLMs to produce their reviews of paper submissions, this fact does not excuse the attempted subversion."
Intuitively it does excuse it though.
I don’t know if it’s just me but doesn’t a huge value of LLMs for the general population necessitate all 3 of the circles?
Having just 2 circles requires a person in the loop, and that person will still need knowledge and experience and a low enough throughput to meaningfully action the workload otherwise they would just rubber stamp everything (which is essentially the 3rd circle with extra steps)
Most current consumer LLM uses are run only once or a few times, before changing prompt and task. This causes the attacker to have to move first: they put malicious injected documents onto the internet, which are then ingested by ephemeral systems, the details of which the attacker doesn't observe.
On the other hand, something like an AI mcdonalds drive through order taker runs over and over again. This property of running repeatedly is what allows the attacker to move second and gain the advantage.
The HITL is needed to pin the accountability on an employee you can fire
Yeah that seems likely. But still even in that dystopian scenario, the incentives of the human will lead them to go through the back log very thoroughly, which IMO defeats the productivity gains.
Maybe there will still be some productivity gains even with the human being the bottleneck? Or the humans can be scaled out and parallelized more easily?
Given the incentives here, I'd bet this is mathematically identical to throwing dice and firing people.
wouldn't that still add a lot of value, where the person in the loop (sadly, usually) becomes little more than the verifier, but can process a lot more work?
Anecdotally what I'm hearing is that this is pretty much how LLMs are helping programmers get more done, including the work being less enjoyable because it involves more verification and rubber-stamping.
For the business owner, it doesn't matter that the nature of the work has changed, as long as that one person can get more work done. Even worse, the business owner probably doesn't care as much about the quality of the resulting work, as long as it works.
I'm reminded of how much of my work has involved implementing solutions that took less careful thought, where even when I outlined the drawbacks, the owner wanted it done the quick way. And if the problems arose, often quite a bit later, it was as if they hadn't made that initial decision in the first place.
For my personal tinkering, I've all but defaulted to the LLMs returning suggested actions at logical points in the workflow, leaving me to confirm or cancel whatever it came up with. this definitely still makes the process faster, just not as magically automatic.
> [A] An agent can process untrustworthy inputs
> [B] An agent can have access to sensitive systems or private data
> [C] An agent can change state or communicate externally
Somewhat reminds me of the CAP theorem, where you can pick two of three, but one is effectively required for something useful. It seems more like the choice is really between "untrustworthy inputs" and "sensitive systems", which makes sense.
I’m sorry, what kind of rule is that? How does it guarantee security?
It sounds like we’re making things up at this point.
It kind of sounds like a weak version of airgapping. If you cant persist state, access private data, or exfiltrate data, there is not much point to jailbreaking the llm.
However, its deeply unsatisying in the same way that securing your laptop by not turning it on, is.
Yeah it's nonsense, because the author has described the standard "read, process, write" flow of computation and decided that if you remove one of these three, then everything is safe.
The correct solution is to have the system prompt be mechanically decoupled from untrustworthy data, the same it was done with CSP (content security policy) against XSS and named parameters for SQL.
That's difficult but not impossible - the CaMeL paper from Google DeepMind describes a way of achieving that: https://simonwillison.net/2025/Apr/11/camel/
I'm sorry, but the rule of two is just not enough, not even as a rule of thumb.
We know how to work with security risks, the issue is they depend both on the business and the technicalities.
This can actually do a lot of harm as security now needs to dispel this "great approach" to ignoring security that is supported by a "research paper they read".
Please don't try to reinvent the wheel and if you do, please learn about the current state (Chesterton's fence and all that).
Can you explain what you mean? How is Chesterton's fence applied to AI security helpful here? Are you just talking about not removing the "Non-AI" security architecture of the software itself? I think no one ever proposed that?
Right, what got me going is the reduction of plenty cyber security concepts into a simple "safe" label in the diagram.
So what I meant is that before you discard all of the current security practices, it's better to learn about the current approach.
From another angle, maybe the diagram could be fixed with changing "safe" to "danger" and "danger" to "OMG stop". But that also discards the business perspective and the nature of the protected asset.
I am also happy to see the edit in the article, props to the author for that!
And to address the last question, no one proposed that right now, yes. But I was in plenty of discussions about security approaches. And let me tell you, sometimes it only takes one sentence that the leadership likes to hear to detail the whole approach (especially if it results in cost savings). So I might be extra sensitive to such ideas and I try to uproot them before they bloom fully.
Hmm, what do you mean by current approach? This is new territory and agent safety is an unsolved problem, there is no current approach, except you mean not doing agent systems and using humans. The trifecta is just a tool on the level of physics saying "ignore friction", we assume the model itself is trustworthy and not poisoned most of the time too, but of course when designing a real world system you need to factor that in too.
Yes, by current approach I mean security best practices for non-LLM apps. Plenty of those are directly applicable.
And yes, LLMs have some challenges. But discarding all of the lessons and principles we've discovered over the years is not the way. And if we need to discard some of them, we should understand exactly why they are no longer applicable.
EDIT: I know that models need to omit stuff to be useful. But this model omits too much - claiming that something is "safe" should be a red flag to all security workers.
Just make it a crime in caught. 1 year is prison at least
What would the crime be?
If I have a web page that says somewhere on it "and don't forget to contact your senator!" and an LLM agent reads that page and gets confused and emails a senator should I go to jail?
Sure let's just remove all security, encryption, firewalls and auth - nobody will abuse vulnerabilities if it's a crime!
Nice, why don't we apply the same principles to our regular applications? Ooh, right, cause we couldn't use them and a whole industry got created that's called cybersecurity and it's supposed to be consulted BEFORE releasing privacy nightmares and using them. But hey, regular applications can't come up with cool poems.
Yeah, IT tried so hard to teach us something as basic as "don't click on links in suspicious emails" yet so many people fail that after multiple trainings and tests.
But guess what? AI! Agents! <company name> Copilot! Just let them do things for you! Who would have thought there might possibly be a giant security hole?