Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

And yet the distinction must be made. Do you know what it’s called when data is treated as code when it’s not supposed to be? It’s called a “security vulnerability.” Untrusted data must never be executed as code in a privileged context. When there’s a way to make that happen, it’s considered a serious flaw that must be fixed.


> Do you know what it’s called when data is treated as code when it’s not supposed to be? It’s called a “security vulnerability.”

What about being treated as code when it's supposed to be?

(What is the difference between code execution vulnerability and a REPL? It's who is using it.)

Whatever you call program vs. its data, the program can always be viewed as an interpreter for a language, and your input as code in that language.

See also the subfield of "langsec", which is based on this premise, as well as the fact that you probably didn't think of that and thus your interpreter/parser is implicitly spread across half your program (they call it "shotgun parser"), and your "data" could easily be unintentionally Turing-complete without you knowing :).

EDIT:

I swear "security" is becoming a cult in our industry. Whether or not you call something "security vulnerability" and therefore "a problem", doesn't change the fundamental nature of this thing. And the fundamental nature of information is, there exist no objective, natural distinction between code and data. It can be drawn arbitrarily, and systems can be structured to emulate it - but that still just means it's a matter of opinion.

EDIT2: Not to mention, security itself is not objective. There is always the underlying assumption - the answer to a question, who are you protecting the system from, and for who are you doing it?. You don't need to look far to find systems where users are seen in part as threat actors, and thus get disempowered in the name of protecting the interests of vendor and some third parties (e.g. advertisers).


Imagine your browser had a flaw I could exploit by carefully crafting the contents this comment, which allows me to take over your computer. You’d consider that a serious problem, right? You’d demand a quick fix from the browser maker.

Now imagine that there is no fix because the ability for a comment to take control of the whole thing is an inherent part of how it works. That’s how LLM agents are.

If you have an LLM agent that can read your email and read the web then you have an agent which can pretty easily be made to leak the contents of your private emails to me.

Yes, your email program may actually have a vulnerability which allows this to happen, with no LLM involved. The difference is, if there is such a vulnerability then it can be fixed. It’s a bug, not an inherent part of how the program works.




Consider applying for YC's Summer 2026 batch! Applications are open till May 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: