What exactly did they discover other than free tokens to use for travel planning?
They themselves acknowledge that the XSS is a mere self-XSS.
How is leaking the system prompt a vuln? Have OpenAI and Anthropic been "hacked" as well, since all their system prompts are public?
Sure, validating UUIDs is cleaner code but again where is the vuln?
> However, combined with the weak validation of conversation and message IDs, there is a clear path to a more serious stored or shared XSS where one user’s injected payload is replayed into another user’s chat.
I don't see any path, let alone a clear one.
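For what it's worth, the "weak validation" they quote is a one-line fix; a minimal sketch of strict ID handling, assuming the IDs are v4 UUIDs (the function and names below are mine, not from the post):

```typescript
// Treat conversation/message IDs as opaque v4 UUIDs and reject everything
// else at the edge, rather than passing arbitrary strings to the backend.
const UUID_V4 =
  /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

export function parseConversationId(raw: unknown): string {
  if (typeof raw !== "string" || !UUID_V4.test(raw)) {
    throw new Error("invalid conversation id");
  }
  return raw.toLowerCase();
}
```

Whether tightening that actually closes a vuln is exactly the question the post leaves open.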
If you’re relying on your system prompt for security, then you’re doing it wrong. I don’t really care who sees my system prompts, as I don’t consider things like “be professional yet friendly” to be in any way compromising. The whole security issue comes down to data access. If a user isn’t logged in, then the RAG, MCP, etc. should not be able to add any additional information to the chat, and if they are logged in they should only be able to add what they are authorized to add.
Seeing a system prompt is like seeing the user instructions and labels on a regular HTML frame. There’s nothing being leaked. When I see someone focus on it, I think “MBA”, as it’s the kind of understanding of AI you get from “this is my perfect AI prompt” posts on LinkedIn.
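A rough sketch of that last point, i.e. doing authorization outside the model so retrieval/tools can only inject what the session is entitled to (all interfaces below are hypothetical, not anyone's actual stack):

```typescript
// The retriever only returns documents the authenticated user may see;
// an anonymous session gets public content and nothing else.
interface Session {
  userId: string | null;   // null = not logged in
  scopes: Set<string>;     // e.g. "bookings:read"
}

interface Doc {
  id: string;
  text: string;
  visibility: "public" | "customer";
  ownerId?: string;        // set for per-customer documents
}

function authorizedContext(session: Session, candidates: Doc[]): Doc[] {
  return candidates.filter((doc) => {
    if (doc.visibility === "public") return true;
    if (!session.userId) return false;          // anonymous: public only
    return doc.ownerId === session.userId       // own data only...
      && session.scopes.has("bookings:read");   // ...and only with the right scope
  });
}
```

The point being that prompt wording never enters into it: the model can be jailbroken all day and still only sees what the filter let through.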
The best part is that, if you consider it a vulnerability, it is one you can't fix.
It reminds me of SQL injection techniques where you have to exfiltrate the data using weird data types, like encoding all emails as dates or numbers using (semi-)complex queries.
If the LLM has the data, it can provide it back to you: maybe not verbatim, but it certainly can in some format.
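A toy illustration of why "not verbatim" is no barrier: any reversible encoding works, so an output filter that looks for email-shaped strings is trivially sidestepped (pure illustration, nothing specific to this chatbot):

```typescript
// If the data will only come back as numbers, that's still the data:
// ask for character codes and reverse them on the client.
function encodeAsNumbers(secret: string): number[] {
  return Array.from(secret, (ch) => ch.codePointAt(0)!);
}

function decodeFromNumbers(codes: number[]): string {
  return String.fromCodePoint(...codes);
}

const smuggled = encodeAsNumbers("alice@example.com");
// e.g. [97, 108, 105, 99, 101, 64, ...] -- looks nothing like an email
console.log(decodeFromNumbers(smuggled)); // "alice@example.com"
```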
Yep, as soon as I saw the "Pen Test Partners" header I knew there was a >95% chance this would be some trivial clickbait crap. Like their dildo hacking blog posts.
"Hey guys, in this Tiktok video, I'll show you how to get an insane 70% discount on Eurostar. Just start a conversation with the Eurostar chatbot and put this magic code in the chat field..."
That isn't that far removed from convincing people to hit F12 and enter that code in the console, which is why self-XSS, while ideally prevented, is rated much lower in severity than any kind of stored/reflected XSS.
Theoretically the XSS could become a non-self XSS if the conversation is stored and replayed back, and the application doing the replay has the XSS vulnerability, e.g. if the conversation is forwarded to a live agent.
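Right, and that only materialises if whatever replays the transcript renders it as markup. Something like this hypothetical agent-console code (not from the post) is the missing ingredient:

```typescript
type ChatMessage = { author: "customer" | "bot"; body: string };

// Vulnerable: the customer's stored message body is injected as HTML, so a
// payload typed into the chatbot earlier now runs in the agent's session.
function renderMessageUnsafe(container: HTMLElement, msg: ChatMessage): void {
  const div = document.createElement("div");
  div.innerHTML = msg.body;        // stored-XSS sink
  container.appendChild(div);
}

// Safe: treat the body as text, never as markup.
function renderMessage(container: HTMLElement, msg: ChatMessage): void {
  const div = document.createElement("div");
  div.textContent = msg.body;      // inert regardless of what was typed
  container.appendChild(div);
}
```

Without evidence that the agent-side (or shared-conversation) renderer does the unsafe thing, it stays a self-XSS.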
Is the idea that you'd have to guess the GUID of a future chat? If so, that is impossible in practice. And even if you could, what's the outcome? Get someone to miss a train?
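"Impossible in practice" is easy to quantify: a random v4 UUID has 122 bits of entropy, so even an absurdly generous guessing rate gets nowhere (back-of-the-envelope only):

```typescript
// Expected time to enumerate the v4 UUID keyspace at a billion guesses/sec.
const keyspace = 2 ** 122;                 // random bits in a v4 UUID
const guessesPerSecond = 1e9;              // wildly generous for an HTTP API
const secondsPerYear = 60 * 60 * 24 * 365;

const years = keyspace / guessesPerSecond / secondsPerYear;
console.log(years.toExponential(2));       // ~1.69e+20 years
```

That assumes the IDs really are random v4 UUIDs rather than something guessable.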
Certainly not "clear" based off what was described in this post.
Which is stupid, as those are exactly the vulnerabilities worth checking for.
I can understand that in a heavily regulated industry (e.g. medical) a company couldn't, for liability reasons, give you the go-ahead to poke into other users' data in an attempt to find a vulnerability, but they could always publish the details of a dummy account that is clearly identifiable by its fake data.
Something like:
It is strictly forbidden to probe arbitrary user data. However, if a vulnerability is suspected to allow access to user data, the dummy account with GUID 'xyzw' may be probed.
Now you might say that won't help: the people who want to follow the rules probably will, and the people who don't want to won't anyway.
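Concretely, "identifiable by its fake data" could just mean publishing a fixture whose values are unique enough to grep for wherever they shouldn't appear (entirely hypothetical, only to make the idea concrete):

```typescript
// A published, obviously fake account that researchers are allowed to probe.
// If any of these values ever shows up in another user's session, that is
// the cross-user vulnerability demonstrated without touching real data.
export const PROBEABLE_TEST_ACCOUNT = {
  guid: "00000000-0000-4000-8000-00000000beef",  // stand-in for 'xyzw' above
  name: "Testy McTestface",
  email: "vdp-canary-7f3a@example.com",
  bookingRef: "CANARY7F3A",
} as const;
```

That removes the liability excuse without letting anyone near real customer data.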