
Context poisoning is a real problem, and these memory providers only make it worse.


IMO context poisoning is only fatal when you can't see what's going on (e.g., black-box memory systems like ChatGPT memory). The memory system used in the OP is fully white box - you can see every raw LLM request, and exactly how the memory influenced the final prompt payload.
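
Roughly what that looks like (a minimal hypothetical sketch, not the OP's actual code; "client" here is assumed to be an OpenAI-style client):

    import json
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("llm.requests")

    def build_prompt(user_message: str, memories: list[str]) -> list[dict]:
        # Retrieved memories are injected as plain text in the system
        # prompt, so every token is visible before the call is made.
        system = "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ]

    def call_llm(client, user_message: str, memories: list[str]):
        messages = build_prompt(user_message, memories)
        # Log the exact request payload, so a poisoned memory shows up
        # in the logs instead of silently steering the model.
        log.info("raw request: %s", json.dumps(messages, indent=2))
        return client.chat.completions.create(model="gpt-4o", messages=messages)

If a memory is poisoning the output, you see it right there in the logged payload.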


That's significant: you can then improve it in your own environment.


Yeah, exactly - it's all just tokens you have full control over (you can run CRUD operations on them). No hidden prompts, no hidden memory.
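
A toy version to make that concrete (a hypothetical sketch, not the actual storage layer): since memories are just rows of text, plain CRUD is all you need to audit or repair them.

    import sqlite3

    class MemoryStore:
        """Plain-text memories in SQLite; nothing the user can't inspect or edit."""

        def __init__(self, path: str = "memory.db"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS memories (id INTEGER PRIMARY KEY, text TEXT)"
            )

        def create(self, text: str) -> int:
            cur = self.db.execute("INSERT INTO memories (text) VALUES (?)", (text,))
            self.db.commit()
            return cur.lastrowid

        def read(self) -> list[tuple[int, str]]:
            return self.db.execute("SELECT id, text FROM memories").fetchall()

        def update(self, mem_id: int, text: str) -> None:
            self.db.execute("UPDATE memories SET text = ? WHERE id = ?", (text, mem_id))
            self.db.commit()

        def delete(self, mem_id: int) -> None:
            # A poisoned or stale memory can simply be removed.
            self.db.execute("DELETE FROM memories WHERE id = ?", (mem_id,))
            self.db.commit()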



