
> rework their tools so they discard/add relevant context on the fly

That may be the foundation for the next innovation step from model providers. But you can get a poor man's version today if you can determine, in retrospect, when a context was at its peak for taking turns, and when it became too rigid or too many tokens had been spent, and then simply replay the context up to that point.
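To make the replay idea concrete, here's a minimal sketch, not tied to any provider SDK. All names (Checkpoint, ConversationLog, replay_from) are hypothetical, and messages are plain role/content dicts as most chat APIs accept them:

    # Sketch of "snapshot the context, then replay from the snapshot".
    from copy import deepcopy
    from dataclasses import dataclass, field

    @dataclass
    class Checkpoint:
        """Frozen copy of the message history at a point deemed 'peak'."""
        messages: list = field(default_factory=list)
        token_count: int = 0

    @dataclass
    class ConversationLog:
        messages: list = field(default_factory=list)

        def add(self, role: str, content: str) -> None:
            self.messages.append({"role": role, "content": content})

        def snapshot(self, token_count: int) -> Checkpoint:
            # Deep-copy so later turns can't mutate the saved state.
            return Checkpoint(messages=deepcopy(self.messages),
                              token_count=token_count)

    def replay_from(checkpoint: Checkpoint) -> ConversationLog:
        """Start a fresh conversation seeded with the checkpointed context,
        discarding everything that happened after it."""
        return ConversationLog(messages=deepcopy(checkpoint.messages))

Nothing clever happens at replay time: you just feed the checkpointed messages back to the model as if the later turns never happened.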

I don’t know whether evaluating when a context is worth duplicating is an established practice; it isn’t deterministic, and it depends on enforcing a certain workflow.
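For illustration only, one could imagine a crude heuristic like the following; the thresholds are arbitrary assumptions, and as noted above nothing about this is deterministic:

    def worth_checkpointing(token_count: int,
                            token_budget: int,
                            consecutive_corrections: int) -> bool:
        """Checkpoint while the context is still 'fresh': well under the
        token budget and before the user has had to correct the model
        repeatedly (a rough proxy for the context becoming rigid)."""
        under_budget = token_count < 0.5 * token_budget   # arbitrary 50% cutoff
        still_flexible = consecutive_corrections < 2      # arbitrary threshold
        return under_budget and still_flexible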



