Hacker News

This is definitely a problem. Skills require a very strong model - one with a long context (32,000 tokens minimum, at a guess) that can reliably drive Unix CLI tools over a multi-step conversation.

I haven't yet run a local model that feels strong enough at these things for skills to make sense. Really I think the unlock for skills was o3/Claude 4/GPT-5 - prior to those the models weren't capable enough for something like skills to work well.

That said, the rate of improvement of local models has been impressive over the past 18 months. It's possible there's now a 70B local model that's capable enough to run skills and I just haven't used it with the right harness.
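For context on why this needs a strong model: a skill is mostly just a folder containing a SKILL.md file - YAML frontmatter plus prose instructions the model is expected to follow. Roughly this shape (the name and steps here are invented for illustration):

```markdown
---
name: csv-summarizer
description: Summarize CSV files using standard Unix tools.
---

# CSV summarizer

When the user asks about a CSV file:

1. Run `head -n 5 file.csv` to inspect the columns.
2. Use `awk` or `cut` to pull out the relevant columns.
3. Report row counts, column names and any obvious anomalies.
```

The model has to read those instructions and then actually execute the right sequence of shell commands across several turns, which is exactly where weaker local models tend to fall apart.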



I connect local models to MCP servers with LM Studio and I'm blown away by how good they are. But the issues creep in when you hit longer contexts, like you said.
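For anyone curious, LM Studio serves an OpenAI-compatible API locally (port 1234 by default), so the tool-calling plumbing is just a standard chat completions request. A minimal sketch of the request body - `run_shell` is a made-up tool name here, and in practice you'd use whatever tools your MCP server exposes:

```python
import json

# A tool definition in the OpenAI function-calling schema that
# LM Studio's /v1/chat/completions endpoint accepts.
# `run_shell` is hypothetical, purely for illustration.
run_shell_tool = {
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Run a Unix shell command and return its stdout.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {
                    "type": "string",
                    "description": "The command to execute.",
                },
            },
            "required": ["command"],
        },
    },
}

# Request body for the local server; the model name depends on
# whichever model you have loaded in LM Studio.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "user", "content": "How many .md files are in this repo?"}
    ],
    "tools": [run_shell_tool],
}

body = json.dumps(payload)
```

You POST `body` to `http://localhost:1234/v1/chat/completions`; if the model decides to use the tool it replies with a `tool_calls` entry, your harness runs the command and feeds the output back as a `tool` message, and the loop continues. Every round trip adds to the context, which is why long multi-step tool sessions are where local models struggle.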



