wanderingmind's comments

Dumb question from someone who uses old cars without OTA: how can cars get OTA updates if you don't pay for any internet subscription? Can't you say you don't want to pay for any internet costs when buying the car? Also, ensure you don't connect the car to Wi-Fi. Isn't that sufficient?


OTel meaning OpenTelemetry? Do they have special capabilities for tracing agents?


Yes, there is an OTel standard for agent traces. You can instrument agents that don't natively support OTel via Bifrost.
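
For the curious, here's a minimal sketch of hand-instrumenting one agent step with the OpenTelemetry Python SDK. The gen_ai.* attribute names follow the (still-evolving) GenAI semantic conventions, and the exporter endpoint, model name, and call_model helper are placeholder assumptions; nothing here is Bifrost-specific.

    # Minimal sketch: emit a trace span around one agent LLM call.
    # Endpoint, model name, and call_model() are placeholders, not a real setup.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
    )
    trace.set_tracer_provider(provider)
    tracer = trace.get_tracer("my-agent")

    def run_agent_step(prompt: str) -> str:
        with tracer.start_as_current_span("agent.llm_call") as span:
            span.set_attribute("gen_ai.system", "openai")
            span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
            span.set_attribute("gen_ai.prompt", prompt)
            reply = call_model(prompt)  # hypothetical helper wrapping your LLM client
            span.set_attribute("gen_ai.completion", reply)
            return reply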


That looks like a great use case. Would you be able to write about the architecture? A lot of us would love to be able to do things like this in Sheets; I'm personally trying to integrate a forecast estimate into Sheets.


Dumb question maybe: can users in Europe raise a GDPR request to extract all their data from Slack? I realise it's not easy to port the data to other platforms yet, but at least you'd have a copy of the data.


This is consistent with Gall's law, which says that complex systems that work can only be achieved by building complexity on top of simple systems that work. Complex systems built from scratch do not work. So build the simplest system that works, then keep adding complexity to it based on requirements.


Until? What's the threshold here? Does complexity have a final boss?


Maybe I'm using it wrong, but when I try to use the full-precision FP16 model, load it into ChatterUI, and ask a simple question,

"write me a template to make a cold call to a potential lead",

it throws me absolute rubbish. On the other hand, the Q8-quantized Qwen 0.6B model nails the answer to the same question.

Qwen 0.6B is smaller than full-precision Gemma. Its execution is a tad slower, but not by much. I'm not sure why I'd pick Gemma over Qwen.


As many have repeated here, it's (generally) not for direct use. It is meant to be a good base for fine-tuning, to get something very fast.

(In theory, if you fine-tuned Gemma3:270M on "templating cold calls to leads", it would become better than Qwen, and faster.)
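
To make that concrete, here is a rough sketch of what such a fine-tune could look like with the Hugging Face peft/trl stack. The checkpoint name, dataset file, and hyperparameters are assumptions for illustration, not a validated recipe.

    # Minimal sketch: LoRA fine-tuning a small Gemma checkpoint on a task-specific dataset.
    # Model id, jsonl file name, and hyperparameters are all assumptions.
    from datasets import load_dataset
    from peft import LoraConfig
    from trl import SFTConfig, SFTTrainer

    # Hypothetical dataset: one {"text": "..."} record per cold-call template example.
    dataset = load_dataset("json", data_files="cold_call_examples.jsonl", split="train")

    peft_config = LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules="all-linear",
        task_type="CAUSAL_LM",
    )

    trainer = SFTTrainer(
        model="google/gemma-3-270m-it",  # check the hub for the exact checkpoint name
        train_dataset=dataset,
        peft_config=peft_config,
        args=SFTConfig(output_dir="gemma-270m-cold-call", num_train_epochs=3),
    )
    trainer.train()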


Why should we start fine-tuning Gemma when it is so bad? Why not instead focus the fine-tuning effort on Qwen, when it starts off with much, much better outputs?


Speed-critical applications, I suppose. Have you compared the speeds?

(I did. I won't give you numbers (which I cannot remember precisely), but Gemma was much faster. So it will depend on the application.)


Thank you for taking the time to build something and share it. However, what is the advantage of using this over whisper.cpp's stream example, which can also do real-time transcription?

https://github.com/ggml-org/whisper.cpp/tree/master/examples...


It's a lot more than that.

- It supports other models, like Moonshine.

- It also works as a proxy for cloud model providers.

- It can expose local models through a Deepgram-compatible API server (see the sketch after this list).
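
To illustrate what "Deepgram-compatible" buys you, here's a minimal sketch of pointing a plain HTTP request at a hypothetical local endpoint. The host, port, query parameters, and key are assumptions, not the project's documented defaults.

    # Minimal sketch: send a WAV file to a local Deepgram-compatible /v1/listen endpoint.
    # Host, port, query parameters, and API key are assumptions for illustration.
    import requests

    with open("sample.wav", "rb") as audio:
        resp = requests.post(
            "http://localhost:8080/v1/listen?model=nova-2&smart_format=true",
            headers={
                "Authorization": "Token local-dummy-key",  # a local server may ignore this
                "Content-Type": "audio/wav",
            },
            data=audio,
        )

    # Deepgram-style responses nest the transcript under results -> channels -> alternatives.
    print(resp.json()["results"]["channels"][0]["alternatives"][0]["transcript"])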


Thank you. Having it operate as a proxy server that other apps can connect to is really useful.


I'm surprised no one has mentioned NewPipe, the best YouTube app. A few other apps I use:

- Feeder - RSS app

- Fedilab - for Mastodon


I prefer Tubular, a NewPipe fork with not only an ad blocker but also SponsorBlock and ReturnYouTubeDislike.


I also use PipePipe.


One thing I wish YouTube had, like Twitter, is the ability to see which other channels the channels you like subscribe to. That way you are not held hostage to YouTube's recommendations, which are definitely not in favor of the viewers.


As a tangent, one issue I face is how to force Cursor to use uv over pip. I have placed it in the rules and also explicitly add a rules.yml file to every agent conversation. It still tries to push through pip 4 times out of 10. Any recommendations on best practices for forcing Cursor to use uv would make a significant dent in productivity.


More of a band-aid than anything else, but maybe something like this might help:

alias pip='echo "do not use pip, use uv instead" && false'

You can put that in your .bashrc or .zshrc. There's a way to detect whether it's a Cursor shell so the alias only applies in that case, but I can't remember how off the top of my head!


> alias pip='echo "do not use pip, use uv instead" && false'

Interesting times, eh?

Who (who!) would have told us we'd be aliasing terminal commands to *natural language* instructions - for machine consumption, I mean. Not for the dumb intern ...

(I am assuming that must have happened at some point - i.e., having a verboten command echo "here be dragons" to the PFY ...)

