Hacker News | cellis's comments


Go back far enough and everything was stolen from someone


That's so wrong


ChatGPT does not offer a clear indication that it is blatantly illegal, though the amount of legal risk is high enough that it might as well be.


ChatGPT isn't even a search engine, let alone a lawyer.


Electricians in data-center states are eating; elsewhere they're scraping by due to macroeconomics.


On Chrome for iOS I'm physically unable to press the three checkboxes on the terms, which means I can't try your app. They appear to be overlapped by the terms themselves and thus never receive click events.


Yeah, this only appears to work on desktops with a res of 4K or greater, which is… not ideal for a social network?


iOS Safari, same issue. Can't get past the terms.


Whenever some group is said to have made (or been fined) $1M out of their likely billions in revenue, someone will chime in and say “that’s nothing”. But from a “department P&L” perspective, yes, it is a lot of money!

Think about the crime families as making, e.g., 50% of their money from construction corruption, 40% from drug sales, 5% from extortion… someone has to run the smaller departments, and that is a lot of money for that “Dept Head”. Also, from the FBI's perspective, they want to unravel conspiracies, often by yanking on one piece of yarn like this one.


Could this be used to train a text -> audio model? I'm thinking of an architecture that uses RVQ. Would RVQ still be necessary?


I believe DDN is capable of handling TTS (text-to-speech) tasks, because conditioning on the text significantly reduces the generation space.

It's also recommended to combine it with an autoregressive model (GPT) for more powerful modeling capabilities.


Are they trying to imply that high-fat diets in humans similarly affect brain autophagy? That seems like quite the causal stretch given the vastly more complex metabolic architecture of humans.


I don't see where the authors are making this implication.


What if the car is stolen?


Opposite for me… 5-codex high ran out of tokens extremely quickly and didn’t adhere as well to the agents.md as Claude did to the Claude.md, perhaps because it insists on writing extremely complicated bash scripts or whole Python programs to execute what should be simple commands.


Codex was a miserable experience for me until I learned to compact after every feature. Now it is a cut above CC, although the latter still has an edge at TODO scaffolding and planning.


I don't even compact, I just start from scratch whenever I get down below 40%, if I can. I've found Codex can get back up to speed pretty well.

I like to have it come up with a detailed plan in a markdown doc, work on a branch, and commit often. Seems not to have any issues getting back on task.

Obviously subjective take based on the work I'm doing, but I found context management to be way worse with Claude Code. In fact I felt like context management was taking up half of my time with CC and hated that. Like I was always worried about it, so it was taking up space in my brain. I never got a chance to play with CC's new 1m context though, so that might be a thing of the past.


/new (Codex) or /clear (Claude Code) are much better than compacting after every feature, but of course if there is context you need to retain, you should put it (or have the agent put it) in claude/agents.md, a work log file, or some other file.

/compact helps by reducing crap in your context, but you can go further. Also try to watch the % of context remaining and not go below 50% if possible - learn to choose tasks that don't require more context than the models can handle well.
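For instance, a work log that survives a /clear might look something like this (the filename and every detail here are hypothetical):

  WORKLOG.md (hypothetical example)
  - Branch: feature/search-index
  - Done: extracted the indexer into its own module; tests pass
  - Next: wire the indexer into the ingest pipeline
  - Gotchas: the legacy tokenizer lowercases differently, see its tests

Pointing the agent at a file like this after /new or /clear gets it back up to speed without dragging the whole old conversation along.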


Compact?


/compress or something like that, basically taking the context and summarizing it.


Cursor does this automatically, although I wish there was a command for it as well. All AIs start shitting the bed once their context goes above 80% or so.


Claude Code was the first coding tool that was honest about performance degrading as the context window fills, and gave us the /context command.

Do any other tools have anything like a /context command? They really should.


gpt-5 command line use is bizarre. It always writes extraordinarily complicated pipelines that Claude instead just writes simple commands for.

My use case does better with the latter because frequently the agent fails to do things and then can't look back at the intermediate output.

E.g.

  Command | Complicated Grep | Complicated Sed

is way worse than the multistep

  Command > tmpfile

and then grep etc., because the latter can reuse the tmpfile if the grep is wrong.
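A minimal sketch of the difference, with a made-up command name and hypothetical tmpfile paths:

  # one-shot pipeline: if the sed expression is wrong, the whole thing has to rerun
  produce_report | grep -E 'ERROR' | sed -E 's/^ERROR: //'

  # multistep: intermediate output is kept, so only the failing step needs a rerun
  produce_report > /tmp/report.txt
  grep -E 'ERROR' /tmp/report.txt > /tmp/errors.txt
  sed -E 's/^ERROR: //' /tmp/errors.txt

If the sed pattern turns out to be wrong, the agent can rerun just that last step against /tmp/errors.txt instead of repeating the whole pipeline.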

