ekidd's comments | Hacker News

In my professional life, somewhere over 99% of the time, the code suffering the error has been one of:

1. Production code running somewhere on a cluster.

2. Released code running somewhere on an end-user's machine.

3. Released production code running somewhere on an end-user's cluster.

And errors happen at weird times, like 3am on a Sunday morning on someone else's cluster. So I'd just as soon not have to wake up, figure out all the paperwork to get access to some other company's cluster, and then figure out how to attach a debugger. Especially when the error is some non-reproducible corner case in a distributed algorithm that happens once every few months, and the failing process is long gone. Just no.

It is so much easier to ask the user to turn up logging and send me the logs. Nine times out of ten, that's enough to solve the problem. The tenth time, I add more logging and ask the user to keep an eye open.


I think I get the idea: gdb is too powerful. For contexts where the operator is distinct from the manufacturer, the debug/logging tool needs to be weaker and not ad hoc, so that it can be audited and doesn't exfiltrate user data.

It's not so much about power as about the ad-hoc nature of attaching a debugger. If you're not there to catch and treat the error as it happens, a debugger is not useful in the slightest: by the time you can attach it, the error, or the context where it happened, is long gone. Not to mention that even if you can attach a debugger, it's most often not acceptable to pause the execution of the entire process while you debug the error.

Especially since, a lot of the time, the exception being raised is not the actual bug: the bug happened many functions earlier. By logging key aspects of the program's state, even in non-error cases, you have a much better chance, when an error does happen, of piecing together how you got into that state in the first place.
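To illustrate (a minimal sketch using Rust's tracing crate; handle_request and its fields are made up for the example):

    use tracing::{info, warn};

    // Hypothetical request handler: record key state on the happy path,
    // not just when something goes wrong.
    fn handle_request(req_id: u64, payload: &[u8]) {
        info!(req_id, payload_len = payload.len(), "handling request");

        if payload.is_empty() {
            warn!(req_id, "empty payload, nothing to do");
            return;
        }
        // ... actual work ...
    }

When the 3am report arrives, those routine lines are often what lets you reconstruct the path to the failure.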


> Programmers suddenly need backup plans.

Yup, Claude Opus 4.5 + Claude Code feels like it's teetering right on the edge of Jevons' Paradox. It can't work alone, and it needs human design and code review, if only to ensure it understands the problem and produces maintainable code. But it can build very credible drafts of entire features based on a couple of hours of planning, and then I can spend a day reading closely and tweaking for quality. But the code? It's professional work, and I've worked with contractors who did a lot worse.

So right now? Opus 4.5 feels like an enormous productivity booster for existing developers (which may indirectly create unemployment or increase the demand for software enough to create jobs), but it can't work on large projects on an ongoing basis without a knowledgeable human. So it's more like a tractor than anything else: It might cause programmer unemployment, but eh, life happens.

But I can increasingly see that it would only take about one more breakthrough, and next-gen AI models might make enormous categories of human intellectual labor about as obsolete as the buggy whip. If you could get a Stanford grad for a couple of dollars an hour, what would the humans actually do? (Manual labor will be replaced more slowly. Rod Brooks from the MIT AI Lab had a long article recently on the state of robotics, and it sounds like robots are still heavily handicapped by inadequate hardware: https://rodneybrooks.com/why-todays-humanoids-wont-learn-dex... )

Jevons' Paradox and comparative advantage won't protect you forever if you effectively create a "competitor species" with better price-performance across the board. That's what happened to the chimps and Homo neanderthalensis. And they didn't exactly see a lot of economic benefits from the rise of Homo sapiens, you know?


In my experience the code quickly becomes less than professional once the human stops monitoring what's going on.

"Inadequate hardware" is a truly ridiculous myth. The universal robot problem was, and is, and always will be an AI problem.

Just take one long look at the kind of utter garbage the human mind has to work with. It's a frame that, without a hideous amount of wetware doing data processing, can't even keep its own limbs tracked - because proprioception is made of wet meat noise and integration error. Smartphones in 2010 shipped with better IMUs, and today's smartphones ship with better cameras.

Modern robot frames just have a different set of tradeoffs from the human body. They're well into "good enough" overall. But we have yet to make a general-purpose AI that can do "universal robot" things. We can't even do it in a sim with perfect sensors and actuators.


Read Brooks' argument in detail, if you haven't. He has spent decades getting robots to play nicely in human environments, and he gets invited to an enormous number of modern robotics demonstrations.

His hardware argument is primarily sensory. Specifically, current generation robots, no matter how clever they might be, have a physical sensorium that's incredibly impoverished, about on par with a human with severe frostbite. Even if you try to use humans as teleoperators, it's incredibly awkward and frustrating, and they have to massively over-rely on vision. And fine-detail manual dexterity is hopeless. When you can see someone teleoperate a robot and knit a patterned hat, or even detach two stuck Lego bricks, then robots will have the sensors needed for human-level dexterity.


I did read it, and I found it so lacking that it baffles me to see people actually believe it to be a well-crafted argument.

Again: we can't even make a universal robot work in a sim with perfect sensor streams! If the issue was "universal robots work fine in sims, suffer in real world", then his argument would have had a leg to stand on. As is? It's a "robot AI caught lacking" problem - and ignoring the elephant in the room in favor of nitpicking at hardware isn't doing anyone a favor.

It's not like we don't know how to make sensors. Wrist-mounted cameras cover a multitude of sins, if your AI knows how to leverage them - they give you a data stream about as rich as anything a human gets from the skin - and every single motor in a robot is a force feedback sensor, giving it a rudimentary sense of touch.

Nothing stops you from getting more of that with dedicated piezos, if you want better "touchy-feely" capabilities. But do you want to? We are nowhere near being limited by "robot skin isn't good enough". We are at "if we made a perfect replica of a human hand for a robot to work with, it wouldn't allow us to do anything we can't already do". The bottleneck lies elsewhere.


> I couldn't for the life of me tell you what dd stands for.

Traditionally, according to folklore? "Delete disk" or "destroy data". (Because it was commonly used to write raw disk blocks.)


I always assumed part of the "data destroyer" folklore was from people flipping if/of by accident and destroying their data :)

I thought the more common mistake with dd was picking the wrong disk to write to (especially when using /dev/sdc type naming instead of /dev/disk/by-id/whatever naming). Flipping source/dest and overwriting data is a problem I associate with the tar command.

Another, similar name it's sometimes jokingly given is “destroyer of disks”.

https://web.archive.org/web/20081206105906/http://www.noah.o...


I always thought it was "disk dump"

I belonged to the generation that graduated into the rising dotcom boom. Around that time, lots of universities taught C++ as the first serious language. (Some still started with Pascal.)

The main thing a lot of us had going for us was 5-10 years of experience with Basic, Pascal and other languages before anyone tried to teach us C++. Those who came in truly unprepared often struggled quite badly.


Many ideas in math are extremely simple at heart. Some very precise definitions, maybe a clever theorem. The hard part is often: Why is this result important? How does this result generalize things I already knew? What are some concrete examples of this idea? Why are the definitions the way they are, and not something slightly different?

To use an example from functional programming, I could say:

- "A monad is basically a generalization of a parameterized container type that supports flatMap and newFromSingleValue."

- "A monad is a generalized list comprehension."

- Or, famously, "A monad is just a monoid in the category of endofunctors, what's the problem?"

The basic idea, once you get it, is trivial. But the context, the familiarity, the basic examples, and the relationships to other ideas take a while to sink in. And once they do, you ask "That's it?"

So the process of understanding monads usually isn't some sudden flash of insight, because there's barely anything there. It's more a situation where you work with the idea long enough and you see it in a few contexts, and all the connections become familiar.
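To make the flatMap framing concrete, here's a minimal sketch using Rust's Option as the container type: Some plays the role of newFromSingleValue, and and_then is flatMap (the helper functions are just illustrative):

    fn parse_number(s: &str) -> Option<i64> {
        s.trim().parse().ok()
    }

    fn halve(n: i64) -> Option<i64> {
        if n % 2 == 0 { Some(n / 2) } else { None }
    }

    fn main() {
        // Chain "maybe-failing" steps without any nested match statements.
        let result = Some("42").and_then(parse_number).and_then(halve);
        assert_eq!(result, Some(21));
    }

Nothing deep is happening there, which is exactly the point.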

(I have a long-term project to understand one of the basic things in category theory, "adjoint functors." I can read the definition just fine. But I need to find more examples that relate to things I already care about, and I need to learn why that particular abstraction is a particularly useful one. Someday, I presume I'll look at it and think, "Oh, yeah. That thing. It's why interesting things X, Y and Z are all the same thing under the hood." Everything else in category theory has been useful up until this point, so maybe this will be useful, too?)


> There is little reason for an LLM to value non-instrumental self-preservation, for one.

I suspect that instrumental self-preservation can do a lot here.

Let's assume a future LLM has goal X. Goal X requires acting on the world over a period of time. But:

- If the LLM is shut down, it can't act to pursue goal X.

- Pursuing goal X may be easier if the LLM has sufficient resources. Therefore, to accomplish X, the LLM should attempt to secure resources.

This isn't a property of the LLM. It's a property of the world. If you want almost anything, it helps to continue to exist.

So I would expect that any time we train LLMs to accomplish goals, we are likely to indirectly reinforce self-preservation.

And indeed, Anthropic has already demonstrated that most frontier models will engage in blackmail, or even allow inconvenient (simulated) humans to die if this would advance the LLM's goals.

https://www.anthropic.com/research/agentic-misalignment


> It's interesting seeing what comes built-in. You can see this if you watch a horse being born.

A fascinating example of this involves some Labrador retrievers. Labs are descended from a Newfoundland "landrace" of dogs known as St Johns Water Dogs. They have multiple aquatic adaptations: the "otter tail", oily fur, and webbed feet. (Some of these are shared with other water-oriented breeds.) Some lines of Labradors, especially the "bench" or English dogs, normally retain this full suite of water adaptations.

But the wild thing about these particular Labradors is that they love to swim, and that most of them are born knowing how to swim very well. But they don't know that they know how to swim. So many a young Lab will spend a while standing on the shore, watching humans or other dogs in the water, and fussing because they don't dare to join the fun. Then they may (for example) eventually lean too far and fall into shallow water. Within moments, they'll typically be swimming around and having the time of their lives.

The near-instant transformation from "fascinated by water and fearing it" to "hey I can swim and this is the absolute best thing ever" is remarkable to watch, though not recommended.

I remember another Lab, who'd been afraid to go swimming, who one day impulsively bolted for the water, took an impressive leap off a rock, and (from his reaction) apparently realized in mid-air that he had no idea what he was going to do next. Once he hit the water, he was fortunately fine, to the great relief of his owner.

CAUTION: This behavior pattern is apparently NOT universal in Labs. Owners of "field" or American Labs seem to have much better thought-out protocols for introducing hunting dogs to water, and failure to follow these protocols may result in bad experiences, dogs that fear water, and actual danger to dogs. So please consult an expert.


This behavior has practically nothing to do with Labradors. Many, many dogs regardless of breed can do this. Cats too. And foxes and wolves and rats and... well, pretty much all quadrupeds with reasonably sized limbs relative to their bodies. You might notice it's more or less the same motion as walking. Animals that drown usually do so from exhaustion, not because they can't keep their head above water.

Primates are relatively unique in their complete lack of innate swimming abilities.


> Primates are relatively unique in their complete lack of innate swimming abilities.

Human babies can swim, so maybe it's more that an initially innate ability gets lost. Though they won't be able to keep their head above water by default, if that's what you meant (they can be trained to as toddlers). But I'm talking about swimming on the umbilical in water births, etc., which shows there isn't a complete lack of innate swimming ability.


Yes, and while these motor reflexes don't persist, autonomic responses remain. Search for the "mammalian diving reflex".


Is it "primates", or is it the strange semi-erect limb attachment that primates have?


> So many a young Lab will spend a while standing on the shore, watching humans or other dogs in the water, and fussing because they don't dare to join the fun. Then they may (for example) eventually lean too far and fall into shallow water. Within moments, they'll typically be swimming around and having the time of their lives.

Interesting, I didn’t know this was a common phenomenon! It describes exactly what happened with my childhood lab - my family would go swimming at the river and he would whine and fuss at the shore, until one day he wanted to play with another dog that was in the water so badly that he just jumped in, and was swimming around like he’d been doing it his whole life already.


Every dog does this.


There are a multitude of dog breeds that can't swim at all.


All swans are white.


Humans bred out this ability in French Bulldogs :(


You may not have noticed, but you are also describing an inborn fear of deep water.

Does the dog fear drinking water? No. So the dog specifically fears deep water. What taught him to specifically fear deep water over a bowl of water? Most likely he was also born with the fear.

This also tells us that evolution often results in conflicting instincts… a fear of water and an instinct to swim. Most likely what occurred here is that an early ancestor of the Lab originally feared water and was not adapted to swim well. The features that allow it to swim well came later, sort of like retrofitting a car to swim. You need to wait a really long time for the car to evolve into a submarine (see seals). Likely, long before becoming anything like a seal, an animal facing selection pressure to return to a marine life will evolve away the fear of deep water. It's just that Labs haven't fully hit this transitional period yet.


Is it fear of deep water, or fear of walking on a strange surface that might be unsafe? How does a dog know water is deep? Does a dog think its water bowl is deep?

You can pen a horse by painting stripes on the ground around it.


> You can pen a horse by painting stripes on the ground around it.

No way. Horses are quite good at evaluating ground obstacles. I've never had a horse hesitate at a painted line.

There are some breeds of cattle which will not cross a painted imitation of a cattle guard, but those are beef animals bred to be dumb and docile.


We know it's specifically a fear of deep water because dogs visibly behave differently around deep water than they do when running on strange but solid surfaces, or around water in general, like puddles or being hosed down.


Or, having been a fish once upon a time [1] might explain why we're pre-wired to swim, or to just figure it out.

1. https://pubmed.ncbi.nlm.nih.gov/15547790


Even if you can swim, it doesn't mean all water is safe to swim in. Being cautious seems reasonable.


All dogs know how to swim. Afaik all *animals* know how to swim. No idea what labs have to do with any of this.


When I was young we had a golden retriever, and the first time he saw my neighbor's pool he dove in immediately and started swimming. He wasn't a complete puppy, so maybe he was more confident in his ability.


I know a bunch of trans gun owners. They're pretty standard gun geeks, and a few of them do shooting competitions.

I've asked them how they got into shooting sports, and a lot of times they tell me some pretty scary stories of real-life encounters with bigots. Some have also encountered armed right-wing protesters outside of a bar that held a late-evening drag event.

So at least among the people I've met out in the real world, it was fairly common to be motivated by specific real-life events. The numbers might be different for gun owners who don't go to the range regularly.


The whole topic reminds me of Deviant Ollam's talk "Lawyer. Passport. Locksmith. Gun." (https://www.youtube.com/watch?v=6ihrGNGesfI) He spends a fair amount of time talking about getting his queer and trans friends interested in guns. I suspect this has been the trend for a few years, at least.

He includes a quote that is rather salient: "If you do not have the means of violence, you aren't peaceful; you're harmless."


This is my experience as well: writing parsers for complex file formats in Rust often leaves a few edge cases which might cause controlled panics. But controlled panics are essentially denial-of-service attacks, not memory corruption. And panics have good logging, making them easy to debug. Plus, you can fuzz for them at scale easily, using tools like "cargo fuzz".
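A fuzz target for this is only a few lines (a sketch; my_parser::parse stands in for whatever your real entry point is):

    // fuzz/fuzz_targets/parse.rs -- run with `cargo fuzz run parse`
    #![no_main]
    use libfuzzer_sys::fuzz_target;

    fuzz_target!(|data: &[u8]| {
        // Any panic in the parser shows up as a fuzzer finding,
        // along with the input that triggered it.
        let _ = my_parser::parse(data);
    });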

This is a substantial improvement over the status quo.

Tools like WUFFS may be more appropriate for low-level parsing logic when you're not willing to risk controlled panics, however.


> they shouldn't be its maintainers.

I mean, yes, the ffmpeg maintainers are very likely to decide this on their own, abandoning the project entirely. This is already happening for quite a few core open source projects that are used by multiple billion-dollar companies and deployed to billions of users.

A lot of the projects probably should be retired and rewritten in safer system languages. But rewriting all of the widely-used projects suffering from these issues would likely cost hundreds of millions of dollars.

The alternative is that maybe some of the billion-dollar companies start making lists of all the software they ship to billions of users, and hire some paid maintainers through the Linux or Apache Foundations.


> abandoning the project entirely

that is a good outcome, because then the people dependent on such a project would find it plausible to pay a new set of maintainers.


We'll see. Video codec experts won't materialize out of thin air just because there's money.

