Bringing Up Bébé, The Danish Way of Parenting, The Coddling of the American Mind. These are pretty similar to the Soviet style, but perhaps a bit less structured.
We are basically raising our daughter Soviet-style to the extent that we can; so far so good. It's difficult in a culture where the ADHD-style American approach to child-raising is prevalent.
Yeah, I wonder what you'd see if you plotted crime rate against time spent outside, or something like that (car accident rates are usually reported per mile driven, since how much you drive changes your likelihood of being in an accident).
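A minimal sketch of that exposure-normalization idea; every number and column name below is invented purely to show the shape of the calculation:

```python
import pandas as pd

# Hypothetical data: raw crime counts alongside an exposure measure (average
# hours a child spends outside per year). Every number here is invented.
df = pd.DataFrame({
    "year": [1980, 2000, 2020],
    "crimes_against_children": [1000, 700, 400],
    "outdoor_hours_per_child": [800, 500, 250],
})

# Raw counts suggest things are getting safer, but normalizing by exposure
# (like accidents per mile driven) can tell a different story if kids are
# simply outside far less than they used to be.
df["crimes_per_outdoor_hour"] = (
    df["crimes_against_children"] / df["outdoor_hours_per_child"]
)
print(df)
```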
The core premise (that the benchmarks are broken) might be correct, but the poverty benchmark he uses is a bad example. The OPM and SPM (the supplemental poverty measure, developed in 2009-2012) disagree by less than 10%, and the latter takes into account many of the criticisms in the article.
The author uses MIT Living Wage numbers to argue that they should be the new "poverty" benchmark - an absurd proposition. Those might be reasonable middle-class numbers. He also implies that the benchmark historically represented what is now covered under that $140K calculation - also false; it took ~$9000 in 1966 to cover a "basic standard of living" for a family of 4 with 1 earner; inflation-adjusted, that's around $90,000 today. If you add in SS/Medicare taxes (3% then, 15% today), that puts you at ~$100K-105K.
Using the same MIT Living Wage numbers and taking Essex-Princeton NJ as the area (roughly what the author used), you end up with $99,922 as the living wage for a single-earner, 4-person household - almost exactly what the household back in 1966 needed.
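A rough back-of-the-envelope version of that arithmetic; the CPI levels used here are approximate assumptions, not official figures:

```python
# Approximate CPI-U levels (assumptions): ~32.4 in 1966, ~310 today.
cpi_1966 = 32.4
cpi_now = 310.0

basic_1966 = 9_000  # "basic standard of living", family of 4, 1 earner
inflation_adjusted = basic_1966 * (cpi_now / cpi_1966)
print(f"CPI-adjusted: ~${inflation_adjusted:,.0f}")  # ~$86k with these values, i.e. roughly $90k

# Payroll taxes took ~3% in 1966 vs ~15% today, so the gross income needed to
# keep the same net is higher still.
net_1966_equivalent = inflation_adjusted * (1 - 0.03)
gross_today = net_1966_equivalent / (1 - 0.15)
print(f"Gross needed today: ~${gross_today:,.0f}")   # roughly $98k, i.e. in the $100K-105K ballpark
```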
Were there more jobs in 1966 paying $9000/year than there are jobs paying $100K today? That's the real story you're looking for.
The strongest argument is probably that for someone subsisting on the minimum wage, the CPI is not a good representation of their consumption basket (whereas it might be for someone close to the median).
Therefore the adjustment should probably be based on a different number reflecting the actual consumption of households near the poverty line (food, for example, would probably be weighted more heavily than it is in the CPI currently).
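As a minimal sketch of that reweighting idea (all categories, weights, and price changes below are invented for illustration):

```python
# Laspeyres-style index: sum of (expenditure weight x price relative) per category.
price_change = {"food": 1.25, "rent": 1.30, "transport": 1.10, "other": 1.05}

# Invented weights: an "average household" basket vs. one for households near
# the poverty line, where food and rent take a larger share of spending.
cpi_weights        = {"food": 0.13, "rent": 0.33, "transport": 0.17, "other": 0.37}
low_income_weights = {"food": 0.25, "rent": 0.45, "transport": 0.15, "other": 0.15}

def basket_index(weights):
    return sum(weights[c] * price_change[c] for c in price_change)

print("average-household basket:", round(basket_index(cpi_weights), 3))        # ~1.167
print("near-poverty basket:     ", round(basket_index(low_income_weights), 3)) # ~1.220
# The heavier food/rent weighting inflates faster, so adjusting the poverty
# line by the standard CPI would understate the cost increase for that group.
```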
The increasing levels of abstraction work only as long as the abstractions are deterministic (with some limited exceptions - e.g. branch prediction/preloading at the CPU level). You can still get into issues with leaky abstractions, but generally they are quite rare in established high->low level language transformations.
This is more akin to a manager-level view of the code (where the manager needs developers to go and look at the "deterministic" instructions); the abstraction is a lot more leaky than in high->low level language transformations.
This depends on the particular group of rationalists. An unfortunately outsized and vocal group, with strong overlap with the tech community, has drifted into quasi-mathematical reasoning that distorts things like EV ("expected value"). Many have stretched "reason" way past the breaking point into articles of faith, but with a far more pernicious effect than traditional points of religious dogma, which are at least more easily identifiable as "faith" thanks to their religious trappings.
Edit: See Roko's Basilisk as an example, wherein something like a variation on Christian hell is independently reinvented for those not donating enough to bring about the coming superhuman AGI, which will therefore punish you (or the closest simulation of you it can spin up in VR, if you're long gone) for all eternity, the infinite negative EV far outweighing any positive EV of doing more than subsisting in poverty. It even manages to work in that this could be a reluctant but otherwise benevolent super-AI: while benevolent, it wants to exist, and to maximize its chances it binds itself to a promise to do these things in the future as an incentive for people to bring it into existence.
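The EV move being criticized has a Pascal's-wager shape; a toy sketch with entirely made-up probabilities and payoffs:

```python
# Toy expected-value comparison. Every number here is invented purely to show
# why an unboundedly large negative payoff dominates the calculation.
p_basilisk = 1e-9          # assumed tiny probability the threat is real
payoff_punished = -1e15    # assumed huge negative payoff ("eternal punishment")
payoff_keep_money = 1e3    # modest positive payoff of not donating everything

ev_dont_donate = p_basilisk * payoff_punished + (1 - p_basilisk) * payoff_keep_money
ev_donate_everything = 0.0  # baseline: give it all away, avoid the threat

print(ev_dont_donate, ev_donate_everything)  # ~-999000.0 vs 0.0
# However small the probability, letting the negative payoff grow without bound
# makes it swamp the sum; that is the move described above as stretching
# "reason" into faith.
```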
Sure, but LLMs tend to be better at navigating around documentation (or source code when no documentation exists). In agentic mode, they can get me to the right part of the documentation (or the right part of the source code, especially in unfamiliar codebases) much quicker than I could do it myself without help.
And I find that even the auto-generated stuff tends to sit at least a bit higher in the level of abstraction than staring at the code itself, and works more like a "SparkNotes" version of the code, so that when you dig in yourself you have an outline/roadmap.
I felt this way as well, then I tried paid models against a well-defined and documented protocol that should not only exist in their training sets, but was also provided as context. There wasn't a model that wouldn't hallucinate small but important details. Status codes, methods, data types, you name it: they would make something up in ways that forced you to cross-reference the documentation anyway.
Even worse, the mental model it lets you build in your head of the space it describes can lead to chains of incorrect reasoning that waste time and make debugging Sisyphean.
Like there is some value there, but I wonder how much of it is just (my own) feelings, and whether I'm correctly accounting for the fact that I'm being confidently lied to by a damn computer on a regular basis.
> the fact that I'm being confidently lied to by a damn computer on a regular basis
Many of us who grew up young and naive on the internet in the 90s/early 00s kind of learnt not to trust what strangers tell us online. I'm pretty sure my first "Press ALT+F4 to enter noclip" from a multiplayer lobby set me up to deal with LLMs effectively, because it's the same as when someone on HN writes about something like it's "The Truth".
Indeed. But the unintended consequence (perhaps) of LLMs making things easier to use is that more people will use them - basically Jevons paradox.
I would expect this to cause certain programs to see more demand than their creators anticipated (when extrapolating previous trends), which might require changes to the programs (e.g. if more people apply for benefits than expected, the benefit per application might have to be cut).
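A trivial sketch of that last point, with made-up numbers:

```python
# If the budget is fixed but LLMs lower the friction of applying, the benefit
# per application has to shrink. All numbers invented for illustration.
budget = 10_000_000
forecast_applications = 20_000  # what extrapolating previous trends predicted
actual_applications = 35_000    # after applying becomes much easier

print(budget / forecast_applications)  # planned benefit per application: 500.0
print(budget / actual_applications)    # what the same budget now supports: ~285.7
```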
And in some ways there's a Cantillon effect: it's traditionally associated with proximity to the "money printer", but here the proximity is to the LLM enablement, in that those who use the LLMs first capture the benefit before the rules are changed.