> It's human nature: https://en.wikipedia.org/wiki/Negativity_bias. Everyone does it, but we perceive other people as doing it more than we do, which is itself a variation of the bias.
You can even see it in the title of the OP, in the word "overwhelmingly". That's excessive: the negative bias is noticeable, but if you look closely, it's not overwhelming. (To make up some numbers, it's more like 60-40, not 90-10.)
However, it often feels as if it is overwhelming; in fact, one or two datapoints, plus negativity bias, are enough to create just such a feeling. The feeling gets expressed in ways that trigger similar feelings in other people, so we end up with a positive* feedback loop.
The interesting question is, what factors mitigate this? how do we dampen negativity bias? or, how do we get negative feedback into our positive feedback loop of negative affect? That must also be happening all the time, or we'd be in a "war of all against all", which isn't the case, though (again) it may feel like it.
* ['positive' in the sense of increasing; a positive loop of negative affect!]
We focus on negative outcomes because that relates directly to survival. Our brains are wired for it. Talking about negative outcomes means we learn about them and have a better chance of avoiding them. Plus, the fear response is much stronger and lasts longer than the happiness/joy response.
Note that for humans and other social animals "survival" doesn't always mean life or death -- it can mean being included or excluded from a social group which indirectly affects survival chances.
You can't really put politics aside when the US was obviously dangling the return of the Monroe Doctrine for Ukraine. Let's see what that "deal" looks like.
We’re on a mission to give humans and LLMs reliable and fast access to web data.
Web scraping used to be the same for decades (brittle scripts that break constantly). We're automating that end-to-end with LLMs that build and maintain data pipelines. We're also heavily focused on making ethical scraping the default (robots.txt checks, rate limiting, etc.).
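For the curious, "ethical by default" means roughly this kind of guardrail (a toy sketch, not our production code; the user agent and delay are placeholders):

    # Toy sketch of "ethical scraping by default": robots.txt check + per-host rate limit.
    # Not our production pipeline; USER_AGENT and MIN_DELAY are placeholders.
    import time
    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser
    import requests

    USER_AGENT = "ExampleBot"   # placeholder
    MIN_DELAY = 1.0             # seconds between requests to the same host
    _robots, _last_hit = {}, {}

    def allowed(url):
        base = "{0.scheme}://{0.netloc}".format(urlparse(url))
        if base not in _robots:
            rp = RobotFileParser(base + "/robots.txt")
            rp.read()
            _robots[base] = rp
        return _robots[base].can_fetch(USER_AGENT, url)

    def polite_get(url):
        if not allowed(url):
            raise PermissionError("robots.txt disallows " + url)
        host = urlparse(url).netloc
        wait = MIN_DELAY - (time.time() - _last_hit.get(host, 0.0))
        if wait > 0:
            time.sleep(wait)
        _last_hit[host] = time.time()
        return requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)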
All 3 founders have spent years in the trenches writing web scraping and ETL software and still do it almost every single day. We’re making sure every team member can do the best work of their career at Kadoa.
We are looking for people who share our passion for software craftsmanship, data, and AI.
We are growing fast, have a no-bullshit culture, and try to minimize the distance between the code you write and the customers who use it.
If that sounds like you, please email me at (adrian at kadoa dot com) and mention HN in the subject line. AI slop applications will be filtered out immediately ;)
Would love to learn more about how this is built. I remember a similar project from 4 years ago[0] that used a classic BERT model for NER on HN comments.
I assume this one uses a few-shot LLM approach instead, which is slower and more expensive at inference, but so much faster to build since there's no tedious labeling needed.
> Would love to learn more about how this is built. I remember a similar project from 4 years ago[0] that used a classic BERT model for NER on HN comments
Yes, I saw that project; pretty impressive! Hand-labeling 4000 books is definitely not an easy task. Mad respect to tracyhenry for the passion and hard work that was required back then.
For my project, I just used the Gemini 2.5 Flash API (since I had free credits) with the following prompt:
"""You are an expert literary assistant parsing Hacker News comments.
Rules:
1. Only extract CLEARLY identifiable books.
2. Ignore generic mentions.
3. Return JSON ARRAY only.
4. If no books found, return [].
5. Include a sentiment score from -10 to 10, where 10 is highly recommended, -10 is strongly not recommended, and 0 is neutral.
6. If the author's name is in the comment, include it; otherwise, omit the key.
JSON format:
[
{{
"title": "book title",
"sentiment": "score",
"author" : "Name of author if mentioned"
}}
]
Text:
{text}"""
It did the job quite well. It really shows how far AI has come in just 4 years.
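For anyone who wants to reproduce it, the glue around that prompt is roughly this (a sketch, not the exact script I ran; it assumes the google-genai Python SDK and an API key in the environment):

    # Rough sketch of the extraction call (assumes the google-genai SDK and a
    # GEMINI_API_KEY in the environment; not the exact script).
    import json
    from google import genai

    client = genai.Client()  # picks up GEMINI_API_KEY from the environment

    PROMPT = """You are an expert literary assistant parsing Hacker News comments.
    ... rules and JSON format exactly as above ...
    Text:
    {text}"""

    def extract_books(comment_text):
        response = client.models.generate_content(
            model="gemini-2.5-flash",
            contents=PROMPT.format(text=comment_text),
        )
        raw = response.text.strip()
        # the model sometimes wraps the array in ```json fences; strip them before parsing
        if raw.startswith("```"):
            raw = raw.strip("`").removeprefix("json").strip()
        return json.loads(raw)  # [] when no books were found

(The {{ and }} in the prompt are there so .format() leaves the JSON braces alone.)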
Thanks. I now run a two-step process: first pass reads through all posts and comments to extract patterns, second pass uses those to generate the content. Should be much more representative of your full year now :)
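In sketch form, the two passes look roughly like this (simplified; llm() is a stand-in for the actual model calls):

    # Simplified sketch of the two-pass idea; llm() stands in for whatever model call is used.
    def llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model API of choice

    def yearly_review(comments: list[str]) -> str:
        # Pass 1: distill the whole year into recurring themes rather than single posts.
        patterns = llm(
            "Summarize the recurring topics, opinions and writing quirks in these "
            "Hacker News comments as a short bullet list:\n\n" + "\n---\n".join(comments)
        )
        # Pass 2: generate the review from the distilled patterns only, so a couple
        # of recent posts can't dominate the output.
        return llm(
            "Write a year-in-review roast of this commenter based only on these patterns:\n\n"
            + patterns
        )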
My impression was the same as the poster: it still over-indexes on a couple of recent posts.
Of course, it's possible that we've both been repeating ourselves all year long! I mean, I know I do that, I just think I've ridden more hobby horses than it picked up. :-)
It's fun, though. Thanks for sharing - a couple of my "roasts" gave me a genuine chuckle.
Perhaps it should also avoid putting too much emphasis on several comments to the same story: there was a story about VAT changes in Denmark, where I participated with several comments; but the generator decided that I apparently had a high focus on VAT, when I just wanted to provide some clarifying context to that story. I wonder how comments are weighted: individually or per story?
Specifically this roast:
> You have commented about the specific nuances of Danish VAT and accounting system hardcoding at least four times, proving you are the only person on Earth who finds tax infrastructure more exciting than the books being taxed.
Yeah, but I did it on the same story (i.e. context).
The other details it picked up, though, I can't really argue with; the VAT bit just stood out to me.
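Something like merging comments per story before the pattern extraction would probably address it (a naive sketch just to illustrate the idea; the field names are invented):

    # Naive sketch of the suggestion: merge comments per story before any topic
    # weighting, so four replies in one VAT thread count as one data point.
    # Field names are made up.
    from collections import defaultdict

    def per_story_documents(comments):
        """comments: list of dicts like {"story_id": ..., "text": ...}"""
        by_story = defaultdict(list)
        for c in comments:
            by_story[c["story_id"]].append(c["text"])
        # one merged document per story, fed to the pattern-extraction pass
        return ["\n".join(texts) for texts in by_story.values()]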