Sometimes a topic comes up that you not only don’t care about, but can’t even conceive of a way to incentivise yourself to care about. It’s not that you harbour some hidden like or dislike; it’s just that, to the extent humanly possible, you really REALLY don’t care.
Reading the replies and the tribal colours / flag waving is fascinating. I suppose this tribalism happens with every topic, but it’s rare to be gifted this perspective on a topic your own career is invested in.
It’s been fun reading clearly fabricated explanations that aren’t grounded in anything objective, only in the posters’ inner desires.
> can’t even conceive of a way to incentivise yourself to care about it
A lot of people on this forum came of age at a time when the internet and computers and tech companies built around them were ascendant and seen as a force for good in the world. We had the open source movement and peer-to-peer sharing and everything was going to be free and egalitarian and connected. Google was "organizing the world's information" and went from underdog to champion while telling us "don't be evil" was their core value. Kids who were bullied for being sincere and passionate were becoming the adults who ran things and got success and admiration.
Also, in the first ~decade or so that I was on HN, speculative fiction about AI was extremely popular here. We weren't really sure if superhuman intelligence would happen anytime soon, but we all had the sense that if/when it did, it had better be designed and run by people with high ethical standards if we were to have any hope of avoiding major catastrophe.
(I personally see the concerns about misaligned "superhuman AI" and misaligned mega-corporations as expressions of essentially the same underlying anxiety: that there are powerful forces in the world beyond our ability to influence in meaningful ways, which nevertheless have outsized effects on our environment and lives while being completely indifferent to our happiness or even our existence.)
Now we've been through one or more cycles of seeing our heroes turn into villains. Google got rid of "don't be evil". Musk turned his attention away from the stars and the "good of humanity" and toward petty political spats and gossip. And now OpenAI, which sold itself as the organization that would "do AGI" and do it the safe and ethical way, looks like it's run by somebody incredibly shrewd and self-serving.
So while I understand why you wouldn't care about this, I also completely understand why it's such an engaging topic for so many people here.
I have heard the term "wishcasting" used for this phenomenon. When people are deeply emotionally charged, they start asserting the facts they want to be true and ignoring the line between fantasy and reality, as if saying something often enough makes it real.
I see it almost exclusively online, and people don't seem to notice what they are doing.
I'm pretty convinced that the majority of these people actually "know" they are asserting claims they wish were true rather than claims that are actually true.
The evidence is that if you challenge them with the actual facts, they sometimes don't accuse you of being factually wrong; instead they accuse you of wishing the other way (i.e. of being in the other camp).
Plus, whenever powerful people are involved, they get treated as great masterminds, always conspiring, and it’s taken as a given that there’s more to the story that the plebs will never know about. With that assumption, people’s minds start filling in the blanks with all the things that are REALLY happening behind the scenes, and anyone who doesn’t assume the same is naive about how the powerful operate (just look at Twitter discourse on anything related to the British monarchy).
IRL, the vast majority of the time they are just flawed humans like the rest of us. Sometimes people take on more responsibility in life than they can realistically handle. And everyone, at various points in their lives, needs to be challenged and asked whether they’re being honest with themselves.
Yeah, there is a deep human drive both to form opinions on topics and to fit them to narratives. Most people are deeply uncomfortable admitting that they don't know, can't know, or will never know something.
As a result, random people on the street form strong opinions on everything, from Sama's psychology and internal narrative to nuclear reactor design.
The comment (the one I said would make a great tweet) is autological (as in, it describes itself). The intrinsic irony is that the author is also unaware of this, thereby serving as an example of his own point. A reader who takes it at face value would be similarly unaware.
In other words, the comment is an instance of its own definition, and a demonstration of the fallacy of opinion.
I think there’s a third level to it as well. It is also a true statement that is a contradiction of itself.
Logically, then, it is not self-consistent, which in turn makes it logical.
Is the only way out to assert, as you do, that some opinions are less valid than others? I don’t think that resolves the paradox.
Philosophers must have studied this... I understand that in math we have Gödel incompleteness, which is an axiomatic version of a similar argument.
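From what I remember of the standard presentation (a rough sketch, not a precise statement, and my own gloss rather than anything from this thread): Gödel constructs a sentence G that asserts its own unprovability, a formal cousin of the liar-paradox flavour above.

    % Sketch of the Gödel sentence, in LaTeX notation.
    % G is built so that the system itself proves the equivalence:
    G \;\longleftrightarrow\; \neg\,\mathrm{Prov}\big(\ulcorner G \urcorner\big)
    % If the system proved G, it would be proving a sentence that
    % asserts its own unprovability -- a contradiction. So, if the
    % system is consistent, it cannot prove G; G is true but unprovable.

The escape from outright paradox is that G talks about provability rather than truth, which is why you get incompleteness instead of inconsistency.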
I don't know anything about Gödel incompleteness, but of course some opinions are more or less "valid" than others. Some topics are also closer to or more distant from the individual. There is no reason to assume equality.
I can have a valid, well-informed, and even actionable opinion about what my wife would like for dinner.
Conversely, my opinions on what happened behind closed doors between PG and SamA in 2019 aren't informed, actionable, or generally useful.
To the extent my post could be seen as critical of opinions (which is, I think, where you find the irony), it wasn't critical of holding just any opinion, but of holding a certain class of poorly formed and completely unnecessary ones.
From the outside you might think that this itself is useless navel-gazing, but I have found it to be actionable. It has helped me to question some of my own compulsive behaviors.
I'm more interested in the metaphysics of the comment itself. It clearly exists in some ontological universe, because it can be constructed, but it cannot apply to itself, because self-evaluation renders it false. So it is de facto meaningless; except it is not actually meaningless, because in a universe where "the only thing we know is that we don't know anything" we would now know two things. I guess that means we've learnt something?
Anyway, there is some connection to AI and AGI in this that is worth exploring...
Complex effects of mundane decisions become compelling stories of good and evil, involving heroes and villains.
People don't really want to litigate these things in particular; they use them as a proxy for their own personal feelings about the effects of AI, or even about capitalism itself.