The comment (the one I said would make a great tweet) is autological: it describes itself. The intrinsic irony is that the author is also unaware, thereby serving as its own example. A reader who takes it at face value would be similarly unaware.
In other words, the comment is a construct of that definition, and a demonstration of the fallacy of opinion.
I think there’s a third level to it as well: it is also a true statement that contradicts itself.
Logically, then, it is not self-consistent, which also makes it logical.
Is the only way out to assert, as you do, that some opinions are less valid than others? I don’t think that resolves the paradox.
Philosophers must have studied this... I understand that in math we have Gödel incompleteness, which is an axiomatic version of a similar argument.
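For reference, here is a standard informal statement of Gödel's first incompleteness theorem (my addition, not part of the thread), which formalizes exactly this kind of self-referential trap:

```latex
% Gödel's first incompleteness theorem (informal statement):
% for any consistent, effectively axiomatized theory T
% capable of expressing elementary arithmetic, there is a
% sentence G_T such that
T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T
% where G_T informally asserts "G_T is not provable in T" --
% a formalized cousin of the self-referential comment above.
```

The analogy is loose: Gödel's sentence is carefully constructed to be undecidable rather than contradictory, whereas the liar-style comment here asserts its own falsity outright.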
I don't know anything about Gödel incompleteness, but of course some opinions are more or less "valid" than others. Some topics are also closer to or more distant from the individual. There is no reason to assume equality.
I can have a valid, well-informed, and even actionable opinion about what my wife would like for dinner.
Conversely, my opinions on what happened behind closed doors between PG and SamA in 2019 aren't informed, actionable, or generally useful.
To the extent my post could be seen as critical of opinions (which is, I think, where you draw the irony from), it wasn't critical of holding just any opinion, but of holding a certain class of poorly formed and completely unnecessary opinions.
From the outside you might think that this itself is useless navel-gazing, but I have found it to be actionable. It has helped me to question some of my own compulsive behaviors.
I'm more interested in the metaphysics of the comment itself. It clearly exists in an ontological universe, because it can be constructed, but it cannot apply to itself, because self-evaluation renders it false. So it is de facto meaningless. Yet it is not actually meaningless, because in a universe where "the only thing we know is that we don't know anything," we would now know two things. I guess that means we've learned something?
Anyway, there is some connection to AI and AGI in this that is worth exploring...