PG and I are assuming a measurable outcome, which the selection process is explicitly supposed to predict.
I made no claims about practicality - right now all I have is a little bit of measure theory showing that PG's algo is, in principle, fixable. I fully agree that the First Round Capital data he cites is inadequate (and also wrong, due to the unjustified exclusion of Uber, which they explicitly note would alter the results).
My concrete claim: PG's idea for a statistical test is solid; I can (and shortly will) prove a toy version works, and given enough work one can probably cook up a practical version for some problems.
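To make that concrete, here is a minimal sketch of the kind of toy version I have in mind (not PG's actual procedure, and not the measure-theory argument itself): two groups with identical quality distributions, a selection bar set higher for one of them, and a check on whether the selected members of the disadvantaged group outperform on the measurable outcome. The thresholds, sample sizes, noise level, and the use of a Welch t-test are all my own illustrative choices.

```python
# Toy version of the bias-detection idea: if selection is biased against
# group B, the members of B who do get selected should outperform the
# selected members of A on the measurable outcome.
# All parameters below (group sizes, thresholds, noise) are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N = 50_000                        # applicants per group
quality_a = rng.normal(0, 1, N)   # latent quality, identical distributions
quality_b = rng.normal(0, 1, N)

# Biased selection: group B must clear a higher bar than group A.
selected_a = quality_a[quality_a > 1.0]
selected_b = quality_b[quality_b > 1.3]

# Measurable outcome = latent quality plus noise (the outcome the
# selection process is supposed to predict).
outcome_a = selected_a + rng.normal(0, 1, selected_a.size)
outcome_b = selected_b + rng.normal(0, 1, selected_b.size)

# One-sided Welch t-test: do selected B applicants outperform selected A?
t, p = stats.ttest_ind(outcome_b, outcome_a, equal_var=False,
                       alternative="greater")
print(f"mean outcome A={outcome_a.mean():.3f}  "
      f"B={outcome_b.mean():.3f}  p={p:.2g}")
```

Under these assumptions the disadvantaged group's selected applicants come out ahead and the one-sided test flags it; setting both bars to the same value is the obvious sanity check that the effect disappears.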
"Your idea isn't 100% perfect right out of the gate" is a very unfair criticism. Are we supposed to nurture every idea in complete secrecy until it is perfect?
OK I missed that you meant "easily fixed" in the strictly mathematical sense, not in the practical, real-world application sense.
With statistics on human affairs, 99% of the hard part is not the math; it is applying that math to complicated, heterogeneous, and difficult-to-measure underlying phenomena. And in most cases, statistics alone will never give you a straight answer; the best they can do is supplement and confirm qualitative observations. Failing to recognize this is how you get all those unending media reports about how X is bad for your health. PG's post was at the level of one of those junk health news articles.
And because human affairs are hard, we should criticize anyone who dares to voice an idea they haven't fully figured out yet.
This idea that statistics can only confirm and supplement "qualitative observations" (i.e. my priors) is completely unscientific and anti-intellectual. If that's true, forget stats - let's just write down the one permitted belief on a piece of paper and not waste resources on science. Science is really boring when only one answer is possible.
Since when is investing in startups a science? What is anti-intellectual, and anti-scientific, is using the wrong tool for the job. Human affairs are not a science in the way that physics is a science. Statistics are far, far more fraught because there are so many variables in play, phenomena are hard to quantify, each case is so heterogeneous, etc. You cannot use statistics in human affairs without also having a very good observational understanding of what is actually going on; otherwise you will end up in all sorts of trouble.