I don't understand his point about First Round Capital showing their female founders did better than companies without female founders. What does that show? How do we know that female founders aren't simply better? Or maybe women are scared of applying, so out of women, only the best apply? In that case, the mere idea that there is a bias can cause "pre-selection" bias.
I lack the mathematics to prove this, but it seems that on the face of it, pg is simply wrong. Or I'm misreading terribly.
Tangentially: Speaking of bias, why doesn't YC publish information on their companies' tech choices? PG racked up a lot of inferred cachet (positive) by stating that use of Lisp gave them a huge advantage. Now that YC has data, they should be able to show how choice of technology correlates to performance.
It's certainly possible that the observed bias isn't the fault of First Round Capital's selection process if the applying populations are different -- the "pre-selection" bias you're talking about.
If that's the case, First Round Capital could profitably benefit from encouraging more female founders to apply.
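To make the pre-selection story concrete, here is a minimal simulation sketch. It assumes founder "quality" is Gaussian and drives later performance; the means, threshold, and the `funded` helper are all illustrative assumptions, not anything from the thread. If self-selection means only the strongest women apply, a single unbiased funding bar still makes the funded women outperform the funded men:

```python
import random

random.seed(0)

# Hypothetical setup: men apply broadly (quality mean 0.0), while
# self-selection means only stronger women apply (quality mean 0.5).
# The investor applies ONE common, unbiased threshold to everyone.
THRESHOLD = 1.0

def funded(applicant_mean, n=100_000):
    """Average quality of applicants who clear the common bar."""
    apps = [random.gauss(applicant_mean, 1.0) for _ in range(n)]
    accepted = [q for q in apps if q > THRESHOLD]
    return sum(accepted) / len(accepted)

print(funded(0.0))  # avg quality of funded male founders
print(funded(0.5))  # avg quality of funded female founders: higher,
                    # despite zero bias in the selection step
```

So the observed outperformance alone can't distinguish a biased selection process from a biased applicant pool.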
The argument is that First Round Capital must have implicitly made it harder for female founders to get funding, since the female founders who do get funded perform better. The rational course of action for First Round Capital would be to lower their threshold for female founders (or, equivalently, raise the threshold for male founders) until female-founded companies perform neither better nor worse than male-founded ones.
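This threshold argument can also be sketched with a small simulation. Assume (purely for illustration) that both groups draw founder quality from the same Gaussian, and the only difference is the bar applied to each group; `avg_funded` and the specific thresholds are made up for the example:

```python
import random

random.seed(1)

def avg_funded(threshold, n=200_000):
    """Average quality of applicants clearing a given bar.
    Both groups share the same quality distribution (mean 0, sd 1);
    only the threshold differs between them."""
    apps = [random.gauss(0.0, 1.0) for _ in range(n)]
    accepted = [q for q in apps if q > threshold]
    return sum(accepted) / len(accepted)

# An implicitly stricter bar for female founders means the funded
# women outperform the funded men, even with identical populations.
print(avg_funded(1.5))  # female founders, stricter bar: higher avg
print(avg_funded(1.0))  # male founders, laxer bar: lower avg
# Equalizing the two bars would make the performance gap vanish.
```

Note this produces the same observable outcome as the pre-selection story above, which is exactly why the raw performance gap is ambiguous.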