The Fundamental Theorem of Algebra went through many "proofs" over decades before settling on one we accept as rigorous.
And rigor itself? A relatively modern requirement. Calculus was put on a rigorous basis only in the 19th century. It existed, and was applied, for centuries before that.
The real truth about mathematics is that it's way more about intuition than rigor, and that things most people believe to be true get accepted as such. It's human nature, and mathematics is a human endeavor.
Moreover, math itself resists rigor in a fundamental way, as Gödel showed us. Machine proofs? Eh. Ultimately, everything is up to humans, even the axioms. Will the computer believe in the axiom of choice?
And stuff that's too complex to be understood except by a select few? So be it. It means nobody really cares about that stuff. Maybe someone will discover a better proof later. Maybe someone will stumble upon a counterexample. Maybe nobody will touch it for a thousand years. Math will go on.
Not that all math is accessible, but good math is not limited to in-groups.
For an extreme example, Grigori Perelman's famous work is surely not accessible, and is famously incomplete. It didn't matter. The ideas were clear to enough people around the world that it didn't take long for them to spread, and for others to step up and close the gaps.
I've always said that mathematics is a form of art, perhaps with a smaller audience than most. The only criterion in mathematics is beauty: whether the work is interesting or not. Some works are lacking in that respect, as happens in every art, and will not be appreciated by future generations. And that's fine.
> And rigor itself? A relatively modern requirement.
Perhaps. I would argue that it is LONG overdue.
> Moreover, math itself resists rigor in a fundamental way, as Gödel showed us.
This merely shows that some things cannot be proved, be it by human or machine. That's not the issue. The issue is whether we should be moving towards machine-verified proofs, since "proofs" verified only by humans often turn out to be wrong.
> Machine proofs? Eh. Ultimately, everything is up to humans, even the axioms. Will the computer believe in the axiom of choice?
A computer will "believe" the axiom of choice if it is told that it is an axiom. I agree that the humans get to decide what is an axiom (at least in their system), but it should be immediately obvious exactly what axioms are being accepted, and then rigorously shown that only those accepted axioms are being used.
I can even point to a demonstration. One of the mathematical formalization tools available today is Metamath ( http://us.metamath.org/ ). In Metamath you can state the axioms you wish to accept, and then generate proofs that are automatically checked; every step must lead back to a previously-proved theorem or an axiom.
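To make that concrete, here is a minimal sketch of what a Metamath database file might look like. It is modeled loosely on set.mm's conventions, but the tiny propositional fragment and the statement names (wp, wq, wi, ax-1, ax-mp, th1) are my own illustration, not an excerpt from any real database:

    $( Declare the constant symbols and the variables we may use. $)
    $c ( ) -> wff |- $.
    $v p q $.
    wp $f wff p $.
    wq $f wff q $.
    $( Syntax axiom: if p and q are wffs, so is ( p -> q ). $)
    wi $a wff ( p -> q ) $.
    $( A logical axiom, and the modus ponens inference rule. $)
    ax-1 $a |- ( p -> ( q -> p ) ) $.
    ${
      min $e |- p $.
      maj $e |- ( p -> q ) $.
      ax-mp $a |- q $.
    $}
    $( Theorem: from the hypothesis |- p, derive |- ( q -> p ).
       Every label in the proof must be a hypothesis, an axiom,
       or a previously proved theorem; nothing else is accepted. $)
    ${
      th1.1 $e |- p $.
      th1 $p |- ( q -> p ) $=
        wp wq wp wi th1.1 wp wq ax-1 ax-mp $.
    $}

Feeding a file like this to a verifier (for instance the metamath command-line program's VERIFY PROOF command) checks every step of th1 against exactly the axioms declared above; delete ax-1 and the proof no longer verifies, because its label no longer refers to anything.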
There are several existing Metamath databases. The "set.mm" database (Metamath Proof Explorer) is based on classical logic, ZFC, and the axiom of choice. Since the axiom of choice is included as an axiom, you can use it. See here: http://us.metamath.org/mpeuni/mmset.html
In contrast, the "iset.mm" database (Intuitionistic Logic Explorer) is based on intuitionistic logic and does not include the axiom of choice. Since the axiom of choice is not included, you cannot verify a proof that tries to use it - the verifier will simply complain that the step is invalid if you try to do it. See here: http://us.metamath.org/ileuni/mmil.html
Disclaimer: I do math, sometimes.