I don't think this is a good analogy. No one "does mathematics" in ZFC. Mathematicians don't express their reasoning in set-theoretic axioms; they do so in common language proofs. You can see this by opening any graduate textbook or journal.
Mainstream mathematicians are not being prevented from proving theorems because they lack "better ways of reasoning about equality of types."
The importance of ZFC is that it ensures our common language reasoning isn't incoherent (among other things). Once you've done that, you don't need to do it again.
Mathematicians use implicit DSLs, approximately embedded in natural language, to write proofs. Formalizing these DSLs in ZFC is a nightmare because its axioms are arcane and far from common mathematical discourse (much as assembly is far from the languages programmers actually write in). On the other hand, the type formers and rules of (say) MLTT are natural generalizations of the rules of ordinary logic.
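To make that last point concrete, here is a minimal Lean 4 sketch (the names are my own, purely illustrative) of the propositions-as-types reading, where MLTT's type formers just are the familiar rules of logic:

```lean
-- Implication introduction is writing a function; modus ponens
-- is function application.
def modus_ponens {P Q : Prop} (h : P → Q) (p : P) : Q := h p

-- Conjunction is a product type: its intro/elim rules are
-- pairing and projection.
def and_comm' {P Q : Prop} (h : P ∧ Q) : Q ∧ P := ⟨h.2, h.1⟩

-- Disjunction is a sum type: ∨-elimination is case analysis.
def or_comm' {P Q : Prop} (h : P ∨ Q) : Q ∨ P :=
  match h with
  | .inl p => .inr p
  | .inr q => .inl q
```

Nothing here requires translating "and" or "or" into set-theoretic encodings; the logical rules are the primitive operations of the theory.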
> Mainstream mathematicians are not being prevented from proving theorems because they lack "better ways of reasoning about equality of types."
How do you know? I've made the mistake of assuming that the definition of a certain morphism is natural with respect to a certain object when it wasn't (coevaluation in a rigid monoidal category). Better language for equality and naturality is precisely what HoTT provides. More importantly, the hope is that type theory will not only provide better language, but relieve us of the tedium of checking proofs by hand.
> The importance of ZFC is that it ensures our common language reasoning isn't incoherent.
I don't think this is true. AFAIK, no one has written down the definition of (say) a scheme from first principles in ZFC. In fact, if you care about coherence/verification, you should care even more about type theory. IIRC this is what drew Voevodsky to type theory in the first place -- https://mathoverflow.net/questions/234492/what-is-the-mistak...
I don't think you should regard ZFC's purpose as proof verification. If you want verification, by all means use something else, as discussed in the link in my original post (which also addresses your scheme point – that's a problem common to most approaches so far!).
If HoTT helps you prove theorems, great. In particular I could see it being of interest to people working in higher category theory (and such people have responded in other comments here!). My remarks were meant for the other 99% of mathematicians – most of whom go their entire lives without ever writing the word "category" in a paper.
We've hit the comment depth limit, so I'll reply here.
> it's not clear to me why you necessarily want the language you use to rigorously investigate the foundations of mathematics to be the same one you use to do proof verification.
Among other things, I want theorems about foundations to be useful to non-foundational mathematicians. Working on type theory can improve, at the very least, formal verification tools. In my view, traditional ZFC-based mathematical logic has a much smaller likelihood of being useful for regular mathematicians. For example, the vast majority of mathematicians will never explicitly assume the existence of a large cardinal in a paper. It seems to me that the difference in usefulness is probably due to their relative distance from common discourse.
Thanks for the interesting discussion. My main point of disagreement re:foundations is that in my view a central criterion for a foundational system is easy translation of common mathematical discourse into formal proofs. Otherwise, it seems that the coherence of ordinary mathematical discourse must be an article of faith.
I don't have time to explain in detail at the moment, but I recommend reading about the history of set theory and the problems that the introduction of ZFC was meant to solve. I think you'll agree that they were both important and have nothing to do with proof verification.
More generally, it's not clear to me why you necessarily want the language you use to rigorously investigate the foundations of mathematics to be the same one you use to do proof verification. ZFC – and set-theoretic approaches more broadly – facilitate proving things about mathematics (meta-mathematical theorems), and for that they are very useful. E.g., the strength of a theory is equivalent to the size of the cardinals it assumes, loosely speaking [1].
How is this not partial evidence that ZFC is trash?
> Mainstream mathematicians are not being prevented from proving theorems because they lack "better ways of reasoning about equality of types."
Mainstream mathematicians do run the risk of being siloed in narrow specialties, because abstractions are needed to distill all the definitions of remote areas of mathematics.
Saying HoTT is a failure because people do informal proofs is rather moving the goalposts; people won't do formal proofs until the tooling is really good. But none of the up-and-coming tools are classic first-order logic + ZFC (please don't mention TLA+), and there's a reason for that.
> The importance of ZFC is that it ensures our common language reasoning isn't incoherent (among other things). Once you've done that, you don't need to do it again.
No, you need formal proofs to check that there aren't errors along the way. And the lack of formal proofs slows down new, weird stuff like the ABC conjecture proof candidate.
I don't expect this to be solved now (tooling, as per above), but mathematicians should learn more category theory now, as that works just fine with pencil, paper, and brain. When the type-theoretic tooling is ready, they will be ready.
> How is this not partial evidence that ZFC is trash?
ZFC was never intended as a language for practically doing mathematics in, so that's a weird criticism.
> Mainstream mathematicians do run the risk of being siloed in narrow specialties, because abstractions are needed to distill all the definitions of remote areas of mathematics.
Mathematicians are siloed in narrow specialties because they focus on solving hard problems that require a great deal of specific subject-matter expertise. I strongly doubt that "distill[ing] all the definitions of remote areas of mathematics" is even possible, or a fruitful project to attempt. Could you give an example of how definitions from, say, probability theory and algebraic geometry might be merged in a useful way?
> No, you need formal proofs to check that there aren't errors along the way. And the lack of formal proofs slows down new, weird stuff like the ABC conjecture proof candidate.
I think this comment misunderstands the problem with the ABC "proof," which is (speaking roughly) that a certain lemma asserted something that was crucial to the argument, but which was never actually proved. Peter Scholze and others spotted this pretty quickly. Formalization wouldn't really have helped at all.
> ZFC was never intended as a language for practically doing mathematics in, so that's a weird criticism.
Isn't that just the point? Currently we do metamathematics in an esoteric language where your normal mathematical tools won't always have the meaning you expect them to have - HoTT is about enabling us to do it in something much closer to normal mathematics.
> Could you give an example of how definitions from, say, probability theory and algebraic geometry might be merged in a useful way?
A lot of category theory is "this theorem from this context can be translated to this theorem in this other context". Do I know any specific equivalences between probability distributions and multivariate polynomials? No. But I wouldn't be at all surprised if we found some. At one point the idea that arithmetic could tell us something about geometry was a surprising and novel one.
> I think this comment misunderstands the problem with the ABC "proof," which is (speaking roughly) that a certain lemma asserted something that was crucial to the argument, but which was never actually proved. Peter Scholze and others spotted this pretty quickly. Formalization wouldn't really have helped at all.
The fact that it was disputed for at least two years whether that lemma was or wasn't valid is precisely why formalisation would have been helpful.
> No one "does mathematics" in ZFC.
How is this not partial evidence that ZFC is trash?
What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?
> I don't expect this to be solved now (tooling, as per above), but mathematicians should learn more category theory now as that works just fine pencil paper and brain. When the type theoretic tooling is ready they will be ready.
The abstractions of category theory are useless in many areas of mathematics. Prime example: PDEs.
Laypeople think that category theory is the 'ultimate math' because they hear that it provides bridges or analogies between different areas of math.
Perhaps programmers are especially prone to this because category theory does have some applications to programming.
The thing is, almost all of pure math is itself a bridge between different areas of math. Some of these areas are bridged by category theory; some are bridged by other kinds of math, which have less catchy names.
> What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?
Yes it is. Turing machine models are very limited, and a programme to let us achieve the things we can do with Turing machines (mainly runtime analysis) with a better model (i.e. a lambda-calculus style model) is a very good idea.
> The thing is, almost all of pure math is itself a bridge between different areas of math. Some of these areas are bridged by category theory; some are bridged by other kinds of math, which have less catchy names.
I'd be equally interested in a programme of doing metamathematics in some non-category-theoretic model that was still "normal" mathematics in the same way that category theory is (and ZFC isn't). But I'm not aware of any such competing effort.
> What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?
> Yes it is. Turing machine models are very limited, and a programme to let us achieve the things we can do with Turing machines (mainly runtime analysis) with a better model (i.e. a lambda-calculus style model) is a very good idea.
What you wrote is a different justification for why turing tapes are worse than lambda calculus. It has nothing to do with the number of people programming on turing tapes, which is the argument that I was responding to.
I could easily have used 'lambda calculus' instead of 'turing tape' above. Most people do not code in the lambda calculus. They write haskell or javascript or whatever.
It doesn't mean that the lambda calculus is trash.
Likewise, most mathematicians don't work directly with ZFC. Doesn't mean ZFC is trash.
> I'd be equally interested in a programme of doing metamathematics in some non-category-theoretic model that was still "normal" mathematics in the same way that category theory is (and ZFC isn't).
My point is that almost all pure math (e.g: linear algebra, topology, differential geometry, category theory, group theory) is already metamathematics. Of course, there is a spectrum of 'meta-ness' but I think this is a continuous spectrum. I do not think there is a well-defined division between 'mathematics' and 'metamathematics'.
For example, can you give an argument for why, say, the irrationality of sqrt(2) is not 'metamath', yet godel's incompleteness theorem is 'metamath'?
> What you wrote is a different justification for why turing tapes are worse than lambda calculus. It has nothing to do with the number of people programming on turing tapes, which is the argument that I was responding to.
It has everything to do with it: the reason for wanting a lambda-calculus-like model is that lambda-calculus-like models are what working programmers actually program in. If programmers actually used languages that looked like turing tapes then turing tapes would be a good model for talking about programming in.
> I could easily have used 'lambda calculus' instead of 'turing tape' above. Most people do not code in the lambda calculus. They write haskell or javascript or whatever.
Haskell has been described as essentially a typed lambda calculus. You're treating this as a binary distinction when it isn't: there's a lot of value in the model that we can do formal program analysis with being close to the models that we like to program in, whether the models are exactly identical is a lot less significant than the degree of similarity. Likewise the problem that "mathematicians don't work in ZFC" isn't just that mathematicians are doing something slightly different day-to-day, it's that it's a very different paradigm from normal mathematics.
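To illustrate the point about closeness of models (a toy sketch; GHC's actual core language is a System-F-style calculus, which this only gestures at): ordinary Haskell definitions are already typed lambda terms.

```haskell
-- Each definition below is a typed lambda abstraction; Haskell's
-- surface syntax is a thin layer over a typed lambda calculus.

-- The identity combinator: \x -> x, at type a -> a.
identity :: a -> a
identity = \x -> x

-- Composition: \f -> \g -> \x -> f (g x).
compose :: (b -> c) -> (a -> b) -> a -> c
compose = \f g x -> f (g x)
```

Evaluating `compose (+ 1) (* 2) 10` is just beta-reduction, giving `21`. The formal model and the working language are separated by a matter of degree, not of paradigm, which is exactly the situation ZFC is not in.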
> My point is that almost all pure math (e.g: linear algebra, topology, differential geometry, category theory, group theory) is already metamathematics. Of course, there is a spectrum of 'meta-ness' but I think this is a continuous spectrum. I do not think there is a well-defined division between 'mathematics' and 'metamathematics'.
> For example, can you give an argument for why, say, the irrationality of sqrt(2) is not 'metamath', yet godel's incompleteness theorem is 'metamath'?
I'd argue that irrationality of sqrt(2) is applicable outside of a mathematical context - it's a fact about something we're modelling rather than solely a fact about our models ("2" and "sqrt" are of course abstract models, but they can be applied to model a variety of concrete things that we care about, and you can carry over the irrationality of sqrt(2) into at least some of those contexts, where it will translate into something meaningful and useful). Whereas godel's incompleteness theorem is a map for which there is no territory; it's a fact about abstract models that could never correspond to anything that wasn't an abstract model.
But if you want to regard number theory as a subset of metamathematics then I don't mind. When I say I want to be able to do metamathematics, I mean I want to be able to do all metamathematics; in particular I want to be able to talk about proofs in general. You could argue that the irrationality of sqrt(2) is a statement about proofs, but it's certainly not in a context that allows you to reason about general proofs, and number theory does not give you a first-class way to work with proofs in general (of course the Godel encoding exists, but it's extremely tedious and not useful for practical work). Likewise, as far as I know, there's no way to really talk about (general) proofs directly in terms of group theory or linear algebra.
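For comparison, in a type-theoretic proof assistant proofs really are first-class. A small Lean 4 sketch (the names are illustrative):

```lean
-- In type theory, a proof of the proposition P is simply a term
-- of type P -- no Gödel-style encoding required.
theorem three_pos : 0 < 3 := by decide

-- A function can take a proof as an ordinary argument.
def pred_of_pos (n : Nat) (_h : 0 < n) : Nat := n - 1

-- The proof object is passed explicitly, like any other data.
#eval pred_of_pos 3 three_pos  -- evaluates to 2
```

This is the "first-class way to work with proofs" that number theory, group theory, and linear algebra lack.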
> What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?
ZFC should be more like the machine code we actually use than like the Turing tape concept, which we don't. The fact that no one uses something higher-level that compiles down to ZFC is disheartening.
As the FOM mailing list demonstrates, it's really about goal posts here.
For one camp, the goal posts are such that ZFC or many other things are equally good. ZFC, by sheer age, has the "large cardinal" advantage in that people have been grinding away at it longer.
For the other camp, large-cardinal-type research agendas aren't very interesting, and the goal posts are dramatically different.
I still think ZFC is trash, but I will admit my mistake in thinking others share my goal posts.
> The abstractions of category theory are useless in many areas of mathematics. For example, PDEs.
At the moment that is true.
But I think this is more due to human concerns than the actual math. Until statistics overtook it, differential equations was the most-applied branch of mathematics, which definitely influenced the culture around it. There is also the general cultural divergence between algebraists and analysts.
I look forward to the day when the computer tools are so good they are used in those fields too. That should bridge the culture gap, and then we shall see what the math holds.