> …especially when you get past low level algorithms and into domain logic (your point exactly). They also don't serialize well, so a database or API call with a "proof" field would be susceptible to fudging…
Nonsense.
Proving things about low-level algorithms that are full of complicated behaviors involving shared mutable state is often more difficult than proving things about high-level domain logic; regardless, people still do it, and if you use any modern CPU, especially from the ARM family, you benefit from their work.
A proof serializes just as well as any other kind of program — after all, the first step in checking a proof involves deserializing the proof, or in other words, reading and parsing its source code.
As for “fudging”: proofs are guaranteed to be correct up to the correctness of the proof checker, which can be used to recheck the proof at any time.
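The serialize-and-recheck point can be made concrete with a minimal sketch in Lean 4 (theorem name chosen for illustration): the source file itself is the serialized proof, and the kernel rechecks it from scratch whenever it is loaded, so a tampered copy simply fails to check rather than being silently trusted.

```lean
-- This file *is* the serialized proof. Loading it makes the Lean
-- kernel re-verify the proof term; a fudged copy fails elaboration.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```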
For more information, see Geuvers (2009), “Proof assistants: History, ideas and future”.
https://sci-hub.st/https://link.springer.com/article/10.1007...
> Proving things about low-level algorithms [...] more difficult than proving things about high-level domain logic
I'm not sure I buy that. Like, prove that you never double-charge a sale, or that a deactivated user can never make a purchase. Even if all that logic lives within one service, it could be separated by multiple layers of callbacks and indirection, parallel tasks, database calls scattered in between, etc. And even once you do have a proof, a single-line code change anywhere between here and there could easily force you to redo the whole thing. And that doesn't even get into the more common case of distributed services.
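For context on the kind of invariant being argued about, here is a hedged sketch (all names hypothetical) of the weakest version of the idea: encoding "a deactivated user can never make a purchase" as a witness type, so the purchase path is unreachable without first passing the check. This is far short of a machine-checked proof, but it shows the shape of the property under discussion.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class User:
    user_id: int
    active: bool


@dataclass(frozen=True)
class ActiveUser:
    """Witness type: only constructed after the active check passes."""
    user_id: int


def verify_active(user: User) -> ActiveUser:
    # The single place where "is this user active?" is decided.
    if not user.active:
        raise ValueError("user is deactivated")
    return ActiveUser(user.user_id)


def make_purchase(buyer: ActiveUser, amount_cents: int) -> str:
    # Requiring an ActiveUser (not a User) means this function cannot
    # be called for a deactivated user, assuming ActiveUser is only
    # ever built via verify_active.
    return f"charged user {buyer.user_id}: {amount_cents}"
```

The catch, per the comment above, is exactly that this discipline spans every layer, callback, and service boundary, and a one-line change can silently break it.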
> A proof serializes
We're probably talking about different things. I'm imagining something like a proof of the optimal traveling-salesman route between a bunch of cities. (Note: optimal, here; verifying that a solution is below a specified limit can be done in polynomial time, but verifying optimality cannot.) Say you want to store the answer in a database for later lookup. But the DB could get fudged, so "proof"-wise it couldn't be trusted, and anything requiring a type-level proof would have to solve the problem from scratch every time. That's what I mean by "they don't serialize well" (though some proofs, like "N is not prime", can be serialized by storing the factors). Of course you could work around it and add "assuming our database has not been fudged" to the axioms, but the second you manually update one record in your database, the house of cards comes tumbling down and you can no longer trust that any of the proofs you've worked so hard to build still apply to your system.
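The "N is not prime" aside is the serializable case: the factors are a certificate that anyone can recheck cheaply, even if the database they were stored in is untrusted. A minimal sketch (function name hypothetical):

```python
def check_composite_certificate(n: int, factors: list[int]) -> bool:
    """Recheck a serialized 'n is not prime' proof: the factors themselves.

    The certificate is cheap to verify no matter where it was stored;
    a fudged database record fails the recheck instead of being trusted.
    """
    if len(factors) < 2 or any(f <= 1 for f in factors):
        return False
    product = 1
    for f in factors:
        product *= f
    return product == n


assert check_composite_certificate(91, [7, 13]) is True   # 7 * 13 == 91
assert check_composite_certificate(91, [7, 14]) is False  # tampered record
```

The contrast with TSP optimality is that no comparably short certificate is known: rechecking the stored answer is not obviously cheaper than re-solving.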