title = "RC Week 9: Parallels of Proofs and Programs"
tags = ["recurse-center"]
I have three weeks left at Recurse Center.
This last week was significantly less productive for me than usual, because I've been pretty fatigued and just recovered from a cold.
But I still got some work done that I'm proud of.
More than that, I'm excited for the coming three weeks!
I was fatigued for most of the week, so I didn't do very much coding.
In spite of that, I made some really good progress on [IsabellaDB](https://sr.ht/~ntietz/isabella-db/) through some pairing sessions!
A friend reminded me that a few years ago I was *deeply* skeptical of pair programming (I knew it worked for some people, but I was convinced I was not one of those people).
This week cemented what I learned earlier in batch:
Pair programming is a highly effective tool for getting work done.
It's not an all-the-time thing for me, and it's highly dependent on having the right pair for the right problem, but it's a great time.
Through pairing this week, I was able to finish both a basic move explorer (show the list of legal moves, click one to make that move) and my sparse bitmap implementation.
This lays the groundwork for the more interesting features I am building with IsabellaDB.
Next up is **displaying win/loss/draw percents** in an opening tree so you can explore openings.
After that, I'll build some **filters** to explore openings for a certain subset of games (played in the last 12 months, etc.).
And then after that, I'll generalize it to be a **query engine over all the games** so you can do things like search for sequences of positions (want to see how often the Caro-Kann transposes into a French Defense?) or features of positions/games (want to find all the [Botez Gambits](https://www.chess.com/terms/botez-gambit-chess)?).
When I wasn't feeling up to coding this week, I dove into exploring Coq (a proof assistant) and Idris (a functional language with dependent types) more.
Right now, I'm getting a lot of energy from exploring this more mathematical side of programming.
I'm not sure it'll be sustained energy, but it's really exciting and fun to explore!
In particular, theorem proving with Coq is just kind of a fun puzzle game, and it's addictive once you get the hang of it and the difficulty is at the right level.
If the proofs are too hard, you can't really get going in a flow sort of way.
But if they're *just* hard enough to be engaging but feasible, then it's so delightful and pulls me in.
These two activities—systems programming and theorem proving—came together in a very nice way this week.
Over the last two weeks, working through proofs, there were a few occasions where they got pretty difficult.
To get through them, I've been using two general techniques:
- **Break the problem into subparts recursively.** For a proof, this typically means one of two things. For a particular statement, you can break it down into its cases (a boolean can be true or false, so consider each of those independently). And for a longer proof, you can find an intermediate lemma which you can prove to make the later work easier.
- **Update definitions to support your proofs.** Sometimes a definition is wrong and clearly needs to be reworked for a proof to be possible. I ran into this where I had an edge case that didn't matter until the Final Boss proof; once I fixed my definition, the proof went through where before it could not. In other cases, there are equivalent definitions where one will make the proof significantly easier. Usually this lets you avoid intermediate lemmas, and a proof with fewer steps end-to-end is much easier to get from start to finish.
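To make the first technique concrete, here's a small standalone Coq proof (an illustrative example, not one from my project) where `destruct` splits a goal about a boolean into its two cases, each of which is then trivial:

```coq
(* Breaking a goal into subparts: destruct b splits the proof into
   the b = true case and the b = false case. *)
Theorem double_negation : forall b : bool, negb (negb b) = b.
Proof.
  intros b.
  destruct b.
  - reflexivity. (* case: b = true *)
  - reflexivity. (* case: b = false *)
Qed.
```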
Both of these techniques came into play when I was working on my sparse bitmap implementation, as well.
The first thing I realized was that the way I defined it was not ideal for combining multiple bitmaps.
The definition worked and felt elegant, but it was very awkward and hard to reason about when iterating over two bitmaps in parallel.
In a pairing session, we changed the definition to an equivalent (but simpler) implementation.
This required changing most of the methods implemented on the bitmaps, too, since they relied on the underlying details.
But at the end of the day, it was worth it:
The implementation of the bitwise operators became so much **easier to reason about and therefore more likely to be correct**.
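As an illustration of why the representation matters, here's a minimal sparse bitmap sketch in Rust (a hypothetical design, not the actual IsabellaDB code): storing only the non-zero 64-bit chunks in an ordered map makes combining two bitmaps a simple merge over chunk keys, instead of an awkward parallel iteration over two structures.

```rust
use std::collections::BTreeMap;

/// A sparse bitmap: only non-zero 64-bit chunks are stored,
/// keyed by chunk index.
#[derive(Default)]
struct SparseBitmap {
    chunks: BTreeMap<u64, u64>,
}

impl SparseBitmap {
    fn set(&mut self, bit: u64) {
        *self.chunks.entry(bit / 64).or_insert(0) |= 1u64 << (bit % 64);
    }

    fn get(&self, bit: u64) -> bool {
        self.chunks
            .get(&(bit / 64))
            .map_or(false, |&chunk| chunk & (1u64 << (bit % 64)) != 0)
    }

    /// With chunks in a map, union is just a merge over keys:
    /// OR each chunk from either input into the output.
    fn union(&self, other: &SparseBitmap) -> SparseBitmap {
        let mut out = SparseBitmap::default();
        for (&key, &chunk) in self.chunks.iter().chain(other.chunks.iter()) {
            *out.chunks.entry(key).or_insert(0) |= chunk;
        }
        out
    }
}

fn main() {
    let mut a = SparseBitmap::default();
    let mut b = SparseBitmap::default();
    a.set(3);
    b.set(1000);
    let u = a.union(&b);
    println!("{} {}", u.get(3), u.get(1000)); // both bits set in the union
}
```

The same merge-over-keys shape works for intersection and difference, which is exactly the "easier to reason about" property the rework bought.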
Recursively breaking down problems also came into play with the bitmaps.
This is a common technique in programming in general, so what I'm talking about here isn't shocking.
The surprising thing to me, though, was how much **writing my program felt like writing my proofs**.
I think it's because it gives me a sense of formalism about how to reason about my code and a mental structure to it.
At any rate, exploring proof assistants has made writing programs much easier.
That's a win.
There's a strong parallel between the activities of writing proofs and of writing programs.
The [Curry-Howard correspondence](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence) tells us that programs and proofs are directly related.
I don't understand the details of it yet, but I'll work toward that through our exploration of Coq.
What I do know right now is these activities **are extremely similar** in how I think through things and how I approach them.
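For a tiny taste of the correspondence (again a standalone Coq example): a proof of an implication is literally a function, so the program that applies a function to an argument *is* the proof of modus ponens.

```coq
(* Under Curry-Howard, a proof of (A -> B) -> A -> B is a function:
   given evidence f that A implies B and evidence a of A, produce
   evidence of B by applying f to a. Program and proof coincide. *)
Definition modus_ponens (A B : Prop) : (A -> B) -> A -> B :=
  fun f a => f a.
```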
Another Recurser presented yesterday on how doing [studies](https://en.wikipedia.org/wiki/Study_%28art%29) (in the art sense where you work through some small pieces in isolation before doing the broader composition) can be a highly effective technique for programming as well.
This makes a lot of sense and is a technique I want to try out more deliberately in the future.
In a sense, I think I'm already doing it.
What is this, if not breaking problems down recursively?
(There's a small difference of the study being something you don't intend to reuse directly.)
Is there an art equivalent of updating your definitions to support the proof?
It's sort of fascinating to see parallels between fields that I think of as typically unrelated.
Sure, proofs and programs, we've been exposed to that before.
But I'm a little bit mind-blown that there's also a parallel between *art* and programming in the form of shared techniques.
This makes me excited to explore other domains and learn how people in other domains work!
See you next week!
It'll be a short one, because of Thanksgiving.