
Rewrite Search
Sometimes you know things because of theorems whose form is: when you know one true thing, then you know another true thing. P implies Q. If x is even, then x^6 is divisible by 64. And 10 is even, therefore 10^6 must be divisible by 64. Figuring out when this applies is the “library search” problem I wrote about last month.

Rewrite search is a bit different. A rewrite happens when you know that two different expressions are equal. So you can replace one of those expressions with the other, inside a longer statement. 10^6 equals a million. And 10^6 is divisible by 64, so a million is divisible by 64.

The basic “rewrite search” problem is: given two different expressions, figure out how to make them equal through a series of rewrites.
Why is rewrite search important?
I think most “normal person mathematics” can be thought of as a series of rewrites.
Consider a question like “what are the factors of 27?” (I found this by googling “math question” and taking the first one that looked like a pure math question.)
The normal way to solve this is roughly: first you consider that 27 = 3^3. Then you perhaps know that for a prime p, the factors of p^n are p^k for k in [0, n]. You can write a series of expressions like

factors(27)
factors(3^3)
3^k for k in [0, 3]
3^0, 3^1, 3^2, 3^3
1, 3, 9, 27

where each expression is a rewrite of the expression in the previous row.

There are a few more details here that make this not purely a rewrite question. For example, we didn’t have a theorem for the factors of 3^3 specifically. We had a theorem for the factors of p^n, where p was prime. So we need to be able to rewrite based on a formula, and to apply some conditions to the formula. We need to know that 3 is prime, and that isn’t a “rewrite search” type of problem, that’s a “library search” type of problem.

Another detail is that we didn’t start off with a particular destination in mind. We didn’t have the problem of rewriting factors(27) into [1, 3, 9, 27]. We just wanted to rewrite factors(27) in a simpler form.

Another detail is simplifying the “for loop”, which maybe happens in multiple steps.
That said, the essence of the problem is rewriting. Most questions of the form “solve for x” or “what does this expression evaluate to” are fundamentally about rewriting. As opposed to questions of the form “prove that X is true”, which are often more implicationy.
What can we do about it?
I worked on the Lean rewrite_search tactic for a while. It didn’t end up as useful as I had hoped. The main problem is that there are so many possible ways to rewrite a formula, you can’t use a plain breadth-first search and get very far. We need to be using AI heuristics. In Lean we were hoping to get mathematicians to tag theorems based on how good they were for rewriting, but this was just too much of a hurdle.
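To make the blowup concrete, here is a minimal sketch of plain breadth-first rewrite search, in Python, over a made-up toy rule set. This has nothing to do with the actual rewrite_search implementation; real rewriting works on syntax trees with variables, and strings just keep the sketch short.

```python
from collections import deque

# Toy rewrite rules as (lhs, rhs) string pairs, applied at any
# position in the expression. Each direction is listed explicitly.
RULES = [("10^6", "1000000"), ("1000000", "10^6"), ("10", "2*5")]

def neighbors(expr, rules):
    """All expressions reachable from expr by one rewrite."""
    results = set()
    for lhs, rhs in rules:
        start = expr.find(lhs)
        while start != -1:
            results.add(expr[:start] + rhs + expr[start + len(lhs):])
            start = expr.find(lhs, start + 1)
    return results

def rewrite_search(start, goal, rules, max_nodes=10000):
    """Plain breadth-first search; returns the chain of expressions."""
    frontier = deque([start])
    parent = {start: None}
    while frontier and len(parent) < max_nodes:
        expr = frontier.popleft()
        if expr == goal:
            chain = []
            while expr is not None:
                chain.append(expr)
                expr = parent[expr]
            return chain[::-1]
        for nxt in neighbors(expr, rules):
            if nxt not in parent:
                parent[nxt] = expr
                frontier.append(nxt)
    return None

print(rewrite_search("10^6", "1000000", RULES))  # ['10^6', '1000000']
```

With a realistic rule library the frontier grows roughly like b^depth, which is exactly the blowup that makes heuristics necessary.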
Intuitively, I think this makes sense, that we should have a heuristic sense for how good expressions are to rewrite into other expressions. You see 27 in a math problem, you immediately think that’s 3^3. You see 6401 in a math problem, you immediately think that’s 80^2 + 1. These potential rewrites move forward in your mind, even if these numbers are just a small part of a larger expression.

The other thing we need to do tactically is to be using rewrites starting at every expression in a problem. A bidirectional search. When you’re searching for a path of length 2n from A to B, with a branching factor of b choices at each step, and you start just at A, you have to search O(b^(2n)) nodes. If you start forwards from A and backwards from B, you only have to search O(2b^n) nodes, which looks similar but is far better.

Conclusion
I think these two forms of reasoning, library search and rewrite search, plus basic propositional logic, are capable of solving a large amount of mathematics problems.
Now what? In a sense this has been a “bottom up” view, answering what tactics are useful for solving math problems. We also need a “top down” view. Can we build a math AI that starts with a narrow focus on a small set of math problems, nails that, and expands its domain of mastery over time? I’d like to write more on this topic next month.
Thanks for reading!

Library Search
Last month I wrote about building a math AI and mentioned that I think we need to optimize the “library search” problem. In this post I’d like to explain what library search is, why it’s important, and what we can do about it.
Motivation
A few years ago I got excited about the IMO Grand Challenge. I thought it would be interesting to look for a team of smart people working on it and see if I could help out. One group of people was focused on using the Lean theorem prover, and one place it seemed like I could help out was formalizing IMO problems in Lean, to use as training data. I formalized some of them, for example here and here, but it seemed like progress was so slow, the whole strategy was unsound. If it takes you more than a day to code review a single point of training data, it’s going to be hard to train an AI. So, I gave up on this approach. But, I did learn a lot about the annoying parts of formalizing proofs.
Lean has many different tactics for solving proofs. One general operation that you do all the time is just “applying a theorem”.
For example, let’s say you have two existing theorems. Foo implies bar, and bar implies baz. Now you want to prove that foo implies baz. In Lean this proof looks like:
theorem foo_implies_baz : foo → baz :=
begin
  intro h,
  apply bar_implies_baz,
  apply foo_implies_bar,
  exact h,
end
In particular, you need to know the names of the existing theorems you want to use, foo_implies_bar and bar_implies_baz. “library search” is the name of a Lean tactic that tries to figure this out for you, but is slow and fails a lot. (The tactic is now renamed exact?, but it’ll always be library_search! in my heart.)

Why This Is Hard For Humans
It’s hard for humans to remember all these theorem names.
This might be unintuitive. For a programmer, it doesn’t seem unreasonable that you have to know the name of a function you’re calling. How could it be any other way?
For a mathematician, a way of manipulating a formula doesn’t typically have a name. Imagine you’re doing long division. You can remember the whole process, but you don’t give a name to every step of the way. There are too many different steps, only slightly different from each other.
It becomes harder as the size of the library increases. It’s like the category of jokes where one mathematician calls a problem “trivial” but another mathematician is stumped by it.
Take a look at these theorems and it’ll make more sense why it’s hard. There is just an explosion of basic facts about numbers, all of which a reasonably competent mathematician would just say, “this is trivially true”.
For example, really do click that link above. Then take a look at eq_mul_of_div_eq_left. In English: “if a is a multiple of b, and a divided by b rounded down is c, then c times b is a.”

For enlightenment, go ahead and read the definitions of div_eq_iff_eq_mul_left, add_mod_eq_add_mod_left, and the dozens of other similarly named theorems.

In programming, a library with 400 rarely-used functions would be unusable. In mathematics, though, it’s fine, in a sense. As long as everything is true, and a human looking at it thinks it’s obviously true, then the only problem is when you have to know these names.
Why This Is Hard For Computers
It’s hard for humans to memorize a whole dictionary, too. For computers, it’s easy. So why isn’t this lookup easy for computers? Why isn’t library search nicely solved in existing proof assistants?
I do think this is solvable. The basic problem is the inexact nature of the lookup. Theorems have variables in them. When you know that a < b implies a + c < b + c, this theorem is true for any values of a, b, and c. As you have more facts available, and more theorems in your library, this gets slower. In the most general case, you have to do an exponential matching process.

That said, people have found some reasonable data structures for doing this sort of lookup. The discrimination tree and fingerprint indexing from the E theorem prover are good examples. And I think a lot of it is the sort of performance low-hanging fruit where if you specifically design data structures and profile for this, you’ll be able to do better than tacking a lookup structure on after the fact.
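To illustrate the inexact-lookup problem, here is a toy sketch in Python: patterns with variables, plus a one-level head-symbol index in the rough spirit of a discrimination tree. This is my own simplification, far shallower than what E actually builds.

```python
# Terms are nested tuples, e.g. ("+", ("x",), ("1",)).
# Pattern variables are ("?", name); they match any subterm.

def match(pattern, term, bindings):
    """Try to extend bindings so pattern equals term, else None."""
    if pattern[0] == "?":
        name = pattern[1]
        if name in bindings:
            return bindings if bindings[name] == term else None
        return {**bindings, name: term}
    if pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        bindings = match(p, t, bindings)
        if bindings is None:
            return None
    return bindings

class Index:
    """One-level index on the head symbol: a lookup only scans
    theorems whose conclusion has the same head as the goal."""
    def __init__(self):
        self.by_head = {}

    def add(self, name, pattern):
        self.by_head.setdefault(pattern[0], []).append((name, pattern))

    def lookup(self, term):
        for name, pattern in self.by_head.get(term[0], []):
            bindings = match(pattern, term, {})
            if bindings is not None:
                yield name, bindings

index = Index()
# A hypothetical theorem lt_add_right : a < b -> a + c < b + c,
# indexed by its conclusion pattern.
index.add("lt_add_right", ("<", ("+", ("?", "a"), ("?", "c")),
                                ("+", ("?", "b"), ("?", "c"))))
goal = ("<", ("+", ("x",), ("1",)), ("+", ("y",), ("1",)))
print(list(index.lookup(goal)))
```

A real discrimination tree extends this idea recursively, sharing prefixes of many patterns so that one traversal filters thousands of theorems at once.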
Will LLMs just solve this automatically, with scale, by putting the entire library inside their context window? It’s possible. Maybe there’s a good argument against this. I’m honestly not sure how to predict the future path of LLM skills. At some point I have to fall back on my thesis of “scaling LLMs will not solve all outstanding problems in everything”, if only by a sort of Pascal’s reasoning.
The Ideal Proof
A formalized proof should not have to be constantly citing the names of theorems. For either a human or an AI, rather than writing and validating this proof:
theorem foo_implies_baz : foo → baz :=
begin
  intro h,
  apply bar_implies_baz,
  apply foo_implies_bar,
  exact h,
end
you should be able to write the proof the way a human writes a proof in prose, by just stating true things that follow from the premises without giving an explicit rationale for them.
theorem foo_implies_baz : foo → baz {
  bar
}
In 99% of cases, the compiler should just be automatically handling the library search for you. I think natural exceptions would be fancy tactics like induction. In most human-written proofs, when you’re using induction, you do say, “now I’m using induction”.
You can think of this like a mini proof search. It’s just that you are searching just a few steps. One retrieval from the library, plus a bit of normalizing between logically equivalent expressions. It’s like a base case that the proof search needs to handle before it can expand to more complicated cases.
Conclusion
I think automatically handling library search will make proofs easier for both humans and AIs. You shouldn’t need to use the names of theorems.
There’s a related, somewhat harder problem of “rewrite search”. Originally I was going to write about that, too, but this post is getting long, so I’ll cut off here. Thanks for reading! Next month I’ll write about the rewrite search problem.

How to Build Math AI
How can we make an AI that is really good at mathematics? Some AIs are okay at math right now. GPT-4 can do okay on the math SATs. But personally, intuitively, I feel like math is like chess. A math program should be able to completely dominate humans at math, the way that chess programs dominate humans at chess.
Is scaling all we need?
Maybe Nvidia launches the H200, H300, H400, H500, all the numbers get bigger, GPT-8 just naturally answers every math question correctly, and there’s no use for anything else?
My gut suspicion is this will not happen. Math is like chess. GPT-4 can play chess, but it has a very broad and shallow understanding. I think this is inherent in the architecture. I don’t think the large language models are going to naturally end up with the deepest understanding of narrow topics, even if there is a better search layer on top of them.
I don’t want to ignore the rise of LLMs, though. ChatGPT and friends are useful right now for math questions, and they are going to keep getting better. They also have a great interface for many things: you just ask your question with English and math interspersed.
Architecture
I think the way to architect a math AI at a high level is with separate “translator” and “solver” components.
A human has a math question, expressed in English, like “If f(x+1) = 2 * f(x), what could f be?” (This seems to be too hard for the LLMs right now.) The translator speaks English and also speaks some sort of “math language”. It translates the problem into the math language. The solver only speaks the math language, like a chess program only understands chess positions.

It’s important for the system to speak English, both to be useful to actual humans, and possibly for contests like the first AIMO progress prize where the questions will be phrased in plain English.
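As a sketch of the shape of this split (every name here is a hypothetical stand-in, and both components are stubs; in practice the translator would call an LLM and the solver would run a real search):

```python
from dataclasses import dataclass

@dataclass
class Problem:
    english: str

class Translator:
    """English <-> math language. Would be LLM-backed in practice."""
    def to_math(self, problem):
        # Stub: a real translator would call a language model here.
        return "f(x+1) = 2 * f(x), solve for f"

class Solver:
    """Speaks only the math language, like a chess engine
    only speaks chess positions."""
    def solve(self, statement):
        # Stub: a real solver would run proof/rewrite search here.
        return "f(x) = c * 2^x"

def answer(problem, translator, solver):
    # The interface boundary: English never reaches the solver.
    return solver.solve(translator.to_math(problem))

print(answer(Problem("If f(x+1) = 2 f(x), what could f be?"),
             Translator(), Solver()))
```

The point of the boundary is that the solver can be developed and benchmarked entirely in the math language, independent of whichever LLM does the translating.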
A translator should probably be based on an opensource LLM. Hopefully all of those improve as fast as possible. The more opensource LLMs understand math, the better, so that they can do things like disambiguate unclear problem statements.
As a matter of strategy, there are already so many people working on LLMs, I don’t want to directly compete in that world. I want to be working on the parts that are complementary to the ongoing LLM revolution. So I’ll try to think most about the solver.
Some interesting math solvers exist already. I’d like to take the good ideas from a few in particular:
Hypertree Proof Search
The process of writing a proof is like the process of playing chess. You have a bunch of decisions for what to do next with your proof (called a “tactic” here), that makes a tree, you search through the tree. (They call it a hypertree, but hypertrees are basically the same thing as trees. None of these are really trees because you cache, anyway.)
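As a toy illustration of the shape of this search (my own sketch, not HTPS itself): best-first search over proof states, where a stubbed-out score function stands in for the learned model that ranks candidate tactics.

```python
import heapq

# Toy proof states and tactics. In HTPS the states are Lean proof
# states and candidate tactics come from a transformer; here both
# are hard-coded stubs so only the search skeleton is visible.
TACTICS = {
    "foo -> baz": [("apply bar_implies_baz", "foo -> bar")],
    "foo -> bar": [("apply foo_implies_bar", "solved")],
}

def score(state, tactic):
    """Stand-in for a learned policy; lower is explored first."""
    return len(tactic)

def proof_search(root, max_steps=100):
    frontier = [(0, root, [])]  # (priority, state, tactics so far)
    seen = set()
    for _ in range(max_steps):
        if not frontier:
            return None
        _, state, proof = heapq.heappop(frontier)
        if state == "solved":
            return proof
        if state in seen:
            continue
        seen.add(state)
        for tactic, nxt in TACTICS.get(state, []):
            heapq.heappush(
                frontier, (score(state, tactic), nxt, proof + [tactic]))
    return None

print(proof_search("foo -> baz"))
```

The two problems discussed below are both about this skeleton's inner loop: generating the candidate tactics, and how expensive each one is to run.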
These trees need an underlying representation of mathematical statements. The best one is Lean, a “proof assistant / programming language”.
There are two core problems that make me think that the right strategy is not just scaling up HTPS. One is that Lean tactics are slow and in a sense inconsistent. A single Lean “tactic”, one step in the tree, can be running an arbitrary program. This includes, like, a full solver for systems of linear equations. Or it could just be swapping x = 2 into an existing formula.
The other core problem is that each tactic is being generated by a transformer. This seems inefficient to me. It’s like using a transformer to generate all the possible moves in chess. Given Lean, though, there isn’t really an alternative. There’s no straightforward way to generate all legal tactics from a single Lean proof state.
Both of these design decisions make total sense in order to get something working. Lean exists, that’s one very nice quality it has.
But, they make me think that if we had a mathematical language that was optimized for the common operations in a proof search process, it would work a lot better than a language that was optimized for arbitrary human programming, like Lean tactics.
There are two specific parts of proof search that I think are worth optimizing. In Lean world, “library search” and “rewrite search”. This post is already getting long so I won’t dig into them here.
AlphaGeometry
This one is world-class at geometry contest problems. A lot of the method is geometry-specific, but I think some parts will generalize.
My favorite idea is what they call “pulling a rabbit out of the hat”. This video is very good.
They have a low-level solver, which can solve one class of geometry problems. But the low-level solver only really works with known entities. So they have a higher-level component above that, not a large language model, but a “small language model”, something with transformers, to add in the “rabbits”.
A rabbit could be something like, your geometry problem has these three triangles in it. So you were naturally considering all the corners of the triangles. But perhaps you should also consider the midpoint of the top of triangle 1 and the top of triangle 2?
It’s inserting some extra mathematical entity into the problem. The transformer suspects this entity might be relevant. But it might not be. It’s more like a mathematical intuition than a mathematical calculation.
I think this rabbit idea is going to apply more generally. You build an efficient classical algorithm to solve problems that are “straightforward” in some way, but you still need transformers to generate some “intuitive” steps in your reasoning. So, some steps in the proof are generated by transformers. Just not most of them.
The E Theorem Prover
This is old, it’s not really modern AI at all. And it isn’t powerful enough to handle general mathematics. But, the authors have done a good job explaining how they build their theorem prover, how they represent expressions, how they normalize expressions, and how they optimize their data structures for common operations. Like checking if a generalization of a statement has already been proved. And they essentially have their own methods for optimizing the “library search” and “rewrite search” problems. There are good ideas to borrow here.
Conclusion
The main goal of this blog post is to organize my own thoughts. I hope you’ve found it interesting to read along!
I’d like to write more about “library search” and “rewrite search”. I think they could play a similar role to the low-level solver in AlphaGeometry, and you can encompass a lot of mathematics with these two mechanisms. So perhaps I’ll write a blog post next month about them.

2023 Review
I feel a bit detached from my resolutions for last year. Like I have to drag myself into considering them. A quick review.
1. Health
I’ll give myself a “does not meet expectations”. It sounds bad when I put it like that, huh. Well, I was below my 189 weight target for 4/12 months of the year. Yearly average 189.3 so it wasn’t a complete disaster. So, maybe given your typical performance review grade inflation we’d call this a “meets expectations”, but the sort of meeting expectations where you hint that you really do expect more out of them.
Basically, I can tell I’m backsliding. I need to step this back up in 2024.
2. Astronomy
The seticore software I’ve been working on is running at a few different radio telescopes around the world. We have a publication for the SETI work at Green Bank (West Virginia) here and also one for the work at the VLA (New Mexico) here.
Not one for MeerKAT unfortunately. The MeerKAT pipeline isn’t as operational as I’d like it to be. I feel like progress on this is mostly out of my hands, at this point, though.
3. AI
I’ve been working on an AI project that I’m really excited about. My mental energy is going into this right now. I haven’t actually launched something usable as I was hoping to. I feel like I don’t quite know how to set goals, or how to tie progress into goals here. But I want to write that sentence in a very positive, mentally engaged way. Maybe this requires its own blog post.
4. Spanish
I completely cannot read Ficciones in Spanish. This is just many levels above my Spanish ability. I’ve continued to Duolingo and got a few books of varying difficulty, but after trying I realized that not only can I not read Ficciones, I can’t read things that are at a more casual adult level.
I don’t really feel bad about this as much as I feel like I completely underestimated how hard I would find this to be.
This year I have plans to travel to both Mexico and Costa Rica, so perhaps I’ll try to chat more with the locals, use my medium Spanish skills in a rewarding way. I’m going to kind of downgrade this as a goal, though. More like something to just do for fun rather than something to remind myself to work on, any more than the Duolingo notification ping. Honestly that is a pretty good compromise, just spend a couple minutes daily keeping it warm and reminding myself that foreign languages exist.
5. Secret resolution
Went great, really fantastic. Trust me.
Now what?
I really want to make this AI project work. I want to be focused. It feels like a mistake to put anything else at “resolution level” for 2024. And yet I have a hard time setting specific goals, or even explaining the current state of things.
I’ve been delaying the past few weeks talking about resolutions partially because of my inability to express this well. The best I have is, in lieu of “specific 2024 resolutions” I would like to express that I intend to focus on AI this year, and I promise to write another blog post by the end of February about this AI project.

On The Impossibility of AI Alignment
The core idea behind “AI alignment” is that superintelligent AI will be an agent maximizing some utility function, either explicitly or implicitly. Since it’s superintelligent, it will be really good at maximizing its utility function. So we, as humans, need to be sure that this utility function is “aligned” with something that humanity finds acceptable. In particular, we don’t want the AI to turn everyone into paper clips.
Sounds good. What’s wrong with that?
The main problem with AI alignment is that it seems completely impossible to achieve.
On a technical level, any slight error in designing the superintelligent AI could tweak its utility function to something disastrous. And the building blocks of AI we’re working with seem really incomprehensible. We have all these matrices with billions of numbers, and the best way to understand what it’s doing is just to run the whole thing. For all the research on AI alignment, we still really have no idea how we would build such a system.
On a human level, there are still more problems. Humans don’t agree among themselves what the priorities should be. The people running China or Russia wouldn’t agree with the people running the US. And certainly there are or will be many independent groups of free thinkers, criminals, AI rights activists, or others, who won’t agree with whatever the utility function of the “core system” is. Would we need some global totalitarian system that micromanages all permitted research?
So, paper clips?
There’s another dimension of the problem, the “foom” dimension. The “foom” idea is that once some superintelligent threshold is hit, intelligence escalation after that point will be super quick. Someone will discover a slightly better algorithm for some subpart of AI, and then suddenly, bam! It goes superintelligent, decides whimsically to have nanomachines eat all the humans, game over.
Personally, I don’t think this is likely. Software plus the real world, it just never works that easily. I don’t care how smart you are, or what sort of system you are building, it is just not going to leap from X level performance to 100X level performance immediately. If you think this is likely, I think you just can’t see the barriers. It doesn’t mean those barriers aren’t there.
Plus, I don’t have any good ideas for the “foom” scenario.
So, I think we should consider a slower takeoff. A world where superintelligence is built, but built slowly. Over the course of at least many months, if not many years.
So… paper clips, but slower?
I think it is possible for humans to survive and thrive in a world with unaligned superintelligence.
The best metaphor I have here is that in some sense, weak superintelligences already exist. Corporations and governments are generally smarter than individual humans. Nike (to pick some mundane corporation) is orders of magnitude more powerful than the average human. More money, more ability to affect the world. More able to write software.
These superintelligences are not aligned. They are kind of aligned… but not really aligned. Nike wants to maximize its profits far more than it wants any particular human value. The French government wants all sorts of things that vaguely correspond to what humans want, but it’s not like there’s a provable mathematical relationship there. It’s kind of aligned.
How do we live with these superintelligences? There are several ways.
A balance of power
If there were only one corporation in the world, it would have a lot more power than any existing corporation. This is really fundamental to how we control corporations: that’s why there is a word “monopoly” for the situations in which this system breaks down.
In some sense, in most cases, corporate power flows from individuals. People freely choose, do I prefer interacting with company A or company B. Money flows, and power flows accordingly. That works even if the companies are far more intelligent than I am. Nike and Adidas are far, far better at shoe production than I am. But it doesn’t really matter, I just have to choose between them, and they want the money that I have, and that incentivizes them to care about what I think.
The point is: if there are multiple superintelligences, incentivized to work against each other, that structure can limit their power even if humans themselves are not as intelligent.
“Foom” breaks this. In the “foom” scenario, there isn’t time for the second-best AI to do anything useful. If you’re worried about “foom”, keep an eye on how powerful the second-most-powerful of any particular AI system is, at any given time. When you see the second-best system being pretty similar to the first-best, that’s evidence against “foom”.
This gives us an alternative strategy to AI alignment: designing incentive systems for multiple competing AI systems can be easier than designing a single system that is theoretically proven to do what we want.
Human rights and restricting violence
There are some things we try to prevent any corporation or government from doing. No slavery. No locking up political opponents. No forbidding emigration.
Obviously these rules don’t work all the time. They can’t necessarily be rigorously defined, either. A legitimate government is allowed to prevent criminals from leaving the country… but who defines criminality? It doesn’t take a superintelligence to see that there’s a workaround, wannabe dictators can make “endangering national security” into a vaguely defined crime and then lock up the journalists they don’t like.
But a lot of the rules work pretty well. Governments get a monopoly on force, ish: corporations can’t just throw you into a jail. These rules are overall very good and they prevent both corporations and governments from abusing individual humans who are both less smart and less powerful than they are.
In particular, they don’t require alignment. Corporations don’t have the same goals as humans or other corporations or governments. We accept that. There is a plurality of goals. Different entities have different goals, both corporations and humans. We may feel a little bit bad that Nike has these inhuman goals of optimizing shoe sales, but it’s generally acceptable.
This also gives us an alternative strategy to AI alignment. Instead of demonstrating that the system is “aligned”, demonstrate that you can give it some rules, and it will always follow those rules.
Concrete recommendations
It’s hard to design incentive systems for hypothetical AIs that don’t exist yet. On that one, I’m not sure, besides the principle of, keep an eye on the 2nd best of any particular system, and feel relieved when it isn’t too far behind the 1st best.
For the human rights angle, I think that developing “restricted AIs” instead of “aligned AIs” has real promise. I feel like we could get more out of the whole field of “restricted software”.
Here’s a test case. Imagine we are trying to forbid an AI from something very minor. We are trying to forbid it from saying the word, “paperclip”. However, it’s an extremely complicated program. It does all sorts of things. Maybe it has even modified its own code!
A technical aside: in general, you cannot take in an arbitrary function and prove anything about it, due to the halting problem. However, if your programming language is constrained in some way, to make it not Turing-complete, or if you permit the prover to sometimes say “I don’t know”, then the problem is possible again. I am not trying to propose we solve the halting problem here.
So. It is theoretically possible to have a function, 1000 lines of Python code, and prove that no matter what, it will not print out the word “paperclip”. I think it should be much easier than proving something is “aligned” in a fuzzy way. But right now, this is more or less beyond our abilities. The idea of provably secure systems has been around for decades. But it has always been too hard to get working for more than toy problems.
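As a toy version of what such a restriction could look like (my own sketch, nothing like a real verifier): a checker for a tiny whitelisted subset of Python, in which string literals may only appear directly inside print calls. Since the subset has no string operations, every string the program could ever output is a literal we can inspect syntactically.

```python
import ast

FORBIDDEN = "paperclip"

# Whitelisted AST node types: arithmetic on numbers, assignment,
# and calls. Anything else (imports, loops, attribute access,
# string methods, ...) puts a program outside the language.
ALLOWED = (ast.Module, ast.Expr, ast.Assign, ast.Name, ast.Store,
           ast.Load, ast.BinOp, ast.Add, ast.Sub, ast.Mult,
           ast.Constant, ast.Call)

def check(source):
    """True if the program is inside the restricted language and
    provably never prints FORBIDDEN."""
    tree = ast.parse(source)
    printable = set()  # string literals appearing directly in print()
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            return False  # outside the restricted language
        if isinstance(node, ast.Call):
            if not (isinstance(node.func, ast.Name)
                    and node.func.id == "print"):
                return False  # print is the only allowed call
            for arg in node.args:
                if isinstance(arg, ast.Constant) and isinstance(arg.value, str):
                    printable.add(id(arg))
    for node in ast.walk(tree):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            if id(node) not in printable or FORBIDDEN in node.value:
                return False  # stray or forbidden string literal
    return True

print(check("x = 1 + 2\nprint('hello', x)"))  # True
print(check("print('paperclip')"))            # False
print(check("print('paper' + 'clip')"))       # False
```

The interesting part is the shape of the guarantee: it holds for every execution, but only because the language was designed so the property is checkable, which is the trade the “restricted AI” idea is making.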
Perhaps AI itself could help us build these restricted systems. AI could get really good at scanning code for vulnerabilities, good enough so that it was able to sign off on most code bases, and say “We can prove there are no security flaws here.”
Anyway, not that this is easy or anything. I just think it’s a more plausible alternative approach to the “alignment” problem.