
AI Will Become Mathematicians’ ‘Co-Pilot’ | Spektrum der Wissenschaft – Scientific American – Christoph Drösser | Fields Medalist Terence Tao explains how proof checkers and AI programs are dramatically changing mathematics
https://www.scientificamerican.com/article/ai-will-become-mathematicians-co-pilot/

With AI, there’s a real potential of doing that. I think in the future, instead of typing up our proofs, we would explain them to some GPT. And the GPT will try to formalize it in Lean as you go along. If everything checks out, the GPT will [essentially] say, “Here’s your paper in LaTeX; here’s your Lean proof. If you like, I can press this button and submit it to a journal for you.” It could be a wonderful assistant in the future.
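To give a flavor of what a Lean-checked proof looks like, here is a tiny illustrative example (not from the interview): a toy theorem stated and proved in Lean 4, where the proof checker mechanically verifies every step.

```lean
-- A toy theorem and its machine-checked proof in Lean 4.
-- `Nat.add_comm` is the library lemma that addition of naturals commutes;
-- the checker accepts the proof only if every step is formally valid.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

In the workflow Tao describes, a mathematician would explain the argument informally and the AI would produce statements and proofs in this form, with the checker guaranteeing correctness.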
I think in three years AI will become useful for mathematicians. It will be a great co-pilot. You’re trying to prove a theorem, and there’s one step that you think is true, but you can’t quite see how it’s true. And you can say, “AI, can you do this stuff for me?” And it may say, “I think I can prove this.” I don’t think mathematics will become solved. If there were another major breakthrough in AI, it’s possible, but I would say that in three years you will see notable progress, and it will become more and more manageable to actually use AI. And even if AI can do the type of mathematics we do now, it means that we will just move to a higher type of mathematics. So right now, for example, we prove things one at a time. It’s like individual craftsmen making a wooden doll or something. You take one doll and you very carefully paint everything, and so forth, and then you take another one. The way we do mathematics hasn’t changed that much. But in every other type of discipline, we have mass production. And so with AI, we can start proving hundreds of theorems or thousands of theorems at a time. And human mathematicians will direct the AIs to do various things.
What mathematicians are doing is exploring the space of what is true and what is false, and why things are true. And the way we do it is through proofs. Even when we believe something is true, we have to go and try to prove it or disprove it. And that takes a lot of time. It’s tedious. But in the future, maybe we will just ask an AI, “Is this true or not?” And we can explore the space much more efficiently, and we can try to focus on what we actually care about. The AI will help us a lot by accelerating this process. We will still be driving, at least for now. Maybe in 50 years things will be different. But in the near term, AI will automate the boring, trivial stuff first.
Part of the problem is that it doesn’t have enough data to train on. There are published papers online that you can train it on. But I think a lot of the intuition is not captured in the printed papers in journals but in conversations with mathematicians, in lectures and in the way we advise students. Sometimes I joke that what we need to do is to get GPT to go take a standard graduate education, sit in graduate classes, ask questions like a student and learn like humans learn mathematics.
I wish people would say machine learning or neural networks instead of AI. People think of ChatGPT when they read articles like this, but they’re not using it.
AI still makes many mistakes. I have asked it to create formulas to solve specific math problems, and it made mistakes. It thought it had answered correctly until I showed it where it was incorrect. It then apologized but could not correct the error.
It may not be wise to assume that it does not or cannot err.
Have you asked AI to calculate? I hope it improves before anyone uses it for anything important!