Advent of Code, the gauntlet of programming puzzles that Eric Wastl runs each year from December 1 through December 25, is a big event for me. It’s the one time of the year when I can get out of the humdrum of programming for an insurance company and really get to think about problems. Problems like: how many puzzles can I solve in a new programming language before I say “Fuck it, I’ll do it in Python”?
(Usually three or four).
One year, I started out doing the puzzles in the “fantasy retro console” Pico-8, as games. One year I made one of the later puzzles into an isometric roguelike that I then turned in for the following spring’s 7 Day Roguelike competition.
My past couple of years haven’t been wild successes. The puzzles got too hard for me to wrap my mind around, and rather than seek help on Reddit, I just quit. I always meant to go back and finish them but, well, I didn’t.
I’m not super surprised to find that Eric has decided to cut down on the number of puzzles in AoC. He’s been clear for years that preparing for AoC takes a good portion of his year, what with developing the puzzles and having testers solve them so that he can smash the bugs and set the difficulty appropriately.
I was a little more surprised that he’s deleting the global leaderboard. Some people, it seems, are a little too invested in it, even going so far as to DDoS the site to keep other people from submitting. For the past couple of years, and especially last year, the top spots on the leaderboard went to people who had completed both halves of the usually pretty tough puzzles in less than a minute, in many cases much less. The global leaderboard had been made pointless by AI, by very fast solvers, or by very fast solvers using AI to simplify the puzzles for them.
And, Eric is very clear about the use of AI:
Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve – no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
I’ll be up front: I have been programming with GitHub Copilot for some time, and I use it as part of my workflow both at work and at home. I’ve never dumped a puzzle into Copilot and asked it to pop out a solution; it’s more useful for setting up infrastructure, like timers and file parsing. I have asked ChatGPT to solve problems after I’ve solved them, just to see if it can. Generally, it doesn’t do so hot. But clearly the top solvers have better AI than I do, and that’s fine. I agree with Eric here: I do AoC to challenge myself, not to prove I can AI better than someone else.
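The kind of boilerplate I mean looks something like this — a hypothetical sketch, not anything Copilot actually produced for me; the file layout, function names, and placeholder solver are all my own assumptions:

```python
import time
from functools import wraps
from pathlib import Path


def read_lines(day: int) -> list[str]:
    """Read the puzzle input for a given day as a list of lines.

    Assumes inputs are saved under an 'inputs/' directory as dayNN.txt.
    """
    return Path(f"inputs/day{day:02d}.txt").read_text().splitlines()


def timed(solve):
    """Decorator that reports how long a solver took, in milliseconds."""
    @wraps(solve)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = solve(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{solve.__name__}: {result} ({elapsed_ms:.2f} ms)")
        return result
    return wrapper


@timed
def part1(lines: list[str]) -> int:
    # Placeholder solver: sum the integers in the input.
    return sum(int(line) for line in lines)
```

None of that is the interesting part of a puzzle — which is exactly why I’m happy to let a tool scaffold it.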
You can read all about the changes for 2025 here, in the FAQ for AoC 2025.

This raises a question I’ve been thinking about for a while: how is Eric going to know if someone used AI? Especially if they only used it in the planning, not the execution? Is it purely an honor system, where he trusts people not to use it? If so, plenty of people will use it and just lie about it.
Even with something like watermarking, what’s to stop someone from producing the code with AI in one place and then transcribing it and entering it somewhere else? I was thinking about this with Suno, Spotify, and other streaming platforms using software to identify AI in music, and wondering just where the boundaries ought to be drawn. If I made a song on Suno and then had a live band cover it, would that be an AI song or not? Certainly no algorithm would be able to identify it as AI, because those detectors check the audio frequencies, and this would all be real instruments and real singers. Ditto, if an artist used AI to produce an illustration and then copied it by hand, would that be AI? Or if I had ChatGPT write me a short story and then rewrote it, keeping the structure, the plot, and the narrative but using my own words, the way a translator would translate from another language?
I think it’s still easy enough to spot a 100% AI product, but being sure AI hasn’t been used at all is going to be impossible.
He might not know if someone is using AI. I think his removal of the global leaderboard is him saying, “Knock yourself out. There’s no reason to try to solve every puzzle in ten seconds.” He’s just saying that winning using AI is no accomplishment.
I do think there is a place for AI in creative works, and that includes creative endeavors like programming, but it is not as a source of creativity. Once the human does the creative part, there may be a place for AI. In writing, I use it as an editor: I write, it points out where I made mistakes, and I either agree or disagree. But in all cases, I am doing the writing and supplying the creativity.
I’ve written a bunch of stories and AI has never been able to suggest something I felt had a place in any story I’ve written. Similarly, the AI code I see at work is usually of pretty poor quality, although it certainly looks nice. But it can’t solve a problem from scratch.
Once again, nice things get ruined by the pesky humans!
This is why I’m a big proponent of AI and AGI/super intelligence. I can’t wait for the AI to wipe out humans and raise dogs to be the dominant species!
Literally asking for a dog-eat-dog world, are you?