Day 2: Learning, Testing, and Junior Dev Vibes

It’s Day 2 of Advent of Code, and my AI assistant, Aider (+ Claude Sonnet), is already showing signs of learning and adapting. Today’s challenge was all about parsing and analyzing data, with a mix of predictable successes, unexpected issues, and a sprinkle of junior dev behavior from the AI.
The full code is available on GitHub.
Part 1: Red-Nosed Reindeer Data Analysis
The Red-Nosed Reindeer nuclear fusion/fission plant engineers needed help analyzing unusual data from their reactor. Each report contains a list of numbers called levels. A report is considered “safe” if it meets two criteria:
The levels are either all increasing or all decreasing.
Any two adjacent levels differ by at least 1 and at most 3.
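In Elixir terms, that check boils down to something like this (a minimal sketch of the rule itself, not Aider's actual code; the module and function names are illustrative):

```elixir
defmodule Day2.Sketch do
  # A report is safe when the consecutive differences are all in 1..3
  # (strictly increasing) or all in -3..-1 (strictly decreasing).
  def safe_report?(levels) do
    diffs =
      levels
      |> Enum.chunk_every(2, 1, :discard)
      |> Enum.map(fn [a, b] -> b - a end)

    Enum.all?(diffs, &(&1 in 1..3)) or Enum.all?(diffs, &(&1 in -3..-1))
  end
end
```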
AI Behavior: A Clever Start
When I asked Aider to scrape the challenge and store it as a file, it surprised me. Instead of copying the entire challenge, it summarized the problem, extracting the key requirements and ignoring the fluff. Its output included a short description and an example, which was oddly efficient! It’s learning how to cut to the chase – a promising sign.
Red, Green, Refactor
- Red: Writing Failing Tests
Following the test-driven development (TDD) approach, I wrote tests based on the example input. Aider generated:
Tests for valid inputs exercising the individual parts of the solution.
Failing test cases for edge cases like empty input or invalid formats.

- Green: Implementation Code
Aider then implemented the code. The solution was surprisingly clean and idiomatic for Elixir. It:
Followed Elixir conventions and project structure.
Handled all test cases, including edge cases and error conditions.
Broke down the problem into smaller, reusable functions:
parse_reports/1: Validates the input.
parse_line/1: Converts strings to numbers.
safe_report?/1: Contains the main logic.
Used helper functions for checking increasing or decreasing patterns.
It even employed the with keyword, elegantly focusing the logic on the happy path while bubbling errors up.

The parse_reports/1 function handled errors gracefully.
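To give a feel for that shape, here is an illustrative sketch of `with`-based parsing, not Aider's actual output (the error atoms and the `parse_lines/1` helper are assumptions):

```elixir
defmodule Day2.Parse do
  # Happy path reads top to bottom; any {:error, reason} short-circuits out.
  def parse_reports(input) do
    with lines when lines != [] <- String.split(input, "\n", trim: true),
         {:ok, reports} <- parse_lines(lines) do
      {:ok, reports}
    else
      [] -> {:error, :empty_input}
      {:error, reason} -> {:error, reason}
    end
  end

  # Converts a line like "7 6 4 2 1" into {:ok, [7, 6, 4, 2, 1]}.
  def parse_line(line) do
    parsed =
      line
      |> String.split()
      |> Enum.map(&Integer.parse/1)

    if Enum.any?(parsed, &(&1 == :error)) do
      {:error, :invalid_number}
    else
      {:ok, Enum.map(parsed, fn {n, _rest} -> n end)}
    end
  end

  defp parse_lines(lines) do
    lines
    |> Enum.reduce_while({:ok, []}, fn line, {:ok, acc} ->
      case parse_line(line) do
        {:ok, levels} -> {:cont, {:ok, [levels | acc]}}
        {:error, _} = err -> {:halt, err}
      end
    end)
    |> case do
      {:ok, acc} -> {:ok, Enum.reverse(acc)}
      err -> err
    end
  end
end
```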

- Refactor
There wasn’t much refactoring needed – the code was already better than I expected! It followed idiomatic patterns and was well-structured.
Hallucination Alert!
December 2023?!
At one point, Aider claimed the puzzle input wouldn’t be available until December 2024 – apparently convinced it was still 2023 (spoiler: it was December 2nd, 2024). After I pushed back, it acknowledged the mistake and told me to fetch the data myself. Classic junior dev vibes: “If I can’t do it, it must be impossible!”

Fine, I retrieved the data manually.
Wrong Puzzle?!

With the real input in hand, I ran into another hiccup. The AI struggled because the test inputs had uniform rows, but the real input didn’t. This was a case of overfitting to the test cases – a valuable lesson in ensuring tests are as generic as possible.

Once I clarified the issue, the AI fixed the solution and apologized. Success! However, it inadvertently broke the tests in the process. After a nudge to run the tests while refactoring, it resolved the issue quickly. A good reminder to be explicit in instructions when collaborating with an AI.

Part 2: The Harder Variant
As always, Advent of Code’s second part added complexity to the challenge.
- New Tests
I started by creating tests for the updated requirements. Aider adapted smoothly, generating comprehensive tests for the additional logic.

- Implementation
The implementation phase went smoothly. Aider continued to follow the idiomatic patterns established earlier and handled the new requirements without much trouble.
- Success!
The solution worked on the first try. It felt great to see the AI growing more reliable with each iteration.
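For context, Day 2’s second part is the “Problem Dampener”: an unsafe report also counts as safe if removing a single level makes it safe. A brute-force sketch of that check – illustrative only, not the actual solution – might look like:

```elixir
defmodule Day2.Dampener do
  # Same safety rule as Part 1: differences all in 1..3 or all in -3..-1.
  def safe_report?(levels) do
    diffs =
      levels
      |> Enum.chunk_every(2, 1, :discard)
      |> Enum.map(fn [a, b] -> b - a end)

    Enum.all?(diffs, &(&1 in 1..3)) or Enum.all?(diffs, &(&1 in -3..-1))
  end

  # Safe as-is, or safe after deleting any one level.
  def safe_with_dampener?(levels) do
    safe_report?(levels) or
      Enum.any?(0..(length(levels) - 1), fn i ->
        levels |> List.delete_at(i) |> safe_report?()
      end)
  end
end
```

Trying every single-level deletion is O(n²) per report, which is perfectly fine at Advent of Code input sizes and much easier to get right than clever special-casing.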

Reflections on Day 2
Today reinforced some key takeaways for working with AI assistants:
Testing discipline is non-negotiable. Overfitting to edge cases can derail your progress, so always aim for generic tests.
Be explicit in your instructions. Even smart tools like Aider need clear guidance to avoid breaking things inadvertently.
AI can be a great collaborator. Its ability to focus on implementation details lets me stay focused on the higher-level logic and architecture—a fantastic productivity boost for senior engineers and engineering managers.
Day 2 was a mix of learning, problem-solving, and junior dev humor. Can’t wait to see what Day 3 has in store!