In Part 1 of this blog series, I shared my experience using Claude Code to resurrect a critical Trello card management tool that I’ve relied on for 6 years but hadn’t maintained in 2 years. After Twitter API changes broke my data pipeline and deployment issues plagued the application, I used Claude Code to:
- Resolve persistent bugs in the card movement functionality that had frustrated me for 4 years.
- Create a new Twitter data pipeline to replace the broken Zapier integration.
- Fix deployment issues with Google Cloud Run that were causing mysterious startup errors.
- Add quality-of-life improvements to the UI and keyboard shortcuts.
In that post, I described my experiences using Claude Code over 2 days, highlighting both its capabilities (quickly debugging complex issues, suggesting architectural improvements) and potential pitfalls (coloring outside the lines, overly complex solutions). Rather than passive “vibe coding,” I found using Claude Code required active engagement and critical thinking—like pair programming with an extremely knowledgeable but sometimes overzealous partner.
I wrote a lot about fixing bugs that eluded me for 4 years.
In this post, I wanted to write about how Claude Code helped me think through the architectural problems in my code: the kind of problems whose fixes eliminate entire categories of errors, not just individual bugs.
But despite all the books I’ve read, the hundreds of podcasts and videos I’ve watched, and the scores of hours practicing it on my code, I was at a loss to figure out exactly what to do to fix the architectural issues in my code. It was amazing to have an LLM tell me exactly how to do it in my specific codebase.
What made it so exciting was that this wasn’t really about generating code. Instead, it helped me bridge years of theory that I’ve studied and actually put it into practice.
Lessons:
- Architectural Improvements > Bug Fixes: Fixing bugs is good, but addressing underlying architectural issues can eliminate entire categories of bugs and make future development more robust.
- From Theory to Practice: AI can help you put architectural theory into practice in your own codebase. This is AI-assisted coding at its best, and the opposite of “turning your brain off.”
- Testability as a North Star: Easy testability is a hallmark of good architecture. When components are easily testable, they tend to have cleaner interfaces and better separation of concerns.
- Active Collaboration: Using AI effectively requires engagement and critical thinking. It’s not about blindly accepting generated code but collaboratively reasoning through solutions.
Let’s Fix The Architecture Problems!
I asked Claude Code to provide recommendations for completely eliminating these categories of errors, to put me on a path to a cleaner code base. We identified two areas for improvement that I’m eager to jump into, after book deadlines are over:
- Ensuring that my Clojure maps that store Trello cards are accessed correctly and consistently.
- Developing a better way to extract the business logic in the Move modal dialog box into pure functions, which can then be unit tested.
It was exciting to create the list and have it give me concrete patterns that I could follow.
Hardening Data Access with a Consistent Interface Layer
The bugs I encountered in Part 1 were mainly due to map keys resolving to nil because of inconsistent usage of namespaced keys. In some places, I used :idList, while in other places I used :trello-card/idList. In fact, there are probably other misspelled keys in my code too.
This inconsistent usage resulted in nil values appearing where they shouldn’t.
Claude Code recommended creating accessor functions like get-card-id and get-list-id that would handle all possible variations:
(defn get-card-id [card]
  ;; A more forgiving way to get the id, whichever key convention was used
  (or (:id card)
      (:trello-card/id card)
      (:card-id card)
      (get-in card [:trello-card/raw :id])))
I liked this because I often forget my own key conventions in programs I’ve worked on for a long time, and I occasionally mistype keys. Implementing this in functions is a great way to enforce consistency.
I’ll likely do the same thing for setting these key values so that I can put all those conventions in code rather than keeping them in my head.
(People make fun of class-getters and setters, but at a certain point, it sure is nice to make those explicit!)
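For the setter side, a minimal sketch of what that might look like — the function name and the decision to drop legacy key variants are my assumptions, not code from the actual project:

```clojure
;; Hypothetical companion to get-card-id: one place that writes the id
;; under the canonical key, whatever shape the map arrived in.
(defn set-card-id [card id]
  ;; Drop the legacy variants so the map converges on one convention.
  (-> card
      (dissoc :card-id :trello-card/id)
      (assoc :id id)))
```

With this in place, every write path funnels through one function, so a mistyped key becomes a compile-time-visible call site rather than a silent nil at read time.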
Claude Code also suggested adding data validation to catch issues early:
(defn validate-move-operation [{:keys [card-id list-id target-list-id] :as move-op}]
  (cond ;; works like an if/else-if chain: first matching test wins
    (nil? card-id)
    {:valid false :error "Missing card ID"}

    (nil? target-list-id)
    {:valid false :error "Missing target list ID"}

    :else
    {:valid true :data move-op}))
That’s also a great idea, and I can add those as post-condition validation checks in functions that modify card state or use cards. Huzzah.
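Clojure’s built-in `:pre`/`:post` conditions make this easy to wire in. A hedged sketch, where `move-card-to-list` is an illustrative name rather than the real function:

```clojure
;; Forgiving accessor, as above
(defn get-card-id [card]
  (or (:id card) (:trello-card/id card) (:card-id card)))

;; Illustrative card-mutating function: the pre-conditions reject a card
;; with no resolvable id, and the post-condition asserts the result (%)
;; still has one on the way out.
(defn move-card-to-list [card target-list-id]
  {:pre  [(some? (get-card-id card)) (some? target-list-id)]
   :post [(some? (get-card-id %))]}
  (assoc card :idList target-list-id))
```

A failed condition throws an `AssertionError` immediately, so a nil-keyed card blows up at the mutation site instead of surfacing as a mystery nil three functions later.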
Improving Testability for the Card Moving Interface
I then asked Claude Code the following:
Help me create a plan to make the MoveModal dialog box and associated components more testable. It is currently painful to test all the types of errors that we’ve encountered over the past couple of days. Please enumerate those types of problems and come up with a way to rewrite the code to enhance testability. This would help eliminate the need for manual testing of pulling up the MoveModal box and testing the escape, enter, and tab keys. Additionally, I want to check that the history links function correctly and that clicking on the links triggers the correct moves. The moves must direct to the appropriate location.
What it came up with was terrific:
Refactor Component Architecture: Replace the current global atom-based state with component-local state to make the flow of data more explicit and testable. Pass explicit callbacks for all actions instead of relying on shared state. Create pure functions for all logic, which can be tested independently of the UI components.
Separate UI Logic from Event Handling: Isolate pure render functions that only handle UI presentation from event handlers that process keyboard and click interactions. Extract state management logic into separate testable functions that can be verified without rendering components.
Create Explicit Test Scenarios: Develop unit tests for all pure functions with comprehensive input variations. Build component tests that use mock data and callbacks to verify correct behavior. Implement integration tests to ensure complete workflows function as expected.
Implement Testing Utilities: Create event simulators for keyboard and mouse interactions that allow testing without manual input. Build state inspectors to easily verify component state at different points in time. Develop mock callbacks that record their invocations for validation.
Create Manual Test Harness: Build a dedicated test component with controls for simulating different user interactions. Include state visualization to make the current component state visible during testing. Add event simulation buttons to trigger specific behaviors without manual keyboard input.
Improve History Links Testing: Isolate the history display logic into pure functions that can be tested independently. Create mock data generators to produce test history entries. Implement verification functions to ensure clicking history links triggers the correct moves.
What was so fantastic for me was that all these recommendations seemed spot-on. But what was even better was that I could ask it to “show me exactly how I would implement tests for keystroke handling and what the UI component would look like.”
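Reconstructed from memory rather than pasted verbatim, the shape of the answer was a pure event-interpretation function plus ordinary clojure.test unit tests. The key names and state shape here are my assumptions:

```clojure
(ns movemodal.keys-test
  (:require [clojure.test :refer [deftest is]]))

;; Pure: maps a key press and the modal's state to an action. No atoms,
;; no DOM, no rendered component.
(defn key->action [key-name state]
  (case key-name
    "Escape" {:action :close-modal}
    "Tab"    {:action :cycle-focus}
    "Enter"  (if-let [target (:selected-list state)]
               {:action :move-card :target target}
               {:action :show-error :error "No target list selected"})
    {:action :ignore}))

;; Plain unit tests -- no browser, no manual keyboard mashing.
(deftest escape-closes-modal
  (is (= {:action :close-modal} (key->action "Escape" {}))))

(deftest enter-without-target-errors
  (is (= :show-error (:action (key->action "Enter" {})))))

(deftest enter-with-target-moves
  (is (= {:action :move-card :target "list-9"}
         (key->action "Enter" {:selected-list "list-9"}))))
```

The UI component then shrinks to a thin shell: its keydown handler just calls `key->action` and dispatches whatever comes back.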
Changes to MoveModal Component Architecture
- Move validation logic from the component into a standalone validate-move-operation function
- Create a get-effective-target function to handle target selection logic
- Extract history transformation logic into a prepare-history-display function
- Create a separate handle-move-button-click function for move button logic
- Implement handle-cancel-button-click for modal closing
- Add handle-reload-lists-click for list reloading functionality
- Create handle-dropdown-selection for dropdown changes
- Add handle-history-click for history item interaction
- Remove nested conditional logic from the component render function
- Eliminate duplicate validation code that appears in two places
- Move logging statements to a specific debug section
- Change the component to call handlers instead of containing logic
- Remove direct state manipulation from render functions
- Consolidate identical validation blocks into a single implementation
- Unify the move operation execution code
- Remove the redundant effective-target calculation
- Simplify button onClick handlers to call extracted functions
- Change dropdown callbacks to use handler functions
- Move alert/error display logic out of the component
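To make the history-link items above concrete, the extraction might look like this. The entry shape and function name are my assumptions, not the real code:

```clojure
;; Hypothetical pure function: turn a raw history entry into the data a
;; link needs -- a label to render, and the move to perform on click.
;; Testable with plain maps; no UI required.
(defn history-entry->link [{:keys [card-id from-list to-list]}]
  {:label         (str from-list " -> " to-list)
   :on-click-move {:card-id card-id :target-list-id to-list}})
```

A test can now assert that clicking a given history entry would trigger exactly the right move, which was one of my explicit requirements in the prompt.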
Implementation Priority – Start by creating pure functions for all logic currently embedded in components. Next, implement event simulation utilities for testing. Then refactor components to use the pure functions and explicit callbacks. Add comprehensive unit tests for all functions and components. Finally, create a test harness for manual verification of complex interactions.
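For the “mock callbacks that record their invocations” piece, a minimal sketch — the API here is my invention, not code from the project:

```clojure
;; Hypothetical test utility: a callback that remembers every call it
;; receives, so handler wiring can be verified without rendering anything.
(defn recording-callback []
  (let [calls (atom [])]
    {:fn    (fn [& args] (swap! calls conj (vec args)) nil)
     :calls calls}))
```

In a test you’d pass `:fn` in as, say, the on-move handler, drive the pure handler functions, then assert on `@(:calls ...)` to confirm the right callback fired with the right arguments.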
Conclusion
While fixing bugs that had annoyed me for 4 years felt great, what’s truly exciting is how AI can recommend fixes for the architectural problems underneath them.
I love architecture. In fact, the entire Idealcast podcast series was my multiyear quest to better understand what good software architecture is, which has everything to do with the learnings that went into the Wiring the Winning Organization book that I coauthored with Dr. Steve Spear. (Download the entire Part I here!)
But despite all the books I’ve read, the hundreds of podcasts and videos I’ve watched, and the scores of hours practicing it on my code, it was amazing to have an LLM tell me exactly how to do it in my specific codebase. By recommending patterns like consistent data accessors and how to extract the logic from my UI components, I was well on my way to fixing these problems.
This will make the testing so much easier. Instead of tediously clicking through the UI to test keyboard interactions (which I’ve been doing for years), I can now verify these functions in isolation.
A change that would have required hours of manual testing can now be validated in seconds through automated tests. This creates a tighter feedback loop, allowing me to catch issues before they reach production.