With the rise of powerful AI tools, it's time to rethink how we work: moving away from traditional methods and toward AI-augmented workflows.
One of the biggest challenges I face is navigating complex, domain-specific codebases within large organizations, including my own. These often contain undocumented quirks, legacy bugs, and hidden logic that are difficult to uncover and easy to get blindsided by.
To work efficiently, especially when integrating with or building on top of existing systems such as internal APIs (e.g., when creating MCP servers), I've found a better approach: leverage AI, specifically Anthropic's models, which currently offer some of the best coding assistance.
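For concreteness, here's a minimal sketch of what wrapping an internal API in an MCP server can look like, using the official Python `mcp` SDK. The endpoint URL and response shape are my own placeholders, not a real internal system:

```python
# Minimal sketch of an MCP server exposing a hypothetical internal API.
# Assumes the official Python SDK (`pip install mcp httpx`); the URL and
# endpoint are invented for illustration.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-api-bridge")

INTERNAL_API = "https://internal.example.com/api"  # hypothetical

@mcp.tool()
def get_account(account_id: str) -> dict:
    """Fetch raw account data from the internal API."""
    resp = httpx.get(f"{INTERNAL_API}/accounts/{account_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport for local MCP clients
```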
Instead of reading dense, often outdated documentation, I let the AI traverse the GitHub codebase directly. It quickly identifies key syntax, surfaces relevant files, and even generates sample/test code for hands-on experimentation.
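That generated sample code is usually just a disposable probe. Here's a hypothetical example of what the model might produce, with an invented base URL and endpoint; the value is simply having something runnable to poke the API with:

```python
# Hypothetical example of AI-generated probe code: a quick pytest smoke
# test that records what an undocumented endpoint actually returns,
# since the written docs may be stale. Run with `pytest -s`.
import httpx

BASE_URL = "https://internal.example.com/api"  # hypothetical

def test_accounts_endpoint_shape():
    resp = httpx.get(f"{BASE_URL}/accounts", params={"limit": 5}, timeout=10)
    assert resp.status_code == 200
    payload = resp.json()
    assert isinstance(payload, list)
    # Print the real field names so undocumented ones stand out.
    for account in payload:
        print(sorted(account.keys()))
```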
Once that foundation is set, I use the AI to probe deeper, especially into areas that even internal engineers overlook. For example, while working with the SPAR API, I encountered a strange behavior: account types labeled “ACCT” had to be converted into a return series before the UI could display them. The documentation made no mention of this, raising concerns about missing steps or permission-based logic hidden under the surface.
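To make that quirk concrete, here's a hedged sketch of the kind of conversion involved. The field names and the simple period-over-period return formula are my assumptions for illustration; none of this is documented SPAR behavior:

```python
# Hedged sketch: converting an "ACCT" account's value history into a
# return series for the UI. Field names ("date", "value") and the use
# of simple returns r_t = v_t / v_{t-1} - 1 are assumptions.
from typing import TypedDict

class ValuePoint(TypedDict):
    date: str
    value: float

def to_return_series(points: list[ValuePoint]) -> list[dict]:
    """Pair each period's end date with its period-over-period return."""
    returns = []
    for prev, curr in zip(points, points[1:]):
        returns.append({
            "date": curr["date"],
            "ret": curr["value"] / prev["value"] - 1.0,
        })
    return returns

# Example: three month-end values -> monthly returns of roughly +2% and -2%.
history = [
    {"date": "2024-01-31", "value": 100.0},
    {"date": "2024-02-29", "value": 102.0},
    {"date": "2024-03-31", "value": 99.96},
]
print(to_return_series(history))
```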
That’s when it’s critical to ask the right questions and reach out. When the AI hits a wall, it usually means you’ve found something truly obscure, and that’s your cue to dig deeper.