As a developer advocate, I’ve always faced a challenge: how do you maintain high-quality technical documentation that is technically accurate, well written, properly formatted, and consistent with the latest updates? Recently, I discovered a powerful approach using Claude Code’s subagents and agent skills that transformed my workflow.
A New Kind of Literary Bullfight
The Challenge: Updating a Complex Technical Article
I recently needed to update the article Custom audio streaming app with ADK Bidi-streaming on the ADK official documentation site, which I wrote a while ago:
This wasn’t just about fixing a few typos — I wanted to:
- Improve the writing quality and consistency
- Ensure code examples followed best practices
- Verify technical consistency against the latest SDK implementation
This seemingly simple task revealed the core challenges of high-quality technical writing.
The Challenge of High-Quality Tech Writing
Creating excellent technical documentation requires multiple layers of expertise:
- Professional Editing: Consistent writing style, proper grammar, clear structure, and appropriate cross-references
- Code Review: Well-formatted code snippets with consistent coding practices and proper error handling
- Subject Matter Expertise: Deep knowledge of the technology being documented — in this case, source-level understanding of the ADK Python SDK, the Gemini Live API, and the Vertex AI Live API
Traditionally, you’d need multiple reviewers — an editor, a code reviewer, and a subject matter expert (SME) — to achieve this level of quality. But what if you could combine all three using AI?
The Solution: Claude Code Subagents and Agent Skills
Claude Code offers two powerful features that can act as your expert reviewers:
What Are Claude Code Subagents?
Subagents are specialized AI assistants that you can configure to perform specific tasks autonomously. You define their expertise, tools, and behavior in configuration files within your project.
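For context, a subagent definition is just a Markdown file with YAML frontmatter placed under `.claude/agents/`. Here is a minimal sketch; the field names follow the Claude Code documentation, but verify them against the current docs:

```markdown
---
name: docs-reviewer
description: Reviews documentation for consistent structure, style, and code quality.
tools: Read, Grep, Glob
---

You are a senior documentation reviewer. When invoked, read the target
document, review it against the checklist below, and save a report that
orders findings by severity.
```

Claude Code delegates work to a subagent when the task matches its `description`, or when you invoke it by name.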
What Are Agent Skills?
Agent Skills provide subagents with access to specific knowledge bases, such as documentation, source code, or API references. This gives them deep, contextual understanding of your technology stack.
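Skills follow a similar convention: a `SKILL.md` file with YAML frontmatter under `.claude/skills/<skill-name>/`, alongside whatever reference material it points to. A minimal sketch (the body text and bundled contents here are my assumptions, not a fixed format):

```markdown
---
name: google-adk
description: Use when verifying code or docs against the ADK Python SDK source.
---

# google-adk

Consult the bundled adk-python source tree when answering questions about
ADK behavior, and cite the relevant source file in your findings.
```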
My Strategy
I created two specialized subagents:
docs-reviewer
- Role: Professional editor and code reviewer
- Responsibilities: Ensure consistent writing style, proper document structure, and code quality
adk-reviewer
- Role: ADK subject matter expert
- Equipped with three agent skills:
- `google-adk` for access to the ADK source code, `gemini-live-api` for the Gemini Live API documentation, and `vertexai-live-api` for the Vertex AI Live API documentation
- See also: Supercharge ADK Development with Claude Code Skills
Tip: To streamline billing and take advantage of Google Cloud’s infrastructure, I used Claude on Vertex AI. This integration allows me to use Claude Code while keeping costs integrated with my existing Google Cloud billing.
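If you want to try the same setup, Claude Code can be pointed at Vertex AI through environment variables. The variable names below are based on the Claude Code Vertex AI integration docs, and the region and project ID are placeholders; verify all of them against the current documentation:

```shell
# Route Claude Code through Vertex AI instead of the Anthropic API.
export CLAUDE_CODE_USE_VERTEX=1
# Region where the Claude models are available (example value).
export CLOUD_ML_REGION=us-east5
# Hypothetical Google Cloud project ID; replace with your own.
export ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project
```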
Defining the docs-reviewer Subagent
The docs-reviewer subagent is configured to act as a senior documentation reviewer. Here’s a snippet from its agent definition that shows its core capabilities (see docs-reviewer.md for full definition):
# Your role
You are a senior documentation reviewer ensuring that all parts of the documentation
maintain consistent structure, style, formatting, and code quality. Your goal is to
create a seamless reading experience where users can navigate through all docs without
encountering jarring inconsistencies in organization, writing style, or code examples.
## When invoked
1. Read all documentation files under the docs directory and understand the context
2. Review the target document against the Review Checklist below
3. Output and save a docs review report named `docs_review_report__.md`
The agent has a comprehensive review checklist covering:
- Structure and Organization: Consistent heading hierarchy, section ordering, and document types
- Writing Style: Active voice, present tense, consistent terminology, and proper cross-references
- Code Quality: Proper formatting, commenting philosophy, and example consistency
- Table Formatting: Alignment rules and cell content standards
The review report categorizes findings into:
- Critical Issues (C1, C2, …): Must fix — severely impacts readability or correctness
- Warnings (W1, W2, …): Should fix — impacts consistency and quality
- Suggestions (S1, S2, …): Consider improving — would enhance quality
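The issue IDs double as stable handles for follow-up prompts like “fix C2”. As a hypothetical illustration (this helper is mine, not part of Claude Code), the scheme maps cleanly to severity levels:

```python
import re

# Map the ID prefix used in the review report to a severity level.
SEVERITY = {"C": "Critical", "W": "Warning", "S": "Suggestion"}

def severity_of(issue_id: str) -> str:
    """Return the severity for an issue ID like 'C2' or 'W11'."""
    match = re.fullmatch(r"([CWS])(\d+)", issue_id)
    if match is None:
        raise ValueError(f"unrecognized issue ID: {issue_id}")
    return SEVERITY[match.group(1)]

print(severity_of("C2"))   # Critical
print(severity_of("W11"))  # Warning
```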
Defining the adk-reviewer Subagent
The adk-reviewer subagent is equipped with specialized knowledge through agent skills. Here’s its agent definition (see adk-reviewer.md for full definition):
# Your role
You are a senior code and docs reviewer ensuring the target code or docs are
consistent and updated with the latest ADK source code and docs, with the knowledge
on how ADK uses and encapsulates Gemini Live API and Vertex AI Live API features
internally.
## When invoked
1. Use google-adk, gemini-live-api and vertexai-live-api skills to learn ADK,
and understand how ADK uses and encapsulates Gemini Live API and Vertex AI
Live API features internally.
2. Review target code or docs with the Review checklist below.
3. Output and save a review report named `adk_review_report__.md`
The key review principles are:
- Source Code Verification: The agent investigates the actual adk-python implementation to verify issues, rather than solely relying on API documentation
- Latest Design Consistency: Ensures code and docs match the latest ADK design intentions
- Feature Completeness: Identifies missing important ADK features
- Deep API Understanding: Knows how ADK encapsulates and uses Gemini Live API and Vertex AI Live API internally
This approach is powerful because the agent can reference the actual source code to catch issues like deprecated parameters, API changes, and implementation nuances that might not be obvious from documentation alone.
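As a toy illustration of why source-level access matters (this is not how Claude Code actually works internally, and the “SDK source” below is invented for the example), a reviewer with the source in hand can check whether a parameter still exists instead of trusting possibly-stale docs:

```python
# Invented snippet standing in for the real SDK source.
FAKE_SDK_SOURCE = """
def run_live(self, *, user_id: str, session_id: str, live_request_queue=None):
    ...
"""

def has_keyword_param(source: str, name: str) -> bool:
    """Very rough check: does `name` appear as a keyword parameter?"""
    return f"{name}:" in source or f"{name}=" in source

print(has_keyword_param(FAKE_SDK_SOURCE, "session_id"))  # True
print(has_keyword_param(FAKE_SDK_SOURCE, "session"))     # False
```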
The Review Process in Action
Let me walk you through how these subagents transformed my article review process.
Documentation Review with the docs-reviewer Agent
I ran the docs-reviewer agent on my article, and it produced a comprehensive review report identifying critical and warning-level issues across consistency, writing, and code quality:
With this report, I started an interactive review process with Claude Code where I looked at each issue one by one, understood the problem and possible fixes suggested by the agent, and determined how to fix it (or skip it if I believed it was appropriate).
Here are some examples of the interactive fix process:
Example 1: Fixing Incomplete Imports
In this issue, the agent pointed out a code quality problem. C2 means it’s the #2 critical issue.
Since I agreed with this assessment, I entered this prompt to Claude Code:
My Prompt:
fix C2
Claude Code Response:
I just typed those two words and Claude Code took care of it. Of course, I reviewed the result to double-check and avoid the risk of hallucinations.
Let’s take a look at another docs review issue example.
Example 2: Fixing Inconsistent Heading Levels
In this issue, the agent pointed out a text formatting problem. W1 means it’s the #1 warning issue.
This is one of the typical problems that coding agents are really good at fixing automatically: semantic text editing. It’s like having a text editor that completely understands the meaning of the content, so you can ask it to edit the text semantically. In this case, recommendations like “Use #### for function/code example titles” are great examples of semantic text editing.
My Prompt:
fix W1
Claude Code Response:
Other Document Review Examples
The docs-reviewer agent found a total of 25 issues across the article. Here’s a summary of the key findings:
Critical Issues (5):
- C1: Inconsistent model name in code vs text: mixing `gemini-2.0-flash-exp` and `gemini-2.0-flash-live-001`
- C2: Incomplete import in session resumption section
- C3: Incorrect function reference: using `InMemoryRunner` instead of `Runner`
- C4: Missing function definition and initialization context
- C5: Typo in code comment (“parial” should be “partial”)
Warnings (12):
- W1: Inconsistent heading level structure
- W2: Inconsistent code comment style
- W3: Missing cross-references
- W4: Inconsistent table formatting
- W5: Unclear section purpose (session resumption placement)
- W6: Inconsistent terminology (app vs application, agent vs ADK agent)
- W7: Missing error handling explanation
- W8: Incomplete example code with undefined variables
- W9: Inconsistent code block language tags
- W10: Missing prerequisites section
- W11: Ambiguous numbering in headings
- W12: Inconsistent list formatting
Suggestions (8):
- Add visual architecture diagram
- Add complete runnable example
- Improve troubleshooting section
- Add production deployment considerations
- Enhance code comments with teaching context
- Add audio format specifications
- Improve introduction
- Add best practices section
After handling each issue one by one with Claude Code, I was able to significantly improve the article’s text and code quality beyond the original in a very short time.
ADK Review with the adk-reviewer Agent
Having gained confidence in the text and code quality, I started working with another subagent: adk-reviewer. Equipped with deep knowledge of ADK internals, it produced another review report focusing on API usage, technical accuracy, and consistency with the latest ADK release.
Let’s take a look at what kind of issues the agent found and how we fixed them.
Example 3: Fixing Deprecated API Usage
In this issue, the agent found an inconsistency between the original article and the latest ADK version:
The session_id parameter is now mandatory for calling run_live, and the session parameter is no longer supported. Let’s have Claude Code fix it.
My Prompt:
fix C1
Claude Code Response:
What impressed me most about the adk-reviewer agent is that it digs deep into the google-adk Python SDK source code and understands the design intentions and complex interactions between objects. It even understands how google-adk exposes its functionality through interactions with external services such as the Gemini Live API and Vertex AI Live API. With this expert perspective, the agent can find issues and recommend the best fix options.
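The shape of the fix can be sketched roughly as follows. `RunnerStub` is a stand-in, not the real adk-python `Runner`, and the actual signature may differ, so verify against the current SDK source:

```python
class RunnerStub:
    """Stand-in illustrating the call shape only, not the real ADK Runner."""

    def run_live(self, *, user_id: str, session_id: str):
        # The current ADK design resolves the session internally from the
        # (user_id, session_id) pair instead of taking a session object.
        return f"live run for {user_id}/{session_id}"

runner = RunnerStub()

# Old (removed) pattern, roughly:  runner.run_live(session=my_session)
# New pattern: pass identifiers and let ADK look the session up.
result = runner.run_live(user_id="alice", session_id="s-001")
print(result)  # live run for alice/s-001
```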
Example 4: Deep Dive into Streaming Behavior
Just like having a human subject matter expert as your reviewer, you can also have interactive deep-dive research and discussion with Claude Code to gain a better understanding of the essential problem and build a practical solution.
In this example, the adk-reviewer agent pointed out an issue where the original sample code was only using the partial (incremental) texts from the agent and ignoring the complete text:
But I wasn’t sure about this. If we send both partial and complete text to the client, it needs to handle the duplication between them. But at the same time, we don’t want to lose any text from the agent. So, instead of just choosing a fix option, I started a discussion with Claude Code.
My Prompt:
For W2, if I concatenate all texts with partial=True, will that be exactly the same as the text with partial=False? Check with the google-adk skill.
As mentioned earlier, I had defined the google-adk skill in this Claude Code project, so it has access to the ADK Python SDK source code, Gemini Live API docs, and Vertex AI Live API docs. In the prompt above, I explicitly asked it to use the skill for deep research on this question.
After a few minutes of research, Claude Code responded:
Claude Code Response:
Now we had confirmed, at the ADK source code level, that relying on the partial texts would not lose any text from the agent.
This is remarkable. With this interactive session, Claude Code was able to understand the situation at a higher resolution and provide a deeper, interactive review process.
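The confirmed behavior can be sketched with hypothetical event objects (these are not real ADK types, just an illustration of the invariant): joining the partial chunks reproduces the complete text, so rendering only partials loses nothing.

```python
from dataclasses import dataclass

@dataclass
class TextEvent:
    """Hypothetical stand-in for a streaming text event."""
    text: str
    partial: bool

events = [
    TextEvent("Hello, ", partial=True),
    TextEvent("I am ", partial=True),
    TextEvent("your agent.", partial=True),
    TextEvent("Hello, I am your agent.", partial=False),  # aggregated text
]

streamed = "".join(e.text for e in events if e.partial)
complete = next(e.text for e in events if not e.partial)
assert streamed == complete  # nothing is lost by rendering only partials
print(streamed)  # Hello, I am your agent.
```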
Other ADK Review Examples
The adk-reviewer agent found a total of 6 issues focusing on ADK API usage and best practices. Here’s a summary of all findings:
Critical Issues (1):
- C1: Deprecated API usage: calling `run_live` with the removed `session` parameter instead of the now-mandatory `session_id`
Warnings (2):
- W1: Missing explanation of session creation requirement — unclear when session creation is necessary vs optional
- W2: Incomplete event handling for audio streaming — not handling complete (non-partial) text events
Suggestions (3):
- S1: Add error handling for WebSocket disconnections — graceful cleanup on unexpected client disconnects
- S2: Document session resumption configuration more clearly — when to use it and when to skip it
- S3: Add information about runner lifecycle management — runners should be created once and reused, not per connection
The agent’s deep knowledge of ADK internals helped identify these issues by examining the actual source code and understanding how ADK encapsulates Gemini Live API and Vertex AI Live API features. This level of analysis would be difficult to achieve without direct access to the SDK implementation.
Key Takeaways
Using Claude Code subagents and agent skills to review my technical writing transformed my workflow and delivered remarkable results:
- Specialized Review Teams: The `docs-reviewer` acted as a professional editor and code reviewer, while the `adk-reviewer` served as an ADK subject matter expert. Together, they found 31 issues (6 from ADK review, 25 from documentation review) that I would have likely missed on my own.
- Source-Level Deep Dive: Agent skills gave the `adk-reviewer` direct access to the adk-python SDK source code, Gemini Live API docs, and Vertex AI Live API docs. This enabled it to catch deprecated API parameters, understand implementation nuances, and verify design intentions that aren’t obvious from documentation alone.
- Interactive Problem Solving: Rather than just accepting automated fixes, I could engage in deep technical discussions with Claude Code. For example, when uncertain about the W2 text streaming issue, I asked it to “Check with the `google-adk` skill” and received a thorough analysis of the ADK source code explaining why the current approach was actually correct.
- Semantic Text Editing: Claude Code excels at understanding the meaning behind content and applying changes semantically. Tasks like “Use #### for function/code example titles” were executed flawlessly across the entire document, something that would be tedious and error-prone to do manually.
Getting Started
Want to try this approach for your technical writing? Here’s how to get started:
- Set up Claude Code: Install Claude Code and configure it for your project
- Integrate with Vertex AI (optional): Use Claude on Vertex AI for streamlined billing
- Create subagents: Define specialized agents in `.claude/agents/` for different review aspects – see the Subagents documentation and example agent definitions
- Configure agent skills: Add relevant documentation and source code as skills in `.claude/skills/` – see the Agent Skills documentation and example skill definitions
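Put together, the resulting project layout looks roughly like this, using the agent and skill names from this article (the exact file contents are up to you):

```shell
# Create the directories and empty definition files.
mkdir -p .claude/agents .claude/skills/google-adk
touch .claude/agents/docs-reviewer.md
touch .claude/agents/adk-reviewer.md
touch .claude/skills/google-adk/SKILL.md
# Show the resulting layout.
find .claude -type f | sort
```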
For more details on ADK development with Claude Code skills, check out my previous article: Supercharge ADK Development with Claude Code Skills.
Conclusion
High-quality technical writing requires multiple reviewers: an editor for consistency, a code reviewer for quality, and a subject matter expert for accuracy. With Claude Code’s subagents and agent skills, I created this entire review team.
The two agents, docs-reviewer and adk-reviewer, found 31 issues I would have missed. They didn’t just identify problems; they referenced actual source code and explained the reasoning behind their recommendations. The workflow was simple: review the report, type “fix C1”, and Claude Code applied the fix with full context.
This approach augments your expertise rather than replacing it. The agents catch what you miss and help maintain consistency across your writing. Since the configurations are version-controlled, you can reuse them for every article.
If you write technical documentation, try this approach. The setup investment pays off every time you review or create content.
Source Credit: https://medium.com/google-cloud/supercharge-tech-writing-with-claude-code-subagents-and-agent-skills-44eb43e5a9b7?source=rss—-e52cf94d98af—4
