An Ideas to Life experiment

🤖 Google ADK Cheat Sheet

Your quick reference guide for building AI agents with Google Agent Development Kit

Getting Started

🚀 5-Minute Quick Start
1. Prerequisites: Python 3.10+ or Node.js 18+
2. Install: pip install google-adk (Python) or npm install @google/adk (TypeScript)
3. Setup API Key: Create .env with your Gemini API key
4. Create Agent: adk create my-agent
5. Run: adk run my-agent (or adk web to launch the interactive dev UI)

That's it! You've built your first AI agent.
What is Google ADK?

Google's Agent Development Kit (ADK) is an open-source, code-first framework for building and deploying AI agents. It's the same toolkit that powers agents inside Google products like Agentspace and Customer Engagement Suite.


Key Features:

  • Model-agnostic: Optimized for Gemini but works with any LLM
  • Multi-language: Python, TypeScript, Java, and Go support
  • Production-ready: Same framework used in Google products
  • Multi-agent orchestration: Build teams of specialized agents
  • A2A protocol support: Enable agent-to-agent communication
Installation - Python
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate

# Install ADK
pip install google-adk

# Create a new agent project
adk create my-agent
cd my-agent

Platform notes: Requires Python 3.10+ (3.10–3.12 recommended)

Installation - TypeScript
# Create project directory
mkdir my-adk-agent && cd my-adk-agent

# Initialize npm project
npm init -y

# Install ADK and dependencies
npm install @google/adk @google/adk-devtools
npm install -D typescript

# Create tsconfig.json
npx tsc --init

Platform notes: Requires Node.js 18 or higher

Environment Setup

Create a .env file in your project root with your API credentials:

# For Google AI Studio
GOOGLE_API_KEY=your_api_key_here

# Or for Vertex AI
GOOGLE_GENAI_USE_VERTEXAI=TRUE
GOOGLE_CLOUD_PROJECT=your_project_id
GOOGLE_CLOUD_LOCATION=us-central1

Get your API key from Google AI Studio or set up Vertex AI credentials.
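ADK reads these variables from the process environment; quickstarts typically load the .env file with the python-dotenv package. If you would rather avoid the dependency, a minimal stdlib loader looks roughly like this (load_env is a hypothetical helper, not part of ADK):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments ignored."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the environment
            os.environ.setdefault(key.strip(), value.strip())
```

Call load_env() once at startup, before constructing any agents.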

Your First Agent

Here's a simple "Hello World" agent in Python:

from google.adk.agents import LlmAgent

# Define the root agent (conventionally in my-agent/agent.py)
root_agent = LlmAgent(
    name="hello_agent",
    model="gemini-2.0-flash-exp",
    instruction="You are a helpful assistant."
)

Run it from the project directory with adk run my-agent, or open the interactive dev UI with adk web.
Common Use Cases
  • Customer support: Build agents that answer questions and resolve issues
  • Data analysis: Create agents that query databases and generate insights
  • Content generation: Develop agents for writing, summarization, and editing
  • Task automation: Orchestrate complex workflows with multiple agents
  • Research assistant: Build agents that search, analyze, and synthesize information

Agent Types

LLM Agents

LLM Agents use Large Language Models as their core engine to understand natural language, reason, plan, and dynamically decide which tools to use.

# Python example
from google.adk.agents import LlmAgent

agent = LlmAgent(
    name="assistant",
    model="gemini-2.0-flash-exp",
    instruction="You are a helpful coding assistant.",
    tools=[search_tool, calculator_tool]
)

Best for: Dynamic decision-making, natural language understanding, complex reasoning tasks

Workflow Agents

Workflow Agents orchestrate tasks using structured patterns for predictable, deterministic pipelines.


  • SequentialAgent: executes sub-agents one after another, in order
  • ParallelAgent: runs multiple sub-agents simultaneously
  • LoopAgent: repeats a sub-agent until a condition is met

Best for: Structured workflows, assembly-line processes, predictable multi-step tasks

Sequential Agent Example
from google.adk.agents import SequentialAgent

# Create a pipeline of agents
pipeline = SequentialAgent(
    name="content_pipeline",
    sub_agents=[
        research_agent,  # First: gather information
        writer_agent,    # Second: write draft
        editor_agent     # Third: edit and polish
    ]
)
Parallel Agent Example
from google.adk.agents import ParallelAgent

# Run multiple analyses at once
analyzer = ParallelAgent(
    name="multi_analyzer",
    sub_agents=[
        sentiment_agent,
        keyword_agent,
        summary_agent
    ]
)
Agent Hierarchy

Agents can be composed into hierarchies where parent agents delegate to sub-agents:

  • Parent agents can have multiple sub-agents
  • Sub-agents can only have one parent
  • ADK will raise an error if you try to add an agent to multiple parents
  • Use this to build modular, reusable agent teams
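The single-parent rule can be illustrated with a framework-agnostic sketch (ToyAgent is a made-up class for illustration, not the ADK API):

```python
class ToyAgent:
    """Toy illustration of the single-parent rule for sub-agents."""

    def __init__(self, name: str):
        self.name = name
        self.parent = None
        self.sub_agents = []

    def add_sub_agent(self, child: "ToyAgent") -> None:
        # Mirrors ADK's behavior: attaching an already-parented agent is an error
        if child.parent is not None:
            raise ValueError(
                f"{child.name} already belongs to {child.parent.name}; "
                "an agent can only have one parent"
            )
        child.parent = self
        self.sub_agents.append(child)
```

If you need the same capability in two places, create two instances of the agent rather than sharing one object.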
Model Selection
  • gemini-2.0-flash-exp: fast responses, most tasks (recommended default)
  • gemini-1.5-pro: complex reasoning, long context
  • gemini-1.5-flash: quick tasks, high throughput

Tools & Functions

What are Tools?

Tools give agents the ability to take actions beyond text generation. They can search the web, execute code, query databases, call APIs, or perform any custom function you define.

Creating Custom Tools
# Python - define a plain function as a tool
from google.adk.agents import LlmAgent

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # Your implementation here
    return f"Weather in {city}: Sunny, 72°F"

# Pass the function to the agent; ADK wraps it as a tool automatically
agent = LlmAgent(
    name="weather_agent",
    model="gemini-2.0-flash-exp",
    tools=[get_weather]
)
💡 Tool Best Practices
Always include clear docstrings and type hints. The LLM uses these to understand when and how to call your tools.
Pre-built Tools

ADK provides several built-in tools:

  • Search tools: Web search capabilities via Google Search
  • Code execution: Run Python code safely in a sandbox
  • Function calling: Structured output generation
  • Agent-as-tool: Use other agents as tools
Using Agents as Tools

One of ADK's most powerful features is using agents as tools for other agents:

from google.adk.agents import LlmAgent
from google.adk.tools.agent_tool import AgentTool

# Create specialized agents
math_agent = LlmAgent(name="math", model="gemini-2.0-flash-exp")
search_agent = LlmAgent(name="search", model="gemini-2.0-flash-exp")

# Main agent wraps the other agents as tools
coordinator = LlmAgent(
    name="coordinator",
    model="gemini-2.0-flash-exp",
    tools=[AgentTool(agent=math_agent), AgentTool(agent=search_agent)]
)
Tool Execution Modes
  • Auto: the agent decides when to use tools (default)
  • Manual: you control tool execution explicitly
  • Required: force the agent to use a specific tool
Integrating External APIs
# Example: create a database query tool as a plain function
import requests

def query_database(sql: str) -> dict:
    """Execute a SQL query against the database."""
    # Make an API request to your database service
    response = requests.post(
        "https://api.example.com/query",
        json={"query": sql}
    )
    return response.json()

Multi-Agent Design Patterns

8 Essential Patterns

ADK supports 8 proven design patterns for building multi-agent systems:

  • Sequential Pipeline: linear workflows where output flows from one agent to the next
  • Parallel Execution: independent tasks that can run simultaneously
  • Generator-Critic: one agent creates, another validates (e.g., write then review)
  • Iterative Refinement: repeatedly improve output until a quality threshold is met
  • Dynamic Routing: route tasks to appropriate specialist agents based on content
  • Hierarchical: a manager agent delegates to worker agents
  • Collaborative: multiple agents contribute expertise to solve complex problems
  • Human-in-the-loop: the agent seeks human approval before critical actions
Generator-Critic Pattern
from google.adk.agents import SequentialAgent, LlmAgent

# Generator creates content
generator = LlmAgent(
    name="writer",
    model="gemini-2.0-flash-exp",
    instruction="You are a creative writer."
)

# Critic reviews and provides feedback
critic = LlmAgent(
    name="editor",
    model="gemini-2.0-flash-exp",
    instruction="You are a critical editor. Review for clarity and accuracy."
)

# Combine in sequence
pipeline = SequentialAgent(
    name="content_creation",
    sub_agents=[generator, critic]
)
Dynamic Routing Pattern
from google.adk.agents import LlmAgent

# Create specialist agents
technical_agent = LlmAgent(name="tech_support", ...)
billing_agent = LlmAgent(name="billing", ...)
general_agent = LlmAgent(name="general", ...)

# Router agent delegates to the specialists
router = LlmAgent(
    name="customer_service",
    model="gemini-2.0-flash-exp",
    instruction="Route customer inquiries to the right specialist.",
    sub_agents=[technical_agent, billing_agent, general_agent]
)

The router agent intelligently selects which specialist to use based on the customer's question.
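For intuition, the decision the routing LLM makes resembles a text classifier. A toy, deterministic approximation (route is a hypothetical helper, not ADK code; the keyword lists are illustrative):

```python
def route(inquiry: str) -> str:
    """Toy keyword router approximating the coordinator's routing decision."""
    text = inquiry.lower()
    # Technical issues go to tech support
    if any(word in text for word in ("error", "crash", "bug", "install")):
        return "tech_support"
    # Money-related questions go to billing
    if any(word in text for word in ("invoice", "refund", "charge", "payment")):
        return "billing"
    # Everything else falls through to the general agent
    return "general"
```

The LLM version handles paraphrases and ambiguity that keyword matching cannot, which is why the pattern uses an agent as the router.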

Iterative Refinement
from google.adk.agents import LoopAgent

# Refine output until a sub-agent escalates or the limit is reached
refiner = LoopAgent(
    name="iterative_refiner",
    sub_agents=[quality_checker_agent],
    max_iterations=5  # Safety limit
)

Set max_iterations to prevent infinite loops. To exit early, have a sub-agent or tool escalate (for example, by setting tool_context.actions.escalate = True).
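The loop-with-safety-limit behavior can be sketched in plain Python (refine, improve, and good_enough are hypothetical stand-ins for your agents, not ADK APIs):

```python
from typing import Callable

def refine(draft: str,
           improve: Callable[[str], str],
           good_enough: Callable[[str], bool],
           max_iterations: int = 5) -> str:
    """Generic refinement loop: improve until good_enough or the safety limit."""
    for _ in range(max_iterations):
        if good_enough(draft):
            break  # early exit, like escalating out of a LoopAgent
        draft = improve(draft)
    return draft
```

The quality check runs before each improvement pass, so an already-good draft is returned untouched.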

Agent2Agent (A2A) Protocol

What is A2A?

Agent2Agent (A2A) is an open, vendor-neutral protocol developed by Google that enables AI agents to discover, communicate, and collaborate across different platforms and frameworks.


Key Benefits:

  • Universal interoperability: Agents work together regardless of technology
  • Capability discovery: Agents advertise what they can do via Agent Cards
  • Enterprise security: Built-in authentication and authorization
  • Standardized: Open specification, not vendor lock-in
Enabling A2A in Your Agent
# Python - expose your agent over A2A
from google.adk.a2a.utils.agent_to_a2a import to_a2a

a2a_app = to_a2a(root_agent, port=8001)

# Serve a2a_app with an ASGI server such as uvicorn. It exposes:
# - an A2A execution endpoint
# - GET /.well-known/agent.json for the agent card
Agent Cards

Agent Cards are JSON documents that describe an agent's capabilities, allowing other agents to discover and use them.

{
  "name": "Weather Agent",
  "description": "Provides weather forecasts",
  "capabilities": ["weather_lookup", "forecast"],
  "version": "1.0.0",
  "authentication": {
    "type": "api_key",
    "required": true
  }
}
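Because agent cards are plain JSON, capability discovery is a simple parse-and-check. A minimal sketch using only the standard library (supports is a hypothetical helper, not part of the A2A SDK):

```python
import json

def supports(card_json: str, capability: str) -> bool:
    """Return True if an agent card advertises the given capability."""
    card = json.loads(card_json)
    # Missing or empty "capabilities" means the agent advertises nothing
    return capability in card.get("capabilities", [])
```

In practice you would fetch the card from the agent's /.well-known/agent.json endpoint before checking it.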
Calling Remote A2A Agents
# Connect to a remote A2A agent
from google.adk.agents import LlmAgent
from google.adk.agents.remote_a2a_agent import RemoteA2aAgent

remote_agent = RemoteA2aAgent(
    name="remote_helper",
    agent_card="https://api.example.com/agent/.well-known/agent.json"
)

# Use it like any other sub-agent
local_agent = LlmAgent(
    name="coordinator",
    model="gemini-2.0-flash-exp",
    sub_agents=[remote_agent]
)
A2A Security

A2A incorporates enterprise-grade security standards:

  • HTTPS/TLS: All communication encrypted
  • JWT tokens: Secure authentication
  • OIDC: OpenID Connect support
  • API keys: Simple key-based auth
  • OAuth 2.0: Delegated authorization
A2A Protocol Version

Current version: 0.2

Latest updates include:

  • Support for stateless interactions
  • Standardized authentication mechanisms
  • Improved error handling
  • Enhanced capability discovery

Tips & Best Practices

💡 Start with Single Agents
Before building multi-agent systems, perfect your individual agents. Clear instructions and well-defined tools are the foundation of successful agent systems.
💡 Build Teams of Specialists
Instead of one "do everything" agent, create focused specialist agents with clear instructions for a single job. Then orchestrate them with a coordinator agent.
# Good: Focused specialists
research_agent = LlmAgent(instruction="You only research topics")
writing_agent = LlmAgent(instruction="You only write content")

# Avoid: One agent trying to do everything
💡 Use Clear Instructions
The instruction (system prompt) is crucial. Be specific about:
  • The agent's role and expertise
  • What the agent should and shouldn't do
  • Output format expectations
  • Any limitations or disclaimers
💡 Safety & Trust
# Include safety reminders in instructions
instruction = """You are a financial advisor assistant.
IMPORTANT: You provide information, not financial advice.
Always remind users to consult with professionals.
If uncertain, say so clearly."""
💡 Monitor and Log
Always log agent interactions for debugging and monitoring:
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

response = agent.run(prompt)
logger.info(f"Agent response: {response}")
💡 Handle Errors Gracefully
try:
    response = agent.run(prompt)
except Exception as e:
    # Provide fallback behavior
    response = "I'm having trouble processing that. Please try again."
    logger.error(f"Agent error: {e}")
💡 Test with Edge Cases
Test your agents with:
  • Empty or malformed inputs
  • Very long inputs (context limits)
  • Ambiguous requests
  • Requests outside agent expertise
  • Concurrent requests (if applicable)
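Empty and oversized inputs are cheap to guard against before the model is ever called. A hedged sketch (guarded_call and handler are hypothetical helpers, not ADK APIs; the 8000-character limit is illustrative):

```python
from typing import Callable

def guarded_call(handler: Callable[[str], str],
                 prompt: str,
                 max_chars: int = 8000) -> str:
    """Validate input before passing it to an agent call."""
    # Empty or whitespace-only input: don't waste a model call
    if not prompt or not prompt.strip():
        return "Please provide a non-empty question."
    # Oversized input: likely to blow the context window
    if len(prompt) > max_chars:
        return "That input is too long; please shorten it."
    return handler(prompt)
```

The same wrapper is a convenient place to add logging and error handling around the real agent call.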
💡 Use Version Control
Track your agent configurations, instructions, and tool definitions in version control. Treat agent prompts like code: they need versioning and review.
💡 Optimize for Cost
  • Use gemini-1.5-flash for simple tasks
  • Cache frequently used context
  • Keep prompts concise
  • Use workflow agents for deterministic steps (no LLM needed)
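The first tip, choosing the model by task weight, can be sketched as a simple heuristic (pick_model is a hypothetical helper; the length threshold is illustrative):

```python
def pick_model(prompt: str, needs_reasoning: bool = False) -> str:
    """Toy cost heuristic: default to the cheap flash model,
    escalate to pro only for heavy or long-context tasks."""
    if needs_reasoning or len(prompt) > 4000:
        return "gemini-1.5-pro"
    return "gemini-1.5-flash"
```

You can then pass the result as the model argument when constructing the agent for that request.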
💡 Deploy with Agent Engine
Use Google's Agent Engine to deploy your ADK agents to production:
# Deploy to Agent Engine
adk deploy agent_engine --project your-project-id --region us-central1 my-agent
Agent Engine provides hosting, scaling, monitoring, and A2A endpoints automatically.
Common Gotchas
  • Agent hierarchy: An agent can only have one parent. Trying to reuse sub-agents in multiple parents will raise an error.
  • Context limits: Even with large context models, monitor token usage. Long conversations can exceed limits.
  • Tool descriptions: LLMs rely on tool docstrings to understand when to call them. Vague descriptions lead to wrong tool selection.
  • Infinite loops: Always set max_iterations on LoopAgent to prevent runaway agents.
  • Stateless by default: Agents don't persist memory between runs unless you explicitly implement state management.
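The last gotcha, statelessness, is commonly handled by keying conversation state on a session ID. A minimal in-memory sketch (SessionStore is illustrative, not an ADK class; ADK also provides session services for production use):

```python
class SessionStore:
    """Minimal in-memory conversation state, since agents are stateless by default."""

    def __init__(self):
        self._sessions = {}

    def history(self, session_id: str) -> list:
        # Creates an empty history the first time a session is seen
        return self._sessions.setdefault(session_id, [])

    def record(self, session_id: str, role: str, text: str) -> None:
        self.history(session_id).append({"role": role, "text": text})
```

On each turn, prepend the recorded history to the prompt (or feed it through your framework's session mechanism) so the agent can see earlier exchanges.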
Getting Help