
Create Unit Tests
The next task in my TODO list is to create unit tests. I’ll let Gemini do this. It goes ahead and creates test_rickbot_agent.py in our src/tests/unit folder:
"""Unit tests for the Rickbot agent.This file contains unit tests to verify the functionality and behavior of the
Rickbot agent.
"""
import pytest
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types as genai_types
from src.rickbot_agent.agent import root_agent
APP_NAME = "test_rickbot"
@pytest.mark.asyncio
async def test_rickbot_agent_response():
"""Tests that the rickbot agent returns a non-empty response."""
session_service = InMemorySessionService()
await session_service.create_session(
app_name=APP_NAME, user_id="test_user", session_id="test_session"
)
runner = Runner(
agent=root_agent, app_name=APP_NAME, session_service=session_service
)
query = "What is the meaning of life?"
response_text = ""
async for event in runner.run_async(
user_id="test_user",
session_id="test_session",
new_message=genai_types.Content(
role="user", parts=[genai_types.Part.from_text(text=query)]
),
):
if event.is_final_response():
if event.content and event.content.parts and len(event.content.parts) > 0:
response_text = event.content.parts[0].text
print(f"Response: {response_text}")
else:
response_text = ""
break
assert isinstance(response_text, str)
assert len(response_text) > 0
We have a simple test_rickbot_agent_response() test. At the heart of the test is the ADK Runner. This is the central execution engine of an ADK application. It connects the agent to the session, allows us to issue requests, and then provides the responses as a stream of events.
Our test script begins by initializing the Runner with two key components:
- root_agent: This is our actual Rickbot agent, imported directly from src.rickbot_agent.agent. The Runner wraps our agent to manage its execution.
- InMemorySessionService: An agent needs short-term memory to handle conversations. The ADK provides different session services for this. For our simple application we use InMemorySessionService, a lightweight and self-contained service that stores the conversation’s history and state in memory. This is perfect for automated tests as it requires no external database or setup and is cleared after each test run. For production, it could be swapped for a persistent service, as sketched below.
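For example, if the agent eventually needs sessions that survive a restart, the ADK offers persistent alternatives. Here is a minimal sketch; the class name and its db_url parameter are based on my reading of the ADK docs, so treat them as assumptions and check the version you have installed:
# Hypothetical swap-in for InMemorySessionService, backed by a local SQLite file.
# Assumption: DatabaseSessionService and its db_url parameter exist in your ADK version.
from google.adk.sessions import DatabaseSessionService

session_service = DatabaseSessionService(db_url="sqlite:///./rickbot_sessions.db")

# The rest of the code stays the same: the Runner only depends on the
# session service interface, not on the concrete class behind it.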
You’ll notice the test is written using async and await. This is fundamental because agent calls to the Gemini model need to be asynchronous. If they were not, our client application would appear to freeze whilst we wait for a response. Any code that uses the ADK Runner must be asynchronous, which is why our test function is declared async.
The async for loop allows us to listen for the stream of events from the Gemini model. When the final response is received we can extract the final message intended for the user. We then check that this response is not empty.
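This event-collection loop gets repeated in later tests, so it could be factored into a small helper. Here’s a minimal sketch of my own (not part of the generated tests), using only the runner and event attributes we’ve already seen:
from google.adk.runners import Runner
from google.genai import types as genai_types


async def get_final_response(runner: Runner, user_id: str, session_id: str, query: str) -> str:
    """Send a single query through the runner and return the final response text."""
    message = genai_types.Content(
        role="user", parts=[genai_types.Part.from_text(text=query)]
    )
    response_text = ""
    async for event in runner.run_async(
        user_id=user_id, session_id=session_id, new_message=message
    ):
        if event.is_final_response():
            if event.content and event.content.parts:
                response_text = event.content.parts[0].text or ""
            break
    return response_text
Each test then reduces to awaiting get_final_response(...) for a query and making its assertions on the returned string.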
Testing a Multi-Turn Conversation
Since multi-turn conversations are going to be handled differently with the ADK (compared to the original Rickbot implementation), I definitely want to include this in my unit testing. So I went ahead and added an extra test:
@pytest.mark.asyncio
async def test_rickbot_agent_two_turn_conversation():
"""
Tests the agent's ability to maintain context across a two-turn conversation.It verifies that the agent can remember information from the first turn
(the user's name) and use it in the second turn to answer a question.
"""
session_service = InMemorySessionService()
await session_service.create_session(
app_name=APP_NAME, user_id="test_user", session_id="test_session"
)
runner = Runner(
agent=root_agent, app_name=APP_NAME, session_service=session_service
)
queries = ["Hello, my name is Dazbo", "What is my name?"]
responses = []
for query in queries:
print(f"Query: {query}")
response_text = ""
async for event in runner.run_async(
user_id="test_user",
session_id="test_session",
new_message=genai_types.Content(
role="user", parts=[genai_types.Part.from_text(text=query)]
),
):
if event.is_final_response():
if (
event.content
and event.content.parts
and len(event.content.parts) > 0
):
response_text = event.content.parts[0].text
print(f"Response: {response_text}")
else:
response_text = ""
break
responses.append(response_text)
assert len(responses) == 2
assert isinstance(responses[0], str)
assert len(responses[0]) > 0
assert isinstance(responses[1], str)
assert "Dazbo" in responses[1]
The second test is very similar to the first. We create the session_service and Runner in exactly the same way. But this time we send two queries to the agent sequentially, in the same session:
- “Hello, my name is Dazbo.”
- “What is my name?”
Then the test asserts that we received two responses, and that the second response includes the word “Dazbo”. If the test passes, then the session memory (using the InMemorySessionService) is working.
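If I wanted to verify the stored conversation directly, rather than inferring it from the agent’s answer, I could also inspect the session itself. This is a rough sketch that assumes the session service’s get_session method and the session’s events attribute behave as I expect; check the ADK docs for your version:
# Hypothetical extra check: inspect the stored session history directly.
# Assumption: get_session() and session.events exist with these shapes in your ADK version.
session = await session_service.get_session(
    app_name=APP_NAME, user_id="test_user", session_id="test_session"
)
assert session is not None
# Two user turns plus two agent turns should leave several events in the history.
assert len(session.events) >= 4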
Note that our makefile already contained a target for running tests (created by the Agent-Starter-Pack), which I’ve already refactored for my new folder structure:
# Run unit and integration tests
test:
uv run pytest src/tests/unit && uv run pytest src/tests/integration
Now I can easily run the test from my terminal like this:
make test
But instead, let’s just ask Gemini CLI to run it for us:
I allow Gemini to run my tests. And a few seconds later…
One cool thing about using Gemini CLI for your workflows is that it will typically offer to run tests if it makes changes to code that is tested.
Neat!
If we want the verbose output from the test, we can do this:
uv run pytest -v -s src/tests/unit/test_rickbot_agent.py
And the output looks like this:
UI Considerations
We have a few things to consider here…
Which UI?
The original Rickbot uses Streamlit. The easiest option will be to port this UI and re-use it. It looks nice, and is pure Python. But the UI is very limited from a look-and-feel customisation perspective. A React frontend will allow me much more freedom in how the UI looks.
UI Integration
With Streamlit I can run the Rickbot Agent using the Runner directly. The advantage here is that the UI and agent all run in the same Python process.
Alternatively, I can expose the Rickbot Agent as a web API, and call this from the UI. This approach can also be used with Streamlit, and is required if I create a React UI.
UI Hosting
Lastly, where shall I host the UI?
I can choose to host the UI in the same container as rickbot_agent
, which means I can deploy the UI and agent as a combined application to Cloud Run.
Alternatively I can decouple the agent and UI. I.e. deploy the agent in one container, and deploy the UI in another. They will then be deployed to two separate Cloud Run services.
My Selected Approach
I will get the most flexibility by exposing my agent with a web API, and deploying my UI in a separate container / Cloud Run service. This will be the target state.
But initially I will start by simply porting the existing Streamlit UI and running it in the same container. That’s what we’ll do next.
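For the eventual target state, the ADK can expose an agent as a FastAPI application. The sketch below is only indicative: get_fast_api_app is the helper I expect to use, but its exact parameters vary between ADK versions, so treat the argument names as assumptions.
# Indicative sketch of the future decoupled deployment (not implemented yet).
# Assumption: get_fast_api_app and these parameter names exist in your ADK version.
import uvicorn
from google.adk.cli.fast_api import get_fast_api_app

# Serve the agents found under the given directory as a REST API (plus the ADK dev UI).
app = get_fast_api_app(agents_dir="src", web=True)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)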
Implementing the Streamlit UI
Simply asking Gemini to follow the To-Do step “Migrate rickbot Streamlit UI as streamlit_fe” didn’t work well. It created a streamlit_fe folder and a basic app.py page that resembled the original Rickbot UI. But it skipped a whole bunch of logic. So I undid these changes and then gave Gemini some critical requirements as part of the Streamlit step:
Before implementing, check:
– That we can change the personality — the system prompt — of the model dynamically
– That we have kept the Streamlit authentication flow which uses “create_auth_secrets.py”
– That we can still upload files to the model as part of the prompt
– That we also migrate the privacy policy page
– That the multi-turn conversation is no longer stored in Streamlit — this is now in the agent.
Specific prompts FTW, right?
Gemini CLI responds:
1. Dynamic Personality Change
* Current Status: The agent is not currently set up to dynamically change personalities. The rickbot_agent/agent.py file loads
the "Rick" personality and its system prompt only once when the application starts.
* How to Fix: We need to modify the agent to use a callback. This callback (before_model_callback) will intercept the request,
read the personality name from the session state (which the UI will provide), load the corresponding personality's system
prompt, and inject it into the instructions sent to the model for that specific turn.
* Verdict: This requires a change in the agent's code, but it is a standard ADK pattern and is straightforward to implement.
2. Streamlit Authentication Flow
* Current Status: You are correct, I did not include the authentication flow from the original app in the initial migration plan.
My goal was to first establish the core chat functionality.
* How to Fix: We can add this back. It involves porting the create_auth_secrets.py logic and wrapping the main UI in the if not
st.user.is_logged_in: block, just like the original app.
* Verdict: This was omitted but can be added back.
3. File Uploads
* Current Status: Similar to authentication, I did not include the file upload logic in the proposed app.py.
* How to Fix: This can be re-implemented. We would add the st.file_uploader back to the UI. Then, when calling the
runner.run_async, we would construct the new_message object with multiple parts: one for the text prompt and another for the
file's content (e.g., Part.from_data(data=bytes, mime_type=...)).
* Verdict: This was omitted but is fully supported by ADK and can be added.
4. Privacy Policy Page
* Current Status: I did not migrate the privacy policy page.
* How to Fix: This is a simple fix. I can create the src/streamlit_fe/pages/ directory and write the content of the original
privacy_policy.py file into it. Streamlit will then automatically create the page.
* Verdict: Easy to add.
5. Multi-turn Conversation Storage
* Current Status: Your understanding is correct. This is one of the key benefits of the new architecture.
* Explanation: The authoritative history of the conversation is now managed entirely by the ADK's SessionService within the agent.
The Streamlit UI no longer needs to pass the full conversation history back and forth. It only needs to keep the list of
messages for the purpose of displaying the chat on the screen. This is a major improvement in decoupling the frontend from the
backend logic.
* Verdict: This is already handled correctly in the proposed architecture.
Summary & Recommendation
You've raised excellent points. My initial proposal was a "slim" migration to get the core working, but it omitted key features.
I recommend we proceed as follows:
1. I will first create a new app.py that includes the authentication flow, file uploader, and privacy page.
2. Then, I will guide you through the necessary changes in rickbot_agent/agent.py to enable dynamic personalities using a
callback.
Does this revised plan sound good to you?
Okay, sounds good. I go ahead and approve the plan and Gemini gets to work.
It did an okay job of creating the Streamlit UI code. But it totally failed to create working code that uses ADK callbacks to modify the agent when the Rickbot personality is changed. In the end I made some manual changes to my agent.py, and opted to pre-load the various personalities as separate agent instances and cache them. After all, there’s not many!
"""
This module defines the main agent for the Rickbot-ADK application.
It initializes a set of Google Gemini-powered agent, each loaded from a specific personality.
We then cache these agents for fast retrieval.
"""from google.adk.agents import Agent
from google.adk.tools import google_search # built-in Google Search tool
from google.genai.types import GenerateContentConfig
from .config import get_config, logger
from .personality import Personality, personalities
config = get_config()
def create_agent(personality: Personality) -> Agent:
"""Creates and returns an agent with the given personality."""
logger.debug(f"Creating agent for personality: {personality.name}")
# Load configuration
return Agent(
name=f"{config.agent_name}_{personality.name}", # Make agent name unique
description=f"A chatbot with the personality of {personality.menu_name}",
model=config.model,
instruction=personality.system_instruction,
tools=[google_search],
generate_content_config=GenerateContentConfig(
temperature=personality.temperature, top_p=1, max_output_tokens=8192
),
)
# Pre-load and cache all agents at startup
_agent_cache: dict[str, Agent] = {
name: create_agent(p) for name, p in personalities.items()
}
logger.info(f"Pre-loaded and cached {len(_agent_cache)} agents.")
def get_agent(personality_name: str) -> Agent:
"""Retrieves a pre-configured agent from the cache."""
agent = _agent_cache.get(personality_name)
if not agent:
logger.error(f"Agent for personality '{personality_name}' not found in cache.")
# Fallback to Rick or raise an error
rick_agent = _agent_cache.get("Rick")
if not rick_agent:
raise ValueError("Default 'Rick' personality agent not found.")
return rick_agent
return agent
# For backwards compatibility or direct access if needed, though get_agent is preferred.
root_agent = get_agent("Rick")
Gemini created a new streamlit_fe folder that looks like this:
src/streamlit_fe
├── pages
│ └── privacy_policy.py
├── __init__.py
├── app.py
├── chat.py
├── create_auth_secrets.py
├── st_config.py
└── st_utils.py
Most of these files are basically identical to the original Rickbot Streamlit implementation. But there were some issues in what had been created. For example:
- Although Gemini replicated the function create_secrets_toml() — which is used to pull the Rickbot OAuth credentials into the Streamlit client — it failed to actually replicate the code that checks if the user is authenticated or not. So I’ve manually recreated this in app.py. If you’re interested in how this works, check out my article: Adding Authentication and Authorisation to our Rickbot Streamlit Chatbot with OAuth and the Google Auth Platform.
- The rate limiting code had been omitted. I want this as a basic measure to prevent too many requests going to Gemini and costing me money! So I’ve recreated this by implementing a dedicated RateLimiter class in my st_utils.py (see the sketch after this list).
- In chat.py Gemini had not replicated my code that shows any uploaded image or video in the chat UI itself. So I’ve re-implemented this.
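The RateLimiter interface used by the front end is small: it is constructed with a queries-per-minute limit and exposes a hit() method that returns False once the limit is exceeded. Here is a minimal sketch of how such a class could look; the actual code in st_utils.py may well differ:
# Hypothetical sketch of the RateLimiter in st_utils.py; the real implementation may differ.
import time
from collections import deque


class RateLimiter:
    """Sliding-window limiter: allow at most `qpm` hits per key in any 60-second window."""

    def __init__(self, qpm: int):
        self.qpm = qpm
        self._hits: dict[str, deque[float]] = {}

    def hit(self, key: str) -> bool:
        """Record a hit for `key`. Returns False if the per-minute quota is already used up."""
        now = time.monotonic()
        window = self._hits.setdefault(key, deque())
        while window and now - window[0] > 60:  # drop hits older than a minute
            window.popleft()
        if len(window) >= self.qpm:
            return False
        window.append(now)
        return True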
To support these changes, I’ve added some additional entries to my project’s .env file:
# Client UI
export APP_NAME="rickbot_st_ui"
export AUTH_REQUIRED="False"
export RATE_LIMIT="60"
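For context, st_config.py just needs to surface these values (plus the existing Google project settings) as the config object used by app.py. A rough sketch, assuming a plain dataclass over os.environ; the attribute names are inferred from how config is used in app.py, and the GOOGLE_CLOUD_PROJECT variable name is an assumption:
# Hypothetical sketch of st_config.py; attribute names inferred from their usage in app.py.
import logging
import os
from dataclasses import dataclass

logger = logging.getLogger("rickbot_st_ui")


@dataclass(frozen=True)
class StConfig:
    app_name: str = os.getenv("APP_NAME", "rickbot_st_ui")
    auth_required: bool = os.getenv("AUTH_REQUIRED", "False").lower() == "true"
    rate_limit_qpm: int = int(os.getenv("RATE_LIMIT", "60"))
    google_project_id: str = os.getenv("GOOGLE_CLOUD_PROJECT", "")  # assumed variable name


config = StConfig()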
Fixing Session and User Handling
When I tell Gemini CLI to carry on with the “session handling” to-do step, it responds with:
This sounds like a good plan. So I let Gemini go ahead and the resulting code looks good.
Looking For More Differences and Regressions
My next prompt:
Please look carefully at the Streamlit code from the original Rickbot repo. Then compare to the ‘streamlit_fe’ implementation in this project. Highlight any areas where user-facing functionality may be missing or different.
Here’s the output:
So far, so good. Then:
Oh, good catch on the “attachments in history” regression. We’ll fix that in a second.
Here’s the final result of the comparison from Gemini CLI:
This is a pretty good analysis. I’m happy that the only thing left to fix is the “attachments in chat history” regression. So I ask:
Can you help me fix this attachment display regression?
Gemini replaces the missing block of code that shows file attachments in the history. And then it closes the To Do entry for me, without needing any prompting to do so. Nice!
Our streamlit_fe/app.py now looks like this:
"""
This is the main entry point for the Rickbot Streamlit application.
"""import asyncio
import uuid
from pathlib import Path
import streamlit as st
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from streamlit.errors import StreamlitAPIException
from streamlit.runtime.scriptrunner import get_script_run_ctx
from rickbot_agent.agent import get_agent # Import the agent getter
from rickbot_agent.personality import Personality, get_avatar, personalities
from streamlit_fe.chat import render_chat
from streamlit_fe.create_auth_secrets import create_secrets_toml
from streamlit_fe.st_config import config, logger
from streamlit_fe.st_utils import RateLimiter
# Define the root path of the project
ROOT_DIR = Path(__file__).parent.parent
DEFAULT_PERSONALITY = "Rick"
RICKBOT_AVATAR = get_avatar("rickbot-trans")
async def initialize_adk_runner(personality: Personality):
"""Initialise the ADK runner with the correct agent personality."""
rickbot_agent = get_agent(personality.name)
session_service = InMemorySessionService()
await session_service.create_session(
app_name=config.app_name,
user_id=st.session_state.user_id,
session_id=st.session_state.session_id,
)
return Runner(
agent=rickbot_agent,
app_name=config.app_name,
session_service=session_service
)
@st.cache_resource # Ensure this rate limiter is shared across all user sessions and reruns
def initialize_rate_limiter():
""" Initialize the rate limiter. """
return RateLimiter(config.rate_limit_qpm)
def main():
""" Main function to run the Streamlit application. """
# --- Page Configuration ---
st.set_page_config(
page_title="Rickbot",
page_icon=RICKBOT_AVATAR, # Rickbot logo
layout="wide",
initial_sidebar_state="expanded",
)
if "current_personality" not in st.session_state:
st.session_state.current_personality = personalities[DEFAULT_PERSONALITY]
# --- Authentication Check ---
if config.auth_required:
try:
create_secrets_toml(config.google_project_id) # Do once and cache
except ValueError as e:
logger.error(f"Failed to setup auth: {e}", exc_info=True)
st.error(f"⚠️ Could not initialize the application. Please check your configuration. Error: {e}" )
st.stop()
# If the user isn't logged in, show the unauthenticated welcome screen
if not st.user.is_logged_in:
header_col1, header_col2 = st.columns([0.3, 0.7])
header_col1.image(RICKBOT_AVATAR, width=140)
header_col2.title(f"{st.session_state.current_personality.title}")
st.divider()
st.markdown(
"Rickbot is a chat application. Chat with Rick, ask your questions, and feel free to upload content as part of your discussion. Rickbot also offers multiple other personalities to interact with."
)
st.markdown(
":eyes: We do not store any user data, prompts or responses. Read our [Privacy Policy](/privacy_policy)."
)
st.divider()
st.markdown(
":lock: Please login to use Rickbot. Any Google account will do. Login helps us prevent abuse and maintain a stable, accessible experience for everyone."
)
if st.button("Log in with Google", use_container_width=True):
st.login()
else: # We are authenticated
authenticated_flow()
else: # No authentication required - go straight to authenticated page
authenticated_flow()
def authenticated_flow():
try:
# --- Rate Limiting ---
rate_limiter = initialize_rate_limiter()
# --- Session State Initialization ---
if "user_id" not in st.session_state:
if config.auth_required and st.user.is_logged_in:
st.session_state.user_id = st.user.email
else:
# No user logged in, so use the Streamlit unique session ID as a stand-in for user_id
ctx = get_script_run_ctx()
st.session_state.user_id = ctx.session_id if ctx else "anonymous_user"
logger.debug(f"User ID: {st.session_state.user_id}")
if "session_id" not in st.session_state:
st.session_state.session_id = str(uuid.uuid4())
logger.debug(f"Session ID: {st.session_state.session_id}")
# Re-initialize ADK runner if personality changes or not yet initialized
if (
"adk_runner" not in st.session_state
or st.session_state.get("last_personality") != st.session_state.current_personality
):
st.session_state.adk_runner = asyncio.run(
initialize_adk_runner(st.session_state.current_personality)
)
st.session_state.last_personality = (st.session_state.current_personality)
# --- Render Chat Interface ---
render_chat(
config,
rate_limiter,
st.session_state.adk_runner
)
except (StreamlitAPIException, KeyError, ValueError, TypeError, RuntimeError) as e:
st.error(f"An unexpected error occurred: {e}")
logger.error(f"Application error: {e}", exc_info=True)
if __name__ == "__main__":
main()
What’s going on here?
- This app.py is the entry point for the Streamlit application.
- The initialize_adk_runner() function is, er, responsible for initialising our ADK Runner. When doing so, the application retrieves the appropriate cached personality agent by calling rickbot_agent.agent.get_agent(personality.name). It then passes this agent in when creating the google.adk.runners.Runner, which is ultimately returned. Note that this function is async because it needs to interact with the ADK’s SessionService, which is inherently asynchronous.
- As with the original Rickbot Streamlit UI, we check if OAuth-based user authentication is required (which is configured using an environment variable), and if so, check whether the current user is logged in. If not, we execute login. If so, we execute the authenticated_flow() function.
- In authenticated_flow() we bridge Streamlit’s session handling with ADK session management. We asynchronously run the initialize_adk_runner() function using asyncio.run() to obtain the ADK runner.
- Finally, we pass this runner into our streamlit_fe.chat.render_chat() function.
Our streamlit_fe/chat.py looks like this:
"""Streamlit chat interface for Rickbot-ADK.This module provides the Streamlit-based chat interface for the Rickbot-ADK application.
It handles user input, displays chat messages, manages personality selection,
and interacts with the Agent Development Kit (ADK) runner to generate bot responses.
"""
import asyncio
from pathlib import Path
from typing import Any
import streamlit as st
from google.adk.runners import Runner
from google.genai.types import Blob, Content, Part
from rickbot_agent.personality import personalities
from streamlit_fe.st_utils import RateLimiter
# Define the root path of the project
ROOT_DIR = Path(__file__).parent.parent
USER_AVATAR = str(ROOT_DIR / "rickbot_agent/media/morty.png")
async def get_agent_response(runner: Runner, prompt: str, uploaded_file: Any, rate_limiter: RateLimiter):
"""
Handles user input and generates the bot's response using the Rickbot ADK agent.
"""
# --- Rate Limiting Check ---
# Perform this check *before* modifying session state or displaying the user's prompt
if not rate_limiter.hit("rickbot"):
st.warning(
"Whoa, slow down there! Give me a minute."
)
st.stop() # Stop execution to prevent the message from being processed
# Create the user message object, including any attachments
user_message: dict[str, Any] = {"role": "user", "content": prompt}
if uploaded_file:
user_message["attachment"] = {
"data": uploaded_file.getvalue(),
"mime_type": uploaded_file.type or "",
}
st.session_state.messages.append(user_message)
# Display user message and attachment in the chat
with st.chat_message("user", avatar=USER_AVATAR):
if uploaded_file:
mime_type = uploaded_file.type or ""
if "image" in mime_type:
st.image(uploaded_file.getvalue())
elif "video" in mime_type:
st.video(uploaded_file.getvalue())
st.markdown(prompt)
# Prepare the message for the ADK
message_parts = [Part(text=prompt)]
if uploaded_file:
message_parts.append(
Part(
inline_data=Blob(data=uploaded_file.getvalue(), mime_type=uploaded_file.type)
)
)
new_message = Content(role="user", parts=message_parts)
# Generate and display the agent's response
with st.status("Thinking...", expanded=True) as bot_status:
with st.chat_message("assistant", avatar=st.session_state.current_personality.avatar):
response_placeholder = st.empty() # empty invisible container for retrieving streamed content
full_response = ""
# Call the agent runner
async for event in runner.run_async(
user_id=st.session_state.user_id,
session_id=st.session_state.session_id,
new_message=new_message
):
if event.is_final_response() and event.content and event.content.parts:
for part in event.content.parts: # retrieve the response in parts
if part.text:
full_response += part.text
# Visual trick!
# Add the block element - a simulated cursor - whilst the agent is still streaming the response
response_placeholder.markdown(full_response + "▌")
bot_status.update(label="Done.", state="complete")
response_placeholder.markdown(full_response)
# Add the full bot response to the session state
st.session_state.messages.append({"role": "assistant", "content": full_response})
def render_chat(config, rate_limiter: RateLimiter, adk_runner: Runner):
"""
Renders the main chat interface, including sidebar and chat history.
"""
st.session_state.current_personality = st.session_state.current_personality
# --- Session State Initialization ---
if "messages" not in st.session_state:
st.session_state.messages = []
if "file_just_uploaded" not in st.session_state:
st.session_state.file_just_uploaded = False
def on_file_change():
st.session_state.file_just_uploaded = True
# --- Title and Introduction ---
header_col1, header_col2 = st.columns([0.3, 0.7])
header_col1.image(st.session_state.current_personality.avatar, width=140)
header_col2.title(f"{st.session_state.current_personality.title}")
st.caption(st.session_state.current_personality.welcome)
# --- Sidebar for Configuration ---
with st.sidebar:
if config.auth_required and st.user.is_logged_in:
st.caption(f"Welcome, {st.user.name}")
st.button("Log out", on_click=st.logout)
# --- Personality Selection ---
personality_menu_names = [p.menu_name for p in personalities.values()]
selected_menu_name = st.selectbox(
"Choose your bot personality:",
options=personality_menu_names,
index=personality_menu_names.index(
st.session_state.current_personality.menu_name
),
)
# Find the corresponding personality object based on the selected menu_name
selected_personality = next(
p for p in personalities.values() if p.menu_name == selected_menu_name
)
if selected_personality != st.session_state.current_personality:
st.session_state.current_personality = selected_personality
st.session_state.messages = [] # Reset messages on personality change
st.rerun()
st.info(st.session_state.current_personality.overview)
uploaded_file = st.file_uploader(
"Upload a file.",
type=["png", "jpg", "jpeg", "pdf", "mp3", "mp4", "mov", "webm"],
on_change=on_file_change,
)
if st.button("Clear Chat History", use_container_width=True):
st.session_state.messages = []
st.rerun()
st.info(
"""
### Info
* Created by Dazbo.
* I do not store any user data, prompts or responses. Read our [Privacy Policy](/privacy_policy).
* Check out the [GitHub repo](https://github.com/derailed-dash/rickbot/).
* View the [Rickbot blog post](https://medium.com/google-cloud/creating-a-rick-morty-chatbot-with-google-cloud-and-the-gen-ai-sdk-e8108e83dbee).
"""
)
# Display previous messages from history
for message in st.session_state.messages:
avatar = (
USER_AVATAR
if message["role"] == "user"
else st.session_state.current_personality.avatar
)
with st.chat_message(message["role"], avatar=avatar):
if attachment := message.get("attachment"):
if "image" in attachment.get("mime_type", ""):
st.image(attachment["data"])
elif "video" in attachment.get("mime_type", ""):
st.video(attachment["data"])
st.markdown(message["content"])
# Handle new user input
if prompt := st.chat_input(st.session_state.current_personality.prompt_question):
file_to_process = None
if st.session_state.get("file_just_uploaded"):
file_to_process = uploaded_file
st.session_state.file_just_uploaded = False # Consume the flag
asyncio.run(get_agent_response(adk_runner, prompt, file_to_process, rate_limiter))
A few notes about this:
- The render_chat() function builds the UI itself. It includes the sidebar and main window. The sidebar includes the dropdown for selecting our personality, and a widget for uploading an attachment. The main window shows the chat history, and then prompts for user input.
- In response to user input we asynchronously run get_agent_response(). This is where we actually send requests to the agent, by calling runner.run_async(user_id, session_id, message) from within the get_agent_response() function.
- The agent doesn’t just return a single answer. Instead, for any given prompt the ADK runner can yield a sequence of event types, including: whether the agent decides to call a tool, the results of tool execution, internal state changes, and the agent’s internal “thinking” process or intermediate text. runner.run_async() is an asynchronous generator that yields these events sequentially.
- We use async for event to iterate and process all the events that are returned. For each, the asynchronous iterator waits for the runner to yield an event, processes the event, and then waits for the next event. The sketch after this list shows a simple way to see all of these events during debugging.
- The is_final_response() method is used to identify the final user-facing message. This is the one we want to return back to the user in the UI.
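If you want to see everything the runner emits, not just the final response, a small debugging helper like this can be handy. It is a sketch of my own, using only the event attributes we have already relied on above:
# Debug helper (my own sketch): print every event the runner yields, flagging the final response.
from google.adk.runners import Runner
from google.genai.types import Content


async def debug_stream(runner: Runner, user_id: str, session_id: str, new_message: Content):
    async for event in runner.run_async(
        user_id=user_id, session_id=session_id, new_message=new_message
    ):
        text = ""
        if event.content and event.content.parts:
            text = "".join(p.text or "" for p in event.content.parts if p.text)
        marker = "FINAL" if event.is_final_response() else "event"
        print(f"[{marker}] {text[:120]}")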
Launch the UI
It’s time to try it!
To save a bit of repetitive typing I’ve created a new make target:
streamlit:
@echo "================================================================================="
@echo "| 🚀 Launching Streamlit FE... |"
@echo "================================================================================="
uv run -- streamlit run src/streamlit_fe/app.py
So now I can launch the Streamlit UI like this:
make streamlit
The output in the console looks like this:
And if we click on the local URL, Rickbot-ADK opens in the browser:
Source Credit: https://medium.com/google-cloud/updating-the-rickbot-multi-personality-agentic-application-part-3-guided-implementation-of-the-9675d3f92c11?source=rss—-e52cf94d98af—4