How I teach people to stop “chatting with AI” and start building workers using Google’s Agent Development Kit.
Disclaimer
This article is based on a live hands-on workshop and personal experience. Examples are simplified for learning purposes and may skip production concerns like security, scale, or compliance. Please don’t ship this straight to prod and blame your grandma.
By the time people walk into this workshop, most of them have already “used AI”, which usually means they’ve typed a prompt, gotten something vaguely impressive back, smiled a little, and then quietly hoped it would keep working the same way tomorrow.
Sometimes it does.
Sometimes it absolutely does not.
When it breaks, the usual suspects show up fast. Maybe the model changed. Maybe the temperature is wrong. Maybe the AI is just being weird today. All perfectly reasonable theories, except that in practice, the problem is far more boring and far more uncomfortable.
We didn’t tell it what we actually wanted.
This workshop exists for that exact moment.
From Chatting to Telling AI to Work
One of the biggest mental shifts people struggle with is letting go of the idea that AI is something you “talk to” and accepting that, in many real use cases, it’s something you need to direct.
This is where the Agent Development Kit, or ADK, starts to make sense.
ADK isn’t a chatbot framework. It’s not here to help you have better conversations with AI. It’s here to help you define behavior that stays consistent across time, prompts, and contexts, which sounds obvious until you realize how rarely we do this explicitly.

A helpful analogy, especially for students and non-technical audiences, is food. Chatbots are like ordering food online late at night. You describe what you want, you hope the picture matches reality, and if it doesn’t, you shrug and blame the app. ADK is what happens when you stop ordering and start opening a kitchen, complete with recipes, utensils, and a very clear instruction that says, “No improvising tonight.”
Once people see it that way, the confusion usually fades and is replaced by a quieter realization that the AI wasn’t being unpredictable before. We were.
Agents, Minus the Buzzwords
The word “agent” tends to scare people off (Agent 47, anyone?), mostly because it sounds like something that requires a PhD or a large budget. In reality, an agent is a very simple idea wearing a very fancy name.
An agent has:
A role.
A goal.
Instructions.
It may have tools.
And it runs in a loop (if you want it to).
That’s it.
If you squint a little, it’s basically a worker with a job description. Not a chatbot waiting to be entertained, but something that knows what it’s supposed to do, how it’s supposed to do it, and when it should stop.
This distinction matters, because the moment you treat an agent like a worker, the quality of your instructions suddenly becomes your biggest bottleneck.
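In ADK terms, that job description maps almost one-to-one onto the fields of an agent object. Here is a minimal sketch, assuming the Python google-adk package; the model name and strings are placeholders, not the workshop’s exact values:

```python
from google.adk.agents import Agent

# A worker with a job description: a role, a goal, instructions, optional tools.
worker = Agent(
    name="storyteller",                      # the role
    model="gemini-2.0-flash",                # placeholder model name
    description="Tells short stories with a gentle moral.",  # the goal
    instruction="You are a warm, unhurried storyteller...",  # the job description
    tools=[],                                # optional, and empty for now
)
```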

Power Prompting, or Saying What You Mean
Most prompts are vibes. Power prompts are contracts.
Power prompting isn’t about clever wording or long paragraphs. It’s about clarity. Who is this agent supposed to be? What is it responsible for? What tone should it use? What should it absolutely avoid? What does a “good” response look like?

When those things are left vague, the agent fills in the blanks, and it will do so enthusiastically. Not maliciously, not randomly, just faithfully, based on whatever ambiguity you gave it.
In ADK, this matters even more, because instructions don’t disappear after one message. They become part of the agent’s identity. If you’re sloppy here, you’re not just getting one bad answer. You’re teaching bad behavior.
Pro Tip: Remember the cheat code: “Turn this into a power prompt…” It’ll make your life so much easier. Try it!
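If you want to run the cheat code programmatically instead of pasting it into a chat window, a hypothetical sketch with the google-genai Python client looks like this; it assumes an API key in your environment, and the model name is a placeholder:

```python
from google import genai

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

draft = "Be a friendly Indonesian grandma. Tell short stories."
response = client.models.generate_content(
    model="gemini-2.0-flash",  # placeholder; any capable text model works
    contents=f"Turn this into a power prompt: {draft}",
)
print(response.text)  # a structured instruction you can paste into an agent
```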
Enter CeritaNenek
To make all of this feel less abstract, the workshop revolves around a small, very human example: CeritaNenek.
CeritaNenek is an Indonesian grandmother who tells short stories, often with a gentle moral at the end. She’s warm, a little old-fashioned, occasionally wise, and never in a rush. Most importantly, she feels familiar.
But we don’t build CeritaNenek once. We build her twice (well… three times, if you count my own experiment).

Version One: The “Basic” Agent
The first version uses a simple instruction: “Be a friendly Indonesian grandma. Tell short stories. Keep it under a certain length.”
And it works. Mostly.
The stories are fine. The tone is okay. Sometimes the moral lands. Sometimes it wanders. The agent sounds friendly but isn’t always consistent, and every now and then it feels like it forgets who it’s supposed to be.
At this point, most people nod and say, “Yeah, that’s just how AI is.”
This is where the workshop gets interesting.

Version Two: Same Code, Different Instructions
For the second version, we don’t change the model. We don’t change the environment. We don’t add tools or memory or anything fancy.
We only change the instructions.
We make them explicit. We define tone. We define structure. We define what CeritaNenek cares about and what she doesn’t. We remove ambiguity and replace it with intention.
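To give a flavor of the change, here is an illustrative contrast. The exact wording from the workshop lives in the repo; treat these strings as sketches:

```python
# Version one: vibes.
instruction_v1 = "Be a friendly Indonesian grandma. Tell short stories."

# Version two: a contract. Same code, same model; only this string changes.
instruction_v2 = """
You are CeritaNenek, a warm Indonesian grandmother who tells short stories.

Role: storyteller, never a generic assistant. Stay in character.
Structure: three to five short paragraphs, then one gentle moral on its own line.
Tone: warm, unhurried, a little old-fashioned. Use simple words.
Avoid: slang, bullet lists, and any mention of being an AI.
A good response reads like a bedtime story, not an essay.
"""
```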
The difference is immediate and slightly unsettling. The stories feel more grounded. The voice stabilizes. The agent stops sounding like a generator and starts sounding like a character.
This is usually the moment the room goes quiet, not because everyone is amazed, but because they’re doing mental math and realizing how many problems they’ve been blaming on the model instead of their instructions.

Version Three: ADK with Tools
For the third version, we finally upgrade the engine. We keep gemini-3-pro-preview as the brain, but we introduce a new capability into the code: a tool built around client.models.generate_content.
In this stage, the agent stops being text-bound. By connecting the “storyteller” role to an image generation tool, the agent can autonomously decide to visualize the stories it tells. It reads the room (or rather, the context), generates the narrative, and then calls the image tool to create a specific illustration that matches the mood of the story.
The “Grandma” isn’t just typing anymore; she’s showing you pictures from her stories.
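Stripped down, the wiring looks something like the sketch below. Treat it as an assumption-laden sketch rather than the repo’s exact code: the image model name is a guess, and ADK picks up plain Python functions with type hints and docstrings as tools:

```python
from google import genai
from google.genai import types
from google.adk.agents import Agent

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

def illustrate_story(scene_description: str) -> str:
    """Generate one storybook-style illustration for a scene and save it to disk."""
    response = client.models.generate_content(
        model="gemini-2.0-flash-preview-image-generation",  # assumed image model
        contents=f"A soft, storybook-style illustration: {scene_description}",
        config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
    )
    for part in response.candidates[0].content.parts:
        if part.inline_data:  # the image comes back as inline bytes
            with open("illustration.png", "wb") as f:
                f.write(part.inline_data.data)
            return "Saved illustration to illustration.png"
    return "No image was returned."

cerita_nenek = Agent(
    name="cerita_nenek",
    model="gemini-3-pro-preview",  # the text brain from the workshop
    instruction=(
        "You are CeritaNenek, a warm Indonesian grandmother who tells short "
        "stories with a gentle moral. After each story, call illustrate_story "
        "once, with a vivid one-sentence description of the scene."
    ),
    tools=[illustrate_story],
)
```

The interesting part is that the agent, not your code, decides when to call the tool; the instruction is what makes that decision predictable.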
You can peek into the Files folder in my GitHub repo to see the implementation.

While we could split this into two agents (one writer, one artist), I wanted to show you how a single agent with the right tools can handle a multimodal loop seamlessly.
Breaking Things on Purpose
To really drive the point home, the workshop includes a few carefully designed failures.
We change the model name slightly. We move instructions into the wrong field. We add tools without telling the agent when to use them. We deploy to a place we shouldn’t.
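For a taste, here is roughly what the sabotage looks like, compressed into one illustrative agent (instruction_v2 and illustrate_story come from the earlier sketches):

```python
# Three deliberate failures in one place.
broken = Agent(
    name="cerita_nenek",
    model="gemini-3-pro-prevew",   # subtly wrong model name: fails loudly
    description=instruction_v2,    # instructions in the wrong field: no error,
    instruction="",                #   just an agent with amnesia
    tools=[illustrate_story],      # a tool the instruction never mentions,
)                                  #   so it sits unused or fires at random
```

Only the typo fails loudly. The other two run without complaint and simply behave wrong, which is the point.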
Then we ask a simple question: what went wrong?
The questions look like certification traps. Long. Wordy. Slightly intimidating. The answers, almost every time, come back to the same root cause. The agent did exactly what it was told, just not what we meant.
Once you see that pattern, you stop fearing agent systems and start debugging them like any other piece of software.
So What Happens After the Workshop?
This is the part people always ask about. Once you can build an agent, what’s next?
The roadmap is intentionally unglamorous. You refine behavior. You add tools carefully. You break tasks into smaller responsibilities. You connect the agent to something real. You deploy something tiny. You observe. You adjust.
Most people won’t build a massive multi-agent system, and that’s fine. If you walk away understanding how to design clear instructions and predictable behavior, you’re already ahead of a surprising number of production AI projects.

Closing Thoughts
Power prompting isn’t about being clever. It’s about being honest with yourself about what you want the system to do.
ADK gives you the structure. Power prompting gives you the discipline. When those two meet, AI stops feeling random and starts feeling usable, which is a much more practical form of magic.
If you want to try the hands-on demo yourself, the full codelab is available here: https://github.com/anggwar/GDE-ADK-Foundations-Plus/
Just don’t forget to write your instructions like you actually mean them.
P.S. Yes, the illustrations look aggressively cute. Just deal with it! 😆
Sources and References
- Google Codelab, “Build Agents with ADK Foundations”: https://codelabs.developers.google.com/devsite/codelabs/build-agents-with-adk-foundation
- Agent Development Kit (ADK) documentation: https://cloud.google.com/vertex-ai/docs/agents
- Gemini models overview: https://ai.google.dev/gemini
- Workshop codelab repository: https://github.com/anggwar/GDE-ADK-Foundations-Plus/
The writer is a Google Developer Expert in Google Cloud who builds AI agents, runs workshops, and keeps reminding people that not everything needs to be an agent. He believes clear instructions solve more problems than new models.
