
Box’s next-gen AI agents extract insights across data domains
Who: Box is one of the original information sharing and collaboration platforms of the digital era. They’ve helped define how we work, and they’ve continued to evolve those practices alongside successive waves of new technology.
What they did: Thanks to the advances of the generative AI era, Box users can now get considerably more value out of the data they’ve stored, using AI to search and synthesize their files in new ways. That’s why Box created Box AI Agents: to intelligently discern and structure complex unstructured data.
Why it matters: Box built its agents using Gemini 2.5 models, our most advanced generative AI, and the new Agent2Agent protocol, which enables integration between agents across dozens of platforms. Not only do the Box AI Agents deliver new capabilities to users, they also demonstrate how advanced agent connections can work at scale across platforms.
Learn from us: “The Box AI Enhanced Extract Agent gives enterprise users confidence in their AI, helping overcome any hesitations they might feel about gen AI technology and using it for business-critical tasks. … With the Box AI Enhanced Extract Agent, we wanted to transform how businesses interact with their most complex content — whether that’s scanned PDFs, images, slides, or other diverse materials — and then turn it all into structured, actionable intelligence.” – Yashoda Bhavnani, Head of AI, Box & Dr. Ali Arsanjani, Director, Applied AI Engineering, Google Cloud
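For a concrete sense of what this kind of extraction involves, here is a minimal sketch of structured document extraction written against the public Gemini API with the google-genai Python SDK. It is not Box’s implementation, and the ContractSummary schema, file name, and prompt are illustrative assumptions.

```python
from pydantic import BaseModel

from google import genai
from google.genai import types


# Hypothetical schema for the fields we want back; Box's actual extraction templates differ.
class ContractSummary(BaseModel):
    counterparty: str
    effective_date: str
    renewal_terms: str


client = genai.Client()  # assumes an API key (or Vertex AI settings) in the environment

with open("scanned_contract.pdf", "rb") as f:  # placeholder file name
    pdf_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=[
        types.Part.from_bytes(data=pdf_bytes, mime_type="application/pdf"),
        "Extract the key fields from this contract.",
    ],
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=ContractSummary,  # constrain the output to the schema above
    ),
)

print(response.parsed)  # a ContractSummary instance, ready for downstream systems
```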
Schroders’ multi-agent financial-analysis research assistant
Who: Schroders is a leading global active investment manager with roughly $975 billion in assets under management and a recognized leader in sustainable investment.
What they did: Schroders analysts are responsible for covering as many as 50 companies at a time. To maximize their impact, Schroders is enabling them to shift from data collection to the higher-value strategic thinking that is critical for business scalability and client investment performance. That’s why Schroders and Google Cloud collaborated to build a multi-agent research assistant prototype using Vertex AI Agent Builder.
Why it matters: Personalization and avoiding rigid processes were core goals for supporting analysts’ unique workflows. The system uses customizable system instructions, underlying prompts that can be tuned by analysts and developers, and it also allows for personalized agent configuration. Analysts in the pilot have reported significant satisfaction and time savings in working with the agent.
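To make the architecture tangible, here is a minimal sketch of a coordinator agent delegating to one narrowly scoped sub-agent, assuming Google’s open-source Agent Development Kit (ADK). The agent names, the get_company_filings stub, and the ANALYST_STYLE string are hypothetical; they simply illustrate the customizable instructions and per-agent tool scope described above.

```python
from google.adk.agents import Agent


# Hypothetical tool; a real prototype would pull filings, broker notes, or market data.
def get_company_filings(ticker: str) -> dict:
    """Return recent filing headlines for a ticker (stubbed for illustration)."""
    return {"ticker": ticker, "filings": ["FY24 annual report", "Q1 trading update"]}


# The kind of analyst-tunable instruction described above, kept separate so each
# analyst can personalize it without touching the rest of the configuration.
ANALYST_STYLE = "Focus on cash-flow quality and sustainability metrics."

filings_agent = Agent(
    name="filings_agent",
    model="gemini-2.5-pro",
    instruction="Summarize the most relevant disclosures for the requested company.",
    tools=[get_company_filings],  # keep each agent's tool scope narrow
)

coordinator = Agent(
    name="research_coordinator",
    model="gemini-2.5-pro",
    instruction=f"Plan the research, delegate to sub-agents, then synthesize findings. {ANALYST_STYLE}",
    sub_agents=[filings_agent],  # decompose the task across specialized agents
)
```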
Learn from us: “Along the way, the team also discovered several key learnings for building effective agents: Meticulously decompose tasks. Prompt engineering is key. Tool reliability is non-negotiable. Limit tool scope per agent. Managing state is complex. Leverage Agent-of-Agents protocols. User trust is earned.” – Ed Jeffery, Principal Software Engineer, Investment AI R&D, Schroders & Megha Agarwal, AI Engineer, UK/I Customer Engineering, Google Cloud
Hypros develops IoT scanners to detect hospital falls
Who: Hypros delivers customized solutions and practical insights for healthcare in the digital age, with a range of services spanning process analysis as well as unique platforms and devices.
What they did: Monitoring patients in care settings is a critical task, but patients cannot be watched continuously and need privacy, both in the room and digitally. Hypros set out to develop an AI-powered in-room IoT device that senses fallen or distressed patients, using a two-stage AI workflow to determine when patients need help without actively watching them at all times.
Why it matters: Through a combination of low-resolution sensors and a pair of AI models that can recognize distress even from obscured, anonymized sensor readings, Hypros created a device that gives caregivers confidence in its accuracy and gives patients comfort that they are not being constantly watched. This two-step process demonstrates the power of diverse AI modalities working in concert, an approach that’s becoming increasingly common for agentic AI.
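Hypros hasn’t published its sensor format or models, so the sketch below is only a toy version of the two-stage idea: a cheap check runs on every low-resolution, anonymized frame, and only unusual frames are escalated to a heavier distress classifier before anyone is alerted. The SensorFrame grid, thresholds, and classifier logic are all placeholders.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SensorFrame:
    """A low-resolution, anonymized reading from the in-room sensor (placeholder format)."""
    grid: List[List[float]]  # e.g. an 8x8 presence/thermal grid with no identifiable imagery


def stage_one_motion_trigger(frame: SensorFrame, prev: SensorFrame, threshold: float = 0.4) -> bool:
    """Cheap first-stage check: flag frames with an abrupt change worth a closer look."""
    diff = sum(
        abs(a - b)
        for row_now, row_prev in zip(frame.grid, prev.grid)
        for a, b in zip(row_now, row_prev)
    )
    return diff / (len(frame.grid) * len(frame.grid[0])) > threshold


def stage_two_distress_classifier(frame: SensorFrame) -> bool:
    """Placeholder for the heavier model that confirms a fall or distress before alerting staff."""
    floor_rows = frame.grid[-2:]  # crude proxy: a lot of signal near the floor
    return sum(sum(row) for row in floor_rows) > 3.0


def maybe_alert(prev: SensorFrame, current: SensorFrame) -> None:
    # Only escalate to the second model when the first stage sees something unusual,
    # so patients are never being "watched" continuously.
    if stage_one_motion_trigger(current, prev) and stage_two_distress_classifier(current):
        print("ALERT: possible fall detected, notify caregivers")


if __name__ == "__main__":
    quiet = SensorFrame(grid=[[0.1] * 8 for _ in range(8)])
    sudden = SensorFrame(grid=[[0.1] * 8 for _ in range(6)] + [[2.5] * 8 for _ in range(2)])
    maybe_alert(quiet, sudden)  # prints an alert on this synthetic example
```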
Learn from us: “Instead of stockpiling sensor data, the system uses advanced AI models to interpret and connect data from multiple streams — turning simple raw readings into practical insights that guide better decisions. Real-time alerts also bring timely attention to critical situations — ensuring patients receive the swift and focused care they deserve, and staff can perform at their very best.” – Marcel Walz, CTO, Hypros & Erlandas Norkus, AI Engineer, Hypros
Formula E recharges a race car by descending a mountain
Who: Formula E is a racing league famous for its boundary-pushing cars, whose electric motors deliver near-instant acceleration. Yet EVs also have the unique ability to recharge their batteries through braking, an important tactic during races.
What they did: Leading up to the Monaco E-Prix, Formula E and Google set out to test whether a Formula E GENBETA race car, starting with only 1% battery, could regenerate enough energy from braking during a descent through France’s coastal Alps to then complete a full lap of the iconic Monaco circuit. They used AI modeling to run numerous trials before attempting this feat.
Why it matters: The team started with a straightforward prompt in Google’s AI Studio, asking whether the descent would work. Drawing on Gemini 2.5 Pro’s reasoning capabilities, the model weighed factors like terrain, weather, and vehicle specs and determined the project was “theoretically feasible.” It’s a compelling example of gen AI’s value in R&D. During the event, Firebase and BigQuery helped visualize real-time telemetry, while data from multiple sensors and Google Maps enabled continual status updates.
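The feasibility question itself reduces to simple physics: the gravitational potential energy released on the descent, scaled by the regeneration efficiency, has to exceed the energy a lap of Monaco requires. Here’s a back-of-the-envelope version in Python; every number is an illustrative assumption, not a figure from Formula E or Google.

```python
# Back-of-the-envelope regen feasibility check. All values are illustrative assumptions.
CAR_MASS_KG = 900        # assumed car + driver mass
DESCENT_DROP_M = 1200    # assumed altitude drop through the coastal Alps
REGEN_EFFICIENCY = 0.60  # assumed fraction of potential energy recovered while braking
LAP_ENERGY_KWH = 1.5     # assumed energy needed for one lap of the Monaco circuit

G = 9.81  # gravitational acceleration, m/s^2

potential_energy_j = CAR_MASS_KG * G * DESCENT_DROP_M          # E = m * g * h
recovered_kwh = potential_energy_j * REGEN_EFFICIENCY / 3.6e6  # joules -> kWh

print(f"Recoverable energy: ~{recovered_kwh:.2f} kWh")
print("Feasible on these assumptions:", recovered_kwh >= LAP_ENERGY_KWH)
```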
Learn from us: “From figuring out if our crazy Mountain Recharge idea was even possible, to giving us live insights during the descent, AI was our guide. It’s what turned an ambitious ‘what if’ into a reality we could track moment by moment.” – Alex Aidan, VP of Marketing, Formula E
LVMH creates a data estate for 75 diverse maisons
Who: The world’s largest luxury conglomerate, LVMH is home to such iconic brands as Louis Vuitton, Moët & Chandon, and Hennessy, as well as Dior, Tiffany & Co., Bvlgari, Sephora, Celine, and Dom Pérignon. While each maison maintains a high degree of autonomy, they all rely on LVMH’s IT organization to create services for them.
What they did: For four years, LVMH has been working with Google Cloud to build a data foundation for its brands, particularly because the group lacked a 360-degree view of customers across its maisons. With that foundation in place, it has begun layering on generative AI features, such as chat agents for client advisors in its shops.
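A 360-degree customer view typically starts by unifying transactions from every brand in a shared warehouse. The sketch below shows what such a query might look like with the BigQuery Python client; the dataset, table, and column names are hypothetical, since LVMH’s actual schema isn’t public.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes application default credentials are configured

# Hypothetical table unifying transactions across maisons.
query = """
    SELECT
      customer_id,
      ARRAY_AGG(DISTINCT maison) AS maisons,
      SUM(purchase_amount) AS lifetime_value
    FROM `luxury_data_estate.unified_transactions`
    GROUP BY customer_id
    ORDER BY lifetime_value DESC
    LIMIT 100
"""

# Each row is one customer seen across every brand they shop with.
for row in client.query(query).result():
    print(row.customer_id, row.maisons, row.lifetime_value)
```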
Why it matters: The various generative AI platforms now serve more than 40,000 monthly users worldwide, who run more than 1.5 million queries for tasks such as analyzing documents and creating translations. These AI tools are also helping to deepen personalization in customer relationships and to boost operational efficiency across teams and collaborators.
Learn from us: “If you want to really compare it to pure retail e-commerce, their solutions are very mass market, with big data and automation. Automation wouldn’t really fit for us; we’re looking for a truly differentiated approach. But where it can deepen our connections with our clients even more, that is the hallmark of LVMH service — excellent and effortless. That’s what our technology is for.” – Franck Le Moal, Chief Information Officer, LVMH
Alpian redefines private banking for the digital age
Who: The first fully cloud-native private bank in Switzerland, Alpian boasts a unique model that blends personal wealth management with digital convenience.
What they did: Working within the highly regulated world of finance, Alpian sought to deploy generative AI tools to automate processes and deliver better services. Tools like Gemini and Vertex AI have streamlined traditionally complex processes, allowing developers to interact with infrastructure through simple conversational commands.
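Conversational access to infrastructure is typically built on function calling, where the model maps a natural-language request onto a vetted action. Here is a minimal sketch using the google-genai SDK’s automatic function calling; the restart_service helper is hypothetical and stands in for whatever controlled, audited actions a regulated bank would actually expose.

```python
from google import genai
from google.genai import types


# Hypothetical helper; a real version would call internal infrastructure APIs and sit
# behind the approval and audit controls a regulated bank requires.
def restart_service(service_name: str, environment: str) -> dict:
    """Restart a named service in the given environment."""
    return {"service": service_name, "environment": environment, "status": "restart requested"}


client = genai.Client()  # assumes an API key or Vertex AI settings in the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Please restart the payments API in staging.",
    # Passing a Python function enables automatic function calling: the model chooses
    # the arguments, the SDK runs the function, and the model summarizes the result.
    config=types.GenerateContentConfig(tools=[restart_service]),
)

print(response.text)
```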
Why it matters: Alpian shows that generative AI can thrive even in a highly regulated industry. Automating complex internal processes frees the team to focus on the blend of personal wealth management and digital convenience that defines its model, while keeping security and compliance front and center.
Learn from us: “By aligning technological advancements with regulatory requirements, Alpian is creating a model for the future of banking — one where agility, security, and customer-centricity can come together seamlessly and confidently.” – David Nemeshazy, CTO, Alpian & Damien Chambon, Head of Cloud, Alpian
MLB delights fans in the stands during All-Star Game
Who: Major League Baseball is constantly innovating with new technology platforms to keep fans connected to one of the most statistically and historically rich sports.
What they did: Google Cloud teamed up with the Statcast unit of Major League Baseball to build a gen-AI-powered tool that estimated where potential home runs might land in the stands during the MLB All-Star Game. Using data like a player’s at-bat history and other real-world factors, this agentic AI system gave fans attending the game some hint of whether they might snag a lucky homer.
Why it matters: Using the home run modeler to assess where the next player at bat might send a home run, the creative team then used Gemini 2.5 Pro to quickly create a selection of taglines. They would select their favorite option, making any quick tweaks to the language, before displaying it on the Jumbotron and digital displays around the park. This “human-in-the-loop” review is an increasingly common practice in AI-powered work.
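Here is a rough sketch of that human-in-the-loop flow, assuming the landing-zone prediction arrives from the Statcast-based model as a simple string; the section numbers, prompt, and review step are illustrative rather than MLB’s production tooling.

```python
from google import genai

client = genai.Client()

# Hypothetical inputs; in production these would come from the home run landing model.
predicted_section = "left-field bleachers, sections 138-140"
batter = "the next batter in the lineup"

prompt = (
    f"Write three short, family-friendly scoreboard taglines telling fans in "
    f"{predicted_section} that {batter} could send a home run their way."
)

response = client.models.generate_content(model="gemini-2.5-pro", contents=prompt)
print(response.text)

# Human-in-the-loop: an operator reviews the candidates, edits freely, and only then
# does anything go to the Jumbotron or the park's digital displays.
chosen = input("Paste (or edit) the tagline to display: ")
print("DISPLAY:", chosen)
```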
Learn from us: “Watching these messages roll through town, we get a real sense of just what a team player AI can be. The idea was ours, but given all the factors — combing through reams of player data, combining it with up-to-the-minute batting orders or traffic conditions and then coming up with a range of messages we could quickly display — it took the help of AI to get it into play.” – Hussain Chinoy, Technical Solutions Manager for Gen AI, Google Cloud & Emmanuel Awa, AI Blackbelt, Google Cloud
Oviva develops meal-logging app with AI-powered advice
Who: Oviva developed an AI-powered meal-logging app that simplifies logging meals and enhances the quality of the feedback people receive, helping them make better dietary decisions.
What they did: Oviva’s AI algorithms analyze logged meals in near real time, providing people with feedback that is not just specific and personalized but also easy to understand and act upon. That feedback focuses on helping people maintain a balanced diet throughout the day, rather than overwhelming them with detailed nutritional breakdowns.
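A near-real-time analysis like this can be sketched as a single multimodal call. The example below assumes the google-genai SDK, a local photo file, and a simple prompt; Oviva’s production models, pipeline, and prompts aren’t public.

```python
from google import genai
from google.genai import types

client = genai.Client()  # assumes an API key in the environment

# Placeholder photo of a logged meal.
with open("lunch.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "In two sentences, give friendly feedback on how balanced this meal is "
        "and one simple suggestion for the rest of the day.",
    ],
)

print(response.text)  # short, actionable feedback rather than a full nutritional breakdown
```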
Why it matters: Oviva’s user base exhibits cyclical activity patterns, with meal logging concentrated around specific times of day. This behavior drives large swings in processing demand, at times requiring several orders of magnitude more capacity. Google’s AI-optimized infrastructure offers the low latency and availability Oviva needs to meet those peaks reliably, without dedicating engineering resources to making the service scalable.
Learn from us: “The AI-powered meal logging feature is making a significant difference in how they approach their diets. People report feeling more confident in their food choices and more motivated to maintain healthy eating habits. The simplicity and immediacy of the feedback have also improved user retention.” – Manuel Baumann, Co-Founder & CTO, Oviva AG & Nicolas Wipfli, Customer Engineer, Google Cloud
Source Credit: https://cloud.google.com/blog/topics/customers/cool-stuff-google-cloud-customers-built-monthly-round-up/