Polyglot Persistence with GCP: Building an Intelligent E-commerce Catalog with AlloyDB, MongoDB, Cloud Storage, BigQuery, and MCP Toolbox.
The Challenge of Data Silos in Modern Applications
In the world of modern software development, data is no longer one-size-fits-all. A single e-commerce product, for example, generates structured transactional data (price, stock), highly flexible semi-structured data (specifications, user reviews), and massive unstructured data (images, videos). Trying to shoehorn all this information into a single relational database often leads to complex, slow, and expensive compromises.
The solution? Polyglot Persistence. This architectural pattern recognizes that the best database for one type of data (like inventory) is often the worst for another (like flexible product specs or analytics).
For our project — building an Intelligent E-commerce Product Catalog and Recommendation System — we’re embracing this principle by combining four powerful, specialized data technologies, unified by a critical abstraction layer: the MCP Toolbox.
🏗️ Our Architecture: The Specialized Stack
Our design leverages the unique strengths of each database service, ensuring optimal performance, scalability, and flexibility for every data type.
1. AlloyDB (The Transactional Backbone)
The single source of truth for structured, transactional, and critical data.
- Data Stored: Core product information (Product ID, Name, SKU, Price, Inventory), User Profiles, Order Summaries.
- Why AlloyDB? Its high performance, strong consistency, and reliability are essential for mission-critical operations like inventory updates and financial transactions.
2. MongoDB (The Flexible Catalog Layer)
Storing rich, semi-structured product details and high-volume, flexible user behavior data.
- Data Stored: Detailed Product Specifications (which vary widely between products), Rich Product Descriptions, User Interactions (Clickstreams, Search Queries), and Full Order Details.
- Why MongoDB? Its document model allows us to store complex, nested product attributes without rigid schemas, making product onboarding and updates far faster than with traditional SQL schemas.
3. Cloud Storage (The Unstructured Media Home)
Serving massive, unstructured media assets efficiently and affordably.
- Data Stored: High-resolution Product Images, 360-degree Videos, User-Uploaded Review Media, and raw application/event logs for archival.
- Why Cloud Storage? Databases are terrible at storing large binary files. Cloud Storage offers industry-leading scalability, cost-effectiveness, and direct URL access (often via Content Delivery Networks) for fast loading on the frontend.
4. BigQuery (The Analytical Powerhouse)
Aggregating, analyzing, and modeling historical data to derive intelligence.
- Data Stored: Transformed historical sales data (from AlloyDB), aggregated user behavior (from MongoDB), and long-term order history.
- Why BigQuery? Its serverless, petabyte-scale architecture and integrated Machine Learning (BigQuery ML/Vertex AI) capabilities make it the ideal platform for generating personalized product recommendations and analyzing sales trends across massive datasets.
🛠️ The Unifier: MCP Toolbox (YAML Abstraction Layer)
The most complex part of polyglot persistence isn’t the databases themselves — it’s managing the code that talks to them all. A developer shouldn’t need to write SQL, MongoDB aggregate queries, Cloud Storage API calls, and BigQuery statements for a single feature.
This is where the MCP Toolbox shines.
The MCP Toolbox acts as our unified data abstraction layer. It’s configured entirely through YAML files, allowing us to define “Entities” (like Product, User, or Order) and map their fields and operations to the specific underlying database tool.
The result is a clean, maintainable Python backend that interacts only with the abstract mcp.Product object, letting the toolbox handle the database-specific complexity.
Data Blueprint
This diagram illustrates how our frontend communicates solely with our Python backend, which in turn leverages the MCP Toolbox to route requests to the specialized data stores.
💻 Setting the Stage for Development
For our build, we are using:
- Frontend: A simple, modern framework (React/Vue/Angular) for a beautiful, responsive user interface.
- Backend: Flask (Python), chosen for its lightweight nature and simplicity, perfect for building our RESTful API.
- Data Abstraction: The MCP Toolbox (Python-based).
Let’s set up the database services:
1. Google Cloud Project Setup
- Create/Select a Project: In the Google Cloud Console, ensure you have an active Google Cloud project by selecting or creating a project.
- Make sure that billing is enabled for your Cloud project. Learn how to check whether billing is enabled on a project.
- Enable APIs: Navigate to the APIs & Services -> Enabled APIs & services dashboard and ensure the following APIs are enabled:
- Cloud AlloyDB API
- BigQuery API
- Cloud Storage API
You can enable these APIs from the APIs & Services dashboard in the browser tab where you have the active GCP project open, or from the gcloud CLI as shown below.
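If you prefer the CLI, the same APIs can be enabled with one command from Cloud Shell (these are the standard service names for AlloyDB, BigQuery, and Cloud Storage):
gcloud services enable alloydb.googleapis.com \
  bigquery.googleapis.com \
  storage.googleapis.com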
2. AlloyDB Setup
We’ll provision a basic AlloyDB cluster.
- Navigate to AlloyDB: In the Google Cloud Console, search for and navigate to AlloyDB for PostgreSQL.
- Create Cluster: Click CREATE CLUSTER.
- Region: Select a region (e.g., us-east4).
- Database Version: Keep the default.
- Cluster ID: Choose a name (e.g., ecommerce-cluster).
- Initial User/Password: Set a strong password for the postgres user; for now, keep it as "alloydb".
3. Configure Primary Instance:
- Instance ID: Choose a name (e.g., ecommerce-cluster-primary).
- Machine Type: For testing, choose a small machine type (e.g., 4 vCPUs, 32 GB RAM).
4. Networking:
- Set up a VPC network and a Private service connection (standard AlloyDB setup).
- For Private Services Access (PSA), choose the "default" network with the "Automatic" allocated IP range option.
Click the Create Cluster button. It can take up to 10 minutes for the cluster and instance to be created.
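If you prefer to script this instead of clicking through the console, the steps above map roughly to the following gcloud commands (a sketch using the example names from this section; verify the flags with gcloud alloydb clusters create --help before running):
gcloud alloydb clusters create ecommerce-cluster \
  --region=us-east4 \
  --password=alloydb \
  --network=default

gcloud alloydb instances create ecommerce-cluster-primary \
  --cluster=ecommerce-cluster \
  --region=us-east4 \
  --instance-type=PRIMARY \
  --cpu-count=4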
5. Create the Database and Table: Once the cluster is active:
- Click AlloyDB Studio from the navigation menu.
- In the editor view, open a new untitled query tab.
- Enter the following DDL and click run:
CREATE TABLE products_core_table (
product_id UUID PRIMARY KEY,
name VARCHAR(255) NOT NULL,
sku VARCHAR(50) UNIQUE NOT NULL,
price NUMERIC(10, 2) NOT NULL,
stock INT NOT NULL
);
Run the insert scripts in the AlloyDB Studio:
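The actual scripts live in the repo; the sample below only illustrates the expected row shape (the SKU value here is a placeholder, since the real SKUs must match the image names and MongoDB documents used later):
-- Illustrative sample row only; use the insert scripts from the repo for the real data.
INSERT INTO products_core_table (product_id, name, sku, price, stock)
VALUES (gen_random_uuid(), 'Sample Wireless Headphones', 'SKU-0001', 79.99, 150);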
3. MongoDB Setup
We will use MongoDB Atlas for simplicity and scalability. Go to MongoDB Atlas, click the Build a Cluster button, and select the Free tier option.
- Create an Atlas Account/Cluster: Name: Cluster0
- Provider: Google Cloud
Click Create.
2. Networking (IP Access):
- In the Atlas console, go to Network Access.
- Add your current development IP address, or for simplicity during development, add 0.0.0.0/0 (Allow Access from Anywhere – Warning: use this only for non-production testing).
- Alternatively, you can add a stable external IP address if you are using a Cloud Workstation (like I am). To find it, run the following command from the Cloud Shell Terminal of your Cloud Workstation and note the resulting IP address:
curl ifconfig.me
- Add this IP address in the Network Access tab of the MongoDB Atlas console.
3. Database User:
- Go to Database Access and create a user with read/write permissions for your database (e.g., ecommerce_db). Create user authentication credentials (username and password).
4. Get Connection String:
- Go to Databases -> Connect -> Connect your application.
- Copy the connection string. It will look like this:
mongodb+srv://<username>:<password>@YOUR_CLUSTER.mongodb.net
- This is the value we will use for the ${MONGODB_CONNECTION_STRING} environment variable in our Python application.
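Before wiring this into the app, you can sanity-check the connection with a few lines of Python (a quick sketch assuming pip install "pymongo[srv]" and that the environment variable above is set):
# Quick Atlas connectivity check (not part of the app itself).
import os
from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_CONNECTION_STRING"])
print(client["ecommerce_db"].list_collection_names())  # empty until the Data Ingestion step below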
5. Data Ingestion:
- Once connected to the cluster, click the “Browse Collections” option or the “Data Explorer” view of the cluster.
- Click the Create Database button to create the database and enter the database details as follows:
Database Name: ecommerce_db
Collection Name: product_details_collection
Click Create Database button.
In the Data Explorer, select the Collection Name and you should be able to see the “Documents” tab on the right side of the console.
Click the “Add Data” icon (The + button):
Copy the JSON content from the GitHub repo file:
Paste it in the “Insert Document” editor dialog that pops up:
Insert the array of documents and you should be able to see 100 documents added upon refresh!
When you clone the application, you will see that we insert non-transactional user events (views, clicks) directly into the MongoDB collection (user_interactions_collection), optimized for speed and a flexible schema. For this we use the tool "insert_user_interaction".
4. MCP Toolbox Setup 🔗
As introduced earlier, the MCP Toolbox is our unified data abstraction layer. It is configured entirely through YAML files that define "Entities" (like Product, User, or Order) and map their fields and operations to the specific underlying database tool, achieving separation of concerns between the application logic and the database implementation details. The result is a clean, maintainable Python backend that interacts only with abstract entities such as mcp.Product, letting the toolbox handle the database-specific complexity.
Setting up MCP Toolbox for Databases
Open your Cloud Shell Terminal and create a new folder for this project “polyglot-persist-ecommerce” and navigate into it by running the following commands from your terminal:
mkdir polyglot-persist-ecommerce
cd polyglot-persist-ecommerce
Now let’s create a subfolder for toolbox and install it:
mkdir toolbox-implementation
cd toolbox-implementation
# see releases page for other versions
export VERSION=0.17.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox
Now toolbox is saved inside your toolbox-implementation subfolder.
Defining the Abstraction: The tools.yaml
To solidify our architecture, the very next step is to write the tools.yaml file. This file formally registers our two operational data stores—AlloyDB and MongoDB—and defines how our core entities are split between them.
Notice how the Order entity, for example, is defined only once, but its data is sourced from two distinct databases (header from AlloyDB, details from MongoDB). This is the power of abstraction.
We need to create this YAML file inside the toolbox-implementation subfolder. Switch to the Cloud Shell Editor, open your newly created project folder, and navigate into the "toolbox-implementation" subfolder. Create the tools.yaml file and enter the following in it.
Here is the overall shape of the tools.yaml file (the complete version is in the repo you will clone later):
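The sketch below is a starting point only. The source kinds and field names are assumptions based on the MCP Toolbox documentation and the tool definitions shown later in this post, so double-check them against the toolbox sources reference for your version:
# Sketch only; see the repo for the full tools.yaml.
sources:
  alloydb-source:
    kind: alloydb-postgres
    project: YOUR_PROJECT_ID
    region: us-east4
    cluster: ecommerce-cluster
    instance: ecommerce-cluster-primary
    database: postgres
    user: postgres
    password: alloydb
  mongo-source:
    kind: mongodb
    uri: mongodb+srv://<username>:<password>@YOUR_CLUSTER.mongodb.net
  bigquery-source:
    kind: bigquery
    project: YOUR_PROJECT_ID

tools:
  list_products_core:
    kind: postgres-sql
    source: alloydb-source
    description: Lists core product facts (price, stock, SKU) from AlloyDB.
    statement: |
      SELECT product_id, name, sku, price, stock FROM products_core_table;
  # ...plus the MongoDB and BigQuery tools covered later in this post
  # (insert_user_interaction, get_total_interactions_count, execute_sql_tool, get_top_5_views).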
Make the necessary replacements for your database project_id, cluster, instance, username, password, and host as required in the above YAML file.
Once done, toggle back to Cloud Shell Terminal and run the following commands to test out the tools:
./toolbox --ui
You should be able to test out all your tools with the necessary parameters:
Now, let’s deploy this data layer into Cloud Run by running the commands below in the Cloud Shell Terminal:
- Set the PROJECT_ID environment variable:
export PROJECT_ID="my-project-id"
2. Initialize gcloud CLI:
gcloud init
gcloud config set project $PROJECT_ID
3. You must have the following APIs enabled:
gcloud services enable run.googleapis.com \
cloudbuild.googleapis.com \
artifactregistry.googleapis.com \
iam.googleapis.com \
secretmanager.googleapis.com
Follow the rest of the steps in the MCP Toolbox Cloud Run deployment guide to complete the deployment. The final set of commands for deployment:
gcloud secrets create tools --data-file=tools.yaml

export IMAGE=us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:latest
gcloud run deploy toolbox \
--image $IMAGE \
--service-account toolbox-identity \
--region us-central1 \
--set-secrets "/app/tools.yaml=tools:latest" \
--args="--tools-file=/app/tools.yaml","--address=0.0.0.0","--port=8080" \
--allow-unauthenticated # https://cloud.google.com/run/docs/authenticating/public#gcloud
Don’t miss any permissions or other intermediate steps in that guide; troubleshooting steps are also listed there.
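At a minimum, the service account referenced by the deploy command must exist and be able to read the tools secret. A sketch of those two intermediate commands (add whatever additional roles your sources require, such as AlloyDB and BigQuery access):
gcloud iam service-accounts create toolbox-identity

gcloud secrets add-iam-policy-binding tools \
  --member="serviceAccount:toolbox-identity@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"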
Once deployed, you will see the endpoint looking like this:
https://toolbox-*********-uc.a.run.app
What does this abstraction entail?
The tools.yaml file acts as the blueprint for polyglot congruence, ensuring that our application logic is decoupled from the underlying data technology.
Here is a summary of the core tools we’ve defined and how they leverage the specialized strengths of AlloyDB and MongoDB to execute critical e-commerce functions:
1. 💰 AlloyDB Tools: Transactional Integrity and Core Facts
The tools targeting AlloyDB use postgres-sql kind to ensure strong consistency for financial and inventory data.
2. 🎨 MongoDB Tools: Flexibility and Analytical Aggregation
The tools targeting MongoDB use mongodb-find and mongodb-aggregate kinds, leveraging its document model for schema flexibility and performance for complex, non-relational queries.
3. 📊 BigQuery Tools: Analytical Scale and Reporting
The tools targeting BigQuery use the bigquery-sql kind to leverage its massive parallel processing power for batch analytics and reporting on historical data.
When the frontend application wants to display a single product, it doesn’t execute one large, complex query. Instead, it makes two simple, abstracted calls via the MCP Toolbox. This cleanly separates transactional logic from catalog enrichment, allowing each database to be optimized for its specific task. This approach is the heart of our Polyglot Persistence architecture.
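As a rough illustration of what those two calls look like from Flask, here is a sketch using the toolbox-core Python client (pip install toolbox-core). The tool names get_product_core and get_product_details are hypothetical placeholders; use whatever names your tools.yaml defines, and note that the client API and the shape of the returned results may differ slightly between toolbox versions:
# Sketch of the polyglot read path for one product; tool names are placeholders.
import json
import os
from toolbox_core import ToolboxSyncClient  # pip install toolbox-core

toolbox = ToolboxSyncClient(os.environ.get("MCP_TOOLBOX_SERVER_URL", "http://localhost:5000"))

get_product_core = toolbox.load_tool("get_product_core")        # AlloyDB: price, stock, SKU
get_product_details = toolbox.load_tool("get_product_details")  # MongoDB: specs, description

def fetch_product(product_id: str) -> dict:
    core = json.loads(get_product_core(product_id=product_id))
    details = json.loads(get_product_details(product_id=product_id))
    return {"core": core, "details": details}  # stitched into one response object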
5. Google Cloud Storage for Unstructured Data
We’ve successfully established our polyglot core: AlloyDB for transactional facts and MongoDB for flexible details. However, an e-commerce catalog is useless without product visuals.
Our next component, Cloud Storage (GCS), handles the unstructured media for our products. Integrating it adds another dimension to the application and further justifies the polyglot architecture.
a. Creating Media Objects:
For convenience, I have generated some images based on the product dataset we have and stored them in a GCS bucket.
The links to these 100+ images are available in the repo file:
Download all these images to your machine so you can move them to your GCS bucket in the next step.
b. Create the Google Cloud Storage Bucket:
Go to Google Cloud Console, search for Cloud Storage and click Create Bucket. Enter the bucket name of your choice:
Click Create.
c. For the demo application, make sure access is open and public (in a production scenario you would provision an authentication method):
Uncheck the “Enforce public access prevention on this bucket” option for demo purposes.
Go to the Permissions tab and, under View by principals, click Grant Access. Enter the principal "allUsers" and the role "Storage Object Viewer" (read-only access is sufficient for serving images publicly):
Click Save and confirm public access (only for the demo).
d. Now navigate into the bucket you just created and upload the image files you downloaded (don’t change the names of the images):
That is it.
e. Let’s move on to integrating this into our application:
While the MCP Toolbox is excellent for database connectivity, GCS is an object store, optimized for large, static file delivery (images, videos). Retrieving an image is best done by providing a direct URL to the client, bypassing any server-side database tools. By doing this, we achieve crucial benefits:
- Reduced Backend Load: The Flask server only retrieves the metadata (the URL), not the image bytes.
- Optimized Delivery: Images are served directly from GCS’s global network, ensuring high performance.
Our Flask app implements this by directly constructing the public GCS URL using the product’s SKU, which serves as our shared key across the three systems (AlloyDB, MongoDB, GCS).
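A minimal sketch of that URL construction follows; the file-naming convention used here (<SKU>.png) is an assumption, so match it to the image names you actually uploaded:
# Build the public GCS URL for a product image from its SKU.
import os

GCS_BUCKET = os.environ.get("GCS_PRODUCT_BUCKET", "your-ecommerce-product-media-bucket")

def product_image_url(sku: str) -> str:
    # Assumes images keep their original names, e.g. "<SKU>.png"; adjust as needed.
    return f"https://storage.googleapis.com/{GCS_BUCKET}/{sku}.png"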
6. Analytics and Intelligence with BigQuery ML
So far we have established the transactional and write layers (AlloyDB, MongoDB, GCS). Now, we leverage the high-volume user interaction data collected in MongoDB to drive personalized recommendations using BigQuery.
When a user clicks a product, two things should happen:
1. Tracking (we have it covered in the application):
The POST /track/view call currently goes to MongoDB (check tools.yaml for the tool ‘insert_user_interaction’). The first action taken when a user clicks a product in the catalog is a fast, non-blocking asynchronous call from the frontend to the /track/view API endpoint. The Flask route immediately loads and invokes the specialized MongoDB tool to perform the write.
The MongoDB Write for Tracking:
To handle the high-volume event stream, our insert_user_interaction tool uses the mongodb-insert-one kind. When the Flask backend receives a user’s click (the /track/view POST request), it invokes this tool, sending the raw event data directly to MongoDB’s user_interactions_collection.
This tool’s simple definition ensures minimal overhead for every view event:
# Tool for MongoDB Write (User Tracking)
insert_user_interaction:
kind: mongodb-insert-one
source: mongo-source
description: Inserts a user interaction event (view, search, click) into the interactions collection.
database: ecommerce_db
collection: user_interactions_collection
canonical: true
authRequired: []
This design choice maximizes write speed and schema flexibility, allowing us to easily add new tracking parameters (e.g., screen size or browser version) when needed, without ever altering a database schema or worrying about data integrity constraints.
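A sketch of the route is shown below, reusing the toolbox client from the earlier sketch. The exact payload parameters depend on how insert_user_interaction declares its fields in tools.yaml, so treat the argument names here as illustrative:
# Sketch of the tracking endpoint.
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
insert_user_interaction = toolbox.load_tool("insert_user_interaction")

@app.route("/track/view", methods=["POST"])
def track_view():
    event = request.get_json(force=True)
    # Fire a lightweight write into MongoDB's user_interactions_collection.
    insert_user_interaction(
        product_id=event["product_id"],
        event_type="product_view",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return jsonify({"status": "ok"}), 202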
2. Product Detail View:
A modal or dedicated page should display the combined product intelligence.
These 2 things require the following steps:
🛠️ Implementing the ETL Route (MongoDB -> BigQuery Prep)
We don’t need to implement a full pipeline, but we do need a tool that shows the data is ready for analysis. We’ll use a MongoDB aggregation tool that, on demand (triggered by a button click), counts all collected interactions for each product to demonstrate readiness for BigQuery ingestion.
# New Tool for MongoDB Analytics Prep (ETL Source)
get_total_interactions_count:
kind: mongodb-aggregate
source: mongo-source
description: Counts the total number of user interaction events recorded in MongoDB.
database: ecommerce_db
collection: user_interactions_collection
readOnly: true
pipelinePayload: |
[
{
"$match": {
"product_id": { "$ne": null }
}
},
{
"$group": {
"_id": "$product_id",
"interaction_count": { "$sum": 1 }
}
},
{
"$project": {
"_id": 0,
"product_id": "$_id",
"interaction_count": 1
}
},
{
"$limit": 1000
}
]
pipelineParams:
- name: product_id
type: string
description: The id field to count the total number of events.
This tool summarizes the interaction count for each product; it will be included in the tools.yaml file.
Our ETL process is initiated by the user clicking the “Run ETL” button on the frontend. The process is defined entirely by two sequential MCP tool calls within the Flask application’s /etl/run route, starting with the “get_total_interactions_count” tool.
It groups the raw user_interactions_collection by product_id and calculates the total view count, eliminating the need for a separate transformation service.
BigQuery Data
Since BigQuery ML requires its data to be in a BigQuery table, the first step is to simulate the Extract, Transform, Load (ETL) process.
A. Prepare BigQuery Table
- Create the ecommerce_analytics dataset: click the Create dataset button.
2. Create the table
Next, we need a table to hold the interaction data, linking users (anonymous) to products (items).
-- DDL for the BQ training data table
CREATE TABLE ecommerce_analytics.user_product_interactions (
user_id STRING DEFAULT 'any user',
product_id STRING, -- The item ID
interaction_score INT -- Implicit rating: 1 for every product_view event
);
-- Note: In a real environment, you would use a tool like Cloud Dataflow or Cloud Composer
-- to pipe data from MongoDB into this table daily.
3. Grant access to the toolbox-identity user (very important step)
Go to the IAM & Admin page and click the “Grant access” button. Grant the toolbox-identity service account the BigQuery roles your setup requires (typically BigQuery Data Editor and BigQuery Job User) and save.
Once this is done, you should be able to run BigQuery queries from your toolbox instance, provided you have already created tools.yaml and deployed it to Cloud Run. We completed that in the toolbox step.
B. Load & Merge to BigQuery
The tool “execute_sql_tool” performs a secure, declarative MERGE operation against the user_product_interactions table in BigQuery. It uses the powerful BigQuery MERGE statement, which makes the ETL process idempotent: re-running it updates the counts of existing product views and inserts new ones.
# New Tool: Write MongoDB Summary to BigQuery
execute_sql_tool:
kind: bigquery-sql
source: bigquery-source
description: Merges product interaction summary data into the BigQuery summary table for analytics.
parameters:
- name: product_summaries
type: string
description: A string of array of objects [{product_id, interaction_count}] from MongoDB.
statement: |
MERGE INTO `ecommerce_analytics.user_product_interactions` AS T
USING (
SELECT
JSON_VALUE(items, '$.product_id') AS product_id,
CAST(JSON_VALUE(items, '$.interaction_count') AS INT64) AS interaction_count
FROM
UNNEST(JSON_QUERY_ARRAY(@product_summaries)) AS items
) AS S
ON T.product_id = S.product_id
WHEN MATCHED THEN
UPDATE SET T.interaction_score = S.interaction_count
WHEN NOT MATCHED THEN
INSERT (product_id, interaction_score)
VALUES (S.product_id, S.interaction_count);
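With both tools defined, the /etl/run route is essentially two sequential calls. The sketch below reuses the toolbox client and Flask app from the earlier sketches and assumes the aggregation result arrives as a JSON string; adapt the parsing to what your toolbox version actually returns:
# Sketch of the ETL route: MongoDB summary -> BigQuery MERGE.
import json
from flask import jsonify

get_total_interactions_count = toolbox.load_tool("get_total_interactions_count")
execute_sql_tool = toolbox.load_tool("execute_sql_tool")

@app.route("/etl/run", methods=["POST"])
def run_etl():
    # 1. Extract/transform: per-product interaction counts from MongoDB.
    summaries = get_total_interactions_count()
    if not isinstance(summaries, str):
        summaries = json.dumps(summaries)
    # 2. Load: idempotent MERGE into the BigQuery summary table.
    execute_sql_tool(product_summaries=summaries)
    return jsonify({"status": "ok"})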
C. The BigQuery Analytics
🔥 Real-Time Reporting: The Top 5 Products
Once the data is in the BigQuery analytical layer, generating business intelligence is instantaneous.
To keep the scope of the implementation manageable, we will use a basic analytics query to demonstrate the business intelligence aspect here. A natural next step would be BigQuery ML’s Matrix Factorization model, which is ideal for implicit recommendation systems like ours (based on ‘views’):
-- SQL to get the top 5 interacted products
SELECT
product_id,
interaction_score
FROM
`ecommerce_analytics.user_product_interactions`
ORDER BY
interaction_score DESC
LIMIT 5;
The Top 5 Most Viewed Products report is generated by a direct call to BigQuery via the get_top_5_views tool.
# New Tool: Top 5 Analytics from BigQuery
get_top_5_views:
kind: bigquery-sql
source: bigquery-source
description: Retrieves the top 5 most viewed products across all users based on the last ETL run.
parameters: []
statement: |
SELECT
product_id,
interaction_score
FROM
`ecommerce_analytics.user_product_interactions`
ORDER BY
interaction_score DESC
LIMIT 5;
Polyglot Final Orchestration
When the user clicks Refresh Top 5 Report, the application performs a three-way orchestrated data retrieval:
- BigQuery (Analysis): The app uses the get_top_5_views tool to get a list of 5 product_id and interaction_score values.
- AlloyDB/MongoDB (Enrichment): For each of the 5 IDs returned from BigQuery, the app reuses the polyglot read logic (/products/:id) to look up the latest price, name, and category (from AlloyDB and MongoDB).
- Frontend (Visualization): The data is presented in a dynamic bar chart, fully representing the final product of your polyglot architecture.
🎯 The Multimodal Strategy: How Data Serves the User
The true demonstration of our architecture is in the application’s ability to seamlessly combine data from different sources for the user experience. We employ different polyglot strategies for the bulk catalog view and the product detail view.
1. 🛒 Product Catalog (Bulk View: /products)
The primary goal of the catalog is speed and efficiency when loading 100+ items. Since our product IDs were disjoint across AlloyDB and MongoDB, we opted for a pragmatic Polyglot Concatenation strategy:
The Flask application processes this by:
- Fetching the full list from AlloyDB (list_products_core).
- Fetching the full list from MongoDB (list_all_product_details).
- Concatenating the two lists.
- Adding the GCS image URL enrichment to every single item.
This ensures the user sees every available item from both data streams with high speed.
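A minimal sketch of that route, again reusing the toolbox client and product_image_url helper from the earlier sketches and assuming the list tools return JSON arrays:
# Sketch of the bulk catalog route (Polyglot Concatenation).
list_products_core = toolbox.load_tool("list_products_core")
list_all_product_details = toolbox.load_tool("list_all_product_details")

@app.route("/products")
def list_products():
    core_items = json.loads(list_products_core())            # AlloyDB stream
    detail_items = json.loads(list_all_product_details())    # MongoDB stream
    catalog = core_items + detail_items                      # concatenate the two lists
    for item in catalog:
        item["image_url"] = product_image_url(item.get("sku", ""))  # GCS enrichment
    return jsonify(catalog)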
2. 🔎 Product Detail View (Single Item: /products/:id)
The primary goal of the detail page is data richness and application resilience. We use a Lookup & Fallback strategy to ensure we serve data even if one database is down or if the item only exists in one place.
The following is one way we have handled data richness and resilience (an optional approach):
The core principle here is that the Flask application acts as the Data Orchestrator, making multiple simultaneous calls through the MCP Toolbox and stitching the results into a single, cohesive JSON object before sending it to the frontend. This prevents the downstream application from ever needing to know whether the “Price” came from a relational column or if “Specifications” came from a flexible JSON document.
💻 7. Run the Application Yourself
To test this multi-modal application, you need to clone the repository and set up your environment variables to connect to the databases you provisioned in Steps 2, 3, and 5.
- Go to your Cloud Shell Terminal, clone the repo, and navigate into the project folder:
git clone https://github.com/AbiramiSukumaran/ecommerce-multi-database.git
cd ecommerce-multi-database
2. Install dependencies
pip install -r requirements.txt
3. Update the .env file you cloned from the repo with your values:
# --- DATABASE SECRETS ---
# 1. MongoDB Connection String (from MongoDB Atlas)
MONGODB_CONNECTION_STRING="mongodb+srv://<username>:<password>@YOUR_CLUSTER.mongodb.net"

# 2. Google Cloud Storage Bucket Name (from Step 5)
GCS_PRODUCT_BUCKET="your-ecommerce-product-media-bucket"
# 3. MCP Toolbox Server Location
# Must match the address where you run the toolbox server (usually localhost:5000)
MCP_TOOLBOX_SERVER_URL="http://localhost:5000"
4. Update the data layer
Update the placeholders in tools.yaml with your values.
Test your tools.yaml tools locally:
./toolbox --tools-file "tools.yaml"
or simply:
./toolbox
5. Test your app locally
(Assuming you have completed all the prior sections and configurations in the blog):
python app.py
6. Check your Dockerfile and app.py entry point
Make sure your Dockerfile is updated as required, based on the original you cloned from the repo.
7. Check that app.py is updated
It should contain the following snippet in case you changed it for local tests (the file from the repo should already have this):
if __name__ == '__main__':
port = int(os.environ.get('PORT', 8080))
app.run(host='0.0.0.0', port=port, debug=False)
# NOTE: debug=False is crucial for production environments like Cloud Run
8. Deploy your app to Cloud Run
gcloud run deploy multi-db-app --source .
Select the region number (e.g., 34 for us-central1) and allow unauthenticated invocations (“y”) when prompted.
9. Demo your app
Open the resulting web preview link in your browser. You can now click products to write data to MongoDB and use the ETL & Top Reports Tab to move that data to BigQuery and view the analytics!
10. Link to repo: https://github.com/AbiramiSukumaran/ecommerce-multi-database
🏁 Conclusion
We have successfully demonstrated how the MCP Toolbox serves as the architectural glue for a truly modern, specialized application. By matching the right tool (or database) to the right job, we achieved:
- Flexible Data Writes: MongoDB for event logs.
- Transactional Consistency: AlloyDB for core integrity.
- High-Performance Analytics: BigQuery for business intelligence.
- Unified Development: A single Python backend abstracting all complexity via YAML-defined MCP tools.
Final Architecture
That’s it! Our Intelligent E-commerce Catalog is complete! If you’d like to build your idea with us, register for the Build and Blog Marathon: Accelerate AI, starting mid-November in a city near you or virtually!
Source Credit: https://medium.com/google-cloud/architecting-for-data-diversity-the-intelligent-e-commerce-catalog-4ceadf4bf104
