
Taskforce of AI Agents and Creators
Gemini Enterprise serves as a central hub, offering centralized visibility and control over all AI agents used within the organization, whether they are created by Google, third-party partners, or internal teams. Therefore, it is now time to discuss about Agents.
1. Ready-to-Use Google Agents
Gemini Enterprise includes a “taskforce” of specialized, pre-built agents designed to deliver immediate value across business functions:
- Deep Research: This agent is capable of complex reasoning and planning, performing hundreds of searches across the web and enterprise access-controlled data to generate a comprehensive report, drastically slashing research time from weeks to hours.
- Data Insights (Preview): This tool provides actionable insights by analyzing BigQuery data, eliminating the need for users to have prior SQL knowledge.
- NotebookLM Enterprise: An AI-powered research and writing tool designed to summarize and extract crucial information from dense, complex sources, functioning as a knowledge assistant.
- Gemini Code Assist Standard: A specialized agent focused on helping developers complete coding tasks throughout the software development lifecycle.
2. Empowering Custom Agent Creation
As said before, you can not only use the ready-to-use agents, but also build your own ones, and it doesn’t matter if you are a developer or not, because Gemini Enterprise has two options:
- Agent Designer (No-Code): This non-coding tool enables every employee (from marketing to finance) to transform their unique expertise into scalable, automated “AI helpers”.
- Agent Development Kit (ADK): This is toolkit provided by Google, designed to help developers build AI agents, that are capable of reasoning, remembering information, and interacting with each other (an innovative multiple agent approach compared to other similar tools). The agents follow Google’s Agent Protocol, which means agents from different companies can interact safely, and once build within Vertex AI, you can deploy and govern them within the Gemini Enterprise platform.
3. Third-party partners’ Agents
If you don’t want to waste time reinventing the wheel or you don’t feel ready to develop your own agent, consider checking Agent finder to discover agents which match your specific use cases.
Security and Governance
As with great power comes great responsibility, the accumulation of massive datasets demands the implementation of equally stringent security protocols and access controls. Therefore, let’s see how you can governate and secure Gemini Enterprise (or better how it secure your apps, agent and data), in particular speaking about Access, Network, Model Armor, and Encryption.
Access: Granular Data Sovereignty
No matter if you are build a custom connector or not, an app or one thousand of them, it is important to define access, and Gemini Enterprise handles this through a centralized governance framework built on strict identity and access controls (ACLs).
To ensure end users only view documents and resources they are authorized to access, Google uses your configured Identity Provider (IDP) — either Google Identity for Google Workspace sources or Workforce Identity Federation (WIF) for third-party sources like Microsoft Entra ID — to authenticate the user and determine their permissions.
Furthermore, for organizations using custom data sources with internal, application-specific user groups (referred to as external identities), administrators must create an identity mapping store to map the IDP identities to these external identities, ensuring accurate ACL enforcement for custom connector data stores.
This ensures that all generative answers and specialized agents — including Google-made agents like Deep Research — operate strictly across enterprise access-controlled data, so that example
Networking
What about network then? Gemini Enterprise is built on the robust Google Cloud infrastructure and offers critical networking and security controls to handle connectivity to both cloud and self-hosted data sources, ensuring data security and compliance for enterprise users.
To do so, there are several mechanisms that address enterprise networking challenges, particularly concerning hybrid and multi-cloud environments:
- Google’s network and VPC Firewall rules for Third-Party Connectors and External Endpoints
- VPC Service Controls for apps and data stores residing within Google Cloud
- Private Service Connect for Self-Hosted and Hybrid/Multi-Cloud Sources
Third-Party Connectors and External Endpoints: When using third-party connectors (like those for ServiceNow or Salesforce), the platform handles external network interactions. Since third-party connectors interact with public endpoints outside Google’s network (e.g., APIs for polling data or webhooks), Gemini Enterprise ensures that egress traffic is secured through granular VPC Firewall rules, which restrict outbound connections exclusively to the Fully Qualified Domain Names (FQDNs) of the external service provided by the customer.
Securing Google Cloud Resources with VPC Service Controls (VPC SC): For apps and data stores residing within Google Cloud, Gemini Enterprise integrates also with VPC Service Controls to establish a secure service perimeter. Therefore, VPC SC is crucial for mitigating the risk of data exfiltration by protecting and controlling access to the Gemini Enterprise app and its connected enterprise data. However, it is important to note that when VPC Service Controls are enabled, the creation and use of assistant actions (like sending an email or creating a Jira ticket) are blocked by default, as these are considered potential paths for data to leave the secure perimeter (To enable specific actions, the relevant services must be added to an allow list by contacting a Google representative). Kepp also in mind that VPC Service Controls can be used alongside Access Context Manager to gate public access to Gemini Enterprise applications, adding additional control beyond default authentication and authorization.
Connecting to Self-Hosted and Hybrid/Multi-Cloud Sources (Private Service Connect): For enterprise data that resides outside of the Google Cloud managed environment — such as on-premises data centers, Google Kubernetes Engine (GKE) clusters, Compute Engine VMs, or other cloud providers (like AWS or Azure) — Gemini Enterprise utilizes Private Service Connect, which establishes a secure and scalable communication channel that bypasses the public internet, minimizing security risks associated with traditional network configurations and ensuring sensitive data stays within the customer’s control and network boundaries. In this connectivity model, Gemini Enterprise acts as the service consumer, and the customer’s network (containing load balancers referencing the self-hosted resources) acts as the service producer. Notice: When setting up Private Service Connect for Gemini Enterprise, administrators must enable global access when creating the internal load balancer forwarding rule, as Gemini Enterprise may not be available in all locations.
Model Armor: Proactive AI Security Screening
Have you have hear of Cloud Armor? Well if you haven’t, Google Cloud Armor is a security service offered by Google Cloud that provides distributed denial-of-service (DDoS) protection and Web Application Firewall (WAF) capabilities.
Similarly for AI applications, you have Model Armor which operates by proactively screening both user prompts and the responses given by the Gemini Enterprise assistant to protect against various risks and ensures responsible AI practices. Notice: Model Armor is available on all Gemini Enterprise editions at no additional cost.
Model Armor performs functions directly related to 1) Safety Filtering 2) Data Governance and Compliance 3) Input/Prompt Injection Defenses
Safety Filtering — To enhance the security and safety of your AI applications Model Armor works by:
- by proactively screening the prompts and responses given by the Gemini Enterprise assistant. This screening process helps protect against various risks and ensures responsible AI practices.
- by applying response blocking. The system’s response to potential issues in queries or responses is governed by an enforcement type, and if you set it to “Inspect and block” (which is the default when creating a template using the console), Gemini Enterprise blocks the request and displays an error message if a policy violation is detected.
Data Governance and Compliance — Model Armor contributes by securing the interaction stream and providing auditable records of the security process (while, as you remember, the high-level policy of data usage is governed by the specific Gemini Enterprise edition). When Gemini Enterprise is configured to use Model Armor, the effective compliance certifications are the common subset of both products. (Notice: Google recommends reviewing both Gemini Enterprise and Model Armor certifications to ensure they meet regulatory requirements). In addition, Model Armor can write Data Access audit logs and these logs analyze and report on the request and response screening verdicts generated by Model Armor. These audit logs do not contain the actual user queries or assistant responses but record screening decisions, making them safe for reporting and analytics and in any case Google recommends logs should be rerouted to a secure storage destination like BigQuery, which offers stricter access controls, rather than configuring Cloud Logging directly in the Model Armor template for Gemini Enterprise apps, as this could expose sensitive data.
Input/Prompt Injection Defenses — Model Armor acts as a defense mechanism against malicious inputs, specifically targeting the prompts users send to the model.
- Prompt Screening: Model Armor’s core function is to proactively screen user prompts. If the screening service is configured to Inspect and block, it will prevent the execution of queries or inputs that violate the policy.
- Handling Malicious Content: If a Model Armor template is configured to screen user requests, and a document included in the request violates the policies, that document is discarded and isn’t included in the request. This prevents attempts to influence the model’s behavior or inject harmful data via uploaded content.
- Failure Control: When the Model Armor screening service is unavailable (e.g., due to processing failures), administrators can configure Gemini Enterprise to intentionally “Block all user interactions” (Fail Closed mode). This proactive blocking mode ensures that potentially malicious or unscreened requests are not processed, serving as a robust final defense against prompt risks during service interruptions.
While Google Cloud Armor protects the network infrastructure, the features you described as Model Armor protect the AI model and its data/interactions, which it’s a great way to think about the different layers of security needed for modern cloud applications!
Encryption Keys
If Google default encryption (primarily a AES-256 for data at rest and a TLS for data in transit) is not enough, since maybe you have more stringent security and sovereignty requirements, the Gemini Enterprise supports Customer-Managed Encryption Keys (CMEK) in Cloud KMS, which offers granular control over data encryption for data at rest.
In particular, CMEK protection extends beyond the data stores themselves to also cover other app-owned core information, such as session data generated during searches with follow-ups, provided the associated data stores are CMEK-protected.
And remember, using CMEK keys gives customers control over:
- Protection level: You can control the cryptographic strength and type of key used for encryption, such as a symmetric or asymmetric key.
- Location: You can specify the geographic location where your key is stored and used, which is important for meeting data domiciling or locality requirements.
- Rotation schedule: You can set up automatic key rotation, which involves creating a new key version for the same key ring. This is a security best practice that limits the amount of data encrypted by a single key version.
- Usage permissions: You can manage who can use the key by granting or revoking access through the use of roles and permissions. For example, you can grant the
cloudkms.cryptoKeyEncrypterDecrypter
role to a service account, allowing it to encrypt and decrypt data, and then revoke this role to deny access. - Key lifecycle management: You can control the entire lifecycle of the key, including creating, enabling, disabling, and destroying it. If you disable or destroy a key, you can also revoke access to all data encrypted by that key.
- Audit logs: Cloud providers maintain audit logs of all key usage, allowing you to monitor who is using your keys and when.
If you prefer, Gemini Enterprise also supports External Key Manager (EKM) or Hardware Security Module (HSM) in combination with CMEK, although EKM usage is currently in GA with an allowlist.
An Example:
Let’s use an example to show the interaction of all these components, but consider that a Secure Network Perimeter, the use of Customer-Managed Encryption Keys (CMEK) and Cloud Armor are not always required.
- Secure Network Perimeter (Optional): Before any query is processed, the entire interaction must occur within your secure network boundary, enforced by VPC Service Controls. This acts as the first line of defense, creating a virtual perimeter that prevents data exfiltration and ensures that all communication between your services and Gemini Enterprise happens over a private, trusted channel.
- User Authentication: An end-user submits a query from within the secure network and the first step is to authenticate the user’s identity against your company’s official Identity Provider (IDP).
- Identity Federation: If needed, Gemini Enterprise uses Workforce Identity Federation (WIF) to securely accept identities from third-party IDPs like Microsoft Entra ID or Okta to enforce your existing identity policies without managing separate credentials for the AI.
- Prompt Shielding (Optional): Before the query is even processed by the Gemini model, Model Armor inspects the prompt. It’s looking for malicious attempts to “jailbreak” the model, extract sensitive information about the model’s architecture, or generate harmful content. It acts as a proactive gatekeeper for the model’s input.
- Permission Check: The user’s verified identity is then checked against the Access Control Lists (ACLs) on the underlying data sources. The system asks, “For this specific user, what files, rows, or records are they allowed to see?”
- Identity Mapping (For Custom Sources): If you’re using a custom connector for an internal application, the Identity Mapping Store acts as a translator. It maps the user’s corporate identity (e.g.,
s.jones@company.com
) to their application-specific username (e.g.,s_jones_crm
), ensuring the correct permissions are enforced. - Encrypted & Filtered Data Retrieval: The system retrieves only the data that the user is explicitly authorized to access. This data, while stored at rest in Google Cloud, may be protected by Customer-Managed Encryption Keys (CMEK) or other encryption keys.
- Secure & Shielded Response Generation: Finally, the Gemini agent generates an answer based only on the pre-filtered, permission-aware data. During this process, Model Armor (Optional) provides an additional layer of protection for the model itself, shielding it against extraction and misuse, ensuring the integrity of the generative process.
The Power of Openness: Collaborative AI
While robust security is non-negotiable, Gemini Enterprise platform is also built on a principle of openness, to enable agents to communicate and conduct secure transactions regardless of the underlying model or platform, supporting standard protocols like the Agent2Agent Protocol (A2A) and Agent Payments Protocol (AP2), recognizing that the most powerful AI solutions will emerge from collaboration, not isolation.
This commitment is embodied by supporting open standards for agent-to-agent communication and transaction:
- Agent2Agent Protocol (A2A) – The Universal Language of Agents: A2A is an open communication standard that allows different AI agents to collaborate seamlessly, irrespective of their underlying model (e.g., Gemini, GPT, custom models) or the platform they run on. Example: Your internal Gemini Enterprise agent needs highly specialized information that resides with an external logistics agent managed by a third-party vendor. A2A allows your agent to securely send a request and receive a response in a standardized format, eliminating complex, bespoke integrations. It’s the SMTP for AI agents, enabling secure, cross-platform conversations.
- Agent Payments Protocol (AP2) – The Economy of Agents: AP2 is an open protocol that enables secure, trusted payments between AI agents. This moves beyond mere communication to facilitate a dynamic, on-demand economy where agents can buy and sell services. Example: Your Gemini financial analysis agent requires a real-time market sentiment analysis that’s best provided by a specialized external AI agent. Using AP2, your agent can securely request this analysis and automatically pay for the service upon delivery, transforming complex micro-transactions into seamless, automated flows.
Final consideration
Gemini Enterprise is designed to help customers meet strict compliance requirements and is backed by security measures designed for trust.
The platform supports stringent compliance needs, including but not limited to HIPAA (Health Insurance Portability and Accountability Act) and FedRAMP High-compliant which demonstrates that the platform meets the highest level of security controls required to safeguard the US government’s most sensitive, unclassified data.
Observability
Last but not least (and central to security and operation), now we speak about logging, monitoring and observability.
The Gemini Enterprise provides an interactive Analytics dashboard experience powered by Looker to get insight into the usage trends, search quality, and end-user engagement of app.
In addition, Gemini Enterprise incorporates robust logging and monitoring capabilities by utilizing the comprehensive Observability suite, an integrated approach that ensures administrators have the necessary tools to detect, investigate, and respond to operational issues, security threats, and usage trends across their AI deployments.
- For foundational operational security and auditing, Gemini Enterprise relies heavily on detailed logging systems, namely Cloud Logging, to monitor errors and warnings related to key operational processes, specifically when importing documents or working with data connectors.
- For deeper investigation, logs can be accessed through the Logs Explorer or exported to a long-term sink such as BigQuery for more complex analysis, which is also recommended by Google to ensure stricter access controls over sensitive log data.
- Furthermore, Gemini Enterprise integrates Audit Logging, generating logs that record administrative and access activities within Google Cloud resources. These audit logs categorize actions into Admin Activity (for administrative changes, such as creating or deleting data stores and engines) and Data Access (for operations involving data reads or writes, such as searching or importing documents). This audited record ensures transparency and non-repudiation for governance purposes.
- As said before, crucial to security observability is also the integration of Model Armor, which writes Data Access audit logs that analyze and report on the request and response screening verdicts generated during interactions with the assistant.
- Finally, for organizations requiring the highest level of accountability, Access Transparency provides additional logs that specifically capture actions taken by Google personnel when accessing customer content. Notice: Access Transparency requires the app and data stores to be configured in multi-regions (US or EU), and it does not cover data associated with preview features or search analytics data.
Gemini Enterprise Tiers
As mentioned Gemini Enterprise is structured into several editions designed to meet the varying scale, complexity, and security needs of businesses, from small teams to large, heavily regulated corporations. Considering how quickly they tiers and their features change, please read the official documentation.
How to turn it on and turn it off
Gemini Enterprise is built upon the Discovery Engine API within Google Cloud, therefore, controlling the platform’s operation is managed through the API status.
Required APIs for Use: In addition to purchasing a tier model, to start utilizing Gemini Enterprise, several foundational Google Cloud APIs must be enabled in your project, including the Vertex AI API, the Gemini Enterprise (Discovery Engine) API, the Cloud Storage API, and the Identity and Access Management API.
Turning Off Gemini Enterprise: Turning off Gemini Enterprise stops billing and there is no guarantee that your data related to Gemini Enterprise will persist. However, if you delete the project, then all data is deleted according to the deletion policy for projects.
Source Credit: https://medium.com/google-cloud/gemini-enterprise-handbook-a-unified-secure-agentic-platform-for-enterprise-data-grounding-and-ai-0874378c5c27?source=rss—-e52cf94d98af—4