In Part 1, we examined how AI agents suffer from “temporal blindness” — hallucinating that a delivery has arrived simply because a calendar invite said “Tuesday.” We learned that without real-time traffic data, an agent is just guessing.
But in the design-build space, “time” is only one variable. The other, equally expensive variable is context.
Google’s Solar API and Aerial View API weren’t designed for interior architecture. They were built for solar installers and mapping applications. But these tools contain something invaluable: ground truth about how buildings actually exist in three-dimensional space, how light moves through urban environments, and how surrounding structures create constraints that no floor plan captures.
Understanding Solar Context Through Data
The Solar API provides far more than solar panel recommendations. When you query a specific address, the API returns buildingInsights — a comprehensive analysis of how that building interacts with sunlight throughout the year.
We don’t just want the agent to dump raw solar data; we want it to help form a design opinion. By wiring the Solar API’s insights directly into the agent’s reasoning loop, we can programmatically generate a “Lighting Persona” for the property.
Here is how we use Python to extract this design intelligence:
from google.maps import solar_v1


def analyze_lighting_persona(latitude, longitude):
    client = solar_v1.SolarClient()

    # 1. Get the ground truth from the Solar API.
    # We request HIGH quality imagery to ensure precise roof segment data.
    request = solar_v1.FindClosestBuildingInsightsRequest(
        location={"latitude": latitude, "longitude": longitude},
        required_quality=solar_v1.ImageryQuality.HIGH,
    )

    # find_closest_building_insights returns the BuildingInsights
    # resource directly, so we read solar_potential off the response.
    insights = client.find_closest_building_insights(request=request)
    max_sunshine = insights.solar_potential.max_sunshine_hours_per_year

    # 2. Translate data into design logic.
    # This logic grounds the LLM's creative suggestions in physical reality.
    if max_sunshine < 1800:
        return {
            "Persona": "The Cave",
            "Risk": "Client depression, dying plants, drab textiles",
            "Action": "Budget +20% for architectural lighting. Use mirrors. Avoid silk.",
        }
    elif max_sunshine > 3000:
        return {
            "Persona": "The Greenhouse",
            "Risk": "Fading millwork, UV damage to art collections",
            "Action": "Mandatory solar shades. Specify UV-resistant fabrics only.",
        }
    return {"Persona": "Balanced", "Action": "Standard lighting plan acceptable."}
This isn’t about catching mistakes — it’s about informed design from the start. When we know a penthouse gets 3,200 hours of annual sunlight, we specify UV-resistant fabrics and recommend automated shades. When we identify a garden-level unit with 1,100 hours, we design a lighting scheme that emulates natural light patterns.

The Vertical Dimension: Aerial View for Context Awareness
While the Solar API tells us about light, the Aerial View API reveals the physical context that constrains every design decision. This API provides photorealistic videos and images of buildings from multiple angles, capturing the surrounding environment that determines what’s actually possible.
In NYC, a third-floor walk-up might actually have a specialized freight hoist, or it might be blocked by historic-landmark scaffolding. Instead of sending a designer to the site for a preliminary check, the agent requests a cinematic video of the building’s exterior context.
import requests


def request_virtual_survey(address):
    api_key = "YOUR_API_KEY"

    # Request a cinematic 3D video of the building exterior.
    # This captures vertical constraints (fire escapes, neighboring walls)
    # that 2D satellite imagery misses. The renderVideo request body
    # takes only the building's address.
    response = requests.post(
        "https://aerialview.googleapis.com/v1/videos:renderVideo",
        params={"key": api_key},
        json={"address": address},
    )

    # The agent returns a checklist for the human architect to review
    # alongside the generated video link.
    return {
        "status": "Visualizing Context",
        "video_state": response.json().get("state"),
        "human_review_checklist": [
            "Is the freight entrance blocked by scaffolding?",
            "Are there neighboring brick walls blocking the primary view?",
            "Is the roof accessible for HVAC crane lifts?",
        ],
    }
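Rendering is asynchronous, so the agent has to wait before it can attach a playable link. Here is a minimal polling sketch against the API’s videos:lookupVideo endpoint; the helper name and the 30-second interval are our own assumptions.

    import time

    import requests


    def wait_for_video(address, api_key, poll_seconds=30):
        # Poll videos:lookupVideo until the render leaves the PROCESSING state.
        while True:
            response = requests.get(
                "https://aerialview.googleapis.com/v1/videos:lookupVideo",
                params={"key": api_key, "address": address},
            )
            video = response.json()
            if video.get("state") != "PROCESSING":
                # When ACTIVE, the response includes playback URIs for the video.
                return video
            time.sleep(poll_seconds)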
The Unit Economics of the “Digital Survey”
In the Design-Build industry, the “Site Survey” is a significant line item. Sending a Project Manager to a site for a preliminary assessment costs roughly $1,000 in billable hours and travel time.
Often, these initial visits result in a “No-Go” decision. The space isn’t right, or the constraints are too severe. That is $1,000 wasted.
By architecting a Spatial Intelligence layer using the Solar and Aerial View APIs, the cost to “survey” a site digitally drops to pennies. If an AI agent can analyze 10 potential sites for a client and flag four of them for poor solar orientation or inaccessible approaches before we ever visit, we have effectively saved thousands of dollars in high-value labor. We shift the human architect’s time from data gathering to creative problem solving.
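As a back-of-envelope check (the $1,000 visit cost is the figure above; the per-site API cost is a rough assumption that will vary with your Solar and Aerial View pricing tier):

    SITE_VISIT_COST = 1_000.00   # billable hours + travel, per the estimate above
    API_COST_PER_SITE = 0.50     # rough assumption: a few Solar + Aerial View calls

    sites_screened = 10
    sites_rejected_digitally = 4

    savings = (sites_rejected_digitally * SITE_VISIT_COST
               - sites_screened * API_COST_PER_SITE)
    print(f"Net savings on one client engagement: ${savings:,.2f}")  # $3,995.00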
Conclusion: From Hallucination to Simulation
The Andon Labs paper taught us that AI fails when it operates in a vacuum. In the vending machine experiment, the vacuum was time. In Design-Build, the vacuum is the environment.
An AI that designs a room without knowing where the sun is, or plans a delivery without seeing the street, is merely hallucinating a pleasant fiction.
By integrating the Google Maps Platform into our AI architecture, we are moving from hallucination to simulation. We are giving our agents the ability to “see” the invisible constraints of the city — the shadows, the angles, the verticality — so that when they propose a design, it is not just beautiful. It is buildable.
In Part 3 of this series, we will move inside the building to explore “The Inventory Agent” — how we are using computer vision and semantic search to manage the thousands of physical samples that make up a design library.
