Last year, we announced the Google Gen AI SDK as the new unified library for Gemini on Google AI (via the Gemini Developer API) and Vertex AI (via the Vertex AI API). At the time, it was only a Python SDK. Since then, the team has been busy adding support for Go, Node.js, and Java, but my favorite language, C#, was missing until now.
Today, I’m happy to announce that we now have a Google Gen AI .NET SDK! This SDK enables C#/.NET developers to use Gemini from Google AI or Vertex AI with a single unified library.
Let’s take a look at the details.
Installation
To install the library, run the following command in your .NET project directory:
```shell
dotnet add package Google.GenAI
```
Import it into your code as follows:
```csharp
using Google.GenAI;
```
Create a client
First, you need to create a client to talk to Gemini.
You can target Gemini on Google AI (via the Gemini Developer API):
```csharp
using Google.GenAI;

// Gemini Developer API
var client = new Client(apiKey: apiKey);
```
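Rather than hard-coding the key, you can read it from an environment variable. This is a minimal sketch; the variable name `GEMINI_API_KEY` is an assumption — use whatever name you store your key under:

```csharp
using System;
using Google.GenAI;

// Read the API key from an environment variable instead of embedding it in code.
// "GEMINI_API_KEY" is an assumed variable name; adjust it to your own setup.
var apiKey = Environment.GetEnvironmentVariable("GEMINI_API_KEY")
    ?? throw new InvalidOperationException("GEMINI_API_KEY is not set");

var client = new Client(apiKey: apiKey);
```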
Or you can target Gemini on Vertex AI (via the Vertex AI API):
```csharp
// Vertex AI API
var client = new Client(
    project: project, location: location, vertexAI: true
);
```
Generate text
Once you have the client, you can generate text with a unary response:
```csharp
var response = await client.Models.GenerateContentAsync(
    model: "gemini-2.5-flash", contents: "why is the sky blue?"
);
Console.WriteLine(response.Candidates[0].Content.Parts[0].Text);
```
You can also generate text with a streaming response:
```csharp
await foreach (var chunk in client.Models.GenerateContentStreamAsync(
    model: "gemini-2.5-flash",
    contents: "why is the sky blue?"
)) {
    Console.WriteLine(chunk.Candidates[0].Content.Parts[0].Text);
}
```
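If you need the complete answer as a single string after streaming finishes — for example, to log or store it — you can accumulate the chunks in a buffer. A small sketch, assuming the same client and model as above:

```csharp
using System;
using System.Text;

var builder = new StringBuilder();
await foreach (var chunk in client.Models.GenerateContentStreamAsync(
    model: "gemini-2.5-flash",
    contents: "why is the sky blue?"
)) {
    // Append each streamed chunk's text to the buffer as it arrives
    builder.Append(chunk.Candidates[0].Content.Parts[0].Text);
}
Console.WriteLine(builder.ToString());
```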
Generate image
Generating images is also straightforward with the new library:
```csharp
var response = await client.Models.GenerateImagesAsync(
    model: "imagen-3.0-generate-002",
    prompt: "Red skateboard"
);

// Save the image to a file
var image = response.GeneratedImages.First().Image;
await File.WriteAllBytesAsync("skateboard.jpg", image.ImageBytes);
```

Configuration
Of course, both text and image generation are highly configurable.
For example, you can define a response schema and a generation configuration with system instructions and other settings for text generation as follows:
```csharp
// Define a response schema
Schema countryInfo = new()
{
    Properties = new Dictionary<string, Schema> {
        {
            "name", new Schema { Type = Type.STRING, Title = "Name" }
        },
        {
            "population", new Schema { Type = Type.INTEGER, Title = "Population" }
        },
        {
            "capital", new Schema { Type = Type.STRING, Title = "Capital" }
        },
        {
            "language", new Schema { Type = Type.STRING, Title = "Language" }
        }
    },
    PropertyOrdering = ["name", "population", "capital", "language"],
    Required = ["name", "population", "capital", "language"],
    Title = "CountryInfo",
    Type = Type.OBJECT
};

// Define a generation config
GenerateContentConfig config = new()
{
    ResponseSchema = countryInfo,
    ResponseMimeType = "application/json",
    SystemInstruction = new Content
    {
        Parts = [
            new Part { Text = "Only answer questions on countries. For everything else, say I don't know." }
        ]
    },
    MaxOutputTokens = 1024,
    Temperature = 0.1,
    TopP = 0.8,
    TopK = 40,
};

var response = await client.Models.GenerateContentAsync(
    model: "gemini-2.5-flash",
    contents: "Give me information about Cyprus",
    config: config);
```
Similarly, you can specify image generation configuration:
```csharp
GenerateImagesConfig config = new()
{
    NumberOfImages = 1,
    AspectRatio = "1:1",
    SafetyFilterLevel = SafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
    PersonGeneration = PersonGeneration.DONT_ALLOW,
    IncludeSafetyAttributes = true,
    IncludeRaiReason = true,
    OutputMimeType = "image/jpeg",
};

var response = await client.Models.GenerateImagesAsync(
    model: "imagen-3.0-generate-002",
    prompt: "Red skateboard",
    config: config
);
```
Conclusion
In this blog post, we introduced the Google Gen AI .NET SDK, the new unified SDK for talking to Gemini on Google AI and Vertex AI.
Here are some links to learn more:
- API reference
- Source code on GitHub
- DemoApp with samples on Live API and more
Source Credit: https://cloud.google.com/blog/topics/developers-practitioners/introducing-google-gen-ai-net-sdk/
