Gemini Setup


Now that you understand what LLMs are and how AI Engineering differs from ML Engineering, it's time to get hands-on. In this chapter, we'll set up Google's Gemini AI model so you can start building with one of the most powerful LLMs available today.

Think of this setup process like preparing your development environment for any new technology - we need to get our credentials, install the right tools, and configure everything properly before we can start coding.

What is Gemini?

Gemini is Google's family of large language models designed to be highly capable across text, code, audio, image, and video understanding. For AI Engineers, Gemini offers:

  • Multiple model variants (Gemini Pro, Gemini Flash) for different use cases
  • Multimodal capabilities - can process text, images, and other media types
  • Competitive performance with excellent reasoning abilities
  • Google AI Studio for easy experimentation and testing

Let's break down why Gemini is an excellent choice for learning AI Engineering:

  1. Developer-friendly API: Clean, well-documented interface
  2. Generous free tier: Perfect for learning and prototyping
  3. Fast response times: Especially with Gemini Flash
  4. Strong code generation: Excellent for TypeScript and other programming tasks

Getting Your Gemini API Key

Before we can use Gemini in our code, we need to get an API key from Google. Here's how:

Step 1: Access Google AI Studio

  1. Go to Google AI Studio (https://aistudio.google.com)
  2. Sign in with your Google account
  3. Accept the terms of service if prompted

Step 2: Generate Your API Key

  1. Click "Get API Key" in the left-hand navigation
  2. Click "Create API Key" on the page that opens
  3. Choose "Create API key in new project" (recommended for beginners)
  4. Copy your API key and store it somewhere safe

Important Security Note: Treat your API key like a password. Never commit it to version control or share it publicly. We'll show you how to store it securely in the next section.

Understanding API Costs and Limits

Let's talk about the practical side of using Gemini:

Free Tier Limits

  • Gemini 2.5 Flash: 10 requests per minute, 250,000 tokens per minute, 250 requests per day
  • Gemini 2.5 Pro: 5 requests per minute, 250,000 tokens per minute, 100 requests per day

These limits change over time, so check Google's rate limits documentation for the current numbers.

What This Means for Learning

These limits are generous for learning purposes. A typical conversation turn uses 100-500 tokens, so you can have many interactions while learning.
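
If you ever bump up against the requests-per-minute ceiling, the simplest fix is to pace your calls. Here's a minimal sketch of that idea in TypeScript - the helper names and the 6-second interval are illustrative (60 seconds / 10 requests), not anything from the SDK:

// Pace a batch of calls so we stay under roughly 10 requests per minute.
// sleep and MIN_INTERVAL_MS are illustrative helpers, not part of any SDK.
const MIN_INTERVAL_MS = 6_000; // 60s / 10 requests = one call every 6 seconds

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function callWithPacing<T>(calls: Array<() => Promise<T>>): Promise<T[]> {
  const results: T[] = [];
  for (const call of calls) {
    results.push(await call()); // run one request at a time
    await sleep(MIN_INTERVAL_MS); // then wait before starting the next one
  }
  return results;
}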

Token Counting Tip

Think of tokens as pieces of words - a quick estimator sketch follows this list. Roughly:

  • 1 token ≈ 4 characters in English
  • 100 tokens ≈ 75 words
  • Your API key dashboard shows real-time usage
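
To make that rule of thumb concrete, here's a tiny estimator based purely on the ≈4-characters-per-token approximation. It's a rough guess, not real tokenization - the API reports exact token counts for each request:

// Rough token estimate using the ~4 characters per token rule of thumb.
// Real tokenization differs per model, so treat this as a ballpark only.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens("Explain what a large language model is in one paragraph."));
// ≈ 14 tokens by this heuristic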

Environment Setup

Now let's prepare your development environment for working with Gemini. We'll set up a TypeScript project that you can use for the upcoming "First LLM Call" chapter.

Prerequisites Check

Make sure you have these installed:

node --version  # Should be 18 or higher
npm --version   # Should be 9 or higher

If you need to install Node.js, visit nodejs.org and download the LTS version.

Project Setup

Create a new directory for your AI Engineering projects:

mkdir ai-engineering-course
cd ai-engineering-course
mkdir gemini-setup
cd gemini-setup

Initialize a new TypeScript project:

npm init -y
npm install -D typescript @types/node ts-node
npm install @google/generative-ai dotenv

The -D flag keeps the TypeScript tooling in devDependencies, and dotenv ships its own type definitions, so you don't need a separate @types/dotenv package.

Create a TypeScript configuration:

npx tsc --init

Update your tsconfig.json with these essential settings:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}

Environment Variables Setup

Create a .env file in your project root:

touch .env

Add your API key to the .env file:

GEMINI_API_KEY=your_actual_api_key_here

Critical Security Step: Create a .gitignore file to prevent accidentally committing your API key:

echo "node_modules/" > .gitignore
echo ".env" >> .gitignore
echo "dist/" >> .gitignore

Project Structure

Create the basic folder structure:

mkdir src
touch src/index.ts

Your project should now look like this:

gemini-setup/
├── src/
│   └── index.ts
├── .env
├── .gitignore
├── package.json
├── tsconfig.json
└── node_modules/

Working Code Example

Let's create a simple test to verify your Gemini setup is working correctly. This will be the foundation for your first LLM call in the next chapter.

Basic Connection Test

Create this in your src/index.ts file:

import { GoogleGenerativeAI } from "@google/generative-ai";
import * as dotenv from "dotenv";

This imports the Gemini SDK and dotenv for environment variable management. The GoogleGenerativeAI class is your main interface to Gemini's capabilities.

dotenv.config();

This loads your environment variables from the .env file, making your API key available as process.env.GEMINI_API_KEY.

const apiKey = process.env.GEMINI_API_KEY;

if (!apiKey) {
  console.error("GEMINI_API_KEY not found in environment variables");
  process.exit(1);
}

This safely retrieves your API key and exits gracefully if it's missing. Always validate your configuration before making API calls.

const genAI = new GoogleGenerativeAI(apiKey);

This creates your Gemini client instance. Think of this as your connection to Google's AI services.

async function testConnection() {
  try {
    const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });
    console.log("✅ Gemini setup successful!");
    console.log("Model loaded:", model);
  } catch (error) {
    console.error("❌ Setup failed:", error);
  }
}

This function tests your connection by attempting to load the Gemini Flash model. We're not making an actual API call yet - just verifying the setup works.

testConnection();

This runs our test function when you execute the script.
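
One caveat: getGenerativeModel only builds a client-side object, so this test won't catch an invalid API key. If you'd like to confirm the key itself works, you can optionally add a function like the one below to the same src/index.ts and call it instead - a sketch that assumes the same model name, and note that it sends one real request, which counts against your free-tier quota:

// Optional: make one tiny request to confirm the API key is valid.
// This sends a real API call, so it uses a small amount of your quota.
async function testRealCall() {
  try {
    const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });
    const result = await model.generateContent("Reply with the single word: ok");
    console.log("✅ API key works. Model replied:", result.response.text());
  } catch (error) {
    console.error("❌ API call failed - check your key and quota:", error);
  }
}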

Running Your Test

Add these scripts to your package.json:

{
  "scripts": {
    "start": "ts-node src/index.ts",
    "build": "tsc"
  }
}

If you want automatic reloads while you edit, note that ts-node has no built-in watch mode - tools like ts-node-dev or nodemon can add one later.

Now test your setup:

npm start

You should see: ✅ Gemini setup successful!

If you see an error, double-check:

  1. Your API key is correctly copied in .env
  2. No extra spaces or quotes around the API key
  3. The .env file is in the project root
  4. You've installed all dependencies

Choosing the Right Gemini Model

Gemini offers different models for different needs. Here's when to use each:

Gemini Flash

  • Best for: Learning, rapid prototyping, simple tasks
  • Speed: Very fast responses
  • Cost: Most economical
  • Use when: You need quick iterations while learning

Gemini Pro

  • Best for: Complex reasoning, detailed analysis
  • Speed: Slower but more thoughtful
  • Cost: Higher per request
  • Use when: You need the highest quality responses

For this course, we'll primarily use Gemini Flash since it's perfect for learning and experimentation.
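
If you want that choice to be explicit in code, you can wrap it in a small helper. This is just an illustrative sketch - the pickModel name and task labels are made up here, and the model identifiers assume the current "gemini-2.5-flash" and "gemini-2.5-pro" names (check Google's model list for what's available):

// Illustrative helper: map a kind of task to a Gemini model name.
// The model identifiers are assumptions - verify them against Google's model list.
type TaskKind = "quick" | "complex";

function pickModel(task: TaskKind): string {
  return task === "complex" ? "gemini-2.5-pro" : "gemini-2.5-flash";
}

// Usage with the client from this chapter's setup code:
// const model = genAI.getGenerativeModel({ model: pickModel("quick") });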

Common Setup Issues and Solutions

Let's address the most frequent problems beginners encounter:

"API Key Not Found" Error

Problem: Your environment variables aren't loading.

Solution (a quick diagnostic snippet follows this list):

  • Verify .env file is in project root
  • Check for typos in variable name
  • Restart your terminal/IDE after creating .env
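
Here's the quick diagnostic mentioned above - drop it into src/index.ts temporarily (or a scratch file). It only reports whether the key loaded and how long it is, never the key itself:

import * as dotenv from "dotenv";

dotenv.config();

// Report whether the key loaded without ever printing the key itself.
const key = process.env.GEMINI_API_KEY;
if (key) {
  console.log(`GEMINI_API_KEY loaded (${key.length} characters)`);
} else {
  console.log("GEMINI_API_KEY is NOT set - check the .env file's location and the variable name");
}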

Import/Module Errors

Problem: TypeScript can't find the Gemini package.

Solution:

  • Run npm install again
  • Check your node_modules folder exists
  • Verify TypeScript configuration

Summary

You've successfully set up Gemini for AI Engineering development! Here's what you accomplished:

  • Obtained a Gemini API key from Google AI Studio
  • Configured a secure development environment with proper API key management
  • Set up a TypeScript project with all necessary dependencies
  • Tested your connection to verify everything works
  • Learned about different Gemini models and when to use each

Your development environment is now ready for making your first LLM call. In the next chapter, we'll use this setup to send your first request to Gemini and handle the response - the foundation of all AI Engineering work.

Remember: Keep your API key secure, start with Gemini Flash for learning, and don't worry about the free tier limits while you're getting started.

Complete Code

You can find the complete, runnable code for this tutorial on GitHub: https://github.com/avestalabs/academy/tree/main/1-fundamentals/gemini-setup
