First LLM Call

Now that you have Gemini set up and ready to go, it's time for the exciting part - making your first actual call to a Large Language Model! This is where the magic happens: you'll see the AI respond to your prompts in real time.

Think of this as having your first conversation with an AI assistant. Just like talking to a person, you'll ask a question (send a prompt) and get a response back. The difference is that this conversation happens through code, giving you complete control over the interaction.

Understanding the LLM Request-Response Cycle

Before we dive into code, let's understand what happens when you make an LLM call:

Your Code → Send Prompt → Gemini API → AI Processing → Generated Response → Your Code Receives Response → Display/Process Result

Here's what each step involves:

  1. Your Code: Prepares a prompt (question or instruction)
  2. Send Prompt: Makes an HTTP request to Gemini's API
  3. Gemini API: Receives and validates your request
  4. AI Processing: The LLM generates a response based on your prompt
  5. Generated Response: AI sends back the generated text
  6. Your Code Receives Response: Your application gets the result
  7. Display/Process Result: You can show it to users or use it in your app
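
If you're curious what steps 2 through 6 look like at the wire level, here's a rough sketch using Node's built-in fetch. It assumes Gemini's public v1beta REST endpoint and API-key header, which the SDK wraps for you - you won't need to write this yourself in this tutorial:

// Illustrative only: the @google/genai SDK handles all of this for us.
async function rawRequestResponseCycle(apiKey: string, prompt: string) {
  // Step 2: send the prompt as an HTTP POST to the Gemini API
  const res = await fetch(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-goog-api-key": apiKey,
      },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    }
  );

  // Steps 5-6: receive the generated response and pull out the text
  const data = await res.json();
  return data.candidates[0].content.parts[0].text;
}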

What Makes a Good First Prompt?

Your first prompt is like your first impression - it sets the tone for what you can expect from the AI. Here are the key principles:

Be Clear and Specific

Instead of: "Tell me about programming"
Try: "Explain what a variable is in programming in simple terms"

Start Simple

Your first prompts should be straightforward. Complex, multi-part questions can wait until you're comfortable with the basics.

Set Context When Needed

If you want the AI to respond in a particular style or format, mention it in your prompt.
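
For example, a context-setting prompt (this one is purely illustrative) can bake both a persona and a format into the request:

const prompt =
  "You are a friendly tutor. Explain what an API is to a complete " +
  "beginner, in exactly two short sentences.";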

Working Code Example

Let's build your first LLM call step by step. We'll start with the simplest possible example and then add more features.

Basic LLM Call

Building on your setup from the previous chapter, update your src/index.ts:

import { GoogleGenAI } from "@google/genai";
import * as dotenv from "dotenv";

dotenv.config();

We're using the same imports and configuration from your setup. This ensures your API key is loaded and ready to use.

const apiKey = process.env.GEMINI_API_KEY;

if (!apiKey) {
  console.error("GEMINI_API_KEY not found in environment variables");
  process.exit(1);
}

const genAI = new GoogleGenAI({ apiKey });

This creates your Gemini client, just like in the setup chapter. Now we're ready to make our first actual call.

async function makeFirstCall() {
  try {
    const prompt = "What is TypeScript in one sentence?";

    console.log("🤖 Sending prompt:", prompt);

    const response = await genAI.models.generateContent({
      model: "gemini-2.5-flash",
      contents: prompt,
    });

    console.log("✅ AI Response:", response.text);
  } catch (error) {
    console.error("❌ Error making LLM call:", error);
  }
}

Let's break down what's happening here:

  • prompt contains our question for the AI
  • generateContent() sends the prompt to Gemini and waits for a response
  • We extract the text from the response and display it

makeFirstCall();

This runs our function when you execute the script.

Running Your First Call

Execute your code:

npm start
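
This assumes the start script you created during setup. If you need to recreate it, your package.json might contain something along these lines (yours may use tsx or a compile step instead of ts-node):

{
  "scripts": {
    "start": "ts-node src/index.ts"
  }
}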

You should see something like:

🤖 Sending prompt: What is TypeScript in one sentence?
✅ AI Response: TypeScript is a superset of JavaScript that adds static type definitions to help catch errors during development.

Congratulations! You just made your first LLM call. The AI understood your question and provided a relevant, accurate response.

Testing Different Prompts

Now that you've made your first successful call, let's explore how different types of prompts work with the same basic pattern. You can test these by replacing the prompt in your makeFirstCall() function:

Mathematical Questions
Try: "What is 2 + 2?"
The AI will give you straightforward numerical answers and can handle basic math operations.

Code Generation
Try: "Write a simple hello world function in TypeScript"
You'll see the AI can generate actual code snippets that you can copy and use.

Educational Explanations
Try: "Explain the concept of variables in programming to a beginner"
The AI adapts its language and complexity based on your specified audience.

The beauty of LLM calls is that the same basic code structure works for any type of prompt - math, code, or explanations - you just change the text you send, as the sketch below shows.
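
Here's a small sketch of that idea: one helper that wraps the same generateContent() call you already wrote, reused for all three example prompts (the helper names are illustrative):

async function askGemini(prompt: string) {
  const response = await genAI.models.generateContent({
    model: "gemini-2.5-flash",
    contents: prompt,
  });
  console.log(`🤖 ${prompt}\n✅ ${response.text}\n`);
}

async function testDifferentPrompts() {
  // Same call pattern, three very different kinds of prompts
  await askGemini("What is 2 + 2?");
  await askGemini("Write a simple hello world function in TypeScript");
  await askGemini("Explain the concept of variables in programming to a beginner");
}

testDifferentPrompts();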

Summary

You've successfully made your first LLM call and learned the fundamentals of AI interaction! Here's what you accomplished:

  • Made your first API call to Gemini and received an AI-generated response
  • Understood the request-response cycle that powers all LLM interactions
  • Tested different prompt types (math, code generation, explanations) with one reusable call pattern

Complete Code

You can find the complete, runnable code for this tutorial on GitHub: https://github.com/avestalabs/academy/tree/main/1-fundamentals/first-llm-call
