Prompting Strategies: Zero-Shot, Few-Shot, and Chain of Thought

Introduction

When working with Large Language Models (LLMs), the way you structure your prompts can dramatically impact the quality and accuracy of responses. Think of prompting as giving instructions to a highly capable assistant: the clearer and more strategic your instructions, the better the results.

In this chapter, we'll explore three fundamental prompting strategies that every AI engineer should master: zero-shot prompting (asking without examples), few-shot prompting (learning from examples), and chain of thought prompting (thinking step-by-step). These techniques form the foundation of effective AI interaction and can significantly improve your model's performance across various tasks.

Core Concepts

Zero-Shot Prompting

Zero-shot prompting is the simplest approach where you provide a clear instruction or question without any examples. The model relies entirely on its pre-trained knowledge to understand and respond to your request.

Key characteristics:

  • No examples provided
  • Relies on model's existing knowledge
  • Works well for common tasks
  • Quick and straightforward

When to use: Simple tasks, well-defined problems, or when you want to test the model's baseline capabilities.
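
For instance, a zero-shot prompt for a translation task might look like this (an illustrative prompt, not tied to any particular model):

```
Translate the following sentence into French:
"The weather is lovely today."
```

Notice there are no examples: the instruction alone carries all the information the model gets.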

Few-Shot Prompting

Few-shot prompting involves providing one or more examples of the desired input-output pattern before asking the model to perform the task. This helps the model understand the format, style, and expectations for the response.

Key characteristics:

  • Typically includes one to five examples
  • Demonstrates the desired pattern
  • Improves consistency and accuracy
  • Requires more tokens but often worth it

When to use: Complex formatting requirements, specific output styles, or when zero-shot results are inconsistent.
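
A few-shot prompt for a formatting task might supply the pattern directly. Here is an illustrative example for date normalization:

```
Convert each date to ISO 8601 format.

"March 5, 2021" → 2021-03-05
"July 22, 1999" → 1999-07-22
"December 1, 2030" →
```

The examples both demonstrate the output format and implicitly define the task, so no separate instruction for the format is needed.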

Chain of Thought Prompting

Chain of thought prompting encourages the model to break down complex problems into step-by-step reasoning. Instead of jumping directly to an answer, the model explains its thinking process, leading to more accurate and reliable results.

Key characteristics:

  • Explicit step-by-step reasoning
  • Shows the thinking process
  • Particularly effective for complex problems
  • Can be combined with few-shot examples

When to use: Mathematical problems, logical reasoning, complex analysis, or multi-step tasks.
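
An illustrative chain-of-thought prompt for a small math problem uses the classic "Let's think step by step" trigger:

```
Q: A store sells pens at 3 for $2. How much do 12 pens cost?
A: Let's think step by step.
```

That single trailing phrase nudges the model to lay out intermediate steps (12 pens is four groups of 3, each costing $2) before stating the final answer.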

Environment Setup

Before we dive into the code examples, let's set up a simple environment to test these prompting strategies. We'll use a basic TypeScript setup that sends chat-completion requests to an LLM API.

# Create a new directory for our prompting examples
mkdir prompting-strategies
cd prompting-strategies

# Initialize npm and install dependencies
npm init -y
npm install axios dotenv
npm install -D typescript @types/node ts-node

# Create TypeScript configuration
npx tsc --init

Create a .env file for your API configuration:

API_KEY=your_api_key_here
API_ENDPOINT=https://api.your-llm-provider.com/v1/chat/completions

Working Code Example

Let's build a practical example that demonstrates all three prompting strategies for a sentiment analysis task.

Basic Setup and Types

// prompting-demo.ts
import axios from "axios";
import * as dotenv from "dotenv";

dotenv.config();

interface PromptResponse {
  content: string;
  strategy: string;
}

interface APIResponse {
  choices: Array<{
    message: {
      content: string;
    };
  }>;
}

This sets up our basic types and imports. We define interfaces to structure our responses and ensure type safety throughout our examples.

Zero-Shot Prompting Implementation

async function zeroShotPrompting(text: string): Promise<PromptResponse> {
  const prompt = `Analyze the sentiment of this text and classify it as positive, negative, or neutral: "${text}"`;

  const response = await callLLM(prompt);

  return {
    content: response,
    strategy: "zero-shot",
  };
}

Here we implement zero-shot prompting by providing a clear, direct instruction. The model uses its pre-trained knowledge to classify sentiment without any examples to guide it.

Few-Shot Prompting Implementation

async function fewShotPrompting(text: string): Promise<PromptResponse> {
  const prompt = `Analyze the sentiment of the following texts:

Example 1: "I love this product! It works perfectly." → Positive
Example 2: "This is terrible and doesn't work at all." → Negative
Example 3: "The product is okay, nothing special." → Neutral

Now analyze: "${text}" → `;

  const response = await callLLM(prompt);

  return {
    content: response,
    strategy: "few-shot",
  };
}

In this few-shot example, we provide three clear examples showing the input-output pattern we want. This helps the model understand exactly how to format its response and what constitutes each sentiment category.

Chain of Thought Prompting Implementation

async function chainOfThoughtPrompting(text: string): Promise<PromptResponse> {
  const prompt = `Analyze the sentiment of this text step by step:

Text: "${text}"

Step 1: Identify key emotional words and phrases
Step 2: Determine the overall tone
Step 3: Consider context and nuance
Step 4: Classify as positive, negative, or neutral with reasoning

Please work through each step:`;

  const response = await callLLM(prompt);

  return {
    content: response,
    strategy: "chain-of-thought",
  };
}

Chain of thought prompting breaks down the sentiment analysis into logical steps. This approach helps the model provide more thoughtful and accurate analysis by explicitly reasoning through the process.
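
As noted earlier, chain of thought can be combined with few-shot examples: each example shows not just the answer but the reasoning that leads to it. The sketch below builds such a prompt as a pure function; the example texts, labels, and the `buildFewShotCoTPrompt` helper are illustrative, not part of any library.

```typescript
// Combining few-shot examples with chain-of-thought reasoning:
// each shot demonstrates reasoning before the label.

interface CoTExample {
  text: string;
  reasoning: string;
  label: "Positive" | "Negative" | "Neutral";
}

function buildFewShotCoTPrompt(examples: CoTExample[], text: string): string {
  // Render each example as text → reasoning → label.
  const shots = examples
    .map(
      (ex) =>
        `Text: "${ex.text}"\nReasoning: ${ex.reasoning}\nSentiment: ${ex.label}`
    )
    .join("\n\n");

  // End with "Reasoning:" so the model continues the demonstrated pattern.
  return `${shots}\n\nText: "${text}"\nReasoning:`;
}

const prompt = buildFewShotCoTPrompt(
  [
    {
      text: "I love this product!",
      reasoning: '"love" is strongly positive with no qualifiers.',
      label: "Positive",
    },
  ],
  "The ending felt rushed."
);
```

Because the prompt ends mid-pattern at "Reasoning:", the model tends to produce its own reasoning first and the label last, mirroring the examples.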

API Call Helper Function

async function callLLM(prompt: string): Promise<string> {
  try {
    const response = await axios.post<APIResponse>(
      process.env.API_ENDPOINT!,
      {
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: prompt }],
        max_tokens: 150,
        temperature: 0.1,
      },
      {
        headers: {
          Authorization: `Bearer ${process.env.API_KEY}`,
          "Content-Type": "application/json",
        },
      }
    );

    return response.data.choices[0].message.content;
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    throw new Error(`API call failed: ${message}`);
  }
}

This helper function handles the actual API communication. We use a low temperature (0.1) to get more consistent results, which is important when comparing different prompting strategies.
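
In practice, API calls like this can fail transiently (network hiccups, rate limits). A small generic retry wrapper is one way to harden them; this is a sketch, and the attempt count and delays are arbitrary choices, not provider recommendations.

```typescript
// Retry an async operation with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** i;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage: const result = await withRetry(() => callLLM(prompt));
```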

Demonstration Function

async function demonstratePromptingStrategies() {
  const testText =
    "The movie was decent but the ending felt rushed and disappointing.";

  console.log("Testing Prompting Strategies\n");
  console.log(`Text to analyze: "${testText}"\n`);

  // Test zero-shot
  const zeroShot = await zeroShotPrompting(testText);
  console.log(`${zeroShot.strategy.toUpperCase()}:`);
  console.log(zeroShot.content);
  console.log();

  // Test few-shot
  const fewShot = await fewShotPrompting(testText);
  console.log(`${fewShot.strategy.toUpperCase()}:`);
  console.log(fewShot.content);
  console.log();

  // Test chain of thought
  const chainOfThought = await chainOfThoughtPrompting(testText);
  console.log(`${chainOfThought.strategy.toUpperCase()}:`);
  console.log(chainOfThought.content);
}

// Run the demonstration
demonstratePromptingStrategies().catch(console.error);

This demonstration function tests all three strategies with the same input text, allowing you to compare their effectiveness and output quality.
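
One practical wrinkle when comparing strategies is that their raw outputs differ in shape: zero-shot may return a bare label, few-shot a short phrase, and chain of thought several sentences of reasoning. A small normalizer makes the results comparable; `extractSentiment` below is a hypothetical helper, not part of any API.

```typescript
type Sentiment = "positive" | "negative" | "neutral" | "unknown";

// Extract a comparable sentiment label from free-form model output.
function extractSentiment(raw: string): Sentiment {
  // Take the LAST label mentioned, so chain-of-thought output yields the
  // final conclusion rather than a label discussed mid-reasoning.
  const matches = raw.toLowerCase().match(/\b(positive|negative|neutral)\b/g);
  return matches ? (matches[matches.length - 1] as Sentiment) : "unknown";
}
```

This keeps the comparison fair: each strategy is judged on the label it ultimately commits to, regardless of how verbose its answer is.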

Summary

Prompting strategies are fundamental tools for effective AI engineering. Zero-shot prompting offers simplicity and speed for straightforward tasks. Few-shot prompting provides consistency and format control through examples. Chain of thought prompting enables complex reasoning and problem-solving through step-by-step analysis.

The key to mastering these techniques is understanding when to apply each strategy and how to combine them effectively. Start with zero-shot for baseline performance, add few-shot examples for consistency, and incorporate chain of thought reasoning for complex tasks. Remember that the best prompting strategy depends on your specific use case, model capabilities, and performance requirements.

Complete Code

You can find the complete, runnable code for this tutorial on GitHub: [Link to GitHub Repository]
