Prompt Engineering Fundamentals
In the previous module, you built a basic AI assistant that could answer questions and track token usage. But you might have noticed that sometimes the AI responses were exactly what you wanted, while other times they were... well, not quite right. The difference often comes down to how you phrase your requests to the AI.
Welcome to prompt engineering - the art and science of crafting effective prompts that consistently guide AI models toward generating the responses you actually want. Think of it as learning to communicate clearly with a very powerful but literal-minded assistant.
What is Prompt Engineering?
Prompt engineering is the practice of designing and optimizing the text inputs (prompts) you send to Large Language Models (LLMs) to get the best possible outputs. It's like providing a roadmap for the AI, steering it toward the specific response you have in mind.
Just as you wouldn't ask a colleague "do something with the data" and expect great results, you can't expect an LLM to read your mind. The quality of your prompt directly impacts the quality of the response.
graph LR
A[Vague Prompt] --> B[Unpredictable Output]
C[Well-Crafted Prompt] --> D[Consistent, Quality Output]
style A fill:#ffcccc
style B fill:#ffcccc
style C fill:#ccffcc
style D fill:#ccffcc
The Anatomy of an Effective Prompt
Let's break down what makes a prompt effective by examining the key components:
1. Context and Role
Give the AI context about what role it should play or what perspective it should take.
Poor:
"Write about JavaScript"
Better:
"You are a senior developer explaining JavaScript concepts to a junior developer who knows Python but is new to JavaScript."
2. Clear Instructions
Be specific about what you want the AI to do.
Poor:
"Help me with code"
Better:
"Write a TypeScript function that validates email addresses and returns a boolean. Include error handling for edge cases."
3. Format Specification
Tell the AI exactly how you want the response formatted.
Poor:
"Explain async/await"
Better:
"Explain async/await in TypeScript using this format: 1) Brief definition, 2) Simple code example, 3) Common use case, 4) One potential pitfall to avoid."
4. Examples (When Helpful)
Show the AI what good output looks like.
Poor:
"Generate test data"
Better:
"Generate test data for a user object. Example format: { id: 1, name: 'John Doe', email: 'john@example.com', age: 30 }"
Common Prompt Engineering Patterns
The Instruction Pattern
Start with a clear action verb and be specific about the task.
Analyze the following TypeScript code and identify potential performance issues:
[code here]
Focus on:
- Memory leaks
- Inefficient loops
- Unnecessary API calls
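In code, this pattern usually amounts to a fixed template with the dynamic part (the snippet to analyze) interpolated into it. A minimal sketch, with an illustrative function name:
// Illustrative instruction-pattern template: clear action verb, explicit focus areas
function buildAnalysisPrompt(code: string): string {
  return `Analyze the following TypeScript code and identify potential performance issues:

${code}

Focus on:
- Memory leaks
- Inefficient loops
- Unnecessary API calls`;
}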
The Few-Shot Pattern
Provide examples to guide the AI's response style.
Convert these JavaScript functions to TypeScript with proper type annotations:
Example:
JavaScript: function add(a, b) { return a + b; }
TypeScript: function add(a: number, b: number): number { return a + b; }
Now convert:
function getUserName(user) { return user.name || 'Anonymous'; }
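One way to apply this pattern in code is to keep the example pairs as data and fold them into the prompt. A sketch, with the helper and its types as illustrative assumptions:
// Hypothetical few-shot prompt builder: examples are data, the actual task comes last
interface FewShotExample {
  input: string;   // e.g. the JavaScript version
  output: string;  // e.g. the TypeScript version
}

function buildFewShotPrompt(task: string, examples: FewShotExample[], target: string): string {
  const shots = examples
    .map((ex) => `Example:\nJavaScript: ${ex.input}\nTypeScript: ${ex.output}`)
    .join("\n\n");
  return `${task}\n\n${shots}\n\nNow convert:\n${target}`;
}

const fewShotPrompt = buildFewShotPrompt(
  "Convert these JavaScript functions to TypeScript with proper type annotations:",
  [
    {
      input: "function add(a, b) { return a + b; }",
      output: "function add(a: number, b: number): number { return a + b; }",
    },
  ],
  "function getUserName(user) { return user.name || 'Anonymous'; }"
);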
The Chain-of-Thought Pattern
Ask the AI to think through problems step by step.
Debug this TypeScript error step by step:
Error: "Property 'length' does not exist on type 'string | number'"
Code: function processInput(input: string | number) { return input.length; }
Think through:
1. What is the error telling us?
2. Why does this happen with union types?
3. How can we fix it?
4. Provide the corrected code
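For reference, the reasoning the prompt asks for ends in a fix along these lines: narrow the union type before touching a string-only property.
// 'length' exists on string but not on number, so TypeScript rejects it on the union.
// A typeof check narrows the type inside each branch:
function processInput(input: string | number): number {
  if (typeof input === "string") {
    return input.length;          // input is narrowed to string here
  }
  return input.toString().length; // handle the number branch explicitly
}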
Working Code Example
Since you already have the basic setup from previous chapters (API key configuration, imports, etc.), let's focus on building a simple prompt comparison tool.
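For reference, the assumed setup looks roughly like this. It's a sketch based on the @google/genai SDK; the environment variable name is just a placeholder for wherever you stored your API key:
// Setup carried over from previous chapters (sketch, not new code)
import { GoogleGenAI } from "@google/genai";

// GEMINI_API_KEY is a placeholder - use the variable you configured earlier
const genAI = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });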
Step 1: Define Our Test Question
async function testPromptTechniques() {
  const userQuestion = "How do I handle errors in TypeScript?";
We'll use this question to test different prompt approaches and see how they affect the AI's response.
Step 2: Create Basic and Enhanced Prompts
  // Basic prompt
  const basicPrompt = userQuestion;

  // Enhanced prompt with role and structure
  const enhancedPrompt = `You are a TypeScript expert teaching a developer.
Question: ${userQuestion}
Please structure your response as:
1. Brief explanation
2. Simple code example
3. Best practice tip`;
Here we're comparing a simple, direct prompt with an enhanced version that includes role context and output structure. This demonstrates two key prompt engineering techniques in action.
Step 3: Display the Prompts
console.log("Basic prompt:", basicPrompt);
console.log("Enhanced prompt:", enhancedPrompt);
console.log("AI is thinking...");
It's helpful to see exactly what prompts we're sending to understand the difference in approach.
Step 4: Test Both Prompts
  // Test both prompts
  const basicResponse = await genAI.models.generateContent({
    model: "gemini-2.5-flash",
    contents: basicPrompt,
  });

  const enhancedResponse = await genAI.models.generateContent({
    model: "gemini-2.5-flash",
    contents: enhancedPrompt,
  });
We're making two separate API calls with the same question but different prompt structures to compare the results.
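Because the two requests are independent, you could also issue them concurrently with Promise.all. This is an optional variation, not something the example requires:
// Optional variation: replaces the two sequential awaits above with one concurrent batch
const [basicResponse, enhancedResponse] = await Promise.all([
  genAI.models.generateContent({ model: "gemini-2.5-flash", contents: basicPrompt }),
  genAI.models.generateContent({ model: "gemini-2.5-flash", contents: enhancedPrompt }),
]);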
Step 5: Display and Compare Results
console.log("Basic response:", basicResponse.text);
console.log("Enhanced response:", enhancedResponse.text);
}
testPromptTechniques();
This final step shows both responses side by side, allowing you to see how prompt engineering techniques affect the quality and structure of AI responses.
The complete function demonstrates how adding role context and structure to your prompts can dramatically improve the quality and consistency of AI responses. A common pitfall to watch out for is assuming that more complex prompts are always better - sometimes a simple, clear prompt is exactly what you need.
Understanding Prompt Engineering Principles
Principle 1: Be Specific and Clear
Vague prompts lead to vague responses. Instead of "help me with TypeScript," try "show me how to define interfaces for API response objects in TypeScript."
Principle 2: Provide Context
The AI doesn't know your background or project context. Give it the information it needs to provide relevant responses.
Principle 3: Use Examples When Helpful
Sometimes showing is better than telling. Provide examples of the format or style you want.
Principle 4: Iterate and Refine
Prompt engineering is iterative. Start with a basic prompt, analyze the response, and refine your approach.
Principle 5: Consider the AI's Perspective
Think about what information the AI needs to give you the best possible response.
Common Pitfalls to Avoid
Being Too Vague: "Fix my code" vs. "Fix the TypeScript compilation error in this function that validates user input"
Overloading with Instructions: Don't cram too many requirements into one prompt. Break complex tasks into smaller, focused prompts (see the sketch after this list).
Ignoring Context: Each conversation turn builds on previous context. Consider what the AI already knows from your conversation.
Not Testing Variations: Test your prompts with different inputs to ensure they work consistently.
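For the "Overloading with Instructions" pitfall, here is a sketch of splitting one overloaded request into two focused prompts, reusing the genAI client from earlier. The prompts themselves are just examples:
// Inside an async function, e.g. the one from the Working Code Example
// First focused prompt: just the function
const fnResponse = await genAI.models.generateContent({
  model: "gemini-2.5-flash",
  contents: "Write a TypeScript function that parses an ISO date string and returns a Date, or null if it is invalid.",
});

// Second focused prompt: tests for the function the first call produced
const testResponse = await genAI.models.generateContent({
  model: "gemini-2.5-flash",
  contents: `Write unit tests for the following TypeScript function:\n\n${fnResponse.text}`,
});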
Summary
Prompt engineering is a crucial skill for getting consistent, high-quality responses from AI models. By understanding the anatomy of effective prompts and applying proven patterns like role-based prompts, structured output requests, and step-by-step reasoning, you can dramatically improve your AI interactions. The key principles are being specific and clear, providing adequate context, using examples when helpful, iterating on your prompts, and considering the AI's perspective. With practice and experimentation, you'll develop an intuition for crafting prompts that unlock the full potential of LLMs.
Complete Code
You can find the complete, runnable code for this tutorial on GitHub: https://github.com/avestalabs/academy/tree/main/2-core-llm-interactions/prompt-engineering-fundamentals