The Complete Gemini Prompting Guide

Everything from Google's Gemini prompting docs. Four input types, few-shot patterns, multimodal handling, parameter tuning, and agentic workflow strategies.

Official Google docs →
Content sourced from official Google documentation
1. Four types of prompt input

Google identifies four input categories, and understanding which one you're using changes how you write the prompt.

Question Input: direct questions ('What is photosynthesis?').
Task Input: specific actions ('Summarize this article').
Entity Input: items for classification ('Classify this email as spam or not').
Completion Input: partial content for the model to finish ('The capital of France is...').

Completion input is underrated. Giving the model a pattern to continue is often more precise than explicit instructions.

💡If you're struggling to get the right format, try Completion Input. Start the output yourself and let Gemini continue the pattern.
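The completion-input tip above can be sketched as a tiny helper. This is an illustrative pattern, not an official API; `completion_prompt` is a hypothetical name, and the idea is simply that you begin the answer yourself so the model continues it.

```python
def completion_prompt(instruction: str, started_output: str) -> str:
    """Build a completion-style prompt: state the instruction, then begin
    the answer yourself so the model continues the pattern."""
    return f"{instruction}\n\n{started_output}"

prompt = completion_prompt(
    "List the three primary colors as a numbered list.",
    "1. Red\n2.",  # starting the output steers the model to finish the list
)
```

Sending `prompt` to Gemini makes the expected format unambiguous: the model sees a numbered list already in progress and continues it.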
2. Clear and specific instructions

Specify constraints on everything: length ('one sentence'), structure ('as a table with columns for X, Y, Z'), and depth (system instructions emphasizing comprehensiveness vs. conciseness significantly change the output). A one-sentence summary constraint produces concise, accurate explanations; without it, Gemini will give you paragraphs.

💡System instructions are your secret weapon with Gemini. A single line like 'Be concise. Answer in 2-3 sentences max.' transforms verbose responses into scannable answers.
Controlling output depth
Explain machine learning in one paragraph (3-4 sentences). Target audience: business executives with no technical background. Use a real-world analogy. Do not use jargon.
Without constraints, you get a 500-word essay. With them, you get a focused explanation that actually serves your audience.
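The constrained prompt above can be parameterized as a reusable template. A minimal sketch; the template string mirrors the example in this section, and the variable names are illustrative.

```python
# Template packing length, audience, analogy, and jargon constraints together.
CONSTRAINED_EXPLAIN = (
    "Explain {topic} in one paragraph (3-4 sentences). "
    "Target audience: {audience}. "
    "Use a real-world analogy. Do not use jargon."
)

prompt = CONSTRAINED_EXPLAIN.format(
    topic="machine learning",
    audience="business executives with no technical background",
)
```

Keeping constraints in a template makes them easy to reuse and to tighten later, e.g. swapping '3-4 sentences' for 'one sentence'.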
3. Few-shot examples are essential

Google's own recommendation: 'Always include few-shot examples in your prompts.' Zero-shot (no examples) works for simple tasks, but few-shot dramatically improves consistency. Identical questions with different examples produce different results, proving that examples steer the model more than instructions alone. Use specific, varied examples. Show positive patterns, not anti-patterns. Keep formatting consistent across all examples.

💡Optimal example count varies by task. Classification usually needs 3-5 examples. Format conversion needs 2-3. When in doubt, start with 3.
Translation with style
Translate the following English text to French. Match the tone and formality level.

Example:
English: Hey, wanna grab coffee later?
French: Salut, on se prend un café plus tard ?

Example:
English: Dear Sir, I am writing to formally request...
French: Monsieur, j'ai l'honneur de vous adresser...

Now translate:
English: {{TEXT}}
Without examples, every translation comes out in the same generic tone. With varied examples, Gemini matches the formality of each input.
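Assembling few-shot prompts by hand gets error-prone once you have several examples, so a small builder helps keep the formatting consistent across all of them, as the guide recommends. A sketch only: the `few_shot_prompt` helper and the hardcoded English/French labels mirror the translation example above.

```python
def few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt with consistent 'Example:' formatting.

    examples: list of (source, target) string pairs.
    """
    parts = [task, ""]
    for source, target in examples:
        parts += ["Example:", f"English: {source}", f"French: {target}", ""]
    parts += ["Now translate:", f"English: {query}"]
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Translate the following English text to French. Match the tone and formality level.",
    [
        ("Hey, wanna grab coffee later?", "Salut, on se prend un café plus tard ?"),
        ("Dear Sir, I am writing to formally request...",
         "Monsieur, j'ai l'honneur de vous adresser..."),
    ],
    "Thanks so much for your help!",
)
```

Because every example goes through the same code path, the 'English:'/'French:' prefixes and blank-line separators stay identical, which is exactly what steers the model.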
4. Context and prefixes

Contextual information helps the model understand constraints, and including reference materials significantly improves accuracy. Prefixes are a powerful and underused technique that serves three purposes:

Input prefixes mark semantic parts ('English:' vs 'French:').
Output prefixes signal the expected format ('JSON:').
Example prefixes label components for easier parsing.

These small labels dramatically improve the model's understanding of your prompt structure.

💡Output prefixes are especially useful. Starting your expected output with 'JSON:' or 'Analysis:' primes the model to respond in that exact format.
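Both prefix types can be combined in one prompt. A minimal sketch; the extraction task and the `prefixed_prompt` helper are illustrative, but the pattern (label the input, then end the prompt with the output prefix) is the one described above.

```python
def prefixed_prompt(email_text: str) -> str:
    """Combine an input prefix ('Email:') with an output prefix ('JSON:')."""
    return (
        "Extract the sender and subject from the email below.\n\n"
        f"Email: {email_text}\n\n"  # input prefix marks the semantic part
        "JSON:"                     # output prefix primes a JSON-shaped reply
    )

prompt = prefixed_prompt("From: ana@example.com\nSubject: Q3 report\n...")
```

Ending the prompt with 'JSON:' means the model's very next tokens are already inside the format you want.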
5. Breaking down complex prompts

For complex tasks, three strategies: 1) Break down instructions into separate prompts for each step, 2) Chain prompts where one output feeds into the next, 3) Aggregate responses by running parallel operations on different data portions. Chaining is the most powerful. Each step gets the model's full attention, and you can inspect intermediate results to catch errors early.

💡If a single prompt keeps dropping one of your requirements, that's the signal to split it into steps. Each step should have one clear objective.
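Chaining can be sketched as a loop that feeds each output into the next template. This is an assumption-laden sketch: `call_model` stands in for whatever function actually sends a prompt to Gemini, and the `{previous}` placeholder convention is illustrative.

```python
def chain(initial_input, step_templates, call_model):
    """Run prompt templates sequentially; each step's output becomes the
    next step's {previous} input, so intermediate results can be inspected."""
    result = initial_input
    for template in step_templates:
        result = call_model(template.format(previous=result))
    return result

steps = [
    "Extract the key claims from this article: {previous}",
    "Fact-check each of these claims: {previous}",
    "Write a summary using only the verified claims: {previous}",
]
```

Logging `result` inside the loop is where chaining pays off: you can catch a bad claim extraction at step one instead of debugging a bad final summary.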
6. Model parameters that matter

Max output tokens controls response length (100 tokens is roughly 60-80 words). Temperature controls randomness: 0 is deterministic, higher is creative. Critical for Gemini 3: keep the default 1.0 temperature. Deviating, especially lowering it, can cause looping and degraded performance on reasoning tasks. topK selects from the K most probable tokens. topP selects until cumulative probability hits a threshold (default 0.95). stop_sequences halts generation at specified text.

💡The Gemini 3 temperature thing is a gotcha. Unlike other providers where you lower temp for consistency, Gemini 3 works best at 1.0. Don't touch it.
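The parameters above map onto the Gemini API's generation config. A sketch of the request fields as a plain dict, assuming the REST-style camelCase field names; check the official API reference for the exact shape your SDK expects.

```python
# Illustrative generation config mirroring the parameters discussed above.
generation_config = {
    "temperature": 1.0,          # keep the default for Gemini 3; lowering it can cause looping
    "maxOutputTokens": 200,      # ~120-160 words
    "topP": 0.95,                # default nucleus-sampling threshold
    "topK": 40,                  # sample from the 40 most probable tokens
    "stopSequences": ["\n\n"],   # halt generation at a blank line
}
```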
7. Agentic workflow strategies

For complex AI agents built on Gemini, steer three behavioral dimensions: Reasoning (how deep it decomposes problems, how it diagnoses issues, how exhaustively it gathers information), Execution (adaptability to new data, persistence in error recovery, risk assessment), and Interaction (when to ask for permission, how verbose to be, output precision). Use methodical planning before action, comprehensive constraint analysis, and persistent problem-solving.

💡Define your agent's personality across all three dimensions in the system prompt. A 'cautious agent' should have high permission-seeking and low risk tolerance. A 'power user agent' should be the opposite.
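A 'cautious agent' system prompt covering the three dimensions might look like this. Purely illustrative wording, not an official template; the point is that each dimension gets an explicit line.

```python
# Hypothetical system prompt defining behavior along all three dimensions.
CAUTIOUS_AGENT_SYSTEM_PROMPT = """\
Reasoning: Decompose every task into explicit steps before acting. \
Gather all relevant context before proposing a fix.
Execution: Prefer reversible actions. On error, retry once, then report back.
Interaction: Ask for confirmation before any destructive operation. \
Keep responses under three sentences unless asked for more detail.
"""
```

A 'power user agent' would invert the Interaction and Execution lines: act without confirmation, tolerate more risk, stay terse.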

Key topics covered

Multimodal prompting
System instructions
Few-shot prompting
JSON mode
Parameter tuning
Agentic workflows
Context handling
Read the full guide
View the complete Google documentation
