Mistral
Official Docs
Intermediate

Prompting Capabilities

Mistral's official prompting guide. System prompt design, JSON mode, function calling, worded rating scales, and the specific anti-patterns to avoid.

Official Mistral docs →
Content sourced from official Mistral documentation
1. System prompt design

Mistral uses two input levels: system and user. The system prompt sets the role and behavioral rules and is managed by developers. The user prompt provides the specific task. Start with a concise role and task: 'You are a <role>, your task is to <task>.' You can use role-separated messages or concatenate them into one prompt. Keep the system prompt focused on WHO the model is and HOW it should behave; put WHAT to do in the user message.

💡A clean separation of concerns: system = identity and rules, user = task and data. Don't mix them.
Clean system prompt
[System] You are a customer support agent for AcmeSaaS. You are professional, concise, and helpful. Always check the knowledge base before answering. If you don't know something, say so and offer to escalate.

[User] A customer is asking why their data export is failing. Here's the error log: {{ERROR_LOG}}
The system prompt defines behavior. The user prompt provides the specific task and data. Clean separation means you can swap tasks without rewriting the role.
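In code, this separation maps directly onto role-tagged chat messages. A minimal sketch, assuming a chat-style messages API like Mistral's; the `build_messages` helper and the prompt text are illustrative, not part of any SDK:

```python
def build_messages(system_prompt: str, user_task: str) -> list[dict]:
    """Assemble a role-separated message list: identity and rules in the
    system message, task and data in the user message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_task},
    ]

# The system prompt stays fixed; only the user task is swapped per request.
SUPPORT_SYSTEM = (
    "You are a customer support agent for AcmeSaaS. You are professional, "
    "concise, and helpful. Always check the knowledge base before answering. "
    "If you don't know something, say so and offer to escalate."
)

messages = build_messages(
    SUPPORT_SYSTEM,
    "A customer is asking why their data export is failing. Here's the error log: ...",
)
```

Because the role lives entirely in the system message, reusing this for a different task means changing only the second argument.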
2. Structure and formatting

Organize instructions hierarchically with clear sections and subsections. Imagine writing for someone with zero prior context. This is critical with Mistral: the model performs significantly better with well-structured prompts. Use Markdown and/or XML-style tags because they're readable (easy for humans to scan), parsable (simple to extract programmatically), and familiar (Mistral models saw these formats extensively during training).

💡When in doubt, use Markdown headers for sections and XML tags for data boundaries. This combination works reliably across all Mistral models.
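One way to sketch that combination: a template with Markdown headers for the instruction sections and XML-style tags marking the data boundary. The section names and tag name here are illustrative assumptions:

```python
# Markdown headers organize instructions; XML-style tags fence off the data
# so the model can tell instructions from input unambiguously.
PROMPT_TEMPLATE = """\
# Role
You are a support engineer triaging error reports.

# Instructions
1. Identify the failing component.
2. Suggest one fix.

# Data
<error_log>
{error_log}
</error_log>
"""

def render_prompt(error_log: str) -> str:
    """Fill the data slot, leaving the instruction sections untouched."""
    return PROMPT_TEMPLATE.format(error_log=error_log)
```

The tags also make it easy to strip or swap the data programmatically without touching the instructions.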
3. Few-shot prompting

Embed examples directly in your prompt or use the standard user/assistant message format. For tasks like classification, show exact input/output pairs so the model learns the format, not just the concept. This is especially effective for JSON output, category mapping, and any task where format consistency matters more than creativity.

💡Use the message format (user/assistant turns) for few-shot when working with the chat API. It's cleaner than embedding examples as text.
Language detection with few-shot
[User] Detect the language:
La maison est belle.

[Assistant] {"language": "French", "confidence": 0.98}

[User] Detect the language:
Das Haus ist schön.

[Assistant] {"language": "German", "confidence": 0.95}

[User] Detect the language:
{{INPUT_TEXT}}
Two examples teach the exact JSON format and the confidence score pattern. The model follows the structure precisely.
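Building the same few-shot conversation programmatically keeps the examples in sync with the query format. A sketch, assuming a chat-style messages API; the helper name is illustrative:

```python
def few_shot_messages(examples, query, instruction="Detect the language:"):
    """Turn (input, output) example pairs into alternating user/assistant
    turns, then append the real query as the final user turn."""
    messages = []
    for text, answer in examples:
        messages.append({"role": "user", "content": f"{instruction}\n{text}"})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": f"{instruction}\n{query}"})
    return messages

EXAMPLES = [
    ("La maison est belle.", '{"language": "French", "confidence": 0.98}'),
    ("Das Haus ist schön.", '{"language": "German", "confidence": 0.95}'),
]

messages = few_shot_messages(EXAMPLES, "Il gatto dorme.")
```

Every example uses the identical instruction string, so the only thing that varies between turns is the text to classify.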
4. JSON mode and structured outputs

Mistral's JSON output enforcement ensures the model generates valid, parsable JSON every time. This is a game-changer for production pipelines where you need consistent structure. Define your schema clearly, use few-shot examples showing the exact JSON shape, and enable JSON mode in the API. The model will conform to your schema without adding markdown code blocks or explanatory text.

💡Always provide at least one JSON example in your prompt, even with JSON mode enabled. The example teaches the schema better than a description.
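A sketch of what this looks like in practice: a request body with JSON mode enabled, plus a parse-and-check step on the reply. The `response_format` field follows Mistral's chat API, but treat the exact payload shape as an assumption and check the current API reference; the model name and schema are illustrative:

```python
import json

# Request body with JSON mode enabled. The prompt still shows the schema:
# the example teaches the shape, the mode guarantees parsability.
request = {
    "model": "mistral-small-latest",
    "messages": [
        {
            "role": "user",
            "content": (
                'Detect the language. Reply as '
                '{"language": ..., "confidence": ...}\n'
                "La maison est belle."
            ),
        }
    ],
    "response_format": {"type": "json_object"},
}

def parse_reply(raw: str) -> dict:
    """Parse the model's reply and verify the expected fields are present."""
    data = json.loads(raw)
    missing = {"language", "confidence"} - data.keys()
    if missing:
        raise ValueError(f"reply missing fields: {missing}")
    return data
```

Even with JSON mode guaranteeing syntactic validity, the field check matters: valid JSON is not the same as JSON that matches your schema.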
5. What to avoid (Mistral-specific)

These anti-patterns cause the most problems with Mistral models:

Blurry quantitative adjectives ('too long', 'many', 'few'). Replace with exact numbers.
Vague words ('things', 'stuff', 'interesting'). State exactly what you mean.
Contradictions in long prompts. Use decision trees instead of conflicting rules.
Asking the model to count words. Provide character counts as input instead.
Generating unnecessary tokens. Request only what you need.

💡The word counting issue is universal but especially bad with Mistral. If you need exactly 100 words, don't ask the model to count. Write your constraint as a character range instead.
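The word-to-character conversion can live in your pipeline instead of the prompt. A sketch; the ~5-7 characters per word heuristic for English is an assumption of this example, not a figure from the guide:

```python
def char_range_for_words(n_words: int, lo: int = 5, hi: int = 7) -> tuple[int, int]:
    """Convert a word-count target into a character range, assuming roughly
    5-7 characters per word (including the trailing space) for English."""
    return (n_words * lo, n_words * hi)

def within_range(text: str, low: int, high: int) -> bool:
    """Validate the model's output against the range after generation."""
    return low <= len(text) <= high

# Build the constraint for the prompt from a ~100-word target.
low, high = char_range_for_words(100)
constraint = f"Write between {low} and {high} characters."
```

The model never has to count anything: the prompt carries a character range it can roughly honor, and your code does the exact check afterward.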
Avoiding vague language
Instead of: 'Write a fairly short summary covering the important things and anything interesting.'

Write: 'Write a 3-sentence summary. Include: the main finding, the methodology used, and one practical implication. Target audience: non-technical managers.'
'Fairly short', 'important things', and 'interesting' are all subjective. Specific constraints get consistent results.
6. Worded scales beat numeric scales

This is a unique Mistral insight: when rating or scoring, worded scales consistently outperform numeric scales. Instead of 'Rate on 1 to 5', use: 'Very Low (highly irrelevant), Low (not good enough), Neutral (not particularly interesting), Good (worth considering), Very Good (highly relevant).' The descriptions anchor each level. Convert to numbers after if needed.

💡This works because the model understands what 'Good (worth considering)' means much better than what '4' means. The label carries semantic information that numbers lack.
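Converting back to numbers afterward is a simple lookup. A sketch; the mapping values and the label-matching approach are illustrative assumptions:

```python
# The worded scale from the prompt, mapped to numbers for downstream use.
SCALE = {
    "Very Low": 1,
    "Low": 2,
    "Neutral": 3,
    "Good": 4,
    "Very Good": 5,
}

def score_from_label(reply: str) -> int:
    """Find a scale label in the model's reply. Longest labels are checked
    first so 'Very Good' is never mistaken for 'Good'."""
    for label in sorted(SCALE, key=len, reverse=True):
        if label in reply:
            return SCALE[label]
    raise ValueError(f"no scale label found in: {reply!r}")
```

The model reasons over semantically anchored labels; your code gets the numbers it needs for sorting or thresholding.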
7. Four core capabilities

Mistral's guide covers four practical patterns in depth: Classification (zero-shot and few-shot categorization), Summarization (condensing documents while preserving key information), Personalization (adapting output to user preferences and context), and Evaluation (assessing quality and relevance). Each pattern works best with the structured approach: clear role, explicit format, and examples.

💡For classification tasks, few-shot beats zero-shot almost every time with Mistral. Even one example improves accuracy significantly.
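For classification specifically, a few-shot prompt builder combines the pieces from earlier sections: a fixed instruction listing the categories, labeled examples as user/assistant turns, then the query. The categories, examples, and helper name here are illustrative:

```python
def classification_messages(categories, examples, text):
    """Build a few-shot classification conversation: instruction with the
    category list, labeled examples, then the text to classify."""
    instruction = "Classify the ticket into one of: " + ", ".join(categories) + "."
    messages = []
    for sample, label in examples:
        messages.append({"role": "user", "content": f"{instruction}\n{sample}"})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": f"{instruction}\n{text}"})
    return messages

msgs = classification_messages(
    ["billing", "bug", "feature-request"],
    [("I was charged twice this month.", "billing")],
    "The export button crashes the app.",
)
```

Even the single example here demonstrates the output format: a bare category label with no explanation, which the model then mirrors.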

Key topics covered

Function calling
JSON mode
Guardrailing
System prompts
Few-shot prompting
Classification tasks