Direct Instruction
Platforms:
Claude, OpenAI, Gemini, M365 Copilot
What It Is
Direct instruction means giving the model clear, explicit commands in imperative language. Instead of hinting at what you want (“It would be nice if…”) or asking open-ended questions, you tell the model exactly what to do: “Write a…”, “List the…”, “Analyze the…”. This is the foundation of effective prompting — a baseline technique that improves every other pattern.
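The contrast is easy to see side by side. A minimal sketch (the strings below are illustrative examples, not from any particular product):

```python
# A hinting prompt leaves the model guessing about scope, length, and format.
vague_prompt = "It would be nice to have something about our new speaker."

# A direct instruction states the action, the subject, and the constraints.
direct_prompt = (
    "Write a 100-word product announcement for our new wireless speaker. "
    "Use a friendly tone and end with a call to action."
)

# The imperative verb ("Write") plus explicit constraints (length, tone,
# ending) are what make the second prompt direct.
print(direct_prompt)
```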
Why It Works
Modern LLMs (large language models) are specifically trained to follow instructions through a process called instruction tuning and RLHF (Reinforcement Learning from Human Feedback). Direct, unambiguous commands align perfectly with this training. The more explicit your instruction, the less the model has to guess about what you want — and guessing is where errors, hallucinations, and off-target responses happen.
When to Use It
- Every prompt (this is a baseline technique, not situational)
- When zero-shot results are close but not quite right
- When you need precise control over what the model does and doesn’t do
- When working with others who will reuse your prompts
The Pattern
{Action verb} {specific task}.
{Constraints: length, format, audience, tone}.
{What to include or exclude}.

Filled-in example:
Write a 150-word product description for a wireless Bluetooth speaker.
Target audience: tech-savvy millennials.
Tone: casual and enthusiastic.
Include: battery life, water resistance, sound quality.
Exclude: technical specifications and pricing.

Examples in Practice
Structured risk assessment
Context: You need a concise risk analysis for a cloud migration proposal that your team can review quickly.
List the top 5 risks of migrating from on-premise to cloud infrastructure.
For each risk, provide:
- A one-sentence description
- Likelihood (high/medium/low)
- One mitigation strategy
Format as a numbered list.

Why this works: Every aspect of the output is specified — the number of items, the structure of each item, and the format. The model has no room to deviate.
Readability rewrite
Context: You’re adapting a technical report for a general-audience newsletter and need to lower the reading level.
Rewrite the following paragraph for a 9th-grade reading level. Keep the same meaning but simplify vocabulary and shorten sentences. Do not add new information.
[paste paragraph here]

Why this works: The constraints are explicit and measurable — reading level, meaning preservation, no additions. The “Do not add new information” instruction prevents the model from elaborating.
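Because the reading-level constraint is measurable, you can spot-check the model’s output yourself. A rough sketch using the standard Flesch-Kincaid grade formula — the syllable counter is a crude vowel-group heuristic of my own, not a library function:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups; every word gets at least one syllable."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

# Short words and short sentences score as a low grade level.
simple = "The dog ran home. It was fast. We all cheered."
print(round(fk_grade(simple), 1))
```

A dedicated readability library will be more accurate; this sketch only shows that “9th-grade reading level” is a checkable constraint, not a vague wish.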
Constrained comparison
Context: Your engineering team is choosing between two databases and needs a focused comparison, not a general overview.
Compare PostgreSQL and MongoDB for a real-time analytics workload processing 10,000 events per second. Focus only on: write throughput, query flexibility, and operational complexity. Respond in a two-column table.

Why this works: The comparison dimensions are locked down to three specific criteria, the workload is quantified, and the output format is specified. This prevents the model from producing a generic “pros and cons” list.
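In chat-style APIs, the direct instruction typically goes in the user message, while the system message sets persistent behavior. A sketch of the request payload — the shape follows the common OpenAI-style messages format, the model name is a placeholder, and no request is actually sent:

```python
# Chat-style request payload; the direct instruction lives in the user message.
request = {
    "model": "example-model",  # placeholder, not a real model ID
    "messages": [
        {"role": "system",
         "content": "You are a database consultant. Answer concisely."},
        {"role": "user",
         "content": (
             "Compare PostgreSQL and MongoDB for a real-time analytics workload "
             "processing 10,000 events per second. Focus only on: write throughput, "
             "query flexibility, and operational complexity. "
             "Respond in a two-column table."
         )},
    ],
}

# The user message carries the action verb, scope limits, and output format.
print(request["messages"][1]["content"])
```

Keeping the instruction in one self-contained user message (rather than spread across turns) makes the prompt easy for teammates to copy and reuse.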
Common Pitfalls
Related Techniques
- Zero-Shot Prompting — direct instruction makes zero-shot prompts sharper
- Output Formatting — specify exactly how the model should structure its response
- Contextual Prompting — combine with context for domain-specific direct instructions
- Prompt Engineering Overview
- Content Creation use case
Further Reading
- Ouyang et al. 2022 — Training Language Models to Follow Instructions with Human Feedback — arxiv.org/abs/2203.02155
- Wei et al. 2021 — Finetuned Language Models Are Zero-Shot Learners (FLAN) — arxiv.org/abs/2109.01652
- Zhang et al. 2023 — Instruction Tuning for Large Language Models: A Survey — arxiv.org/abs/2308.10792