Chapter 20 · 10 min read

Prompt Engineering for Agents

It's not the same as prompting ChatGPT. Here's what's different.

Prompting an agent is a different discipline. An agent has persistent memory, tools, and scheduled tasks, and it operates autonomously. The prompt engineering techniques that work for chat conversations often fail for agents.

🍕 Real-life analogy
Giving someone directions to your house is different from giving them a GPS. Directions work for one trip. A GPS works for every trip, adapts to traffic, and handles unexpected situations. Agent prompts are GPS systems, not directions. They need to handle any situation, not just the one you're thinking of right now.
ChatGPT-Style Prompting
  • "Write me a blog post about AI agents"
  • Single-turn, no context, no constraints
  • Generic output that sounds like everyone else
  • You re-explain your style every single time
Agent-Style Prompting
  • Agent reads SOUL.md, knows your voice and audience
  • Pulls today's research from daily notes automatically
  • References your past posts to maintain consistency
  • Produces a first draft that sounds like YOU wrote it

Chat Prompting vs Agent Prompting

Let's make this concrete. Here's the same task, prompted two ways:

❌ Chat-Style Prompt
"Hey, can you check if there are any important emails and let me know? Also maybe check Twitter for anything interesting about AI agents."

Vague. Open-ended. Requires follow-up questions. Fine for a conversation — terrible for an autonomous agent.

✅ Agent-Style Prompt
"Morning briefing. Check email: flag subjects containing 'urgent' or senders in knowledge/vip-list.md. Check Twitter: search '$AI agent' last 4hrs, filter 5k+ follower accounts. Output: bullets, max 15 lines. If nothing urgent: 'All clear.' Save to memory/YYYY-MM-DD.md."

Specific. Self-contained. Handles edge cases. Defines output format. Works at 3 AM with zero human input.

The difference? The agent prompt is a complete specification. It tells the agent what to do, how to do it, what the output looks like, where to save it, and what to do if there's nothing to report. The chat prompt requires a human to answer 5 follow-up questions before work can begin.

The "Fresh Session Test"

Here's a mental model that will improve every prompt you write: Imagine someone with amnesia reading this prompt. They're smart — really smart — but they have no context about you, your project, or your preferences. Can they complete the task from the prompt alone?

This is literally what happens with isolated cron sessions. No chat history. No memory of yesterday. Just the prompt + knowledge base files. If your prompt passes the "fresh session test," it'll work reliably at 3 AM.
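To make the isolation concrete, here's a minimal Python sketch of what a fresh session actually receives: the prompt file plus knowledge-base files, and nothing else. The function name and file layout are illustrative, not part of any specific platform.

```python
from pathlib import Path

def build_session_context(prompt_file: str, knowledge_dir: str) -> str:
    """Assemble everything a fresh session sees: the prompt plus
    knowledge-base files. No chat history, no memory of yesterday."""
    parts = [Path(prompt_file).read_text()]
    for f in sorted(Path(knowledge_dir).glob("*.md")):
        parts.append(f"--- {f.name} ---\n{f.read_text()}")
    return "\n\n".join(parts)
```

If the task can't be completed from this one string alone, the prompt fails the fresh session test.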

The 5 Principles of Agent Prompting

1. State, Don't Ask

In chat, you ask questions. For agents, you state rules. The agent should never need to ask "what do you want me to do?" — it should know from the prompt.

❌ Chat-style

"Can you check if there are any urgent emails?"

✅ Agent-style

"Check emails. If subject contains 'urgent' or sender is in VIP list, forward to Discord immediately. Otherwise, batch for daily summary."
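The same rule can be written as a function: every branch is decided up front, so nothing waits on a human. The names below are illustrative.

```python
def triage_email(subject: str, sender: str, vip_list: set[str]) -> str:
    """'State, don't ask': the routing rule is fully specified up front,
    so the agent never needs a follow-up question at run time."""
    if "urgent" in subject.lower() or sender in vip_list:
        return "forward_to_discord"
    return "batch_for_daily_summary"
```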

2. Define Boundaries, Not Just Tasks

An agent running autonomously needs to know what it shouldn't do, not just what it should.

Boundary-aware prompt
Generate today's content plan.

DO:
- Research trending topics in AI/SaaS
- Draft 3 tweet options
- Save drafts to content/drafts.md

DO NOT:
- Post anything without approval
- Engage with controversial topics
- Reply to other accounts
- Use hashtags (they look spammy)

IF UNSURE:
- Save to content/needs-review.md with your concern
- Do not ask me — decide based on these rules
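Here's a sketch of how those boundaries translate into routing logic. The topic list is an illustrative stand-in for "controversial," and the file paths mirror the prompt above:

```python
RISKY_TOPICS = {"politics", "religion"}  # illustrative stand-ins for "controversial"

def route_draft(draft: str) -> str:
    """Encode the DO NOT / IF UNSURE boundaries: risky drafts are
    diverted to needs-review instead of the normal drafts file.
    Nothing here posts anything -- posting always waits for approval."""
    if any(topic in draft.lower() for topic in RISKY_TOPICS):
        return "content/needs-review.md"
    return "content/drafts.md"
```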
3. Give Examples, Not Descriptions

Showing beats telling. Two examples are worth 200 words of explanation.

❌ Describing

"Write in a casual, witty, slightly irreverent tone that feels authentic and human."

✅ Showing

"Match this voice:
GOOD: 'Hot take: most AI agents are just chatbots with a cron job.'
BAD: 'In this article, we will explore the fascinating world of AI agents.'"

4. Design for Failure

Chat prompts assume success. Agent prompts must handle: tool failures, missing data, rate limits, unexpected inputs.

Failure-aware prompt
Search Twitter for $ES_F sentiment.

IF search returns results:
  → Analyze sentiment and include quotes
IF search fails or returns no results:
  → Use yesterday's sentiment from memory
  → Add note: "⚠️ Live search unavailable, using cached data"
IF search returns spam/irrelevant results:
  → Filter to accounts with 5k+ followers
  → If still no good data, state "Insufficient signal today"
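The same fallback ladder, sketched in Python. `search_fn` and `cached_fn` are hypothetical callables standing in for the agent's tools:

```python
def fetch_sentiment(search_fn, cached_fn, min_followers=5000):
    """Sketch of the fallback ladder above. search_fn returns a list of
    {"text": ..., "followers": ...} dicts; cached_fn returns yesterday's
    summary string. Both are hypothetical stand-ins for real tools."""
    try:
        results = search_fn("$ES_F")
    except Exception:
        results = None
    if not results:  # search failed or returned nothing -> use cache
        return cached_fn() + "\n⚠️ Live search unavailable, using cached data"
    signal = [r for r in results if r["followers"] >= min_followers]
    if not signal:  # only spam / low-follower noise survived the filter
        return "Insufficient signal today"
    return "Mentions: " + "; ".join(r["text"] for r in signal)
```

Every branch the prompt names has a code path; there is no "ask the human" branch.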
5. Enforce Output Format

Agents often need to produce output that feeds into other systems. Strict format prevents downstream failures.

Format enforcement
CRITICAL INSTRUCTION: Output MUST follow this exact format.
Do NOT summarize. Do NOT add commentary outside the template.
Do NOT skip sections.

# 📅 Daily Report — [Date]

## 📊 Metrics
- Tasks completed: [number]
- Tasks failed: [number]
- Total cost: $[amount]

## ✅ Completed
- [Task 1]: [one-line result]
- [Task 2]: [one-line result]

## ❌ Failed
- [Task]: [reason] → [suggested fix]

## 📋 Tomorrow
- [Priority 1]
- [Priority 2]

🔌 Platform-Specific Prompting Tips

🐾 OpenClaw — AGENTS.md is your system prompt
  • Put rules in AGENTS.md — it's loaded every session automatically
  • Use SOUL.md for personality/voice — keep it separate from rules
  • Cron job prompts should be self-contained (isolated sessions have no chat history)
  • Use "CRITICAL INSTRUCTION" prefix for rules the model must never skip
🤖 Claude — System prompt best practices
  • Claude follows system prompts very faithfully — invest time in getting them right
  • Use XML tags for structure: <rules>...</rules>
  • Claude responds well to "think step by step" for complex reasoning
  • For tool use: describe tools precisely — Claude will use them more effectively
💬 ChatGPT — Custom GPT instructions
  • GPT-4o benefits from explicit role assignment: "You are a [role] who [does what]"
  • Use numbered steps for multi-step tasks
  • ChatGPT can be chatty — add "Be concise. No preamble. No concluding remarks."
  • Custom GPT instructions have a character limit — prioritize rules over examples
💻 Cursor / Windsurf / Cline — .cursorrules
  • Keep .cursorrules under 2000 lines — too long and the model ignores parts
  • Put the most important rules first (models pay more attention to the beginning)
  • Include code examples of your patterns — the agent will match them
  • Negative examples ("never do X") are just as important as positive ones
🎯 The Meta-Skill
Prompt engineering for agents is less about clever tricks and more about thinking like a manager writing an employee handbook. Be clear, be specific, anticipate problems, give examples, and define boundaries. The better your "handbook," the less you need to intervene.

Before and After: Prompt Upgrades

Vague Prompts
  • "Write good content for social media"
  • "Be helpful and professional"
  • "Check my emails and handle them"
  • "Make my code better"
  • "Do market research on competitors"
Agent-Ready Prompts
  • "Write a Twitter thread (8-12 tweets) about [topic] using the voice in knowledge/resources/my-best-content.md. Start with a hook, end with a CTA."
  • "Respond in a casual, first-name tone. Use contractions. Max 3 sentences unless asked for detail. Never say 'certainly' or 'I'd be happy to.'"
  • "Read unread emails. For each: summarize in 1 line, suggest a reply draft, flag urgency (🔴🟡🟢). Skip newsletters and promotions."
  • "Review this PR for: security issues, performance bottlenecks, and missing error handling. Suggest fixes with code snippets. Ignore style/formatting."
  • "Check pricing pages of [3 URLs]. Extract: plans, prices, feature limits, and free tier details. Format as a comparison table in knowledge/resources/competitor-pricing.md."

The Prompt Structure That Works

Every great agent prompt has these five sections, in this order:

  1. Identity — who is this agent? What's its role?
  2. Context — what files/info should it load? What's the situation?
  3. Task — what specifically should it DO? (concrete verbs)
  4. Constraints — what should it NOT do? What are the boundaries?
  5. Output format — how should the result be structured?

Miss any of these and your agent fills in the blanks with assumptions. Sometimes good assumptions. Often terrible ones.

Advanced: The Few-Shot Technique

Instead of describing what you want, show examples. Include 2-3 examples of ideal output in your prompt. The agent will pattern-match and produce similar results. This works better than any amount of description for things like tone, format, and style.

Think of it like training a new employee: "Here are three reports I loved. Make the next one like these." Way more effective than a 500-word style guide.
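A sketch of assembling such a few-shot prompt from example files (the function name and file layout are illustrative):

```python
from pathlib import Path

def few_shot_prompt(task: str, example_paths: list[str]) -> str:
    """Build a prompt that shows 2-3 ideal outputs before the task,
    so the model pattern-matches instead of guessing at 'tone'."""
    examples = "\n\n".join(
        f"EXAMPLE {i}:\n{Path(p).read_text().strip()}"
        for i, p in enumerate(example_paths, 1)
    )
    return f"Match the style of these examples.\n\n{examples}\n\nTASK: {task}"
```

Point it at the two or three pieces you're proudest of, and the "style guide" writes itself.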

