Introduction to Generative AI: Prompt Engineering

This article introduces prompt engineering for generative AI models and is written for a technical audience (hackers, builders, and curious tinkerers). It covers core concepts, practical techniques, example patterns, tooling, evaluation strategies, and ethical and security considerations.

What is Prompt Engineering?

Prompt engineering is the craft of designing, structuring, and tuning inputs (prompts) for generative models to steer their outputs toward desired behavior. For text LLMs it typically includes:

  * choosing the wording, structure, and ordering of instructions
  * supplying relevant context and few-shot examples
  * constraining the output format, length, and style
  * defining fallback behavior for unknown or out-of-scope requests
  * iterating against test inputs and measuring the results

Good prompt engineering reduces iteration time, increases reliability, and helps align outputs with safety and quality requirements.

Core principles and mental models

Typical prompt anatomy

A robust prompt generally contains:

  * a role or system instruction that sets scope and tone
  * context: the facts or data the model should rely on
  * the task: a clear, single statement of what to do
  * output constraints: format, length, and allowed values
  * fallback behavior for unknown or out-of-scope cases

Example skeleton (pseudocode):
<source lang="text">
System: You are a concise assistant that answers factually.
User: Context: <short facts>
User: Task: <what to do>
User: Output: return JSON with keys ["summary","confidence","sources"]
User: If unknown, respond with {"error":"unknown"}.
</source>
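
In code, the same skeleton maps onto a list of chat messages. The sketch below is illustrative only: it assumes the OpenAI Python SDK and a placeholder model name, neither of which is prescribed by this article; swap in whatever client and model you actually use.

<source lang="python">
# Minimal sketch: the skeleton above expressed as chat messages and sent to a model.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are a concise assistant that answers factually."},
    {
        "role": "user",
        "content": (
            "Context: <short facts>\n"
            "Task: <what to do>\n"
            'Output: return JSON with keys ["summary","confidence","sources"]\n'
            'If unknown, respond with {"error":"unknown"}.'
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
</source>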

Common prompt patterns (with use-cases)

Instruction with constraints (summarization, rewriting, extraction):

    Summarize the following text in <=50 words. Use bullet points. Do not invent facts.

Few-shot examples (classification, style transfer, format mimicry); see the messages sketch after this list:

    Example 1:
    Q: What is a rabbit?
    A: A small mammal that hops.
    Example 2:
    Q: What is a crow?
    A: A black bird known for mimicry.
    Now answer: Q: What is a fox?

Chain-of-thought (step-by-step reasoning, math, multi-step analysis):

    Think step-by-step, list assumptions, then give the final answer.

Role prompting (domain expertise, audience and tone control):

    You are a senior security engineer. Review this architecture and list 5 potential attack vectors.

Structured output (machine-readable pipelines, downstream parsing):

    Output must be valid JSON: {"vulns":[{"name":...,"severity":...}], "notes":"..."}

Refusal and guardrails (safety and policy enforcement):

    If the task requests personally identifiable info (PII), refuse with "I cannot provide PII".
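
The few-shot pattern can also be supplied as prior conversation turns instead of one long string, which chat-style APIs handle well. A minimal sketch, again assuming the OpenAI Python SDK and a placeholder model name.

<source lang="python">
# Minimal sketch: the few-shot pattern supplied as prior chat turns.
# Assumes the OpenAI Python SDK; the model name and examples are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "Answer in one short sentence, matching the style of the examples."},
    # Each example becomes a user/assistant pair the model can imitate.
    {"role": "user", "content": "Q: What is a rabbit?"},
    {"role": "assistant", "content": "A small mammal that hops."},
    {"role": "user", "content": "Q: What is a crow?"},
    {"role": "assistant", "content": "A black bird known for mimicry."},
    # The real question goes last.
    {"role": "user", "content": "Q: What is a fox?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
</source>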
    

Practical workflow (iterate like a hacker)

1. **Define success criteria.** What does "good" look like? (precision, format, speed)
2. **Start minimal.** Create the shortest prompt that could work.
3. **Test with representative inputs.** Use real or edge-case examples.
4. **Add constraints and examples** to fix misbehaviors.
5. **Measure outputs.** Use automatic checks (parsing, unit tests) where possible.
6. **Version prompts.** Track prompt changes and performance.
7. **Automate A/B testing.** Compare prompt variants on a dataset (a minimal harness is sketched after this list).
8. **Monitor for drift.** Models or data distribution can change; re-evaluate prompts periodically.
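
To make steps 5 and 7 concrete, here is a minimal harness sketch: prompt variants kept as plain templates, a stubbed call_model() standing in for your real client, and an automatic check that the output parses as the expected JSON. The prompts and dataset are placeholders, not part of the original article.

<source lang="python">
# Minimal sketch: automatic output checks and A/B comparison of prompt variants.
import json

PROMPT_VARIANTS = {
    "v1": 'Summarize the text below as JSON with a single key "summary".\nText: {text}',
    "v2": 'Return valid JSON only, with one key "summary", for the text below.\nText: {text}',
}

def call_model(prompt: str) -> str:
    """Placeholder: replace with a real LLM client call."""
    return '{"summary": "stub output"}'

def passes_checks(output: str) -> bool:
    """Automatic check: output must parse as JSON and contain a 'summary' key."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and "summary" in data

def score_variant(template: str, dataset: list[str]) -> float:
    """Fraction of inputs for which the variant produced a valid output."""
    hits = sum(passes_checks(call_model(template.format(text=text))) for text in dataset)
    return hits / len(dataset)

if __name__ == "__main__":
    dataset = ["example input 1", "example input 2"]  # use representative and edge-case inputs
    for name, template in PROMPT_VARIANTS.items():
        print(f"{name}: {score_variant(template, dataset):.2f}")
</source>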

Important model knobs explained

Advanced techniques

Safety, ethics, and security

Common pitfalls and how to avoid them

Example prompts with annotations

1) Simple summarization (human-friendly)

<source lang="text">
Task: Summarize the following article in 5 bullet points, each <= 20 words.
Article: <paste article here>
Constraint: Do not add facts not present in the article.
</source>

2) JSON-constrained vulnerability scan (machine-friendly)

<source lang="text">
System: You are a security assistant.
User: Given the text below, extract potential security issues and return valid JSON only with keys:
{"issues": [{"id":1, "title":"", "description":"", "severity":"low|medium|high", "evidence":""}], "meta":{"source":""}}
If none, return {"issues":[], "meta":{"source":""}}.
Text: <paste architecture notes>
</source>
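
Machine-friendly prompts like this one pay off only if you validate the reply before acting on it. A minimal sketch, assuming the raw model output is in a string named reply and that the keys mirror the prompt above.

<source lang="python">
# Minimal sketch: validate the JSON returned for prompt 2 before downstream use.
import json

ALLOWED_SEVERITIES = {"low", "medium", "high"}

def parse_issues(reply: str) -> list:
    """Parse and sanity-check the reply; raise on anything unexpected."""
    data = json.loads(reply)  # raises on malformed JSON; catch upstream to retry or fail loudly
    issues = data.get("issues", [])
    for issue in issues:
        if issue.get("severity") not in ALLOWED_SEVERITIES:
            raise ValueError(f"unexpected severity: {issue.get('severity')!r}")
    return issues

# `reply` stands in for the raw model output.
reply = (
    '{"issues": [{"id": 1, "title": "Open S3 bucket", "description": "...",'
    ' "severity": "high", "evidence": "..."}], "meta": {"source": "arch notes"}}'
)
print(parse_issues(reply))
</source>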

3) Few-shot style mimicry

<source lang="text">
You are a copywriter. Match the tone and brevity of these examples.

Example 1: Q: What is X? A: X is the smallest useful unit.

Example 2: Q: What is Y? A: Y helps make Z faster.

Now: Q: What is "prompt engineering"? Answer:
</source>

4) Multi-step decomposition (planning + execution)

<source lang="text">
Step 1: List 3 approaches to clean noisy logs.
Step 2: For the best approach, provide a 5-step implementation plan with commands.
Only after Step 1 is done, show Step 2.
</source>
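
Decomposition can also be run as prompt chaining in code: make the planning call first, then feed its output into the execution call. A minimal sketch with a stubbed call_model(); replace the stub with a real client.

<source lang="python">
# Minimal sketch: decomposition as two chained model calls.
def call_model(prompt: str) -> str:
    """Placeholder: replace with a real LLM client call."""
    return "(model output for: " + prompt.splitlines()[0] + ")"

# Step 1: ask only for the candidate approaches.
approaches = call_model("List 3 approaches to clean noisy logs. Number them.")

# Step 2: feed Step 1's output back in and ask for the detailed plan.
plan = call_model(
    "Here are candidate approaches:\n"
    f"{approaches}\n"
    "Pick the best one and give a 5-step implementation plan with commands."
)
print(plan)
</source>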

Testing & evaluation strategies

Tooling & workflow integration

Templates & cheat sheet

Summarization template:

<source lang="text">
System: You are a concise summarizer.
User: Summarize the text below into 3 bullet points (each <=25 words).
If the text contains no factual claims, reply "No facts".
Text: <paste here>
</source>

Extraction template:

<source lang="text">
System: You are a structured extractor.
User: Extract "name", "date", "amount" from this receipt. Output JSON only.
If a field is missing, put null.
Receipt: <paste here>
</source>

Critique template:

<source lang="text">
System: You are an expert reviewer in <domain>.
User: Provide 5 numbered critiques, each with a suggested fix and a severity (low/med/high).
Text: <paste here>
</source>

Glossary (short)

Final notes for hackers

Prompt engineering is both art and engineering. Treat prompts as software: minimal reproducible examples, tests, version control, CI for prompts, and monitoring in production. Combine LLM creativity with deterministic tools for reliability. Keep safety and privacy at the core of any deployment.

Appendix: quick checklist before shipping a prompt

  * Success criteria are written down (format, accuracy, speed).
  * Representative and edge-case inputs have been tested.
  * Output format is validated automatically (parsing, schema checks).
  * Refusal and safety behavior is specified and exercised.
  * The prompt is versioned and its performance is recorded.
  * Monitoring is in place to catch drift after deployment.
