Context Blocks: How I Structure AI Inputs That Actually Work
If you’ve read any of my recent posts, you know I’ve been deep in the weeds with Model Context Protocol (MCP), foundation models, and AI-native app design.
There’s a pattern I keep coming back to — not because it’s trendy, but because it works.
I call it context blocks.
These are small, composable structures of input that tell the model one coherent thing. Not a blob of hand-written prompt text. Not a mystery box of user state. Just clean, modular context the model can reason with.
This post breaks down how I use them — and why they’ve become foundational to how I build AI features that actually ship.
From Prompt Soup to Structured Context
When I first started integrating OpenAI into my apps, I did what most devs do:
“Stephen is a user. He’s trying to track calories. It’s 3PM. He just ate a banana. He has a protein goal of 140g. Recommend something.”
Sometimes it worked. Often it didn’t.
The outputs felt inconsistent — and worse, unpredictable. Every change in phrasing seemed to break things. Debugging was guesswork.
The solution wasn’t writing better prompts. It was structuring better context.
What Is a Context Block?
A context block is a typed, self-contained input object passed into the model. Each block answers a single, specific question like:
- Who is the user?
- What are they doing right now?
- What happened just before this?
- What goal are they trying to reach?
- What’s the time, location, or device context?
They're designed for the model, but also readable by humans. Here’s a real example:
{
  "user_profile": {
    "name": "Stephen",
    "goals": {
      "daily_protein_grams": 140,
      "calorie_target": 2200
    },
    "preferred_units": "metric"
  },
  "meal_context": {
    "logged_item": "banana",
    "macros": {
      "calories": 89,
      "protein": 1.1,
      "carbs": 23
    },
    "timestamp": "2025-10-06T15:03:00Z"
  },
  "time_context": {
    "local_time": "15:03",
    "day_phase": "afternoon"
  }
}
The model doesn’t need to guess who you are or what time it is. You’ve already framed it — cleanly, consistently, and testably.
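If you're building in Swift, the same blocks map naturally onto Codable types. Here's a minimal sketch of that mapping; the type and property names are just illustrations, not a fixed schema:

```swift
import Foundation

// One Codable type per context block, mirroring the JSON above.
// Names are illustrative, not a fixed schema.
struct UserProfile: Codable {
    struct Goals: Codable {
        var dailyProteinGrams: Int
        var calorieTarget: Int
    }
    var name: String
    var goals: Goals
    var preferredUnits: String
}

struct MealContext: Codable {
    struct Macros: Codable {
        var calories: Double
        var protein: Double
        var carbs: Double
    }
    var loggedItem: String
    var macros: Macros
    var timestamp: Date
}

struct TimeContext: Codable {
    var localTime: String
    var dayPhase: String
}
```

Encoded with `JSONEncoder` using `.convertToSnakeCase` keys and `.iso8601` dates, these structs produce the same shape of payload shown above.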
Why Use Context Blocks?
Here’s what context blocks unlock for me in practice:
- Modularity: I can include or exclude blocks depending on the flow — e.g. no `meal_context` needed when updating settings (see the sketch after this list).
- Reusability: Blocks like `user_profile` or `goals` remain relatively stable across sessions.
- Testability: I can isolate and test a single block’s effect on model behavior — no more prompt spaghetti.
- Developer Clarity: These structures help onboard other devs, make debugging easier, and improve traceability.
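To make the modularity concrete, here's a rough sketch of how blocks can be assembled per flow, building on the types above. The container and helper names are hypothetical, not AteIQ's actual code:

```swift
// A container whose optional properties are the individual blocks.
// Each flow populates only what it needs; nil blocks are omitted
// from the encoded JSON automatically.
struct ContextPayload: Codable {
    var userProfile: UserProfile? = nil
    var mealContext: MealContext? = nil
    var timeContext: TimeContext? = nil
}

// A meal-logging flow stacks several blocks...
func mealLoggingContext(profile: UserProfile, meal: MealContext, time: TimeContext) -> ContextPayload {
    ContextPayload(userProfile: profile, mealContext: meal, timeContext: time)
}

// ...while a settings flow needs only the profile.
func settingsContext(profile: UserProfile) -> ContextPayload {
    ContextPayload(userProfile: profile)
}
```

Because each block is its own type, you can also swap or drop a single block in isolation, which is where the testability comes from.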
Common Context Blocks I Use
Here’s a rough inventory of blocks I now use across projects:
| Block Name | Purpose |
|---|---|
| user_profile | Identity, preferences, goals |
| task_context | Current user action or app intent |
| time_context | Local time, day phase, urgency level |
| input_payload | Raw input from the user (text, voice, etc.) |
| history_context | Recent behavior, successes, failures |
| system_state | App version, platform, feature flags |
Some flows only need 2–3 blocks. Others stack 5–6 to create a rich model of what’s happening.
Foundation Models Love This Too
If you're using Apple’s Foundation Models (especially on-device), this approach still holds.
The APIs (e.g. `GenerateTextRequest`) favor structured input, and I’ve found the clarity of context blocks improves response quality — even without OpenAI in the loop.
Whether you’re sending data to GPT-4 or to Apple’s on-device model, the philosophy is the same:
Shape inputs like an API, not a paragraph.
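In practice, that means the payload the model receives is the encoded blocks plus a short instruction, not a paragraph of interleaved facts. A minimal sketch, reusing the types above (the instruction wording is only illustrative):

```swift
import Foundation

// Encode the assembled blocks into the same snake_case JSON shown earlier.
func encodedContext(for payload: ContextPayload) throws -> String {
    let encoder = JSONEncoder()
    encoder.keyEncodingStrategy = .convertToSnakeCase
    encoder.dateEncodingStrategy = .iso8601
    encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
    let data = try encoder.encode(payload)
    return String(decoding: data, as: UTF8.self)
}

// The model input is a short instruction plus the context blocks,
// whether it's headed to GPT-4 or to an on-device model.
func buildPrompt(for payload: ContextPayload) throws -> String {
    let context = try encodedContext(for: payload)
    return """
    Recommend the user's next snack using the context below.

    \(context)
    """
}
```

From there, the string is simply the input to whichever API you're calling; the structure lives in the blocks, not in the phrasing.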
Start Small. Structure Early.
You don’t need a complex schema or data pipeline to benefit from this approach. Here’s how to begin:
- Pick a single AI feature in your app.
- Break it into 3–4 logical context blocks.
- Format them as JSON.
- Feed them to your model.
- Observe and refine.
The model will thank you — and so will your future self.
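That last step is where the structure pays off: you can hold every block constant, vary just one, and compare outputs. Here's a hedged sketch, where `sendToModel` is a stand-in for whatever call your app actually makes:

```swift
// Vary a single block and compare the model's responses.
// This kind of isolated experiment is nearly impossible with a
// hand-written prompt.
func compareDayPhases(
    base: ContextPayload,
    sendToModel: (String) async throws -> String  // placeholder for your real model call
) async throws {
    for dayPhase in ["morning", "afternoon", "evening"] {
        var payload = base
        payload.timeContext = TimeContext(localTime: "15:03", dayPhase: dayPhase)
        let prompt = try buildPrompt(for: payload)
        let output = try await sendToModel(prompt)
        print("day_phase=\(dayPhase):", output)
    }
}
```

Run it across a few variants and you quickly learn which blocks actually move the model's behavior.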
Why This Matters
This isn’t about trends. It’s about building AI that actually works in production.
Context blocks give you control. They let you reason about what the model knows. They help you write less, debug faster, and scale more confidently.
In AteIQ, this structure is why we can handle flexible meal input without the model hallucinating wildly. It’s why feedback feels consistent. It’s why we can evolve the experience without reengineering every prompt.
If you’re building AI-powered features — especially for consumer apps — I can’t recommend this pattern enough.
If you’re exploring this space too, I’d genuinely love to connect; drop me a line.
And if you’re just starting out, I hope this blog becomes a place you can revisit and grow alongside.
Until next time — structure that context.