Designing Context: The Craft Behind Smarter AI Inputs

What I’ve learned from applying MCP principles in real apps — and how to design context that actually helps your model think.

At this point, most of us have played with prompts.
Some of us have even tried building with them.
But if you're serious about AI-native products, you quickly realise: prompting alone won't scale.

This is where context design comes in. And once you see it, you can’t unsee it.

In my last few posts, I unpacked what MCP (Model Context Protocol) is and why it matters.
Since then, I’ve been hands-on — trying to structure smarter inputs, shape context graphs, and define the right kind of data to feed the model.

This post isn’t about what MCP is.
It’s about how to use it well.


1. Good Context Isn’t Big — It’s Relevant

It’s tempting to treat MCP like a data dump: just pass everything the model might need, and let it figure it out. I did this at first. And it worked — sometimes. But often, the model latched onto irrelevant details, or worse, hallucinated its way around a noisy input.

Here’s what I learned: context should feel more like a scalpel than a sponge.

Your goal isn’t to be exhaustive — it’s to be selective. What data helps the model reason more effectively in this moment? What’s signal, and what’s just background noise?

One way I approach this: I imagine the model is a teammate joining midway through a project. What do they need to catch up quickly? What can I leave out?

Designing context means exercising judgment. You’re not just formatting data. You’re curating relevance.
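
To make that concrete, here's a minimal Swift sketch of what "selective" can look like. The types and field names are purely illustrative (not a real API): each piece of context is tagged with the kinds of request it actually helps, and only the matching pieces make it into the prompt.

struct ContextItem {
    let key: String
    let value: String
    let relevantTo: Set<String>   // the kinds of request this item actually helps with
}

func selectContext(for intent: String, from items: [ContextItem]) -> [String: String] {
    var selected: [String: String] = [:]
    // Only items tagged as relevant to this intent survive the cut.
    for item in items where item.relevantTo.contains(intent) {
        selected[item.key] = item.value
    }
    return selected
}

// A "task_logging" request gets time and workload, but not UI preferences.
let allContext = [
    ContextItem(key: "local_time", value: "22:00", relevantTo: ["task_logging", "summary"]),
    ContextItem(key: "open_tasks", value: "2 incomplete", relevantTo: ["task_logging"]),
    ContextItem(key: "prefers_dark_mode", value: "true", relevantTo: ["ui_settings"])
]
let promptContext = selectContext(for: "task_logging", from: allContext)

The point isn't this exact shape. The point is that inclusion becomes a decision, not a default.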


2. Design Context Like It’s Modular

When I first started shaping inputs for GPT-4, I’d write everything inline — a big, handcrafted prompt with messy interleaved values like:

“Stephen is trying to finish work. He's had 3 meetings today. It’s 8:30 PM. He wants to wrap up a design handoff.”

It worked... until it didn’t.

As the number of variables grew, things got brittle. Prompts ballooned. Outputs wobbled.

What changed everything was thinking in modules — structuring the input as reusable context blocks. Things like:

  • userProfile
  • goals
  • recentActivity
  • timeContext
  • inputPayload

Each module tells the model one coherent thing.
Together, they paint the full picture.
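
Here's a rough Swift sketch of that idea (the names are illustrative, not a real API): each block renders one coherent slice of context, and the prompt context is just their composition.

import Foundation

// Each block is responsible for one slice of the picture.
protocol ContextBlock {
    var name: String { get }
    func render() -> [String: Any]
}

struct UserProfile: ContextBlock {
    let name = "userProfile"
    let role: String
    func render() -> [String: Any] { ["role": role] }
}

struct TimeContext: ContextBlock {
    let name = "timeContext"
    let date: Date
    func render() -> [String: Any] {
        let hour = Calendar.current.component(.hour, from: date)
        return ["local_time": "\(hour):00", "day_state": hour >= 20 ? "late evening" : "daytime"]
    }
}

// Compose whichever blocks this particular request actually needs.
func buildContext(from blocks: [ContextBlock]) -> [String: Any] {
    var context: [String: Any] = [:]
    for block in blocks { context[block.name] = block.render() }
    return context
}

let context = buildContext(from: [
    UserProfile(role: "software engineer"),
    TimeContext(date: Date())
])

Swap blocks in and out per request, and the rest of the pipeline doesn't have to change.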

This mindset has a few big upsides:

  • You can test modules independently
  • You can cache/reuse components
  • You can progressively build complexity without breaking the whole flow

It also makes your prompt logic way easier to maintain — and reason about — especially as your product grows.

In AteIQ, this modular approach is the only reason the AI layer scales at all. It allows me to slot in the right data for the moment — a breakfast meal? a late-night snack? a logged pattern? — without rewriting everything from scratch.

It’s like building a flexible lens. Not a fixed sentence.


3. Context Evolves — and So Should You

Here’s a thing I didn’t expect when I first started shaping context:
Not all context should last forever.

Some things matter now.
Some things matter always.
Some things… just get in the way.

A mistake I made early on was trying to persist everything the user did — just in case it was helpful later. But more often than not, that cluttered the model’s ability to focus on what mattered in the moment.

Think of context as living data. You’re not just creating it — you’re curating it.
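
One lightweight way to encode that (again, just a sketch with made-up names): tag every entry with a lifetime, and filter out anything stale before it ever reaches the prompt.

import Foundation

enum ContextLifetime {
    case persistent        // always relevant (e.g. role, timezone)
    case session           // relevant to the current flow only
    case expiring(Date)    // relevant until a point in time
}

struct ContextEntry {
    let key: String
    let value: String
    let lifetime: ContextLifetime
}

// Only entries that are still "alive" make it into the next prompt.
func activeEntries(_ entries: [ContextEntry], at now: Date = Date(), sessionLive: Bool = true) -> [ContextEntry] {
    entries.filter { entry in
        switch entry.lifetime {
        case .persistent: return true
        case .session: return sessionLive
        case .expiring(let deadline): return now < deadline
        }
    }
}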


So what does that look like in practice?

Let’s say you’re building a productivity app. A user is logging a new task at 10PM.

Prompt v1 (messy, overloaded):

Stephen is a software engineer working on 3 projects.
He prefers using dark mode.
He uses time-blocking to manage his day.
He hasn’t completed 2 tasks today.
This is the 5th task he's logged today.
It’s 10PM.
He just added: "Refactor onboarding flow for team handoff."

Cool. But that’s a lot. The model might latch onto the time-blocking bit (not relevant), or the task count (meh), or the wrong priority.


Prompt v2 (refined, modular, time-aware):

{
  "user_profile": {
    "role": "software engineer",
    "timezone": "GMT+1"
  },
  "current_focus": {
    "project": "Onboarding refactor",
    "task": "Refactor for team handoff"
  },
  "time_context": {
    "local_time": "22:00",
    "day_state": "late evening"
  },
  "behavior_patterns": {
    "evening_productivity": "high"
  }
}

The model now understands:

  • Who Stephen is
  • What he's doing now
  • When it’s happening
  • Why that time might affect how he works

It’s not just feeding the model facts.
It’s feeding it a frame of mind.
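
For completeness: the glue between a modular context dictionary and a JSON block like the one above is tiny. Something along these lines (a hypothetical helper, not any specific SDK) is all it takes:

import Foundation

// Serialize the assembled context dictionary into the JSON block
// that gets attached to the model request.
func contextJSON(_ context: [String: Any]) throws -> String {
    let data = try JSONSerialization.data(
        withJSONObject: context,
        options: [.prettyPrinted, .sortedKeys]
    )
    return String(data: data, encoding: .utf8) ?? "{}"
}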


How I’m Applying This in AteIQ

Without going too deep into specifics, this idea of evolving context plays a big role in how AteIQ understands and reacts to user input.

What you log at breakfast isn’t always relevant at dinner — but sometimes it is. And if you're hitting your protein target early in the day, maybe the app’s tone shifts to affirmation instead of advice.

That’s not live yet — but it’s coming.
AteIQ is already being built with context in mind, and as the roadmap progresses, that design philosophy will become more and more visible.
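
As a purely hypothetical sketch (this isn't AteIQ code, and the feature isn't live), that kind of tone shift could be as simple as a context-derived flag:

enum CoachTone {
    case affirmation   // goal already on track
    case advice        // room to improve
}

func tone(proteinLogged: Double, proteinTarget: Double, hour: Int) -> CoachTone {
    // Hitting most of the target early in the day reads as "on track".
    let onTrackEarly = hour < 14 && proteinLogged >= proteinTarget * 0.8
    return (onTrackEarly || proteinLogged >= proteinTarget) ? .affirmation : .advice
}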


Context design isn’t about stuffing everything in.
It’s about creating space for the model to think clearly.


What This Unlocks Next

Right now, context design feels like an emerging discipline — somewhere between prompt engineering, UX design, and systems thinking.

But it won’t stay that way for long.

We’re going to see tooling emerge:

  • Context engines that help apps structure inputs in real time
  • Declarative schemas for common roles, flows, or tasks
  • Visual context editors for designing how a model “sees” your user
  • Maybe even LLM debuggers that track what the model knew vs what it used

And when that happens, context won't be a byproduct of AI apps.
It’ll be a first-class layer in every product stack.

If you’re building today, you’re early.
If you’re learning now, you're future-proofing your skillset.

Because the next generation of apps won’t just be AI-powered.
They’ll be context-aware by design.


This post explored how MCP can transform AI apps from reactive tools into deeply contextual teammates. I’ll be sharing more — from design strategies to real-world experiments inside AteIQ.

If you’re exploring this space too, I’d genuinely love to connect. Drop me a line.
And if you’re just getting started, I hope this blog becomes a place you can revisit and grow alongside.

Until next time — structure that context.

Stephen Dixon

iOS Developer. Previously at strong.app and buffer.com. Founder ios-developers.io. Building and designing for screens since 1998!
Manchester, England