Getting to Grips with MCP: My Early Learnings (and Why You Should Care)

I’ll admit it: when I first heard “MCP” in the context of AI, I tilted my head like a confused Labrador. Model Context Protocol? Cool acronym. No idea what it meant.

Fast forward a week or so, and I’ve spent a few evenings poking around, reading docs, and testing small things to wrap my head around it. This post is me capturing that early-stage exploration — what clicked, what confused me, and what I think is worth keeping an eye on.

So… what is MCP?

At its core, the Model Context Protocol (MCP) is an open standard for structuring the context you pass into a language model, like GPT-4, so its responses are more useful, grounded, and relevant. It originated at Anthropic, but the thinking behind it is spreading across the AI world, including among devs working with OpenAI's APIs: move beyond loose prompts and toward typed, contextual inputs.

Instead of just chucking a big prompt at the model and hoping for the best, MCP-style thinking says:

“Let’s define what the model should know, cleanly and clearly, before it does anything.”

If you’re building an AI feature that behaves more like a teammate — helping someone plan, write, organize, search, decide — then MCP is your friend. It gives structure to the chaos.

What I’ve Learned So Far

1. It’s all about relevant structure

The goal isn’t to overwhelm the model with data — it’s to give it the right data. The stuff that helps it reason more effectively in a given moment.

If you're building a nutrition app (like I am), that might mean defining the user's dietary goals, recent meals, preferred units, and the current time of day — as structured context objects, not just plain text.
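To make that concrete, here's a rough sketch in Swift of what those context objects might look like. These types (PromptContext, DietaryGoals, Meal) are hypothetical and simplified for this post, not actual AteIQ code:

```swift
import Foundation

// Hypothetical context objects for a nutrition app.
// Not real AteIQ code; just the shape of the idea.
struct DietaryGoals: Codable {
    let dailyCalorieTarget: Int
    let proteinGrams: Int
}

struct Meal: Codable {
    let name: String
    let calories: Int
    let loggedAt: Date
}

struct PromptContext: Codable {
    let goals: DietaryGoals
    let recentMeals: [Meal]
    let preferredUnits: String   // e.g. "metric" or "imperial"
    let localTime: Date
}
```

The point isn't the specific fields; it's that the model's inputs are now typed, named, and reviewable, the same way the rest of your codebase is.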

2. It’s like creating a mental model for the model

A correction to my first impression: MCP actually is an official spec, with SDKs and a growing ecosystem. But what clicked for me is the protocol in spirit. Think clean JSON, typed objects, and consistent formats that allow the model to "understand" its operating environment.

Some people are calling this the start of "agentic" behavior — I think of it more like giving the model memory and awareness, without turning it into a sci-fi butler.
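Building on the sketch above, "clean JSON" can be as simple as encoding those typed objects and embedding the result in the system message. This is just one possible approach, not an official MCP API:

```swift
// Turn the typed context into clean, predictable JSON and hand it
// to the model as part of the system message.
let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
encoder.dateEncodingStrategy = .iso8601

let context = PromptContext(
    goals: DietaryGoals(dailyCalorieTarget: 2200, proteinGrams: 140),
    recentMeals: [Meal(name: "Porridge", calories: 350, loggedAt: .now)],
    preferredUnits: "metric",
    localTime: .now
)

// Force-try is fine for a sketch; handle errors properly in real code.
let contextJSON = String(data: try! encoder.encode(context), encoding: .utf8)!

let systemMessage = """
You are a nutrition assistant. The user's current context:
\(contextJSON)
"""
```

Sorted keys and ISO-8601 dates matter more than they look: consistent formatting means the model sees the same shape every single time.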

3. OpenAI tools are already playing well with this style

Function calling, tool usage, retrieval-augmented generation (RAG) — all of these work better when you treat context as a first-class citizen. You’re not just slapping together a prompt; you’re designing a system of inputs, and MCP thinking helps you scale that without it becoming spaghetti.
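For example, a function-calling tool definition is really just more structured context. The dictionary below follows the general shape of OpenAI's function-calling format (a name, a description, and JSON-Schema parameters), but the log_meal function itself is made up for illustration:

```swift
import Foundation

// A tool definition is just more structured context. This follows the
// shape of OpenAI's function-calling format; "log_meal" is a made-up
// example, not a real API.
let logMealTool: [String: Any] = [
    "name": "log_meal",
    "description": "Log a meal the user just ate",
    "parameters": [
        "type": "object",
        "properties": [
            "name": ["type": "string", "description": "What the user ate"],
            "calories": ["type": "integer"],
            "units": ["type": "string", "enum": ["metric", "imperial"]] as [String: Any]
        ] as [String: Any],
        "required": ["name", "calories"]
    ] as [String: Any]
]

// Serializes cleanly, because it was structured data all along.
let toolJSON = try! JSONSerialization.data(withJSONObject: logMealTool)
```

The model never executes anything itself; it just tells you which tool to call and with what arguments. That's exactly the kind of predictability structured context buys you.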

Why This Matters (Even If You’re Not Building AI Apps)

Because soon, almost every app will have some kind of intelligent feature — even if it’s just a better search bar or autofill. And when that time comes, giving models high-quality context will be a superpower.

This isn’t about hype. It’s about control. Clarity. Predictability.
The difference between “meh” and “magic.”

Takeaways for Fellow Devs

  • Don’t wait for the perfect tutorial — start small and play with it.
  • Treat context as code — define types, structure your inputs.
  • GPT-4 and tools like function calling love this approach.
  • You don’t need to build a full-blown agent to start applying MCP thinking.

This is just the beginning of my MCP learning curve. I’m hoping to write more — maybe share actual examples, code snippets, or even how I’m shaping context in AteIQ behind the scenes.

If you're experimenting with this stuff too, let’s connect.
And if you're curious but unsure where to start, I’ll try to make this blog a place you can come back to and grow alongside me.

Until next time — structure that context!

Stephen Dixon

iOS Developer. Previously at strong.app and buffer.com. Founder ios-developers.io. Building and designing for screens since 1998!
Manchester, England