Getting to Grips with MCP: My Early Learnings (and Why You Should Care)

Stephen Dixon
· 2 min read

When I first heard about “MCP” in the context of AI, I shrugged it off.

Model Context Protocol? Sounds like a cool acronym. No clue what it actually meant.

A few days later, I’d read the docs, run a handful of tests, and started thinking differently about how I design AI features — especially in my apps that use OpenAI under the hood.

This post captures what I’ve learned so far. Not as a guide. More as a field note — from one developer to another.


What is MCP?

At its core, Model Context Protocol (MCP) is about structuring inputs to a language model in a deliberate, predictable way.

Rather than tossing raw prompts at GPT and hoping for relevance, MCP-style thinking encourages this mindset:

Define what the model should know — and structure that knowledge clearly — before asking it to respond.

If you’re building AI features that behave more like teammates (assistants, coaches, planners, etc.), MCP helps you manage context with more control and consistency.

Strictly speaking, MCP is a formal, open specification (Anthropic published it in late 2024) for connecting models to tools and data. But what grabbed me first was the protocol in spirit: the mindset that shapes how you pass data into models.
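To make that concrete, here's a minimal sketch in Python. The context keys and values are invented for illustration; only the shape of the messages list matches OpenAI's Chat Completions format.

```python
import json

# Unstructured: everything mashed into one prompt string, hoping for relevance.
raw_prompt = (
    "The user wants to lose weight, ate oatmeal this morning, "
    "prefers grams. What should they eat for lunch?"
)

# MCP-style: define what the model should know, then serialize it predictably.
context = {
    "goal": "lose_weight",        # hypothetical keys, for illustration only
    "recent_meals": ["oatmeal"],
    "units": "grams",
    "local_time": "12:15",
}

messages = [
    {
        "role": "system",
        "content": "You are a nutrition assistant. User context follows as JSON:\n"
        + json.dumps(context, indent=2),
    },
    {"role": "user", "content": "What should I eat for lunch?"},
]
```

Same information in both cases; the second version just gives the model a stable, inspectable shape to reason over.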


What’s Clicked for Me

1. It’s About Relevance, Not Volume

MCP isn’t about flooding the model with data. It’s about passing the right data — in the right shape — so the model can reason effectively in the moment.

In my case (building AteIQ, an AI-powered nutrition tracker), relevant context includes:

  • User dietary goals
  • Their recent meals
  • Preferred units (e.g. grams vs ounces)
  • Time of day or local timezone

I treat those as first-class context objects — not just text embedded in a prompt.
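Here's roughly what that looks like, sketched in Python. The class and field names are illustrative, not AteIQ's actual schema:

```python
from dataclasses import dataclass, asdict
from typing import List
import json

# Hypothetical context objects for a nutrition tracker.
@dataclass
class DietaryGoals:
    daily_kcal: int
    protein_g: int

@dataclass
class UserContext:
    goals: DietaryGoals
    recent_meals: List[str]
    units: str       # "grams" or "ounces"
    timezone: str    # IANA name, e.g. "Europe/London"

    def to_prompt_json(self) -> str:
        # One canonical serialization, so the model always sees the same shape.
        return json.dumps(asdict(self), sort_keys=True)

ctx = UserContext(
    goals=DietaryGoals(daily_kcal=2200, protein_g=140),
    recent_meals=["oatmeal", "chicken salad"],
    units="grams",
    timezone="Europe/London",
)
```

The payoff is that the prompt-assembly step becomes a dumb serializer, and everything upstream of it is typed, testable application code.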


2. It Feels Like Creating a Mental Model for the Model

MCP encourages you to think of the model not just as a text generator — but as a system responding to structured inputs.

I define types, use consistent keys, and favor clean JSON over fuzzy language. The model responds better because it has a stable mental model of its operating environment.

I’ve seen others call this the beginning of agentic design. Personally, I think of it as structured memory:
awareness without complexity.


3. OpenAI’s Tooling Rewards MCP Thinking

  • Function calling
  • Tool use
  • RAG pipelines
  • Multi-turn memory
  • The Assistants API

All of these work better when you treat context as code, not just unstructured text.

You’re not building prompts. You’re designing a system of inputs. And if you get that right, the outputs become far more reliable — even across different model versions.
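As one example, a function-calling tool definition is context-as-code in its purest form. The sketch below uses OpenAI's documented tools shape; the log_meal function and its parameters are hypothetical:

```python
# A tool definition in OpenAI's function-calling format. The outer shape
# follows the documented "tools" structure; the function name and its
# parameters are made up for this example.
log_meal_tool = {
    "type": "function",
    "function": {
        "name": "log_meal",
        "description": "Record a meal the user just ate.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "e.g. 'oatmeal'"},
                "quantity": {"type": "number"},
                "unit": {"type": "string", "enum": ["grams", "ounces"]},
            },
            "required": ["name", "quantity", "unit"],
        },
    },
}

# Passed as tools=[log_meal_tool] in a chat.completions.create call, this
# lets the model return structured arguments instead of free text.
```

Notice that this is a JSON Schema, not a prompt. You are telling the model, in a machine-checkable way, exactly what a valid response looks like.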


Why This Matters (Even If You’re Not Building Full AI Apps)

Because soon, nearly every app will have something AI-related:

  • Smart autofill
  • Relevancy-aware search
  • Contextual assistants
  • Inline suggestions

And when that time comes, the quality of your context will define the quality of your product.

MCP isn’t about jumping on a trend. It’s about creating clarity between your app and the model.

That’s the difference between:

  • "It kind of works sometimes"
  • "It knows exactly what I mean."

Takeaways for Fellow Devs

  • Start small. You don’t need a full agent.
  • Define inputs like you’d define data models.
  • Think in types and structures, not just prompt strings.
  • MCP is now a published spec; the mindset pays off even before you adopt it wholesale.
  • GPT-4, function calling, and RAG pipelines reward this mindset.

This post shares my first practical look at MCP — from the lightbulb moments to real usage inside AteIQ.

As I continue evolving the architecture across my AI-native apps, I’ll share more concrete patterns, failures, and refinements.

If you’re exploring this space too, I’d genuinely love to connect; drop me a line.

And if you're just getting started, I hope this blog becomes a place you can revisit and grow alongside.

Until next time — give your models something to work with.