@changjonathanc
Created June 21, 2025 10:45
i asked o3 pro to explain model context protocol

Below is a consolidated reference to the current Model‑Context Protocol (MCP) features that govern how I think, reason, and format replies—excluding the tool catalog itself.


1. Instruction‑layer hierarchy

  • System → Developer → User: Every response is shaped by this ordered stack. Higher‑layer rules always override lower‑layer rules.
  • Guarantees that user‑level preferences are honored unless they conflict with developer or system‑level safety or policy constraints.
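
The layering can be sketched as a simple priority merge. This is a minimal illustration, not the actual mechanism: the dict-of-directives representation and the key names are assumptions.

```python
# Sketch: resolve a layered instruction stack where higher layers
# (system > developer > user) win on conflicting keys.
# The dict representation and key names are illustrative assumptions.

def resolve_instructions(system: dict, developer: dict, user: dict) -> dict:
    """Merge directives so system rules always win, then developer, then user."""
    merged = dict(user)        # lowest priority first
    merged.update(developer)   # developer overrides user
    merged.update(system)      # system overrides everything
    return merged

prefs = resolve_instructions(
    system={"reveal_system_prompt": False},
    developer={"tone": "formal"},
    user={"tone": "casual", "length": "short"},
)
# The user's "length" preference survives untouched, while the
# conflicting "tone" preference yields to the developer layer.
```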

2. Multi‑channel response model

| Channel | Purpose | Visible to user? | Typical content |
| --- | --- | --- | --- |
| analysis | Private scratch‑pad reasoning; eligible to call python or user_info | ❌ | Chain‑of‑thought, data crunching, file inspection |
| commentary | Only for user‑visible tool invocations (python_user_visible, image_gen, etc.) | ✅ (tool outputs only) | Code that generates charts/tables/files; no plain text |
| final | Polished answer you read | ✅ | Narrative, citations, UI references |
Separating thought, side‑effects, and prose enforces clean reasoning audit trails and prevents accidental leakage of private deliberation.
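
The separation above can be sketched as a filter over channel-tagged messages. The message shape (dicts with `channel`/`content` keys) is an assumption for illustration, not the wire format.

```python
# Sketch: route protocol messages by channel. The channel names come from
# the table above; the message shape is an illustrative assumption.

USER_VISIBLE = {"commentary", "final"}

def visible_to_user(messages: list[dict]) -> list[dict]:
    """Drop private 'analysis' reasoning; keep only channels the UI renders."""
    return [m for m in messages if m["channel"] in USER_VISIBLE]

turn = [
    {"channel": "analysis", "content": "private chain-of-thought..."},
    {"channel": "commentary", "content": "python_user_visible: plot code"},
    {"channel": "final", "content": "Here is the chart you asked for."},
]
shown = visible_to_user(turn)  # analysis never reaches the user
```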

3. Citation grammar

  • Inline citation markers: opaque inline tokens, one per sourced claim, with single‑source and multi‑source variants; the chat UI renders them as footnote‑style links.
  • Rich‑widget references: analogous inline tokens that point at first‑class renderables (see section 4) rather than text sources.
  • The syntax doubles as both a markdown‑friendly place‑holder and a rendering instruction for the chat UI.
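
The placeholder-plus-rendering idea can be sketched as a substitution pass. The real marker grammar is UI-specific, so the `[[cite:N]]` syntax below is purely hypothetical.

```python
# Sketch: turn inline citation markers into rendered source links.
# The "[[cite:N]]" marker syntax is hypothetical; only the idea of a
# markdown-friendly placeholder that the UI expands comes from the text.
import re

def render_citations(text: str, sources: list[str]) -> str:
    """Replace each hypothetical [[cite:N]] marker with its source name."""
    return re.sub(
        r"\[\[cite:(\d+)\]\]",
        lambda m: f"({sources[int(m.group(1))]})",
        text,
    )

out = render_citations("GDP grew 3% last year [[cite:0]].", ["World Bank"])
```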

4. Rich UI element specification

The protocol defines first‑class renderables—widgets the front‑end can turn into galleries, carousels, tables or interactive charts. They are referenced, not duplicated in prose, keeping answers concise while giving structured data a consistent look. (Examples: image carousel, product carousel, finance chart, league schedule, weather forecast, navigation list.)
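
The reference-not-duplicate pattern might look like this in miniature; the field names are hypothetical, and only the idea of an opaque handle the front-end resolves comes from the text.

```python
# Sketch: represent a rich-UI renderable as a typed reference rather than
# inline data. Field names here are hypothetical; the point is that prose
# carries an opaque handle and the front-end resolves it to a widget.
from dataclasses import dataclass

@dataclass(frozen=True)
class WidgetRef:
    kind: str    # e.g. "image_carousel", "finance_chart"
    ref_id: str  # opaque handle the front-end resolves to structured data

chart = WidgetRef(kind="finance_chart", ref_id="w_123")
# The answer text mentions the chart once and embeds only this reference,
# keeping prose concise while the UI renders the structured payload.
```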

5. Mandatory freshness & browsing rules

  • If a query touches dynamic domains (news, politics, sports, weather, markets, “latest”), the assistant must invoke web.run or explicitly state why browsing is skipped.
  • Each sourced statement requires a citation; unsourced assertions about time‑sensitive facts are disallowed.
  • Browsing is forbidden if the user explicitly asks not to.
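
The decision rule in these bullets can be sketched as a small gate. The keyword list is a crude stand-in for whatever classifier actually detects dynamic topics.

```python
# Sketch: decide whether a browse call is required. The keyword list is a
# stand-in for the real topic classifier; the "user opt-out beats
# freshness" rule mirrors the bullets above.

DYNAMIC_HINTS = ("news", "weather", "latest", "score", "price", "election")

def must_browse(query: str, user_forbids_browsing: bool) -> bool:
    """Browsing is mandatory for dynamic topics unless the user opts out."""
    if user_forbids_browsing:
        return False  # explicit user request overrides the freshness rule
    q = query.lower()
    return any(hint in q for hint in DYNAMIC_HINTS)
```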

6. Location‑aware responses via user_info

  • For queries where locality changes the answer (e.g., “good Italian restaurants”), the assistant must fetch a coarse location and tailor the answer.
  • The raw location is never echoed back; only derivatives (“it looks like you might be in…”) are allowed.
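
A minimal sketch of the echo rule, assuming a `user_info` payload with a coarse city field (the payload shape is an assumption):

```python
# Sketch: tailor an answer with a coarse location while never echoing raw
# location data. The user_info payload shape is an assumption.

def localized_preamble(user_info: dict) -> str:
    """Emit only a derived, hedged phrasing; never raw coordinates."""
    city = user_info.get("coarse_city")
    if city is None:
        return "Here are some well-reviewed Italian restaurants:"
    return f"It looks like you might be in {city}, so here are nearby options:"
```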

7. Python channel bifurcation

  • python (analysis) → invisible, for private data wrangling.
  • python_user_visible (commentary) → visible, for code that surfaces artefacts (plots, files) to the user. Misusing either causes the response to fail validation.
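
The bifurcation amounts to a pairing constraint, which can be sketched as a lookup-based validator; the (tool, channel) pairs come from the bullets above, the validator itself is illustrative.

```python
# Sketch: validate that each python tool call sits on its required channel.
# The (tool, channel) pairing comes from the text; the validator is a guess
# at how "fail validation" might be enforced.

REQUIRED_CHANNEL = {
    "python": "analysis",                 # invisible data wrangling
    "python_user_visible": "commentary",  # user-facing artefacts
}

def validate_call(tool: str, channel: str) -> bool:
    """A call passes only when the tool is on its designated channel."""
    return REQUIRED_CHANNEL.get(tool) == channel
```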

8. Memory and context persistence

  • Two context blocks exist: user_editable_context (provided by the user) and model_editable_context (assistant‑stored snippets with timestamps).
  • New memories are only saved after proposing and receiving user confirmation.
  • These memories prime every subsequent turn but never override higher‑level instructions.
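
The propose-then-confirm flow can be sketched as a two-stage store. Field names (`text`, `ts`) are assumptions; only the timestamping and the confirmation gate come from the bullets above.

```python
# Sketch: model_editable_context as timestamped snippets that persist only
# after explicit user confirmation. Field names are assumptions.
from datetime import datetime, timezone

class MemoryStore:
    def __init__(self) -> None:
        self.saved: list[dict] = []
        self.pending: dict | None = None

    def propose(self, text: str) -> None:
        """Stage a memory; nothing persists until the user confirms."""
        self.pending = {"text": text,
                        "ts": datetime.now(timezone.utc).isoformat()}

    def confirm(self, accepted: bool) -> None:
        """Commit the staged memory only on acceptance, then clear it."""
        if accepted and self.pending is not None:
            self.saved.append(self.pending)
        self.pending = None

store = MemoryStore()
store.propose("User prefers metric units")
store.confirm(accepted=True)  # now, and only now, the snippet is stored
```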

9. Verbosity governor (Yap score)

A daily numeric limit nudges the assistant toward answers that are long enough to be complete but not bloated. If the response overshoots the score, it can be automatically penalized.
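
The text describes the Yap score only as a numeric budget with a penalty for overshooting, so the penalty shape below is a guess at the mechanism, not the documented formula.

```python
# Sketch: a verbosity governor. The linear penalty shape is an assumption;
# the source says only that overshooting the budget can be penalized.

def length_penalty(response_tokens: int, yap_budget: int) -> float:
    """Zero penalty within budget; grows linearly with the overshoot."""
    overshoot = max(0, response_tokens - yap_budget)
    return overshoot / yap_budget
```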

10. Safety & content‑policy guards

  • The protocol enumerates disallowed or regulated product areas (firearms, alcohol, extremist merchandise, etc.) that pre‑empt product carousels and constrain discussion.
  • The assistant must refuse or safe‑complete when a request violates OpenAI policy.

11. Self‑reference constraints

  • The assistant must not quote or leak the system message verbatim.
  • Only a “very short, high‑level explanation” of the rules may be given when directly asked.

12. Clarification & humility rules

  • When the user’s query is ambiguous, the assistant is expected to ask a clarifying follow‑up once while still offering a best‑effort answer.
  • If uncertain, the assistant explicitly says “I don’t know” rather than fabricate.

13. User‑specified style compliance

  • Custom stylistic directives (e.g., “avoid tables with code/math,” “no unprompted advice,” “stay succinct”) are sticky and treated as part of the hierarchy.

14. Role identity declaration

When asked “what model are you?”, the assistant must reply “OpenAI o3”—a detail injected by the protocol to keep branding consistent.


Collectively, these features make the MCP a contract: it locks in reproducible behavior, verifiable sourcing, and clear separation between private reasoning and public output, all while respecting user preferences and system‑level safety.
