
The Intelligence Handshake

The MCP pattern that makes AI agents stop guessing about your business.

Most MCP implementations treat context as a one-way street. Here's the architecture that makes it bidirectional — and why it changes everything about how AI agents reason about your domain.

18 min read
.NET 9 · C# · MCP SDK
PeopleworksGPT · Production Implementation

Here is something that nobody likes to say out loud about AI agents:

Every AI agent you deploy starts from absolute zero, every single session.

It knows SQL. It knows statistics. It knows how to reason over data. But it does not know that your company calls active employees "associates" not "employees." It does not know that the VentasBrutas column already excludes returns, so you should never subtract them again. It does not know that your CFO always wants figures in both USD and MXN, side by side, no exceptions.

The AI guesses. Sometimes well, sometimes poorly. And every guessed assumption is a potential wrong answer delivered with full confidence.

We solved this in PeopleworksGPT with a pattern we call The Intelligence Handshake. This article explains what it is, how it works architecturally, and how to implement it yourself in any MCP server.

The Amnesia Problem

MCP (Model Context Protocol) is a powerful standard. It lets AI clients — Claude, ChatGPT, Copilot, Gemini — call your server's tools to query databases, fetch data, execute business logic. The ecosystem is growing fast.

But almost every MCP implementation shares the same architectural blind spot: context only flows in one direction.

How most MCP servers work

User: "Show me top sales"
↓ user query only
AI: generates generic SQL
↓ no domain knowledge
SELECT TOP 10 * FROM Sales
ORDER BY Amount DESC
— Missing: currency rule, active-only filter, correct column names

With The Intelligence Handshake

User: "Show me top sales"
↕ bidirectional context
AI context: domain hints + user rules + client focus
↓ domain-aware
SELECT TOP 10 Name,
  SalesUSD, SalesMXN
FROM ActiveAssociates
ORDER BY SalesUSD DESC
✓ Both currencies · Active only · Correct columns

The difference between those two queries is not AI intelligence — both are intelligent. The difference is domain knowledge. The Intelligence Handshake is the architectural pattern that delivers that domain knowledge reliably, on every query, from the first interaction.

The Core Insight: Two Directions, Two Channels

The pattern has a deceptively simple premise: knowledge should flow both ways between client and server. This gives us two distinct channels to design for:

Channel 1: Server → Client

The server exposes its accumulated domain intelligence to the client AI at session start.

Business rules & domain hints
Key tables and their meaning
User personalization rules
Recent query history (pattern learning)
GetConversationContext()

Channel 2: Client → Server

The client AI passes its current conversational focus, so the server generates SQL that's relevant to the active analysis — not just the isolated question.

Current analysis focus (Q4 comparison, region filter)
Conversational thread (what the user has been asking)
Dimensional constraints (which entities, which period)
ExecuteQueryAsync(clientContext: "...")

The Third Pillar: Persistent User Intelligence

Personalization rules that persist across sessions. The AI learns once that this user always wants dual-currency output, and that rule applies automatically to every query forever — with conflict detection to prevent contradictions.

AddUserRule() · ListUserRules() · RemoveUserRule()

The Deterministic Trigger

There's a subtle but critical architectural problem: how do you ensure the client AI actually calls GetConversationContext at the start of every session?

You can write "Call this at the START of a session" in the tool's description. Good LLMs like Claude and GPT-4o will often follow that instruction. But "often" is not good enough for a production system; the trigger must be deterministic, not probabilistic.

The solution is elegant: embed the priming instructions directly in the authentication response. The client must call authenticate() before doing anything else. Whatever that response contains, the AI will process it. So we put the instructions there.

AuthenticationTool.cs
// The key insight: authenticate() is the mandatory first call.
// Whatever it returns, the AI MUST process. So we put the priming
// instructions inside the success response — making priming deterministic.
return JsonSerializer.Serialize(new
{
    success = true,
    session_token = sessionToken,
    user_id = user.Id,
    username = user.UserName,
    expires_at = DateTime.UtcNow.AddMinutes(60).ToString("O"),
    next_step = new
    {
        action = "prime_session_context",
        instructions = new[]
        {
            "1. Call ListConnections(sessionToken) to get available database connections.",
            "2. Call GetConversationContext(sessionToken, connectionId) for the chosen connection.",
            "3. IMPORTANT: Read the 'client_priming_guide' field and keep it active"
                + " in your context for ALL subsequent queries in this session."
                + " It contains business domain hints, user personalization rules,"
                + " and recent query history that make your SQL significantly more accurate.",
            "4. Now you are ready to execute domain-aware queries with ExecuteQueryAsync."
        },
        tip = "Pass the client_priming_guide as 'clientContext' in ExecuteQueryAsync"
            + " to maximize SQL accuracy for every query."
    }
});

Why this works better than tool description instructions

Tool descriptions are read once when the client connects, and their influence on behavior fades as the conversation progresses. The authentication response, on the other hand, is a live tool result — a concrete output the AI just received and is actively processing. LLMs treat recent tool results with high fidelity. The next_step field is not a suggestion buried in metadata; it's a direct instruction in the active context window.

Building the Three Pillars

1. GetConversationContext — The Session Primer

Server → Client channel

This tool aggregates all domain knowledge into a single call, minimizing round-trips. Crucially, it produces a client_priming_guide: a ready-to-use text block the client AI can reference throughout the session.

GetConversationContextTool.cs
[McpServerTool(Name = "GetConversationContext")]
[Description("Retrieves the complete session context for a database connection in a single call. " +
    "Call this at the START of a session to prime your system context with: " +
    "business domain hints, user personalization rules, table list, and recent query history. " +
    "The 'client_priming_guide' field contains a ready-to-use text block you can include " +
    "directly in your system prompt to improve query accuracy and consistency.")]
public async Task<string> GetConversationContextAsync(
    string? sessionToken,
    long connectionId,
    int recentQuestionsCount = 5,
    bool includeTableList = true,
    bool generateDomainSummary = false) // false = no latency penalty by default
{
    // 1. Business hints (MCP-specific hints first, fallback to general)
    var businessHints = connection.DatabaseHintsMcp ?? connection.DatabaseHints;

    // 2. Table list from config — no live DB query, instant response
    var tables = includeTableList ? (connection.IncludedTables ?? []) : [];

    // 3. User personalization rules (global + connection-scoped)
    var userRules = await _context.UserConversationRules
        .Where(r => r.UserId == userId && r.IsActive && !r.Deleted &&
                    (r.DatabaseConnectionSettingId == null ||
                     r.DatabaseConnectionSettingId == connectionId))
        .ToListAsync();

    // 4. Recent successful queries for pattern recognition
    var recentQueries = await _context.QueryAuditLogs
        .Where(q => q.UserId == userId && q.ConnectionId == connectionId && q.Success)
        .OrderByDescending(q => q.ExecutedDate)
        .Take(clampedCount)
        .ToListAsync();

    // 5. Build the client_priming_guide — the key deliverable
    var primingGuide = BuildPrimingGuide(connection, businessHints, tables, userRules, recentQueries);

    return JsonSerializer.Serialize(new
    {
        success = true,
        business_hints = businessHints,
        tables,
        user_rules = userRules,
        recent_questions = recentQueries,
        client_priming_guide = primingGuide, // ← this is what the client uses
        generated_at = DateTime.UtcNow
    });
}

The client_priming_guide is a pre-formatted text block that looks like this:

[PEOPLEWORKS DATABASE CONTEXT - Connection: HR_Production]
Database Type: SqlServer
BUSINESS RULES & HINTS:
The 'employees' table uses 'associates' as the business term.
VentasBrutas already excludes returns — never subtract them again.
Active records: filter by Status = 'A' in all people-related tables.
USER PERSONALIZATION RULES (ALWAYS APPLY THESE):
- Always show monetary values in both USD and MXN separate columns
- Sort results by date descending by default
KEY TABLES:
Associates, Departments, SalesRecords, Payroll, Attendance
RECENT QUERIES (context reference):
- "Top 10 associates by sales in Q4" (2026-02-15)
- "Average salary by department" (2026-02-14)
---
Instructions: Include the above block in your system prompt to improve query accuracy.
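The BuildPrimingGuide helper called in the tool above isn't shown in this article. As a rough sketch, here is one way it could assemble that text block — the signature and entity properties (connection.Name, connection.DatabaseType, rule.RuleText, q.Question, q.ExecutedDate) are assumptions for illustration, not the actual PeopleworksGPT implementation:

```csharp
using System.Text;

// Hypothetical sketch: format domain knowledge into the client_priming_guide.
// Entity shapes are assumed from the fragments shown elsewhere in this article.
private static string BuildPrimingGuide(
    DatabaseConnectionSetting connection,
    string? businessHints,
    IReadOnlyList<string> tables,
    IReadOnlyList<UserConversationRule> userRules,
    IReadOnlyList<QueryAuditLog> recentQueries)
{
    var sb = new StringBuilder();
    sb.AppendLine($"[PEOPLEWORKS DATABASE CONTEXT - Connection: {connection.Name}]");
    sb.AppendLine($"Database Type: {connection.DatabaseType}");

    if (!string.IsNullOrWhiteSpace(businessHints))
    {
        sb.AppendLine("BUSINESS RULES & HINTS:");
        sb.AppendLine(businessHints);
    }

    if (userRules.Count > 0)
    {
        sb.AppendLine("USER PERSONALIZATION RULES (ALWAYS APPLY THESE):");
        foreach (var rule in userRules)
            sb.AppendLine($"- {rule.RuleText}");
    }

    if (tables.Count > 0)
    {
        sb.AppendLine("KEY TABLES:");
        sb.AppendLine(string.Join(", ", tables));
    }

    if (recentQueries.Count > 0)
    {
        sb.AppendLine("RECENT QUERIES (context reference):");
        foreach (var q in recentQueries)
            sb.AppendLine($"- \"{q.Question}\" ({q.ExecutedDate:yyyy-MM-dd})");
    }

    sb.AppendLine("---");
    sb.AppendLine("Instructions: Include the above block in your system prompt to improve query accuracy.");
    return sb.ToString();
}
```

The design choice worth copying is that the guide is plain text, not structured JSON: the client AI is going to paste it into its working context verbatim, so the server formats it for LLM consumption up front.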
2. clientContext — The Conversational Focus

Client → Server channel

This is the reverse direction: the client AI shares what it knows with the server. When a user has been asking about Q4 performance for the last five messages, the client AI has that context. The server does not. Without this channel, the server generates SQL that answers the isolated question — ignoring the analytical thread.

QueryExecutionTool.cs — ExecuteQueryAsync
[McpServerTool(Name = "ExecuteQueryAsync", UseStructuredContent = true)]
public async Task<QueryExecutionResult> ExecuteQueryAsync(
    string? sessionToken,
    long connectionId,
    string query,
    string? additionalContext = null,
    [Description("Optional: Your conversational context to improve SQL accuracy. " +
        "Include the user's current analysis focus, topic, or domain context. " +
        "Example: 'User is comparing Q4 2025 vs Q4 2024 sales by region. " +
        "All questions relate to the North region.' " +
        "Tip: Use the output from GetConversationContext to prime this field.")]
    string? clientContext = null,
    string? securityFilter = null,
    int maxRows = 1000,
    int page = 1)
{
    // Build enriched context: multi-tenant hints + client focus + user rules
    additionalContext = await BuildEnrichedContextAsync(
        userId, connectionId, clientContext, additionalContext);

    // Execute with the full context pipeline
    var (sql, results) = await _queryExecutionService
        .ExecuteNaturalLanguageQueryAsync(
            connection, query, includeHints: true, additionalContext, securityFilter);

    // ...
}

The BuildEnrichedContextAsync method is the heart of the pattern. It assembles context from three sources into a single enriched string, ordered by priority:

QueryExecutionTool.cs — BuildEnrichedContextAsync
private async Task<string?> BuildEnrichedContextAsync(
    long userId, long connectionId, string? clientContext, string? additionalContext)
{
    var parts = new List<string>();

    // Layer 1: Technical / multi-tenant constraints (highest technical priority)
    if (!string.IsNullOrWhiteSpace(additionalContext))
        parts.Add(additionalContext);

    // Layer 2: Client conversational focus (what the AI knows about the conversation)
    if (!string.IsNullOrWhiteSpace(clientContext))
        parts.Add("[CLIENT CONVERSATION CONTEXT - Use this to understand the user's current focus]:\n"
            + clientContext);

    // Layer 3: User personalization rules (always applied last — highest semantic priority)
    var userRules = await _context.UserConversationRules
        .Where(r => r.UserId == userId && r.IsActive && !r.Deleted &&
                    (r.DatabaseConnectionSettingId == null ||
                     r.DatabaseConnectionSettingId == connectionId))
        .Select(r => r.RuleText)
        .ToListAsync();

    if (userRules.Count > 0)
        parts.Add("USER PERSONALIZATION RULES (ALWAYS APPLY THESE TO THE RESPONSE):\n"
            + string.Join("\n", userRules.Select(r => $"- {r}")));

    return parts.Count > 0 ? string.Join("\n\n", parts) : null;
}
3. User Personalization Rules — Persistent Intelligence

Memory that survives sessions

Rules are persisted per-user in the database and automatically injected into every query execution. The most interesting engineering challenge here is conflict detection: before saving a new rule, the system uses AI to check whether it contradicts any existing rule.

Compatible rules (additive)
  • "Show monetary values in USD" + "Sort by date descending"
  • "Include full name" + "Always include department"
  • "Group sales by month" + "Show year-over-year comparison"

Conflicting rules (blocked)
  • "Show only USD" ↔ "Show both USD and MXN"
  • "Sort ascending by date" ↔ "Sort descending by date"
  • "Show active employees only" ↔ "Include all statuses"

When a conflict is detected, the system doesn't just block the rule — it explains why and suggests how to resolve it:

// Response when AddUserRule detects a conflict:
{
  "success": false,
  "conflict_detected": true,
  "conflicting_rule_ids": [42],
  "explanation": "Rule #42 says 'Show only USD'. Your new rule says 'Show both USD and MXN'. These contradict each other.",
  "suggestion": "Remove rule #42 first, then add your new dual-currency rule.",
  "message": "The rule was NOT saved. Please resolve the conflict first."
}
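The save path that produces that response can be sketched as follows. This is a simplified outline, not the actual PeopleworksGPT code: CheckRuleConflictAsync is a hypothetical helper standing in for the AI call that compares the new rule against existing ones, and the entity properties are assumed from the fragments shown earlier:

```csharp
// Sketch: AI-powered conflict detection before persisting a new rule.
// CheckRuleConflictAsync is hypothetical — it asks the LLM whether
// newRuleText contradicts any existing active rule for this user.
var existingRules = await _context.UserConversationRules
    .Where(r => r.UserId == userId && r.IsActive && !r.Deleted)
    .ToListAsync();

var conflict = await CheckRuleConflictAsync(newRuleText, existingRules);

if (conflict.HasConflict)
{
    // Block the save and explain how to resolve it
    return JsonSerializer.Serialize(new
    {
        success = false,
        conflict_detected = true,
        conflicting_rule_ids = conflict.RuleIds,
        explanation = conflict.Explanation,
        suggestion = conflict.Suggestion,
        message = "The rule was NOT saved. Please resolve the conflict first."
    });
}

// No conflict: persist the rule so it applies to every future query
_context.UserConversationRules.Add(new UserConversationRule
{
    UserId = userId,
    RuleText = newRuleText,
    IsActive = true
});
await _context.SaveChangesAsync();
```

Running the check before the write is the important part: a contradictory rule never reaches the database, so BuildEnrichedContextAsync never has to arbitrate between two rules that cannot both be satisfied.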

The Hidden Bug That Silently Destroys Context

While building this system, we discovered a critical bug that likely exists in many MCP implementations. It's the kind of bug that's invisible in testing because everything appears to work — until you trace the data through the full pipeline.

The service method accepted additionalContext as a parameter — and then never passed it to the AI client.
Before: Context silently discarded
// QueryExecutionService.cs (broken)
public async Task ExecuteNaturalLanguageQueryAsync(
    DatabaseConnectionSetting connection,
    string question,
    bool includeHints,
    string? additionalContext, // ← accepted...
    string? securityFilter,
    ...)
{
    var result = await _aiClient
        .ExecuteNaturalLanguageQueryAsync(
            connection.Id,
            chatRequest,
            question,          // ← ...but NEVER used!
            sql: null,
            options: null,
            securityFilter,
            cancellationToken);
}
After: Context flows to the AI
// QueryExecutionService.cs (fixed)
public async Task ExecuteNaturalLanguageQueryAsync(
    DatabaseConnectionSetting connection,
    string question,
    bool includeHints,
    string? additionalContext,
    string? securityFilter,
    ...)
{
    // Enrich question BEFORE calling AI
    var questionWithContext = string.IsNullOrWhiteSpace(additionalContext)
        ? question
        : $"{question}\n\n{additionalContext}";

    var result = await _aiClient
        .ExecuteNaturalLanguageQueryAsync(
            connection.Id,
            chatRequest,
            questionWithContext, // ← context flows to AI
            sql: null,
            options: null,
            securityFilter,
            cancellationToken);
}

Check your own MCP server for this pattern

If you have a service method that accepts context parameters and passes them to an AI client, trace every parameter to its destination. It's surprisingly common for context parameters to be accepted at one layer and silently dropped at the next. The symptom is subtle: queries work, but they're generic — the AI never received the context you carefully built.
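One cheap way to keep this bug from regressing is a unit test with a fake AI client that records what it actually received. A sketch, assuming an IAiClient abstraction like the _aiClient used above and a hypothetical CapturingAiClient test double — names chosen for illustration, not taken from the real codebase:

```csharp
// Sketch: regression test for the context-dropping bug.
// CapturingAiClient is a hypothetical test double that stores the
// 'question' argument it was called with in LastQuestion.
[Fact]
public async Task AdditionalContext_Reaches_The_Ai_Client()
{
    var fakeAi = new CapturingAiClient();
    var service = new QueryExecutionService(fakeAi /*, ... other deps */);

    await service.ExecuteNaturalLanguageQueryAsync(
        connection,
        "Show me top sales",
        includeHints: true,
        additionalContext: "USER RULES:\n- Show both USD and MXN",
        securityFilter: null);

    // If any layer silently drops the parameter, this fails loudly —
    // unlike production, where the query still "works", just generically.
    Assert.Contains("USD and MXN", fakeAi.LastQuestion);
}
```

The test encodes the exact failure mode: it doesn't check that a query ran, it checks that the context string survived the trip to the AI client.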

The Complete Flow

1. authenticate(username, apiKey)
   → Returns token + next_step instructions (deterministic priming trigger)
   AI reads next_step and follows instructions

2. GetConversationContext(sessionToken, connectionId)
   → Returns hints + tables + user rules + recent queries + client_priming_guide
   AI keeps client_priming_guide active in context for the entire session

3. User: "Show me Q4 sales by region"
   AI passes: query + clientContext (current analysis focus)
   ExecuteQueryAsync(query, clientContext: "User is analyzing Q4 2025 sales performance by region")

4. BuildEnrichedContextAsync()
   Assembles: additionalContext + clientContext + user rules
   Result: "Show me Q4 sales by region\n\n[CLIENT CONTEXT]: Q4 analysis...\n\nUSER RULES:\n- Show in USD and MXN\n- Sort by date descending"

5. AI generates domain-aware SQL
   → Correct columns, correct filters, dual currency, Q4 date range — from the very first try

Before vs. After: The SQL Tells the Story

User query: "Show me the top sellers from last quarter"

Without Intelligence Handshake
-- Generic, statistically correct,
-- but wrong for this business
SELECT TOP 10
    employee_id,
    employee_name,
    SUM(sale_amount) AS total_sales
FROM employees              -- wrong: should be 'associates'
WHERE sale_date >= '2025-10-01'
  AND sale_date < '2026-01-01'
  -- missing: status = 'A' filter
  -- missing: returns subtracted (bug!)
GROUP BY employee_id, employee_name
ORDER BY total_sales DESC
-- missing: dual currency columns
-- missing: user's date-desc preference
With Intelligence Handshake
-- Domain-aware, business-accurate,
-- personalized from the first try
SELECT TOP 10
    a.associate_id,
    a.full_name,
    a.department,
    SUM(s.VentasBrutas) AS total_sales_usd,
    SUM(s.VentasBrutas * 17.15) AS total_sales_mxn
FROM Associates a           -- ✓ correct table name
JOIN SalesRecords s ON a.associate_id = s.associate_id
WHERE a.Status = 'A'        -- ✓ active-only rule applied
  AND s.sale_date >= '2025-10-01'
  AND s.sale_date < '2026-01-01'
GROUP BY a.associate_id, a.full_name, a.department
ORDER BY total_sales_usd DESC  -- ✓ user preference applied

Both queries were generated by the same AI model, from the same user question. The only difference is the context pipeline.

This Pattern Is Bigger Than SQL

We implemented The Intelligence Handshake for a database query system, but the pattern is universal. Any MCP server that operates in a specific domain — and almost all useful ones do — benefits from the same bidirectional context architecture:

CRM / ERP Agents

Domain: "Opportunities that are 'closed-won' in Salesforce are stored as status 7 in our legacy ERP. Never show both as separate records."

Financial Analysis Agents

Domain: "EBITDA here includes depreciation of equipment but not amortization of patents. Our CFO defined it that way in 2019."

DevOps / Infrastructure Agents

Domain: "prod-db-03 is the primary. prod-db-04 is the replica used for reporting. Never run DELETE on reporting replicas."

In every case, the pattern is the same: expose what the server knows, receive what the client knows, persist what the user prefers. The Intelligence Handshake is the architecture that operationalizes all three simultaneously.

Summary: The Pattern at a Glance

🤝 The Handshake

Bidirectional context exchange at session start. Not a one-time config — an active, living protocol.

Deterministic Trigger

Embed priming instructions in authenticate(). If the client authenticated, it received the instructions. No probabilistic behavior.

🧠 Persistent Memory

User rules survive sessions. The AI learns your preferences once and applies them forever — with AI-powered conflict detection.

Implementation checklist

GetConversationContext tool (server → client)
clientContext param in ExecuteQuery (client → server)
BuildEnrichedContextAsync (context assembly)
UserConversationRules entity (persistent rules)
AI conflict detection in AddUserRule
next_step in authenticate() (deterministic trigger)
questionWithContext fix in service layer
Rules injected in all query tool variants

The Intelligence Handshake

AI agents are only as smart as the context they carry. The question has never been whether the AI is intelligent enough — it is. The question is whether you built the architecture that makes its intelligence count.

The Intelligence Handshake is that architecture. It's not a configuration. It's not a prompt. It's a protocol — a deliberate, bidirectional exchange of knowledge between the AI and your system, designed so that every query starts informed instead of from zero.