Scale And Strategy
This is Milk Road, the #1 doctor-recommended source for your daily nutritious business news and insights (don't fact check that).
Here’s what we got for you today:
- What Is MCP—and Why It Might Power the Next Wave of AI Innovation
- Want lower CAC? Start by fixing your Google Ads structure.
What Is MCP—and Why It Might Power the Next Wave of AI Innovation
MCP, or Model Context Protocol, is quickly becoming one of the most important open standards in the AI ecosystem. While still early in its adoption, it’s already gaining serious traction among AI power users and builders.
At a high level, MCP is a protocol that allows AI applications—called MCP Clients—to interface with MCP Servers. You can think of these servers as offering toolsets or capabilities that the AI model can access on demand.
By establishing a standardized way for clients and servers to communicate, MCP enables any AI application to tap into a growing universe of services and tools. That includes everything from reading data from your CRM to sending messages in Slack to accessing your local file system. It’s a massive leap in functionality for tools like ChatGPT, Claude—and naturally, Agent.ai.
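To make the client/server split concrete, here is a minimal sketch of what an MCP Server can look like, assuming the official Python SDK (the `mcp` package) and its FastMCP helper. The tool name and the stubbed CRM data are purely illustrative:

```python
# pip install mcp  (the official Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

# An MCP Server is just a named bundle of capabilities ("tools") that any
# MCP Client can discover and call over the protocol.
mcp = FastMCP("crm-demo")

@mcp.tool()
def lookup_company(name: str) -> dict:
    """Return basic CRM details for a company (stubbed data for this sketch)."""
    return {"name": name, "owner": "dharmesh", "last_interaction": "2025-03-01"}

if __name__ == "__main__":
    # Serve over stdio so a local client (e.g. Claude Desktop) can launch it.
    mcp.run()
```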
Open standards like this are often what unlock the next wave of innovation. MCP has that kind of potential.
How I'm Using MCP Today
I’ve been testing MCP extensively. For example, I have Claude Desktop connected to a variety of MCP Servers, each offering a different capability (a sample config follows the list below). Right now, I can:
- Trigger agents running on Agent.ai
- Pull CRM records from HubSpot
- Read and write files locally
- Interact with Slack
- Access Google Calendar and Gmail
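Wiring these up is mostly a matter of telling Claude Desktop which server processes to launch. Below is a rough sketch of the kind of configuration involved, written as Python that emits the JSON Claude Desktop reads from its claude_desktop_config.json file. The filesystem entry uses a real reference server; the "crm" entry and its file path are hypothetical:

```python
import json

# Claude Desktop launches each MCP Server as a local process and speaks the
# protocol to it over stdio. Each entry names a server and the command used
# to start it.
config = {
    "mcpServers": {
        # Reference server for local file access (runs via npx).
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/notes"],
        },
        # Hypothetical entry for the CRM server sketched above.
        "crm": {
            "command": "python",
            "args": ["crm_server.py"],
        },
    }
}

# Paste this output into claude_desktop_config.json.
print(json.dumps(config, indent=2))
```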
What makes MCP so powerful is that none of this required custom integrations. The client and server speak a shared language, so there's no need to hard-code logic for specific APIs. Servers don’t need to understand the client they’re talking to—and vice versa. Everything works through a universal protocol.
Here’s a recent example prompt I used:
“Lookup OpenAI in the HubSpot CRM and Slack the details to @dharmesh, including when my last interaction was.”
It worked—pulling data from one system and pushing it to another—all orchestrated by the LLM through MCP.
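Under the hood, the flow looks roughly like this, again assuming the official Python SDK: the client discovers whatever tools a server exposes and calls them by name, with no CRM- or Slack-specific code anywhere. The crm_server.py and lookup_company names refer to the hypothetical server sketched earlier:

```python
# pip install mcp
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the example server as a subprocess and speak MCP to it over stdio.
    params = StdioServerParameters(command="python", args=["crm_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover whatever tools the server exposes...
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # ...then call one by name with structured arguments. An LLM client
            # does the same thing, except the model picks the tool and arguments.
            result = await session.call_tool("lookup_company", {"name": "OpenAI"})
            print(result.content)

asyncio.run(main())
```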
The Roadblock: Complexity and Chaos
As promising as MCP is, there’s a major bottleneck that’s holding it back from broader adoption: usability.
Right now, finding useful MCP Servers—and connecting them to a client like Claude Desktop—is difficult and unintuitive. Most servers are shared as raw GitHub repos, which means if you want to use them, you’re often responsible for hosting and configuring the whole thing yourself.
This early-stage friction comes down to a few key challenges:
- Authentication: How do you securely manage which users or models have access to what functionality?
- Trust: How do you know which servers are safe, secure, and well-designed?
- Provisioning: Most MCP servers today are distributed as open-source repos. There's no “plug-and-play” marketplace or hosting layer.
- Security: Granting LLMs the ability to take actions through tools opens up new risk surfaces. You want to know the servers you’re using are legitimate—and that the client is sandboxed appropriately.
We’re starting to see progress on all of these fronts, but today MCP is best suited for early adopters—technical users, enterprise innovation teams, and those building for narrow, high-value use cases.
Where the Opportunity Lies
Despite the current limitations, the core vision of MCP is profound: a world where any AI application can securely and easily connect to any compatible tool or service, in real time.
When we reach the point where arbitrary clients can reliably connect to arbitrary servers, without all the current friction, it will trigger an explosion of capability—and entirely new AI-native workflows will emerge.
The billion-dollar question is: who will solve the infrastructure, security, and user experience challenges to make that possible?
Because whoever does will likely own a foundational layer of the next generation of AI.
Want lower CAC? Start by fixing your Google Ads structure.
Demand capture beats demand generation. And if you want paid search to actually convert, here’s how to set your account up for success:
1) Simplify your structure.
Your account setup can make or break performance.
Organize campaigns into four buckets:
- Branded search
- Competitor search
- Non-branded search
- Retargeting & display
Instead of lumping dozens of keywords together, go with single-theme ad groups (STAGs). That means tightly grouped keywords with matching ad copy and landing pages. It boosts CTR, Quality Score, and conversions.
No bandwidth? Start with your homepage, then build out more landing pages based on demo volume, pipeline influence, and revenue.
2) Take control of your keywords.
Use high-intent keywords like “[brand name] pricing” and do the same for competitors.
In-market campaigns should blend broad terms like “PPC agency” with more specific ones like “B2B PPC management.”
For retargeting, focus on warm prospects—people who hit pricing pages, requested demos, or visited key product content.
3) Choose the right bidding strategy.
Most of the time, Maximize Conversions is a solid choice—it aims to get the most conversions for your budget.
But when your search impression share is maxing out, switch to tROAS or tCPA and test what performs best.
4) Allocate your budget smartly.
There’s no perfect formula, but here’s a solid starting point (a quick worked split is sketched below):
- 0–40%: Brand keywords
- 40–60%: In-market keywords
- 10–20%: Competitor keywords
- ~10%: Retargeting
Adjust as you go based on results.
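As a purely illustrative worked example of that split (the $10,000 budget and the exact percentages are assumptions chosen from within the ranges above):

```python
# Hypothetical monthly budget, split using shares picked from the suggested
# ranges. Adjust the numbers to match your own results.
monthly_budget = 10_000

allocation = {
    "brand": 0.30,        # within the 0-40% range
    "in_market": 0.50,    # within the 40-60% range
    "competitor": 0.10,   # within the 10-20% range
    "retargeting": 0.10,  # ~10%
}

assert abs(sum(allocation.values()) - 1.0) < 1e-9  # shares must total 100%

for bucket, share in allocation.items():
    print(f"{bucket:12s} ${monthly_budget * share:,.0f}")
```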
Final tip: Prioritize your structure first, then keywords, then bidding, then budget.
Structure drives strategy. Get that right, and the rest follows.