Warning: I am not on board with the hype around MCP technology. While MCPs are impressive and in some cases useful, I don’t believe they will be as useful as the hype makes them out to be. I think there is still a place for more traditional ways of doing things, because we know they work and they keep getting more solid.

I am a little biased against MCPs, because I am disillusioned with LLMs. If a technology is going to be useful, it needs to solve a distinct problem and solve it reliably. I want to talk about the practicality and usefulness of MCP technology. From my personal observation, MCP technology may be just as useful as any other tech, but I don’t think it is more useful. Other technologies deserve just as much praise as MCPs get, but they are often overshadowed because they don’t inspire as much awe.

Let me be the first to tell you, I am just a normal person. I have opinions and biases just like anyone else. I also have my own life experience, and it permits me to see the world around me a certain way. The way I see things may be different from someone else, just like someone else’s opinion may differ from mine. This does not mean my way is the only right and true way, nor does it mean another person’s way is. What I mean to say is that there is value to be found in collaboration among so many unique life experiences. That is what a community may look like: active, hard-working, and caring. By all means, take what I say here and build on top of it.

Hype Technology

There are technologies that get hyped, and those technologies either grip attention for a bit or become the norm. New technologies go through a marketing cycle in order to encourage investors in the company. This is a problem, because I believe it makes money the deciding factor in what gets made. True, making money helps sustain everyday life, but I am pointing at making money purely for the sake of making money.

It seems a bit circular to make money for the sake of making money. Would I rather have explosive, risky growth that can yield a lot of money, or sustainable, measured growth with something that will probably have a lasting niche impact? I would prefer the measured, sustainable niche growth, because I know I can find fulfillment and sustainability much more easily there. It also means I won’t look foolish chasing a technology’s hype.

Some hyped technologies are trying to sell a product without having a problem to solve. That manner of marketing just isn’t going to prove the product’s viability. If a seller is trying to get me to buy something, I want to know how useful the thing is going to be. I would really like for it not to end up gathering dust shortly after I use it, once the excitement fades away.

Here are some technologies whose marketing I think is a bit over-hyped. I do not mean to say that these things have no usefulness whatsoever, but rather that their hype muddles their actual usefulness. I mention them to give context to my discussion of MCPs.

  1. Gimmicky Smartphone Features
    • I do not see why any smartphone camera needs a zoom beyond 10x, and I have trouble understanding why a normal user would want one. At that point, just get an actual camera if one is concerned with telephoto shots.
  2. Graphene
    • Touted for its extra strength and high-capacity batteries. Now it is barely talked about.
  3. Blockchain
    • Despite its potential privacy usefulness for decentralized systems, its main reputation is as a gambling platform.
  4. NFTs
    • Digital pieces of art associated with blockchain technology. They rose in popularity, but quickly faded once it became clear how little value they held.
  5. The Metaverse
    • A virtual reality with all the Facebook attributes? No thanks. The real world is just great to explore.
  6. Vision Pro
    • While more or less practical, the cost of the technology is too high for its usability.
  7. Quantum Computers
    • Awesome for getting more computational power, but I’ve been hearing about this technology for a long time. Is it ever going commercial?
  8. Foldable Phones
    • Cool technology, but it’s not very durable, it’s expensive, and it doesn’t seem practical for an everyday phone.

Now, don’t get me wrong, some of these things can have legitimate usefulness. My issue comes from people using them for pure entertainment value and acting like each is the best product ever (cough Apple) (I do like Apple products, BTW), or from products being so costly to develop that no one would buy them. I think of MCP technology as something similar to the products above. There could be something useful about it, but the hype is drowning out the practicality.

MCP Technology

MCP technology seems to be gripping the attention of a lot of software engineers. MCP stands for Model Context Protocol, and it’s a technology to extend the capability and functionality of LLM chatbots with real data. It is impressive that one can extend a chatbot with something that looks like a REST API call, because that means a chatbot can fetch more accurate information instantly and do things on behalf of the user. That is a nice touch, I will admit. Even then, I am holding onto some skepticism, because the setup and usefulness of MCP technology seem roughly equal to those of a standard REST API.
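To make that comparison concrete, here is a rough, illustrative sketch of the two call shapes. The tool name and arguments are made up for illustration; an MCP tool invocation travels as a JSON-RPC message, while the "traditional" path is just a URL to fetch.

```typescript
// Shape of an MCP tool call as it goes over the wire (JSON-RPC 2.0).
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

// Hypothetical MCP-style request for a weather alert lookup.
function buildMcpCall(state: string): McpToolCall {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: { name: "get-alerts", arguments: { state } },
  };
}

// The equivalent plain REST request is just a URL.
function buildRestUrl(state: string): string {
  return `https://api.weather.gov/alerts?area=${state.toUpperCase()}`;
}

console.log(JSON.stringify(buildMcpCall("UT")));
console.log(buildRestUrl("ut"));
```

Either way, someone still has to define the endpoint, its parameters, and its response handling; the MCP version mostly changes the envelope.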

I believe the hype for this new technology is blinding software engineers to the people this technology may really assist. Most of the hype that I’ve observed seems to be focused on making chatbots for a normal user. I see MCP-enabled chatbots as an interface for power users, and I don’t see most users as power users. Another downside of MCP technology is that it depends on an LLM, and I believe the limited usefulness of LLMs is becoming increasingly apparent. If one technology depends on another, then the dependent technology is limited and constrained by the other.

Not so User Friendly

There seems to be a vibe among some software engineers that by making a chatbot the main feature of a website, one won’t have to program an old-fashioned user interface ever again. I think this viewpoint shows software engineers’ limited knowledge of UX design and research (me included, seeing as my knowledge in that field is low). I anticipate that if a chatbot is the primary feature on a website, a common user will have to guess at what to do, because the features of the website won’t be readily visible. As soon as a user starts guessing at features, frustration grows. And a frustrated user will begin to disengage from the website.

The book “Don’t Make Me Think” makes the point that the user needs easy-to-pick-up cues in order to accomplish their task. There are a couple of examples of how this works. Finding specific profile information on Amazon is really frustrating. In social media, infinite-scroll feeds hack into this psychology to let a user get an easy hit of dopamine with a simple scroll gesture. The book “The Power of Habit” demonstrates that humans want to do things as efficiently as possible and that thinking is costly; that is why and how habits form.

A chatbot interface would have to cue the user in on its features, and immediately this runs counter to the point of not programming a traditional interface. Part of the point of a traditional interface is to provide enough cues to show the user how to accomplish their task easily. So if a chatbot needs “traditional features,” why not just keep making a traditional website?

Or even better, conduct UX interviews and research to see where a chatbot could actually be useful, in order to narrow in on its usefulness. I can’t see a chatbot explicitly listing all its features and remaining useful. I’ve got a couple of anecdotes for this, at best. One, I consider myself a power user, yet that doesn’t mean I take advantage of all the features available to me in the terminal and other developer tooling. I pretty much only use the features that are useful to me. From this, I can understand that if I make features more usable, that could make them easier for a normal user too. Two, Microsoft Word is a power user’s software, but for me it hits the same obstacles to usefulness. I only need about 10-20 of Word’s hundreds of features. A program like Google Docs is goated to me because of its reliability and essential feature set.

A wizard and a robot chat in a neo-noir setting. There is a chat bubble above the robot's head.
This seems like a fancy way to talk to an LLM…

It’s odd to me to see a push for the experience of a website to be oriented around a chatbot, instead of chat being a complementary feature of the website. Having a chat agent as the main feature of a website seems like it will leave a user with too much to do. I think we are already seeing how too many features are leading AI businesses to fail. The AI-service companies that are winning are the ones with targeted problems and focused solutions. Even then, as I have stated, I can see an MCP-enabled chatbot being the main function of a website if and only if it is intended for power users.

LLM Usefulness and Constraints

I presented on LLM limitations at UtahJS Conference; essentially, while LLMs can be a productivity booster, they don’t boost all productivity. As things currently stand, LLM productivity is context-dependent and non-deterministic. This is a major issue for MCP servers, because it opens the door to context overwhelm/underwhelm and security vulnerabilities. If I use MCPs too much, I risk context bloat. If I use them in a security system, I risk some crafty individual manipulating the LLM into leaking sensitive information.
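To make the context bloat risk concrete, here is a back-of-the-envelope sketch. Every registered MCP tool’s name, description, and argument schema ride along in the model’s context on every request; the ~4 characters-per-token figure below is a rough rule of thumb, not a real tokenizer, and the tool definition is invented for illustration.

```typescript
// A registered tool as the LLM sees it: name, description, argument schema.
interface ToolDef {
  name: string;
  description: string;
  schemaJson: string; // serialized JSON schema for the tool's arguments
}

const CHARS_PER_TOKEN = 4; // rough heuristic, not an exact tokenizer

// Estimate how many context tokens a set of tool definitions consumes
// before the user has typed a single word.
function estimateContextTokens(tools: ToolDef[]): number {
  const chars = tools.reduce(
    (sum, t) => sum + t.name.length + t.description.length + t.schemaJson.length,
    0,
  );
  return Math.ceil(chars / CHARS_PER_TOKEN);
}

const demoTool: ToolDef = {
  name: "get-forecast",
  description: "Get the weather forecast for a latitude/longitude pair.",
  schemaJson: JSON.stringify({
    type: "object",
    properties: { latitude: { type: "number" }, longitude: { type: "number" } },
  }),
};

// Twenty modest tools already cost a noticeable slice of the context window.
console.log(estimateContextTokens(Array(20).fill(demoTool)));
```

The exact numbers don’t matter; the point is that the cost scales with every tool you register, whether or not the conversation ever uses it.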

Additionally, I’ve already seen too many demos where the demonstrator had to argue with the chatbot to get what they wanted. Aren’t demos supposed to encourage the audience to use the product? I’m not encouraged to use LLMs more by these demos; rather, they encourage me to be more cautious while using LLMs. That caution is an expectation mismatch with the perceived hype.

Anthropic recently came out and said prompt engineering isn’t as useful as we think it is, and suggested context engineering instead. This is just weird to me. Okay, so something we’ve been told to do is now not as effective as it was originally supposed to be. But now we need to give an AI all the useful context in order for it to do something useful.

Once I read this, it sounded familiar. Isn’t this the same as duck programming, where a programmer talks the context out to a rubber duck? So Anthropic is essentially suggesting we do duck programming at an AI bot? At the point of giving sufficient context, couldn’t a good software engineer figure the problem out anyway?

I will admit, I could be fine with making LLMs more like duck programming. That way I am still engaging my problem-solving skills, and then the LLM can do all the work. The only hesitancy I have is for when the LLM actually does hallucinate, because LLMs are non-deterministic. There’s no knowing whether I can reliably trust an LLM not to pull in something bad. Things would have to be done in small bursts instead of sweeping code changes. How can I trust tooling that is not nearly 100% reliable?

Tooling

There is a claim that MCP tooling configuration is so much faster than normal tooling. What does “normal” look like? If such a claim is to be made, there should be something to back it up. I can see that one just needs to install an MCP server in VSCode or something similar, but by that comparison, couldn’t I just equate it to installing a VSCode extension? With a VSCode extension, I can have buttons that call a specific endpoint to do something.
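For what it’s worth, the VSCode side of the setup really is small. To my understanding it comes down to a config file along these lines; the file name (`.vscode/mcp.json`) and exact shape here are from my reading of the docs, so double-check against current VSCode documentation before copying:

```json
{
  "servers": {
    "weather": {
      "type": "stdio",
      "command": "deno",
      "args": ["run", "--allow-net", "weather-server.ts"]
    }
  }
}
```

But installing a published extension from the marketplace is a one-click affair too, so the install step doesn’t obviously favor either approach.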

There is also the setup of the MCP server itself, which seems to be missing from this tooling comparison. If an MCP server doesn’t exist yet, then I would have to program it to do what I want, and that feels a lot like setting up a Webpack configuration. At this point, it seems to me that setting up a Webpack configuration and an MCP server would take an equal amount of time.

I’ll give an example. At this link I followed an MCP demo to create a weather MCP server. This simple MCP server contains the server, the weather functions, typing (API typing too), server tool setup, and formatting. The total file is 231 lines. I know I can get something just as useful in fewer lines, and it all depends on how the tool is used. For an MCP weather server, all of that setup exists so the LLM can parse natural language and tell the user they should probably bring an umbrella (or similar).

Here’s a near-equivalent version in plain Deno TypeScript, at 107 lines of code and less time.

const NWS_API_BASE = "https://api.weather.gov";
const USER_AGENT = "weather-app-deno/1.0";

interface AlertFeature {
  properties: {
    event?: string;
    areaDesc?: string;
    severity?: string;
    status?: string;
    headline?: string;
  };
}

interface ForecastPeriod {
  name?: string;
  temperature?: number;
  temperatureUnit?: string;
  windSpeed?: string;
  windDirection?: string;
  shortForecast?: string;
}

interface AlertsResponse {
  features: AlertFeature[];
}

interface PointsResponse {
  properties: {
    forecast?: string;
  };
}

interface ForecastResponse {
  properties: {
    periods: ForecastPeriod[];
  };
}

async function makeNWSRequest<T>(url: string): Promise<T | null> {
  const headers = {
    "User-Agent": USER_AGENT,
    Accept: "application/geo+json",
  };

  try {
    const response = await fetch(url, { headers });
    if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
    return (await response.json()) as T;
  } catch (error) {
    console.error("Error making NWS request:", error);
    return null;
  }
}

function formatAlert(feature: AlertFeature): string {
  const props = feature.properties;
  return [
    `Event: ${props.event || "Unknown"}`,
    `Area: ${props.areaDesc || "Unknown"}`,
    `Severity: ${props.severity || "Unknown"}`,
    `Status: ${props.status || "Unknown"}`,
    `Headline: ${props.headline || "No headline"}`,
    "---",
  ].join("\n");
}

export async function getAlerts(state: string): Promise<string> {
  const stateCode = state.toUpperCase();
  const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`;
  const alertsData = await makeNWSRequest<AlertsResponse>(alertsUrl);

  if (!alertsData) return "Failed to retrieve alerts data";
  const features = alertsData.features || [];
  if (features.length === 0) return `No active alerts for ${stateCode}`;

  const formattedAlerts = features.map(formatAlert);
  return `Active alerts for ${stateCode}:\n\n${formattedAlerts.join("\n")}`;
}

export async function getForecast(latitude: number, longitude: number): Promise<string> {
  const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`;
  const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl);

  if (!pointsData)
    return `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API.`;

  const forecastUrl = pointsData.properties?.forecast;
  if (!forecastUrl) return "Failed to get forecast URL from grid point data";

  const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl);
  if (!forecastData) return "Failed to retrieve forecast data";

  const periods = forecastData.properties?.periods || [];
  if (periods.length === 0) return "No forecast periods available";

  const formattedForecast = periods.map((p) =>
    [
      `${p.name || "Unknown"}:`,
      `Temperature: ${p.temperature ?? "Unknown"}°${p.temperatureUnit || "F"}`,
      `Wind: ${p.windSpeed || "Unknown"} ${p.windDirection || ""}`,
      `${p.shortForecast || "No forecast available"}`,
      "---",
    ].join("\n")
  );

  return `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join("\n")}`;
}

// Example usage (uncomment to run directly):
// const alerts = await getAlerts("CA");
// console.log(alerts);
// const forecast = await getForecast(34.05, -118.25);
// console.log(forecast);

Don’t forget about tool reliability. When I want a tool, I want it to work 99% of the time. I’ve seen too many frustrating demos and been in too many frustrating AI interactions to believe that LLM tools work 99% of the time. Of course, I acknowledge that traditional tooling has edge cases too, but those failures seem rarer than LLM missteps, precisely because they are edge cases.

Conclusion

Now, if MCPs were accessible without a chatbot interface, perhaps the hype would be worthwhile. I’m imagining a simple machine-learning program, not a chatbot, that can access an MCP server at any point. But then it enters the already-existing competition of GraphQL and plain normal REST. If I’m going to boil a chatbot interface down into something simpler, like a button (😱), that already exists. Perhaps an LLM + MCP can have an easy use case, but I believe that creating a website experience oriented around an MCP isn’t it.

There is one project that really catches my attention. Fabric seeks to shape AI UX into something usable. The project’s description drives the issue with LLMs home, I think: “It’s all really exciting and powerful, but it’s not easy to integrate this functionality into our lives. In other words, AI doesn’t have a capabilities problem—it has an integration problem.” Only when AI is boiled down to simple and easy automation tasks can it really excel.

Resources

  1. Vibe Coding has a Security Problem
  2. LLMs and Brain Rot
  3. AI Powered Freelance Development
  4. The 10 most overhyped technologies in IT
  5. AI Washing: The New ‘Dot-Com’ Hype — How Companies Are Misleading Investors and Consumers
  6. Effective context engineering for AI agents \ Anthropic
  7. Everything wrong with MCP
  8. The Hidden Dangers of MCP Servers: What You Need to Know
  9. Where MCP Falls Short
  10. AI vibe coding tools may be going from boom to bust, new data shows. Here’s why.