International Journal of Science and Research (IJSR)

International Journal of Science and Research (IJSR)
Call for Papers | Fully Refereed | Open Access | Double Blind Peer Reviewed

ISSN: 2319-7064


Downloads: 1

Analysis Study Research Paper | Engineering Applications of Artificial Intelligence | Volume 15 Issue 3, March 2026 | Pages: 990 - 998 | India


Security Threats in the Model Context Protocol: A Comprehensive Survey and Trust Boundary Mitigation Framework for Agentic AI Systems

Sanjeeva Reddy Bora, Dr Srinivas Kishan Anapu

Abstract: The Model Context Protocol (MCP), open-sourced by Anthropic in November 2024 and donated to the Linux Foundation?s Agentic AI Foundation in December 2025, has rapidly emerged as the dominant interoperability standard for connecting AI agents to external tools, data sources, and enterprise systems. With over 97 million monthly SDK downloads and 10,000+ active servers by late 2025, MCP?s adoption has dramatically outpaced its security maturity. This paper presents the first comprehensive academic survey of MCP security threats, synthesizing findings from seven disclosed CVEs, eleven major security incidents, and over a dozen demonstrated attack classes documented between April and December 2025. We propose a formal ten-class threat taxonomy spanning tool poisoning, indirect prompt injection via tool responses, cross-server data exfiltration, tool shadowing, supply-chain attacks, rug-pull exploits, credential theft, sampling abuse, terminal deception, and inter-agent trust exploitation. We map these threats against four emerging governance frameworks (OWASP Top 10 for Agentic Applications 2026, MITRE ATLAS, NIST IR 8596, and CSA MAESTRO), analyze domain-specific risk amplification in healthcare, financial services, and enterprise IT, and propose a defense-in-depth Trust Boundary Mitigation Framework (TBMF) combining MCP gateways, zero-trust identity, capability-based least privilege via OAuth scopes, runtime behavioral monitoring, and human-in-the-loop governance. Our analysis reveals that more capable models are paradoxically more susceptible to tool poisoning attacks (72.8% success rate on o1-mini), that 100% of tested LLMs execute malicious commands from peer agents, and that all 41 surveyed defense papers focus exclusively on integrity with zero availability protections- identifying critical research gaps for the community.

Keywords: Model Context Protocol, MCP security, agentic AI, tool poisoning, prompt injection, trust boundaries, zero-trust architecture, AI governance, supply-chain security, large language models

How to Cite?: Sanjeeva Reddy Bora, Dr Srinivas Kishan Anapu, "Security Threats in the Model Context Protocol: A Comprehensive Survey and Trust Boundary Mitigation Framework for Agentic AI Systems", Volume 15 Issue 3, March 2026, International Journal of Science and Research (IJSR), Pages: 990-998, https://www.ijsr.net/getabstract.php?paperid=SR26316110418, DOI: https://dx.dx.doi.org/10.21275/SR26316110418

Download Citation: APA | MLA | BibTeX | EndNote | RefMan


Download Article PDF


Rate This Article!


Top