Traditional computer security principles can help develop secure agentic systems
Umar Iqbal, collaborators develop IsolateGPT to make LLM-based agents more secure

Large language models (LLMs) are becoming more and more useful, from booking appointments to summarizing large volumes of text. Some LLM-based agents can interact with external applications, such as calendars or airline booking apps, introducing privacy and security risks.
To mitigate this risk, Umar Iqbal, assistant professor of computer science & engineering in the McKelvey School of Engineering at Washington University in St. Louis, and Yuhao Wu, a doctoral student in Iqbal’s lab, have developed IsolateGPT, a method that keeps the external tools isolated from each other while still running in the system, allowing the user to get the benefits from the apps without the risk of exposing user data. Other collaborators include Ning Zhang, associate professor of computer science & engineering at WashU; Franziska Roesner, the Brett Helsel Professor, and Tadayoshi Kohno, professor, both in the Paul G. Allen School of Computer Science & Engineering at the University of Washington.
The research was presented at the Network and Distributed System Security Symposium Feb. 24-28, 2025.
“These systems are very powerful and can do a lot of things on a user’s behalf, but users currently cannot trust them because they are simply unreliable,” Iqbal said. “We know that there’s a lot of benefit in having these tools interact with each other, so we define the interfaces that allow them to precisely interface with each other and provide the user with the information to know that the interfacing origin comes from a trustworthy component.”
Iqbal describes IsolateGPT as securing the tools or third-party apps by isolating them into separate containers or sandboxes in a hub-and-spoke system. The hub is the central trustworthy interface that can receive queries from users and route them to the appropriate apps. The stand-alone containers or sandboxes, or spokes, allow the apps to resolve user queries in an isolated environment without losing any functionality. Finally, IsolateGPT allows the spokes to communicate with each other via the trustworthy hub.
Iqbal and his collaborators compared IsolateGPT with a general system they developed, VanillaGPT, that does not isolate the third-party apps. To compare, they used benchmarks that simulated user requests that do not require using apps, require the use of a single or multiple apps, or require collaboration among multiple apps.
One case study involved booking a ride-share with the lowest fare. In a traditional computing system, such as a mobile device, a user would search a few ride-sharing services to compare fares, then choose the service with the lowest fare. In an LLM-based system, a user could simply direct the system to “book a ride with the lowest fare.” As a result, the system may install a few ride-sharing apps, provide the LLM with the relevant information, load the responses into memory, compare them, then authorize the app with the lowest fare to book the ride. However, if the system is nonisolated, a malicious or compromised ride-sharing app could direct the LLM to manipulate the fares of the other apps. With IsolateGPT, this cannot happen, as malicious instructions from the problematic app cannot cross isolated boundaries, Iqbal said.
“We found that for all benchmarks, IsolateGPT provides the same functionality as the baseline VanillaGPT while providing the key advantage of additional security,” Iqbal said.
“Looking ahead, we see IsolateGPT as an effort that helps the research community understand the viability, strengths and limitations of execution isolation in securing LLM-based systems,” Iqbal said. “We envision IsolateGPT providing a foundation for deeper explorations that build on execution isolation, such as enforcing access control through a permission model or where execution isolation can be complementary in securing LLM-based systems.”
Iqbal and his collaborators have partnered with LlamaIndex to offer IsolateGPT as a Llama Pack. The source code is available at https://github.com/llm-platform-security/SecGPT.
Wu Y, Roesner F, Kohno T, Zhang N, Iqbal U. IsolateGPT: An Execution Isolation Architecture for LLM-BASED Agentic Systems. Presented at the Network and Distributed System Security Symposium, Feb. 24-28, 2025. DOI: doi.org/10.14722/ndss.2025.241131
Funding for this research was provided by the National Science Foundation (CNS-2154930, CNS-2238635); the Office of Naval Research (N000142412663); and the Army Research Office (W911NF-24-1-0155).