International Journal of Science and Research (IJSR)

ISSN: 2319-7064

United States | Information Technology | Volume 11 Issue 10, October 2022 | Pages: 1449 - 1453


AI-Powered Insider Threat Detection with Behavioral Analytics and LLMs

Rajashekhar Reddy Kethireddy

Abstract: Insider threats represent one of the most salient risks to organizational security in today's digitized environment, and they have consistently eluded traditional defenses by exploiting legitimate access. This paper presents a novel insider threat detection framework that uses Large Language Models (LLMs) to enable advanced behavioral analytics. Unlike traditional machine learning approaches, LLMs can understand and interpret complex patterns in user behavior through textual and contextual data analysis, such as emails, chat logs, and system interactions. This work constructs a framework that embeds LLMs within behavioral analytics to find subtle anomalies indicative of malicious intent or careless actions. We conducted extensive experiments on real datasets from diverse industries, demonstrating the robustness and applicability of the model across a wide range of environments. Results show that the LLM-enhanced system significantly improves detection accuracy with fewer false positives than state-of-the-art methods. Furthermore, the proposed framework generates explainable insights about detected threats, improving trust and facilitating timely interventions. The result is a comprehensive platform that not only advances the state of the art in insider threat detection but also offers scalable, adaptive solutions that evolve with emerging security challenges.

Keywords: Insider Threat Detection, Behavioral Analytics, Large Language Models, AI Security, Real-World Datasets
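The abstract describes scoring user activity against a learned behavioral baseline. The sketch below illustrates that idea in minimal form: a toy bag-of-words vector stands in for an LLM embedding, and an event is scored by its maximum similarity to the user's baseline events. All function names, the embedding, and the scoring rule are illustrative assumptions, not the paper's actual method.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a stand-in for a real LLM embedding
    # (assumption - the paper does not specify its embedding).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def anomaly_score(baseline_logs, new_event):
    # Score = 1 - max similarity to any baseline event;
    # higher means the event deviates more from past behavior.
    vec = embed(new_event)
    sims = [cosine(vec, embed(log)) for log in baseline_logs]
    return 1.0 - max(sims, default=0.0)

baseline = [
    "user opened quarterly sales report",
    "user emailed sales summary to manager",
]

# A routine event scores near 0; an unusual bulk export scores high.
print(anomaly_score(baseline, "user emailed sales summary to manager"))
print(anomaly_score(baseline, "user exported entire customer database at 3am"))
```

In a production version of this idea, the embedding would come from an LLM and the baseline would be maintained per user over a sliding window, but the score-against-baseline structure stays the same.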
