New research data reveals generative AI is a divisive topic amongst IT leaders, as cyber teams evaluate the benefits of the technology alongside concerns around data privacy
London, UK: Corelight has published a new research* paper highlighting the strong divide among European IT leaders over the suitability of generative AI (GenAI) for use by their cybersecurity teams.
Respondents saw the clearest potential for GenAI to benefit their security teams in areas including:
- Maintaining compliance policies (41%)
- Recommending best practices on domain-specific languages like identity and access management policy (36%)
- Analysing unstructured vulnerability information (35%)
- Providing remediation guidance (35%)
- Analysing unstructured network connection and process information (32%)
Alongside clear concerns and question marks about the practical use and implementation of GenAI in a security environment, 68% of respondents with dedicated threat hunters say it is already helping their threat detection and protection efforts, and a further 28% plan to incorporate these capabilities into more use cases in the future.
Despite these legitimate concerns, many European IT decision makers (ITDMs) have a positive view of the future: more than 40% of respondents say AI and automation are central to creating “the perfect security formula”.
“Generative AI has been successfully applied for alert enrichment and contextualisation, providing SOC analysts with enhanced incident response capabilities,” said Ignacio Arnoldo, Director of Data Science at Corelight.
He continued: “GenAI's adoption is hindered by concerns over data confidentiality and model accuracy. As models improve in overall reasoning capacity and cybersecurity knowledge, and as more LLM deployments include structural privacy protections, GenAI is set to become integral to security operations.”
Corelight helps customers mitigate data protection concerns by establishing a functional firewall that keeps customer-specific data from ever reaching the GenAI model. Pre-vetted GenAI prompts are used to contextualise alerts and provide analysts with investigative recommendations.
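To make the idea concrete, the following is a minimal, hypothetical sketch of how pre-vetted prompts can keep customer data away from a model. The alert types, field names, and prompt templates here are invented for illustration and do not represent Corelight's implementation:

```python
# Hypothetical "functional firewall" between customer data and a GenAI model:
# only pre-vetted prompt templates, keyed by alert category, are ever sent to
# the model. Customer-specific values stay in the local environment.

# Vetted prompts reference the alert *category* only, never customer values.
PREVETTED_PROMPTS = {
    "dns_tunneling": (
        "An alert fired for suspected DNS tunneling. Summarise the typical "
        "attacker behaviour behind this technique and list the first three "
        "investigative steps an analyst should take."
    ),
    "lateral_movement": (
        "An alert fired for suspected lateral movement over SMB. Explain what "
        "evidence in network logs would confirm or refute this activity."
    ),
}

# Fields that may contain customer-specific data and must remain local.
CUSTOMER_FIELDS = {"src_ip", "dst_ip", "hostname", "username", "dns_query"}


def build_safe_prompt(alert: dict) -> str:
    """Map an alert to its pre-vetted prompt without interpolating customer data."""
    prompt = PREVETTED_PROMPTS.get(alert.get("type"))
    if prompt is None:
        raise ValueError(f"no vetted prompt for alert type {alert.get('type')!r}")
    # Defence in depth: refuse to proceed if a customer value somehow appears
    # in the template text.
    for field in CUSTOMER_FIELDS:
        value = str(alert.get(field, ""))
        if value and value in prompt:
            raise RuntimeError(f"customer data from {field!r} leaked into prompt")
    return prompt


if __name__ == "__main__":
    alert = {
        "type": "dns_tunneling",
        "src_ip": "10.0.0.12",                    # stays in the customer environment
        "dns_query": "a1b2c3.exfil.example.com",  # stays in the customer environment
    }
    # Only this generic, pre-vetted text would be sent to the GenAI model; the
    # analyst reviews the model's guidance alongside the raw alert locally.
    print(build_safe_prompt(alert))
```

In this arrangement the model only ever sees generic, pre-approved text, while the analyst correlates its guidance with the raw alert data inside their own environment.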