Microsoft unveils Security Copilot built on GPT-4
Microsoft is launching Security Copilot, a tool that pairs artificial intelligence with its security platform and that company officials say will provide advanced capabilities to protect IT networks from sophisticated threats.
The technology is powered by OpenAI’s GPT-4 generative AI model and combines Microsoft’s global threat intelligence capabilities with its vast security network, which generates more than 65 trillion signals daily, Microsoft said Tuesday.
For Microsoft executives, the copilot offers a solution for a vastly outnumbered security workforce, an industry with 3.4 million unfilled positions globally.
The remaining security operations staff have in many cases found themselves fighting an endless battle: chasing down sophisticated nation-state and criminal adversaries who can generate new threat activity faster than network defenders can weed out false signals.
“The volume and velocity of attacks requires us to continually create new technologies that can tip the scales in favor of defenders,” Vasu Jakkal, Microsoft’s corporate VP of security, compliance, identity and management, said in a blog post released Tuesday. “Security professionals are scarce, and we must empower them to disrupt attackers’ traditional advantages and drive innovation for their organizations.”
The underlying learning model will develop new skills over time, improving detection capabilities and speed, according to Microsoft. Security Copilot will integrate with other Microsoft security products and, over time, with an ecosystem of third-party products.
The arrival of generative AI ups the ante for both defensive and offensive cybersecurity use cases, according to Avivah Litan, VP distinguished analyst at Gartner.
Threat actors have previously used AI to construct attacks with greater speed and effectiveness, and network defenders have used AI for years in various security products and services, including detection and response, endpoint security, user behavior analytics and other offerings.
“In the end it becomes a cat and mouse game that moves much faster than it does now,” Litan said via email. “Whoever has the most effective, generative AI cybersecurity offense or defensive capability wins in the short run.”
Microsoft plans to protect customer data from unauthorized use; customer data will not be used to enrich or train AI models used by others, Jakkal said.