
8 guiding principles for reskilling the SOC for agentic AI

Source: CSO Online


At DXC Technology, global CISO Mike Baker has established one of the largest agentic security operation centers (SOCs) in the world. To upskill the workforce as part of this journey, he embedded experts from agentic SOC vendor 7AI within his security teams.

When Damon McDougald, global cybersecurity services lead at Accenture, wanted to retrain his team for agentic AI, the first thing he did was immerse himself in the technology. He signed up for an Anthropic boot camp, took courses to familiarize himself with the technology, then sent members of his team to take boot camp classes as well.

John White, an early adopter of agentic AI when he was CISO at Virgin Atlantic, tells CSO that he purposely gave a new agentic AI tool to a junior staffer on his Virgin Atlantic team, provided the staffer with minimal direction, and told him to go off and play with the tool. “Within a couple of days, he was building his own workflows with no experience in the tooling at all,” says White, who recently moved from Virgin Atlantic to become field CISO at Torq.

While there are many paths to retraining security teams for agentic AI — from hands-on training to hands-off experimentation — there are several broad principles that CISOs should follow.

Embrace the agentic imperative

The first principle by which CISOs need to operate when it comes to future-proofing their SOCs is that agentic AI, the reality of which might not yet match expectations, will be an essential part of that transformation. “Every security leader needs to start planning for an agentic future because our adversaries will be operating at machine speed and human-based processes, limited by our own biology, will not be able to scale to the needs of the future,” Baker tells CSO. White adds, “The big risk is not adapting fast enough. Lots of CISOs are waiting until the perfect solution comes along or the platform that does everything. That itself introduces risk in the organization.
Inaction is a risk.”

Chris Cochran, field CISO and vice president of AI security at the SANS Institute, tells CSO, “I just hosted a dinner with 30 security execs and half were self-described AI skeptics. The problem is that hesitation is a strategic liability. Adversaries are leveraging AI aggressively and continuously. Security teams that aren’t moving at the same pace are falling behind.”

Set the tone from the top

The second principle for reskilling security teams for agentic AI is all about leadership. As Baker says, CISOs must set the tone. That means building a culture of rapid experimentation, iteration, and innovation. “Fail fast and move forward,” he says.

A key aspect of CISO leadership is understanding the needs of the business, Baker adds. “The challenge of re-disciplining security teams and the executives that run those teams is highly dependent on their ability to lean into what the business needs, to become business enablers by embracing AI, and all that it has to offer,” he notes. “The bigger reskilling is security’s ability to run at the speed of business and enable the business to transform, leveraging AI in a safe and secure manner.”

Making the most of agentic AI for cybersecurity is as much a mindset change as it is a technological one, White adds. “You have to articulate as a leader how things are going to change, and rewire the mindset to the fact that you don’t have to do everything yourself,” he explains. “The majority of execution is going to be done agentically; roles move into defining outcomes, designing workflows, being able to articulate intent through natural language, and having a bit of judgment around the outcomes.”

Respect resistance but work to overcome it

As with any technology shift, resistance to change needs to be addressed, particularly with a technology that threatens to usurp security roles — specifically level 1 and level 2 SOC analysts. “There’s real cultural resistance in the security community.
Some operators distrust AI outputs. Others don’t want to change workflows that have worked for years,” says Cochran. He argues that agentic AI will actually create new roles: “What does emerge are genuinely new specializations: AI security (protecting AI systems from attacks), AI safety (ensuring that agents behave reliably and within boundaries), and AI governance.”

White says that at Virgin Atlantic “there was some nervousness to begin with; people think their roles will get taken by AI and that’s not the case.” That junior staffer who went off to experiment with the Torq tool came back and announced he wanted to change roles and become an automation workflow specialist. “It shows how quickly someone’s mindset can change,” White says.

“There is always somewhat of a resistance around change,” says DXC’s Baker. “We had to go through a growing process and a training process. But in a short amount of time people on the team have that ‘aha’ moment. We have been able to reskill our humans to different value-add tasks. We’re able to do amazing things with humans in terms of redeploying them; it’s almost like supercharging their careers.”

Get hands on and intentional

It’s critical that CISOs carve out time for overworked and overstressed security practitioners to play with agentic tools in a secure sandbox setting. DXC offers a playground called LabX, where security practitioners can experiment with agentic AI in a safe and governed manner, Baker says. To help level up DXC staff, Baker also established an AI training track on the company’s learning management platform, encouraging cybersecurity staff to take time to not only experiment but also more formally develop their skills. Accenture’s McDougald points out that Anthropic training courses offer their own sandbox environments where security pros can enter prompts, analyze responses, then refine and tune the agentic output.
He also advises CISOs to create formal training plans and free up security practitioners to ensure they have the time necessary to get their feet wet with the new technology.

Emphasize governance and humans in the loop

Agentic AI can do amazing things, but “practitioners need to understand that AI is non-deterministic. It can be wrong. It can drift. It can be unintentionally deceptive. That means training can’t just cover how to use AI; it also has to cover how AI can fail, and how to catch it when it does,” SANS’ Cochran says. “The core principle is: Give AI room to scale, but never fully remove the human.” He recommends that security teams build escalation paths, define override authorities, establish audit trails, and create regular review cycles where humans evaluate agentic performance.

At DXC, Baker says that agents have taken over basic triage and investigation of alerts, but there’s still a human at L3, who receives analysis from the AI agents. “We always have a human in the loop,” he says. Similarly, Accenture’s McDougald says he has deployed agents at L1 and L2 but “it still has to be human-led.” Security professionals need to continually vet agentic outputs and provide feedback in an ongoing iterative process.

Rethink the cyber organization

In a SOC where L1 and L2 functions are agentic, organizational changes will necessarily occur. “You have to appreciate that this is an organizational change as much as a technology change,” says White. Agentic AI will have an impact on “the way we design teams, the way we manage people, the way that roles evolve,” he says. The traditional career ladder of moving up the tiers no longer applies when entry-level roles are agentic. “New people coming to security are going to have to take a different route,” he notes.
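The human-in-the-loop routing Cochran and Baker describe can be reduced to a small review gate: agents handle triage, but low-confidence or high-severity verdicts escalate to an L3 analyst, and every routing decision lands in an audit trail. This is a minimal sketch; the names (`AgentVerdict`, `ReviewGate`) and the 0.85 confidence floor are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentVerdict:
    """Illustrative output of an L1/L2 triage agent (hypothetical schema)."""
    alert_id: str
    verdict: str        # e.g. "benign", "suspicious", "malicious"
    confidence: float   # 0.0-1.0, self-reported by the agent

@dataclass
class ReviewGate:
    """Escalate low-confidence or high-severity verdicts to a human L3 analyst."""
    confidence_floor: float = 0.85
    audit_log: list = field(default_factory=list)

    def route(self, v: AgentVerdict) -> str:
        # Never fully remove the human: anything the agent calls malicious,
        # or anything it is unsure about, goes to a person.
        needs_human = v.verdict == "malicious" or v.confidence < self.confidence_floor
        decision = "escalate_to_L3" if needs_human else "auto_close"
        # Audit trail: record every routing decision for later review cycles.
        self.audit_log.append({
            "alert": v.alert_id,
            "verdict": v.verdict,
            "confidence": v.confidence,
            "decision": decision,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return decision

gate = ReviewGate()
print(gate.route(AgentVerdict("A-1", "benign", 0.97)))     # auto_close
print(gate.route(AgentVerdict("A-2", "malicious", 0.99)))  # escalate_to_L3
```

In practice the audit log would feed the regular review cycles Cochran recommends, where humans evaluate agentic performance over time.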
White adds that traditional security teams have been divided into disciplines and silos, but “those roles now start to move around, merge, and become more of a holistic capability than a siloed one.”

Foster skills that optimize human-AI collaboration

The introduction of agentic systems will necessarily transform cyber’s skillset. Josh Taylor, lead cybersecurity analyst at Fortra, says, “The SOC analyst’s job has always been about processing signals, triaging alerts, correlating events, escalation. Agentic AI doesn’t eliminate that work; it will relocate an analyst from inside the process to above it. The fundamental reskilling challenge won’t be as technical; it will be cognitive.” He adds, “When an agent triages 200 alerts and presents five for human review, the analyst needs to assess whether the agent’s reasoning was sound. CISOs should invest in training that builds ‘model intuition,’ the ability to recognize when an agent’s output feels right but is structurally wrong.”

CISOs should also emphasize training that teaches analysts to set policies and define constraints on what agents are allowed or not allowed to do, such as block production traffic or send external communication, Taylor says. “SOC teams need to build decision boundaries the same way they build incident response playbooks.”

White adds, “The beauty of agentic AI is that all you need is a well-articulated statement of what you’re trying to achieve, and the agent will do the rest for you. This type of intent-driven, automated, AI-based engineering is available now.” But humans need to evaluate whether the new workflow or process achieved the desired goal and delivered the value that was expected, and they need to understand how to go back to the agent, refine the prompts, and achieve a more favorable outcome, he says.

Reimagine your operating model

With 120,000 end users, Baker understood that his security teams were buried in alerts, data, and telemetry.
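Taylor's "decision boundaries" can be expressed as a deny-by-default action policy: an agent may only take actions on an explicit allow-list, and destructive actions pause for human sign-off. The action names and sets below are hypothetical examples, not a real product's policy schema.

```python
# Deny-by-default policy for agent actions (illustrative names only).
ALLOWED_ACTIONS = {"enrich_alert", "query_logs", "quarantine_endpoint"}
REQUIRES_HUMAN = {"quarantine_endpoint"}  # destructive: an analyst must sign off

def authorize(action: str, human_approved: bool = False) -> str:
    """Return "allow", "deny", or "pending_human_approval" for an agent action."""
    if action not in ALLOWED_ACTIONS:
        return "deny"                    # not on the allow-list at all
    if action in REQUIRES_HUMAN and not human_approved:
        return "pending_human_approval"  # pause until a human approves
    return "allow"

print(authorize("query_logs"))                               # allow
print(authorize("quarantine_endpoint"))                      # pending_human_approval
print(authorize("quarantine_endpoint", human_approved=True)) # allow
print(authorize("block_production_traffic"))                 # deny
```

Building these boundaries up front, the way teams build incident response playbooks, is what keeps actions like blocking production traffic or sending external communication out of an agent's unsupervised reach.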
Today, his tier 1 and tier 2 SOC analyst roles are agentic, but the SOC is only the beginning. His roadmap includes agentic AI playing a role in vulnerability management, penetration testing, patching, and other security functions. White’s roadmap takes a similar path. “I would imagine that we will be leveraging AI more and more.” His agentic priority list includes vulnerability management, pen testing, patching, and compliance.

White says the benefits of an agentic SOC extend beyond technology to the human side of security. “The best leaders will be ones who have evolved the target operating model. In doing so, it’s going to make you have a happier workforce. Your SOC is going to look like a calm place, not chaos with alerts going everywhere.”

Source Attribution

Originally published by CSO Online on May 11, 2026.
