What Role Can Artificial Intelligence Play in Fixing the Security Skills Shortage?

Demand for highly desirable digital skills is hitting new heights. A recent Learning and Work Institute report noted that one in four (27%) employers now need the majority of their workers to have in-depth specialist knowledge in one or more technology areas. And 60% of those surveyed expect their reliance on advanced digital skills to increase over the next five years.

The skills gap is particularly acute in the security tech sector. A global study from the Center for Cyber Safety and Education predicted a terrifying shortage of 1.8 million security workers by 2022. The problem is compounded by the number of young people taking IT-related GCSEs in the UK, which has fallen by 40% since 2015 (according to Learning and Work Institute data).

This scarcity of qualified professionals has inflated salaries, making it hard for firms that cannot afford to offer large paychecks and generous benefits packages to secure top talent.

Plus, too many people who apply for security-focused positions do not know how to detect and respond to security threats. This doesn’t rule them out as candidates, but it does mean they need rigorous training before they can hit the ground running.

With the outlook this bleak, companies are understandably seeking other means to fill the void in their security operations. Enter artificial intelligence (AI) – one of the industry’s best hopes for easing the pressures of the security skills shortage.

What Security Roles Can AI Help Fill?

Many aspects of IT security are time-consuming and costly. This is exactly where AI flourishes with its ability to dramatically speed up routine and repetitive tasks, processes and analyses that would take skilled staff hours, days or even weeks to accomplish. AI won’t replace a human team, but it can eliminate tedious work and provide the data that will allow fewer experts to make better decisions faster.

AI has been touted as a way to bridge the technical skills gap in many sectors. But within cybersecurity and data privacy, two specific use cases stand out:

Incident Response

Manually identifying and responding to incidents is both difficult and time-consuming. However, advances in machine learning mean that AI can help detect novel attacks, not just those previously ‘seen in the wild.’ This is a big step forward, as responses can eventually be triggered without human intervention. And as a system continues to learn, it will begin to predict incidents before they happen.
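
To make this concrete, below is a minimal sketch of the underlying idea: an unsupervised anomaly detector trained only on what ‘normal’ traffic looks like, so it can flag behavior no existing signature describes. The features, values and thresholds here are invented for illustration; a real deployment would look quite different.

```python
# Minimal sketch: unsupervised anomaly detection over simulated network-flow features.
# All feature choices and numbers below are illustrative assumptions, not a product's pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" sessions: bytes transferred, session duration (s), failed logins.
normal = np.column_stack([
    rng.normal(50_000, 10_000, 5_000),   # bytes out
    rng.normal(120, 30, 5_000),          # duration
    rng.poisson(0.2, 5_000),             # failed logins
])

# A handful of simulated suspicious sessions: exfiltration-sized transfers and
# repeated failed logins that no existing signature happens to describe.
suspicious = np.column_stack([
    rng.normal(400_000, 50_000, 10),
    rng.normal(20, 5, 10),
    rng.poisson(8, 10),
])

events = np.vstack([normal, suspicious])

# Isolation Forest learns the shape of normal behavior without labelled attack data,
# so genuinely novel activity can still be surfaced for an analyst to triage.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(events)

flags = model.predict(events)  # -1 = anomaly, 1 = normal
print(f"Flagged {int((flags == -1).sum())} of {len(events)} sessions for review")
```

An analyst still reviews what the model flags; the gain is that the triage queue becomes a few dozen sessions rather than millions of log lines.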

"AI has been touted to bridge the technical skills gap in lots of sectors"

Threat Hunting

Threat hunting is another time-consuming and expensive cyber-defense necessity. Traditional manual techniques rely on security analysts using their own knowledge of existing, known threats to identify risks, which makes hunting for unknown threats that evade existing security solutions and lurk undetected in a company’s networks extremely difficult. But AI can be very effective here, especially when it is integrated with behavioral analysis. The insights AI surfaces are also helpful in training junior analysts, who can study the results without doing the hard graft, allowing them to understand new security concerns faster.
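
As a simplified illustration of behavioral analysis in a hunting workflow – using made-up log fields such as user, login_hour and bytes_out – one approach is to baseline each user’s normal activity and surface the biggest deviations for an analyst (or a junior analyst in training) to investigate:

```python
# Simplified sketch of behavior-based threat hunting: baseline each user's typical
# activity, then score how far today's activity deviates from that baseline.
# The field names (user, login_hour, bytes_out) are hypothetical log attributes.
import pandas as pd

history = pd.DataFrame({
    "user":       ["alice"] * 4 + ["bob"] * 4,
    "login_hour": [9, 10, 9, 11, 22, 23, 22, 21],
    "bytes_out":  [2e6, 3e6, 2.5e6, 2e6, 1e6, 1.2e6, 0.9e6, 1.1e6],
})

today = pd.DataFrame({
    "user":       ["alice", "bob"],
    "login_hour": [3, 22],        # alice logs in at 3 a.m. -- unusual for her
    "bytes_out":  [9e7, 1.0e6],   # and moves far more data than her baseline
})

# Per-user baseline: mean and standard deviation of each behavioral feature.
baseline = history.groupby("user").agg(["mean", "std"])

def deviation_score(row):
    """Sum of z-scores across features -- a crude 'how unusual is this?' measure."""
    score = 0.0
    for feature in ("login_hour", "bytes_out"):
        mean = baseline.loc[row["user"], (feature, "mean")]
        std = baseline.loc[row["user"], (feature, "std")] or 1.0
        score += abs(row[feature] - mean) / std
    return score

today["score"] = today.apply(deviation_score, axis=1)
print(today.sort_values("score", ascending=False))
```

Real behavioral analytics tools model far richer signals, but the principle is the same: learn the baseline, then hunt the deviations.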

Automating routine or repetitive tasks is a relatively straightforward and dependable use of AI. However, going deeper and using it to analyze vast amounts of data from multiple sources at scale – and then using that data to enhance the security of an organization’s systems autonomously – is where AI will add the most value in the long term.

Do We Even Need People?

AI and an increasing number of integrated cybersecurity and machine learning tools can ease the load on corporate security teams that are understaffed and facing growing pressure from an expanding threat ecosystem. But that doesn’t mean businesses using AI can dismiss the need for skilled security experts.

Human oversight of any autonomous technology is not a ‘nice to have’ but a ‘must have’ component. AI does not work without human input. If deployed without due care and knowledgeable supervision, AI use could result in many privacy and ethical difficulties as it begins to predict incidents and independently take action against users and their devices. Moreover, AI is only as good as the data it uses. Therefore, human oversight and intelligence are necessary to determine what data will feed and train AI systems. Bad data can quickly lead to bad decisions and bad outcomes.

The potential of AI to help solve the industry’s security skills shortage is exciting. Yet, companies must remember that you cannot simply throw software at a problem. An AI undertaking still requires sophisticated, knowledgeable tech experts who understand data and data privacy, know what AI can accomplish, and appreciate where it can free people to work on less tedious tasks. Ultimately, AI will help fix the skills gap because it gives teams the breathing room to focus on what’s most important and put their expertise to work where needed most.
