EXECUTIVE SUMMARY:
When it comes to digital transformations, artificial intelligence and machine learning capabilities present tremendous opportunities. At the same time, they also expand the threat surface that CISOs and cyber risk professionals have to govern.
To navigate this new landscape successfully, organizations need to adopt a holistic approach to risk management. In practice, that means CISOs may now need to have teams conduct red team exercises against AI models and AI-enabled applications.
Dedicated AI red teams can identify vulnerabilities, test defenses, improve incident response, support compliance and add value to a comprehensive cyber security strategy.
AI and the enterprise tech stack
AI is becoming a go-to for decision-making, financial forecasting, predictive maintenance and many other enterprise functions. It’s becoming part of the furniture in the enterprise tech stack, so to speak.
The field of AI risk management and AI assurance will prove to be a growing domain for CISOs and cyber security leaders in the coming years. Threat modeling and testing for weaknesses in AI deployments will turn into essential components of managing AI risk.
AI red team development
AI red teams can help safeguard AI systems via exercises, threat modeling and risk assessments. Says data scientist and AI risk expert Patrick Hall, “You should be red-teaming your machine learning models. You should be performing at least the integrity and confidentiality attacks on your own machine learning systems to see if they’re possible.”
While that may be the ideal, implementing new processes and systems to manage and execute red team exercises isn’t such a clear-cut undertaking…
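Concretely, the kind of checks Hall describes can start small. Below is a minimal, illustrative sketch, assuming a scikit-learn classifier: a crude integrity probe (do small input perturbations flip predictions?) and a crude confidentiality probe (does the model behave differently on training members than on unseen data, a basic membership-inference signal?). A real red team would use purpose-built adversarial tooling and attacks tailored to the deployed model.

```python
# Minimal sketch: crude integrity and confidentiality probes against your own model.
# Assumes a scikit-learn classifier; real red team exercises would use dedicated
# tooling and attacks tailored to the actual deployment.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Integrity probe: do small input perturbations flip the model's predictions?
noise = np.random.default_rng(0).normal(scale=0.3, size=X_test.shape)
flip_rate = np.mean(model.predict(X_test) != model.predict(X_test + noise))
print(f"Prediction flip rate under small perturbations: {flip_rate:.1%}")

# Confidentiality probe: a large confidence gap between training members and
# unseen data is a crude membership-inference signal.
conf_members = model.predict_proba(X_train).max(axis=1).mean()
conf_outsiders = model.predict_proba(X_test).max(axis=1).mean()
print(f"Mean confidence on training data: {conf_members:.3f}")
print(f"Mean confidence on unseen data:   {conf_outsiders:.3f}")
```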
Tech giants’ AI red teams
Companies like Facebook and Microsoft have developed AI red teams to explore their AI threat environments and to better understand their AI risks and security response capabilities.
But this is not the norm. At present, there are few standardized industry best practices that advocate for AI red teams or that describe the scope of an ideal AI red team. While there are resources out there for researching and uncovering AI risks, a crystallized framework doesn’t yet exist.
Definition in-progress
For some organizations, an AI red team might engage in regular attacks on AI models. For others, an AI red team might mean something else entirely. Regardless of the exact definition and the corresponding responsibilities, AI-driven risks are real and require professional attention.
Security risks and AI red teams
The entire list of AI-related risks remains unwritten, but a few key risks include:
- Malfunction scenarios that could lead the AI to behave in wildly unpredictable ways
- Malfunction scenarios that lead to incorrect output and wrong information
- Potential exposure of personally identifiable information or intellectual property
- Supply-chain and application build security issues
- Distribution of AI systems to unauthorized groups or persons
While many of these problems can arguably be chalked up to AI design failures, they threaten all three aspects of the CIA triad: confidentiality, integrity and availability.
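As a starting point, some teams keep a lightweight register that maps each identified risk to the triad properties it threatens. The sketch below is purely illustrative; the risk names mirror the list above and the mappings would vary by system.

```python
# Illustrative only: a lightweight register mapping the risks listed above
# to the CIA triad properties they threaten. Mappings will differ by system.
AI_RISK_REGISTER = {
    "unpredictable model behavior":          {"integrity", "availability"},
    "incorrect output / wrong information":  {"integrity"},
    "exposure of PII or intellectual property": {"confidentiality"},
    "supply-chain / build security issues":  {"confidentiality", "integrity", "availability"},
    "distribution to unauthorized parties":  {"confidentiality"},
}

for risk, impacted in AI_RISK_REGISTER.items():
    print(f"{risk}: {', '.join(sorted(impacted))}")
```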
When to consider an AI red team
AI risks may not pose serious issues for organizations that are just starting to integrate artificial intelligence into their day-to-day routines. However, organizations that use AI to support serious decisions or for automated decision-making should leverage AI red teams.
Developing an AI red team is a heavy lift for the vast majority of today’s CISOs. Effectively executing red team exercises against AI systems will demand a cross-disciplinary team of security, AI and data science experts.
It will also require better visibility into the AI models deployed by enterprises, including those within third-party software. Beyond that, teams will need a way to plan security improvements based on red team findings.
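One way to start building that visibility is a basic inventory of deployed models, including third-party ones. The sketch below is hypothetical; the field names are illustrative rather than any standard schema.

```python
# Hypothetical sketch of a minimal AI model inventory, to build visibility into
# which models are deployed, who owns them, and whether they are in scope for
# red team exercises. Field names are illustrative, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner: str                    # accountable team or vendor
    source: str                   # "in-house" or "third-party"
    data_sensitivity: str         # e.g., "contains PII", "public"
    decision_impact: str          # e.g., "advisory", "automated decision"
    last_red_team_exercise: str = "never"
    open_findings: list[str] = field(default_factory=list)

inventory = [
    ModelRecord("credit-risk-scoring", "risk-analytics", "in-house",
                "contains PII", "automated decision"),
    ModelRecord("ticket-triage", "vendor-x", "third-party",
                "internal only", "advisory"),
]

# Flag models that make automated decisions but have never been tested.
for record in inventory:
    if record.decision_impact == "automated decision" and record.last_red_team_exercise == "never":
        print(f"Untested high-impact model: {record.name} (owner: {record.owner})")
```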
What the future may hold
The first major cyber attack on artificial intelligence/machine learning tools will likely drive attention to the subject and expand interest in AI red teaming. But experts say that you shouldn’t wait until then to start on AI testing…
Even if your organization cannot get started with AI testing right now, you might consider pursuing early-stage steps, like threat modeling. This will enable your organization to see what might be vulnerable, easily breached or disrupted ahead of an actual event.
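For example, a first pass at threat modeling can be as simple as crossing the components of an AI pipeline with a standard threat framework such as STRIDE, turning each pairing into a question for the team to answer. The components listed below are illustrative, not a complete model.

```python
# Illustrative first pass at threat modeling an AI pipeline: cross each
# component with the STRIDE categories to generate questions to answer.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service", "Elevation of privilege"]

AI_PIPELINE_COMPONENTS = ["training data store", "model training job",
                          "model registry", "inference API", "monitoring logs"]

for component in AI_PIPELINE_COMPONENTS:
    for threat in STRIDE:
        print(f"How could '{component}' be affected by {threat.lower()}?")
```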
For more insights into artificial intelligence and cyber security, please see CyberTalk.org’s past coverage. Want to stay up-to-date with trends in technology? Check out the CyberTalk.org newsletter! Sign up today to receive top-notch news articles, best practices and expert analyses, delivered straight to your inbox.