Joe Regensburger is currently the Vice President of Research at Immuta. A leader in data security, Immuta enables organizations to unlock value from their cloud data by protecting it and providing secure access.
Immuta is architected to integrate seamlessly into your cloud environment, providing native integrations with the leading cloud vendors. Following the NIST cybersecurity framework, Immuta covers the majority of data security needs for most organizations.
Your educational background is in physics and applied mathematics. How did you eventually find yourself working in data science and analytics?
My graduate work was in experimental high-energy physics. Analyzing data in that field requires a great deal of statistical analysis, particularly separating the signatures of rare events from those of more frequent background events. These skills are very similar to those required in data science.
Could you describe what your current role as VP of Research at data security leader Immuta entails?
At Immuta, we are focused on data security. This means we need to understand how data is being used and how it can be misused, and provide data professionals with the tools necessary to support their mission while preventing misuse. So, our role involves understanding the demands and challenges data professionals face, particularly in regard to regulations and security, and helping solve those challenges. We want to lessen the regulatory demands and enable data professionals to focus on their core mission. My role is to help develop solutions that lessen those burdens. This includes developing tools to discover sensitive data, automating data classification, detecting how data is being used, and creating processes that enforce data policies to ensure data is being used properly.
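To make that concrete, here is a minimal sketch of what automated sensitive data discovery can look like: pattern-based tagging of column values. The patterns, threshold, and function name are illustrative assumptions, not Immuta's implementation, which draws on far richer signals.

```python
import re

# Hypothetical patterns for illustration; production classifiers combine
# many more signals (column names, data profiling, statistical models).
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

def classify_column(values: list[str]) -> set[str]:
    """Tag a column with the sensitive-data types its values match."""
    tags = set()
    for name, pattern in PATTERNS.items():
        matches = sum(1 for v in values if pattern.match(v))
        # Require a majority of values to match before tagging,
        # to avoid false positives from a few stray strings.
        if values and matches / len(values) > 0.5:
            tags.add(name)
    return tags

print(classify_column(["alice@example.com", "bob@example.org"]))  # {'email'}
```

Once columns carry tags like these, policy enforcement can key off the tags rather than off individual tables, which is what makes classification worth automating.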
What are the top challenges in AI Governance compared to traditional data governance?
Tech leaders have described AI governance as a natural next step and progression from data governance. That said, there are some key differences to keep in mind. First and foremost, governing AI requires a level of trust in the output of the AI system. With traditional data governance, data leaders could easily trace a result back to the data that produced it using a traditional statistical model. With AI, traceability and lineage become a real challenge, and the lines can easily blur. Trust in the outcome your AI model reaches can be undermined by hallucinations and confabulations, a challenge unique to AI that must be solved in order to ensure proper governance.
Do you believe there is a universal solution to AI governance and data security, or is it more case-specific?
While I don’t think there is a one-size-fits-all approach to AI governance as it pertains to securing data at this point, there are certainly considerations data leaders should adopt now to lay a foundation for security and governance. When it comes to governing AI, it’s really critical to have context around what the AI model is being used for and why. If you’re using AI for something mundane with less impact, your calculated risk will be a lot lower. If you’re using AI to make decisions about healthcare or to train an autonomous vehicle, your risk impact is much higher. This is similar to data governance; why data is being used is just as important as how it’s being used.
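As a rough illustration of that risk calculus, the toy function below lets the use case set an impact tier that scales the data's sensitivity. The tier names and weights are invented for the example, not a formal framework.

```python
# Illustrative impact tiers per use case; higher means more consequential.
IMPACT = {
    "marketing_copy": 1,
    "customer_support": 2,
    "healthcare_decision": 5,
    "autonomous_vehicle": 5,
}

def risk_score(use_case: str, data_sensitivity: int) -> int:
    """Risk grows with both why the data is used and how sensitive it is."""
    return IMPACT.get(use_case, 3) * data_sensitivity  # unknown use: mid tier

print(risk_score("marketing_copy", 1))       # low stakes: 1
print(risk_score("healthcare_decision", 4))  # high stakes: 20
```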
You recently wrote an article titled “Addressing the Lurking Threats of Shadow AI”. What is Shadow AI and why should enterprises take note of this?
Shadow AI can be defined as the rogue use of unauthorized AI tools that fall outside of an organization’s governance framework. Enterprises need to be aware of this phenomenon in order to protect their data, because feeding internal data into an unauthorized application like an AI tool can present enormous risk. Shadow IT is generally well known and relatively easy to manage once spotted: just decommission the application and move on. With shadow AI, you don’t have a clear end-user agreement on how data is used to train an AI model or where the model ultimately shares its responses once they are generated. Essentially, once that data is in the model, you lose control over it. To mitigate the potential risk of shadow AI, organizations must establish clear agreements and formalized processes for using these tools whenever data will leave the environment.
Could you explain the advantages of using attribute-based access control (ABAC) over traditional role-based access control (RBAC) in data security?
Role-based access control (RBAC) functions by restricting permissions or system access based on an individual’s role within the organization. The benefit is that it makes access control static and straightforward, because users can only get to data if they are assigned to certain predetermined roles. While an RBAC model has traditionally served as a hands-off way to control internal data usage, it is by no means foolproof, and today we can see that its simplicity is also its main drawback.
RBAC was practical for a smaller organization with limited roles and few data initiatives. Contemporary organizations are data-driven, with data needs that grow over time. In this increasingly common scenario, RBAC’s efficiency falls apart. Thankfully, we have a more modern and flexible option for access control: attribute-based access control (ABAC).

The ABAC model takes a more dynamic approach to data access and security than RBAC. It defines logical roles by combining the observable attributes of users and data, and determines access decisions based on those attributes. One of ABAC’s greatest strengths is its dynamic and scalable nature. As data use cases grow and data democratization brings more users into organizations, access controls must be able to expand with their environments to maintain consistent data security. An ABAC system also tends to be inherently more secure than prior access control models, and this high level of data security does not come at the expense of scalability. Unlike previous access control and governance standards, ABAC’s dynamic character creates a future-proof model.
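The contrast is easy to see in code. Below is a minimal sketch under assumed user and resource models: RBAC resolves access through a static role-to-resource table, while ABAC evaluates a logical rule over user and data attributes, so new users and datasets are covered without minting new roles. All names and attributes here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)    # RBAC input
    attributes: dict = field(default_factory=dict)  # ABAC input

# RBAC: a static lookup against predetermined roles.
RBAC_GRANTS = {"claims_analyst": {"claims_db"}}

def rbac_allows(user: User, resource: str) -> bool:
    return any(resource in RBAC_GRANTS.get(r, set()) for r in user.roles)

# ABAC: a predicate over observable attributes of the user and the data.
def abac_allows(user: User, data_attrs: dict) -> bool:
    return (
        user.attributes.get("department") == data_attrs.get("owner_dept")
        and user.attributes.get("training_complete", False)
        and data_attrs.get("classification") != "restricted"
    )

u = User("ana", roles={"claims_analyst"},
         attributes={"department": "claims", "training_complete": True})
print(rbac_allows(u, "claims_db"))  # True, but only via a hand-assigned role
print(abac_allows(u, {"owner_dept": "claims",
                      "classification": "internal"}))  # True, from attributes
```

Note that adding a new dataset under ABAC only requires tagging it with attributes; under RBAC it requires revisiting every role definition that should reach it.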
What are the key steps in expanding data access while maintaining robust data governance and security?
Data access control restricts the access, permissions, and privileges granted to certain users and systems, helping to ensure that only authorized individuals can see and use specific data sets. That said, data teams need access to as much data as possible to drive the most accurate business insights. This presents an issue for data security and governance teams, who are responsible for ensuring data is adequately protected against unauthorized access and other risks. In an increasingly data-driven business environment, a balance must be struck between these competing interests. In the past, organizations tried to strike this balance using a passive approach to data access control, which created bottlenecks and held organizations back when it came to speed. To expand data access while maintaining robust data governance and security, organizations must adopt automated data access control, which introduces speed, agility, and precision into the process of applying rules to data. There are five pillars to master in order to automate data access control:
- It must be able to support any tool a data team uses.
- It needs to support all data, regardless of where it’s stored or the underlying storage technology.
- It requires direct access to the same live data across the organization.
- Anyone, with any level of expertise, must be able to understand what rules and policies are being applied to enterprise data.
- Data privacy policies must live in one central location.
Once these pillars are mastered, organizations can break free from the passive approach to data access control and enable secure, efficient, and scalable data access.
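To illustrate the last pillar, here is a minimal policy-as-code sketch in which every policy lives in one central structure and is resolved uniformly for any data asset, wherever it is stored. The policy names, fields, and actions are assumptions for illustration, not Immuta's policy language.

```python
# Central policy registry: one place to define rules, applied everywhere.
POLICIES = [
    {"name": "mask_pii",
     "applies_to": lambda col: "email" in col.get("tags", set()),
     "action": "mask"},
    {"name": "deny_restricted",
     "applies_to": lambda col: col.get("classification") == "restricted",
     "action": "deny"},
]

def effective_actions(column: dict) -> list[str]:
    """Resolve which centrally defined policies apply to a given column."""
    return [p["action"] for p in POLICIES if p["applies_to"](column)]

print(effective_actions({"tags": {"email"}, "classification": "internal"}))
# ['mask']
```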
In terms of real-time data monitoring, how does Immuta empower organizations to proactively manage their data usage and security risks?
Immuta’s Detect product enables organizations to proactively manage their data usage by automatically scoring data based on how sensitive it is and how it is protected (for example, through data masking or a stated purpose for accessing it), so that data and security teams can prioritize risks and receive real-time alerts about potential security incidents. By quickly surfacing and prioritizing data usage risks with Immuta Detect, customers can reduce time to risk mitigation and maintain robust data security overall.
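As a hedged sketch of how such prioritization might work conceptually, the toy scorer below weighs a data class's sensitivity against the protections in place (masking, a stated access purpose) and raises an alert above a threshold. The classes, weights, and threshold are invented for illustration and are not Immuta Detect's actual model.

```python
# Illustrative sensitivity weights per data classification.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 3, "pii": 5}

def usage_risk(data_class: str, masked: bool, purpose_stated: bool) -> int:
    """Score an access event: sensitivity minus the protections applied."""
    score = SENSITIVITY.get(data_class, 1)
    if masked:
        score -= 2  # masking reduces exposure
    if purpose_stated:
        score -= 1  # an approved purpose reduces risk
    return max(score, 0)

def alert_if_risky(event: dict, threshold: int = 3) -> None:
    score = usage_risk(event["class"], event["masked"], event["purpose"])
    if score >= threshold:
        print(f"ALERT: {event['user']} accessed {event['class']} data "
              f"(risk score {score})")

alert_if_risky({"user": "bob", "class": "pii",
                "masked": False, "purpose": False})
```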
Thank you for the great interview; readers who wish to learn more should visit Immuta.
Published on The Digital Insider at https://is.gd/kuAlJP.