Atul Vashistha:
Good morning, good evening, everyone. 90% of the data that we have access to today was created in the last two years. Every day, 2.5 quintillion bytes of data are added. That’s 18 zeros. A billion only has nine zeros. So you can imagine the amount of data that’s getting added every day. We’re seeing a similar addition of data when it comes to supply chain risk intelligence and third-party risk management. So how does one take this data, this intelligence that you’re receiving on risk, and actually make it actionable?
We’re very fortunate today to have a great resource joining us. Jim Routh is the former Chief Security Officer of MassMutual, CVS, and Aetna. He’s led risk at organizations like JP Morgan Chase, KPMG, DTCC, and American Express. So he brings 40 years of risk leadership in cybersecurity and operational risk management, during which he has helped build world-class risk management programs that not only leveraged risk intelligence but used data science and automation to enable risk actions and create very effective risk organizations. Jim, welcome.
Jim Routh:
Atul, thank you very much. Really happy to participate in this forum.
Atul Vashistha:
Wonderful, Jim. Again, thanks to the Global Risk Community for hosting this. To give you a quick background, I am the chairman and founder of Supply Wisdom. Supply Wisdom provides continuous risk management and automated risk actions for companies globally. So Jim, one of the things that I wanted to start with is that, very clearly, COVID accelerated the need for, and the visibility into, how organizations needed to be better at third-party risk management and be more resilient. At the same time, if we think about our world today, we’re seeing significant disruptions in business operations. You have a couple of decades of experience in operational risk management. If you reflect on that journey, Jim, what have you experienced around business disruptions?
Jim Routh:
Well, Atul, I’ll tell you what comes to mind immediately from my perspective: I think about the fundamental mistakes that I made when I implemented my first third-party governance program. It was close to 20 years ago. I think about those mistakes because they’re largely amplified by fundamental changes and the evolution of risk in third-party governance. Mistake number one was clear: I had a one-size-fits-all model. I had thousands of third parties that provided different products, different services, and frankly had very different risk profiles. Yet I had a one-size-fits-all model where I was looking at assessment information that was consistent across all of them, usually in the form of a painful questionnaire that inflicted friction on them and gave us something to work with.
Then there were the annual assessments that we did, and most of those involved a data center tour; I got sick and tired of looking at data centers all day. Essentially, those were fundamental mistakes. I didn’t know it at the time. Certainly today, as we look back, it seems silly. But that’s the way it was. I would say that there are a lot of enterprises today that are still using those conventional controls in a one-size-fits-all model. It’s just not practical. Frankly, the irony is that operational risk has always been data intensive, always. The application of operational risk to third-party governance has really not been data intensive. That’s another mistake, if you will, that I made way back when…
Today we’ve got climate change driving really extreme weather events in different parts of the globe at any particular time, with the potential to disrupt our supply chains. That’s going to continue to evolve going forward. Nation-state-sponsored cybersecurity attacks that have a collateral effect on private enterprise are also increasing. Financial market volatility is increasing, along with political instability. These are all things that make third-party governance from an operational risk perspective fundamentally different today, and yet we’re using legacy control management procedures that are obsolete.
Atul Vashistha:
Thanks for that, Jim. So, very clearly the need is rising. What I’d love to do is talk about challenges next. But before we do that, I want to remind the audience: please use the Q&A or the chat functions to ask questions, or if you have any experiences you want to share, feel free to do that also. So Jim, let’s talk about the challenges. The need is very clear. When I think about risk management solutions in the market, one of the reasons why I jumped into this space to create a company was that I noticed the challenges with point-in-time assessments. They became stale very quickly. You talked about geopolitical risk. Very few companies really interlinked their service provider risks to their location risk, so they were not really monitoring location. COVID showed quite clearly how many companies were ill-prepared. So Jim, from your experience with risk management solutions that are inadequate, what else do you see as limitations of these older solutions?
Jim Routh:
Well, I think fundamentally, the inputs from a data perspective are static and periodic. Typically, this is the way it works in most third-party governance programs. You’ve got limited resources that support the risk analysis function, and they’re struggling to keep up with the demand in the procurement pipeline. There are always a few companies that happen to be the pet projects of the chairman and CEO. Of course, those get ratcheted up in priority, and you’re quickly putting your best resources on them to complete the security assessments that are part of the procurement process. Then of course, you have the other projects that are required in the procurement process.
They have a lower priority on a relative basis. Then by the time you get to all of them, which often you don’t, but by the time you do, you have to do the recertification, and the recertification has to be prioritized by risk. So you rank the top vendors by risk; you have to have some sense of that. Of course, you’re using static, periodic data to make that determination, and the environment is very volatile. Then at the end of the day, no third-party risk function has enough resources to do what’s on their plate today, much less deal with emerging risk as it evolves. So it’s a zero-sum game for most enterprises simply because they’re resource starved, trying to keep up with all the questionnaires that are going out.
Atul Vashistha:
Yeah. Jim, I think that summarizes the challenges risk leaders face today really well. Interestingly for me, I know the audience is really here, Jim, to hear from you on solutions. So we’re going to jump to that in a minute. It’s interesting how our journeys crossed. I remember it was 2017 at the Shared Assessments Summit, where here I am in my early years developing a risk management solution and talking about continuous monitoring, and here you are on stage talking about the evolution of risk management and how risk leaders needed to be focused. I felt like I no longer needed to evangelize because here was a leader that had already adopted it. Tell us a little bit about how you got to that conclusion, that you needed to move the model, even as early as 2016, 2017?
Jim Routh:
Well, one of the first things that I did was recognize that operational risk is a data intensive process. And frankly, third-party risk management for cybersecurity and for resilience, just business resilience, needed to be a data intensive process. We needed much more effective data, and real-time or close to real-time data. That was absolutely critical. There were seminal events that happened. There was supply chain poisoning that took place, where an attack on the fundamental supply chain disrupted many, many businesses by planting malware and spreading that malware through the supply chain, leaving many businesses basically unable to operate. Now, this sounds very familiar because all of us have been reading about SolarWinds, but what I’m talking about was NotPetya, which was exactly the same kind of thing.
In fact, it was initiated by a nation state attacking another state, and the collateral damage involved major companies. Merck, as one example, lost 15,000 servers that were compromised, bricked, and just totally non-functional in 90 seconds. Now, no human in the enterprise can respond in that timeframe. So we all woke up to the fact that we needed data in real time, or as close to real-time as possible, across a number of dimensions. And we needed not only to be able to analyze that data, but to actually turn that data into some action through automation.
That gave us an ability to take the risk intelligence in close to real time and create specific responses that were automated, allowing the scarce resources for business resilience and third-party risk management across the enterprise to then focus on where the highest risks were at that point in time, recognizing that change. It’s a game changer. So when we break out of our conventional constraints around a one-size-fits-all model driven by annual assessments, we can start to look at multiple data feeds across many dimensions and let them trigger a series of automated events. By connecting the intelligence in real time to the automation of workflow, we actually free up resources to be more thoughtful about where they apply their time and how they implement controls.
Atul Vashistha:
Jim, this is really good. What I’m going to do over the next few minutes with you is pick these threads one at a time. We’ll start with the focus on continuous monitoring and the scope of risk one needs to look at. I’d love your thoughts on that. The second piece is to talk about how you actually use data science to take a continuous model and make sense of it. Then finally, how do you make sure that you can actually automate these risk actions? Like you said, in many cases, like SolarWinds, it’s impossible for humans to act at that speed.
Jim Routh:
Let’s start with the first. There are multiple data elements that we want to absorb, especially because the reality is that the level of interdependence between the enterprise and third, fourth and fifth parties has increased exponentially in the last decade. So we’re much more dependent upon third parties. We have to think about the fact that there’s a community of third-party, fourth-party, and fifth-party providers, and we’re all interdependent. So we have to think in terms of applying information from multiple sources. I don’t think there’s a one-size-fits-all model. I think there are multiple sources across different dimensions. Then we agree that we can respond to the inputs automatically, instead of every single action requiring a human to provide context and then determine what the appropriate action is.
We can create a risk score for each entity, fed by multiple sources, and that risk score is just a numerical representation of an event. By the way, I’m touching on some fundamentals of data science that we all learned whenever we took statistics, either in high school or as an undergrad: you can take any event and put it on an X and Y axis. So we can represent any event numerically. Now, that’s a cornerstone for any operational risk management capability. Because we can represent it numerically, we can take multiple inputs across multiple disciplines and come up with a single risk score for that entity. And because that risk score is a numeric representation of the risk at a point in time, we can establish a threshold.
The threshold could be on a scale of one to 100; say 70 is the threshold. I could say, if I have a score of 70 or above, then I’m going to automatically take a specific action. I’m going to trigger that action when the score crosses 70, and I’m going to automate the workflow to initiate that action. Now, what I’ve done is eliminate the need for a human to digest the information, put context around it, figure out what action to take, and take that action. I’ve eliminated all of that. So now, close to real time, I’m triggering an automated event based on a change to the risk score, and all of that is happening with no human involvement whatsoever. The more I can apply treatment in an automated way, the more it frees up my scarce resources in the enterprise to analyze the entire workflow and decide where to apply the human to establishing context and assessing risk.
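To make the mechanics Jim describes concrete, here is a minimal Python sketch of the pattern: several monitoring feeds are normalized to numbers, blended into a single 0-100 score per entity, and any score at or above a threshold (70 here) triggers an automated workflow with no human in the loop. The feed names, weights, and the open_remediation_ticket action are hypothetical placeholders, not a reference to any particular product or program.

```python
from dataclasses import dataclass

# Hypothetical feed readings for one third party, each already normalized to 0-100.
# In practice these would come from continuous monitoring sources (cyber ratings,
# financial signals, location/geopolitical alerts, responsiveness, etc.).
@dataclass
class FeedReading:
    name: str
    score: float   # 0 = lowest risk, 100 = highest risk
    weight: float  # relative importance of this feed

THRESHOLD = 70.0  # action is triggered when the composite score meets or exceeds this

def composite_score(readings: list[FeedReading]) -> float:
    """Weighted average of all feed readings -> single numeric risk score."""
    total_weight = sum(r.weight for r in readings)
    return sum(r.score * r.weight for r in readings) / total_weight

def open_remediation_ticket(vendor: str, score: float) -> None:
    """Placeholder for the automated workflow step (ticket, email, RFP, etc.)."""
    print(f"[AUTO] {vendor}: composite risk {score:.1f} >= {THRESHOLD}, workflow triggered")

def evaluate(vendor: str, readings: list[FeedReading]) -> None:
    score = composite_score(readings)
    if score >= THRESHOLD:
        open_remediation_ticket(vendor, score)  # no human needed to initiate the response

if __name__ == "__main__":
    evaluate("Acme Offshore Services", [
        FeedReading("cyber_rating", 82, weight=0.4),
        FeedReading("financial_health", 55, weight=0.3),
        FeedReading("location_risk", 74, weight=0.3),
    ])
```

The triggered action could be anything the workflow engine supports; the point is that the human reviews the outcomes rather than every input.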
That’s a game changer, because it means you can increase the breadth of your coverage from an operational risk perspective for more and more third parties without adding critical, highly skilled resources. So you have a whole model that’s far more effective, and it’s a better utilization of resources. That’s the end game. That’s what we want to do. We want to not only increase the scope of third, fourth and fifth parties we look at across multiple dimensions of resilience for the enterprise, but ultimately use our knowledge and our capabilities from a resource perspective to help the community members by providing them some of that information and encouraging them to take the right steps, which improves resilience across the entire enterprise.
Atul Vashistha:
Thank you for that, Jim. Jim, I think it’s important to also reinforce that while this sounds aspirational, it’s actually being done. You’ve put it into place; we’ve put it into place. So it’s important to recognize that. Jim, I want to take a step back for a moment. You talked about the starting point being near-real-time continuous data. I wanted to poll the audience because it’s interesting to me. Even today, when I have conversations with risk leaders, sometimes they’ll tell me that, oh yes, their model is continuous. Then when I probe further, I realize they’re doing a once-a-year assessment and calling that continuous. I think what we mean here is not quarterly or once a year, but near-real-time, always-on live monitoring.
The question I wanted to ask the audience, and I’m going to open up a poll for everybody to respond to, is: do you continuously monitor those third parties? Let’s limit it to third parties at this point in time, not the fourth and fifth or nth parties. Do you continuously monitor those third parties that represent the highest risk for the organization? You’ll have three choices: yes, no, or it’s in your future plans. I’m making the poll live for you at this point. I encourage all of you to please respond. Then I’m going to ask Jim to give us his thoughts on your responses. About half the people have voted. Jim, I’m going to close the poll in about 30 seconds. Jim, as you see the poll, and I’m going to bring it up, what’s your reaction to these results, that 36% say yes, they monitor continuously, but about 64% either don’t, or have a plan but aren’t doing it currently?
Jim Routh:
Yeah. The reality is I’m encouraged. I’m encouraged because I thought the numbers for those that have evolved their programs to do this would be lower. So I’m actually encouraged by these numbers. The fact that 32% have it in their future plans and 36% are doing it today, that’s actually very encouraging. The reality is that in enterprises, and especially larger enterprises, it takes time. It takes time to fundamentally change behaviors and practices. Even if you have outstanding data sources, you’re still connecting those data sources to an understanding of the data science of measuring risk with multiple inputs and then automating the actual actions from that. We’re talking about changing workflows across large enterprises; nothing happens overnight. So I’m actually somewhat encouraged by these numbers. Now, I’m naturally skeptical because I come from cybersecurity. But I’m a little bit optimistic about these numbers, and I feel encouraged by them.
Atul Vashistha:
Wonderful. Jim, you just said something: you have a background in operational risk, but over the last couple of decades you’ve also focused on cybersecurity. Often when we’re having conversations with risk leaders, particularly CSOs, there are a few we hear from who say, “I need to expand my risk aperture. It’s no longer enough for me to just look at cyber.” Talk to us about, first, why CSOs and risk leaders really need to widen their risk aperture. Then I have a couple of follow-up questions on ESG.
Jim Routh:
Yeah. So we talked earlier, Atul, about the drivers of change across enterprises globally. Weather and climate, market volatility, political unrest, the fact that we have such a large percentage of the population globally migrating as a result of weather or political events. We’re going to see that increase. Of course, these are in developing areas, and where are the lowest-cost labor resources that we’re trying to leverage? They’re in developing countries. So there’s a whole number of factors that fundamentally change risk profiles. We have to be able to respond to that in real time. And just like any implementation of new infrastructure, process and workflow, it takes time for an enterprise to digest that.
But over time, what happens is the enterprise can do much more with less, much more with fewer resources, but you need to put the implementation effort in place. That may mean an investment of resources in the short term; you might have to invest in some software to do it. Then go through the engineering of the data science connected to the automation. But the end result will be far greater coverage and a much more responsive capability at a lower cost, simply because the labor cost will be so much lower.
Atul Vashistha:
Jim, very clearly you’re following this whole path from getting more intelligence, and the need for greater intelligence, not just cyber, but really this wider aperture. So let me stay with this topic, and I’m going to jump to data science and automation right after this. We look at the new US administration: January 27th, an executive order focused on climate change. February 24th, if I remember the date right, an executive order on supply chains and the poisoning of supply chains. And then the SEC on March 3rd, with a directive that they’re going to look at ESG disclosures more this year than ever before. So let’s talk about ESG, Jim. You talked about geopolitical risk and weather. Do you see ESG as something that risk management needs to adopt, especially as they look at third-party risk management, or is it something that procurement or somebody else ought to be dealing with?
Jim Routh:
No, this is clearly in the operational risk sphere. Frankly, for third-party governance, most of what we do touches or involves procurement. Some of those automated treatments that I was talking about involve procurement. I’ll give you a specific example. This was implemented at two companies I worked at previously, and it got really good results. So the first thing we did, and we were focused initially on cybersecurity risk, was act whenever a risk score was triggered and increased significantly, and that risk score included something called responsiveness. It was a subjective measure: if we, the enterprise, sent out information about something like a SolarWinds event and were seeking information from our community, those members that shared information back got a better responsiveness score as one of the indicators of risk, and higher responsiveness actually meant lower risk.
So, the ultimate risk score went down when the responsiveness was high. Conversely, those enterprises that never responded got a higher risk score as a result. In some cases, if the risk score rose above a 4.0 on a scale of one to five, we would automate a workflow process through procurement where we would send a generic RFP out to two other suppliers that provided the same type of service or product as the vendor that triggered the risk score. The two other suppliers would respond. The relationship owner could ignore the RFPs or could actually engage the other two suppliers to see if there was a better way of improving the service. All of this was triggered largely based on the responsiveness of the third party.
Now, what happened is, nine times out of 10, word got out that we were going to market looking for new services. The vendor that had the low responsiveness score discovered this, saw that, oh, well, I guess they’re serious about it, and immediately improved their level of responsiveness. All of it was triggered through workflow that didn’t require a human to do anything. It was literally an email that went out with a generic RFP, and nobody had to touch or do anything. It was totally automated. That’s an example of risk-triggered events driving automation that didn’t require any labor and that, over time, influenced sustainable behavior, in this case for the third party. And it’s all based on this notion, and you have it on the screen now, that there are different dimensions and categories of risk.
You can add things like responsiveness, but you can also look at financial risk, people risk, client risk, region risk, geographic risk, and all of that factors into a score at a point in time that’s representative. Of course, if you look at that across the entire portfolio, you can look at the portfolio score, which is an aggregate of all the vendors within that portfolio. And if you decide to divide your portfolio into types of products and services, each segment can have its own risk score, and you can trigger actions based on changes to that risk score. As opposed to asking how many vendors have to recertify their annual risk assessment based on a data center tour that’s absolutely useless today because everybody’s using cloud computing. That’s the beauty of this.
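As a rough illustration of the responsiveness-driven trigger and the portfolio view Jim just described, the sketch below assumes vendor scores on a one-to-five scale, a 4.0 threshold, and a hypothetical trigger_generic_rfp step standing in for the automated procurement email; the categories, vendors, and numbers are invented.

```python
import statistics

# Hypothetical vendor records: composite risk on a 1-5 scale, where low
# responsiveness to enterprise inquiries pushes the score up.
vendors = {
    "analytics": [("VendorA", 2.1), ("VendorB", 4.3)],
    "offshore_bpo": [("VendorC", 3.2), ("VendorD", 2.8)],
}

RFP_THRESHOLD = 4.0  # above this, an automated generic RFP goes to alternative suppliers

def trigger_generic_rfp(vendor: str, category: str) -> None:
    """Placeholder: email a generic RFP to two other suppliers in the same category."""
    print(f"[AUTO] {vendor} ({category}) crossed {RFP_THRESHOLD}; generic RFP sent to 2 alternates")

def portfolio_scores(vendors_by_category: dict) -> None:
    """Report each category's aggregate risk and fire the RFP workflow where warranted."""
    for category, entries in vendors_by_category.items():
        scores = [score for _, score in entries]
        print(f"{category}: mean risk {statistics.mean(scores):.2f}")
        for name, score in entries:
            if score > RFP_THRESHOLD:
                trigger_generic_rfp(name, category)

portfolio_scores(vendors)
```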
Atul Vashistha:
Jim, that was actually a great example, the whole sourcing example you used. Too often, when you start talking to people about risk action automation, their minds go very quickly to patch management and the like, not recognizing that when you think about a wider risk aperture, financial, ESG, geopolitical and others, there are a number of actions in many of these areas that one can actually automate. Jim, here’s something interesting I’d love for you to comment on.
One of the things that we saw companies do when the SolarWinds breach happened, as third parties were disclosing that they had been breached, was take the risk profile of the breached third party and run those risk scores against the entire universe of third parties in the system, to identify who was at similar risk because they happened to have very similar risk scores. What do you think about that approach in terms of leveraging data science?
Jim Routh:
That’s Nirvana. I’ll give you a specific example that is factual. When COVID hit, it was probably mid-March last year, one of the first things I recognized is that our offshore suppliers that were dealing with customer information had, by mandate in their countries, to work from home. Now, financial services as an industry, which typically has mature third-party governance programs on a relative basis, largely due to regulation and the breadth of risk, no one in financial services had ever encouraged a third party to have resources work from home. And the reason is that when a third-party offshore resource is working in a development environment or a call center environment, there are other people around, and those other people can observe behavior and actually enforce behavior.
For example, if I’m a third party and I take out my phone and hold it up to the screen to take a picture, if I’m in a call center, I may not be allowed to have my phone; I have to put it in a locker. Or if my supervisor is sitting across the table, he’s going to say, “Jim, what are you doing?” So there’s an inherent control that’s applied just through socialization. Well, now, if I’m working from home, which I have to do for health reasons, and it’s the right thing to do, I still have access on my screen to sensitive customer information. And now I have a vehicle for taking it off network that’s not monitored, and there’s no control for it. Now look, at the company I was at previously, MassMutual, we told our third parties, “Have people work from home. Absolutely. Put infrastructure in place, including paying for their last mile if that’s what you need to do, but do the right thing from a health perspective.”
Now, while you’re doing that, I need your help, because this is unprecedented. We don’t have any controls for this. So we brainstormed some potential controls together, and we came up with one. Was it great? No, it wasn’t great. Was it better than nothing? Yes, it was better than nothing. What we said was, “Give us a name.” This is the enterprise asking a third party: “Give us the name of every employee that you’ve asked to work from home who has access to this customer information, and give us their IP address for their home network.” We had, I’ll say, five to 600 people in that category, in that segment, across five companies.
We talked to the five companies. Four of the five companies said, “Absolutely, happy to do that. As a matter of fact, we’re going to notify them that we’re doing this so they’re fully aware of it.” And of course, we as the enterprise said, “Great.” That’s a prophylactic control right there, because they’re aware that we’re going to take their information and share it with a third-party intel provider. And if that provider finds dark web activity that’s typically an indicator of fraud, they’re going to notify us, we’re going to notify the supplier, and the supplier is going to notify the individual. Everybody in that chain became aware that this was going to be implemented.
One of the five vendors said, “No way we’re doing that, for privacy reasons.” So we implemented this, it might’ve been six or seven days later, for all five, sorry, all four. And the fifth one, a week later, decided to throw their lot in and said, “We’re willing to do that now, because we see others are following the lead.” Now, the extension of the attack surface that resulted from an event, in this case the pandemic, triggered an increase in risk that was managed within a couple of weeks of implementation because of the community. Everybody recognized that it was a different ball game with a different set of rules.
Well, look, those same vendors today are still having their folks work at home; they’re healthy because they can work at home. They’re enabled, they’re not destitute economically because they can’t work. So far, and I don’t have real-time data, we’ve had no major incidents as a result of it. And that’s a way for a community to come together and deal with a change in the attack surface triggered by the risk.
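A minimal sketch of that work-from-home control, under the assumption that the supplier’s IP list and the intelligence feed are both available as simple data sets; the addresses, names, and notify step are illustrative only, not a real feed or API.

```python
# The supplier shares the home-network IP addresses of staff with access to customer
# data; those IPs are checked against indicators from a threat-intelligence provider,
# and a match kicks off the notification chain (enterprise -> supplier -> individual).

watchlist = {
    "203.0.113.17": ("Supplier A", "analyst_042"),
    "198.51.100.9": ("Supplier B", "dev_117"),
}

intel_hits = {"198.51.100.9", "192.0.2.55"}  # IPs seen in dark-web fraud chatter (hypothetical)

def notify(supplier: str, person_ref: str, ip: str) -> None:
    """Stand-in for the enterprise -> supplier -> individual notification chain."""
    print(f"[ALERT] {ip} flagged by intel feed -> notify {supplier} about {person_ref}")

for ip, (supplier, person_ref) in watchlist.items():
    if ip in intel_hits:
        notify(supplier, person_ref, ip)
```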
Atul Vashistha:
Jim, that’s a great idea of adding another data source, in this case the IP addresses of those work-from-home team members. Jim, I want to dig a little deeper into the conversation we were having regarding automating risk actions. Let’s follow a couple of examples. These three are actual alerts that have happened recently regarding certain third parties. The first one is a financial risk alert about a drop in credit rating. The next one is a geopolitical alert regarding a weather pattern. And the final one, in this case, was regarding a patch management update. So Jim, talk to us in a bit more depth, these are some examples I know you and I have talked about: how does one actually start a program like this? And secondly, how do you scale it to apply across these different risk categories?
Jim Routh:
Yeah. The first place that I would start is looking at the universe of third parties for an enterprise and deciding how to categorize the types of products and services from a risk standpoint into specific categories. In the last two enterprises I’ve worked in, I think the first time I had seven categories, and the second time I had eight. I don’t think there’s a right number of categories. I think what you have to do is group them. So, for example, we talked about offshore service providers with access to sensitive customer information. That’s a category. What does it mean? It means that that category has a specific set of controls based on the risk profile of those in that category.
Now, high-risk commercial software providers. That’s a different category. They look nothing like offshore third-party services. They’re different. They have a different risk profile. There’s a different set of control requirements for a product firm versus a service firm. By the way, one of the categories that you need to think about today, in light of SolarWinds, which was supply chain poisoning, is repository management SaaS services. So you may have a category of SaaS services or SaaS service providers that are used by the enterprise, and maybe you have some way of determining the highest risk relative to the data that’s shared, but somewhere in that domain, if you will, that subcategory, you’ve got to have repo providers: GitHub, GitLab, Bitbucket. These are what developers are using to access open source code components.
They’re joining these communities through a SaaS service that, oh, by the way, is probably not part of your third-party governance program today, but it should be, as one of the lessons learned from the SolarWinds breach. Maybe you want some specific controls in place, like identity and access management controls, that are unique to them. That becomes a category or a domain. So figure out how many domains you want in your universe from a risk standpoint, design the specific controls for each category, and then measure compliance with those controls as one of the factors that goes into the risk score for the vendor, for the individual entity in that category.
Then ultimately, add data, as many sources of near-real-time data as you can, so that you have multiple dimensions of risk all feeding into that risk score for the vendor, which you then use to establish thresholds and trigger action. The basis is really: the universe, sub-categories based on risk, specific control requirements for each category, then the vendor risk score, which is a composite of the categories plus the other real-time sources of data that you have, and then the trigger to automation of workflow that’s not human-dependent. That closes the loop, and then the humans step back, look at the whole thing, and say, where do we need to focus our time?
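Here is one hedged way the "universe, categories, controls, score, trigger" chain could be wired together in code; the category names, control lists, weights, and the 60-point threshold are assumptions for illustration rather than Jim’s actual scheme.

```python
# Sketch of the "universe -> category -> controls -> score -> trigger" chain.
# Categories, control names, and weights are made up for illustration.

CONTROLS_BY_CATEGORY = {
    "offshore_with_customer_data": ["encryption_at_rest", "dlp_monitoring", "background_checks"],
    "saas_repo_provider": ["mfa_enforced", "branch_protection", "signed_commits"],
}

def control_compliance(category: str, passed: set[str]) -> float:
    """Fraction of the category's required controls the vendor currently meets."""
    required = CONTROLS_BY_CATEGORY[category]
    return len(passed & set(required)) / len(required)

def vendor_risk(category: str, passed_controls: set[str], feed_scores: list[float]) -> float:
    """Blend control compliance (converted to risk) with near-real-time feed scores (0-100)."""
    control_risk = (1.0 - control_compliance(category, passed_controls)) * 100
    return 0.5 * control_risk + 0.5 * (sum(feed_scores) / len(feed_scores))

score = vendor_risk("saas_repo_provider", {"mfa_enforced"}, feed_scores=[55, 75])
print(f"vendor risk: {score:.1f}")
if score >= 60:  # hypothetical threshold for this category
    print("[AUTO] enhanced-monitoring workflow triggered")
```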
Atul Vashistha:
Yeah. So Jim, one of the key takeaways for me from the conversation you and I had last week was, when you think about supply chain poisoning, and I actually like that perspective, because when you start thinking like that, it makes you spend time and effort recognizing where in my supply chain, in my third-party delivery, part of that supply chain can be poisoned. Just like the example you talked about: where is the code that your teams are leveraging coming from, and what’s the probability of it potentially being poisoned? So it makes you rethink your attack surface, from location to cyber to financial, because you’re really thinking about how something can be poisoned or compromised.
Jim Routh:
Yeah. On the SolarWinds thing, and look, there’s been a lot written about it, you’ve read about it, you’ve consumed a lot of information about it, I’m just going to make it very, very simple. The threat actor, a sophisticated nation-state-sponsored threat actor, very familiar with working in Ukraine and in Russia, was trained as a criminal and also had development expertise. What they did is they used a password, SolarWinds123. Now, my guess is all of you recognize that’s a simple password, and that password was probably guessed. That’s how the penetration happened in GitHub. But the real skill that was applied, and this comes from the development experience, is that Sunburst, the backdoor malware that was created, was made to look just like all of the other code components in the repo. So it didn’t stand out at all.
It looked exactly like a SolarWinds professional developer had put it together, and then it got packaged through the pipeline for distribution as an update. The stealthy part of this was the skill and expertise of camouflaging the software to look like it came from the development team, and they bought it. They believed it. What I just described sounds very simple. It is, and it’s very possible to do this. Look, there are limitations to passwords. We’ve known about that for a long time. Once you’ve defeated a binary control like that, you’re in, you’re trusted. As long as you act and behave as expected, you can stay in stealth mode, which is what happened here. So the reality is, what do we take away as an enterprise?
Number one, apply better identity and access management controls and password complexity to the DevOps teams that are setting up accounts in GitHub and GitLab. We know how to do that. There’s no mystery there. Second, the SaaS provider that’s doing repo management is a third party; make it part of your third-party governance program. By the way, it wasn’t before. We’d never heard of SolarWinds before. I’d never heard of it before, and my company was using it. Shame on me. They were part of my third-party governance program. Do that today. The third thing, if the first two fail and you’re a victim of supply chain poisoning, which again has a pretty high probability, especially now that criminals are all sharing information about how to do this, is a little bit more innovative, I guess, and there’s some risk associated with it because the technology is new: workload runtime protection.
There are six companies today that offer that capability. There’ll be 16 over the next several years. It’s a thing; it’s a category. It basically uses machine learning algorithms to monitor workloads, and when workloads do things that break a behavioral pattern, again, we’re using data science here, there’s either an alert or a block. You choose: you can set it up to alert or you can set it up to block, and this is runtime protection. So if a backdoor gets installed in your workload, similar to what happened with SolarWinds, whether it’s running in the cloud or on-prem, it doesn’t matter, you run these agents, if you will, pieces of software. I encourage all of you to try out these capabilities and get better at them, basically starting out with alerting and eventually moving to prevent or block mode. This is ultimately how we’re going to have more confidence in our software supply chain.
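The following toy sketch illustrates only the alert-versus-block idea behind workload runtime protection, not any vendor’s product: learn a baseline of behaviors a workload normally exhibits, then flag anything outside it. Real tools use far richer telemetry and machine-learned models; the event names and MODE switch are purely illustrative.

```python
# Start in "alert" mode, move to "block" once you trust the baseline, as Jim suggests.
MODE = "alert"

# Baseline of behaviors the workload is expected to exhibit (hypothetical events).
baseline = {
    ("payments-svc", "outbound_dns", "api.internal"),
    ("payments-svc", "spawn_process", "python3"),
}

def observe(workload: str, action: str, target: str) -> bool:
    """Return True if the observed event is allowed to proceed."""
    event = (workload, action, target)
    if event in baseline:
        return True
    if MODE == "alert":
        print(f"[ALERT] unusual behavior: {event}")
        return True   # allow, but surface for review
    print(f"[BLOCK] unusual behavior: {event}")
    return False      # prevent the action in block mode

observe("payments-svc", "spawn_process", "python3")            # within baseline, allowed
observe("payments-svc", "outbound_dns", "unknown-c2.example")  # novel behavior, flagged
```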
Atul Vashistha:
So Jim, all these activities, some of course involving automation, but we talked about moving to continuous monitoring, expanding the risk aperture, leveraging data science. One thing we recognize is that while risk manager is a hot job, as Bloomberg said a few months ago, the amount of work, the tax being put on these risk professionals, is rising. So one of the polls that we wanted to quickly do is, for all the risk leaders on the call: when you think about your organization, do you have enough people to meet the current operational risk requirements? Yes, no, or you are planning, as Jim is recommending and we are recommending, to use automation to be able to manage that. So to the audience, please answer the question: do you feel you have enough resources to meet the operational risk requirements? I’m going to leave it open for another 10 seconds, and then Jim, I’d love for you to comment on your personal experience compared to what you’re seeing from the audience. So let me end the poll, and Jim, I’m going to share the results.
Jim Routh:
Yeah. So about 20% of people say yes, they have enough resources. And about 12.2% of that 20% are delusional. Now, that’s Jim’s opinion, and there’s no statistical research to back it up. But I guess I’m not surprised by 53% not having enough resources. I never had enough resources, and frankly, one of the hardest things to do, and this is related to a question that Ed, I think, raised earlier, is that you really have to go to the enterprise and all your stakeholders, and you’re the champion of operational risk and third-party governance, I’m making that assumption. You basically have to say, look, we’re going to make some investment. And the reason we’re making some investment is because we’re decreasing the unit cost of our ability to support third-party risk going forward.
Now, don’t say anything about risk. Don’t say anything about the threats or anything like that; that muddies the water. Just very simply, you’re going to provide some investment in terms of software, capital and people, and you’re going to redesign processes using data science. The outcome, the reason you’re doing it, is to get a lower unit cost for your program. And that lower unit cost comes from fewer people. It may take you 18 months, 24 months to do that, but that’s the outcome. That’s what you want to do. You want to actually shrink the amount of operational cost in your third-party governance program at the same time the risks are going up. But I don’t want you to focus on the risks, because this way you’re speaking a language that all your stakeholders understand: invest money today to get a better return that’s sustainable tomorrow. That’s the formula. That’s exactly what you use.
Now, inside of that, you’re building this data science foundation. You’re using some software and data feeds in real time to give you lots of attribute information, and you’re then connecting the outcomes from a risk standpoint to the automation. Once you have that implemented, you can do all of this with fewer resources than you have today. And the fewer resources you have are going to love their jobs a lot more, because they’re going to feel like they’re making a material difference in improving risk management. But I don’t want you to tell the enterprise that that’s why you’re doing this. That’s an outcome. The reason you’re doing this is to lower the unit cost of your program. And the reason I’m encouraging you to do it this way is that it makes it easy to get money in an enterprise. Don’t tell anybody, but it’s easy. You just have to stick to that formula.
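As a back-of-the-envelope version of the unit-cost argument, with entirely hypothetical figures: unit cost is just program cost divided by the number of third parties covered, so automation that expands coverage faster than it adds cost drives the unit cost down.

```python
# Hypothetical figures: manual, questionnaire-driven program vs. data feeds plus automation.
def unit_cost(annual_cost: float, parties_covered: int) -> float:
    """Program cost per third party covered."""
    return annual_cost / parties_covered

before = unit_cost(annual_cost=1_200_000, parties_covered=300)   # manual assessments
after = unit_cost(annual_cost=900_000, parties_covered=1_200)    # continuous feeds + automation

print(f"before: ${before:,.0f} per third party, after: ${after:,.0f} per third party")
```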
Atul Vashistha:
And you’ve got to bring these functions along with you, Jim.
Jim Routh:
Yeah. Frankly, operational risk and third-party governance are one and the same. It doesn’t matter how you’re organized; you’re birds of a feather. And then of course there’s the procurement process and ways of making it more effective and efficient. You’ve got lots of stakeholders that have a vested interest in this outcome, and it benefits the entire enterprise.
Atul Vashistha:
Thank you for that, Jim. Jim, I know we addressed a number of questions from our attendees today. There are many that we may not be able to address today. What I’ll do, Jim, is reach out to you and hopefully get the answers, and we can post them for everybody. But maybe to end the session, Jim, I just wanted to reinforce certain messages you shared, and please feel free to add to them. I think we started with: think about what you can automate as you bring in continuous monitoring across a very comprehensive, wide risk aperture. Then ensure, by using data science and automation mechanisms, that you’re actually reducing the noise so that your risk professionals are truly focused on the integrated view they’re getting, with automation taking care of a significant number of those actions and the risk professionals focused on key escalation activities. Jim, any final words you want to leave the audience with as they take this journey that you’ve just explained and that you’ve done twice before?
Jim Routh:
Yes. There’s a lifecycle here that you have to recognize, and it’s very simple. You’ve seen this across other dimensions of the enterprise, and maybe you’ve seen it in third-party governance as well. The more data you have, especially near-real-time data, the more ideas and thoughts you have about what to do with that data. So be careful and don’t get caught in a trap where you’re consuming data, providing context for that data, and thinking about all the ways to use that data and maybe other data sources you’d like to use. Until you connect the risk driver to the workflow automation, you don’t have a way of realizing the economic outcome that you’re seeking. That part is really important.
So as you find more data across multiple dimensions, which gives you lots of information that you can respond and react to, be sensitive to the fact that you want to connect as much of it as possible through automation of the workflow, because that’s where you’re going to be able to realize the benefits. That’s where you’re going to be able to shrink the unit cost of the program and expand the coverage. If you don’t do that, then you’re just feeding more data into your current resource-scarce team. That third step is the part that’s most essential to realizing the business benefits broadly across the enterprise.
Atul Vashistha:
Jim, you’ve absolutely reinforced what Bloomberg said: “Risk manager is a hot job.” I think the journey that you described enables risk managers to actually deliver significant value to their organizations when it comes to resilience and, most importantly, the protection of the revenue that these companies are aspiring to. To the audience, once again, thank you to Jim. This recording will be available soon on crowisdom.com. And as I said before, we’ll make sure that the questions that were not answered are answered and made available to you on both Jim’s LinkedIn and Supply Wisdom’s LinkedIn. Thank you, everyone, and I look forward to our next session. Thank you, Jim.