CRO Wisdom Episode 22: Matt Moog, General Manager – TPRM, OneTrust – Part 2

Atul Vashistha:

As you think about the environment today, and particularly from a third-party risk management perspective, what would be two or three things you’ll absolutely want to make sure risk leaders are paying attention to?

Matt Moog:

Well, first and foremost, I talked about this this morning. There’s a big difference between risk and compliance. And I feel that if you’re in a risk role and you’re running a compliance function, you’re missing an opportunity to provide value back to your customers. And customers are typically either third parties, or they could be core customers, or they could be business owners of a certain relationship. Third-party risk functions that operate more like compliance talk about things like, I have X number of assessments to get through, I’m looking at cycle times and operational metrics, and I’ve got to get it done. And then once the assessment’s done, you have the issues, and the issues have to be managed, and you’re running that methodical operational function of, if I’m safe with a regulator and I’m safe with internal audit and I’ve done my things, we’re done.

But that’s not effectively managing risk, it’s managing activity. And I think first and foremost it’s about understanding, again, getting uncomfortable and saying, here’s where our appetite exists. It requires you to translate things that are not naturally numerical into numerical models. How do you say cyber risk runs from zero to a hundred? What are the characteristics of a hundred? What are the characteristics of something that might be a 75? Is that 75 a medium risk? Is it still a high risk? Is it on the low end of the spectrum? And where does it sit? If that risk, even with the issues involved, stays within a certain appetite, why should we be messing around with the issues? If they’re acceptable, accept them. We find far too often that there are issues requiring excessive levels of management to report on, approve, and work through when, in the grand scheme of things, if it’s within appetite, you should remove the noise.
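
The idea of mapping qualitative risk onto a numerical scale and filtering out issues that sit within appetite can be sketched in a few lines. This is a minimal, hypothetical illustration: the severity-to-score mapping, the mitigation discount, and the appetite threshold are all invented for the example, not an actual model.

```python
# Hypothetical sketch: map qualitative findings onto a 0-100 scale
# and suppress issues that already sit within a defined risk appetite.
# All scores, discounts, and thresholds here are illustrative assumptions.

SEVERITY_SCORES = {"low": 25, "medium": 50, "high": 75, "critical": 100}

def residual_score(severity: str, mitigations: int) -> int:
    """Score an issue, discounting it for each compensating control."""
    base = SEVERITY_SCORES[severity]
    return max(0, base - 10 * mitigations)

def issues_above_appetite(issues, appetite: int = 50):
    """Keep only issues whose residual score exceeds appetite; the rest are noise."""
    return [i for i in issues
            if residual_score(i["severity"], i["mitigations"]) > appetite]

issues = [
    {"id": "ISS-1", "severity": "high", "mitigations": 0},      # residual 75
    {"id": "ISS-2", "severity": "medium", "mitigations": 1},    # residual 40
    {"id": "ISS-3", "severity": "critical", "mitigations": 2},  # residual 80
]
print([i["id"] for i in issues_above_appetite(issues)])  # ['ISS-1', 'ISS-3']
```

The point of the sketch is the filter itself: only issues above appetite are escalated, which is the "remove the noise" discipline described above.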

I think organizations that can get more comfortable with using data will certainly be better able to set appetites by normalizing and quantifying risk. And again, that’s hard. It’s not market risk, it’s not liquidity risk, it’s not credit risk. It’s cyber, it’s resiliency, it’s an art form in many cases, and there are no standardized models like there are with credit risk.

So, you must build those models yourself. I think that’s first and foremost. I think the second is looking at enterprise-critical relationships, making sure they have differentiation within the organization, and making sure that you follow the data, especially customer data, because there are additional privacy requirements and things like that. But make sure you follow the data and follow the dependency. In New York, maybe 15 years ago, there used to be this commercial that would come on and say, it’s 10 o’clock, do you know where your kids are?

I’m used to putting my kids to bed at eight o’clock, so I hope I know where they are at 10. But I felt the same thing about data. It’s 10 o’clock, do you know where your data is? And I know at most organizations, if you went to a Chief Operating Officer or Chief Risk Officer and asked, do you know where all your data is? That would be an uncomfortable question, because the answer is, I think so, I think we have the right things. I remember about 10 years ago taking an organization through a number of activities to follow data as far as it would go. Third parties, fourth parties, fifth parties, in some cases sixth parties. I’ve yet to see a seventh party, but feel free to prove me wrong. We traced the data as far as we could go, and we learned two interesting things. We learned that about half of the fourth, fifth, and sixth parties that we saw in the ecosystem were already third parties to the organization.
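
Tracing data as far as it goes amounts to walking a graph of who shares data onward to whom, then checking which downstream (fourth, fifth, sixth) parties are already direct third parties. A minimal sketch, with an entirely made-up sharing graph:

```python
# Hypothetical sketch: trace data flows through an nth-party graph and
# find downstream parties that are also already direct third parties.
# The graph below is illustrative, not real vendor data.

from collections import deque

shares_data_with = {
    "us": ["VendorA", "VendorB"],
    "VendorA": ["SubProcX"],
    "VendorB": ["SubProcX", "SubProcY"],
    "SubProcY": ["VendorA"],  # a 4th party that is also a direct 3rd party
}

def reachable(start):
    """Breadth-first walk: every party the data reaches downstream of `start`."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in shares_data_with.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

direct = set(shares_data_with["us"])                      # our 3rd parties
indirect = set().union(*(reachable(v) for v in direct))   # 4th/5th/... parties
print(sorted(indirect))           # ['SubProcX', 'SubProcY', 'VendorA']
print(sorted(indirect & direct))  # ['VendorA'] -- already a direct 3rd party
```

The intersection at the end is the finding from the anecdote: downstream parties that turn out to already be third parties to the organization.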

We also saw that about 5% of those relationships were using that data in ways they were not contractually permitted to. They were trying to normalize it and use it for marketing or for other types of analysis so they could be better positioned in the market. So really make sure that organizations know how they can treat your data, especially customer data. I’d say the third thing is, lean in to learn from events that are happening now. COVID, the conflict in Ukraine, just the geopolitical environment in general. Risk is not just cyber. Risk is cyber, it’s resiliency, it’s compliance and ethics, it’s geopolitical, it’s location. The number of organizations I see that have no idea how they manage any degree of location risk is just stunning. And I know you and I have had this conversation at length before, but those risk factors matter.

I think once you get your arms around each of those individual pieces, you have to dive back down into the layers and look at those things, because there were a lot of people, when that ship was stuck in the Suez Canal, who had significant supply chain issues. During COVID, when we shut everything down, people were not able to go into offices or call centers. In some cases there were no fail-safes so people could work remotely, and when the government was restricting any ability to go into an office, you had immediate disruption.

There was one financial services organization that was only able to handle 30% of their inbound calls. There was another one that realized, when they started to inventory all those points of failure, that they had 108 different call centers and had no idea it had gotten that complex, and they’ve since started to consolidate that back down. But all of these stress factors on an environment provide an opportunity for us to learn and do something different. If we’re not embracing that chaos and change so that we can be more resilient going forward, then we’re missing an opportunity.

Atul Vashistha:

So Matt, you made many really important points about how you think about third-party risk management and how you enhance that practice. I want to tease out a couple.

Matt Moog:

Sure.

Atul Vashistha:

One, you made a really important point. It’s really the reason why I founded Supply Wisdom, which is, hey, you have to think full spectrum. You can’t just be fixated on financial risk and cyber risk. And I actually want you to comment when I finish the second one. So that’s one, and I think it’s a really important point that we should reinforce more. Second, you talked about what I would call data integrity. And what I mean by that is, I’m constantly surprised by how many customers have recently come to us asking us to help them understand their concentration risk, when the first question we ask is, well, give us a target profile that shows what locations you’re getting your services from, and we can put it together for you. So, talk to me about those two and give advice for companies around those two points.

Matt Moog:

Well, concentration risk, first of all, is one of those things where you crawl before you start to walk and run. It’s a great thing to say, we’re going to focus on it, but if you don’t have a good relationship with your business to understand where concentration risk is actually a risk, then you’re just chasing a red herring to an extent. I think concentration risk is complex because there are different facets to it. You could have a third party that is a concentration risk to you directly, you could be a concentration risk to them, and it could go the other way.

It could be a risk that is broader in general, or it could be a risk that’s specific to an individual business, and you need to manage the appetites specifically for that business. So, on the concentration risk topic, if you really peel it apart, and I can’t remember who wrote a paper on it, it might have been ISACA or someone else a couple of years ago, but I think they defined five different aspects of what concentration risk actually was, and after they defined it, the question was, what are you looking to accomplish?

Atul Vashistha:

Yeah.

Matt Moog:

Is it de-risking resiliency? Is it de-risking dependency? Concentration risk in a financial services organization versus a consumer product manufacturer versus an energy organization may be looked at completely differently. We all know that in manufacturing, there are a lot of redundancies baked into the supply chain. They might have four vendors providing mint, and that’s just what they have naturally. Maybe with better risk management, could it be two? Who knows? But look back and ask, what are you trying to accomplish when you attack concentration risk, and what are the measures of success relative to that? Because risk managers are going to try to push businesses and say, well, you can’t use that third party because we already use them way too much. But without offering an alternative, it makes it really difficult to have that collaborative back and forth with the business, because they’re looking to drive revenue and meet customer expectations, and they’re not looking at concentration risk as a primary principle of their decision making.
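
One basic facet of concentration risk, how many business units depend on the same vendor, can be measured directly from an engagement inventory. A small illustrative sketch (the engagements and the threshold are invented for the example):

```python
# Illustrative sketch: flag vendors whose usage across business units
# exceeds a hypothetical concentration threshold. Data is made up.

from collections import Counter

engagements = [
    ("Payments", "VendorA"), ("Lending", "VendorA"), ("Cards", "VendorA"),
    ("Payments", "VendorB"), ("Lending", "VendorC"),
]

def concentration_flags(engagements, max_business_units=2):
    """Return vendors used by more business units than the appetite allows."""
    usage = Counter(vendor for _, vendor in engagements)
    return {v: n for v, n in usage.items() if n > max_business_units}

print(concentration_flags(engagements))  # {'VendorA': 3}
```

This only covers the "you depend on them" direction; as noted above, the reverse dependency and business-specific appetites would need their own measures.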

Atul Vashistha:

Yeah, I think the other thing you highlighted, Matt, is that you’ve got to clearly understand your third party and what they’re doing for you. You used a great example: they may have access to data in ways that weren’t imagined in the first contract, and what’s happening two years later is totally different. So, the other aspect we’d love for you to talk about is the way the industry has often relied on these point-in-time risk assessments. Let’s talk about how that practice needs to change and the benefit that can bring.

Matt Moog:

Well, the unfortunate truth is that I don’t think there’s a single outage, breach, incident, or issue that’s been caught while actually conducting the assessment at that point in time. It’s always: something happens, and then we look back and go, well, who did the assessment? And then it’s, oh, well, there were these eight issues that no one figured out how to fix, and one of them was the underlying reason for that. It’s super easy to play Monday morning quarterback, super easy to go back and point at it and say, why didn’t we do this? The challenge is more, how do we get a more active and open dialogue with those third parties, and how do we make it meaningful in real time? Because just sending the same questionnaire with the same answers and the same activity over and over again, expecting different results, that’s the definition of insanity.

I’m not discounting questionnaires; there’s a time and a place, certainly. In a new relationship, or maybe once every three years, or maybe you’re just forced to do it for those enterprise-critical relationships because that’s just the right thing to do, I get it. But make sure you’re leaning into data, and obviously Supply Wisdom has a wealth of it, and using it to identify where changes occur and where discussions need to be had. I would much rather have a relationship with a customer where four times within the year I see degradation in some of the risk data, pull up, have the discussion, and nothing actually turns out to be an issue, than not have that data at all and be blind to it, so that when an issue occurs I could have looked back at the data and said, I saw stress here, that was a primary factor, I should have had the discussion. If nothing else, it makes you a lot more resilient in reacting, because you’ve developed a rapport, you’ve developed a relationship.
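
The "see degradation in the risk data and pull up for a discussion" loop can be sketched as a simple comparison of successive readings. Everything here, the scale (higher score = higher risk), the tolerance, and the sample scores, is an illustrative assumption:

```python
# Hedged sketch of continuous monitoring: compare successive risk-score
# readings and raise a flag whenever the score worsens past a tolerance,
# prompting a conversation rather than waiting for the next assessment.
# Scale, tolerance, and data are illustrative assumptions.

def degradation_alerts(readings, tolerance=5):
    """Return (index, previous, current) wherever the score degrades past tolerance.
    Higher score = higher risk on this illustrative scale."""
    alerts = []
    for i in range(1, len(readings)):
        if readings[i] - readings[i - 1] > tolerance:
            alerts.append((i, readings[i - 1], readings[i]))
    return alerts

monthly_scores = [30, 32, 31, 45, 44, 60]   # e.g. a stress index over six months
print(degradation_alerts(monthly_scores))   # [(3, 31, 45), (5, 44, 60)]
```

Each alert is a trigger for a conversation, and by design some will be false positives; as discussed below, that trade-off is acceptable when the alternative is being blind to the data.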

When I talk about resiliency, Mike Tyson famously said everyone has a plan until they get punched in the face, or punched in the mouth, however you want to say the quote. But resiliency is a muscle, it’s not just a thing. It’s not, we have controls and therefore we’re resilient. You have to be resilient; you have to act resilient. And doing that requires you to have those discussions with those third parties. If the only time you talk to third parties, and I mean you as an entire organization, not just the risk professionals, is during the assessment, you’re doing something wrong. They’re delivering services, they’re part of your ecosystem. At a bare minimum, every six months you should be pulling up and saying, how’s the health of the relationship? Anything I should be aware of that we need to discuss? Or, oh, by the way, we saw these things occur and I think they’re worth digging into.

Atul Vashistha:

Matt, I think of this the way the discipline of people management and talent management changed from one-time annual reviews to ongoing reviews. You think about how everything in business is moving to real-time and continuous.

Matt Moog:

Well, you’re touching on my favorite comparison. We run third-party risk as if we fire every employee at the end of the year and then rehire them in January. That’s not how the world works; that’s not how business works. And we also don’t run performance reviews the way we run interviews. Why? Because you have data that shows how they performed. When you’re running interviews, which are basically due diligence for people, you’re looking to understand whether there are risks present that would prevent me from entering into this relationship. And then the ongoing monitoring is, is this relationship panning out the way I thought it would? And are there any other continual risk factors entering the relationship? Any risk data you pick up relative to a location issue or a geopolitical issue is very similar to seeing a performance issue with an employee. They’re very similar examples. So, you’re spot on about the ability to get access to data to see things in real time, and be okay with some of the false positives, that’s fine.

Atul Vashistha:

Hey, we have to recognize that when we get massive amounts of data, there is going to be some level of false positives, but it’s better than what it used to be.

Matt Moog:

Yeah.

Atul Vashistha:

So, Matt, one of the areas I wanted you to talk about is AI, particularly how should third-party risk leaders think about the applications, and the partners that they’re using that bring AI as part of their service solution.

Matt Moog:

So, a massively interesting question, because think about the acceleration of AI that’s happened in the last, what, three months?

Atul Vashistha:

Yeah.

Matt Moog:

It’s significant. The acceleration of potential regulatory requirements around the safe use of AI is significant. We look at a few different AI models, because there are basically two or three that are ruling the roost right now. And one of them, from OpenAI, used to be freely available to anyone, and Microsoft pretty much has control over that as of right now. But you start to look at the AI models, and people start to play with them and say, oh, they can write my research paper for me, or they can write a poem, or we have AI now that can create pictures that look almost real. They can do video, they can imitate someone’s voice, so that, who knows, maybe what we’re doing right now is fully AI?

The use of AI and these models, I think we’re going to see something that’s probably 10 times the scrutiny we saw in financial services around model risk: the use, the security of the data, the assumptions, the training, and the underlying activity. There are two main buckets of concern for me. One is just the effective use of AI models that can be reliable in general, because as the models learn, you’re going to start to see where they deviate from that safe level of activity. When you look at it from a broader perspective, the biggest challenge I think is AI that has the ability to almost run itself. So, we talk about risk. I know Elon Musk was giving some interviews recently across a number of different channels, talking about imagining an AI being able to change administrative passwords and run itself.

We saw some really interesting things with ChatGPT, where people were manipulating it with certain types of prompts. You put this stuff into the broader marketplace and people are going to try to break it; that’s just the inherent nature of what people do. So, I think there’s massive uplift, and I think there’s some significant concern from a risk standpoint. If it’s used in the right ways, it takes tedious, non-value activities out of the equation and positions people to focus on value. I really hope it doesn’t dumb down people, because there’s this aspect of deductive reasoning.

You spoke earlier about the layers, right? Being able to assess and analyze a situation and come out with your independent thoughts on why it occurred, how it occurred, how to solve it, and how to prevent it from happening in the future. That’s core to risk management capability, and if AI is going to remove that early-stage understanding of the analysis, it puts us in a pretty scary situation a decade or two from now, where that capability is just never learned. I mean, how many people here know how to build an iPhone?

Atul Vashistha:

Yeah, yeah.

Matt Moog:

Not many people. We all love the benefits of it. I just hope that’s not the case with AI, that some of those basic natural capabilities that keep us honest in debate and in that gray area of risk management don’t go away.

Atul Vashistha:

So, Matt, I think we’ve reinforced certain messages very clearly. We started with, hey, if you’re looking at great third-party risk management, and particularly resilience, I think that’s the other thing that stands out. We no longer just talk about risk management; it’s risk management and resilience, radical transparency. That’s number one. Number two, we talked about the benefits of information intelligence that’s continuous and real-time. The third piece, Matt, that I do want you to comment on is, all of this is not possible without great systems, and the ability to leverage those systems to automate actions and more. Talk to us about what platforms like OneTrust are able to do for customers to take advantage of the other things we just talked about.

Matt Moog:

Yeah, yeah. No, no, that’s a great question. I know people in tech don’t like it when I say this, but technology doesn’t solve problems by itself, right?

Atul Vashistha:

Yeah.

Matt Moog:

Badly configured technology can actually accelerate and exacerbate problems. Think of any organization where there’s a system you were using and you’re like, wow, this is really difficult, it used to be really easy. So, there’s a sense of caution there. But I think it’s about truly understanding what you’re trying to accomplish from a use case perspective, understanding cycle time reduction, being able to do something in two clicks instead of 17, and being able to leverage models or algorithms or automation to make things a bit more user-friendly. We look at technology as making sure it’s enabling the user experience. I came from a background where not only did I advise on and build third-party risk programs for some of the largest organizations on the planet, I also ran them.

So, when we ran them, if our technology didn’t work for us, we’d rebuild it. And we’ve certainly rebuilt existing technology and partner technology that we used. I think that experience, coming into OneTrust, gives me the ability to ask, does this system actually enable people to make their day-to-day jobs more efficient? Do they have the visibility into risks that they need? Do they understand the expectations from a workflow and a queuing perspective? Is it hard to get to things? Is it easy to get to things? We talk about a lot of use cases in tech, and I know in tech demos you can put a use case down and we’ll figure out how to make it work. But my question’s always, should you be doing that? Are you trying to do that because that’s the way it’s always been done?

Are you trying to do that because there’s a requirement you have to meet? In some cases, there’s a history with internal audit where they say, you know what? No one likes this, but we have to do it like this. The other thing I would say is that tech evolves constantly, and for any build and deployment you did, say, two or three years ago, there are capabilities within those systems that are far superior to what they were three years ago, and sometimes it may require you to take steps backward to rebuild in order to go forward. Just the recognition of that, I think, is very interesting, because I talk to a lot of clients on a daily basis, and I’m less concerned about OneTrust and our capabilities. I’m more concerned about whether our technology is meeting your needs, solving your problems, and addressing your use cases. Is it creating pain in the process?

Is it creating efficiency in the process? How do you measure that? What does success look like? Because I certainly don’t want to be in a situation where we sold the world to someone, and six months later we come back and they say, cycle times are double what they were, we don’t really know what we’re doing, and we’re having challenges and problems. We want to make sure it’s a solution. So, technology enables process, and sometimes the underlying work is making sure the process is sound and fit for purpose before enabling it with technology, as opposed to taking a broken process and accelerating that broken process with technology.

Atul Vashistha:

Matt, I also like to always remind people, great companies like ours don’t want clients just to be on the receiving end. We want you to engage with us and participate with us, because that’s where our next set of innovations and our next set of improvements come from. That’s how buyers’ needs get met.

Matt Moog:

And we get feedback from clients all the time. I wish I could treat every single piece of feedback (we get thousands of them) with the same level of scrutiny, but just like in risk management, we have to pick and choose. If I have 50 customers coming back and saying this certain process needs to be fixed, I’m probably going to fix it, as opposed to two people coming back and saying, I need this to be fixed. I think also, to your point, it’s about engaging people in very open dialogue and saying, you know what? Put everything on the table, the good, the bad, the ugly, whatever it is, and we’ll work through it and have those open discussions. Because if we’re not willing to see some of the challenges our clients are facing, then they’re never going to get solved. I always tell the people who work for me, if I don’t know the challenges you’re dealing with, I can’t help you solve them. The same holds true for clients as well.

Atul Vashistha:

Let’s switch to jobs and careers and risk.

Speakers

Matt Moog


General Manager - TPRM

OneTrust

Matthew Moog serves as the General Manager, Third-Party Risk at OneTrust, the category-defining enterprise platform to operationalize trust. In his role, Matthew advises companies throughout their third-party risk management implementations to help meet requirements relating to relevant standards, frameworks, and laws. Prior to joining OneTrust, Matthew spent 18 years at EY, where he led their Global Third-Party Risk offering for Financial Services and their Third-Party Risk Managed Service offering for the Americas. Moog is a CISA and a CIPM, and has a BS in Management Information Systems from Rensselaer Polytechnic Institute in Troy, NY.

Atul Vashistha


Chairman & CEO

Supply Wisdom

Atul is the Founder of Supply Wisdom & Neo Group, and is also the visionary behind the GBSBoard and RiskBoard. For more than 21 years, Atul and his teams have worked with nations and corporations to leverage global talent, big data, automation and other technology mega-trends to accelerate new capabilities, increase resiliency, mitigate risks and enable better corporate and societal outcomes. Atul Vashistha currently serves on the boards of Shared Assessments and IAOP. Atul had the distinguished honor of serving on the US DoD Business Board for over 12 years, including as former Vice Chairman from 2018-20.
