Part 4 | Risk Managers at Risk: 7 Things You Need To Do To Save Your Company… And Your Job

Using a Traditional Risk Management Matrix? STOP!
“Everything Should Be Made as Simple as Possible, But Not Simpler.” -Albert Einstein

Using Decision Matrix Risk Assessments (DMRAs) to categorize and prioritize risks is the standard tool most Risk Managers reach for when evaluating risk tradeoffs. And risk matrices have a lot going for them: they are powerful visual tools for conveying positional information, useful for condensing large amounts of data into understandable groupings, and easy to construct once the dimensions and scales are chosen. Easier and cheaper to create than more sophisticated analyses like Failure Mode and Effects Analysis (FMEA) or Monte Carlo simulations, DMRAs are also readily explainable to everyone in the organization.

There is even a standard, common four-step process that nearly everyone has seen at some point in their career: 1) identify the risk factors; 2) rate the risks; 3) combine (typically multiply) the ratings; and 4) rank the risks. Evaluate the DMRA outcome and, depending on your risk tolerance and mitigation effectiveness, create your mitigation plan, work through it in sequence, and then update the matrix.
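To make the scoring step concrete, here is a minimal sketch in Python, using a handful of hypothetical risks and the usual 1–3 ordinal ratings (the risk names and ratings are invented for illustration):

```python
# Minimal sketch of the four-step DMRA scoring process.
# Risk names and 1-3 ratings (1=Low, 2=Medium, 3=High) are hypothetical.
risks = {
    "Vendor outage":      {"frequency": 3, "severity": 1},
    "Data breach":        {"frequency": 1, "severity": 3},
    "Regulatory finding": {"frequency": 2, "severity": 2},
}

# Step 3: combine (multiply) the ratings into a single score.
scores = {name: r["frequency"] * r["severity"] for name, r in risks.items()}

# Step 4: rank the risks, highest score first.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score}")
```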

The published output will look something like this:

Voila! You now have your plan in place and can begin ongoing monitoring and refining. But not so fast! What if everything you know about DMRAs is wrong? Well, maybe not everything, but an awful lot! [1]

Risk matrices are inherently unreliable and should never be used to make decisions on risk plans or to rank priorities in the absence of other data. They are a great data presentation tool, but they should never be a decisioning tool. Here is why:

Low, Medium, High Scales Are Arbitrary

  • In the absence of historical data, the three-way split assumes a sharp distinction between the three categories on each axis. When you convert these labels to numbers and multiply (typically 1 = L, 2 = M, 3 = H), you can create nonsensical or grossly exaggerated distinctions.
    • Silly example: Suppose you decide to create three groups of football players by weight: 0–250 lbs., >250–350 lbs., and >350 lbs. Does the >250–350 lbs. group really have double the effect of the 0–250 lbs. group?
    • More importantly, does a 251 lb. player have double the impact of a 249 lb. player? Or does a 351 lb. player have 50% more impact than a 349 lb. player (a multiplication factor of 3 vs. 2)? That is what the multiplication factor concludes, as the sketch after this list illustrates.
  • For any risk you identify, you are assuming a point value for both frequency and severity. Each of these can vary greatly depending on its standard deviation, which introduces confusion when comparing two risks, especially if frequency and severity are negatively correlated between them. The math is complicated, but the results are clear: it is possible, and often likely, that the relative score between two risks will simply be wrong.
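Here is a quick sketch of the bucketing problem, using the hypothetical weight thresholds from the football example above:

```python
# Sketch of how arbitrary bucket boundaries exaggerate small differences.
# Thresholds mirror the hypothetical 0-250 / >250-350 / >350 lb. groups.
def weight_bucket(lbs):
    if lbs <= 250:
        return 1   # "Low"
    elif lbs <= 350:
        return 2   # "Medium"
    return 3       # "High"

print(weight_bucket(249), weight_bucket(251))  # 1 vs. 2: a 2-lb. gap doubles the score
print(weight_bucket(349), weight_bucket(351))  # 2 vs. 3: a 2-lb. gap adds 50%
```

The same arithmetic applies to frequency and severity ratings: two risks that are nearly identical in reality can land on opposite sides of a boundary and end up with scores that differ by a factor of two or more.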

Time Scales Must Be Consistent

  • Risks can be misclassified unless timeframes and severity are measured uniformly across all of the risks under consideration, and most evaluation processes and scales rarely account for, or consistently check for, this disconnect.
  • Example: A pump leaks 4 oz. of oil every 12 hours and is labeled 3 for Frequency and 1 for Severity, for a total score of 3. A transformer leaks 5 gallons once every four months and is graded 2 for Frequency and 3 for Severity, for a total score of 6. Is the transformer really the higher-rated risk by a factor of two? It is debatable. Over a year, the transformer leaks 15 gallons; over the same period, the pump leaks almost 23 gallons (see the sketch below). The definition of the scales can generate misleading results.
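A quick back-of-the-envelope check, normalizing both leaks to the same one-year window, makes the point (the leak rates are the ones from the example above):

```python
# Annualize both leaks so they are compared over the same timeframe.
US_GALLON_OZ = 128

pump_oz_per_year = 4 * (24 / 12) * 365      # 4 oz. every 12 hours
transformer_gal_per_year = 5 * (12 / 4)     # 5 gallons every 4 months

print(pump_oz_per_year / US_GALLON_OZ)      # ~22.8 gallons/year for the pump
print(transformer_gal_per_year)             # 15 gallons/year for the transformer
```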

Individual Perceptions of Risk Vary Greatly

  • Perceptions of risk vary widely, both from person to person and from business unit to business unit...what’s catastrophic to me may be trivial to you.
    • Risk evaluation and perception are strongly (and even irrationally) influenced by prior experience. Your perception of a risk is greatly exaggerated by your prior history of either being in close proximity to, or experiencing, the negative outcome of that risk.
    • Psychological research strongly suggests that using an odd number of scale points (3 or 5) generates over-emphasis on the middle. Human beings do not like to stick out or go to extremes in their selections, so there is a large bias toward the middle value.
    • By the way, all of these biases apply equally to your assessments of mitigation effectiveness and residual risk. Be careful and skeptical of every assessment you make or receive.

Risk Correlations Need To Be Identified and Quantified

  • Risk matrices appear (a bit deceptively) to rank-order risks on the mathematical product of their individual scaling factors, but they provide no information on risk correlations across the enterprise. Thus, they give an incomplete picture of the best enterprise-wide approaches to mitigation.
    • Where multiple risks are strongly correlated, mitigation efforts aimed at one risk may also be effective against other risks within the organization. In any event, understanding the nature of strong risk correlations is critical to determining the breadth and interrelatedness of risk mitigation efforts.
    • Instances of strong negative correlations between groups of risks should also be identified, and DMRAs are silent on this issue. If groups of risks share strong inverse correlations, they may cancel each other out, or at least require fewer mitigation efforts to keep exposure within the enterprise's risk tolerance (see the sketch after this list). Because DMRAs do not take this into account, resources may be misdirected or wasted on unnecessary mitigation efforts and controls.
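To see why correlation matters for aggregate exposure, here is a rough sketch with made-up numbers: two loss drivers with identical standalone volatility, where only the correlation between them changes how risky the combined position is. No matrix cell captures this.

```python
# Sketch: volatility of two combined loss drivers under different correlations.
# Means, volatilities, and correlations are illustrative, not real data.
import numpy as np

rng = np.random.default_rng(42)

def combined_loss_std(correlation, n=100_000):
    cov = [[1.0, correlation], [correlation, 1.0]]
    losses = rng.multivariate_normal([10.0, 10.0], cov, size=n)
    return losses.sum(axis=1).std()

print(combined_loss_std(+0.8))   # strongly positive: ~1.9x a single risk's volatility
print(combined_loss_std(0.0))    # independent:       ~1.4x
print(combined_loss_std(-0.8))   # strongly negative: ~0.6x; the risks largely offset
```

Two risks that score identically on a matrix can require very different amounts of mitigation depending on how they move together.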

Additional Limitations on DMRA Analysis

  • There are limitations and restrictions that you should always keep in mind when you use risk matrices. Although the mathematics behind them is complicated (and beyond the scope of this article) [2], the conclusions are as follows:
    • The traditional coloring scheme most often used in decision matrices is simply wrong. Although the explanation is technical, it is not mathematically appropriate to combine multiple cells into High or Low categories. Red can only be (H, H) and Green can only be (L, L); all other cells must be categorized as Medium. This typically leaves the initial risk grouping with a bulge in the middle, a reverse-barbell type of distribution.
    • Within these three groups, it is not possible to make any rank ordering or prioritization on the relative importance of the risks without additional information.
    • Your new starting point should be the following:
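As a minimal sketch, assuming a simple three-by-three frequency/severity grid, the corrected coloring rule looks like this:

```python
# Corrected coloring: only (High, High) is red, only (Low, Low) is green,
# and every other cell is treated as Medium.
LABELS = ["L", "M", "H"]

def cell_color(frequency, severity):
    if frequency == "H" and severity == "H":
        return "red"
    if frequency == "L" and severity == "L":
        return "green"
    return "yellow"   # Medium

for f in LABELS:
    for s in LABELS:
        print(f"({f}, {s}) -> {cell_color(f, s)}")
```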

Where Does This Leave Risk Managers?

With a renewed opportunity to engage their peers in thoughtful conversation and analysis.  In the next article, we’ll describe suggested approaches based on these findings.

Sources:

[1] Krisper, Michael, “Problems with Risk Matrices Using Ordinal Scales.”

[2] Eight to Late, “Cox’s risk matrix theorem and its implications for project risk management.”
