Making risk management work (3): The framework

This post is part of the Content Is The Web risk management series.

Update, 13 Sept 2014: I finally got around to adding in the five steps a risk goes through.

Risk management replaces your old sign-off process. As part 2 explained, it changes what you ask as you work through content with other people. Once you have a big pile of information from these risk reporters, this post explains how to sort through it all. The next post introduces some of the tools you’ll use.

The risk management framework makes the entire process as objective as it can be. It rates each risk’s likelihood and consequence on separate scales, then produces a severity measurement. This determines how acceptable the risk is (or isn’t), and shows you what risks are most important.

The short version:

This needs managerial buy-in, so work with higher-ups. Classify risk consequences, then set objective grades for each type of consequence, and for likelihood. Put those grades on a grid, overlay severity ratings, then track each risk through five stages from ‘reported’ to ‘accepted’. Hey presto, you have a risk management framework.

This needs managerial buy-in, so work with higher-ups

This framework is the key to consistent, objective risk management. It needs to be trustworthy and taken seriously. It essentially sets rules for your workplace, so you need senior-level people to agree with it. That means getting the right bosses to approve the framework in the first place.

The “right boss” here is the person senior enough to balance the risk/reward scale on behalf of your company. Every risk carries a reward, remember, and this framework ultimately decides which risks are worth taking.

Classify risk consequences…

Every risk has at least one consequence, but they’re not directly comparable. In part 2 I used examples of risks to a company’s brand and to customer satisfaction, which can’t be compared directly. Other risks might have consequences that are financial (they cost money), reputational (they make you look bad), legal (they get you in trouble with regulators or the courts), or competitive (they make it harder for you to be the best in your market).

There’s no canonical list, but aim for between five and ten categories, each narrow enough to group only similar things. Remember that the category pool needs to be wide enough that every risk fits somewhere.

After you’ve held a few risk conversations, you’ll start to see which categories will work for your organisation.

…then set objective grades for each type of consequence…

The reason you need groups of comparable consequences is, obviously, so you can compare them. For your framework you need a small scale that you can sort consequences into. I’m going to use a simple example with only 3 steps on my scale. That might be as much as you need, too. Bigger definitely isn’t better: I have yet to see any value in even a 5-step scale.

My three steps for consequences are:

  1. Barely noticeable
  2. Bad
  3. Catastrophic

Now to define those three terms, as objectively as possible. For each risk category, you’re labelling the three-step scale with measurable amounts.

Some scales are obvious. For financial risks it’s about the dollar amount (if you have different levels of managers with different spending approvals, their limits can make useful cut-off points). A $1,000 consequence might be ‘barely noticeable’ to your company, anything up to $10,000 might be ‘bad’, and any more would count as ‘catastrophic’.

Others need a bit more thought. Some good scales I’ve seen were based on:

  • Who’d be involved. Physical risks, if they play out, might require self-administered first aid (like a sticking plaster) when they’re ‘barely noticeable’, a doctor if they’re ‘bad’, or a visit from the coroner if they’re ‘catastrophic’.
  • The effort to put things right. Legal issues, for example, might require a bit of in-house time writing a letter, serious hours negotiating a settlement, or weeks spent on court appearances.
  • Spread. One of my favourites was a scale for reputational damage, based on the level of media coverage the event would receive. The scale went from “grumpy email from the boss” (barely noticeable) through “national media coverage” (bad) to “boss appearing on international television” (catastrophic).

What’s important is that the definitions make sense to your organisation. For every risk category, line up fitting definitions at each step of your scale.

…and set equally objective grades for likelihood.

This is easier than the consequence scale, because there’s only one version of it. Likelihoods involve people: how many people is this risk likely to affect?

Sticking with a three-step scale, let’s say that risks might affect:

  1. Hardly anyone
  2. Only some people
  3. Almost everybody

Absolute numbers (“10,000 people is only some people”) aren’t the way to go, though. Different content serves audiences of different sizes. For some content, 10,000 people would be everyone who sees it. For other content, 10,000 views are what happens in the first hour. So, think in percentages. For example:

  • Hardly anyone: Under 0.1%
  • Only some people: Up to 10%
  • Almost everybody: More than 10%

By using proportions, you can measure a piece of content’s risks consistently, no matter how popular it is.
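Under the example cut-offs above, grading a likelihood is a simple proportion check. A minimal sketch (the function name and thresholds are illustrative, taken from the example scale):

```python
def likelihood_grade(affected: int, audience: int) -> str:
    """Grade likelihood by the proportion of the audience affected.

    Cut-offs follow the example scale: under 0.1% is 'hardly anyone',
    up to 10% is 'only some people', and anything more is 'almost everybody'.
    """
    proportion = affected / audience
    if proportion < 0.001:
        return "hardly anyone"
    if proportion <= 0.10:
        return "only some people"
    return "almost everybody"
```

So a risk affecting 50 people out of a 100,000-strong audience grades as ‘hardly anyone’, while the same 50 people out of an audience of 200 grades as ‘almost everybody’.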

Put those grades on a grid, and overlay severity ratings

This is where the term “framework” becomes slightly more literal. Draw up a table with your likelihood grades on one axis and consequence grades on the other.

|                   | Hardly anyone | Only some people | Almost everybody |
| Catastrophic      |               |                  |                  |
| Bad               |               |                  |                  |
| Barely noticeable |               |                  |                  |

This table is going to determine each risk’s severity. There are four measurements of risk severity, which tell you what to do with the risk next:

  1. Mitigate: The risk is unacceptable – it’s too likely to occur, or its consequence is too major, or both. Something needs to change.
  2. Revisit: While not as bad as above, the risk needs to be addressed. Usually only one aspect (likelihood or consequence) will be a concern. If there’s no way to reduce the risk without lowering content quality, it might be a risk worth taking.
  3. Accept: The risk is acknowledged, but so unlikely and/or inconsequential that it’s not worth addressing further.
  4. (Not a risk): This looked or sounded like a risk, but once rated turned out to not matter at all.

These severities will spread through the table from top-right to bottom-left. Remember to have a senior manager on hand to approve the layout here – it determines what’s going to get your time and attention, and what’s going to be published.

A typical example would be:

|                   | Hardly anyone | Only some people | Almost everybody |
| Catastrophic      | Revisit       | Mitigate         | Mitigate         |
| Bad               | Acceptable    | Revisit          | Mitigate         |
| Barely noticeable | (Not a risk)  | Acceptable       | Revisit          |
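Once the grid is approved, looking up a risk’s severity is mechanical. A minimal sketch of that lookup, mirroring the example grid above (the rating names are illustrative):

```python
# Severity grid keyed by (consequence, likelihood), mirroring the example table.
SEVERITY = {
    ("catastrophic", "hardly anyone"): "revisit",
    ("catastrophic", "only some people"): "mitigate",
    ("catastrophic", "almost everybody"): "mitigate",
    ("bad", "hardly anyone"): "accept",
    ("bad", "only some people"): "revisit",
    ("bad", "almost everybody"): "mitigate",
    ("barely noticeable", "hardly anyone"): "not a risk",
    ("barely noticeable", "only some people"): "accept",
    ("barely noticeable", "almost everybody"): "revisit",
}

def severity(consequence: str, likelihood: str) -> str:
    """Look up a risk's severity from its two ratings."""
    return SEVERITY[(consequence, likelihood)]
```

The point of encoding the grid like this (in code or just on paper) is that two people rating the same risk always get the same answer: the judgment calls all happen when the grid is designed and approved, not risk by risk.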

There are five stages a risk goes through

As you work on risk mitigation, you’ll reduce the likelihood and consequence ratings and shift risks towards the bottom-left corner of your grid.

As you work on risks they go through five stages:

  1. Report. A risk reporter tells you about the risk.
  2. Rate. Measure likelihood and consequence, and place the risk in the severity table. This tells you whether to accept or mitigate the risk.
  3. Mitigate. Do something to reduce the likelihood and/or consequence of the risk.
  4. Re-rate. Now that someone’s done something to mitigate the risk, there’s a new likelihood and/or consequence (and therefore a new severity) to confirm. Again, you’ll see whether to accept the risk or not.
  5. Accepted. The risk owner is happy to take the risk.

Report, rate, then either accept or mitigate.
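The five stages above form a small state machine: mitigation and re-rating loop until the risk is acceptable. A sketch of the allowed transitions, with stage names assumed from the list above:

```python
# Allowed transitions between the five stages (illustrative).
# Mitigate and re-rate can loop until the risk owner accepts the risk.
TRANSITIONS = {
    "report": {"rate"},
    "rate": {"accepted", "mitigate"},
    "mitigate": {"re-rate"},
    "re-rate": {"accepted", "mitigate"},
    "accepted": set(),  # terminal stage
}

def can_move(stage: str, next_stage: str) -> bool:
    """Check whether a risk may move directly from one stage to another."""
    return next_stage in TRANSITIONS.get(stage, set())
```

Note that a risk can never jump straight from ‘report’ to ‘accepted’: it always gets rated first, which is what keeps the process objective.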

Hey presto, you have a risk management framework

That’s the framework done. Now there’s a way to turn the information that you gather through conversations into sensible, objective decisions and to make sure that you’re putting your effort where it’s most useful.

Part 4 of this series introduces some risk management tools that help this process run well.