This post is part of the Content Is The Web risk management series.
Risk management replaces your old sign-off process. As part 2 explained, it changes what you ask as you work through content with other people. Those conversations leave you with a big pile of information from your risk reporters; this post explains how to sort it. The next post introduces some of the tools you’ll use.
The risk management framework makes the entire process as objective as it can be. It rates each risk’s likelihood and consequence on separate scales, then produces a severity measurement. This determines how acceptable the risk is (or isn’t), and shows you what risks are most important.
The short version:
This needs managerial buy-in, so work with higher-ups. Classify risk consequences, then set objective grades for each type of consequence, and for likelihood. Put those grades on a grid, and overlay severity ratings. Hey presto, you have a risk management framework.
This needs managerial buy-in, so work with higher-ups
This framework is the key to consistent, objective risk management. It needs to be trustworthy and taken seriously. It essentially sets rules for your workplace, so you need senior-level people to agree with it. That means getting the right bosses to approve the framework in the first place.
The “right boss” here is the person senior enough to balance the risk/reward scale on behalf of your company. Every risk carries a reward, remember, and this framework ultimately decides which risks are worth taking.
Classify risk consequences…
Every risk has at least one consequence, but they’re not directly comparable. In part 2 I used examples of risks to a company’s brand and to customer satisfaction, which can’t be compared directly. Other risks might have consequences that are financial (they cost money), reputational (they make you look bad), legal (they get you in trouble with the courts), or competitive (they make it harder for you to be the best in your market).
There’s no canonical list, but you want between five and ten categories, each narrow enough to group only similar things. Remember that the category pool needs to be wide enough that every risk fits somewhere.
After you’ve held a few risk conversations, you’ll start to see which categories will work for your organisation.
…then set objective grades for each type of consequence…
The reason you need groups of comparable consequences is, obviously, so you can compare them. For your framework you need a small scale that you can sort consequences into. I’m going to use a simple example with only 3 steps on my scale. That might be as much as you need, too. Bigger definitely isn’t better: I have yet to see any value in even a 5-step scale.
My three steps for consequences are:
- Barely noticeable
- Bad
- Catastrophic
Now to define those three terms. Objectivity is crucial here, which means making a different, close-fitting set of definitions for each risk category.
For each risk category, you’re labelling the three-step scale with measurable amounts.
Some scales are obvious. For financial risks it’s about the dollar amount (if you have different levels of managers with different spending approvals, they can make useful cut-off points). A consequence of up to $1,000 might be ‘barely noticeable’ to your company, with anything up to $10,000 being ‘bad’, and any more counting as ‘catastrophic’.
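As a sketch, that financial scale could be written as a simple threshold function. The cut-off points are the example figures above; your own would come from your managers’ spending approvals.

```python
def financial_consequence(dollar_amount: float) -> str:
    """Grade a financial consequence on the three-step scale,
    using the example cut-off points ($1,000 and $10,000)."""
    if dollar_amount <= 1_000:
        return "Barely noticeable"
    if dollar_amount <= 10_000:
        return "Bad"
    return "Catastrophic"
```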
Others need a bit more thought. Some good scales I’ve seen were based on:
- Who’d be involved. Physical risks, if they play out, might require self-administered first aid (like a sticking plaster) when they’re ‘barely noticeable’, a doctor if they’re ‘bad’, or a visit from the coroner if they’re ‘catastrophic’.
- The effort to put things right. Legal issues, for example, might require a bit of in-house time writing a letter, serious hours negotiating a settlement, or weeks spent on court appearances.
- Spread. One of my favourites was a scale for reputational damage, based on the level of attention the event would receive. The scale went from “grumpy email from the boss” (barely noticeable) through “national media coverage” (bad) to “boss appearing on international television” (catastrophic).
What’s important is that the definitions make sense to your organisation. For every risk category, line up fitting definitions at each step of your scale.
…and set equally objective grades for likelihood.
This is easier than the consequence scale, because there’s only one version of it. Likelihoods involve people: how many people is this risk likely to affect?
Sticking with a three-step scale, let’s say that risks might affect:
- Hardly anyone
- Only some people
- Almost everybody
Rather than absolute numbers (“10,000 people is only some people”), look at things proportionately. If you’re working with content that could expect 5,000 views a week, how many of those people are likely to be affected by the risk? You might end up with a scale of:
- Hardly anyone: Under 0.1% (fewer than 5 people a week)
- Only some people: Up to 10% (5–500 people a week)
- Almost everybody: More than 10% (more than 500 people a week)
By using proportions, you can measure content’s risks without the popularity of the content making things more difficult.
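A minimal sketch of that proportional scale, using the example thresholds above (0.1% and 10%):

```python
def likelihood(affected_per_week: int, views_per_week: int) -> str:
    """Grade likelihood by the proportion of viewers a risk affects,
    so the scale works regardless of how popular the content is."""
    proportion = affected_per_week / views_per_week
    if proportion < 0.001:   # under 0.1%
        return "Hardly anyone"
    if proportion <= 0.10:   # up to 10%
        return "Only some people"
    return "Almost everybody"
```

Because the function works on proportions, the same thresholds apply whether the content gets 5,000 views a week or 5 million.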
Put those grades on a grid, and overlay severity ratings
This is where the term “framework” becomes slightly more literal. Draw up a table with your likelihood grades on one axis and consequence grades on the other.
This table is going to determine each risk’s severity. There are four measurements of risk severity, which tell you what to do with the risk next:
- Mitigate: The risk is unacceptable – it’s too likely to occur, or its consequence is too major, or both. Something needs to change.
- Revisit: While not as bad as above, the risk needs to be addressed. Usually only one aspect (likelihood or consequence) will be a concern. If there’s no way to reduce the risk without lowering content quality, it might be a risk worth taking.
- Accept: The risk is acknowledged, but so unlikely and/or inconsequential that it’s not worth addressing any more.
- (Not a risk): This looked or sounded like a risk, but once rated turned out to not matter at all.
These severities will spread through the table from top-right to bottom-left. Remember to have a senior manager on hand to approve the layout here – it determines what’s going to get your time and attention, and what’s going to be published.
A typical example would be:
| | Hardly anyone | Only some people | Almost everybody |
| --- | --- | --- | --- |
| Barely noticeable | (Not a risk) | Accept | Revisit |
| Bad | Accept | Revisit | Mitigate |
| Catastrophic | Revisit | Mitigate | Mitigate |
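Once the grid is agreed, it’s just a lookup table. Here’s a sketch in code, assuming a layout along the lines of the example above; the exact placement of severities in the ‘Bad’ and ‘Catastrophic’ rows is an illustrative assumption, spreading from top-right to bottom-left as described.

```python
# Severity grid: consequence grade -> likelihood grade -> action.
# The cell values below are an assumed example layout, not a standard.
SEVERITY = {
    "Barely noticeable": {
        "Hardly anyone": "(Not a risk)",
        "Only some people": "Accept",
        "Almost everybody": "Revisit",
    },
    "Bad": {
        "Hardly anyone": "Accept",
        "Only some people": "Revisit",
        "Almost everybody": "Mitigate",
    },
    "Catastrophic": {
        "Hardly anyone": "Revisit",
        "Only some people": "Mitigate",
        "Almost everybody": "Mitigate",
    },
}

def severity(consequence: str, likelihood: str) -> str:
    """Look up a risk's severity from its two ratings."""
    return SEVERITY[consequence][likelihood]
```

The point of encoding it this way (or just pinning the table to the wall) is that rating a risk becomes mechanical: two grades in, one decision out, with no room for on-the-spot negotiation.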
Hey presto, you have a risk management framework
That’s the framework done. Now there’s a way to turn the information that you gather through conversations into sensible, objective decisions and to make sure that you’re putting your effort where it’s most useful.
As you work on risk mitigation, you’ll reduce the likelihood and consequence ratings and shift risks towards the bottom-left corner of your grid. Part 4 of this series introduces some risk management tools that help this process run well.