This is what happens when I take whiteboard notes at a content strategy meetup

Last night’s content strategy meetup was a bit of a show ’n’ tell session about content audits. Of course, one of the best things about meetups like these is the little hints and tips you get from each other. So I wrote a few of them down:

My handwriting is so pretty

Or, if you prefer things in some sort of order…

Useful resources and people to learn from

  • Content Analysis Tool (CAT) – via Trudy (see updates below)
  • Maadmob’s Content inventory spreadsheet – via Emma
  • Karen McGrane’s talk, ‘Adapting ourselves to adaptive content’ – via Mike
  • Content Audits and Inventories: A Handbook, by Paula Ladenburg Land – via Rick

Things we’ve learned from experience

  • Every audit is different. Know why you’re auditing and what questions you want to be able to answer afterwards. This affects how you’ll go about it, what information you’ll collect, and whether you’ll be able to visibly succeed. Examples: Finding crappy, old content to delete; preparing for a migration; or just quantifying your content. These are three different types of audit, each needing a slightly different approach.
  • If you’re auditing for content quality, with a view to deleting the crap stuff, start by laying out criteria that content has to meet. Get your stakeholders and content owners to agree to the criteria beforehand. This way, you can hit the ‘delete’ key on content that fails the test without having to double-check with anyone.
  • For page-by-page audits, assign a unique identifier to each page as you go. You can’t rely on URLs since they can change, or a page can have more than one. You can’t rely on page titles since they can be repeated throughout a site. This can just be a number (the home page is 0001, for example, and everything goes from there), or more hierarchical to reflect navigation (e.g. 1.2.3 is three levels deep, the 3rd page in the 2nd section – there’s a quick sketch of this after the list).
  • For ‘what do we have?’ audits, where you’re out to quantify your site(s), automated tools are less trustworthy than a manual clickthrough. Sorry. The do-it-yourself method also lets you really, really get to know your site (and can be a good time to put on a talking book).
  • Don’t collect data you don’t need. You’re probably going to be filling in a spreadsheet with thousands of cells. You don’t have to fill in every single one of them – you just need enough info to answer the questions you started with. Example for ‘content cull’ audits: Once you know enough to decide whether to delete a page or not, move on.
  • It’s not only about pages. Don’t forget to hunt through documents like PDFs, and files (e.g. images). These are content just as much as your HTML pages are.
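
If you want to see the hierarchical numbering in action, here’s a minimal sketch in Python. The site structure (and everything else in it) is invented purely for illustration:

    # A toy site structure, invented for illustration; each key is a page,
    # each value is a dict of its child pages.
    site = {
        "Home": {
            "About": {"Team": {}, "History": {}},
            "Products": {"Widgets": {}},
        },
    }

    def assign_ids(pages, prefix=""):
        # Walk the navigation tree, yielding (id, title) pairs like ("1.2", "Products").
        for n, (title, children) in enumerate(pages.items(), start=1):
            page_id = f"{prefix}.{n}" if prefix else str(n)
            yield page_id, title
            yield from assign_ids(children, page_id)

    for page_id, title in assign_ids(site):
        print(page_id, title)
    # 1 Home
    # 1.1 About
    # 1.1.1 Team
    # 1.1.2 History
    # 1.2 Products
    # 1.2.1 Widgets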

Updates

  • Thursday 2nd, 2:49pm
    From Trudy: CAT – Content Analysis Tool
    “While an automated audit can’t replace a manual audit, the output might make the job a bit easier – so I am on the lookout for good tools. I found this Content Analysis Tool (CAT) a few months ago and ran the free trial. It returned information that looked to be more useful to content people than what Xenu returns. So I’m planning to give it a whirl on an upcoming project.”
  • Thursday 2nd, 3:01pm (also updated above)
    Link from Emma: Maadmob’s Content inventory spreadsheet
  • Friday 3rd, 9:56am
    From Mike: Karen McGrane’s talk, ‘Adapting ourselves to adaptive content’. If you missed our ‘movie night’ meetup in June, you should definitely make the time for this.
  • Monday 6th, 9:57pm
    Rick from the UK (and from Twitter) has been in touch to say that Paula Ladenburg Land, one of the co-founders of the company behind CAT, has written Content Audits and Inventories: A Handbook, which sounds really useful. How cool is that? Tips from the other side of the world! (In related news, Rick’s book is out really soon.)

Anything to add?

If there’s anything I’ve missed, or if you have something to add, tweet @aucklandcs or drop a comment in the Meetup message board.

I’ll be at Dunedin’s UX Design Day, October 31

One of the hardest things about getting to present at conferences and events is keeping my damn mouth shut about it until the organisers have announced the line-up. So, after a week or two of keeping my trap shut: I’M COMING TO DUNEDIN FOR UX DESIGN DAY!

Dunedin’s a special place, and any excuse to head back down for a visit is always a good thing. I lived there for seven years, it’s where I met my wonderful wife (as well as being her home town), and it’s where I made some of the best friendships of my life. As far as I recall, it was always exactly like this:
[Photo: Dunedin]
(That’s me on the obligatory outdoor couch, in the yellow and black t-shirt.)

UX Design Day is on October 31, and if a “one-day, sleeves-rolled-up, PVA-on-your-fingers design conference” sounds like fun to you, registrations are open.

I’ll be talking about a subject I’ve riffed on a few times: the space where content and UX overlap. In particular I’ll look at how organisations can encourage that overlap, and make the most out of it. My talk is called ‘Content + UX = Better business’. At least, it will be once I’ve assembled it.

If you’re after an idea of where I’m starting from, have a look at this post: Content strategy and UX are twins. I wrote it before UX New Zealand last year, and I think it’s truer than ever now.

==

This post is 272 words long, with an average reading grade of 7.0.

Making risk management work (4): The tools you need

This post is part of the Content Is The Web risk management series.

This post explains the tools and tables you’ll use to manage risks properly. It follows on from earlier posts about the framework and conversations that risk management uses.

The short version:

Each risk is documented in a separate report, and each piece of content you work on needs a register of all its risks. So long as you’re having the right conversations and following the framework, this is basic admin.

There are two main tools: a report for each risk, and a register of all risks.

Each risk is documented in a separate report

Each risk has its own report, which contains everything you need to know about it. In most cases this doesn’t need to be any more than a one-pager (whether physical or digital). However you construct them, make sure you have a decent filing system.

A risk report tells you:

  • Which content the risk relates to – this can identify the page or document you’re dealing with, or be more specific (e.g. ‘About Us, paragraph 3’)
  • Who the risk reporter is
  • The risk statement
  • The initial risk assessment (shown on the severity table)
  • Mitigation actions, assigned to an individual
  • Later assessments (after mitigation)
  • Acceptance detail – the risk owner who’s accepting the risk, and the date.

The main thing risk reports do is track responsibilities, and hold the all-important signature from the risk owner when the risk is accepted and your content is one step closer to publication.

As a risk moves through the stages I introduced in part 3, you update the risk report.

Report: As soon as a risk is raised, start a report and fill in the first three details (what the content is, the risk reporter, and the risk statement).

Rate/Re-rate: As you hold full risk conversations, you can properly assess the risk.

Accept? If you think the risk is acceptable, get the report in front of the risk owner and see what they say.

Mitigate: Set and record tasks, and document whose responsibility they are.

Accepted: When the risk owner is happy to go ahead with things the way they are, get it in writing!

Report, rate, then either accept or mitigate.
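
If it helps to see the moving parts in one place, here’s a rough sketch of a risk report as a simple record, in Python. The fields mirror the list above; everything else (the dataclass, the example values) is my own invention, not a prescribed format:

    from dataclasses import dataclass, field
    from typing import Optional

    # A one-page risk report as a record. Field names are illustrative;
    # use whatever your filing system needs.
    @dataclass
    class RiskReport:
        content: str                       # which content the risk relates to
        reporter: str                      # who raised the risk
        statement: str                     # the risk statement ("...might...")
        assessments: list = field(default_factory=list)  # initial rating plus re-ratings
        mitigations: list = field(default_factory=list)  # (action, assignee) pairs
        accepted_by: Optional[str] = None  # the risk owner who signs off
        accepted_on: Optional[str] = None

    # Report: start the record with the first three details.
    report = RiskReport(
        content="About Us, paragraph 3",
        reporter="Brand manager",
        statement="The tone might confuse people who know our brand well.",
    )

    # Rate, mitigate, re-rate, then get the acceptance in writing.
    report.assessments.append(("Only some people", "Bad", "Revisit"))
    report.mitigations.append(("Rework paragraph 3 in house style", "J. Writer"))
    report.assessments.append(("Hardly anyone", "Bad", "Accept"))
    report.accepted_by, report.accepted_on = "Content owner", "2014-09-13"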

Each piece of content you work on needs a register of all its risks

A risk register gives you a single place to see progress on a given piece of content. The more content you work on, the more risk reports you’re going to have, so a register is an easy way to group reports.

Risk registers work best as spreadsheets (there’s a quick code sketch after this list). Each one records:

  • Content name or identifier
  • Risk owner – remember from part 1 that each piece of content has a single risk owner
  • For each risk:
    • Risk statement
    • Current assessment
    • If it’s not accepted yet, the name of the person who’s working on it
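
As a sketch, here’s that register written out as a CSV in Python. The column names and example rows are just my guesses at a sensible layout, not a required format:

    import csv

    # A minimal sketch of a risk register as a spreadsheet (CSV here).
    header = ["Content", "Risk owner", "Risk statement", "Current assessment", "Assigned to"]
    rows = [
        ("0001", "A. Owner", "Skim readers might miss the important info", "Revisit", "J. Writer"),
        ("0001", "A. Owner", "The tone might confuse people who know our brand", "Accepted", ""),
    ]

    with open("risk-register.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)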

So long as you’re having the right conversations and following the framework, this is basic admin

If keeping this documentation up to date is difficult, you probably have a bigger problem. The paperwork is deliberately simple because the hard parts, if there are any, come in the conversations (i.e. teasing out all the details you need from risk reporters) and in setting up the framework (i.e. agreeing on how to measure likelihood and consequence, and working out what level of risk is okay to take).

Ladies and gentlemen, that’s it

This is the end of the Content Is The Web risk management series. I really hope you can make some good things happen with this approach. Please, let me know what you think, or ask me questions – leave a comment here or tweet @MxDEJ.

Making risk management work (3): The framework

This post is part of the Content Is The Web risk management series.

Update, 13 Sept 2014: I finally got around to adding in the five steps a risk goes through.

Risk management replaces your old sign-off process. As part 2 explained, it changes what you ask as you work through content with other people. Once you have a big pile of information from these risk reporters, this post explains how to sort through it all. The next post introduces some of the tools you’ll use.

The risk management framework makes the entire process as objective as it can be. It rates each risk’s likelihood and consequence on separate scales, then produces a severity measurement. This determines how acceptable the risk is (or isn’t), and shows you what risks are most important.

The short version:

This needs managerial buy-in, so work with higher-ups. Classify risk consequences, then set objective grades for each type of consequence, and for likelihood. Put those grades on a grid, overlay severity ratings, then track each risk through five stages from ‘reported’ to ‘accepted’. Hey presto, you have a risk management framework.

This needs managerial buy-in, so work with higher-ups

This framework is the key to consistent, objective risk management. It needs to be trustworthy and taken seriously. It essentially sets rules for your workplace, so you need senior-level people to agree with it. That means getting the right bosses to approve the framework in the first place.

The “right boss” here is the person senior enough to balance the risk/reward scale on behalf of your company. Every risk carries a reward, remember, and this framework ultimately decides which risks are worth taking.

Classify risk consequences…

Every risk has at least one consequence, but they’re not directly comparable. In part 2 I used examples of risks to a company’s brand and to customer satisfaction, which can’t be compared directly. Other risks might have consequences that are financial (they cost money), reputational (they make you look bad), legal (they get you in trouble with regulators or the courts), or competitive (they make it harder for you to be the best in your market).

There’s no canonical list, but you want between five and ten categories, each narrow enough to group only similar things. Remember that the category pool needs to be wide enough that every risk fits somewhere.

After you’ve held a few risk conversations, you’ll start to see which categories will work for your organisation.

…then set objective grades for each type of consequence…

The reason you need groups of comparable consequences is, obviously, so you can compare them. For your framework you need a small scale that you can sort consequences into. I’m going to use a simple example with only three steps on my scale. That might be as much as you need, too. Bigger definitely isn’t better: I have yet to see any value in even a five-step scale.

My three steps for consequences are:

  1. Barely noticeable
  2. Bad
  3. Catastrophic

Now to define those three terms, as objectively as possible. For each risk category, you’re labelling the three-step scale with measurable amounts.

Some scales are obvious. For financial risks it’s about the dollar amount (if you have different levels of managers with different spending approvals, they can make useful cut-off points). A $1,000 consequence might be ‘barely noticeable’ to your company, anything up to $10,000 ‘bad’, and anything more counting as ‘catastrophic’.
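
In code, that financial scale is nothing more than a couple of thresholds. A minimal sketch, assuming the made-up $1,000 and $10,000 cut-offs above:

    # Grading a financial consequence against the example cut-offs above.
    # The $1,000 / $10,000 thresholds are illustrative; borrow your managers'
    # spending-approval limits instead.
    def grade_financial(dollars):
        if dollars <= 1_000:
            return "Barely noticeable"
        if dollars <= 10_000:
            return "Bad"
        return "Catastrophic"

    print(grade_financial(500))     # Barely noticeable
    print(grade_financial(50_000))  # Catastrophic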

Others need a bit more thought. Some good scales I’ve seen were based on:

  • Who’d be involved. Physical risks, if they play out, might require self-administered first aid (like a sticking plaster) when they’re ‘barely noticeable’, a doctor if they’re ‘bad’, or a visit from the coroner if they’re ‘catastrophic’.
  • The effort to put things right. Legal issues, for example, might require a bit of in-house time writing a letter, or serious hours negotiating a settlement, or weeks spent on court appearances.
  • Spread. One of my favourites was a scale for reputational damage, based on the level of media coverage the event would receive. The scale went from “grumpy email from the boss” (barely noticeable) through “national media coverage” (bad) to “boss appearing on international television” (catastrophic).

What’s important is that the definitions make sense to your organisation. For every risk category, line up fitting definitions at each step of your scale.

…and set equally objective grades for likelihood.

This is easier than the consequence scale, because there’s only one version of it. Likelihood is about people: how many people is this risk likely to affect?

Sticking with a three-step scale, let’s say that risks might affect:

  1. Hardly anyone
  2. Only some people
  3. Almost everybody

Absolute numbers (“10,000 people is only some people”) aren’t the way to go, though. Different content serves audiences of different sizes. For some content, 10,000 people would be everyone who sees it. For other content, 10,000 views are what happens in the first hour. So, think in percentages. For example:

  • Hardly anyone: Under 0.1%
  • Only some people: Between 0.1% and 10%
  • Almost everybody: More than 10%

By using proportions, you can measure content’s risks without the popularity of the content making things more difficult.
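
Here’s the same idea as a tiny function, assuming you can estimate how many people a risk would affect and how many see the content. The percentage bands match the example above:

    # Grading likelihood as a proportion of the content's audience,
    # using the example percentage bands.
    def grade_likelihood(people_affected, audience_size):
        proportion = people_affected / audience_size
        if proportion < 0.001:   # under 0.1%
            return "Hardly anyone"
        if proportion <= 0.10:   # 0.1% up to 10%
            return "Only some people"
        return "Almost everybody"

    print(grade_likelihood(5, 100_000))     # Hardly anyone
    print(grade_likelihood(3_000, 10_000))  # Almost everybody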

Put those grades on a grid, and overlay severity ratings

This is where the term “framework” becomes slightly more literal. Draw up a table with your likelihood grades on one axis and consequence grades on the other.

                     Hardly anyone   Only some people   Almost everybody
Catastrophic
Bad
Barely noticeable

This table is going to determine each risk’s severity. There are four severity ratings, which tell you what to do with the risk next:

  1. Mitigate: The risk is unacceptable – it’s too likely to occur, or its consequence is too major, or both. Something needs to change.
  2. Revisit: While not as bad as above, the risk needs to be addressed. Usually only one aspect (likelihood or consequence) will be a concern. If there’s no way to reduce the risk without lowering content quality, it might be a risk worth taking.
  3. Accept: The risk is acknowledged, but so unlikely and/or inconsequential that it’s not worth addressing any more.
  4. (Not a risk): This looked or sounded like a risk, but once rated turned out to not matter at all.

These severities will spread through the table from top-right to bottom-left. Remember to have a senior manager on hand to approve the layout here – it determines what’s going to get your time and attention, and what’s going to be published.

A typical example would be:

                     Hardly anyone   Only some people   Almost everybody
Catastrophic         Revisit         Mitigate           Mitigate
Bad                  Accept          Revisit            Mitigate
Barely noticeable    (Not a risk)    Accept             Revisit
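
If you ever automate the bookkeeping, that grid translates directly into a lookup table. A minimal sketch mirroring the example layout above:

    # The example severity grid as a lookup: (consequence, likelihood) -> what to do next.
    SEVERITY = {
        ("Catastrophic", "Hardly anyone"): "Revisit",
        ("Catastrophic", "Only some people"): "Mitigate",
        ("Catastrophic", "Almost everybody"): "Mitigate",
        ("Bad", "Hardly anyone"): "Accept",
        ("Bad", "Only some people"): "Revisit",
        ("Bad", "Almost everybody"): "Mitigate",
        ("Barely noticeable", "Hardly anyone"): "(Not a risk)",
        ("Barely noticeable", "Only some people"): "Accept",
        ("Barely noticeable", "Almost everybody"): "Revisit",
    }

    print(SEVERITY[("Bad", "Almost everybody")])  # Mitigate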

There are five stages a risk goes through

As you work on risk mitigation, you’ll reduce the likelihood and consequence ratings and shift risks towards the bottom-left corner of your grid.

As you work on risks, they go through five stages:

  1. Report. A risk reporter tells you about the risk.
  2. Rate. Measure likelihood and consequence, and place the risk in the severity table. This tells you whether to accept or mitigate the risk.
  3. Mitigate. Do something to reduce the likelihood and/or consequence of the risk.
  4. Re-rate. Now that someone’s done something to mitigate the risk, there’s a new likelihood and/or consequence (and therefore a new severity) to confirm. Again, you’ll see whether to accept the risk or not.
  5. Accepted. The risk owner is happy to take the risk.

Report, rate, then either accept or mitigate.

Hey presto, you have a risk management framework

That’s the framework done. Now there’s a way to turn the information that you gather through conversations into sensible, objective decisions and to make sure that you’re putting your effort where it’s most useful.

Part 4 of this series introduces some risk management tools that help this process run well.

Making risk management work (2): Holding conversations

This post is part of the Content Is The Web risk management series.

You know the roles and definitions that risk management is based on, so now we turn to how to talk about risks with your risk reporters. After that, part 3 sets out a framework for making sense of what you hear, and part 4 introduces the tools you need to manage it all.

(Risk reporters used to be stakeholders and points of sign-off. If that’s news to you, let me repeat the link to Making risk management work (1): Roles and definitions.)

It’s up to you whether to talk to risk reporters one by one, or all together as a group. What matters most, especially at first, is that you actually talk. The old days of sending drafts and receiving tracked changes or free-form comments are over. Your risk reporters need to give you specific information that they probably haven’t been asked for before. You’ll need to prompt them, ask follow-up questions, and really get to know what they’re thinking. Could you do that over email? Only slowly, if at all.

It probably sounds like this will take a long time. At first it might, but by building up understanding and (hopefully) rapport, this working relationship will pay off in time. And you’ll end up with better content, too.

Here’s the short version:

Never ask “can I publish this?” again. Instead, ask “If I did publish this, what could happen?”. Talk about bad things that might happen, and in each case get specific about likelihood and consequence. Be open, be receptive, and use other people’s expertise.

Never ask “can I publish this?” again

Your old workflow was based on permissions: “Is this approved?”. With risk management, you’re not asking for approval anymore. Your risk reporters don’t hold a “stop/go” sign. Instead, they have information that you need to understand.

So throw away your old script and those old power relationships.

Instead, ask “If I did publish this, what could happen?”

The subtler question you ask instead is an “if”: “What effects might this draft content have if it becomes the final, public version?”

This new question does a few useful things:

  • It nudges people to think from the reader’s point of view
  • It encourages realism, rather than feedback about academic or unimportant things
  • It looks at content’s effect on the audience.

Talk about the content as if it’s live, and being read by real people. It might even help to use personas here. Dig into the information and impressions that you’re passing on.

Talk about bad things that might happen

Part 1 defined a risk as a bad thing that might happen. This is what you need to talk about. When you have it right, you end up with a risk statement.

“It seems wordy” isn’t a risk. “You’re missing our usual tone of voice” isn’t a risk. “You have the product measurements wrong” isn’t a risk, either. But this is the sort of thing that your colleagues or clients will be used to saying. By controlling the conversation you can tease the actual risks out.

Looking at things from the reader’s point of view helps a lot. The content is wordy: So what? So…the reader might not finish the page. And the info at the bottom is really important.

The risk statement, then, is that skim readers might miss the important info at the bottom of the page.

The tone of voice doesn’t sound right. What’s the effect of that? People who read a lot of our stuff won’t get the familiar feeling that we give them. We might sound like we’re being fake.

Or, as a risk statement: The tone might confuse people who know our brand well.

Product measurements being wrong is an easy one to turn into a risk statement. People might buy something that doesn’t do what they want.

It’s not a coincidence that every risk statement includes the word “might”.

Get specific about likelihood…

For every risk, get into detail about who could be affected. Home in on that word “might”.

How many people might miss that important last paragraph? If your pages are usually around 100 words but this draft is a 10,000-word diatribe, probably quite a lot. But if it’s only 120 words, more people will stick around ’til the end.

The tone of voice risk is only going to affect people who already expect a certain voice from you. The better known you are, the bigger the risk’s likelihood.

The product measurement risk is very likely to play out if you’re describing a 2-bedroom house as having 5 bedrooms. Everybody’s going to pick up on that one. But if you’re saying that a 512GB hard drive is only 500GB, fewer people are going to notice.

…and get specific about consequence, too

A risk’s consequence is completely separate from likelihood. Consequences happen when risks play out.

In every case, take the “might” out of the risk statement and use “when” instead.

When people miss the important info at the bottom of the page, what happens next? Maybe they miss a special deal and overpay. Maybe they don’t see that the product isn’t shipping until next year, or never see that an updated version is also available.

When our tone of voice confuses people who know our brand well, the consequence is a change in their brand perception, which works against other branding that took a lot of effort to get right.

Finally, the product measurement case: When people buy something that doesn’t do what they want, you might end up with anything from unhappy customers to legal problems.

Be open and receptive

You’ll find that risk conversations like these are quite different to the way you’ve worked through content approvals in the past. You’re asking much deeper, better questions, so you’ll end up with a lot more information.

This is a good thing.

Getting your risk reporters to describe things from the reader’s point of view, and to properly break down likelihoods and consequences, brings clarity. Chances are that neither of you will have thought about your content like this before. You’ll be surprised by what you come up with, what ends up being important, and what doesn’t.

This is everyone else’s time to say what they think. Accept what you’re hearing and let the tools you’re using – more on those in parts 3 and 4 – direct attention where you need it most.

Use other people’s expertise

Another good thing about these conversations is the way they show you how other people think. It’s a chance to learn from experts, whether a brand manager explaining the consequences of losing brand voice, or a lawyer detailing what happens when the small print isn’t there.

You’ll walk away smarter. Next time you’re working on similar stuff, you’ll have more knowledge and be better at your job.

Moving on, part 3 describes a framework to make sense of all the information you draw out of risk conversations. Then part 4 looks at the tools you use as you manage risks.

Doing it wrong: Ticketmaster’s email footer

Today’s example of how not to do things comes from Ticketmaster’s email footer, which has a delightful dark grey on black thing going on.

[Screenshot: Ticketmaster’s email footer]

At least the link text is visible. Unfortunately, in the first case that text is “click here”.

What could they be hiding? Let’s look at that again:

[Screenshot: Ticketmaster’s email footer, text highlighted]

Bastards.

Update

Part two: If you’re going to be inaccessible, you might as well be funny

OK, so I just bought tickets from Ticketmaster. But I had to decode this wonderful CAPTCHA first.

[CAPTCHA image – the text reads: “technicolor yawn”]