Last July the Behavioural Insights Team in the UK reported on what looks like a remarkably thorough study of ways to improve the understanding of consumer contracts. It's still a hot topic, following the Consumer Rights Act 2015 and the government's 2018 Green Paper on Modernising Consumer Markets. And, of course, the explosion of interest in legal information design. The BIT is an independent consultancy that started life as a government programme known as the 'nudge unit'.
Entitled Improving consumer understanding of contractual terms and privacy policies: evidence-based actions for businesses, the report was commissioned by the Department for Business, Energy & Industrial Strategy... a lot of words in this sentence, I know. There is a summary version with guidance, a literature review and a technical report describing a series of experiments which tested a range of ideas for making terms and conditions easier to understand, and for encouraging people to read them. The materials tested are included (not always the case with research reports on information design).
The improvements look impressive. For example, using icons is said to improve comprehension of key terms on an online order form by 34%. But please don't stop reading now and tweet this, because there are problems.
The first problem is that this finding becomes a lot less impressive when you realise that the 34% improvement takes us from 42% comprehension to 57%. In other words, 43% of people still did not understand, so best practice is nowhere near good enough. Most of the improvements achieved were of a similar order, so the real conclusion is: "no matter what we tried, a huge number of people did not understand the key terms and conditions" (my words, my conclusion, not theirs). This should not surprise us, because the 43% who were not helped is roughly the proportion of the population who score poorly in functional literacy tests.
A similar level of understanding of the Stop sign on the roads would cause mayhem. But when someone who struggles with literacy fails to understand the terms and conditions for a payday loan, we could say they are in danger of a cognitive accident. Companies have a duty of care, and it's time to risk-assess the small print in the same way as we risk-assess physical environments.
A second problem is that, while the research team provide very thorough explanations and justifications for their experimental methodology and statistical analysis, very little is said about how they created the experimental materials, which just seem to appear with little critical description or argument.
The design of the materials is so amateurish that the designer of the guidance document has felt the need to redesign them for publication in a set of guidelines, lest readers assume the research endorses everything they see. The design shown above, with a green background and green icons, is from the guidance report, but in the full methodology report (below) the icons are black and the background is white. And the 'items' column is on the left, a lot of the text is upper case, and the relationship between response boxes and response cues is different. In the guidelines version the icons are aligned on the left edge, a more prominent place for skim-readers to spot. So while this makes it a better guide to good practice, it misrepresents the research.
The redesign is careless, too. In the version that was researched, the customer is given a choice between 'Pay by card' and 'Digital wallet'. One of the key terms (the last one, with an 'i' icon) only concerns the digital wallet option, so it is in the wrong place. In the guidelines version (above), 'digital wallet' has become PayPal, although there is no PayPal option.
This may not matter, but then again it may.
This particular study also includes a version which uses FAQs for the key terms:
It is immediately apparent that the FAQ version also includes icons. How then are we to interpret the results as reported? The FAQ version improved understanding by 36%, while icons improved it by 34%. But it is clear here that the 36% improvement was from FAQs plus icons, which suggests the FAQ element added only about 2 percentage points over icons alone. We'll never know for sure, because they didn't test FAQs without icons.
So while the stats are reported and argued for in obsessive detail, they are undermined by sloppy inattention to design.
Here's another problem. The control condition was an order form which presented none of the terms, but required people to click to read a full set of small print. However, only 11.06% of people did so, yet the control group still scored 42% on the comprehension test. So... erm... were the other 89% of people just guessing from their general knowledge of mail order?
Oh, and one more problem. The report does not tell us whether the participants could see the materials while they answered the comprehension questions. It just says "After seeing the experiment’s materials, participants... answered eight comprehension questions about the material they had seen...".
This is important because in real life you can have the material in front of you if you want the answers at the point you are placing the order. That is the assumption that information designers would make in this case. They would be trying to draw people's attention to key information, not necessarily to help them memorise it. If later access to the same information were important, the designer could include it in a confirmatory email which the customer could retain and refer to. For example, the 90-day return period could be highlighted on the delivery note. And so on.
In real life, people looking at the FAQ and icon conditions would have no problem seeing the key information, compared with the control group who would have to click and read a traditional long set of small print. And we know that very few people do that.
If they had the material in front of them, then 57% comprehension is a terrible result. So I suspect they didn't have it in front of them, in which case this is not a test of understanding but of memory, which is not the same thing at all. And if memory is what matters, educational research routinely distinguishes between immediate recall and delayed recall following a distractor task or the passage of time.
This has been a recurring theme over a long career – expensive and apparently thorough research on the presentation of information, undermined by inattention to design processes and expert critical judgement. The first publication I co-authored in 1976 was on exactly this issue: Criticism, alternatives and tests. It brings out the grumpy old man in me every time.