The poor quality of standard letters and forms systems, on which I commented in my last two blogs, is only fully apparent when comparing data from good and bad system designs.
We routinely collect such data as part of our Standards program, and use it to set achievable standards in communication system design.
CRI defines many types of Communication Standards. Here are two.
- System specifications and maintenance requirements standards
A full set of specifications—business rules, document-building grammar, detailed design layout graphics, and typographical style sheets for all elements—that can be implemented and tested in an organisation in less than six weeks using internal resources.
A one-week turnaround on any subsequent modifications to document content or business rules using internal resources.
- Acceptability and Usability standards
Citizens who say they are able to read English should be able to:
• find at least 90% of what they look for
• use appropriately at least 90% of what they find.
Standards are useful at many stages in a system’s lifecycle.
At the Baseline Measurement stage we measure the existing performance of the system. Because this type of baseline measurement is seldom done, many organisations have no idea just how bad their system’s performance actually is.
The spreadsheet opposite illustrates the data from the baseline measurement of a client’s system, showing in yellow all the instances where the responses were below an acceptable standard, and in grey where the responses were potentially below an acceptable level. Twenty-five percent of the cells in the spreadsheet were yellow.
This finding at the start of a project is quite normal, though it came as a complete shock to the organisation’s management. Generally, we expect the output of most systems in current use to perform poorly. A typical result at the Baseline Measurement stage would be:
Less than 40% of letter readers can find and use the content of the letter to an acceptable standard.
Later, at the Monitoring Stage, after the system has gone live, we conduct another Communication Standards study and compare it with the Baseline Measurement data. This before-and-after data is critical for:
- satisfying clients that they have realised a significant return on investment (ROI)
- setting realistic performance standards for future projects, and
- enabling us to measure the performance of such systems worldwide so that we can inform debates about the standards and quality of communication that are used in different countries.
The last of these will become a leading part of our future research program, as we shift CRI’s focus from research-based and evidence-based advice and design to research-based advocacy. But more on that in my next blog.
If you look at my last two blogs on letters and forms, you will gather that many systems fail to meet good practice standards.
This need not be so. I will end this mini rant about the standard of letter and form systems with a little anecdote from one of our system design projects.
In a project for a major insurance group, we handed in a complete set of specifications just before the Christmas break, asking the internal project manager how long the IT department had allocated to code, test and implement the new system. He said they had allocated a full year.
When we came back from the Christmas break at the end of January, we had a meeting with the project manager to discuss problems and any issues at the early stage of code writing. “It’s all done, and tested,” he said. “We go into production in February!”
The cost savings for the organisation were substantial. For every dollar spent on our fees, the organisation saved over $100 in IT costs. Not a bad ROI, and fairly typical of our projects in this area.
Cost savings have never been our primary focus. Our primary focus has always been—and will continue to be—on improving the relationship between organisations and the people who have to read their letters and use their forms. As it turned out, the ROI was a by-product.