Contents
Risk assessment is generally seen as a crucial step in creating information products; so much so that it is often legally required. Too bad it is also deeply flawed. Risk assessment lacks empirical grounding, since the required data is usually missing or unobtainable, and thus simply made up. Many methods are unsound, or at least useless to technical communicators, because the probabilistic conception of risk creates a plethora of legal and methodological problems. And the underlying ethics are questionable, to say the least. No wonder most risk assessments end up as mere box-ticking exercises: compliance theater, dead spreadsheets filed and forgotten mere moments after their creation.
But it does not have to be this way. By reminding ourselves why we are assessing risks in the first place, and with some creative rule-bending, we can come up with something better. All we need is some Systems Engineering, some STS, and a little bit of Stoic philosophy.
Takeaways
In this talk, we investigate the issues that plague risk assessment. We use these insights to develop a better framework that is a) useful to technical communicators, b) legally sound, and c) actually helpful in creating safer products.
Prior knowledge
The audience should be familiar with risk-assessment methodology (how and why), and ideally with safety laws.