> The “blameless” aspect is crucial: a good postmortem avoids conclusions like “Dan wrote a bug and it brought down our service” and instead says “Dan wrote a bug and it brought down the service: we need to improve our testing and deployment processes to make sure that they catch this category of bugs in the future.”
The offending dog's name is still there...
I would have added some more things that could have mitigated it - like reefing the sail to half mast after the wind increased, or using only the jib, or even switching to engine power.
In the context of incident prevention, that translates into adapting to what is happening and maintaining the safety profile to prevent the incident.
Half-mast sail - less force on the mast, more time to react to things when going solo.
I do something similar when interviewing, asking candidates to walk through a project they’ve worked on that didn’t go as planned, and what they learned.
Usually it’s work-related, but sometimes personal stories like this sailing one give better insight, showing a real understanding of systemic failings and that the candidate truly has the right mindset. Those real-world examples speak volumes.
This is a cool idea! At first I thought it was that they give you notes about what happened, and you have to process the information in real time and suggest practical improvements.
It reminds me of NTSB reports, particularly around aircraft accidents. Even if one person was definitely to blame for the accident (e.g. a pilot performed an incorrect action that led to the loss of a plane), the report will recommend things like better training and testing standards, so that a pilot who would otherwise crash through incompetence can be trained further, without blaming the pilot specifically.