By Ron Gantt
This is Gerald. Gerald works for a solid waste collection organization in a large municipality in the middle of the United States. He drives a large truck that picks up carts full of garbage and hauls them off to the local dumps. In the process of doing this work, Gerald and his colleagues occasionally have accidents and incidents. Whether it is injuries such as strains and sprains, collisions with other vehicles, or damage to residents’ property, bad things sometimes happen in the course of picking up garbage.
In an effort to reduce the number of these negative events, we partnered with the organization to figure out what was going on. But how? The traditional approach to these projects, and the one the organization initially favored, was to have an expert come into the organization, tell them the right way to do things (based on understanding how and why things were going wrong), and help them create a process that gets the operators to do this one right way. Then, any accidents would result from not doing things the right way, and those responsible could be held accountable.
But that’s not how these things work in real-life organizations. Accidents are not a product of something breaking or failing, but rather a product of the normal work system functioning in an abnormal way. This means that creating a system that prevents failure by looking only at how things fail will not work. You cannot understand failure by looking only at failure. You have to understand how the system normally functions, and how that same functioning sometimes fails.
Based on this, we proposed an approach built on participant-observation of workers in the real work environment. We would “ride along” with workers to see what the world looked like from their perspective. This way we could see what goal conflicts, constraints, and variability Gerald and his colleagues had to manage, what resources the organization had provided to help Gerald deal with these issues, and what strategies Gerald used to bridge the gap between how the organization imagined work was done and how work actually had to be done. From there we could identify opportunities to support the work that Gerald and his colleagues were doing, as well as the adaptations they had adopted to deal with the complexity of their work.
But for an organization convinced that the traditional approach would be best, an approach based on learning from the things that go well did not make sense at first. In helping organizations understand what we are trying to do, we often find it is important to first break down the idea that we are the experts. In particular, safety professionals often have a nasty habit of telling people how to do tasks that the safety professional has never, in fact, done. So we explained to the organization that all hazards and other “unsafe conditions” exist within the context of work. They do not appear randomly, nor are they placed there intentionally to hurt people and break stuff. To solve problems, you first have to understand them. If we were going to develop processes and tools to help the organization deal with the dangers Gerald and his colleagues face, we needed to understand the work processes that surround and give rise to those dangers.
But if you’re going to get a clear picture of how work is done, you not only need to convince the organization, you also have to convince the workers themselves. They are often not used to people, especially safety people, observing their work. As a result, they often default to showing you what they think you want to see, not what you’re actually looking for. You can alleviate some of this by communicating with workers in advance, but often that is not enough.
The best way to get across to Gerald what we were trying to do was to spend a good deal of time in conversation when we first got into the truck. We gave him and his colleagues time to ask us questions and even offered to let them read the notes we were writing during the observation. Again, we approached this by being deferential, noting that we wanted to help solve the problems not by looking at Gerald, but by understanding what he and his colleagues have to deal with on a daily basis. To do that we needed to learn from the experts. Acknowledging our ignorance and affirming their expertise placed us on an even plane with them. Without a significant power differential, Gerald and his colleagues were very open with us about the realities they faced every day.
Another important lesson we learned in this process is to avoid taking the lead on anything. Do not do anything unless they do it first. This is especially important when it comes to things that are procedurally required. For example, we would not put on any safety equipment (even if we knew it was required) until they put it on. We would not put on our seat belts until they did. The reason is that we found that when we led the way on anything, the workers would follow suit. If we put on our hard hats, they would put on theirs. If we didn’t, they would be more likely to do what they normally did. We also wouldn’t ask them about it until later. For example, if they weren’t wearing a seat belt, we would wait until a few hours into the ride and then ask them about it. Usually, we would ask in a way that allowed us to verify a hypothesis. For example, “I noticed that you have to get in and out of the truck a lot to manually pick up bags. I imagine that would make it hard to wear a seat belt, because you’d have to take it on and off a lot. Is that right?” Then the worker would confirm or deny what we saw.
The amazing thing about all of this was how open Gerald and his colleagues were with us. We got to see how complex their work was, including the strategies they used to manage risk in real time, and they even told us about the incidents (some of which were never reported to the company) that taught them these strategies. We were then able to use this information in two ways. First, we could present a clearer picture to management of work as done. Showing managers the realities workers face, the struggles and constraints they deal with, helps managers make better decisions, whether in terms of allocating resources or simply understanding failure better.
Second, we used a cross-functional team that included front-line workers (including Gerald), safety staff, supervisors, and managers (we simply called it the Improvement Team) to go over the information and identify opportunities to improve system resilience. This went beyond identifying and controlling hazards. We focused on ways to improve the flow of information, reduce physical and cognitive load on drivers, and eliminate bottlenecks in the system. For small changes, employees were able to implement improvements quickly. For larger changes, we developed microexperiments to test ideas whenever possible. This helps managers by containing consequences as much as possible and giving them more information before they have to commit to large, potentially politically consequential initiatives.

At the end of the process, Gerald and the rest of the Improvement Team came up with a great list of potential improvements. Perhaps most importantly, the workers all reported being excited by the process. They felt heard and were hopeful that they might get support for the issues they face on a daily basis. The conversations between managers and front-line staff were constructive, with workers realizing that management was interested in listening to them and managers realizing that workers often have great ideas to contribute. And we find that these non-quantifiable relational factors, dialogue and trust, often increase system resilience in ways more profound than even physical changes.