
The Risks Hiding in Plain Sight

Initial thrust from NASA Space Shuttle launch (iStockphoto)

73 seconds.


That’s all the time that elapsed between liftoff and the explosion of the Space Shuttle Challenger on January 28, 1986. The reality, however, is that this was a tragedy decades in the making. The lessons, and the risks, are just as applicable now as they were 40 years ago.


The Shift

During the Apollo missions in the 1960s and early ’70s, NASA’s posture toward risk was simple: we don’t launch until we can prove it’s safe to do so. Unless all systems were “Go,” the default was “No Go.” Over time, that posture slowly, almost imperceptibly, changed.


In the Shuttle era, NASA strove to make launching spacecraft seem routine, when it was (and still is) anything but. Like any other large, bureaucratic, high-profile organization, NASA operated within a complex mix of financial, scheduling, and political pressures. All of this contributed to a migration toward a different, albeit unspoken, posture: “We launch unless we can prove it’s NOT safe.”


73 seconds after liftoff, we all learned it wasn’t safe.


The Warning Signs

Before that fateful January launch, engineers raised concerns about the structural integrity of the rubber O-rings at cold temperatures. The shuttle had sat on the launchpad overnight in sub-freezing temperatures, and at launch time it was only 36°F, colder than any previous shuttle flight. These concerns weren’t new, either. In fact, engineers had observed O-ring degradation on flights that launched in much warmer conditions.


The risks were noted, and efforts to study and address the concerns got underway. But the successful launches continued, and each one seemed to confirm that the O-ring issue wasn’t that serious.


The Pattern

The change in NASA’s stance on risk is an example of organizational drift: the gradual shift of culture, priorities, and practices over time. Each individual change makes sense in the moment. But over time, the gap between who you say you are and how you actually operate widens, often without anyone noticing.


Organizational drift is itself a risk precisely because it happens so slowly. NASA’s leaders didn’t wake up one morning and decide to throw caution to the wind. The erosion was incremental, invisible in real time. And one of their most consequential mistakes was treating the absence of catastrophic failure as evidence of safety.


The Question

With the clarity that hindsight provides, we can point out NASA’s mistakes. But I’m more interested in helping leaders avoid making similar missteps in the future.


Here’s a diagnostic question any organization can ask to surface signs of organizational drift:


“What practices, norms, or priorities that once defined us have quietly shifted, and what risks are we not seeing as a result?”

This question works because it reframes how leaders think about risk. Most organizations are reasonably good at scanning for external threats: market shifts, competitive pressures, regulatory changes. But drift happens internally, in the space between stated values and daily decisions. The question forces leaders to look inward, to examine not just what’s changed in the world around them, but what’s changed in how they operate.


The most dangerous risks are the ones we’ve normalized. NASA didn’t see the O-ring problem clearly because each successful launch made it easier to believe the risk was acceptable. The same pattern plays out in organizations everywhere: the shortcut that becomes standard practice, the safety check that gets skipped when schedules tighten, the value that appears in the mission statement but not in the budget.


Asking this question won’t prevent every disaster, but it can help leaders see organizational and operational risks hiding in plain sight.

