Normal Accidents: the inherent risks of complex systems

Just as the flap of a butterfly’s wing in the Pacific can supposedly lead to a storm in Chicago, risk management experts have long argued that complex, tightly coupled systems inevitably break down.

In his 1984 book Normal Accidents: Living with High-Risk Technologies, Charles Perrow, a visiting professor at Stanford University who specializes in the inherent risks of complex systems, argues that disasters in complex, tightly coupled systems are inevitable for three reasons:

First, people make mistakes. Second, big accidents almost always escalate from small incidents. Third, many disasters stem not from the technology itself but from organizational failure.

Nor can engineering redundancy eliminate the risk, he wrote, because redundancies add more complexity to the system, encourage workers to shirk responsibility, or create pressure to increase production speed.

In a survey of chief information security officers at 250 companies, McKinsey found that few believe their companies are prepared: the typical security executive grades his company a C or C- on six of the seven key measures institutions use to reduce the potential for cyberattacks. Only in incident response and testing did they give themselves a C+. And most of these executives told McKinsey they do not put their company’s most sensitive data in the cloud.