Risk management is a process, not a decision. In any situation involving an exposure to hazards (risk), there are a series of steps that should be taken to manage the process. These steps are taken on the basis of information, trust and agency. The goal of risk management is to protect the vulnerable (those most exposed to risks) while ensuring that the benefits (why the risk is taken) are delivered.
Section Two of the third part of my Post-COVID-19 Blueprint for a stable risk management strategy looks at seven key steps in a basic risk management process. It seeks to establish a clear alternative to the precautionary principle which has guided risk policy for much of the last two decades and which has failed catastrophically (as seen in our incapacity to manage the recent coronavirus pandemic risk in most Western countries).
Risk taking is something we do intuitively. Every step we take, every decision, every reaction is the result of a series of judgements in a risk management process. These decisions are built on trust: when I step onto a curb, I trust my motor skills; when I take a bite, I trust the food chain; when I apply the brakes, I trust the car. I take risks because I see benefits, and in order to secure those benefits, I need to manage the risks (ie, lower my exposure to any hazards). If I don’t wish to take a risk, then either the benefit is not worth the hazard exposure or trust and information are lacking.
I make it sound simple, ubiquitous and benign … because it is.
The Seven Key Steps of Risk Management
In a post-COVID-19 world, we need to build a more robust risk management process. This involves going back and re-learning how risks should be managed. There are seven key steps in the process.
1. Scenario building
The first step in any risk management process is to identify scenarios where alternatives and consequences are formulated. All possible events and their probabilities have to be mapped out; all possible worlds, with different decision structures and policy drivers, need to be conceived and fed into analysis systems. Recent gamification tools have been developed to help policy actors strategise over a wider variety of risk scenarios in order to anticipate regulatory actions and provide guidance.
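The mapping of events and probabilities can be sketched in a few lines. A minimal illustration in Python — the scenario names, probabilities and impact scores below are entirely hypothetical, not drawn from any real assessment:

```python
# Toy scenario map: each possible world carries an assumed probability
# and an assumed impact score (illustrative numbers only).
scenarios = {
    "contained outbreak": {"probability": 0.50, "impact": 1},
    "regional epidemic":  {"probability": 0.35, "impact": 10},
    "global pandemic":    {"probability": 0.15, "impact": 100},
}

def expected_impact(scenarios):
    """Probability-weighted impact across all mapped scenarios."""
    return sum(s["probability"] * s["impact"] for s in scenarios.values())

# Sanity check: the mapped scenarios should cover the full event space.
assert abs(sum(s["probability"] for s in scenarios.values()) - 1.0) < 1e-9

print(f"Expected impact: {expected_impact(scenarios):.1f}")
```

The point of the exercise is not the numbers but the discipline: a risk manager who has never written down the "global pandemic" row cannot claim to have considered it.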
Unfortunately, the EU consensus-building policy model restricts scenario preparations. Most policy actors (lobbyists) only bring one scenario to the table (their ideal world) and refuse to consider other options or alternatives. This leaves consensus-building risk managers in a challenging situation: trying to appease a disparate hodgepodge of perfect worlds – which is anything but scenario building. The release of the European Commission’s Farm to Fork strategy (which will further handicap EU agriculture) at a time when we are facing a global pandemic, an economic collapse, food shortages and an imminent famine is indicative of precautionary policy with no scenario building.
2. Data collection
Collecting and analysing data (risk assessment) is the first step in the active process: evaluating any and all information from a purely scientific perspective. If you can’t measure it, you can’t manage it, so clear data and scientific evidence are essential. In many hazard-exposure situations the data evolves, so risk assessment should be a continuous process. Inaccurate data can lead to bad decisions, but good data takes time, and the demand for immediate answers tends to put unreasonable pressure on risk assessors.
While objective data is desired, there are many factors that can influence trust in this process: the source of the data (eg, industry research, activists or citizen groups), collection methods, parameters, political bias, budgets and time-frames, to mention a few. Often the data does not deliver the results hoped for by those driving the risk assessment demands (eg, on suspected carcinogens and endocrine-disrupting chemicals), but that does not discourage biased actors from demanding further data collection. These groups have recently convinced the EU and some Member State institutions to start relying on evidence provided and evaluated by non-expert citizen groups.
3. Costs – Benefits – Consequences
The next step of the process is risk analysis, where the data from the risk assessment is fed into the scenarios, considering the value of the benefits and the consequences if restrictions are applied. Risk management is not simply taking the data and applying the decision. In democratic institutions, risk managers need to consider the will of the public, socio-economic concerns, cultural practices and historical issues. Of course, risk managers who ignore clear scientific data may have bigger problems in the long term.
There may be situations where the data indicates a risk but the policymaker needs to take into consideration other factors (social, economic, cultural) in determining the best means to manage the risk. If, for example, data indicates an elevated carcinogenic risk from mobile phone radiation exposure, the risk manager will need to consider questions of benefits and consequences before banning mobile technologies. One such consideration is whether there is a means to reduce said risk exposures.
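The weighing of benefits, costs and consequences can be caricatured as a comparison across candidate measures. A toy sketch, assuming entirely hypothetical figures (the measure names and numbers are illustrative only, not taken from any assessment):

```python
# Each option: (measure, benefit retained, cost of the measure,
# residual expected harm). All figures are made up for illustration.
options = [
    ("do nothing",         100.0,  0.0, 30.0),
    ("reduce exposure",     95.0,  5.0,  8.0),
    ("ban the technology",   0.0, 20.0,  0.0),
]

def net_value(option):
    """Benefit minus the cost of the measure minus the residual harm."""
    _, benefit, cost, harm = option
    return benefit - cost - harm

best = max(options, key=net_value)
print(best[0], net_value(best))
```

Note how the outright ban scores worst here: it zeroes the residual harm but forfeits all of the benefit — the arithmetic the precautionary approach never performs.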
Since the EU BSE (mad-cow) crisis in the 1990s, it has been common practice to separate the risk manager from the risk assessor. As Churchill once said: Scientists should be on tap, but not on top. During the UK mad-cow debacle though, decision-makers claimed they were acting on the best advice of the experts (and tried to pass the buck to them when things blew up). Like beef consumption in the UK, respect for government advisers never recovered. I wonder what our “blameless” risk managers will say about their COVID-19 expert advice when the economies tank and the bill comes in.
4. Apply Risk Reduction Measures
Risk management is first and foremost about protecting the most vulnerable in a population. In perhaps my strongest article, I wrote how the main objective of science (and thus humanity) since the time of Francis Bacon has been to protect the weak. The development of medicines, better shelters, safer more abundant food and energy are not to benefit the strong, healthy and wealthy but to protect those at risk. The strong survive but the vulnerable need risk managers to ensure their safety.
Whatever measures can be taken to reduce exposure to hazards need to be considered in the risk management process. These normal preventative steps are as simple as building a handrail beside a few steps or a speed-bump near a school. It is human nature to limit exposure to risks. When a baby learns to walk, it is normal for the hands to go up, not only for balance but to prevent painful falls. In the risk management process, once a hazard is identified, means to reduce exposure to protect the vulnerable must be implemented.
In the ten weeks prior to the COVID-19 coronavirus outbreak reaching the EU, risk managers should have built firewalls around nursing homes. Instead, four months into the pandemic and risk managers across most European countries were just learning how bad the situation with the most vulnerable was and how little PPE the nursing home sector even had.
5. Identify and Communicate Vulnerabilities
Where risk reduction measures cannot practically be implemented for everyone, the next step is to empower individuals to understand and manage risks themselves. Despite their benefits, knives can cut people, hot water burns and certain caustic disinfectants could injure people if not handled properly. Risk communications, as a specialisation, emerged in the 1990s (along with better tools in science communications). It is built on trust and agency: that individuals properly informed of hazards and of how to reduce their exposure will make rational decisions. The practice arose at a time of increased chemophobia, activist fear campaigns and poor regulatory reactions (from BSE to tainted blood to dioxin exposure in the food chain).
When researchers discovered slightly elevated cancer risks from the use of certain hormone replacement therapies, patients taking the medication to ease the effects of menopause and other complications consulted their doctors, who informed them of the degree of risk, of how they might lower that risk through lifestyle changes and of the available alternative therapies. There was some decline in HRT use due to media alarmism, but women have not had this option taken away from them. People must be empowered to make informed risk management decisions themselves. Too often though, with the zero-risk zealots, individuals are no longer trusted to manage the risks themselves.
Trust is essential here. Not only do citizens need to trust the authorities and experts, but these authorities also have to trust and empower their populations. Nanny states tend to be more precautionary and docilian.
6. Reduce Risk Exposures to As Low As Reasonably Achievable (ALARA)
Most people intuitively understand Paracelsus: All things are poison and nothing is without poison; only the dose makes that a thing is no poison. When you identify a hazard (a poison), the task is to limit the exposure (the dose). How low? As low as reasonably achievable (ALARA). The debate is over what is reasonable, concerning the exposure, concerning the benefits and concerning the consequences. The role of the regulator in this process is to set targets to lower exposures to identified hazards (at a reasonable pace and level).
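The "reasonable" in ALARA can be thought of as a marginal trade-off: keep cutting exposure while each further cut avoids more harm than it costs, and stop when it no longer does. A toy sketch with assumed numbers (this is an illustration of the logic, not a regulatory model):

```python
# Toy ALARA loop (hypothetical numbers): reduce exposure step by step
# while the marginal cost of the next cut stays below the harm it avoids.
def alara(exposure, harm_per_unit=10.0, step=0.1):
    cost_of_step = 0.2  # the first cuts are cheap; later ones get harder
    while exposure > 0:
        avoided_harm = step * harm_per_unit
        if cost_of_step >= avoided_harm:
            break  # a further reduction is no longer "reasonable"
        exposure = round(exposure - step, 10)
        cost_of_step *= 2  # each successive cut costs more than the last
    return exposure

print(alara(1.0))
```

The residual exposure that comes out of such a loop is not zero — and that is the point: ALARA drives exposures down to where further reduction would cost more than it protects, rather than demanding elimination.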
The achievements of the pesticide industry over the last 70 years are indicative of how ALARA can reduce exposure to hazardous chemicals to a very low level. The first goal of crop protection is to provide sufficient harvest to feed a population’s needs as insects, weeds and a variety of fungal diseases continuously threaten those yields. The second goal is to ensure performance while continually reducing the toxic exposure to the environment, the farmer and the consumer. In the 1950s and 60s the chemical loads were far higher than those used today. We are now at a point where the total level of carcinogens in the food residues (MRLs) of a year’s consumption of fruit and vegetables is less than the carcinogens consumed in a single cup of coffee. That strikes me as a reasonable achievement.
7.1 Continuously Apply ALARA
Product stewardship involves a continuous improvement of a product, substance or technology. Once a risk is being managed reasonably, the next objective is to improve the process to reduce exposures to risk even more. The role of the regulator here is to set targets for continuous improvements in safety levels.
Car safety is a good example. In the 1960s, it was reasonably achievable to introduce seat belts in cars to reduce traffic deaths, and they became mandatory in most countries by the 1980s. By the 1990s it was reasonable to add airbag devices to all cars to further protect drivers, and in the 2010s automatic distance-keeping and braking applications had become achievable. Exposure to the risk of death in a car is declining, but it can always be lowered, and the auto industry is continuously applying ALARA to improve safety. Although achievable, it would be unreasonable to ban all cars or reduce speed limits to 5 km/h.
7.2 Consider Applying Precaution
If ALARA fails to protect individuals and if they are unable to reasonably manage their exposures, then precaution should be applied. This should be done after analysing the loss of benefits, the consequences, the risks of alternatives being worse and whether there are relevant innovative technologies being developed (what is known as the Innovation Principle). There are many cases where taking precaution is justified and reasonable. If I am at the top of a very icy stairway and there is no handrail, then precaution is prudent.
Precaution is the last step in the risk management process, when regulators have failed to prevent exposure to hazards and when benefits would then have to be given up to ensure safety. A rational use of precaution entails that a product, process or activity should be removed only when suitable alternatives are in place, equally tested and not causing a significant disruption to socio-economic practices. The phasing out of leaded fuel in the 1970s and 80s was a good example.
Activists like David Gee, when he was at the European Environment Agency, have campaigned hard over the last two decades to institutionalise precaution and the hazard-based approach as an alternative to risk management. When the European Commission banned certain neonicotinoid insecticides under an activist “Save the Bees” campaign, not only were the other steps in the risk management process skipped; farmers were left only with older, more toxic alternatives and have since abandoned many pollen-rich crops. The bees, the environment, farmers and consumers have all suffered under this misuse of precaution.
But precaution is not risk management; it is what we must do when the risk management process fails to protect the vulnerable and prevent hazardous exposure levels. Precaution attempted to circumvent the risk management process with a series of zero-risk demands to be met: if not safe, if not certain, then not possible. It became a simple, politically expedient but highly irresponsible tool of convenience for cowardly policymakers.
An ounce of prevention
Several discussions on my social media pages revealed quite a few people equate prevention with precaution: that the only way to prevent and protect is by applying the precautionary principle. Nothing could be further from the truth. Precaution is applied not as a preventative measure but when there is no trust in our capacity to prevent harm. For example, if an effective cleaning chemical could harm people, we can manage the risk by entrusting individuals (with the label: Keep out of reach of children) or providing a child-proof cap to prevent accidental exposure. If there is no trust that the risk could be prevented, then the chemical would be banned.
But if you demand zero risk (that we must be certain that an action or substance is 100% safe or we stop, ban or disallow it), there is no risk management process, there is nothing to prevent as any hazard has been eliminated. This ideal zero-risk world though is pure fiction.
Did the demand for zero-risk precaution come about because of the lack of trust in our risk managers, or did it cement this distrust? I argued that precaution as the reversal of the burden of proof (guilty until proven innocent) undermines public trust in science and technology.
As a student of risk since the 1990s, I had assumed it was clear that the risk management process was for the purpose of prevention, of protecting the vulnerable from hazards while ensuring access to social goods and benefits. Precaution would be applied when the process failed to prevent harm and the benefits would have to be given up. We wear a sweater on a cold day if we want to prevent catching a cold – a normal risk management decision. We don’t go outside if we don’t trust our sweater or the weather forecast.
What two decades of precaution has done is redefine our prevention narrative by bringing in the demand for zero risk (an emotional demand for a high level of certainty for public safety). Hazard and risk are equated since zero risk entails zero exposure to hazards. At the policy level, a basic expectation is that all risk analyses will lead to a product being banned, a practice stopped or a technology blocked. This is madness.
Reducing exposure to hazards to a reasonable level (risk management) may not deliver zero-risk safety – but this is normal behaviour, and acceptable risks are taken in a ubiquitous manner every day (from going outside, to driving a car, to climbing a flight of stairs) … until trust is lost and the demand for zero risk is imposed. The precautionary principle has bastardised risk management and left populations naive, vulnerable and unprepared.
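The conflation of hazard and risk can be made concrete with the textbook relation risk = hazard × exposure — a standard toy model, not a formula from this article, with hypothetical numbers:

```python
# Standard toy model: risk scales with exposure to the hazard.
def risk(hazard_severity, exposure):
    return hazard_severity * exposure

# A severe hazard with near-zero exposure is a small risk...
print(risk(hazard_severity=100.0, exposure=0.001))
# ...while literal zero risk demands zero exposure, whatever the
# benefit that exposure would have delivered.
print(risk(hazard_severity=100.0, exposure=0.0))
```

In this framing, hazard-based regulation acts on the first factor alone (banning anything severe), while risk management works on the second, lowering exposure until the product is acceptably small.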
Risk Management and COVID-19
Too many readers have scoffed at my criticism of the precautionary COVID-19 lockdowns, saying that was the only thing that could have been done. Equating prevention with precaution, these docilians could not have imagined any other way to manage the coronavirus risk. This is indicative of the level of risk ignorance after two decades of reliance on the uncertainty management tool of the precautionary principle.
Over the last two decades, precaution (to stop, ban or not allow a substance or action if it is not certain to be safe) usurped the risk management process with an easy, expedient eliminatory checklist for a world demanding zero risk. The COVID-19 coronavirus outbreak showed the irresponsibility of relying only on this principle. In the ten weeks between Wuhan and the European lockdowns, not one of these seven risk management steps was seriously implemented by any European risk management authority.
The only thing they could do, when their poorly-prepared healthcare systems were on the brink of being overwhelmed, was to apply a precautionary series of lockdown measures with no analysis of the consequences. Even four months in, officials in countries from France to Belgium to the UK, Sweden and Croatia admitted they still could not offer the most basic protection to the most vulnerable of their populations – those in nursing homes where around half of all deaths occurred.
What should have been done to manage the COVID-19 coronavirus risk? Here is how the seven key steps of risk management should have been applied in the first months of 2020.
Step 1: Scenario Building
In January, when there was ample evidence that the COVID-19 coronavirus was capable of being transmitted between humans, when certain risk factors were emerging and when the Chinese authorities were battling to contain the spread, European authorities needed to draw up scenarios. Those scenarios would have identified the most vulnerable groups, the elderly and those with pre-existing medical conditions, and drawn up means to protect nursing homes and high-risk individuals. This was well-known in January based on data from Wuhan, and there was more than enough time to introduce preventative measures. In reality, no one thought it would hit so hard in the West because no one was building scenarios.
Step 2: Risk Assessment
Data needed to be collected to examine the cost-benefit to society of certain actions. How much PPE was available? How could hospital beds be increased? How could outbreaks be managed without disrupting economies or social benefits? How much would shutting the schools affect transmission compared to other alternatives? In reality, the UK “discovered”, in mid-March, that its healthcare system only had 5000 ventilators. They should have known this in January.
Step 3: Costs – Benefits – Consequences
With the scenarios drawn up and more data coming in, by early February, before the first cases had spread in the European Union, a clear cost-benefit risk analysis of all actions should have been formulated. Could the lives of the most vulnerable be saved without locking down entire populations? Could the virus spread be managed through the healthy population while the elderly were better protected? Could populations be informed and trusted to protect themselves? As the Chinese were building new hospitals in Wuhan, shouldn’t healthcare services across the EU have been tasked with crisis management provisions? In reality, no alternatives to protect the most vulnerable were presented until it was well and truly too late.
Step 4: Risk-Reduction Measures
By the end of February, as cases in Europe were increasingly emerging, what further risk-reduction measures could have been introduced? With clear knowledge of the crisis in Kirkland, Seattle and the Diamond Princess debacle in Yokohama, the nursing homes and the vulnerable should have been locked down to prevent a surge in deaths among the most at risk. From the Korean experience, testing tools needed to be built up so that any local hotspots could be immediately tracked and traced. In reality, the only risk-reduction measures applied in February were to monitor travellers from certain regions of China.
Step 5: Risk Communications
By early March it became clear that cases in Europe were spreading and that the virus could not be contained. The risks needed to be clearly communicated to populations, with measures available to empower individuals: not only social distancing and reduced travel, but means to protect (isolate) vulnerable family members. People also needed to know how to take care of themselves and strengthen their immunity to better survive the virus should they become infected. In reality, Europeans were reassured that they would not get sick; all they needed to do was wash their hands (… with soap).
Step 6: ALARA
As the virus became a pandemic in March, harder decisions needed to be taken, including the possibility of large-scale societal lockdowns. Given the dangerous consequences of such an unknown action, other alternatives needed to be considered to ensure public safety while limiting the consequences to as low as reasonably achievable. Could more lives have been saved without destroying economies, food supplies and general public wellness? In reality, by March 15 there was a surge of European countries locking down their citizens … not because that was the best risk-reduction measure but because their neighbours were doing it.
Step 7: More ALARA or Precaution
With constant monitoring by risk managers and continuous risk-reduction measures to prevent spikes in coronavirus transmission rates, could certain sectors of the economy and civil society be reopened? If not, precautionary measures of large-scale societal and economic lockdowns would then, and only then, have to be implemented. The Swedish model seems to reflect this thinking (except for the authorities’ admitted failure in protecting the elderly in care). In reality, most European countries went straight to the precautionary lockdowns with no attempts at risk management.
As a postscript, the most insidious element of the precautionary principle is how it protects the elite lifestyles of the privileged at a terrible cost to the poor who rely more on the benefits of science and technology. Having no regard for consequences implies having no regard for the people who will suffer. So while these zealots can afford organic food and elevated energy costs, those of lower means go without. When the wealthy felt threatened by a coronavirus, they chose to lock themselves away in their gardens and work from home. I’m sure they’ll tell you how much they have suffered for the common good. But what about those in less stately homes, those whose income is on the streets, those who have to choose between eating and a virus? The precautionary principle does not protect the most vulnerable … it’s an oblivious policy tool for the privileged.
I firmly believe if we had had a proper risk management process in place, if we had had a population that understood risks (rather than activist docilians demanding zero risk) and if we had had a leadership that understood the need to act early and decisively (rather than get caught up in endless consensus building processes), the precautionary lockdowns could have been avoided or their severity lessened, so many lives could have been saved and the ensuing economic collapse, famines and social malaise could have been spared.
We need to rebuild risk management in the aftermath of this horrible policy failure, moving from a precaution-based zero-risk approach to one where risks are managed in a reasonable manner and people are empowered with the tools to protect themselves. It is to my post-COVID-19 risk management blueprint that this series now turns for its conclusion.