To look more deeply at this challenge, Emergency Management asked Richard Gelb, performance management lead at the King County, Wash., Department of Natural Resources, a series of questions about the basics of performance measurement and the difficulty of quantifying results for individual programs, regional efforts and catastrophic planning.
Gelb developed a disaster training and education program, coordinated King County’s Regional Disaster Plan, and served as EOC supervisor during the 1999 Seattle World Trade Organization riots. He contributed as a strategic adviser for Seattle’s Office of Sustainability and Environment and as sustainable building lead for Seattle Parks.
He provided written answers to our questions.
Question: What is the difference between performance measurement and performance management?
When performance measurement is used to improve how work is being accomplished and how an opportunity is being optimized, then measurement is part of the performance management process.
While establishing, tracking and making use of performance measures is a foundational aspect of performance management, performance can be measured with no intent of, or contribution toward, influencing management. Such an investment in performance measurement (with no management value) may stem from an external requirement for accountability or simple reporting. Measurement that doesn’t inform management decisions may be:
- one-offs/not ongoing, only retrospective in nature;
- not configured to illuminate the effectiveness of actions; and/or
- not structured to inform adjustments in strategy or approach within the purview of the actors.
What is it about emergency management that makes it difficult to develop and implement meaningful measures?
Whereas many areas of government seek to maintain or enhance community outcomes that lend themselves to monitoring and tracking (water quality, crime rates, public health), emergency management seeks to advance preparedness levels, operational response capacities and community resilience, which are difficult to measure. Additionally, the field is relatively young, so national benchmarking tools don’t yet exist that would provide a jurisdiction with a common yardstick against which meaningful progress can be measured.
A related confounding factor in defining broadly applicable performance measures for emergency managers is that risks and vulnerabilities differ significantly by region and jurisdictional complexity. Since relevant performance measures and targets are highly context-sensitive, the development and adoption of performance tools and approaches are more difficult than in many other disciplines of government.
Recently the International Association of Emergency Managers provided a report to Congress to account for how Emergency Management Performance Grant funds have benefited the nation. The report recounted the number of plans updated, exercises conducted and people trained. Is there a better way to report on outcomes versus the outputs of the funding?
Reporting on outputs does help establish accountability for investments made, but accounting for actions is not as valuable as tracking and reporting on outcomes — the results that have accrued from the actions. Until the emergency management field raises its game by more clearly defining the intended outcomes of program actions and outputs, the measurement of outputs will remain the logical fallback position.
The significant challenge for this field is to assess, determine and build measurement capacity for the key functional and community characteristics we seek to maintain and enhance. Only with an understanding of baseline conditions for these functions and community attributes can we build mechanisms to facilitate national learning about the degree to which various actions (outputs) correlate to improvements in the intended outcomes.
To develop a national evidence base on emergency management practice, a general sequence of steps might include:
- Identifying and defining the outcomes that deployment of emergency management resources seek to influence.
- Determining baseline levels for these conditions or outcomes.
- Assessing and identifying the effectiveness of actions (the degree to which outputs move the needle on the intended outcomes).
- Sharing and validating emerging findings, and developing capacity for national learning about the relationship between outputs and outcomes.
- Building and enhancing a culture of continuous improvement, organizational learning and ongoing adjustment in actions — based on emerging findings about efficacy of outputs and actions in improving functions and community conditions.
How are expectations for improving disaster readiness and responsiveness typically measured for metropolitan areas?
Most performance measures for emergency management functions focus on output delivery: plans completed or updated, exercises conducted, corrective actions addressed and training sessions deployed. Sometimes third-party validation comes into play, such as ratings via the National Flood Insurance Program.
Why and in what way are traditional performance management tools, techniques and methods difficult to apply in the context of regional catastrophic event prevention, mitigation, response and recovery?
Increasingly governments are recognizing the need for two levels of measurement:
1. Measures of the community conditions they seek to improve: health, environmental quality, economic prosperity.
2. Measures of actions taken/under way, which are assumed to improve community conditions.
Traditional performance management tools can be difficult to apply in advancing regional catastrophic preparedness because:
- There isn’t (yet) a broadly shared agreement on how to characterize and measure regional catastrophic preparedness levels at the community or regional scale.
- What is most readily discoverable are the “determinants of readiness” (those actions upstream of preparedness) that drive mitigation, responsiveness and resiliency.
- Myriad organizations in the public, private and nonprofit sectors are the actors who contribute to regional preparedness.
- This distributed set of actors is difficult to survey, inventory, assess and measure.
What unique considerations come into play when addressing regional-scale collaborations and partnerships?
Often regional collaborations and partnerships struggle with and/or omit the critical step of attributing responsibility to the individual organizations that have a role in advancing the collaboration. In other words, the process yields an articulation of regional improvements, outcomes, strategies or a general set of actions, without defining:
- who will do what by when;
- how these actions will be tracked and reported;
- what happens if these steps are not taken; and
- when and how adjustments will occur, based on findings about the degree to which actions are affecting the desired regional outcome.
Why is it important to identify and define “determinants of readiness,” and what promising tools are emerging to help with this critical need?
Because catastrophic events are episodic and unique rather than predictable and repetitive, measuring progress needs to focus on determinants of readiness. Simply put, we don’t have the regularity of occurrence to use events themselves as the yardstick of readiness.
Promising tools for helping track, measure and assess determinants of readiness include resident and business surveys and behavioral surveys, such as the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System.
What is the importance, the challenge and the promise in looking at catastrophic event readiness and resilience from the whole community or cross-sector perspective?
Regional catastrophic preparedness is too complex and distributed to expect that the public sector can unilaterally provide this to any community or region. Because of the criticality of engaging private (residents and businesses) and civic sector (nonprofit and community-based) participation, the measurement construct for effective performance management needs to be framed to include the key actors in these sectors.
Therefore, a necessary (and often overlooked) sequence is to:
- identify key actors in each sector;
- determine the roles of key actors by sector and actor type;
- advance disaster preparedness literacy; and
- elicit commitment for outcomes and contributions that are most germane to their role in whole community readiness for catastrophic events.