One of the items he touched on is accountability and performance measurement. What Josh wrote about performance measurement and metrics is exactly part of the problem. We really have not spent much time determining the outcomes of what has been accomplished with the grant funding that has been provided. The application process addresses outcomes in relation to risks and strategies, but after implementation the outcomes are not being captured and reported. I'm not talking about "outputs," which might be the number of people trained, exercises conducted, or items of equipment purchased. See my thoughts on this issue below.
The Emergency Management Performance Measurement Challenge: The traditional way to measure grant outcomes is for jurisdictions to report on the number of plans completed, students trained, exercises conducted, and items of equipment purchased. While this method accounts for the financial resources provided, the measurements are output-oriented, not outcome-directed.
The challenge in establishing outcome-based measurements is that events in which emergency management employs grant-funded resources are infrequent, and they are typically not large-scale disasters. Additionally, the type and scope of disasters vary widely. Some jurisdictions and emergency managers could go many years, perhaps an entire career, before they experience an event that may or may not be applicable to a measurement they have established. In the case of a catastrophic disaster, the measurement might not be valid at all because the jurisdiction's resources are totally overwhelmed.
The International Association of Emergency Managers (IAEM) and the Urban Area Security Initiative (UASI) Association have both attempted to provide accountability reports to Congress. In both instances they have provided summary reports on how the funding has been utilized, either by funding category or by offering examples of the equipment that has been purchased and how it is being employed for everyday emergencies or specific disasters. These reports remain output-based at best and do not establish defined existing or future outcomes.
Performance Management: Elements of performance measurement have been creeping into state and local government for the last ten years. Many agencies that have been forced to establish performance measures approach the task as a compliance duty placed upon them. In those instances, data is collected and reported in accordance with the procedures they have been given. Performance measurement's real goal, however, is to lead to performance management: taking the data collected and using it to improve the agency's ability to deliver products or services more effectively or efficiently.
The following is one example of how performance management has actually occurred in an emergency management agency. In order to better understand how staff were using their available time, a time-tracking system was developed that had staff log their time in 30-minute intervals and categorize it by function or product. After a year of use, analysis of the data revealed that 40% of staff time was being spent administering Federal Homeland Security Grant funds. This data was useful for several purposes. First, it was provided as feedback to Federal authorities on the burden of administering grants. Second, the agency hired additional staff dedicated to grant administration, lessening the administrative burden on program personnel and allowing them to focus their time and effort on higher-level tasks, such as coordinating and implementing programmatic elements that would provide greater disaster resiliency for the regional community.
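To make the mechanics concrete, here is a minimal sketch (in Python, using hypothetical staff names, categories, and entries, since the actual system is not described in detail) of the kind of analysis such a time-tracking system supports: summing logged 30-minute intervals by category and reporting each category's share of total staff time.

```python
from collections import defaultdict

# Hypothetical time-tracking entries: (staff_member, category, hours).
# Each entry represents one 30-minute (0.5 hour) logged interval,
# categorized by function or product.
entries = [
    ("analyst_1", "grant_administration", 0.5),
    ("analyst_1", "exercise_planning", 0.5),
    ("analyst_2", "grant_administration", 0.5),
    ("planner_1", "plan_development", 0.5),
    ("planner_1", "grant_administration", 0.5),
]

# Total logged hours per category across all staff.
totals = defaultdict(float)
for _staff, category, hours in entries:
    totals[category] += hours

grand_total = sum(totals.values())

# Share of staff time by category, highest first.
for category, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {hours / grand_total:.0%}")
```

Run over a year of real entries, this same aggregation is what surfaces a figure like the 40% grant-administration share cited above.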
Instead of seeing performance measurement as a set of fiery hoops to jump through, we need to embrace a culture of measuring what we are doing in order to improve our performance.