Question about ASRS

I have one more question about the Aviation Safety Reporting System, or ASRS. I am not a pilot. I am researching an article about applying aviation safety principles to medicine. In medicine we have nothing like ASRS, so I don't have any insight as to how well this works other than what I have read. If anyone has time to address any of the following questions about ASRS, or organizational feedback in aviation, your insights would be most helpful.

1. I understand that ASRS reports are used to alert pilots about safety hazards in a monthly newsletter, "Callback". However, are these reports also used to modify safety regulations in aviation? If so, how does that work? Are new regulations recommended by NASA? Or do they come from the FAA or the individual carriers? Do pilots have a chance to give input on any new regulations through some mechanism such as public comment, or through actions of your union / other representatives?

2. One of the most common ways high hazard industries fail is when "production pressure" is not limited and motivations for profit / efficiency are used to discount safety concerns. How do you keep this from happening in aviation?

3. Aviation is both very safe and very efficient. Do pilots ever give feedback on ideas to improve efficiency? Or does efficiency primarily come from competition between carriers?

I am particularly interested in how different types of organizational feedback systems function, and how we can use these mechanisms in health care. Feedback is one of the key differences between chaotic and adaptive systems. Feedback seems to be ubiquitous in high hazard industries that are also very safe. However, a feedback mechanism is also used in manufacturing (Toyota production system) to improve both quality and efficiency / cost. As far as I can tell, this "dual use" does not occur in high hazard industries.

Any insights, including disagreements, are welcome.
Reply
Quote: 1. I understand that ASRS reports are used to alert pilots about safety hazards in a monthly newsletter, "Callback". However, are these reports also used to modify safety regulations in aviation? If so, how does that work? Are new regulations recommended by NASA? Or do they come from the FAA or the individual carriers? Do pilots have a chance to give input on any new regulations through some mechanism such as public comment, or through actions of your union / other representatives?

The National Aeronautics and Space Administration (and its predecessor, NACA) has a long history of doing research but no rulemaking authority. It can be tasked to study an issue (for example, the effect of ice on a wing) and report to the FAA. The NTSB studies accidents and keeps a list of suggested rule changes but, again, has no rulemaking authority. The FAA is the sole rulemaker. An issue comes up and the cry goes out, "We need a rule!" The FAA staff generates a proposed rule and circulates it as a Notice of Proposed Rulemaking (NPRM). Looking at the FAA NPRM website, there are rules open right now covering noise limits for tilt rotor aircraft, dispatcher qualifications, pilots updating navigation databases, and safety enhancements to commercial airports, among other things. There will be a comment period, usually a few months, during which any party, whether an individual, a union, or an airline, can submit comments. At the close of the comment period, the FAA staff will digest the comments and either publish a final rule that goes into effect on some date or submit a revised NPRM for a new comment period.
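If it helps to see the loop abstractly, here is a toy sketch of that cycle in Python. Nothing in it reflects any real FAA system or data format; it just captures the comment-digest-revise loop described above:

    def rulemaking_cycle(draft, comment_period, faa_digest):
        """Toy model of the NPRM loop: comment, digest, finalize or revise.

        draft          -- text of the proposed rule
        comment_period -- callable gathering comments from any party
                          (individuals, unions, airlines)
        faa_digest     -- callable returning a revised draft, or None when
                          the staff decides to publish a final rule
        """
        while True:
            comments = comment_period(draft)       # usually open a few months
            revised = faa_digest(draft, comments)  # FAA staff digest the input
            if revised is None:
                return draft                       # published as the final rule
            draft = revised                        # new NPRM, new comment period

    # Hypothetical example: a rule finalized after one revision.
    final = rulemaking_cycle(
        "proposed: noise limits for tilt rotor aircraft",
        comment_period=lambda d: ["too strict", "not strict enough"],
        faa_digest=lambda d, c: None if "revised" in d else "revised: " + d,
    )
    print(final)  # revised: proposed: noise limits for tilt rotor aircraft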

2. One of the most common ways high hazard industries fail is when "production pressure" is not limited and motivations for profit / efficiency are used to discount safety concerns. How do you keep this from happening in aviation?

Some people would say we don't. The industries are very similar: very technical, lots of money involved, and with life-and-death results. One comparison I've heard: medicine gets to bury its mistakes; we end up on the 6 o'clock news.

For a long time the FAA was tasked with the dual mandate of "promoting" and "regulating" aviation. It was pointed out many times that there was an inherent conflict between the two duties, and after the ValuJet crash in 1996 the "promoting" job was written out of the FAA's job description.

3. Aviation is both very safe and very efficient. Do pilots ever give feedback on ideas to improve efficiency? Or does efficiency primarily come from competition between carriers?

Some airlines have "suggestion boxes" but most pilots probably wonder what happens to the ideas submitted. In the end, it is the bean counters who normally win. The problem sometimes becomes: "We're saving 10 dollars a day doing X." "Yeah, but it's costing us 100 dollars a week to do X."
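Worked out, those made-up numbers show exactly how a visible saving can hide a net loss:

    daily_saving = 10     # "We're saving 10 dollars a day doing X."
    weekly_cost = 100     # "Yeah, but it's costing us 100 dollars a week."
    net_per_week = 7 * daily_saving - weekly_cost
    print(net_per_week)   # -30: the daily saving is real, the weekly net is a loss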

I am particularly interested in how different types of organizational feedback systems function, and how we can use these mechanisms in health care. Feedback is one of the key differences between chaotic and adaptive systems. Feedback seems to be ubiquitous in high hazard industries that are also very safe. However, a feedback mechanism is also used in manufacturing (Toyota production system) to improve both quality and efficiency / cost. As far as I can tell, this "dual use" does not occur in high hazard industries.
Besides the ASRP, many airlines have an Aviation Safety Action Program (ASAP) in house. While ASAP reports normally get used in the same "get out of jail free" sense as the ASRP, the idea behind ASAP is that crews report issues before they cause accidents/incidents. The airline monitors the reports and, if it gets multiple reports about the same problem, takes some action. If, after landing at airport ABC, several crews reported having to taxi down a taxiway without side lighting at night, the airline might talk with the airport about changing the route the planes follow. Or, at a minimum, it could issue a bulletin advising crews to be alert on that route.
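The monitoring side of that is simple enough to sketch in a few lines of Python. The report fields and the threshold below are invented for illustration; no airline's actual ASAP software is implied:

    from collections import Counter

    REVIEW_THRESHOLD = 3  # hypothetical: several crews report the same thing

    def issues_needing_action(reports):
        """reports: iterable of (airport, issue) tuples from crew write-ups."""
        counts = Counter(reports)
        return [issue for issue, n in counts.items() if n >= REVIEW_THRESHOLD]

    reports = [
        ("ABC", "taxiway side lighting out at night"),
        ("ABC", "taxiway side lighting out at night"),
        ("XYZ", "confusing hold-short markings"),
        ("ABC", "taxiway side lighting out at night"),
    ]
    print(issues_needing_action(reports))
    # [('ABC', 'taxiway side lighting out at night')] -> talk to the airport,
    # or at a minimum put out a bulletin for crews using that route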

Some large airlines also have a Flight Operations Quality Assurance (FOQA) program. These are controversial in the US. Where ASAP and ASRP are voluntary reports by the pilot, FOQA uses the data from the aircraft flight data recorders to plot trends. The FDR "black box" on modern planes records over 90 data parameters. For example, if maintenance reported multiple flap track problems, the airline could pull the FDR data and plot the speed at which the flaps were extended. (On larger aircraft there are maximum airspeeds at which different angles of flaps can be extended.) If the flaps were being extended at the maximum speed all the time, the airline could revise its training to remind crews that the flap tracks will have a longer service life if the flaps are extended at lower speeds. The pilot concern is that if the data is collected it won't be de-identified, and the company will track down and punish individuals: "You flew ship 123 on the 23rd into Miami and extended flaps to 20 degrees at 241 knots! You should know the limit is 240!"
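A bare-bones version of that trend check, again in Python. The placard limit, field names, and records are all made up; the point is that the output is a fleet-wide count with no tail numbers, dates, or crew names, which is the de-identification pilots insist on:

    FLAPS_20_LIMIT_KT = 240  # hypothetical placard limit for flaps 20

    def count_near_limit_extensions(fdr_records, margin_kt=5):
        """fdr_records: dicts with 'flap_setting' and 'airspeed_kt' taken
        from de-identified FDR snapshots. Returns a count only: nothing
        that could identify a ship, a date, or a crew."""
        return sum(1 for r in fdr_records
                   if r["flap_setting"] == 20
                   and r["airspeed_kt"] >= FLAPS_20_LIMIT_KT - margin_kt)

    records = [
        {"flap_setting": 20, "airspeed_kt": 239},
        {"flap_setting": 20, "airspeed_kt": 225},
        {"flap_setting": 20, "airspeed_kt": 241},
    ]
    print(count_near_limit_extensions(records))  # 2 -> maybe revise training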

In the end, information is power. Life and careers are too short to make the same mistakes over and over. If you're going to screw up, at least do it in some new and unique way.
Reply
Thanks, Twin Wasp:

That is interesting. You have one national rule maker (the FAA). Dual use for the "rule maker" was tried and withdrawn. And you have both national and local (inside the carrier) feedback systems.

We have multiple "rule makers". This results in a wide range of rule quality and poor compliance with all rules, including the rules that no one disputes.

Adaptive systems theory says that feedback is necessary to "refine the rules". Since we have multiple rule makers and no infrastructure for either national or local (inside a hospital) feedback, it is clearer why our rules don't work. It is interesting to me how national and local feedback seem to be different. The world's expert on national reporting (NASA) seems to have failed internally at local feedback before the Challenger accident. I tried to ask them whether they had an internal feedback system, and how they modified it after Challenger, but they did not respond.

One thing that may be a little different in medicine is that we know a significant percentage of our literature for what is "evidence based" will be refuted in the future. Thus, a mechanism for improving, or even withdrawing, old rules would also be important.
Reply
Re: NASA and the Challenger accident. They were probably closer to the old FAA, with its dual agenda of exploring space, an inherently dangerous undertaking, and safety. Military aviation is the same way: one body that both wants to complete a mission and bring the plane and crew back safely. Which is more important today? I often wondered how the Air Force would fight an all-out war with all the rules they have. But if you throw out the rules only during wartime, everyone is in a new game they haven't played before. When I was instructing, I wanted students to at least see the edge of the abyss so they knew how they got there and, more importantly, how to get back. It's why accident reports are often part of training: to see how that crew got into that situation.

Many training programs talk about a "Swiss Cheese" model of accident prevention. The idea is that most accidents are not caused by one factor but by a bunch of bad things happening together. The anti-skid on the plane isn't working today. No big deal, there are ways to account for that. Oh, by the way, the runway is wet from a thunderstorm. Uhm. And there is reported windshear on departure. Now if an engine were to fail on departure it could get real interesting real fast. The idea is that a crew that has been exposed to reports of other crews in situations like this will decide another cup of coffee is in order, as the storm will pass in 30 minutes or so.
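The arithmetic behind the model is worth a glance. With invented probabilities for the example above, an accident needs every hole to line up at once:

    from math import prod

    # Made-up odds that each bad thing is present on a given departure:
    holes = {
        "anti-skid inoperative": 0.01,
        "runway wet":            0.10,
        "windshear reported":    0.05,
        "engine failure":        0.001,
    }
    print(prod(holes.values()))  # ~5e-08: all four holes must line up at once

Take away one layer (set its probability to 1.0) and the combined risk can jump by orders of magnitude, which is exactly why the crew waits out the storm.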

And we also have changes in "the literature." Smaller planes and older planes have rubber bladders on the leading edge of the wing to deal with ice accumulation while flying in clouds. For 60 years the guidance was to wait till there was 1/8 to 1/4 of an inch of ice on the wing before activating the system. There were stories about thin ice molding itself to the boots if they were activated too early, leaving the boots to flex in a void between the wing and the ice. Then, as part of the fallout of an accident in the early 90s, NASA did a lot of research into in-flight icing. They came out and said to run the "boots" at the first sign of icing. I haven't flown a plane with boots since that report came out, but I spent all of the 80s in planes with boots. If I were to end up in an airplane with boots, I'm not sure which model I'd follow. I believe it was Chomsky who said effective communication brings about a change in behavior; in this case I'm not sure one report will change ten years' experience.

Icing in flight and on the ground, windshear, wake turbulence, controlled flight into terrain, and mid-air collisions are all accident causes that have been studied in the last 40 years. By informing pilots about each issue and developing procedures and technology to deal with it, the industry has reduced the accidents these causes produce. Unfortunately, it has taken a string of accidents to identify each issue.
Reply
Your thoughts about "dual use" causing the Challenger accident are interesting. I wonder if this is one of the primary risk factors in situations where a high hazard industry fails to limit "production pressure" and safety concerns become increasingly subservient to efficiency.

Unfortunately, some element of "dual use" is the only proposal currently on the agenda in health care. Because we cannot show a significant improvement in patient safety in the last 12 years, the agency responsible for safety and quality research, the AHRQ, is lobbying for increased regulatory authority. That would be like combining NASA and the FAA. Also, there is nothing on the agenda at the Federal level for feedback (no ASRP). Some specialties are starting their own reporting systems, but they are spotty and primarily set up for a "Callback" style newsletter, not as a means to improve regulation.
Finally, although various "rewards" are being tried to encourage "rule compliance", liability mitigation is not one of them. The "get out of jail" incentive seems to be critical.

I am considering discussing Reason's "Swiss Cheese" model early in this article because, from what I understand of the model, the key to transforming an unsafe system into a safe one is to "close the holes" in the Swiss cheese and to build redundancy. Closing the holes requires feedback, and feedback requires infrastructure.

So it seems that to transform health care based on lessons from aviation, we need to avoid the "dual use trap", have feedback at the national and local levels, consolidate rule making, and have a reward system that mitigates liability. Feel free to disagree if I missed something.
Reply
You need feedback at the individual level. You might want to look at the cockpit voice recorder transcript for United 173 in Portland in 1978.
Close-Up: United Airlines Flight 173
They had a problem with the landing gear before they landed and circled around a while to sort things out. Nothing wrong with that. They left Denver with an extra hour's worth of fuel, more than the regulations required. However, the Captain lost track of time; they circled around checking things and getting ready, and ran out of fuel within a couple of minutes of when the flight plan said they would, but about 8 miles short of the airport. The Flight Engineer on a DC-8 is the only one with fuel gauges in front of him. At one point the Captain told the F/E to plan on landing in another 10 or 15 minutes, and the F/E said that would run them low. What he should have done at that point was grab the Captain's shoulder, point to the fuel gauges, and say, "WE'VE GOT 5 MINUTES OF FUEL LEFT. WE NEED TO LAND NOW!" But that's not the way things were done in the 70s. Unfortunately the F/E, the only cockpit crew member who seemed to know what was going on, was killed in the crash.

This accident, more than any other single thing, brought about the concept of Cockpit Resource Management. The old school was that the Captain was right and you didn't question the Captain. The new idea, which took a while to be accepted, was that all the crew members bring some knowledge and experience with them. Everyone is encouraged to speak up and ask questions. My pre-departure briefings end with, "Any questions, comments or anything to add?" The idea is to keep everyone in the loop. I always liked students who talked to themselves; then I knew what they were thinking.
Reply
Here's the short version of the NTSB report. The last line says it all.


OPERATOR - UNITED AIR LINES, INC.
DEPARTURE POINT - NEW YORK, NY
INTENDED DESTINATION - PORTLAND, OR
LAST ENROUTE STOP - DENVER, CO
TYPE OF ACCIDENT / PHASE OF OPERATION
ENGINE FAILURE OR MALFUNCTION / IN FLIGHT: HOLDING
COLLIDED WITH: TREES / LANDING: ROLL
PROBABLE CAUSE(S)
PILOT IN COMMAND - MISMANAGEMENT OF FUEL
PILOT IN COMMAND - DIVERTED ATTENTION FROM OPERATION OF AIRCRAFT
MISCELLANEOUS ACTS,CONDITIONS - INATTENTIVE TO FUEL SUPPLY
MISCELLANEOUS ACTS,CONDITIONS - FUEL EXHAUSTION
FACTOR(S)
AIRFRAME - LANDING GEAR: NORMAL RETRACTION/EXTENSION ASSEMBLY
AIRFRAME - LANDING GEAR: LANDING GEAR WARNING AND INDICATING COMPONENTS
COPILOT - MISMANAGEMENT OF FUEL
PERSONNEL - FLIGHT ENGINEER: OTHER
COMPLETE POWER LOSS - COMPLETE ENGINE FAILURE/FLAMEOUT-4 ENGINES
EMERGENCY CIRCUMSTANCES - FORCED LANDING OFF AIRPORT ON LAND
REMARKS- OTR CRW MEMBERS FAILED TO CONVEY CONCERN ABOUT FUEL EXH TO PIC UNTIL ACDNT WAS INEVITABLE.
Reply
Applying ASAP or ASRS to medicine is exactly what the country needs. It would be a highly effective way to reduce medical errors over the long term. What makes the program so effective in aviation is the rate at which reports are made: if someone thinks they made an error, a report is filed almost 100% of the time. Why? Because the alternative is a suspension if the error is discovered. The trick in medicine is to increase the penalties for errors at the same time some sort of reporting system is installed. Doctors and nurses need to know that they will lose a month's pay for mistakes that aren't reported; otherwise they will never fill out a report.
I know the current culture is to sweep things under the rug, so changing that attitude is going to be an uphill climb.
Reply
So that is the crash that gave us CRM. The example illustrates the concepts in a powerful way.

It makes me remember a flight I had from San Francisco to Dulles where we were circling Dulles because of fog. We had two missed approaches at Dulles before diverting to Pittsburgh. I wonder what our fuel gauges were reading at Pitt when we landed.

I also had a case where someone was saved because of CRM. An extremely rare equipment failure occurred that required action within about a minute. However, the "symptoms" looked just like another condition that required a completely different intervention.

We did not solve the problem in a minute, but we methodically went through each possible cause, and everyone with an idea was heard until we found the answer. Each idea bought us a little time and kept fixation error from taking control.

Unfortunately, there is no required training in CRM in medicine. I only knew about it because I had recently written a short newsletter article about safety.

I think to strengthen this article I need to include an example that illustrates how things would have gone if medicine had standards and training similar to aviation's, showing all the ways we could have found the problem faster (simulation training) or avoided it completely (checklists). I need to see what sort of release I need to talk about this, since we are restricted by HIPAA.

It seems like we would need to consolidate the rule making to go along with any new "penalty" / reward system. You cannot penalize someone for not following a bad rule.

We had a situation nationally where people were getting injured or killed because practitioners were disabling alarms. The ability to disable alarms was removed from monitors, along with advisories not to do that. Then, every time an anesthetic began, all the alarms would go off. It sounds crazy, but that is what happened. It would be like bells ringing all over the cockpit for no reason every time you started down the runway to take off. Because there is no local or national feedback to "improve the rules", this went on for some time, and it still happens.

Most people automatically put a 2-minute silence on the alarms in this situation because you cannot think with all those bells going off. You can't really fine people for that. If there were a feedback mechanism, locally or nationally, the default settings on the alarms would be adjusted to something reasonable. Without the feedback, people resort to the "work around".
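The missing loop is easy to sketch. In the toy Python below, every name and number is hypothetical; the point is only that tracking how often a default gets silenced would tell someone to fix the default:

    def flag_noisy_default(alarm, silences, cases, nuisance_ratio=0.8):
        """Flag an alarm default for review when most cases begin with a
        manual 2-minute silence -- the "work around" described above."""
        if cases and silences / cases >= nuisance_ratio:
            return f"review default settings for {alarm}"
        return f"{alarm}: defaults look reasonable"

    print(flag_noisy_default("pulse-ox low limit", silences=95, cases=100))
    # -> review default settings for pulse-ox low limit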

So it seems we are missing feedback at all levels (individual ~ CRM, local ~ ASAP, and national ~ ASRS/ASRP?), a reward system that mitigates liability, and rule maker consolidation so that properly vetted standards are introduced in practice, training, and equipment. All three components seem to be important.

For twelve years medicine has tried to see which "high reliability" interventions could be applied. Unfortunately, we have just scratched the surface. It is getting easier to see why there is minimal or no improvement. I hope I can paint a picture with words that makes it easy to understand.
Reply