It should be noted that on the doomed Air France 447 flight, the plane activated the stall warning because of a high angle of attack that was leading to a stall. (thanks pdx for the corrected info)
At some point the system rejected the data and stopped the stall warning, because the angle of attack was so severe that it considered the data erroneous. This is speculated to have kept the co-pilot pulling back on the stick and maintaining the stall, because every time he let the nose of the plane come down, the stall warning activated again as the AoA decreased back into the range the airplane considered a real signal. https://www.vanityfair.com/news/business/2014/10/air-france-...
A plane's instruments, the actions its software takes, and its interactions with the humans who fly it aren't as simple as an if statement.
If anything, fewer and simpler controls or automated systems are easier to debug and work around than a plane that has an internal calculus of what is valid data.
Air France 447 crashed because of a stall caused by a severe AoA, which MCAS might have prevented (if MCAS simply pushes the nose down at high AoA, it might have; I am unsure of the exact implementation). MCAS obviously had a different impact on the Lion Air and Ethiopian Airlines flights.
The challenge, I think, is to keep two things separate: one is the flight control laws the system is implementing to keep the plane in the air (to the best of its ability), and the other is the situational awareness indicators for the pilots, so that they can tell what the plane is "thinking" about how it is flying (or not).
The closest analogy I can come up with is the SQL EXPLAIN command. That command generates the complete decision tree for how records are included in or excluded from a SQL query. The air equivalent might be a display that shows the flight status and the state of the instruments being used to determine that status. And then it is up to the pilots (or the DBA :-) to figure out whether what is happening is what they think should be happening.
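To make the database half of that analogy concrete, here's a tiny sketch using SQLite from Python (the table and index names are made up): `EXPLAIN QUERY PLAN` reports *how* the engine decided to satisfy the query, which is the sort of "why" display the flight-computer equivalent would provide.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (id INTEGER PRIMARY KEY, aoa REAL)")
conn.execute("CREATE INDEX idx_aoa ON flights (aoa)")

# Ask the engine how it *would* run the query, rather than running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM flights WHERE aoa > 25"
).fetchall()
for row in plan:
    print(row[-1])  # the human-readable step, naming the chosen index
```

The point isn't the output itself but that the reasoning is inspectable on demand, rather than hidden inside the optimizer.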
To use this particular example, it is, in my opinion, negligent on Boeing's part not to include an indicator for every change in the flight control laws. If MCAS activates to avoid a stall it should always indicate that it is, and why it is activated. It has been reported that this was an "extra price" option for the jet, and it is that choice that makes it feel negligent to me.
Generally, there seem to be enough indications, with backups, in the cockpit that the pilot can reliably ascertain what is going on with the aircraft; what was missing here, again in my opinion, was the rationale the plane was using for the flight laws it was implementing.
This is not the problem. MCAS in itself sadly is the problem.
They had to solve the problem that the pitch-up moment shouldn't accelerate on its own as a function of AoA, which was (aerodynamically) inevitable without automated adjustments due to the placement of the engines.
They resorted to the worst possible kludge: since they weren't even allowed to add new electrical systems (which would have triggered a recertification and/or retraining), they reused an already existing system (assisted trimming by autopilot).
The effect of MCAS can only be "fixed" by manually adjusting the trim, which involves moving a jackscrew that holds the horizontal stabilizer at a fixed angle. Unfortunately, the force required to move this jackscrew increases with airspeed. There is no easy way out. A bit more background can be found here: https://www.satcom.guru/2018/11/stabilizer-trim.html
Btw, the extra-price item was an AoA disagree warning. It relates to MCAS roughly the way an odometer failure relates to traction control. It wouldn't have helped if the pilots had stuck to their memory items and checklist (which they have to: they recognize runaway trim and have to react accordingly).
To get a slight feeling for what it means to have runaway trim (without the assistance they had to disable in concert with MCAS), please have a look at this video:
https://vimeo.com/329558134
The problem with this approach is that it does not generalize. You might think that if pilots just had this one piece of information, it would be obvious what's wrong, so let's provide that information. But if you apply that consistently to every system, the crew will be buried in notifications all the time, most of which are perfectly normal, and most of the rest are normal responses to the abnormal conditions produced by the actual fault.
Even on the Space Shuttle, the flight crew did not see most of the data, and NASA had a room full of engineers following the telemetry.
When you have less than a minute to save everybody, the last thing you need is something like an SQL explain result.
> It has been reported that this was an "extra price" option for the jet, and it is that choice that makes it feel negligent to me.
Not quite. The things they offered for extra were the AoA sensor readouts and an annunciator for AoA disagreement. MCAS was not part of that, and MCAS was not covered in the difference training from the other 737 models. MCAS taking input from a single sensor was to blame: it should not have activated when there was substantial disagreement between the two sensors. It should also have had 3 sensors and voted on agreement, not just difference.
That's what I've wondered since I heard of the two-sensor thing. Why weren't there 3 sensors, or even more? We guard against those kinds of failures for the simplest blog-style websites. Feed that info into 3 flight controllers for that matter, and fall back to manual flight in case fewer than two agree.
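A minimal sketch of that 2-of-3 voting idea (the function name and the 2-degree tolerance are my own invention, not anything from a real flight control system): keep only readings that at least one other sensor agrees with, and report invalid when fewer than two agree.

```python
def vote_aoa(readings, tolerance=2.0):
    """Return (value, valid) from three AoA sensor readings.

    A reading is trusted only if at least one other sensor agrees
    with it to within `tolerance` degrees. With fewer than two
    agreeing sensors, report invalid so the system can hand control
    back to the pilots instead of acting on unverifiable data.
    """
    agreeing = [
        r for r in readings
        if sum(abs(r - other) <= tolerance for other in readings) >= 2
    ]
    if len(agreeing) < 2:
        return None, False  # total disagreement: disable automation
    return sum(agreeing) / len(agreeing), True

# Two sensors agree, one is stuck high: the stuck one is outvoted.
value, valid = vote_aoa([4.8, 5.1, 74.5])
```

With only two sensors, as on the MAX, a disagreement tells you something is wrong but not which sensor to believe; that's exactly why the third vote matters.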
It wasn't due to the cost. It was the whole business of avoiding pilot retraining, which is apparently a "must-have" for airlines. At some point, Boeing prioritised papering over the fundamental difference in the flight characteristics of the MAX _above_ the pilots' need to be informed of what is actually happening with the plane.
His proposal makes no sense. Under his new logic, it would be fine for the MCAS to continue forcing the trim to full deflection and crash the plane if the AoA was stuck anywhere between 15 and 24 degrees.
He simply hasn't solved the root problem; he's added a bad kludge for a single error case (AoA stuck above 25 degrees), while at the same time removing a potential safety control for a real-world case where the AoA is validly over 25 degrees.
Note: I am assuming that by RUNAWAY_TRIM() he means LET_MCAS_ADJUST_TRIM = TRUE; since runaway trim means something entirely different to my mind (that the electric trim is stuck/faulty and is driving the trim to full deflection in one direction or the other).
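To make the objection concrete, here's a toy model of the criticized fix (names and thresholds are illustrative, not the actual MCAS logic): a hard 25-degree plausibility cutoff does nothing for a sensor stuck at any value below it.

```python
AOA_REJECT_THRESHOLD = 25.0  # the proposed "ignore obviously bad data" cutoff

def mcas_should_activate(aoa_reading, stall_aoa=15.0):
    """Toy version of the proposed fix: discard AoA readings above a
    hard cutoff, otherwise trim nose-down when AoA looks stall-high."""
    if aoa_reading > AOA_REJECT_THRESHOLD:
        return False  # reading deemed erroneous, MCAS stands down
    return aoa_reading >= stall_aoa

# A sensor stuck at 20 degrees passes the plausibility check every
# cycle, so this logic keeps commanding nose-down trim indefinitely.
stuck_ok = mcas_should_activate(20.0)
```

That's the heart of the complaint: a single-sensor range check catches only the grossly absurd failures, while a stuck-but-plausible value still drives the trim.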
It is difficult to be sure from the flight data chart that was published in the Seattle Times, but it looks possible that the faulty AoA reading was below 25 degrees for most of the Lion Air flight.
Reading between the lines, Greenspun has presented these crashes as "the first mass killings by software". That claim looks clearer if you present it as wholly due to the absence of a few common-sense lines of code, and downplay the other systems design issues, such as the lack of triple redundancy, the failure to use the dual redundancy to detect sensor error, and the fact that the system was unnecessarily powerful, being able to drive the trim all the way forward.
From my pilot perspective, if a fellow pilot took the action MCAS did, I would consider it homicidal. This is based on the last instance of MCAS activation at ~05:43:21: airspeed had already exceeded maximum operating speed (Vmo), and MCAS commanded 1.3 units of nose-down trim inside of 5 seconds, which changed the attitude to ~18 degrees nose down within 6 seconds. The vertical acceleration momentarily went negative, meaning everyone in the plane came off their seat, and anyone or anything not strapped down went to the ceiling. Soon after this we can see their maximum nose-up yoke inputs up to that point; clearly this is the struggle point, and it was not effective, the mistrim was too great.
At the altitude and airspeed at which this last nose-down was commanded, it can lead only to death; it's not recoverable. So yeah, mass killing by software is not an extreme claim.
If you view the MCAS as a control system that is a safety component of the 737 MAX, then you need a really good reason to completely cut off that safety system when a particular sensor input goes above a given threshold. In this case the argument being made is that if the input is above 24 degrees the safety system should be turned off, potentially leading to a stall. It's almost an argument against having that system entirely.
The problem is much more complicated than this and requires thinking end to end about 1) what the purpose of the system is, 2) when and how it should operate, 3) how much control it should have, 4) how its activity is made visible to the pilot when it performs any control, 5) how and under what conditions it should automatically disable itself, or 6) be able to be manually disabled, and 7) how the pilot is made aware of all of these situations in a way that doesn't cause confusion in potentially complex scenarios involving other failures and alarms going off, and 8) proper training so the pilot can manage the plane when the characteristics have changed after the system is disabled. Oh, and 9) in this case, since it's impossible for a pilot to manually trim a 737 when it's above a couple of hundred knots, making sure an electric trim assistance function is available in this scenario (which it wasn't, due to the tragic idea of overloading the trim runaway cutout switches).
And I am probably missing a bunch of things, which is part of the point.
To be clear, I do not object to the 'verdict', but I do not think it helps to suggest that all the issues raised by these crashes can be dealt with just by giving each sensor one or two somewhat arbitrary thresholds beyond which they should be ignored. While I cannot think of any reason not to have adopted the proposed solution within MCAS, and it would have prevented the Ethiopian Airlines crash, it would not have eliminated the possibility in other circumstances (possibly including Lion Air), on account of the other problems with MCAS. If, in these cases, the sensors had failed to a plausible value (say, 15 degrees), the outcome would have been the same.
A commonality in AF 447 and the two MAX cases: angle of attack is not displayed to the pilots.
A substantial difference between them, is automation had totally checked out in the 447 case, whereas in the MAX cases automation took action contrary to reality.
I let the engineers argue over redundancy. As a pilot I think it's a factor, but far less so than the pilots being denied knowing their airplane.
The 447 pilots had not undertaken any in-flight training, at high altitude, for the “vol avec IAS douteuse” (unreliable airspeed) procedure, or in manual aeroplane handling; while the MAX pilots had no training expressly on MCAS upset, so they could never experience the exact consequences of the ensuing mistrim, the difficulty of resolving that mistrim at low altitude, and the aggressiveness of MCAS in any subsequent nose-down. MAX pilots are additionally denied knowing about the true pitch-up tendency at high angles of attack without the benefit of MCAS (either it wrongly thinks AoA is OK, or stab trim is cut out).
The Ethiopian Airlines pilots had the situation progressing toward stability. They had (apparently) re-enabled electric trim to give them enough authority to reset the trim, and had a sane attitude. But then within seconds they lost their chance as MCAS took incredibly aggressive action that put them in an unrecoverable 40 degree nose down attitude.
There really is no substitute for training. I don't accept that the "fix" for the MAX is a software update alone. Simulators must be capable of showing various kinds of MCAS working, not working, and perturbed (erroneous sensor input) cases, at various phases of flight. All the excuses to avoid training are bullshit.
Yes it was 1.3 units of change inside of 5 seconds, without respect to being just over Vmo (maximum operating speed). If a human pilot did that in the same situation, it would be considered incompetency or sabotage.
Shown on the same page at the same time, vertical acceleration went very slightly negative as a direct result of this MCAS nose down change. Everyone came off their seat, including the pilots, and at that same time you see yoke position changes downward (less nose up to be exact) which made matters worse even though they were already doomed at that point.
The real question is: what training would you drop in order that the pilots get this alternate training? You can't just say, "train train train"; eventually the pilots have to actually fly.
- You've produced no evidence these pilots are at any training limit, you just made that up, so I call bullshit.
- I'm a pilot and a former flight instructor so I have some credibility in refusing a binary choice like you've proposed. It's not that much training. I had to do difference training for checkouts on a regular basis (giving it and receiving it) and it's not high quantity training, it is high quality training.
- The strongest argument that there's plenty of opportunity for training is the fact that the MAX had the same type certification as the previous NG model series. So what's the worst-case scenario for difference training? It'd require a new aircraft type certificate, which would require pilots to undergo training and a checkout for an additional type rating.
Quite a lot of pilots have multiple type ratings, it's not unusual, in fact the pilot of Ethiopian Airlines 302 had type ratings for B737-7/800, B757/767, B777, and B787. What, was he only training and never flying?
This is incorrect. If you read the original transcript of the pilot conversation in the cockpit, you'd know that he simply forgot he was pulling back on the stick. He disregarded clear pilot training and did exactly what he was not supposed to do, I reckon out of nervousness. That flight needed a better pilot. Period.
Forgot? Better? Neither of those things is supported by the final report, the cockpit voice or flight data recorders.
The occurrence of the failure in the context of flight in cruise completely surprised the pilots of flight AF 447.
Which pilot needed to be better?
The crew, progressively becoming de-structured, likely never understood that it was faced with a “simple” loss of three sources of airspeed information.
and
de-structuring of crew cooperation fed on each other until the total loss of cognitive control of the situation
OK so you need two better pilots, right? And what specifically would make them better in your view?
The final report is pretty clear; the most relevant cause bullets are on page 203, so there's no need to speculate and provide your own version. It very clearly cites training deficiencies; cockpit ergonomics, in that there was no clear display of airspeed inconsistencies, plus flight director indications that could have led the crew to believe their actions were correct even though they weren't; and significant simulator deficiencies.
The training and ergonomics basically set them up to be shocked at the situation they were in upon autopilot disconnect.
BEA even blame regulatory oversight of Air France for an inspection regime that failed to identify any of the rather numerous problems BEA found across the board.
Pinning this accident on a pilot is inappropriate.
You can say what you like, but that was incompetence and no technology can make up for downright incompetence.
The report's statements are diplomatic. No report will point a finger at anyone. In that particular case, the entire conversation in the cockpit was publicised. So, instead of letting others tell you what is and what is not, learn to read and draw your own conclusions. Just because the investigators didn't explicitly say it doesn't mean I'm wrong.
The pilot who had been pulling back on the stick for a very long time was the incompetent pilot there. The guy did the exact opposite of what should have been done during a stall warning, out of nerves. That's incompetence. Many factors might have caused that unfavourable scenario, but he brought that plane down.
Fascinating. But it seems like the kind of scenario that would be odd for the pilot, no? You nose down, a stall warning goes off, so you nose back up again? That's exactly the opposite of what a pilot would be trained to do, right?
I'd imagine in a car, if my vehicle warned me that I was going too fast when I slowed down, my reaction wouldn't be to speed the vehicle back up to avoid the warning...
It's not well sourced. It's a speculation in an old Popular Mechanics article published before the official accident report. The official accident report does not identify the sidesticks as a factor in the crash (because they weren't).
Downvotes are so cool, right? Just post your better informed opinion and enjoy the upvotes. This has the side-effect of not just informing me but others about your more correct opinion.
The thread you link to explains why the sidesticks are a red herring:
>Again, the stick inputs from the PF are very easy to see if you just look at them.
You would see that immediately if you sat in an airbus pilot seat.
The PNF is going to spend most of his time looking at the instruments. He's hardly any more likely to be looking off to the side at his own control stick than he is to be looking at the other control stick, so linking them would be unlikely to make any difference.
I didn't say anything about sidesticks (in a comment below I did). I don't believe myself that the sidesticks were a real factor in themselves, but if the PM had sensed that the PF (Bonin) was pulling up, he would have reacted earlier.
The PNF wouldn't have sensed anything because his hand wouldn't have been on the stick.
The premise of the Popular Mechanics article is that for a significant period, each of the pilots thought that they were the PF and were unaware that the other pilot was making stick inputs at the same time. This is unlikely, because the Airbus has a clear "dual input" warning. If you read the transcript, you can see that there's actually quite a lot of discussion between the pilots about who is in control. It was only the captain who had any clear idea of the correct control inputs to make, and he wasn't seated at the controls at all, so linked sticks wouldn't have made any difference to him.
Yes, if memory serves, in Air France 447, upon finding out that the co-pilot had been pulling up the whole time, the other pilot is shocked to hear this and realizes the problem. I remember being really affected by this, so I went out and found the transcript:
02:13:40 (Bonin) But I've had the stick back the whole time!
[At last, Bonin tells the others the crucial fact whose import he has so grievously failed to understand himself.]
02:13:42 (Captain) No, no, no… Don’t climb… no, no.
02:13:43 (Robert) Descend, then… Give me the controls… Give me the controls!
What you're pasting there is an editorialized quotation of the transcript from an old Popular Mechanics article. The transcript and its English translation are here:
> my reaction wouldn't be to speed the vehicle back up to avoid the warning...
In a car you can see and feel what's going on. In a jet at night, with complex behaviour in 3D, you have much less sensory input, close to zero. Hence instrument flying.
The final report says it's impossible to evaluate their level of fatigue, as there is no data on their sleep during the stopover. But from the CVR, the report says the crew showed no signs of objective fatigue.
> everytime he let the nose of the plane come down the stick shaker activated again as the AoA was decreasing into the range that the airplane considered a real signal
That's true, but the pilot response was insanity. When the stick shaker comes on you push the nose down every time without question, unless perhaps you're at very low altitude and then you might be in an unrecoverable situation anyway... but you don't pull back :)
AF447 was an Airbus 330 which does not have a physical stick shaker. It has a "Stall Stall Stall" warning that sounds and was likely drowned out by the other alarms that were filling the cockpit.
A330s also have disconnected sticks, so when a pilot has input on one side of the plane, that input is not replicated on the other side of the plane.
>AF447 was an Airbus 330 which does not have a physical stick shaker. It has a "Stall Stall Stall" warning that sounds and was likely drowned out by the other alarms that were filling the cockpit.
Just to be clear, an Airbus 330 won't let the pilot maintain an attitude that could lead to a stall, so it has a much higher degree of protection than a simple stick shaker. The situations in which these protections are turned off are situations in which a stick shaker would also fail to function reliably.
I suggest reading a bit more about AF447. This was 90% the other pilots' failure to imagine that one pilot (i.e. the pilot flying) could be so extremely uneducated (i.e. stupid) as to pitch up while a stall is imminent/occurring. The remaining 10% is the Airbus fuckup of mixing the inputs of both sticks.
1. The pitot tubes froze (this was already a known possibility), and this particular plane was scheduled to have those tubes replaced;
2. When the speed readings became unreliable, the system didn't do anything uncommanded: it disengaged the autopilot gracefully, notified the pilots, and handed control over to them;
3. Stall warnings went off and one of the pilots continued to pull on the stick (this baffled the other pilots I've heard speak about it too, because it is the exact opposite of what to do during a stall);
4. It's clear from their exchanges in the cockpit that this particular pilot was very nervous during the whole thing.
What's complicated about that? I can't see how a computerised system could have helped here. The computer had no reliable grounds to make a decision, and it did exactly what it should have, and what the pilots should have expected at the time.
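Point 2 above can be sketched as a simple disagree check (the function name and the 20-knot tolerance are invented for illustration): when redundant sources diverge, the automation hands over and alerts rather than guessing.

```python
def airspeed_monitor(speeds, tolerance_kts=20.0):
    """If redundant airspeed sources disagree beyond a tolerance,
    disengage the autopilot and alert the crew rather than have the
    automation act on data it cannot verify."""
    if max(speeds) - min(speeds) > tolerance_kts:
        return {"autopilot": "disengaged", "alert": "IAS DISAGREE"}
    return {"autopilot": "engaged", "alert": None}

# One pitot tube icing over drops one reading far below the others.
state = airspeed_monitor([272.0, 269.0, 118.0])
```

That's the graceful-degradation behaviour being praised here, and it's exactly the opposite of acting on a single unverified sensor.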
The other problem with AF447 was that the pilot and copilot were giving conflicting instructions to the plane and there wasn't any feedback that they were doing so, and that the senior pilot was taking a nap...
Interesting. So the system under consideration consists of both the airplane and the two pilots. Is this part of the design effort — I guess it is — and how do we even model the combination of technology and humans?
It's interesting to me that the system would be designed to detect that a sensor is feeding it bad data, then go back to trusting that input after it had been determined to be bad.
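One common remedy for that is to latch the fault, sketched here (the class name and validity limits are my own invention): once a reading has been rejected as implausible, keep distrusting that sensor until an explicit reset, instead of re-trusting it the moment it drifts back into range.

```python
class LatchedSensor:
    """Reject implausible readings and, once a fault is seen,
    keep the sensor flagged as bad until explicitly reset."""

    def __init__(self, min_valid, max_valid):
        self.min_valid = min_valid
        self.max_valid = max_valid
        self.faulted = False

    def read(self, raw):
        # An out-of-range sample latches the fault.
        if not (self.min_valid <= raw <= self.max_valid):
            self.faulted = True
        # Once faulted, never report a value again, even a plausible one.
        return None if self.faulted else raw

    def reset(self):
        # Only a deliberate maintenance action clears the fault.
        self.faulted = False

aoa = LatchedSensor(min_valid=-20.0, max_valid=25.0)
aoa.read(40.0)          # implausible: fault latches
later = aoa.read(10.0)  # plausible value, but still distrusted
```

A latched fault avoids exactly the AF447-style flip-flopping, where the warning came back every time the bad value wandered into the "believable" range.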
Basically, it's complicated.