The Wisdom of the System and the Future of Freedom in the Wake of the Germanwings Crash

We should be mindful that accident investigators are often looking for the quickest possible explanation, and MSM journalists are typically eager to take the bait. Just on Wednesday, the news from the Germanwings crash in France was that investigators were examining the possibility that a flaw in the Airbus avionics gave the planes a dangerous tendency to lose altitude.

And yet now, on Thursday, a completely different explanation was put forth and blared all over the world: the “mass killer co-pilot.” It is, indeed, a heavy thing to accuse a pilot not only of suicide on the job, but also of mass murder. But as the cynical saying in journalism holds, oftentimes a juicy story is “too good to check!” Indeed, Breitbart News’s John Hayward was right to warn against the quick embrace of any theory that might be too pat, even as he detailed the co-pilot’s alleged problems.

Yet it does appear that aviation suicide-murder is a real phenomenon: In 1999, EgyptAir Flight 990 took off from New York’s JFK airport and soon crashed in the Atlantic; U.S. investigators concluded that the relief co-pilot deliberately crashed the plane. And it’s widely thought, although far from proven, that one or both of the pilots of Malaysia Airlines Flight 370, which disappeared over the southern Indian Ocean in 2014, had similar suicidal motivations. So that’s two flights, claiming a total of 456 lives—something to worry about.

Moreover, on a smaller scale at least, pilot malevolence is an undeniable phenomenon: In 1974, an Army private stole a military helicopter and landed it, without authorization, on the White House lawn. In 1994, a crazy man crashed his Cessna into the White House, killing himself. We can assume, by now, that the Secret Service has some sophisticated air defense for the White House complex.

Yet for the rest of us, the issue of dangerous stray aircraft will remain. In 2010, another crazy kamikaze crashed his aircraft into an IRS building in Austin, Texas, killing himself and an IRS worker, and injuring 13.

Finally, the ultimate examples of malevolent piloting—at least so far—are the four doomed flights of September 11, 2001, in which jihadi hijackers took nearly 3000 lives.

So what should we do? How to prevent such tragedies in the future?

We might start with the sobering realization that the “meme” of suicide-as-performance-art seems to be spreading. We see it not only in various jihadis and other airborne lunatics, but also in freeway speeders, who seem to savor the opportunity to be chased by the police in broad daylight, when the news helicopters can easily spot them. And if the maniacal drivers get killed, or kill themselves—well, that’s part of the show. If it bleeds, it leads.

It could be the case that aspects of modern life are encouraging deadly activity; certainly, in a world of bigger and faster machines, a determined individual can be a lot deadlier. If so, we can expect the guardians of modern life—its technological and political upholders—to respond. After all, we can say, the system has wisdom, even if an increasing number of humans seem to be deranged and deadly.

In particular, the phenomenon of Big Data would seem to offer promise for greater safety. We might consider, for example, the last few words of the subtitle of David Weinberger’s 2012 book Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room. Yes, we might get a chuckle out of that phrase, “the Smartest Person in the Room Is the Room,” but it conveys an important truth: There is wisdom in the system.

From a technological point of view, it’s possible to imagine a safety system that would have prevented most, if not all, of the air crashes mentioned earlier. That is, we might build up from the familiar concept of an airplane auto-pilot, and then add in Big Data’s capacity to spot tiny deviations from a flight plan before they become big deviations. And so it would be possible to envision a comprehensive system that might prevent such incidents in the future. If the Germanwings co-pilot really did intend to crash the plane, it would have been nice, for the sake of the other 149 people on the plane, if there had been an “override” command from Air Traffic Control that could have landed the plane safely.
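To make the idea concrete, here is a minimal sketch of what “spotting tiny deviations early” might look like in software: compare each position report against a cleared cruise altitude and flag a sustained, unauthorized descent long before it becomes unrecoverable. Everything here—the names, thresholds, and data structures—is an illustrative assumption, not any real avionics or air-traffic-control interface.

from dataclasses import dataclass

@dataclass
class PositionReport:
    time_s: float        # seconds since start of cruise
    altitude_ft: float   # reported altitude

# Hypothetical parameters; real systems would derive these from the filed flight plan.
CLEARED_ALTITUDE_FT = 38_000   # assumed cruise clearance
TOLERANCE_FT = 300             # normal altitude-keeping wiggle room
ALERT_AFTER_S = 30             # deviation must persist this long before alerting

def detect_unauthorized_descent(reports: list[PositionReport]) -> float | None:
    """Return the time the first sustained deviation began, or None if none occurred."""
    deviation_started = None
    for r in reports:
        if r.altitude_ft < CLEARED_ALTITUDE_FT - TOLERANCE_FT:
            if deviation_started is None:
                deviation_started = r.time_s
            elif r.time_s - deviation_started >= ALERT_AFTER_S:
                return deviation_started   # hand off to controller / override logic
        else:
            deviation_started = None       # back within tolerance; reset the clock
    return None

# Example: steady cruise, then a continuous descent that trips the alert.
if __name__ == "__main__":
    reports = [PositionReport(t, 38_000) for t in range(0, 60, 10)]
    reports += [PositionReport(60 + t, 38_000 - 60 * t) for t in range(10, 100, 10)]
    print(detect_unauthorized_descent(reports))

The design choice worth noting is the persistence window: a single low reading is routine noise, but a descent that continues for tens of seconds against the clearance is exactly the kind of small-becoming-big deviation the paragraph above describes.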

It’s fair to say, of course, that such a fail-safe system would be controversial. For starters, it would have to work under all conditions—no “Sorcerer’s Apprentice”-type amok episodes. But chances are, over time, that it would work pretty well; it’s not so hard, indeed, to imagine a system in which robots and computers replace human pilots altogether. Yes, computers are subject to Murphy’s Law, and yet many of us, if not most, willingly put our trust in “machine learning” systems every day.

More profoundly, there’s the issue of personal autonomy and freedom. What would happen if all aviation were to come under the supervision, and ultimately the control, of a computer?

These issues are already starting to come to a head in the realm of automobile transportation. Already, the “black boxes” aboard new cars convey a great deal of information about the car and its travel. Moreover, if various proposals for a vehicle miles-traveled tax were to come to fruition, well, then the government would know just about everything about our vehicular activities. And of course, if “driverless cars” were to become a reality, then, by definition, “the system” would know exactly where your car was—and, presumably, where you were, too.

To be sure, as with “pilotless planes,” there’s something in the driverless-car idea to creep out just about everyone. Yet on the other side of the equation, we can hear the siren songs of convenience, efficiency, and safety. Only these songs aren’t sirenic—they are real; driverless cars offer new and real hope for shut-ins, for example.

Indeed, the history of just about every auto-safety innovation over the last century has been fraught with controversy. In each instance—from street signs and lights, to parking meters and license plates, to seatbelts and airbags—there were concerns raised over privacy and liberty. And yet with a few notable exceptions, such as the “interlock” device for seatbelts, these changes have become permanent facts of our life. (Interestingly, even the interlock is making a partial comeback, as a tool against drunk driving.)

So yes, there will be controversy over this new technology, but we can also see a certain inexorability, not only for driverless cars, but also for aviation overrides. Nobody wants another Germanwings-type disaster.

However, we have only scratched the surface of what’s possible. We’ll take up more of those possibilities in the next installment.
