The kinds of issues that Boeing is encountering with the implementation of new technologies are, in a sense, universal. Most consumer technology companies have to deal with this kind of thing when designing new products. What is different here is that, because of the nature of Boeing’s business, mistakes can lead to life-and-death situations.
Software is playing a bigger role everywhere in implementing the decision-making logic of products. In the case of the Boeing 737 MAX 8 (and most likely the other MAX variants), a particular aspect of the software became a key element in establishing the “stability” of the product, i.e., the aircraft, during a certain mode of operation. That software turned out to be flawed. Rather than depend on human beings to control the aircraft during a particularly unstable phase of flight, the design had the software take over the flying of the plane during that time. The logic of the overall system design was shown to be faulty in one of the planes that crashed (and the authorities will probably conclude that something similar led to the second crash). In their rush to get the product out, Boeing failed to account adequately for all the possible ways in which things could go wrong, especially when control is wrested away from the human beings flying the plane.
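To make the design issue concrete, here is a minimal sketch of the difference between automation that acts on a single sensor and automation that cross-checks redundant sensors and yields to the pilots when the data looks suspect. This is emphatically not Boeing’s actual code; every name and threshold below is a made-up placeholder.

```python
# A toy illustration, NOT Boeing's MCAS logic. The names and thresholds
# are hypothetical; only the design choice being contrasted is real.

STALL_AOA_DEG = 14.0        # hypothetical stall threshold
DISAGREE_LIMIT_DEG = 5.0    # hypothetical cross-check tolerance

def trim_command(aoa_left: float, aoa_right: float) -> str:
    """Decide what the automation may do with angle-of-attack data."""
    # Cross-check the redundant sensors before acting on either one.
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
        # Suspect data: take no automatic action, warn the humans.
        return "disengage automation, alert crew"
    if max(aoa_left, aoa_right) > STALL_AOA_DEG:
        return "command nose-down trim"
    return "no action"

# A failed sensor reading 22 degrees against a healthy one reading 6:
print(trim_command(22.0, 6.0))  # -> disengage automation, alert crew
# A design that trusted one sensor alone would instead have kept
# commanding nose-down trim, fighting the pilots.
```

The point of the cross-check branch is that the automation’s first duty, when it cannot trust its inputs, is to get out of the way and let the humans fly.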
How did Boeing end up with this kind of a design? The basic design of the 737 is quite old (from the 1960s) and not well suited for upgrading to the latest technologies, including newer engines that are more efficient. Boeing was trying to match the performance of its newest products to the latest version of the newer (from the 1980s) Airbus A320 line of aircraft without having to design a new aircraft from the ground up, a process that would supposedly have cost more money and time. The approach that Boeing ended up with was not ideal: an aircraft that was known to be unstable under certain conditions. The solution they came up with to handle the instability was to use software to control the system so that it could at least be “meta-stable”. (Some military aircraft are designed this way.) The idea was to implement this “feature” without changing how pilots who were used to flying 737s would fly the new plane. Basically, they wanted to introduce the product in such a way that the unstable nature of the design was not obvious to the pilots, so that the experience of flying the new plane would match that of flying the existing design. Instead of talking about the differences in the design and familiarizing pilots with how to handle them, they deliberately tried to make things appear simpler than they actually were by addressing the problem with software control. What the heck! Boeing trusted the software more than the instincts of the pilots?!
I am not a software engineer, but the small number of people who follow my blogs know by now that I like to rail against the scourge of bad software. I feel I have a right to do so based on my experiences with such software. But the problem these days seems to go beyond “bad software”: it also lies in the way the software logic is integrated into the whole system, even as whole systems become more and more dependent on that software. Our two hybrid cars, a Honda Civic from 2008 and a Prius from 2015, are completely different beasts when it comes to integrating the operations of the electric motor, the gasoline engine, and the battery into one coherent system that supplies torque to the wheels. The whole process depends on decisions made using logic implemented in software. The logic, and the practical results of the implementations, are completely different for the two cars. Who knows how they came up with the logic, and how many software bugs there are in the control systems! When I complained to Honda about problems I was having, they were quite reluctant to give me any technical information. The good thing is that nothing seems to have been compromised when it comes to safety.
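For a flavor of the kind of decision logic involved, here is a toy torque-split function. It is not Honda’s or Toyota’s actual control strategy; the modes, names, and thresholds are invented for illustration.

```python
# A toy torque-split decision in the spirit of a hybrid controller.
# NOT Honda's or Toyota's logic; all names and numbers are made up.

def torque_split(demand_nm: float, battery_soc: float) -> dict:
    """Split the driver's torque demand between motor and engine."""
    EV_ONLY_LIMIT_NM = 80.0  # hypothetical: motor alone covers light loads
    MIN_SOC = 0.25           # hypothetical: protect the battery

    if demand_nm <= EV_ONLY_LIMIT_NM and battery_soc > MIN_SOC:
        return {"motor_nm": demand_nm, "engine_nm": 0.0}
    if battery_soc <= MIN_SOC:
        # Battery too low: the engine does all the work.
        return {"motor_nm": 0.0, "engine_nm": demand_nm}
    # Otherwise blend: the motor assists up to its limit.
    return {"motor_nm": EV_ONLY_LIMIT_NM,
            "engine_nm": demand_nm - EV_ONLY_LIMIT_NM}

print(torque_split(60.0, 0.70))   # light load, healthy battery -> EV only
print(torque_split(150.0, 0.70))  # hard acceleration -> blended
```

Two cars with two different versions of a function like this will feel like, well, two completely different beasts.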
I used to work in an industry where the pressure to succeed quickly with the introduction of new products was a primary driver in the decision-making process. (This is probably a truism for most industries.) Thank goodness we were manufacturing products that did not deal with life-and-death issues. Failure in our systems could not, for the most part, kill you, and safety of the product was ensured by following the regulations in this regard. But when these kinds of market forces impact a multi-billion-dollar aircraft industry, where the lives of the millions of regular folks who fly are involved, you have the potential for very significant problems. If you cut corners hoping that nothing fatal lies out of sight, you are asking for trouble. The regulators are supposed to be the final arbiters for safety issues, but what can they really understand about systems as complicated as the ones we are building today? Ultimately, the onus lies on the one building the product, and this is true for any kind of product.
Boeing will survive their current problems, but their reputation is tarnished, at least for the short term. They came out of this looking small and insincere, trying to hide behind the FAA. They could have gained more trust from the public by being proactive, and by responding more forcefully after the first crash.
The truth of the matter is that situations like these have happened in the past to both of the big aircraft manufacturers that remain today, Airbus and Boeing. When Airbus first introduced its fly-by-wire technologies, there was even a crash at an airshow.
It is true that fatal flaws in aircraft are not limited to those of the software kind. Planes have been crashing due to hardware failures since man began to fly. The difference is that flaws of the software kind are deterministic: given the same inputs, the software misbehaves the same way every time, which should make these flaws easier to find and test for during design and implementation. The software should be able to respond in some way to all the known hardware issues (which are unfortunately unavoidable), and the software itself should not be buggy. And you cannot have the software introducing new failure modes of its own, especially when safety is involved. That should be unacceptable.
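One way to express that discipline in code is to enumerate the known hardware failure states explicitly and make the fallback fail safe rather than silent. A minimal sketch, with hypothetical states and actions:

```python
# A sketch of "handle every known hardware state" fault handling.
# The states and the responses are hypothetical placeholders.

from enum import Enum, auto

class SensorStatus(Enum):
    OK = auto()
    STALE = auto()         # known failure: data stopped updating
    OUT_OF_RANGE = auto()  # known failure: physically impossible value

def control_action(status: SensorStatus) -> str:
    if status is SensorStatus.OK:
        return "use sensor data"
    if status is SensorStatus.STALE:
        return "hold last safe output, alert crew"
    if status is SensorStatus.OUT_OF_RANGE:
        return "ignore sensor, alert crew"
    # Fail safe, not silent: an unanticipated state must never become
    # a brand-new failure mode invented by the software itself.
    return "disengage automation, hand control to the humans"

# Deterministic branches like these can be exercised exhaustively:
for status in SensorStatus:
    print(status.name, "->", control_action(status))
```

Because each branch behaves the same way every time, each one can be covered by an ordinary unit test; nothing depends on timing or luck.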
In general, flying on commercial aircraft is probably safer today than it has ever been. The problem (as I see it) is that companies are willing to play with people’s lives in their approach to introducing new technology and making money, and this prevents the system from being as safe as it really could be when new products are introduced. Some companies seem too willing to risk human lives in the process of learning about their new products, and then they are slow to take responsibility. There has to be some kind of social liability associated with this approach.
Privacy is something that none of us who live in the digital, connected world really have. We would like to believe that the tools provided by the various vendors of security solutions keep us safe from prying eyes, but I think that ship has sailed. The moment you decided to be a part of the Internet, be it on social media, or for simple browsing, or e-mail, or chatting, you created a door into your device: a means for your information to become available to the snoops, and for others to misuse your device. The security solutions I mentioned can barely keep up with the hacking world. And it only takes one mistake to open the backdoor into your system! The best you can do is try to limit the damage.
There are all kinds of snoops. There are the ones trying to get at your confidential information to do something bad to you. There are those who are trying to misuse your personal information for other illicit purposes. There are those who are trying to legally or illegally gain some commercial advantage, trying to sell things to you by learning more about you from your computer. And then there is the government that might suspect you of doing something illegal on your computer.
Why has it been so easy for people to get into our private systems? For one thing, most of the systems we work with have fundamental software design flaws that can be exploited. Next, whenever you are connected to the Internet, you have an address at which you can be reached. And then, for reasons of convenience and to support required functionality, systems also include means for others to get access to your working environment for legitimate purposes. (For example, remote login capability exists for debugging purposes.)
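To see how literal that “address at which you can be reached” is, consider a toy listener. Anything that listens on a port, for any legitimate reason, is a door that others can knock on; the host and port here are arbitrary.

```python
# A minimal sketch of network reachability. Binding to a port makes
# this machine's address a door; we do not control who knocks on it.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))  # reachable at this machine's address
server.listen(1)
server.settimeout(30)           # give up after 30 seconds for this demo
print("listening: anyone who can route to this address may connect")
try:
    conn, addr = server.accept()    # could be the debugger, could be a snoop
    print("connection from", addr)  # same door either way
    conn.close()
except socket.timeout:
    print("nobody knocked this time")
finally:
    server.close()
```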
Once you have an identity on the network, there are ways for people to try to access your system for both legitimate and nefarious purposes. Every time you visit a website, you are executing code from that website on your computer. Websites routinely leave cookies on your computer when you browse them. And sometimes you give outsiders access inadvertently, by visiting a website that interacts with your computer in a malicious manner. Once you have hit the wrong button on the browser screen, or in an e-mail, or opened a malicious application file that you downloaded, you could be at the mercy of the entity on the other side of the communication link you just established.
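The cookie mechanics are worth seeing concretely. The header below is a made-up example of what a website might send; the browser stores it and sends it back on every later visit, which is how you become trackable.

```python
# Parsing a (made-up) Set-Cookie header, the way a browser would.

from http.cookies import SimpleCookie

header = "visitor_id=abc123; Max-Age=31536000; Domain=example.com"
cookie = SimpleCookie()
cookie.load(header)

for name, morsel in cookie.items():
    print(name, "=", morsel.value,
          "| lives for", morsel["max-age"], "seconds")
# -> visitor_id = abc123 | lives for 31536000 seconds
```

A one-year identifier from a single visit, and you never clicked anything.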
And then there are many of us who willingly give up our privacy in return for something we want. It happens all the time when you give your information to companies like Facebook, Google, LinkedIn, or Microsoft, to name a few. It happens when you make a purchase at an online shopping site like Amazon or even Expedia. And the systems these organizations use to store all this information are not foolproof. Personal information for millions of people has been stolen from the records of more than one government agency.
Your digital communications themselves are not safe from snooping. Communications from your smartphone can be intercepted by fake cell towers, and communications through an ISP can be snooped upon directly. Both the bad guys and the good guys take advantage of these approaches.
There are rules and regulations meant to address many of the above scenarios and protect your privacy, but in many cases the rules cannot keep up with either the technologies or the human ingenuity that goes into creating problems and chaos. Then there are the human tendencies that make us disregard the speed bumps in the processes, the ones meant to make us slow down and think for a minute. We make mistakes that allow our privacy to be compromised. When was the last time you read a EULA? When was the last time you read and reacted to the privacy statement (mandated by law) you received from your financial institution? Do we accept and store all the cookies offered up when browsing a website?
Tim Cook at Apple has decided that the privacy of the owner of a device must be protected at all costs. In this case, he is talking about access to the contents of a device by a third party that has your device in its hands and wants to look into its contents without asking you. Apple wants to make it extremely difficult, if not impossible, to do something like this. Recently, Apple introduced the concept of encrypting all the contents of the device and limiting access to the decryption key to the owner of the device (i.e., even Apple does not know what it is). In order to use the key, the user first has to get access to his or her device with a password. If somebody tries to guess the password too many times, the device stops working completely; the system is “bricked”. The only way to break in is to guess the password without exceeding the allowed number of attempts. Apple does not have a back door in its current software that lets it bypass this security.
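A toy model makes the scheme, as described, easy to follow: the decryption key is derived from the owner’s passcode, each guess is made deliberately expensive, and too many wrong guesses renders the device useless. This is not Apple’s real implementation; the limit and parameters below are placeholders.

```python
# A toy model of passcode-derived encryption with a lockout,
# NOT Apple's implementation. All parameters are hypothetical.

import hashlib, os, secrets

SALT = os.urandom(16)   # in reality, stored on the device
MAX_ATTEMPTS = 10       # hypothetical lockout limit

def derive_key(passcode: str) -> bytes:
    # Deliberately slow derivation so every guess costs real time.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), SALT, 200_000)

true_key = derive_key("0000")   # the owner's (weak!) passcode
attempts = 0

def try_unlock(guess: str) -> bool:
    global attempts
    if attempts >= MAX_ATTEMPTS:
        raise RuntimeError("no more attempts: device is wiped/bricked")
    attempts += 1
    return secrets.compare_digest(derive_key(guess), true_key)

print(try_unlock("1234"))  # False: a wrong guess burns an attempt
print(try_unlock("0000"))  # True: only the right passcode yields the key
```

Note that nothing here requires Apple to hold a copy of the key; that absence is the whole design.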
This is where government access to a device becomes the topic of discussion. What the FBI has asked Apple to do is hack into its own system so that the FBI can read the contents of another person’s smartphone. Apple is refusing in spite of being under a court order. They are in a difficult place: if they attempt to break their own system and succeed, it would show that others might also find a way to hack into the supposedly super-secure system. They designed the system to work this way for a reason!
Is Apple justified in refusing to cooperate with the FBI? Under ideal conditions I would say that it is not: once you become a part of a society and its systems and use them to your benefit, you have some responsibilities to the system as well. But we also know that the system is not infallible and can be manipulated and misused (as Edward Snowden showed). And the tendency for misuse is built into the system because of human nature, and can perhaps never be fixed.
Where should the line be drawn when it comes to protecting privacy under these circumstances? It is certainly a dilemma…