Old 08-10-2018 | 05:57 AM
  #103  
fadec
Gets Weekends Off
 
Joined: Mar 2015
Posts: 963
Likes: 0

Originally Posted by atpcliff
That's not the problem with automation.
Here are the two problems:
1. The plane is controlled via a link of some kind. The link is accidentally broken, intentionally broken, or taken over.
2. The plane is autonomous, and the computers either quit or decide to "9/11".

I can see one pilot and an AI working together, so the human can fix a situation that isn't fixable by a computer, or can take over if something goes very wrong with the AI's control of the aircraft. Or the human could operate the aircraft with the AI assisting, to prevent a stupid human trick.
Those are technical problems. The chain of trust, authentication, and even multiple party verification of commands can be maintained with cryptography. Spoofed signals (even from a suicidal pilot) can be defeated. The computer can be programmed to execute a fail-safe if it detects something fishy. The engineering is tedious yet trivial.
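As a rough illustration of what "multiple party verification of commands" could look like, here is a minimal sketch requiring a 2-of-3 quorum of authenticated signatures before a command executes. The party names, keys, and quorum size are illustrative assumptions, not any real avionics protocol; a real system would use asymmetric signatures and hardware key storage rather than shared HMAC secrets.

```python
# Hypothetical sketch: 2-of-3 command authentication with HMAC.
# Keys, party names, and the fail-safe response are assumptions.
import hmac
import hashlib

# Each authorized party holds its own secret key, provisioned out of band.
KEYS = {
    "pilot": b"pilot-secret-key",
    "ground": b"ground-secret-key",
    "autopilot": b"autopilot-secret-key",
}
QUORUM = 2  # a command needs at least 2 valid, independent signatures


def sign(party: str, command: bytes) -> bytes:
    """Signature a party attaches to a command it approves."""
    return hmac.new(KEYS[party], command, hashlib.sha256).digest()


def authorize(command: bytes, signatures: dict) -> str:
    """Return 'execute' only if a quorum of signatures verifies.

    Anything fishy -- a missing, unknown, or forged signature that
    leaves the quorum unmet -- triggers the fail-safe instead.
    """
    valid = sum(
        1
        for party, sig in signatures.items()
        if party in KEYS and hmac.compare_digest(sig, sign(party, command))
    )
    return "execute" if valid >= QUORUM else "fail-safe"
```

For example, a command co-signed by the pilot and the ground controller would execute, while the same command carrying only one signature (a lone suicidal party, or a spoofed link) would drop to the fail-safe.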

But we don't trust humans. We trust motives such as greed and self-preservation. Taking a pilot out of the cockpit doesn't mean we're trusting a machine. It means we're trusting different motives. I'd sooner trust a machine than the motives of a man on the ground. So I say it's either 1) two pilots, 2) one pilot, one ground controller, and one non-overridable on-board autonomous computer with fail-safes for command disagreement or authentication failures, or 3) a totally autonomous on-board computer.
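The arbitration logic in option 2 could be sketched roughly as below. The fail-safe action and the way authentication failure is modeled (a missing command) are illustrative assumptions only; the point is just that the on-board computer executes nothing unless both authenticated parties agree.

```python
# Hypothetical sketch of option 2's non-overridable on-board arbiter.
# The fail-safe maneuver and command strings are assumptions.
from typing import Optional

FAILSAFE = "hold heading and altitude, squawk 7600"


def arbitrate(pilot_cmd: Optional[str], ground_cmd: Optional[str]) -> str:
    """Execute a command only when both parties agree.

    A missing command (lost link or failed authentication) or a
    disagreement between the two parties triggers the fail-safe.
    """
    if pilot_cmd is None or ground_cmd is None:
        return FAILSAFE  # authentication failure or broken link
    if pilot_cmd != ground_cmd:
        return FAILSAFE  # command disagreement
    return pilot_cmd
```

Because the arbiter itself is non-overridable, neither party alone can force an action: a rogue pilot, a rogue ground controller, or a severed link all collapse to the same conservative fail-safe.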