Humans are going to have to learn to "trust the AI" without understanding why it is right. I often make an analogous point to my kids -- "At your age, if you and Dad disagree, chances are that Dad is right" :-) Of course, I always try to explain the logic behind my thinking, but in the case of some complex machine optimizations (e.g., Go strategy), humans may not be able to understand even the detailed explanations.
In some areas of complex systems -- neuroscience, genomics, molecular dynamics -- we also see machine prediction that is superior to other methods, but difficult even for scientists to understand. When hundreds or thousands of genes combine to control many dozens of molecular pathways, what kind of explanation can one offer for why a particular setting of the controls (DNA pattern) works better than another?
There was never any chance that the functioning of a human brain, the most complex known object in the universe, could be captured in verbal explication of the familiar kind (non-mathematical, non-algorithmic). The researchers who built AlphaGo would be at a loss to explain exactly what is going on inside its neural net...
NYTimes: ... “Last year, it was still quite humanlike when it played,” Mr. Ke said after the game. “But this year, it became like a god of Go.”

On earlier encounters with AlphaGo:
... After he finishes this week’s match, he said, he would focus more on playing against human opponents, noting that the gap between humans and computers was becoming too great. He would treat the software more as a teacher, he said, to get inspiration and new ideas about moves.
“AlphaGo is improving too fast,” he said in a news conference after the game. “AlphaGo is like a different player this year compared to last year.”
“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”