I remain largely unconvinced by quite a few technology "enthusiasms", and this is one of them. A year or two ago I heard a data scientist from the Pentagon say that, for the first time in history, we can produce correct answers without knowing how those answers were actually arrived at. He raised this as an ethical and legal concern: we can observe machine learning models drawing human-like conclusions, yet we often don't understand how they do it. The danger, in my view, lies in the very notion of "correct answers" given some of the uses these models are being put to. The game is also somewhat rigged by the selection and breadth of the input parameters, which are, of computational necessity, far more tightly circumscribed than the real world.