Who is the true black box?

2018 Feb 04
When smart people like Elon Musk and Stephen Hawking talk about something totally outside their fields, we should be careful about their opinions from the start; but when those same smart people use alarmist speech about what they do not understand, we really need to take a more pragmatic view of their argument and look for its flaws, especially when they call for regulation of these “black boxes”. This New York Times article by Vijay Pande, called “Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear”, gives us a sober view of what the real black box is:
Given these types of concerns, the unseeable space between where data goes in and answers come out is often referred to as a “black box” — seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of A.I., the term more broadly suggests an image of being in the “dark” about how the technology works: We put in and provide the data and models and architectures, and then computers provide us answers while continuing to learn on their own, in a way that’s seemingly impossible — and certainly too complicated — for us to understand.
There’s particular concern about this in health care, where A.I. is used to classify which skin lesions are cancerous, to identify very early-stage cancer from blood, to predict heart disease, to determine what compounds in people and animals could extend healthy life spans and more. But these fears about the implications of black box are misplaced. A.I. is no less transparent than the way in which doctors have always worked — and in many cases it represents an improvement, augmenting what hospitals can do for patients and the entire health care system. After all, the black box in A.I. isn’t a new problem due to new tech: Human intelligence itself is — and always has been — a black box.
Let’s take the example of a human doctor making a diagnosis. Afterward, a patient might ask that doctor how she made that diagnosis, and she would probably share some of the data she used to draw her conclusion. But could she really explain how and why she made that decision, what specific data from what studies she drew on, what observations from her training or mentors influenced her, what tacit knowledge she gleaned from her own and her colleagues’ shared experiences and how all of this combined into that precise insight? Sure, she’d probably give a few indicators about what pointed her in a certain direction — but there would also be an element of guessing, of following hunches. And even if there weren’t, we still wouldn’t know that there weren’t other factors involved of which she wasn’t even consciously aware.
If the same diagnosis had been made with A.I., we could draw from all available information on that particular patient — as well as data anonymously aggregated across time and from countless other relevant patients everywhere, to make the strongest evidence-based decision possible. It would be a diagnosis with a direct connection to the data, rather than human intuition based on limited data and derivative summaries of anecdotal experiences with a relatively small number of local patients.
But we make decisions in areas that we don’t fully understand every day — often very successfully — from the predicted economic impacts of policies to weather forecasts to the ways in which we approach much of science in the first place. We either oversimplify things or accept that they’re too complex for us to break down linearly, let alone explain fully. It’s just like the black box of A.I.: Human intelligence can reason and make arguments for a given conclusion, but it can’t explain the complex, underlying basis for how we arrived at a particular conclusion. Think of what happens when a couple get divorced because of one stated cause — say, infidelity — when in reality there’s an entire unseen universe of intertwined causes, forces and events that contributed to that outcome. Why did they choose to split up when another couple in a similar situation didn’t? Even those in the relationship can’t fully explain it. It’s a black box.
A good point raised by this argument: when we get a diagnosis from our doctors, how much of their knowledge (e.g. field experience, practical skills, etc.) is fully interpretable, consistent with the literature, and transparent to the patient at the moment of the examination? And the final argument of the article highlights a common aspect of computation: debugging.
The irony is that compared with human intelligence, A.I. is actually the more transparent of intelligences. Unlike the human mind, A.I. can — and should — be interrogated and interpreted. Like the ability to audit and refine models and expose knowledge gaps in deep neural nets and the debugging tools that will inevitably be built and the potential ability to augment human intelligence via brain-computer interfaces, there are many technologies that could help interpret artificial intelligence in a way we can’t interpret the human brain. In the process, we may even learn more about how human intelligence itself works.
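The claim that a model can be “interrogated and interpreted” is easy to demonstrate on a small scale. The sketch below is my own illustration, not from the article: it trains a tiny logistic-regression model (pure Python, no libraries; the `train_logistic` helper and the toy “diagnosis” data are invented for this example) and then simply prints the learned weights. Unlike a doctor's hunch, every factor the model uses is a number we can read off and audit:

```python
import math
import random

def train_logistic(data, labels, lr=0.5, epochs=200):
    """Tiny logistic-regression trainer (batch gradient descent).

    Returns (weights, bias). The point: this model's "reasoning" is
    fully inspectable -- each weight says how strongly that input
    feature pushes the prediction up or down.
    """
    n_features = len(data[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - y                      # gradient of log-loss w.r.t. z
            for i in range(n_features):
                grad_w[i] += err * x[i]
            grad_b += err
        for i in range(n_features):
            w[i] -= lr * grad_w[i] / len(data)
        b -= lr * grad_b / len(data)
    return w, b

# Toy "diagnosis" data: feature 0 truly determines the label,
# feature 1 is pure noise.
random.seed(0)
data, labels = [], []
for _ in range(200):
    signal = random.uniform(-1, 1)
    noise = random.uniform(-1, 1)
    data.append([signal, noise])
    labels.append(1 if signal > 0 else 0)

w, b = train_logistic(data, labels)
print("weights:", w)  # the weight on the signal feature dominates the noise one
```

Inspecting `w` shows the model assigned a large weight to the predictive feature and a near-zero weight to the noise feature. Real deep networks need heavier tooling (saliency maps, probing, ablations) for the same kind of audit, but the principle the article describes is this one: the model's internals are there to be read, unlike a human's.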
The final conclusion of all this: if you see a smart, well-known person talking about something totally outside of their original field, be skeptical.