Can Machines Decide What's Worthwhile?
Is It Worthwhile?
As a graduate student I periodically attended talks in the mathematics department. It would always be the same. A professor in the department would introduce the speaker. The speaker would read from a sheaf of papers and occasionally write or draw on the chalkboard. After an hour the speaker would finish and the professor would ask if there were any questions. Invariably, there would be none. We'd all applaud the speaker and be off on our separate ways.
I always left these talks a bit puzzled. Why were there no questions? Was it fear? That was definitely my reason. It might have been boredom, but why even attend then? One thing was for sure: it couldn't have been because the talk was clear. It wasn't to a complete novice like me. But it probably wasn't clear to professional mathematicians either because of how specialized these talks were. So the professionals listened but didn't formulate any specific questions.
After a few of these puzzling sessions, another reason for the silence suddenly came to me. Of course there were no questions about the paper itself: everyone simply assumed the speaker had made no mistakes in the reasoning from propositions to proof. Rather, the central question was about the *value* of the work. Is it an *interesting* topic? How *creative* are the researcher's methods? What is the paper's *contribution* to mathematics? Does it substantially advance the field? Does it lead to surprising results in other areas of mathematics?
These are questions about the *value* of the paper. It's not easy to ask them at the end of a talk! Hence the silence.
A Problem for Machines?
This question of value can be asked of machines too. Specifically, it can be asked of the computer programs that implement machine learning. Yes, a machine can translate English to Japanese with eerie fidelity, learn to recognize faces, and distinguish malignant from benign tumors. But can machines be creative? Can they choose to learn interesting things? Can they determine that learning X is more valuable than learning Y and proceed to do just that? Can they *motivate* people to then follow them in this pursuit?
We might think there's a limit to what machines can do. Here are Erik Brynjolfsson and Andrew McAfee in a *Harvard Business Review* article, "The Business of Artificial Intelligence":
A great place to start a discussion of the limits of AI is with Pablo Picasso’s observation about computers: “But they are useless. They can only give you answers.” They’re actually far from useless, as ML’s [Machine Learning's] recent triumphs show, but Picasso’s observation still provides insight. Computers are devices for answering questions, not for posing them. That means entrepreneurs, innovators, scientists, creators, and other kinds of people who figure out what problem or opportunity to tackle next, or what new territory to explore, will continue to be essential.
And here is *New York Times* columnist David Brooks on "What Data Can and Cannot Do":
Data struggles with the social. Your brain is pretty bad at math (quick, what’s the square root of 437), but it’s excellent at social cognition. People are really good at mirroring each other’s emotional states, at detecting uncooperative behavior and at assigning value to things through emotion.
Computer-driven data analysis, on the other hand, excels at measuring the quantity of social interactions but not the quality. Network scientists can map your interactions with the six co-workers you see during 76 percent of your days, but they can’t capture your devotion to the childhood friends you see twice a year, let alone Dante’s love for Beatrice, whom he met twice.
And back to Brynjolfsson and McAfee:

We think the biggest and most important opportunities for human smarts in this new age of superpowerful ML lie at the intersection of two areas: figuring out what problems to work on next, and persuading a lot of people to tackle them and go along with the solutions. This is a decent definition of leadership, which is becoming much more important in the second machine age.
There may be limits to what machines can do, but surely they're not the ones described above. We humans know how to ask interesting questions. For machines to do the same, we just have to give them a training set containing interesting and uninteresting questions that humans have come up with in a particular domain. There may be no explicit, rules-based pattern to creativity, but learning patterns we can't articulate is exactly what machine learning is for, and exactly what it's great at. Give a machine learning algorithm a good training set of (Question, Domain, Interesting? (Yes/No)) examples and let it figure it out. Why not?
And if measuring the quality of interaction and not just quantity is the issue, let the machine know about devotion level in addition to interaction frequency. Why not?
Want to teach machines to motivate? Give them scripts of motivating speeches (and not-so-motivating ones) and let them do some learning. Why not?
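The "give it labeled examples and let it learn" recipe above can be sketched concretely. Here is a minimal naive Bayes text classifier in plain Python, trained on a tiny, entirely made-up set of "interesting" and "dull" questions (the data and labels are illustrative assumptions, not a real dataset):

```python
from collections import Counter
import math

# Hypothetical toy training set: (question text, label) pairs.
TRAIN = [
    ("does this proof generalize to higher dimensions", "interesting"),
    ("what surprising connections does this reveal", "interesting"),
    ("is this a creative new method", "interesting"),
    ("restate the definition again", "dull"),
    ("what page is the lemma on", "dull"),
    ("repeat the last slide please", "dull"),
]

def train(examples):
    """Count word frequencies per label (the naive Bayes 'fit' step)."""
    word_counts = {}          # label -> Counter of words
    label_counts = Counter()  # label -> number of examples
    for text, label in examples:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest smoothed log-probability."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # prior
        counts = word_counts[label]
        denom = sum(counts.values()) + len(vocab)
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAIN)
print(classify("is this method creative and surprising", word_counts, label_counts))
# -> interesting
```

This is only a sketch of the pattern the argument relies on: the machine never receives a definition of "interesting"; it receives examples labeled by humans and extracts whatever statistical regularities separate the two piles. Whether those regularities capture what we *mean* by interesting is, of course, the open question.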
Are there Limits for Creative, Interesting Machines?
People widely recognized as creative (Picasso, for example) have a hard time explaining exactly what it is they do to be creative. Ask Roger Federer to explain how he hits his glorious backhand and you're likely to get an unsatisfying answer. But this is precisely where machine learning shines: on the things we can do but can't specify the rules for. We can point to great art, glorious backhands, motivating sentiments, and terrible examples of each. And as long as we can do that, machines can get very good at distinguishing the great from the run of the mill.
So as long as we are able to distinguish what's interesting, creative, or worthwhile from what isn't, we can build training sets for machines to divine the patterns behind these "traits". The real problem is that we may not know ourselves (or there may be no answer). There's often debate about what's worthwhile. And what's worthwhile has a funny way of showing itself only years after it's been done and dusted.
The real limit to what machines can accomplish could ultimately be a reflection of our own limit in answering questions like *what is it for something to be worthwhile?*
For more big questions from Jitendra, try reading "How Is Learning Even Possible?"