The Times magazine has a great little summary of some recent social science research studying the effects of social influence on judgements of quality. The researchers placed a number of songs online and asked volunteers to rate them. One group rated the songs without seeing anyone else's opinions; the remaining raters were split into a number of separate "worlds", in which each rater could see the opinions of others in their own world. Unsurprisingly, the interactive worlds exhibited large fluctuations: songs judged mediocre by isolated listeners rose on the basis of small initial fluctuations in their ratings (e.g., in a particular world, the first 10 raters may all have liked an otherwise mediocre song, and subsequent listeners were influenced by this, creating a positive feedback loop).
It isn't hard to think of other contexts where this effect plays out. Think of the careers of two otherwise identical competitors (e.g., in science, business, academia). The one who enjoys an initial positive fluctuation may be carried along far beyond their competitor, for no reason of superior merit. The same effect appears in competing technologies, brands, and fashion trends.
If outcomes are so noisy, then successful prediction is more a matter of luck than skill. The successful predictor is not necessarily a better judge of intrinsic quality, since quality is swamped by random fluctuations that are amplified nonlinearly. This picture undermines the rationale for the high compensation awarded to certain CEOs, studio and recording executives, even portfolio managers. In recent years I've often heard the argument that these people deserve their compensation because they generate tremendous value for society by making correct decisions about resource allocation (especially if they sit at the cash nexus of finance). However, the argument depends heavily on the assumption that the people in question are really adding value, rather than just throwing darts. If the system is sufficiently noisy it may be almost impossible to tell one way or the other. We may be rewarding the lucky, rather than the good, and a society with fewer incentives for these people may be equally or nearly equally efficient.
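The worry about rewarding the lucky rather than the good can be made concrete with a quick sketch. Suppose every "expert" predictor is just guessing; in a large enough field, the best track record still looks like skill. (The field size and number of calls below are invented for illustration, not drawn from any study.)

```python
import random

def best_record(n_predictors=1000, n_calls=20, seed=0):
    """Every predictor guesses 50/50 on each binary call; return the
    best hit rate in the whole field, which reflects luck alone."""
    rng = random.Random(seed)
    best = 0
    for _ in range(n_predictors):
        hits = sum(rng.random() < 0.5 for _ in range(n_calls))
        best = max(best, hits)
    return best / n_calls

# The top scorer among pure guessers typically looks far better
# than chance, even though no one has any predictive skill at all.
print(f"best hit rate among pure guessers: {best_record():.0%}")
```

An observer who only sees the winner's record has no easy way to distinguish this outcome from genuine judgment, which is exactly the identification problem described above.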
See a related discussion of studio executives here, and another related discussion here.
As anyone who follows the business of culture is aware, the profits of cultural industries depend disproportionately on the occasional outsize success — a blockbuster movie, a best-selling book or a superstar artist — to offset the many investments that fail dismally. What may be less clear to casual observers is why professional editors, studio executives and talent managers, many of whom have a lifetime of experience in their businesses, are so bad at predicting which of their many potential projects will make it big. How could it be that industry executives rejected, passed over or even disparaged smash hits like “Star Wars,” “Harry Potter” and the Beatles, even as many of their most confident bets turned out to be flops? It may be true, in other words, that “nobody knows anything,” as the screenwriter William Goldman once said about Hollywood. But why? Of course, the experts may simply not be as smart as they would like us to believe. Recent research, however, suggests that reliable hit prediction is impossible no matter how much you know — a result that has implications not only for our understanding of best-seller lists but for business and politics as well.
...But people almost never make decisions independently — in part because the world abounds with so many choices that we have little hope of ever finding what we want on our own; in part because we are never really sure what we want anyway; and in part because what we often want is not so much to experience the “best” of everything as it is to experience the same things as other people and thereby also experience the benefits of sharing.
There’s nothing wrong with these tendencies. Ultimately, we’re all social beings, and without one another to rely on, life would be not only intolerable but meaningless. Yet our mutual dependence has unexpected consequences, one of which is that if people do not make decisions independently — if even in part they like things because other people like them — then predicting hits is not only difficult but actually impossible, no matter how much you know about individual tastes.
The reason is that when people tend to like what other people like, differences in popularity are subject to what is called “cumulative advantage,” or the “rich get richer” effect. This means that if one object happens to be slightly more popular than another at just the right point, it will tend to become more popular still. As a result, even tiny, random fluctuations can blow up, generating potentially enormous long-run differences among even indistinguishable competitors — a phenomenon that is similar in some ways to the famous “butterfly effect” from chaos theory. Thus, if history were to be somehow rerun many times, seemingly identical universes with the same set of competitors and the same overall market tastes would quickly generate different winners: Madonna would have been popular in this world, but in some other version of history, she would be a nobody, and someone we have never heard of would be in her place.
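The cumulative-advantage dynamic Watts describes can be sketched as a simple Pólya-urn-style simulation: two indistinguishable competitors, where each new adopter picks one with probability proportional to its current popularity. (The step counts and seeds below are illustrative assumptions, not part of the research.)

```python
import random

def polya_urn(steps=10_000, seed=None):
    """Two identical competitors; each new adopter picks one with
    probability proportional to its current count. Tiny early
    fluctuations get locked in and amplified."""
    rng = random.Random(seed)
    counts = [1, 1]  # identical starting positions
    for _ in range(steps):
        p_first = counts[0] / (counts[0] + counts[1])
        # index 0 with probability p_first, else index 1
        counts[rng.random() >= p_first] += 1
    return counts

# "Rerun history" a few times: long-run market shares differ wildly
# across runs even though the competitors are indistinguishable.
for seed in range(5):
    a, b = polya_urn(seed=seed)
    print(f"world {seed}: shares {a/(a+b):.2f} vs {b/(a+b):.2f}")
```

Each run is one "version of history": the winner and the margin are set by early random noise, which is the Madonna-versus-nobody point in miniature.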
...Fortunately, the explosive growth of the Internet has made it possible to study human activity in a controlled manner for thousands or even millions of people at the same time. Recently, my collaborators, Matthew Salganik and Peter Dodds, and I conducted just such a Web-based experiment. In our study, published last year in Science, more than 14,000 participants registered at our Web site, Music Lab (www.musiclab.columbia.edu), and were asked to listen to, rate and, if they chose, download songs by bands they had never heard of. Some of the participants saw only the names of the songs and bands, while others also saw how many times the songs had been downloaded by previous participants. This second group — in what we called the “social influence” condition — was further split into eight parallel “worlds” such that participants could see the prior downloads of people only in their own world. We didn’t manipulate any of these rankings — all the artists in all the worlds started out identically, with zero downloads — but because the different worlds were kept separate, they subsequently evolved independently of one another.
This setup let us test the possibility of prediction in two very direct ways. First, if people know what they like regardless of what they think other people like, the most successful songs should draw about the same amount of the total market share in both the independent and social-influence conditions — that is, hits shouldn’t be any bigger just because the people downloading them know what other people downloaded. And second, the very same songs — the “best” ones — should become hits in all social-influence worlds.
What we found, however, was exactly the opposite. In all the social-influence worlds, the most popular songs were much more popular (and the least popular songs were less popular) than in the independent condition. At the same time, however, the particular songs that became hits were different in different worlds, just as cumulative-advantage theory would predict. Introducing social influence into human decision making, in other words, didn’t just make the hits bigger; it also made them more unpredictable. ...
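A toy version of this experimental setup reproduces both findings in simulation. The "appeal" values, listener counts, influence rule, and number of worlds below are invented for illustration; nothing here uses or reproduces the actual Music Lab data.

```python
import random

def run_world(appeals, listeners=2000, social=True, seed=None):
    """Each listener downloads one song, chosen with probability
    proportional to its intrinsic appeal, boosted by prior downloads
    when the world has social influence."""
    rng = random.Random(seed)
    downloads = [0] * len(appeals)
    for _ in range(listeners):
        weights = [a * (1 + d if social else 1)
                   for a, d in zip(appeals, downloads)]
        downloads[rng.choices(range(len(appeals)), weights)[0]] += 1
    return downloads

appeals = [1.0] * 10  # ten indistinguishable songs

# Independent condition: no one sees prior downloads.
independent = run_world(appeals, social=False, seed=0)
print("independent top share:", max(independent) / sum(independent))

# Eight separate social-influence worlds, evolving independently.
for s in range(1, 9):
    w = run_world(appeals, social=True, seed=s)
    print(f"world {s}: hit = song {w.index(max(w))}, "
          f"share = {max(w)/sum(w):.2f}")
```

Across the social-influence worlds, the top song's share is typically much larger than in the independent condition (hits get bigger), and different worlds crown different songs (hits get less predictable), mirroring the two results reported above.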