One-stop shop to make India 20 times richer

The fundamental mistakes of analysis about income inequality – a brilliant exposition by Thomas Sowell

I’m extracting a brilliant section from the Thomas Sowell Reader for the illumination of mankind. If you understand this, you’ll become immune to some of the major errors/delusions that beset many of the “intellectuals” of this world.



Variations in income can be viewed empirically, on the one hand, or in terms of moral judgments, on the other. Most of the contemporary intelligentsia do both. But, in order to assess the validity of the conclusions they reach, it is advisable to assess the empirical issues and the moral issues separately, rather than attempt to go back and forth between the two, with any expectation of rational coherence.

Empirical Evidence

Given the vast amounts of statistical data on income available from the Census Bureau, the Internal Revenue Service and innumerable research institutes and projects, one might imagine that the bare facts about variations in income would be fairly well known by informed people, even though they might have differing opinions as to the desirability of those particular variations. In reality, however, the most fundamental facts are in dispute, and variations in what are claimed to be facts seem to be at least as great as variations in incomes. Both the magnitude of income variations and the trends in these variations over time are seen in radically different terms by those with different visions as regards the current reality, even aside from what different people may regard as desirable for the future.

Perhaps the most fertile source of misunderstandings about incomes has been the widespread practice of confusing statistical categories with flesh-and-blood human beings. Many statements have been made in the media and in academia, claiming that the rich are gaining not only larger incomes but a growing share of all incomes, widening the income gap between people at the top and those at the bottom. Almost invariably these statements are based on confusing what has been happening over time in statistical categories with what has been happening over time with actual flesh-and-blood people.

A New York Times editorial, for example, declared that “the gap between rich and poor has widened in America.”1 Similar conclusions appeared in a 2007 Newsweek article which referred to this era as “a time when the gap is growing between the rich and the poor—and the superrich and the merely rich,”2 a theme common in such other well-known media outlets as the Washington Post and innumerable television programs. “The rich have seen far greater income gains than have the poor,” according to Washington Post columnist Eugene Robinson.3 A writer in the Los Angeles Times likewise declared, “the gap between rich and poor is growing.”4 According to Professor Andrew Hacker in his book Money: “While all segments of the population enjoyed an increase in income, the top fifth did twenty-four times better than the bottom fifth. And measured by their shares of the aggregate, not just the bottom fifth but the three above it all ended up losing ground.”5 E.J. Dionne of the Washington Post described “the wealthy” as “people who have made almost all the income gains in recent years” and added that they are “undertaxed.”6

Although such discussions have been phrased in terms of people, the actual empirical evidence cited has been about what has been happening over time to statistical categories—and that turns out to be the direct opposite of what has happened over time to flesh-and-blood human beings, most of whom move from one income category to another over time. In terms of statistical categories, it is indeed true that both the amount of income and the proportion of all income received by those in the top 20 percent bracket have risen over the years, widening the gap between the top and bottom quintiles.7 But U.S. Treasury Department data, following specific individuals over time from their tax returns to the Internal Revenue Service, show that in terms of people, the incomes of those particular taxpayers who were in the bottom 20 percent in income in 1996 rose 91 percent by 2005, while the incomes of those particular taxpayers who were in the top 20 percent in 1996 rose by only 10 percent by 2005—and the incomes of those in the top 5 percent and top one percent actually declined.8

While it might seem as if both these radically different sets of statistics cannot be true at the same time, what makes them mutually compatible is that flesh-and-blood human beings move from one statistical category to another over time. When those taxpayers who were initially in the lowest income bracket had their incomes nearly double in a decade, that moved many of them up and out of the bottom quintile—and when those in the top one percent had their incomes cut by about one-fourth, that may well have dropped many, if not most, of them out of the top one percent. Internal Revenue Service data can follow particular individuals over time from their tax returns, which have individual Social Security numbers as identification, while data from the Census Bureau and most other sources follow what happens to statistical categories over time, even though it is not the same individuals in the same categories over the years.
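The mechanics of this compatibility can be shown with a deliberately artificial example. The numbers and the career ladder below are invented for illustration, not taken from the Treasury data: a snapshot of income brackets looks frozen from one decade to the next, even though every individual in it is moving up.

```python
# Toy model with invented numbers: bracket snapshots can stay identical
# even while every individual's income rises, because people move up
# and out of the brackets over time.

def income(age):
    # Stylized career ladder (an assumption, not real data):
    # income doubles with each decade of work experience after age 25.
    return 20_000 * 2 ** ((age - 25) // 10)

# Two census-style snapshots, ten years apart. Each contains one person
# of every age, so the bracket statistics are identical in both years:
snapshot_1995 = sorted(income(a) for a in (25, 35, 45, 55))
snapshot_2005 = sorted(income(a) for a in (25, 35, 45, 55))
assert snapshot_1995 == snapshot_2005  # "the gap between top and bottom persists!"

# But follow the flesh-and-blood person who was 25 (bottom bracket) in 1995:
gain = income(35) / income(25) - 1
print(f"Income of 1995's bottom-bracket person, by 2005: up {gain:.0%}")
```

The category comparison and the individual comparison answer different questions, which is why both can be true at once.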

Many of the same kinds of data used to claim a widening income gap between “the rich” and “the poor”—names usually given to people with different incomes, rather than different wealth, as the terms rich and poor might seem to imply—have led many in the media to likewise claim a growing income gap between the “super-rich” and the “merely rich.” Under the headline “Richest Are Leaving Even the Rich Far Behind,” a front-page New York Times article dubbed the “top 0.1 percent of income earners—the top one-thousandth” as the “hyper-rich” and declared that they “have even left behind people making hundreds of thousands of dollars a year.”9 Once again, the confusion is between what is happening to statistical categories over time and what is happening to flesh-and-blood individuals over time, as they move from one statistical category to another.

Despite the rise in the income of the top 0.1 percent of taxpayers as a statistical category, both absolutely and relative to the incomes in other categories, as flesh-and-blood human beings those individuals who were in that category initially had their incomes actually fall by a whopping 50 percent between 1996 and 2005.10 It is hardly surprising when people whose incomes are cut in half drop out of the top 0.1 percent. What happens to the income of the category over time is not the same as what happens to the people who were in that category at any given point in time. But many among the intelligentsia are ready to seize upon any numbers that seem to fit their vision.11

It is much the same story with data on the top four hundred income earners in the country. As with other data, data on those who were among the top 400 income earners from 1992 to 2000 were not data on the same 400 people throughout the span of time covered. During that span, there were thousands of people in the top 400—which is to say, turnover was high. Fewer than one-fourth of all the people in that category during that span of years were in that category more than one year, and fewer than 13 percent were in that category more than two years.12

Behind many of those numbers and the accompanying alarmist rhetoric is a very mundane fact: Most people begin their working careers at the bottom, earning entry-level salaries. Over time, as they acquire more skills and experience, their rising productivity leads to rising pay, putting them in successively higher income brackets. These are not rare, Horatio Alger stories. These are common patterns among millions of people in the United States and in some other countries. More than three-quarters of those working Americans whose incomes were in the bottom 20 percent in 1975 were also in the top 40 percent of income earners at some point by 1991. Only 5 percent of those who were initially in the bottom quintile were still there in 1991, while 29 percent of those who were initially at the bottom quintile had risen to the top quintile.13 Yet verbal virtuosity has transformed a transient cohort in a given statistical category into an enduring class called “the poor.”

Just as most Americans in statistical categories identified as “the poor” are not an enduring class there, studies in Britain, Canada, New Zealand and Greece show similar patterns of transience among those in low-income brackets at a given time.14 Just over half of all Americans earning at or near the minimum wage are from 16 to 24 years of age15—and of course these individuals cannot remain from 16 to 24 years of age indefinitely, though that age category can of course continue indefinitely, providing many intellectuals with data to fit their preconceptions.

Only by focusing on the income brackets, instead of the actual people moving between those brackets, have the intelligentsia been able to verbally create a “problem” for which a “solution” is necessary. They have created a powerful vision of “classes” with “disparities” and “inequities” in income, caused by “barriers” created by “society.” But the routine rise of millions of people out of the lowest quintile over time makes a mockery of the “barriers” assumed by many, if not most, of the intelligentsia.

Far from using their intellectual skills to clarify the distinction between statistical categories and flesh-and-blood human beings, the intelligentsia have instead used their verbal virtuosity to equate the changing numerical relationship between statistical categories over time with a changing relationship between flesh-and-blood human beings (“the rich” and “the poor”) over time, even though data that follow individual income-earners over time tell a diametrically opposite story from that of data which follow the statistical categories which people are moving into and out of over time.

The confusion between statistical categories and flesh-and-blood human beings is compounded when there is confusion between income and wealth. People called “rich” or “super-rich” have been given these titles by the media on the basis of income, not wealth, even though being rich means having more wealth. According to the Treasury Department: “Among those with the very highest incomes in 1996—the top 1/100 of 1 percent—only 25 percent remained in this group in 2005.”16 If these were genuinely superrich people, it is hard to explain why three-quarters of them are no longer in that category a decade later.

A related, but somewhat different, confusion between statistical categories and human beings has led to many claims in the media and in academia that Americans’ incomes have stagnated or grown only very slowly over the years. For example, over the entire period from 1967 to 2005, median real household income—that is, money income adjusted for inflation—rose by 31 percent.17 For selected periods within that long span, real household incomes rose even less, and those selected periods have often been cited by the intelligentsia to claim that income and living standards have “stagnated.”18 Meanwhile, real per capita income rose by 122 percent over that same span, from 1967 to 2005.19 When a more than doubling of real income per person is called “stagnation,” that is one of the many feats of verbal virtuosity.

The reason for the large discrepancy between growth rate trends in household income and growth rate trends in individual income is very straightforward: The number of persons per household has been declining over the years. As early as 1966, the U.S. Bureau of the Census reported that the number of households was increasing faster than the number of people and concluded: “The main reason for the more rapid rate of household formation is the increased tendency, particularly among unrelated individuals, to maintain their own homes or apartments rather than live with relatives or move into existing households as roomers, lodgers, and so forth.”20 Increasing individual incomes made this possible. As late as 1970, 21 percent of American households contained 5 or more people. But, by 2007, only 10 percent did.21
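The arithmetic behind this discrepancy can be sketched with deliberately round, invented numbers: hold per-person income growth fixed at the 122 percent cited above, shrink the average household from four people to just over two, and household income growth collapses to a small fraction of per capita growth.

```python
# Hedged arithmetic sketch: the population size and household sizes below
# are invented round numbers, chosen only to exhibit the mechanism.
people = 100
income_per_person_1967 = 10_000
income_per_person_2005 = 22_200          # +122%, matching the per capita trend cited

households_1967 = people / 4.0           # four people per average household
households_2005 = people / 2.2           # smaller households a generation later

hh_income_1967 = people * income_per_person_1967 / households_1967   # 40,000
hh_income_2005 = people * income_per_person_2005 / households_2005   # 48,840

growth = hh_income_2005 / hh_income_1967 - 1
print(f"Per capita income: +122%; household income: +{growth:.0%}")
```

Total income more than doubles, but it is spread across nearly twice as many households, so the per-household figure barely moves.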

Despite such obvious and mundane facts, household or family income statistics continue to be widely cited in the media and in academia—and per capita income statistics widely ignored, despite the fact that households are variable in size, while per capita income always refers to the income of one person. However, the statistics that the intelligentsia keep citing are much more consistent with their vision of America than the statistics they keep ignoring.

Just as household statistics understate the rise in the American standard of living over time, they overstate the degree of income inequality, since lower income households tend to have fewer people than upper income households. While there are 39 million people in households whose incomes are in the bottom 20 percent, there are 64 million people in households whose incomes are in the top 20 percent.22 There is nothing mysterious about this either, given the number of low-income mothers living with fatherless children, and low-income lodgers in single room occupancy hotels or rooming houses, for example.

Even if every person in the entire country received exactly the same income, there would still be a significant “disparity” between the average incomes received by households containing 64 million people compared to the average incomes received by households containing 39 million people. That disparity would be even greater if only the incomes of working adults were counted, even if those working adults all had identical incomes. There are more adult heads of household working full-time and year-round in even the top five percent of households than in the bottom twenty percent of households.23
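A minimal sketch with hypothetical households makes the point concrete: give every working adult an identical income, vary only the number of earners per household, and a household-level “disparity” appears out of nothing.

```python
# Hypothetical households: every working adult earns exactly the same
# income, so inequality among persons is zero by construction.
INCOME = 30_000

one_earner_household = 1 * INCOME      # e.g. a single lodger
three_earner_household = 3 * INCOME    # e.g. two parents and a working adult child

ratio = three_earner_household / one_earner_household
print(f"Household income 'gap': {ratio:.0f}x, with zero inequality between persons")
```

The 3-to-1 household “gap” here measures household size, not any difference in what individuals earn.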

Many income statistics are misleading in another sense, when they leave out the income received in kind—such as food stamps and subsidized housing—which often exceeds the value of the cash income received by people in the lower-income brackets. In 2001, for example, transfers in cash or in kind accounted for more than three-quarters of the total economic resources at the disposal of people in the bottom 20 percent.24 In other words, the standard of living of people in the bottom quintile is about three times what the income statistics would indicate. As we shall see, their personal possessions are far more consistent with this fact than with the vision of the intelligentsia.

Moral Considerations

The difference between statistical categories and actual people affects moral, as well as empirical, issues. However concerned we might be about the economic fate of flesh-and-blood human beings, that is very different from being alarmed or outraged about the fate of statistical categories. Michael Harrington’s best-selling book The Other America, for example, dramatized income statistics, lamenting “the anguish” of the poor in America, tens of millions “maimed in body and spirit” constituting “the shame of the other America,” people “caught in a vicious circle” and suffering a “warping of the will and spirit that is a consequence of being poor.”25 But investing statistical data with moral angst does nothing to establish a connection between a transient cohort in statistical categories and an enduring class conjured up through verbal virtuosity.

There was a time when such rhetoric might have made some sense in the United States, and there are other countries where it may still make sense today. But most of those Americans now living below the official poverty line have possessions once considered part of a middle class standard of living, just a generation or so ago. As of 2001, three-quarters of Americans with incomes below the official poverty level had air-conditioning (which only one-third of Americans had in 1971), 97 percent had color television (which fewer than half of Americans had in 1971), 73 percent owned a microwave oven (which fewer than one percent of Americans had in 1971) and 98 percent of “the poor” had either a videocassette recorder or a DVD player (which no one had in 1971). In addition, 72 percent of “the poor” owned a motor vehicle.26 None of this has done much to change the rhetoric of the intelligentsia, however much it may reflect changes in the standard of living of Americans in the lower income brackets.

Typical of the mindset of many intellectuals was a book by Andrew Hacker which referred to the trillions of dollars that become “the personal income of Americans” each year, and said: “Just how this money is apportioned will be the subject of this book.”27 But this money is not apportioned at all. It becomes income through an entirely different process.

The very phrase “income distribution” is tendentious. It starts the economic story in the middle, with a body of income or wealth existing somehow, leaving only the question as to how that income or wealth is to be distributed or “apportioned” as Professor Hacker puts it. In the real world, the situation is quite different. In a market economy, most people receive income as a result of what they produce, supplying other people with some goods or services that those people want, even if that service is only labor. Each recipient of these goods and services pays according to the value which that particular recipient puts on what is received, choosing among alternative suppliers to find the best combination of price and quality—both as judged by the individual who is paying.

This mundane, utilitarian process is quite different from the vision of “income distribution” projected by those among the intelligentsia who invest that vision with moral angst. If there really were some pre-existing body of income or wealth, produced somehow—manna from heaven, as it were—then there would of course be a moral question as to how large a share each member of society should receive. But wealth is produced. It does not just exist somehow. Where millions of individuals are paid according to how much what they produce is valued subjectively by millions of other individuals, it is not at all clear on what basis third parties could say that some goods or services are over-valued or under-valued, that cooking should be valued more or carpentry should be valued less, for example, much less that not working at all is not rewarded enough compared to working.

Nor is there anything mysterious in the fact that at least a thousand times as many people would pay to hear Pavarotti sing as would pay to hear the average person sing.

Where people are paid for what they produce, one person’s output can easily be worth a thousand times as much as another person’s output to those who are the recipients of that output—if only because thousands more people are interested in receiving some products or services than are interested in receiving other products and services—or even the same product or service from someone else. For example, when Tiger Woods left the golf tournament circuit for several months because of an injury, television audiences for the final round of major tournaments declined by varying amounts, ranging up to 61 percent.28 That can translate into millions of dollars’ worth of advertising revenue, based on the number of television viewers.

The fact that one person’s productivity may be a thousand times as valuable as another’s does not mean that one person’s merit is a thousand times as great as another’s. Productivity and merit are very different things, though the two things are often confused with one another. An individual’s productivity is affected by innumerable factors besides the efforts of that individual—being born with a great voice being an obvious example. Being raised in a particular home with a particular set of values and behavior patterns, living in a particular geographic or social environment, merely being born with a normal brain, rather than a brain damaged during the birth process, can make enormous differences in what a given person is capable of producing.

Moreover, third parties are in no position to second-guess the felt value of someone’s productivity to someone else, and it is hard even to conceive how someone’s merit could be judged accurately by another human being who “never walked in his shoes.” An individual raised in terrible home conditions or terrible social conditions may be laudable for having become an average, decent citizen with average work skills as a shoe repairer, while someone raised from birth with every advantage that money and social position can confer may be no more laudable for becoming an eminent brain surgeon. But that is wholly different from saying that repairing shoes is just as valuable to others as being able to repair maladies of the brain. To say that merit may be the same is not to say that productivity is the same. Nor can we logically or morally ignore the discrepancy in the relative urgency of those who want their shoes repaired versus those in need of brain surgery. In other words, it is not a question of simply weighing the interest of one income recipient versus the interest of another income recipient, while ignoring the vastly larger number of other people whose well-being depends on what these individuals produce.

If one prefers an economy in which income is divorced from productivity, then the case for that kind of economy needs to be made explicitly. But that is wholly different from making such a large and fundamental change on the basis of verbal virtuosity in depicting the issue as being simply that of one set of “income distribution” statistics today versus an alternative set of “income distribution” statistics tomorrow.

As for the moral question, whether any given set of human beings can be held responsible for disparities in other people’s productivity—and consequent earnings—depends on how much control any given set of human beings has maintained, or can possibly maintain, over the innumerable factors which have led to existing differences in productivity. Since no human being has control over the past, and many deeply ingrained cultural differences are a legacy of the past, limitations on what can be done in the present are limitations on what can be regarded as moral failings by society. Still less can statistical differences between groups be automatically attributed to “barriers” created by society. Barriers exist in the real world, just as cancer exists. But acknowledging that does not mean that all deaths—or even most deaths—can be automatically attributed to cancer or that most economic differences can be automatically attributed to “barriers,” however fashionable this latter non sequitur may be in some quarters.

Within the constraints of circumstances, there are things which can be done to make opportunities more widely available, or to help those whose handicaps are too severe to expect them to utilize whatever opportunities are already available. In fact, much has already been done and is still being done in a country like the United States, which leads the world in philanthropy, not only in terms of money but also in terms of individuals donating their time to philanthropic endeavors. But only by assuming that everything that has not been done could have been done, disregarding costs and risks, can individuals or societies be blamed because the real world does not match some vision of an ideal society. Nor can the discrepancy between the real and the vision of the ideal be automatically blamed on the existing reality, as if visionaries cannot possibly be mistaken.


Sanjeev Sabhlok
