One-stop shop to make India 20 times richer

Category: Economics

Minimum wage’s origin in ultra-racists who wanted the poor/blacks/minorities to be wiped out

Yesterday I had a debate with a friend who argued that minimum wages are good. I explained to him that the concept of the minimum wage is racist and driven by the eugenic desire to kill blacks and minorities. He didn’t think that’s the case. Well, ignorance doesn’t make someone right.

I’ll be providing him with a link to this blog post of mine, plus the extract below from a peer-reviewed article published in the Journal of Economic Perspectives (JEP).

SOURCE:

Thomas C. Leonard, “Retrospectives: Eugenics and Economics in the Progressive Era,” Journal of Economic Perspectives, Volume 19, Number 4, Fall 2005, pp. 207–224.

The Eugenic Effects of Minimum Wage Laws

During the second half of the Progressive Era, beginning roughly in 1908, progressive economists and their reform allies achieved many statutory victories, including state laws that regulated working conditions, banned child labor, instituted “mothers’ pensions,” capped working hours and, the sine qua non, fixed minimum wages. In using eugenics to justify exclusionary immigration legislation, the race-suicide theorists offered a model to economists advocating labor reforms, notably those affiliated with the American Association for Labor Legislation, the organization of academic economists that Orloff and Skocpol (1984, p. 726) call the “leading association of U.S. social reform advocates in the Progressive Era.”

Progressive economists, like their neoclassical critics, believed that binding minimum wages would cause job losses. However, the progressive economists also believed that the job loss induced by minimum wages was a social benefit, as it performed the eugenic service of ridding the labor force of the “unemployable.” Sidney and Beatrice Webb (1897 [1920], p. 785) put it plainly: “With regard to certain sections of the population [the “unemployable”], this unemployment is not a mark of social disease, but actually of social health.” “[O]f all ways of dealing with these unfortunate parasites,” Sidney Webb (1912, p. 992) opined in the Journal of Political Economy, “the most ruinous to the community is to allow them to unrestrainedly compete as wage earners.” A minimum wage was seen to operate eugenically through two channels: by deterring prospective immigrants (Henderson, 1900) and also by removing from employment the “unemployable,” who, thus identified, could be, for example, segregated in rural communities or sterilized.

The notion that minimum-wage induced disemployment is a social benefit distinguishes its progressive proponents from their neoclassical critics, such as Alfred Marshall (1897), Philip Wicksteed (1913), A. C. Pigou (1913) and John Bates Clark (1913), who regarded job loss as a social cost of minimum wages, not as a putative social benefit (Leonard, 2000).

Columbia’s Henry Rogers Seager, a leading progressive economist who served as president of the AEA in 1922, provides an example. Worthy wage-earners, Seager (1913a, p. 12) argued, need protection from the “wearing competition of the casual worker and the drifter” and from the other “unemployable” who unfairly drag down the wages of more deserving workers (1913b, pp. 82–83). The minimum wage protects deserving workers from the competition of the unfit by making it illegal to work for less. Seager (1913a, p. 9) wrote: “The operation of the minimum wage requirement would merely extend the definition of defectives to embrace all individuals, who even after having received special training, remain incapable of adequate self-support.” Seager (p. 10) made clear what should happen to those who, even after remedial training, could not earn the legal minimum: “If we are to maintain a race that is to be made up of capable, efficient and independent individuals and family groups we must courageously cut off lines of heredity that have been proved to be undesirable by isolation or sterilization ….”

The unemployable were thus those workers who earned less than some measure of an adequate standard of living, a standard the British called a “decent maintenance” and Americans referred to as a “living wage.” For labor reformers, firms that paid workers less than the living wage to which they were entitled were deemed parasitic, as were the workers who accepted such wages—on grounds that someone (charity, state, other members of the household) would need to make up the difference.

For progressives, a legal minimum wage had the useful property of sorting the unfit, who would lose their jobs, from the deserving workers, who would retain their jobs. Royal Meeker, a Princeton economist who served as Woodrow Wilson’s U.S. Commissioner of Labor, opposed a proposal to subsidize the wages of poor workers for this reason. Meeker preferred a wage floor because it would disemploy unfit workers and thereby enable their culling from the work force. “It is much better to enact a minimum-wage law even if it deprives these unfortunates of work,” argued Meeker (1910, p. 554). “Better that the state should support the inefficient wholly and prevent the multiplication of the breed than subsidize incompetence and unthrift, enabling them to bring forth more of their kind.” A. B. Wolfe (1917, p. 278), an American progressive economist who would later become president of the AEA in 1943, also argued for the eugenic virtues of removing from employment those who “are a burden on society.”

In his Principles of Economics, Frank Taussig (1921, pp. 332–333) asked rhetorically, “how to deal with the unemployable?” Taussig identified two classes of unemployable worker, distinguishing the aged, infirm and disabled from the “feebleminded … those saturated with alcohol or tainted with hereditary disease … [and] the irretrievable criminals and tramps….” The latter class, Taussig proposed, “should simply be stamped out.” “We have not reached the stage,” Taussig allowed, “where we can proceed to chloroform them once and for all; but at least they can be segregated, shut up in refuges and asylums, and prevented from propagating their kind.”5

The progressive idea that the unemployable could not earn a living wage was bound up with the progressive view of wage determination. Unlike the economists who pioneered the still-novel marginal productivity theory, most progressives agreed that wages should be determined by the amount that was necessary to provide a reasonable standard of living, not by productivity, and that the cost of this entitlement should fall on firms.6

But how should a living wage be determined? Were workers with more dependents, and thus higher living expenses, thereby entitled to higher wages? Arguing that wages should be a matter of an appropriate standard of living opened the door, in this era of eugenics, to theories of wage determination that were grounded in biology, in particular to the idea that “low-wage races” were biologically predisposed to low wages, or “under-living.”7 Edward A. Ross (1936, p. 70), the proponent of race-suicide theory, argued that “the Coolie cannot outdo the American, but he can underlive him.” “Native” workers have higher productivity, claimed Ross, but because Chinese immigrants are racially disposed to work for lower wages, they displace the native workers.

In his Races and Immigrants, the University of Wisconsin economist and social reformer John R. Commons argued that wage competition not only lowers wages, it also selects for the unfit races. “The competition has no respect for the superior races,” said Commons (1907, p. 151), “the race with lowest necessities displaces others.” Because race rather than productivity determined living standards, Commons could populate his low-wage-races category with the industrious and lazy alike. African Americans were, for Commons (p. 136), “indolent and fickle,” which explained why, Commons argued, slavery was required: “The negro could not possibly have found a place in American industry had he come as a free man … [I]f such races are to adopt that industrious life which is second nature to races of the temperate zones, it is only through some form of compulsion.” Similarly, Wharton School reformer Scott Nearing (1915, p. 22) volunteered that if “an employer has a Scotchman working for him at $3 a day [and] an equally efficient Lithuanian offers to do the same work for $2 … the work is given to the low bidder.”

When U.S. labor reformers reported on labor legislation in countries more precocious with respect to labor reform, they favorably commented on the eugenic efficacy of minimum wages in excluding the “low-wage races” from work. Harvard’s Arthur Holcombe (1912, p. 21), a member of the Massachusetts Minimum Wage Commission, referred approvingly to the intent of Australia’s minimum wage law to “protect the white Australian’s standard of living from the invidious competition of the colored races, particularly of the Chinese.” Florence Kelley (1911, p. 304), perhaps the most influential U.S. labor reformer of the day, also endorsed the Australian minimum-wage law as “redeeming the sweated trades” by preventing the “unbridled competition” of the unemployable, the “women, children, and Chinese [who] were reducing all the employees to starvation …”

For these progressives, race determined the standard of living, and the standard of living determined the wage. Thus were immigration restriction and labor legislation, especially minimum wages, justified for their eugenic effects. Invidious distinction, whether founded on the putatively greater fertility of the unfit, or upon their putatively greater predisposition to low wages, lay at the heart of the reforms we today see as the hallmark of the Progressive Era.

Not all progressives endorsed eugenics, and not all of those who endorsed eugenics were progressives, traditionally defined, still less proponents of minimum wages. Taussig was not especially well-disposed to minimum wages, but his intemperate remarks measure the influence of eugenic ideas upon eco­nomics in the Progressive Era.

====

6 As Lawrence Glickman (1997, pp. 85–91) argues, the progressive view of wage determination drew upon the labor union theory of the 1880s. Frank Foster of the American Federation of Labor, for example, argued (as quoted in Mussey, 1927, p. 236) that “it is not commonly the value of what is produced which chiefly determines the wage rate, but the nature and degree of the wants of the workers, as embodied in their customary mode of living.” Likewise, the influential and pioneering labor reformer Carroll Wright (1882, pp. 4–5), one of the first Americans to call for a legal minimum wage, asserted that “[t]he labor question” is a matter of the “wants of the wage-laborer.”

7 Progressives also argued that there was a “female” standard of living, something that was determined by women’s biological nature, or by their “natural” roles as mothers and helpmeets (Leonard, 2005).

 


The most readable Thomas Sowell – on the shambles that is leftism

Some of Sowell’s wonderful articles on Leftism:

12/06/16: The Left’s Gambles

11/02/16: The Left’s Vision

10/21/16: The Left and the Masses: Part III
10/20/16: The Left and the Masses: Part II
10/19/16: The Left and the Masses

12/22/15: The Busybody Left

07/23/15: The Fact-Free Left: Part II
07/22/15: The Fact-Free Left

03/11/14: The Left Versus Minorities

04/25/14: The High Cost of Liberalism: Part III
04/24/14: The High Cost of Liberalism: Part II
04/23/14: The High Cost of Liberalism

08/06/13: Busybody Politics

07/05/13: The Mindset of the Left: Part IV
07/04/13: The Mindset of the Left: Part III
07/03/13: The Mindset of the Left: Part II
07/02/13: The Mindset of the Left

12/02/08: Freedom and the Left

09/09/08: The Vision of the Left

05/16/07: Presumptions of the Left
05/15/07: The Anger Of The Left

08/24/06: The Left and crime, Part II
08/23/06: The Left and crime

01/07/05: The Left monopoly

08/06/04: The left’s vocabulary
08/05/04: The left’s vision

12/05/03: The high cost of busybodies, Part IV
12/04/03: The high costs of busybodies, Part III
12/03/03: The high costs of busybodies, Part II
12/02/03: The high costs of busybodies

12/04/00: Poverty and the Left

5/29/98: The insulation of the Left


The most brilliant Thomas Sowell – on the nonsensical public policy concept of “affordability”

I regret I’m discovering Sowell’s extensive work only now. It is an amazing pleasure to read his articles and books.

In this one, I’m noting a few of his insights on the ultra-spurious concept of affordability:

===EXTRACT===

“Many of the cant words of politics are simply evasions of reality. A prime example is the notion of making housing, college, health insurance, or other things “affordable.”

“Virtually anything can be made more affordable in isolation, simply by transferring resources to it from elsewhere in the economy, and having most of the costs absorbed by the U.S. Treasury.

“The federal government could make a Rolls Royce affordable for every American, but we would not be a richer country as a result. We would in fact be a much poorer country”

– Thomas Sowell

===ARTICLE===

“Affordability” strikes again

prices are conveying an underlying reality about costs and scarcity — a reality that is not going to be changed by throwing the taxpayers’ money around to make things “affordable.”

housing prices have been forced up by artificially created scarcities, many under pious political labels. In one of the hotbeds of environmentalism and other forms of liberalism — California’s Marin County, across from San Francisco — the average price of a house rose five-fold in just one decade.

===ARTICLE===

The ‘Affordable Housing’ Fraud

===ARTICLE===

Who can afford it?

===ARTICLE===

Affordable housing

 


The fundamental mistakes of analysis about income inequality – a brilliant exposition by Thomas Sowell

I’m extracting a brilliant section from the Thomas Sowell Reader for the illumination of mankind. If you understand this, you’ll become immune to some of the major errors/delusions that beset many of the “intellectuals” of this world.

===

“INCOME DISTRIBUTION” – by Thomas Sowell

Variations in income can be viewed empirically, on the one hand, or in terms of moral judgments, on the other. Most of the contemporary intelligentsia do both. But, in order to assess the validity of the conclusions they reach, it is advisable to assess the empirical issues and the moral issues separately, rather than attempt to go back and forth between the two, with any expectation of rational coherence.

Empirical Evidence

Given the vast amounts of statistical data on income available from the Census Bureau, the Internal Revenue Service and innumerable research institutes and projects, one might imagine that the bare facts about variations in income would be fairly well known by informed people, even though they might have differing opinions as to the desirability of those particular variations. In reality, however, the most fundamental facts are in dispute, and variations in what are claimed to be facts seem to be at least as great as variations in incomes. Both the magnitude of income variations and the trends in these variations over time are seen in radically different terms by those with different visions as regards the current reality, even aside from what different people may regard as desirable for the future.

Perhaps the most fertile source of misunderstandings about incomes has been the widespread practice of confusing statistical categories with flesh-and-blood human beings. Many statements have been made in the media and in academia, claiming that the rich are gaining not only larger incomes but a growing share of all incomes, widening the income gap between people at the top and those at the bottom. Almost invariably these statements are based on confusing what has been happening over time in statistical categories with what has been happening over time with actual flesh-and-blood people.

A New York Times editorial, for example, declared that “the gap between rich and poor has widened in America.”1 Similar conclusions appeared in a 2007 Newsweek article which referred to this era as “a time when the gap is growing between the rich and the poor—and the superrich and the merely rich,”2 a theme common in such other well-known media outlets as the Washington Post and innumerable television programs. “The rich have seen far greater income gains than have the poor,” according to Washington Post columnist Eugene Robinson.3 A writer in the Los Angeles Times likewise declared, “the gap between rich and poor is growing.”4 According to Professor Andrew Hacker in his book Money: “While all segments of the population enjoyed an increase in income, the top fifth did twenty-four times better than the bottom fifth. And measured by their shares of the aggregate, not just the bottom fifth but the three above it all ended up losing ground.”5 E.J. Dionne of the Washington Post described “the wealthy” as “people who have made almost all the income gains in recent years” and added that they are “undertaxed.”6

Although such discussions have been phrased in terms of people, the actual empirical evidence cited has been about what has been happening over time to statistical categories—and that turns out to be the direct opposite of what has happened over time to flesh-and-blood human beings, most of whom move from one income category to another over time. In terms of statistical categories, it is indeed true that both the amount of income and the proportion of all income received by those in the top 20 percent bracket have risen over the years, widening the gap between the top and bottom quintiles.7 But U.S. Treasury Department data, following specific individuals over time from their tax returns to the Internal Revenue Service, show that in terms of people, the incomes of those particular taxpayers who were in the bottom 20 percent in income in 1996 rose 91 percent by 2005, while the incomes of those particular taxpayers who were in the top 20 percent in 1996 rose by only 10 percent by 2005—and the incomes of those in the top 5 percent and top one percent actually declined.8

While it might seem as if both these radically different sets of statistics cannot be true at the same time, what makes them mutually compatible is that flesh-and-blood human beings move from one statistical category to another over time. When those taxpayers who were initially in the lowest income bracket had their incomes nearly double in a decade, that moved many of them up and out of the bottom quintile—and when those in the top one percent had their incomes cut by about one-fourth, that may well have dropped many, if not most, of them out of the top one percent. Internal Revenue Service data can follow particular individuals over time from their tax returns, which have individual Social Security numbers as identification, while data from the Census Bureau and most other sources follow what happens to statistical categories over time, even though it is not the same individuals in the same categories over the years.
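To see how both sets of numbers can be true at once, here is a minimal toy simulation (Python; every parameter is made up for illustration and bears no relation to the actual Treasury or Census figures). It follows the same simulated individuals across two years and then reports the results both ways: tracking the people, and tracking the quintile categories.

```python
# Toy simulation (illustrative only; NOT the Treasury or Census data) of how
# quintile *categories* can show a widening gap even while the *people* who
# started at the bottom gain the most. Incomes are assumed log-normal with
# partially mean-reverting individual dynamics; every parameter is made up.
import math
import random
import statistics

random.seed(0)
N = 100_000

# Year-1 incomes: spread around a median of roughly $40,000.
year1 = [40_000 * math.exp(random.gauss(0, 0.7)) for _ in range(N)]
median1 = statistics.median(year1)

# Year-2 incomes: overall growth, partial reversion toward the median (low
# earners tend to rise as they gain skills; high earners tend to fall back),
# plus fresh noise large enough that overall dispersion still widens.
def next_income(income):
    reverted = (income ** 0.6) * (median1 ** 0.4)
    return 1.3 * reverted * math.exp(random.gauss(0, 0.65))

year2 = [next_income(income) for income in year1]

k = N // 5
order = sorted(range(N), key=lambda i: year1[i])
bottom_people, top_people = order[:k], order[-k:]

# (a) Follow the PEOPLE who started in each quintile.
def avg_change(indices):
    return statistics.mean(year2[i] / year1[i] for i in indices) - 1

print(f"people who started in the bottom 20%: {avg_change(bottom_people):+.0%} average change")
print(f"people who started in the top 20%:    {avg_change(top_people):+.0%} average change")

# (b) Follow the CATEGORIES: quintile averages computed separately each year
#     (different people occupy the quintiles in different years).
def top_to_bottom_ratio(incomes):
    s = sorted(incomes)
    return statistics.mean(s[-k:]) / statistics.mean(s[:k])

print(f"top-to-bottom category ratio, year 1: {top_to_bottom_ratio(year1):.1f}")
print(f"top-to-bottom category ratio, year 2: {top_to_bottom_ratio(year2):.1f}")
```

With these made-up parameters, the individuals who began in the bottom quintile show by far the largest percentage gains, while the ratio between the top and bottom category averages nonetheless widens – the same seemingly contradictory pattern described above.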

Many of the same kinds of data used to claim a widening income gap between “the rich” and “the poor”—names usually given to people with different incomes, rather than different wealth, as the terms rich and poor might seem to imply—have led many in the media to likewise claim a growing income gap between the “super-rich” and the “merely rich.” Under the headline “Richest Are Leaving Even the Rich Far Behind,” a front-page New York Times article dubbed the “top 0.1 percent of income earners—the top one-thousandth” as the “hyper-rich” and declared that they “have even left behind people making hundreds of thousands of dollars a year.”9 Once again, the confusion is between what is happening to statistical categories over time and what is happening to flesh-and-blood individuals over time, as they move from one statistical category to another.

Despite the rise in the income of the top 0.1 percent of taxpayers as a statistical category, both absolutely and relative to the incomes in other categories, as flesh-and-blood human beings those individuals who were in that category initially had their incomes actually fall by a whopping 50 percent between 1996 and 2005.10 It is hardly surprising when people whose incomes are cut in half drop out of the top 0.1 percent. What happens to the income of the category over time is not the same as what happens to the people who were in that category at any given point in time. But many among the intelligentsia are ready to seize upon any numbers that seem to fit their vision.11

It is much the same story with data on the top four hundred income earners in the country. As with other data, data on those who were among the top 400 income earners from 1992 to 2000 were not data on the same 400 people throughout the span of time covered. During that span, there were thousands of people in the top 400—which is to say, turnover was high. Fewer than one-fourth of all the people in that category during that span of years were in that category more than one year, and fewer than 13 percent were in that category more than two years.12

Behind many of those numbers and the accompanying alarmist rhetoric is a very mundane fact: Most people begin their working careers at the bottom, earning entry-level salaries. Over time, as they acquire more skills and experience, their rising productivity leads to rising pay, putting them in successively higher income brackets. These are not rare, Horatio Alger stories. These are common patterns among millions of people in the United States and in some other countries. More than three-quarters of those working Americans whose incomes were in the bottom 20 percent in 1975 were also in the top 40 percent of income earners at some point by 1991. Only 5 percent of those who were initially in the bottom quintile were still there in 1991, while 29 percent of those who were initially in the bottom quintile had risen to the top quintile.13 Yet verbal virtuosity has transformed a transient cohort in a given statistical category into an enduring class called “the poor.”

Just as most Americans in statistical categories identified as “the poor” are not an enduring class there, studies in Britain, Canada, New Zealand and Greece show similar patterns of transience among those in low-income brackets at a given time.14 Just over half of all Americans earning at or near the minimum wage are from 16 to 24 years of age15—and of course these individuals cannot remain from 16 to 24 years of age indefinitely, though that age category can of course continue indefinitely, providing many intellectuals with data to fit their preconceptions.

Only by focussing on the income brackets, instead of the actual people moving between those brackets, have the intelligentsia been able to verbally create a “problem” for which a “solution” is necessary. They have created a powerful vision of “classes” with “disparities” and “inequities” in income, caused by “barriers” created by “society.” But the routine rise of millions of people out of the lowest quintile over time makes a mockery of the “barriers” assumed by many, if not most, of the intelligentsia.

Far from using their intellectual skills to clarify the distinction between statistical categories and flesh-and-blood human beings, the intelligentsia have instead used their verbal virtuosity to equate the changing numerical relationship between statistical categories over time with a changing relationship between flesh-and-blood human beings (“the rich” and “the poor”) over time, even though data that follow individual income-earners over time tell a diametrically opposite story from that of data which follow the statistical categories which people are moving into and out of over time.

The confusion between statistical categories and flesh-and-blood human beings is compounded when there is confusion between income and wealth. People called “rich” or “super-rich” have been given these titles by the media on the basis of income, not wealth, even though being rich means having more wealth. According to the Treasury Department: “Among those with the very highest incomes in 1996—the top 1/100 of 1 percent—only 25 percent remained in this group in 2005.”16 If these were genuinely superrich people, it is hard to explain why three-quarters of them are no longer in that category a decade later.

A related, but somewhat different, confusion between statistical categories and human beings has led to many claims in the media and in academia that Americans’ incomes have stagnated or grown only very slowly over the years. For example, over the entire period from 1967 to 2005, median real household income—that is, money income adjusted for inflation—rose by 31 percent.17 For selected periods within that long span, real household incomes rose even less, and those selected periods have often been cited by the intelligentsia to claim that income and living standards have “stagnated.”18 Meanwhile, real per capita income rose by 122 percent over that same span, from 1967 to 2005.19 When a more than doubling of real income per person is called “stagnation,” that is one of the many feats of verbal virtuosity.

The reason for the large discrepancy between growth rate trends in household income and growth rate trends in individual income is very straightforward: The number of persons per household has been declining over the years. As early as 1966, the U.S. Bureau of the Census reported that the number of households was increasing faster than the number of people and concluded: “The main reason for the more rapid rate of household formation is the increased tendency, particularly among unrelated individuals, to maintain their own homes or apartments rather than live with relatives or move into existing households as roomers, lodgers, and so forth.”20 Increasing individual incomes made this possible. As late as 1970, 21 percent of American households contained 5 or more people. But, by 2007, only 10 percent did.21
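A back-of-the-envelope sketch (hypothetical round numbers, chosen only to mirror the direction of the figures quoted above, not to reproduce them) shows how shrinking households drive the two growth rates apart:

```python
# Hypothetical numbers only: the same population, with per-person income more
# than doubling, spread over many more (and therefore smaller) households.
people           = 300
households_start = 100        # 3.0 persons per household at the start
households_end   = 170        # ~1.8 persons per household at the end
income_start     = 10_000     # per-person income, start of period
income_end       = 22_000     # per-person income, end of period

avg_household_start = people * income_start / households_start   # 30,000
avg_household_end   = people * income_end / households_end       # ~38,800

print(f"per-capita income growth:        {income_end / income_start - 1:.0%}")
print(f"average household income growth: {avg_household_end / avg_household_start - 1:.0%}")
# -> roughly +120% per person, but only about +29% per household
```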

Despite such obvious and mundane facts, household or family income statistics continue to be widely cited in the media and in academia—and per capita income statistics widely ignored, despite the fact that households are variable in size, while per capita income always refers to the income of one person. However, the statistics that the intelligentsia keep citing are much more consistent with their vision of America than the statistics they keep ignoring.

Just as household statistics understate the rise in the American standard of living over time, they overstate the degree of income inequality, since lower income households tend to have fewer people than upper income households. While there are 39 million people in households whose incomes are in the bottom 20 percent, there are 64 million people in households whose incomes are in the top 20 percent.22 There is nothing mysterious about this either, given the number of low-income mothers living with fatherless children, and low-income lodgers in single room occupancy hotels or rooming houses, for example.

Even if every person in the entire country received exactly the same income, there would still be a significant “disparity” between the average incomes received by households containing 64 million people compared to the average incomes received by households containing 39 million people. That disparity would be even greater if only the incomes of working adults were counted, even if those working adults all had identical incomes. There are more adult heads of household working full-time and year-around in even the top five percent of households than in the bottom twenty percent of households.23
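Household size alone is enough to manufacture such a “disparity”. In the sketch below, the identical per-person income and the per-quintile household count are hypothetical round numbers; the two population figures are the ones cited in the text:

```python
# Back-of-the-envelope: every person earns exactly the same income, but the
# quintiles contain different numbers of people per household. The household
# count per quintile is a hypothetical round number (each quintile holds the
# same number of households by construction).
income_per_person       = 30_000        # assumed identical for everyone
people_bottom_quintile  = 39_000_000    # figure cited in the text
people_top_quintile     = 64_000_000    # figure cited in the text
households_per_quintile = 23_000_000    # hypothetical

avg_bottom = income_per_person * people_bottom_quintile / households_per_quintile
avg_top    = income_per_person * people_top_quintile / households_per_quintile

print(f"average 'bottom quintile' household income: ${avg_bottom:,.0f}")
print(f"average 'top quintile' household income:    ${avg_top:,.0f}")
print(f"apparent top-to-bottom ratio: {avg_top / avg_bottom:.2f}")   # ~1.64, with zero individual inequality
```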

Many income statistics are misleading in another sense, when they leave out the income received in kind—such as food stamps and subsidized housing—which often exceeds the value of the cash income received by people in the lower-income brackets. In 2001, for example, transfers in cash or in kind accounted for more than three-quarters of the total economic resources at the disposal of people in the bottom 20 percent.24 In other words, the standard of living of people in the bottom quintile is about three times what the income statistics would indicate. As we shall see, their personal possessions are far more consistent with this fact than with the vision of the intelligentsia.

Moral Considerations

The difference between statistical categories and actual people affects moral, as well as empirical, issues. However concerned we might be about the economic fate of flesh-and-blood human beings, that is very different from being alarmed or outraged about the fate of statistical categories. Michael Harrington’s best-selling book The Other America, for example, dramatized income statistics, lamenting “the anguish” of the poor in America, tens of millions “maimed in body and spirit” constituting “the shame of the other America,” people “caught in a vicious circle” and suffering a “warping of the will and spirit that is a consequence of being poor.”25 But investing statistical data with moral angst does nothing to establish a connection between a transient cohort in statistical categories and an enduring class conjured up through verbal virtuosity.

There was a time when such rhetoric might have made some sense in the United States, and there are other countries where it may still make sense today. But most of those Americans now living below the official poverty line have possessions once considered part of a middle class standard of living, just a generation or so ago. As of 2001, three-quarters of Americans with incomes below the official poverty level had air-conditioning (which only one-third of Americans had in 1971), 97 percent had color television (which fewer than half of Americans had in 1971), 73 percent owned a microwave oven (which fewer than one percent of Americans had in 1971) and 98 percent of “the poor” had either a videocassette recorder or a DVD player (which no one had in 1971). In addition, 72 percent of “the poor” owned a motor vehicle.26 None of this has done much to change the rhetoric of the intelligentsia, however much it may reflect changes in the standard of living of Americans in the lower income brackets.

Typical of the mindset of many intellectuals was a book by Andrew Hacker which referred to the trillions of dollars that become “the personal income of Americans” each year, and said: “Just how this money is apportioned will be the subject of this book.”27 But this money is not apportioned at all. It becomes income through an entirely different process.

The very phrase “income distribution” is tendentious. It starts the economic story in the middle, with a body of income or wealth existing somehow, leaving only the question as to how that income or wealth is to be distributed or “apportioned” as Professor Hacker puts it. In the real world, the situation is quite different. In a market economy, most people receive income as a result of what they produce, supplying other people with some goods or services that those people want, even if that service is only labor. Each recipient of these goods and services pays according to the value which that particular recipient puts on what is received, choosing among alternative suppliers to find the best combination of price and quality—both as judged by the individual who is paying.

This mundane, utilitarian process is quite different from the vision of “income distribution” projected by those among the intelligentsia who invest that vision with moral angst. If there really were some pre-existing body of income or wealth, produced somehow—manna from heaven, as it were—then there would of course be a moral question as to how large a share each member of society should receive. But wealth is produced. It does not just exist somehow. Where millions of individuals are paid according to how much what they produce is valued subjectively by millions of other individuals, it is not at all clear on what basis third parties could say that some goods or services are over-valued or under-valued, that cooking should be valued more or carpentry should be valued less, for example, much less that not working at all is not rewarded enough compared to working.

Nor is there anything mysterious in the fact that at least a thousand times as many people would pay to hear Pavarotti sing as would pay to hear the average person sing.

Where people are paid for what they produce, one person’s output can easily be worth a thousand times as much as another person’s output to those who are the recipients of that output—if only because thousands more people are interested in receiving some products or services than are interested in receiving other products and services—or even the same product or service from someone else. For example, when Tiger Woods left the golf tournament circuit for several months because of an injury, television audiences for the final round of major tournaments declined by varying amounts, ranging up to 61 percent.28 That can translate into millions of dollars’ worth of advertising revenue, based on the number of television viewers.

The fact that one person’s productivity may be a thousand times as valuable as another’s does not mean that one person’s merit is a thousand times as great as another’s. Productivity and merit are very different things, though the two things are often confused with one another. An individual’s productivity is affected by innumerable factors besides the efforts of that individual—being born with a great voice being an obvious example. Being raised in a particular home with a particular set of values and behavior patterns, living in a particular geographic or social environment, merely being born with a normal brain, rather than a brain damaged during the birth process, can make enormous differences in what a given person is capable of producing.

Moreover, third parties are in no position to second-guess the felt value of someone’s productivity to someone else, and it is hard even to conceive how someone’s merit could be judged accurately by another human being who “never walked in his shoes.” An individual raised in terrible home conditions or terrible social conditions may be laudable for having become an average, decent citizen with average work skills as a shoe repairer, while someone raised from birth with every advantage that money and social position can confer may be no more laudable for becoming an eminent brain surgeon. But that is wholly different from saying that repairing shoes is just as valuable to others as being able to repair maladies of the brain. To say that merit may be the same is not to say that productivity is the same. Nor can we logically or morally ignore the discrepancy in the relative urgency of those who want their shoes repaired versus those in need of brain surgery. In other words, it is not a question of simply weighing the interest of one income recipient versus the interest of another income recipient, while ignoring the vastly larger number of other people whose well-being depends on what these individuals produce.

If one prefers an economy in which income is divorced from productivity, then the case for that kind of economy needs to be made explicitly. But that is wholly different from making such a large and fundamental change on the basis of verbal virtuosity in depicting the issue as being simply that of one set of “income distribution” statistics today versus an alternative set of “income distribution” statistics tomorrow.

As for the moral question, whether any given set of human beings can be held responsible for disparities in other people’s productivity—and consequent earnings—depends on how much control any given set of human beings has maintained, or can possibly maintain, over the innumerable factors which have led to existing differences in productivity. Since no human being has control over the past, and many deeply ingrained cultural differences are a legacy of the past, limitations on what can be done in the present are limitations on what can be regarded as moral failings by society. Still less can statistical differences between groups be automatically attributed to “barriers” created by society. Barriers exist in the real world, just as cancer exists. But acknowledging that does not mean that all deaths—or even most deaths—can be automatically attributed to cancer or that most economic differences can be automatically attributed to “barriers,” however fashionable this latter non sequitur may be in some quarters.

Within the constraints of circumstances, there are things which can be done to make opportunities more widely available, or to help those whose handicaps are too severe to expect them to utilize whatever opportunities are already available. In fact, much has already been done and is still being done in a country like the United States, which leads the world in philanthropy, not only in terms of money but also in terms of individuals donating their time to philanthropic endeavors. But only by assuming that everything that has not been done could have been done, disregarding costs and risks, can individuals or societies be blamed because the real world does not match some vision of an ideal society. Nor can the discrepancy between the real and the vision of the ideal be automatically blamed on the existing reality, as if visionaries cannot possibly be mistaken.

 


Was Adam Smith a socialist? #3 – Are high profits a sign that a country is going to ruin?

To “prove” his point that Adam Smith’s system did not permit inequality in the first place, the proponent of that view cited the following comment from Smith: that profits are “always highest in the countries which are going fastest to ruin.”

Apparently, this indicates that Smith was against high profits – and hence a socialist. This idea is also related to Keynes’s view about the declining marginal efficiency of capital.

There is some merit in this view.

We know that poor societies tend to have a lower rate of financial saving (instead, animals – and particularly children – are used as a means of saving) and tend to consume nearly all of their output.

In such a society, only those investments are undertaken that generate a rate of profit high enough to pay the high rates of interest demanded for the scarce savings that must be borrowed to finance them.

Further, such developing societies, with their highly underdeveloped and incomplete markets, offer many more opportunities for profit than are available in well-developed countries. These opportunities, however, come with greater risk, so not many people are willing to invest in developing societies.
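A minimal sketch of this mechanism (the project returns and interest rates below are entirely hypothetical): when savings are scarce and lenders demand a high interest rate, only the highest-return projects clear the hurdle, so the average observed rate of profit is high even though the economy is poor.

```python
# Hypothetical candidate projects, each with an expected rate of return on capital.
projects = [0.05, 0.08, 0.12, 0.20, 0.35, 0.60]

def funded(projects, hurdle_rate):
    """Only projects whose return beats the cost of borrowing go ahead."""
    return [r for r in projects if r > hurdle_rate]

def average(values):
    return sum(values) / len(values)

scarce_capital   = funded(projects, 0.25)   # poor economy: savings scarce, interest high
abundant_capital = funded(projects, 0.04)   # rich economy: savings abundant, interest low

print(f"capital-scarce economy: {scarce_capital}, average observed profit {average(scarce_capital):.0%}")
print(f"capital-rich economy:   {abundant_capital}, average observed profit {average(abundant_capital):.0%}")
```

So a high observed rate of profit can simply reflect scarce capital and thin markets; it does not, by itself, show that a country is “going fastest to ruin”.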

This is where Schumpeter comes in. Innovation and competition are the main long-term drivers of profit. Every commercial breakthrough creates immediate profit opportunities, but unless the government institutionalises monopoly power, the market will always wear down the profit margin.

Two forces are at work in relation to innovation:

(a) the technical frontier gets harder to breach, so the “bang” from each new innovation tends to be relatively smaller than the bang from the previous big innovation. For instance, the internet is (relatively speaking) a much smaller innovation than the discovery of electricity; and

(b) more research takes place and knowledge is generated at an exponential pace, as a society becomes more developed.

The empirical question is: which of these two forces “wins” at a particular point in time?

Thus, it is simply untrue to assert that profits are “always highest in the countries which are going fastest to ruin”. The relevant questions are: what is the level of associated risk, the level of innovation, and the depth of the markets?

It is unclear whether Adam Smith fully understood the concept of profit. 

Once again, I would like to remind readers that Smith was a founder of economics. He has been superseded on numerous points by later analysis.

Everyone who reads Smith should use his BROADER understandings as a starting point, and thereafter read more – much more (i.e. many other authors) – in order to understand the market.

I think I’ll now move on to other issues. The idea that Smith was somehow against inequality – even if true – is related (at best) to some of his erroneous understandings. We must look at the TRUTH that lies in his work and not focus on things on which he has been superseded.
