Showing posts with label Finance and Accounting.

Saturday, January 23, 2010

The World Distribution of Income: Falling Poverty and… Convergence, Period

Abstract

We estimate the world distribution of income by integrating individual income distributions for 139 countries between 1970 and 2000. Country distributions are constructed by combining two widely used data sets: the PPP-adjusted National Accounts data of the Penn World Tables are used to anchor the mean, and the Deininger and Squire (1996) and World Bank microeconomic surveys are used to pin down the dispersion.

The WDI is used to estimate poverty rates and headcounts. The CDF for 1990 stochastically dominates that of 1970, which means that poverty rates declined for all conceivable poverty lines. The 2000 CDF also stochastically dominates the 1970 distribution for all relevant levels of income; the two distributions cross only at levels below $262, and only because Congo/Zaire is included in the analysis even though no good National Accounts data are available for this country for the late 1990s.

Poverty rates are reported for four poverty lines. For all lines, poverty rates in 2000 were between one-third and one-half of what they were in 1970. There were between 250 and 500 million fewer poor people in 2000 than in 1970. The number of people living on less than one dollar a day in 2000 was about 195 million, an order of magnitude smaller than the 1.2 billion widely publicized by institutions like the World Bank and the United Nations. We analyze poverty across different regions and countries. Asia is a great success, especially after 1980. Latin America reduced poverty substantially in the 1970s, but progress stopped in the 1980s and 1990s. The worst performer was Africa, where poverty rates have increased dramatically since 1970.

We estimate nine indexes of income inequality implied by our world distribution of income. All of them show substantial reductions in global income inequality during the 1980s and 1990s.

Finally, we argue that when the United Nations established the Millennium Goal of halving the 1990 poverty rate in 2000, the world had already gone between 60% and 70% of the way towards achieving it.

We construct an estimate of the WDI for each year from 1970 to 2000. We do so by first estimating a distribution of income for each of 139 countries accounting for 93 percent of the world’s population in 2000. Individual country distributions are constructed using two widely used data sets. First, we use PPP-adjusted GDP per capita data from the Penn World Tables 6.1 (Heston, Summers and Aten (2002)) to anchor the mean of each country’s distribution. Second, the within-country dispersion is estimated using the income and expenditure micro surveys of the World Bank’s World Development Indicators, which expand on Deininger and Squire (1996). Since microeconomic surveys are not available annually for every country, we need to make some approximations (discussed in Section 2) to assign a level of income to each quantile for each country and year. We then use a non-parametric approach to estimate a smooth income distribution for each country/year. Finally, these individual distributions are integrated to compute the WDI.
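
For intuition, the following is a minimal sketch of this construction, using two hypothetical countries with invented GDP, population, and quintile-share figures. The Gaussian kernel, the log-income grid, and the $570-per-year poverty line are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

countries = {
    # name: (PPP GDP per capita, population in millions, quintile income shares)
    "A": (2000.0, 50.0, np.array([0.05, 0.10, 0.15, 0.25, 0.45])),
    "B": (15000.0, 20.0, np.array([0.08, 0.13, 0.18, 0.24, 0.37])),
}

grid = np.linspace(np.log(100), np.log(100000), 2000)  # log-income grid
world_density = np.zeros_like(grid)
total_pop = sum(pop for _, pop, _ in countries.values())

for gdp, pop, shares in countries.values():
    # Mean income of each quintile: its share of total income divided by 20% of people.
    quintile_means = shares * gdp / 0.2
    kde = gaussian_kde(np.log(quintile_means))      # smooth each country's distribution
    world_density += (pop / total_pop) * kde(grid)  # population-weighted integration

# Headcount below an illustrative poverty line of $570 per year.
poverty_line = np.log(570)
dx = grid[1] - grid[0]
headcount_ratio = world_density[grid < poverty_line].sum() * dx
print(f"poverty rate: {headcount_ratio:.1%}, poor: {headcount_ratio * total_pop:.0f} million")
```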

The related literature includes Bourguignon and Morrison (2002) who attempt to estimate the WDI going back to 1820. Like Sala-i-Martin (2002), Bourguignon and Morrison (2002) estimate the WDI directly by assuming that each quintile in each country is made of individuals with identical incomes. Another drawback of Bourguignon and Morrison (2002) is that their analysis comprises only 33 countries or groups of countries and ends in 1993.

Another related paper is Bhalla (2002). Although the methodology and the data used by Bhalla differ from those of this paper, his main conclusions about the evolution of poverty and global income inequality are quite similar. Bhalla (2002) uses a parametric approach called the “Simple Accounting Procedure” (SAP) to approximate the Lorenz curve for each individual country. As we will discuss in the next section, we use a non-parametric approach to approximate the density function. Another difference from Bhalla (2002) is that he uses World Bank PPP data rather than the Penn World Tables data to pin down the mean of the distribution. For most countries, the choice of data set does not matter much. It does, however, for the largest country in the world: the growth rates of PPP-adjusted per capita GDP reported by the World Bank are much larger than those of the PWT.

Download full text paper here


Wednesday, January 20, 2010

A Century of Work and Leisure

Abstract

Has leisure increased over the last century? Standard measures of hours worked suggest that it has. In this paper, we develop a comprehensive measure of non-leisure hours that includes market work, home production, commuting and schooling for the last 105 years. We also present empirical and theoretical arguments for a definition of “per capita” that encompasses the entire population. The new measures reveal a number of interesting 20th Century trends. First, 70 percent of the decline in hours worked has been offset by an increase in hours spent in school. Second, contrary to conventional wisdom, average hours spent in home production are actually slightly higher now than they were in the early part of the 20th Century. Finally, leisure per capita is approximately the same now as it was in 1900.

A complete accounting of non-leisure time must include time spent in home production. Xenophon (4th century BC) believed that home production was as important as market production, and devoted half of his work Oeconomicus to issues of household management (Leeds (1917)). More recently, Becker’s (1965) article made modern economists aware of the importance of measuring and modeling home production. To this end, we combine results from various studies to construct a series showing trends in the average number of hours spent on home production.

A number of cross-validation studies show time use diaries to be the most accurate source of estimates for housework (and market work for that matter) (Juster and Stafford (1985, 1991)). Thus, we use estimates based on time diary data to the extent possible.

The historical studies generally include the following activities in home production: planning, purchasing, care of family members, general cleaning, care of the house and grounds, preparing and clearing away food, making, mending, and laundry of clothing and other household textiles (Vanek (1973), page 57). Activities such as playing and talking with and reading to children are usually included in childcare in the time use studies from 1965 on. We exclude them for two reasons. First, these activities rank high on the enjoyment index and hence are more properly classified as leisure. Second, while little time was devoted to these activities in the studies from 1900 to 1965, they have become increasingly important in terms of time expenditures. Thus, including them in home production would lead to noticeably higher estimates at the end of the sample. Our measure of childcare included in home production is basic child care plus time spent in homework help, teaching, and meeting with teachers. See the data appendix for more details.

Our studies of the time diary literature indicate that the most important distinctions are for age, gender and employment status. Our strategy for constructing total hours spent in home production is as follows. For each of the relevant age, gender and employment status cells, we first gather as much information as possible on hours of housework for that category. We then interpolate values between years of the time diary studies. Finally, we weight the estimated hours of housework of each cell by the fraction of the population that falls in that cell.
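
As a concrete illustration of this three-step procedure, the sketch below uses made-up weekly hours for a few gender/employment cells and fixed population shares; in the paper the cell shares themselves vary over time and many more cells and study years are used.

```python
import numpy as np

study_years = np.array([1925, 1965, 2005])

cells = {
    # cell: (weekly housework hours in each study year, share of the population)
    ("female", "employed"):     (np.array([20.0, 18.0, 15.0]), 0.25),
    ("female", "not employed"): (np.array([50.0, 40.0, 30.0]), 0.25),
    ("male", "employed"):       (np.array([4.0, 5.0, 10.0]), 0.35),
    ("male", "not employed"):   (np.array([6.0, 8.0, 14.0]), 0.15),
}

def average_housework(year):
    """Population-weighted weekly housework hours in a given year."""
    total = 0.0
    for hours_by_year, pop_share in cells.values():
        hours = np.interp(year, study_years, hours_by_year)  # interpolate between study years
        total += pop_share * hours                           # weight by the cell's population share
    return total

for y in (1925, 1945, 1965, 1985, 2005):
    print(y, round(average_housework(y), 1))
```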

Download full text paper here


Sunday, January 17, 2010

WHY DO AMERICANS WORK SO MUCH MORE THAN EUROPEANS?

Abstract

Americans now work 50 percent more than do the Germans, French, and Italians. This was not the case in the early 1970s when the Western Europeans worked more than Americans. In this paper, I examine the role of taxes in accounting for the differences in labor supply across time and across countries, in particular, the effect of the marginal tax rate on labor income. The population of countries considered is that of the G-7 countries, which are the major advanced industrial countries. The surprising finding is that this marginal tax rate accounts for the predominance of the differences at points in time and the large change in relative labor supply over time with the exception of the Italian labor supply in the early 1970s. This finding has important implications for policy, in particular for making social security programs solvent.

Americans, that is, residents of the United States, now work much more than do Europeans. Using labor market statistics from the Organisation for Economic Co-operation and Development (OECD), I find that Americans on a per person aged 15-64 basis work in the market sector 50 percent more than do the French. This was not always the case. In the early 1970s, Americans allocated less time to the market than did the French. The comparisons between Americans and Germans or Italians are the same. Why are there such large differences in labor supply across these countries? Why did the relative labor supplies change so much over time? In this lecture, I determine the importance of tax rates in accounting for these differences in labor supply for the major advanced industrial countries and find that tax rates alone account for most of these differences in labor supply.

This finding has important implications for policy, in particular for financing social security retirement programs in Europe. On the pessimistic side, one implication is that increasing tax rates will not solve the problem of these underfunded plans, because increasing tax rates will not increase revenue. On the positive side, the system can be reformed in a way that makes the young better off while honoring promises to the old. This can be accomplished by modifying the tax system so that when an individual works more and produces more output, the individual gets to consume a larger fraction of this increased output.

The major advanced industrial countries, which used to be called the G-7 countries, are the European countries France, Germany, Italy, and the United Kingdom, plus Canada, Japan, and the United States. For these countries, comparable and sufficiently good statistics are available to carry out this investigation. The data sources are the United Nations system of national accounts (SNA) statistics and the OECD labor market statistics and purchasing power GDP numbers. The periods considered are 1970–74 and 1993–96. The later period was chosen because it is the most recent period prior to the U.S. telecommunications/dotcom boom of the late 1990s, a period when the relative size of unmeasured output was probably significantly larger than normal and there may have been associated problems with the market hours statistics. The early period was selected because it is the earliest one for which sufficiently good data are available to carry out the analysis. The relative numbers subsequent to 2000 are pretty much the same as they were in the pre-technology-boom period 1993–96.

I emphasize that my labor supply measure is hours worked per person aged 15-64 in the taxed market sector. The two principal margins of work effort are hours actually worked by employees and the fraction of the working-age population that works. Paid vacations, sick leave, and holidays are hours of nonworking time. The time of someone working in the underground economy or in the home sector is not counted. Other things equal, a country with more weeks of vacation and more holidays will have a lower labor supply in the sense that I am using the term. I focus only on that part of working time for which the resulting labor income is taxed.
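
The following toy calculation illustrates this measure (annual market hours per person aged 15-64) with invented inputs; the actual numbers in the paper come from OECD labor market statistics and the national accounts.

```python
def hours_per_working_age_person(avg_weekly_hours_per_worker,
                                 weeks_worked_per_year,
                                 employment,
                                 population_15_64):
    """Annual market hours per person aged 15-64."""
    annual_hours_per_worker = avg_weekly_hours_per_worker * weeks_worked_per_year
    employment_rate = employment / population_15_64
    return annual_hours_per_worker * employment_rate

# Two stylized economies (made-up figures): longer vacations and a lower
# employment rate both reduce labor supply in the sense used here.
economy_a = hours_per_working_age_person(39.0, 47.0, 135e6, 180e6)
economy_b = hours_per_working_age_person(37.0, 41.0, 25e6, 38e6)
print(round(economy_a), round(economy_b), round(economy_a / economy_b, 2))
```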

Download full text paper here


Thursday, January 14, 2010

Asset Prices and the Measurement of Wealth and Saving

Abstract

The paper defines concepts of real wealth and saving which take into account the intertemporal index number problem that results from changing interest rates. Unlike conventional measures of real wealth, which are based on the market value of assets and ignore the index number problem, the new measure correctly reflects the changes in the welfare of households over time. An empirically operational approximation to the theoretical measure is provided and applied to US data. A major empirical finding is that US real financial wealth increased strongly in the 1980s, much more than is revealed by the market value of assets.

Economists seem to be convinced that there exist better measures of real GDP than the Big-Mac-Index. In this paper I argue that the same is true for real wealth, and I develop a measure that accounts for the intertemporal index number problem. I will show, both in theory and in an empirical application, that the new measure can significantly deviate from conventional wealth series. Such an improved measure of real wealth is important in several respects. First, it is a better welfare indicator. If the market value of assets increases because interest rates have fallen, are households richer, not just in a nominal, but in a real sense? Real wealth as I define it answers this question. It has the property that an increase in its value indicates an improvement in the economic situation of a household. I will show that this is not true for the currently used wealth measures.

Second, real wealth plays an important role in the measurement of saving. Several authors have pointed out that conventional measures of national saving, based on the National Accounts, are insufficient. They reflect investment in physical capital only, and even here they are incomplete, since they use some more or less arbitrary accounting principles for asset valuation, and exclude changes in the value of existing capital. Bradford (1989, 1990, 1991) has forcefully argued that the change in the market value of assets is a better measure of saving than those derived from the National Accounts. The same view is expressed, e.g., in Barro (1989, p.50). One can criticise these claims on two grounds (cf. the comment of Stiglitz, 1991, on Bradford). First, one may argue that asset prices contain valuation bubbles, and changes in asset prices thus do not always reflect changes in real wealth (reasonably defined). The fact that the proposed measure of national saving is very volatile, and even negative in some years, may be interpreted as pointing in this direction. The second critique concerns the inherent index number problem, caused by changes in interest rates. The latter problem is solved if saving is defined not as the change in market values, but as the change in real wealth as defined in the present paper. On the first criticism, the present paper has nothing to say. I proceed under the strict neoclassical assumptions of rational expectations and asset valuation by fundamentals, which I consider a useful starting point for the analysis. An interesting question, which I will analyse theoretically as well as empirically, is whether the savings measure based on real wealth is less volatile than the measure based on the market value of assets.

Third, the new measure of real wealth has potential implications for the definition of the income tax base. In many European countries, as well as in the US, the “ideal” base of income taxation is considered to be the Schanz-Haig-Simons concept of income (cf., for example, Goode, 1990, p.62). This includes the change in the market value of assets, adjusted, of course, for inflation. The appeal of this concept is not derived from formal models of optimal taxation; it is rather based on more traditional considerations of fairness and ability to pay. Tax theorists often complain that the practical implementation of the income tax does not account for inflationary changes in asset values. This paper shows that accounting for inflation in the usual sense is still insufficient. In addition, it is necessary to account for revaluations of assets that reflect changes in the intertemporal price structure without changing the real wealth position of households. In the spirit of Schanz-Haig-Simons, only changes in real wealth as defined in this paper should be included in the income tax base.

Download full text paper here


Monday, January 11, 2010

The Three Horsemen of Growth: Plague, War and Urbanization in Early Modern Europe

Abstract

How did Europe overtake China? We construct a simple Malthusian model with two sectors, and use it to explain how European per capita incomes and urbanization rates could surge ahead of Chinese ones. That living standards could exceed subsistence levels at all in a Malthusian setting should be surprising. Rising fertility and falling mortality ought to have reversed any gains. We show that productivity growth in Europe can only explain a small fraction of rising living standards. Population dynamics – changes of the birth and death schedules – were far more important drivers of the long-run Malthusian equilibrium. The Black Death raised wages substantially, creating important knock-on effects. Because of Engel’s Law, demand for urban products increased, raising urban wages and attracting migrants from rural areas. European cities were unhealthy, especially compared to Far Eastern ones. Urbanization pushed up aggregate death rates. This effect was reinforced by more frequent wars (fed by city wealth) and disease spread by trade. Thus, higher wages themselves reduced population pressure. Without technological change, our model can account for the sharp rise in European urbanization as well as permanently higher per capita incomes. We complement our calibration exercise with a detailed analysis of intra-European growth in the early modern period. Using a panel of European states in the period 1300-1700, we show that war frequency can explain a good share of the divergent fortunes within Europe.

Epidemics and wars frequently ravaged Europe between 1350 and 1700. We argue that death and destruction spelled riches and power in the early modern period. Europe’s precocious rise may owe more to these scourges of mankind than to technological innovation. We build a simple two-sector extension of the standard Malthusian model that can shed new light on the puzzling rise of European per capita incomes. Many interpretations of the “rise of Europe” have emphasized technological creativity and high rates of innovation, compared to Asia (Mokyr 1990). We argue that, in a Malthusian setting, better technology cannot explain the “First Divergence”, and we also show that fertility restriction alone is insufficient. Instead, we build a model in which per capita living standards can rise markedly without technological change or fertility decline. Some long-run growth models generate the early transition from stagnation to sustained growth by means of a delayed response of fertility to wages. This allows per capita incomes to rise slowly but steadily in tandem with population. We argue that this cannot be realistic in most settings, because fertility responds ’too rapidly’ to permit anything other than a short-lived increase in living standards. In a micro-founded model, we show that only very large, negative shocks can be followed by a marked delay between rising incomes and return to earlier population levels. We argue that the Black Death hitting Europe in the 14th century was precisely such a shock, lifting wages and per capita incomes for several generations. Richer individuals began to demand more urban goods, and because early modern European cities were “graveyards” (Bairoch 1991), incomes could permanently exceed subsistence levels. This is particularly true because city growth acted as a catalyst for European belligerence. It also spread disease through trade – links that we call the ’Horsemen of Growth.’

We demonstrate that permanently higher mortality rates, driven by greater urbanization after the Black Death, were empirically important. In our calibrations, the mortality channel consistently emerges as accounting for at least half of the increase in per capita incomes. Fertility restriction is probably responsible for the remainder. We complement the calibration exercise with a detailed analysis of the intra-European growth record after 1300. Using a panel of European states in the period up to 1700, we find that war frequency – our preferred proxy for the ’Horsemen of Growth’ – can explain a good share of the divergent fortunes within Europe. In particular, we find that we can explain a good deal of the rise of North-Western Europe compared to the rest of the continent. The effect of war, trade, and urbanization is broadly similar to – if not stronger than – the effect of Atlantic trade (as suggested by AJR 2005). While war emerges consistently as a driver of higher incomes in early modern Europe, there is little reason to assume that the same will be true today. Non-reproducible factors of production, such as land, only play a small role in most economies. Even where they matter a great deal, such as in parts of Africa, modern wars may not yield the same effect. Military technology has become markedly more destructive, of both people and capital equipment. This checks the positive effect of rising land-labor ratios.

One implication of our findings is that urbanization is not simply an indicator for development. City growth also made higher per capita incomes sustainable in a Malthusian setting. Our paper has emphasized the contrast between early modern Europe and the rest of the world. In the final analysis, Europe’s political fragmentation and geographical heterogeneity interacted with the negative shock of the Black Death in a unique way. In combination, urbanization, warfare, and trade ensured a mortality regime that was different from the one prevailing in Asia. Future work should focus on the other factor contributing to Europe’s precociously rising incomes – the emergence of the European Marriage Pattern.

Download full text article here


Friday, January 8, 2010

The Role of Mexico in the First Oil Shortage: 1918-1922, an International Perspective

Abstract

In 1921 Mexico produced a quarter of the world’s petroleum, making the country the second largest producer in the world, but by 1930 it accounted for only 3 per cent of the world’s production. To date, the discussion has mostly relied on events taking place in Mexico to explain the decline of the industry. Very little attention has been paid to developments in the petroleum industry elsewhere, except Venezuela. Practically no attention has been paid to the reasons for the rise of oil output in Mexico. This neglects the massive changes taking place in the petroleum industry worldwide during the Great War years and their aftermath, and above all ignores the shortage of oil that occurred in the world’s markets between 1918 and 1921. These events are crucial for understanding the early rise of the Mexican oil industry, and they set the basis for a better understanding of the subsequent sudden decline.

In 1921 Mexico produced a quarter of the world’s oil, making the country the second most important producer in the world, but by 1930 it accounted for only 3 per cent of the world’s production. In 1938 the petroleum industry was nationalised by the Mexican government, and it took over fifty years to regain the level of output of 1921. Two main lines of argument have been used to explain the rapid decline of the Mexican oil industry during the 1920s. The first explanation argues that the decline was the result of the institutional change caused by the Mexican Revolution. The second hypothesis maintains that Mexico simply ran out of oil deposits that could be extracted at competitive costs given technology, prices and competing sources. Some authors have argued that both hypotheses are true. The problem is that the discussion has mostly relied on events taking place in Mexico, using sources and data exclusively relating to Mexico. Very little attention has been paid to developments in the petroleum industry elsewhere, except Venezuela. Practically no attention has been paid to the reasons for the sudden rise of oil output in Mexico. This neglects the massive changes taking place in the petroleum industry worldwide during the Great War years and above all ignores the shortage of oil that occurred in the world markets between 1918 and 1921. These events are crucial for understanding the rise of the Mexican oil industry, and they set the basis for a better understanding of the subsequent sudden decline.

Due to the Great War and the Soviet Revolution, Europe lost all its domestic supplies of oil and became totally dependent on its Asian oil supplies (Dutch East Indies and British India) and, above all, on the United States. Mexican oil was to play a major role at this time of shortage. This paper focuses on the rise of the Mexican oil industry by concentrating on the events taking place in the world’s petroleum industry. Thanks to the data of the American Petroleum Institute, the U.S. Department of Commerce, and the Mexican Government, it is possible to place Mexico in the changing context of the world oil markets of the early 1920s. In addition, it sheds some extra light on the debate about the rapid fall of the industry.

The first section of this paper reviews the literature, revealing how little attention has been paid to the rise of the industry and how much the debate has remained concentrated on events within Mexico. The second section steps out of Mexico to show the extensive changes taking place in the oil industry worldwide during the First World War and its aftermath, including the surge in demand and the awakening of nationalism worldwide regarding the exploitation of oil resources. The intense growth of demand for petroleum products was not matched by an equal growth in supply. The distortions introduced by the War, the Soviet Revolution, the cold winters of the end of the 1910s, plus the final War effort produced the first petroleum shortage of the 20th century. Section three reveals the importance of Mexican oil at this time of shortage. Section four examines the sudden decline of the Mexican petroleum industry, starting from the depiction of the rise of the industry provided in earlier sections. The conclusions summarise the main findings of this paper.

Download full text paper here


Tuesday, January 5, 2010

The Quasi-Judicial Role of Large Retailers: An Efficiency Hypothesis of their Relation with Suppliers

Abstract

The paper explores an efficiency hypothesis regarding the contractual process between large retailers, such as Wal-Mart and Carrefour, and their suppliers. The empirical evidence presented supports the idea that large retailers play a quasi-judicial role, acting as “courts of first instance” in their relationships with suppliers. In this role, large retailers adjust the terms of trade to on-going changes and sanction performance failures, sometimes by delaying payments. A potential abuse of their position is limited by the need for re-contracting and for preserving their reputations. Suppliers renew their confidence in their retailers on a yearly basis by writing new contracts. This renewal contradicts the alternative hypothesis that suppliers are expropriated by large retailers as a consequence of specific investments.

Like all complex relationships, those established between suppliers and retailers suffer from substantial conflicts. Claims of faulty performance, whether intentional or unintentional, are the main source of conflict. Other common discrepancies concern prices and deliveries. Discussion frequently arises about whether the invoiced prices are in accordance with the previously agreed levels. There are also delivery delays, which are punished by the retailer when they cause stockouts and lost sales. Clarifying these arguments is difficult. Price schedules are intricate, and it is hard to evaluate the cost caused by imperfect performance. Opportunism is possible on both sides. For instance, a retailer may return merchandise with the allegation of late delivery simply because sales did not go as well as planned when the goods were ordered.

Errors in the administration circuits are also a main source of conflict. Examples of these are differences in the quantities and prices between the time of ordering and delivery of the merchandise, or accounting errors, where the quantity in the invoice and the delivered quantity do not correspond. Retailers claim that administrative problems are common because the administrative systems of small-size suppliers are underdeveloped. There are cases when the supplier issues the invoice and the delivery note at the same time so, if the delivery suffers from some defect, this is only discovered when the whole invoicing process has started. This makes fixing the problem cumbersome and slow. In other cases the transportation agent may fail to return the delivery notes to the supplier, causing administrative chaos. The importance of the supplier’s administration is supported by the fact that some retailers refuse to work with suppliers that lack reliable administrative systems.

The importance of contractual and administrative factors becomes clear when we observe the empirical relation between the average duration of the payment period in each country and the importance attributed to the different kinds of phenomena that cause payment delays. The average payment period is positively correlated with the importance of debtors’ financial difficulties as a source of delays, and negatively correlated with the importance of both disagreements between creditor and debtor and administrative errors. In other words, in countries with longer payment periods, debtor insolvency is more important while disagreements and administrative errors are less important, arguably because there is more time to solve both problems before the end of the contractual credit period (Table 2). This suggests that a longer payment period worsens problems with a financial origin, while it lessens those related to contractual and administrative issues.

Download full text paper here


Saturday, January 2, 2010

An Investigation of the Relationship between Job Characteristics and the Gender Wage Gap

Abstract

This paper re-examines gender wage differences, taking into account not only worker characteristics but also job characteristics. Consideration of a wide set of “job quality” indicators can explain a fraction of the wage gap that would otherwise be attributed to pure wage discrimination. In any case, the fraction of the wage gap that remains associated with differential rewards for identical factors across sexes is still substantial. Our results suggest that, in order to avoid overestimating the fraction of the wage gap attributable to discrimination, it is necessary to control for job characteristics.

We specify a model accounting for the fact that the wage equation can suffer from sample selection problems due to participation and that individuals sort themselves into different occupations. Wages are determined by several job characteristics and by individual variables such as age and education. From the model estimates, we shall implement the wage decomposition procedure proposed by Neumark (1988). Given the presence of two selection processes in our model, we shall pay special attention to how decompositions of the wage gap need to be carried out in the presence of non-random assignment to different groups in the labour market. In particular, we shall follow the procedures proposed by Neuman and Oaxaca (1998) in carrying out the decompositions.
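
As a rough illustration of the decomposition step, the sketch below runs a Neumark-style decomposition on simulated wage data, using pooled-sample coefficients as the non-discriminatory benchmark. The selection corrections for participation and occupational sorting that the paper emphasizes are omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def simulate(n, intercept, return_educ):
    """Simulated log wages driven by years of education only."""
    educ = rng.normal(12, 2, n)
    logw = intercept + return_educ * educ + rng.normal(0, 0.3, n)
    X = np.column_stack([np.ones(n), educ])
    return X, logw

X_m, y_m = simulate(n, 1.20, 0.080)   # "men"
X_f, y_f = simulate(n, 1.10, 0.070)   # "women"

beta_m = np.linalg.lstsq(X_m, y_m, rcond=None)[0]
beta_f = np.linalg.lstsq(X_f, y_f, rcond=None)[0]
# Pooled ("non-discriminatory") coefficients, in the spirit of Neumark (1988).
X_p = np.vstack([X_m, X_f]); y_p = np.concatenate([y_m, y_f])
beta_p = np.linalg.lstsq(X_p, y_p, rcond=None)[0]

gap = y_m.mean() - y_f.mean()
explained = (X_m.mean(axis=0) - X_f.mean(axis=0)) @ beta_p
unexplained = X_m.mean(axis=0) @ (beta_m - beta_p) + X_f.mean(axis=0) @ (beta_p - beta_f)
print(f"gap {gap:.3f} = explained {explained:.3f} + unexplained {unexplained:.3f}")
```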

Our results suggest that job characteristics are important factors in explaining wages even when controlling for individual characteristics. Moreover, when we account for job characteristics, the fraction of the gender wage gap attributable to differential rewards for men and women is reduced, reflecting the fact that men tend to be assigned to the “best” jobs. However, there remains a substantial and significant “discriminatory” component, in that the reward for job and individual characteristics is higher for men.

In section 2 we present the econometric model. Section 3 comprises discussion of the data set. The empirical results are presented in section 4, while, finally, section 5 concludes.

Download full text paper here


Tuesday, December 29, 2009

Strategy Communication and Measurement Systems

Abstract

Organizations often face the challenge of communicating their strategies to local decision makers. The difficulty presents itself in finding a way to measure performance which meaningfully conveys how to implement the organization's strategy at local levels. I show that organizations solve this communication problem by combining performance measures in such a way that performance gains come closest to mimicking value-added as defined by the organization's strategy. I further show how organizations rebalance performance measures in response to changes in their strategies. Applications to the design of performance metrics, gaming, and divisional performance evaluation are considered. The paper also suggests several empirical ways to evaluate the practical importance of the communication role of measurement systems.

Performance measures communicate objectives to local decision makers, but they often do not perfectly represent the true strategy, or goal, of the organization (Baker, 1992). For example, performance measures often induce unintended behaviors such as gaming (Lawler, 1990 and Courty and Marschke, 1997). This, however, only partially illustrates what I mean by imperfect performance measures. More generally, performance measures rarely perfectly represent contributions to the firm's value. Because of this lack of a perfect and universal proxy, organizations must constantly adjust their measurement systems so that they are aligned with their strategies.

Organizations typically measure performance on several dimensions and balance each of these dimensions. For example, depending on the objective that is communicated, measures of short-term financial performance, such as return on capital employed or project profitability, are sometimes combined with measures of less tangible assets, such as market share, a customer satisfaction index, or even employee commitment. This balance between different performance measures is explicit when agents are offered formal performance metrics or scorecards, and implicit when the emphasis put on each measure is communicated to agents informally.

Often, performance measurement systems are coupled with incentive systems that reward performance either financially or through other decisions (e.g., promotion, training). Surprisingly, the fraction of the agent’s compensation paid as a financial award is usually low, although it varies widely across compensation systems (Baker, Jensen and Murphy, 1988). This observation suggests two distinct concepts of weights on performance measures: (a) the relative weights determine the emphasis put on each performance measure; (b) the absolute weights determine how strongly high performers are rewarded and/or low performers punished. These two concepts correspond to the distinction between performance measurement systems and performance incentive systems, which plays a key role in this paper. Most of this paper will study how performance measures are chosen and how they are balanced (the relative weights on performance measures). Toward the end of the paper, I will take into account the incentive dimension of some performance measurement systems (the absolute weights on performance measures).
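
A stylized numerical example of the two concepts of weights, with made-up measures already scaled to a common range: the relative weights balance the measures inside a composite score, while the absolute weight determines how strongly that score maps into pay.

```python
# Three performance measures, each already expressed as a score in [0, 1] (invented values).
measures = {"return_on_capital": 0.6, "customer_satisfaction": 0.8, "market_share": 0.4}

# Relative weights: the emphasis placed on each measure (they sum to one).
relative_weights = {"return_on_capital": 0.5, "customer_satisfaction": 0.3, "market_share": 0.2}

# Absolute weight: how strongly the composite score translates into a reward.
bonus_per_point = 20000.0

composite = sum(relative_weights[m] * v for m, v in measures.items())
bonus = bonus_per_point * composite
print(round(composite, 2), round(bonus, 2))  # 0.62 and 12400.0
```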

Download full text paper here


Saturday, December 26, 2009

SPATIAL MARKET EXPANSION THROUGH MERGERS

Abstract

In this paper we present a model that studies firm mergers in a spatial setting. A new model is formulated that addresses the issue of finding the number of branches that have to be eliminated by a firm after merging with another one, in order to maximize profits. The model is then applied to an example of bank mergers in the city of Barcelona. Finally, a variant of the formulation that introduces competition is presented together with some conclusions.

Several studies have analyzed the economic and financial consequences of bank mergers. Rhoades (1998) looked at nine large bank mergers with substantial market overlap in the early 1990s. He found that all produced significant cost cutting in line with the pre-merger projections, due to branch reductions. Piloff (1996) looked at 48 bank mergers in the 1980s, relating announcement-period abnormal returns to accounting-based performance measures. He found higher abnormal returns for the mergers that offer the greatest potential for cost reductions (measured by geographic overlap and pre-merger cost measures). Piloff also found that industry-adjusted profitability of the merged banks does not change, that the ratio of total expenses to assets increases, and that revenues rise in the five-year period around the merger. Houston, et al. (2001) looked at analysts' estimates of projected cost savings and revenue enhancements associated with bank mergers. They found that analysts’ estimates of increases in combined bank value associated with a merger are due mainly to estimated cost savings rather than projected revenue enhancements. Finally, Avery, et al. (1999) looked at mergers during the period 1975 through 1998 involving banks with significant geographic overlap (measured by the number of branches in a ZIP code per capita). They found that these mergers resulted in a significant decrease in branches per capita.

In this paper we present a model that addresses the issue of mergers in a spatial setting. In the next section, a new model is formulated that addresses the issue of finding the number of branches that have to be eliminated by a firm after merging with another one, in order to maximize profits. The model is then applied to an example in the city of Barcelona. Finally, a variant of the formulation is presented together with some conclusions.
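
To make the flavor of such a formulation concrete, here is a toy brute-force version of the branch-closure decision with invented coordinates, demand, and costs: after a merger, choose the set of branches to keep open so that covered demand minus fixed branch costs is maximized. The model in the paper is a full optimization formulation solved at the scale of the Barcelona network, not this enumeration.

```python
import itertools
import numpy as np

branches = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [2.0, 0.5]])   # merged network
demand_points = np.array([[0.0, 0.2], [0.9, 1.1], [2.1, 0.4], [0.2, 0.0]])
demand = np.array([100.0, 80.0, 60.0, 90.0])
fixed_cost = 50.0      # cost of keeping a branch open
max_distance = 0.5     # a customer is served only within this radius

def profit(open_idx):
    """Covered demand minus fixed costs for a given set of open branches."""
    d = np.linalg.norm(demand_points[:, None, :] - branches[open_idx][None, :, :], axis=2)
    served = d.min(axis=1) <= max_distance
    return demand[served].sum() - fixed_cost * len(open_idx)

best = max((profit(list(keep)), keep)
           for r in range(1, len(branches) + 1)
           for keep in itertools.combinations(range(len(branches)), r))
print("best profit:", best[0], "keep branches:", best[1])
```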

Download full text paper here


Wednesday, December 23, 2009

SECURITIES ANALYSTS AS FRAME-MAKERS

Abstract

In this paper we explore the mechanisms that allow securities analysts to value companies in contexts of Knightian uncertainty, that is, in the face of information that is unclear, subject to unforeseeable contingencies or to multiple interpretations. We address this question with a grounded-theory analysis of the reports written on Amazon.com by securities analyst Henry Blodget and rival analysts during the years 1998-2000. Our core finding is that analysts’ reports are structured by internally consistent associations that include categorizations, key metrics and analogies. We refer to these representations as calculative frames, and propose that analysts function as frame-makers – that is, as specialized intermediaries that help investors value uncertain stocks. We conclude by considering the implications of frame-making for the rise of new industry categories, analysts’ accuracy, and the regulatory debate on analysts’ independence.

Despite the extensive academic attention bestowed upon analysts, existing treatments provide a limited account of their intermediary role. Extant work is best understood as three broad streams. One approach, rooted in the finance and accounting literatures, views analysts as information processors and stresses their activities of search, assembly and communication of information. Another approach, based on neo-institutional sociology and behavioral finance, documents the tendency of analysts to mimic each other. We refer to it as the imitation perspective. Finally, a more recent sociological approach has started to outline the role of analysts as critics.

Analysts as information processors. The information processing literature on analysts rests on a remarkable finding: securities analysts, long regarded as valuation experts, are unable to provide accurate forecasts of stock prices. Beginning with Cowles’ (1933) seminal piece, titled “Can Stock Market Forecasters Forecast?”, numerous finance and accounting theorists have documented the failure of analysts’ recommendations to produce abnormal returns and accurate forecasts of earnings and price targets (Lin and McNichols, 1998, Hong and Kubick, 2002, Michaely and Womack, 1999, Lim, 2001, Boni and Womack, 2002, Schack, 2001).

Two complementary explanations have been put forward to account for this failure. One view, based on the efficient market hypothesis (EMH), argues that accurate financial forecasting is simply impossible in an efficient capital market (Samuelson, 1965; Malkiel, 1973). According to the EMH, stock prices in a competitive capital market capture all relevant information about the value of a security, following a random walk. There are no mispricings, no possibility for any actor to find extraordinary profit opportunities and indeed, no scope for financial intermediaries to help their clients do so (Fama, 1965, 1991; Samuelson 1967; Jensen, 1968, 1970; Malkiel, 1973). The bleak implication for analysts is that accurate forecasting and lucrative advice are impossible.

An additional explanation for analysts’ inaccuracies, based on agency theory, is that the fiduciary relationship between analyst and investor is distorted by a variety of conflicts of interest, producing dysfunctional biases in analysts’ forecasts and recommendations. These distortions include investment banking ties (Lin and McNichols, 1998, Hong and Kubick, 2002; Michaely and Womack, 1999), access to company information (Lim, 2001), brokerage interests of the bank employing the analyst (Boni and Womack, 2002), investment interests of the clients of the bank (Sargent, 2000), or the investment interests of the analysts themselves (Schack, 2001). Analysts, in short, come across from this literature as conflict-ridden intermediaries.

The aforementioned conflicts have become particularly prominent following the Wall Street scandals of 2000-2001. During these years, top-ranked Internet analysts (including Henry Blodget) resisted downgrading their recommendations even as prices fell from record highs to zero (Boni and Womack, 2002). Other analysts were recorded privately criticizing companies they publicly recommended (Gasparino, 2005). Such was the public uproar against analysts that the Securities and Exchange Commission even issued explicit guidelines for retail investors to use analyst reports with caution (Securities and Exchange Commission, 2002).

Whether in the form of market efficiency or conflicts of interest, the approaches to analysts presented so far share a common premise: both assume that the core intermediary function performed by security analysts is to forecast the future and provide recommendations. Analysts are accordingly presented as engaged in search, assembly and diffusion of information. To highlight this common focus on information, we refer to this literature as the information processing approach.

Download full text paper here


Sunday, December 20, 2009

The Importance of Relative Performance Feedback Information: Evidence from a Natural Experiment using High School Students

Abstract

We study the effect of providing relative performance feedback information on performance under piece-rate incentives. A natural experiment that took place in a high school offers an unusual opportunity to test this effect in a real-effort setting. For one year only, students received information that allowed them to know whether they were above (below) the class average as well as the distance from this average. We exploit a rich panel data set and find that the provision of this information led to an increase of 5% in students’ grades. Moreover, the effect was significant for the whole distribution. However, once the information was removed the effect disappeared. To rule out the concern that the effect may be driven by teachers within the school, we verify our results using national level exams (externally graded) for the same students, and the effect remains.

Improving students’ performance has been an important concern for academics and educational policy makers alike. Given the recent introduction of the OECD-coordinated Programme for International Student Assessment (PISA), improvement in students’ performance, measured by their grades, is at the heart of governmental reform. The education literature has focused on school inputs as the principal means to improve students’ performance, in particular reductions in the pupil/teacher ratio, improved teacher quality (experience and education), and extended term length (see Krueger (1999), Card and Krueger (1992)). There is, however, a lively debate regarding the effectiveness of school inputs, largely due to their associated costs (Hanushek (1996, 2003)). Moreover, the PISA reports do not show a strong positive relationship between the amount spent per student and performance in the standardised tests in mathematics, science and reading. For example, the US ranks second in expenditure per pupil ($91,770) but ranked twenty-second (out of 30) in performance (see OECD PISA report, 2006).

More recently, there has been interest in analyzing the relevance of performance evaluations and of feedback information regarding these evaluations. The effect of interim feedback information about own performance on subsequent performance has been studied mostly in labour settings. The importance of interim feedback information on students’ performance has been empirically studied by Bandiera et al. (2008). The authors find that providing university students with interim feedback information about own performance has a positive effect on their final performance. However, feedback information involving relative performance has received less attention. The provision of relative performance feedback information allows for social comparison (individuals can evaluate their own performance by comparing themselves to others, Festinger (1954)). While this has been extensively studied in the management and psychology literature (see Festinger (1954), Locke and Latham (1990) and Suls and Wheeler (2000) for an overview), it has not been fully explored in economics.

Download full text article here


Thursday, December 17, 2009

The challenge of representative design in psychology and economics

Abstract

The demands of representative design, as formulated by Egon Brunswik (1956), set a high methodological standard. Both experimental participants and the situations with which they are faced should be representative of the populations to which researchers claim to generalize results. Failure to observe the latter has led to notable experimental failures in psychology from which economics could learn. It also raises questions about the meaning of testing economic theories in “abstract” environments. Logically, abstract tests can only be generalized to “abstract realities” and these may or may not have anything to do with the “empirical realities” experienced by economic actors.

Economists are adept at handling the requirements of classic, factorial experimental designs. However, a major problem with factorial experiments is that the basic logic (i.e., the orthogonal variation of variables) precludes generalizing results outside the laboratory (Brunswik, 1956). The rationale does seem impeccable. By varying one variable at a time and holding all others constant, one can isolate the effects. However, outside the laboratory all other variables are not constant and variables are not orthogonal. Thus, estimates of the sizes of effects (based on the experiment) are subject to other forces. You can design factorial worlds within an experiment but this may not have much to do with what happens outside it.

On the positive side, it should be noted that economists have looked at whether results that appeared surprising within experimental laboratories (from the viewpoint of economic theory) can also be observed in more realistic environments; that is, attempts have been made to demonstrate external validity. As a case in point, consider judgments of willingness-to-pay and willingness-to-accept on issues ranging from ones as trivial as small gambles to ones as consequential as compensation awards in civil trials. Whereas the response mechanisms can be justified by economic theory, people are not machines that necessarily produce appropriate responses. Indeed, most people have limited experience with such mechanisms, and responses are often sensitive to normatively irrelevant considerations, whether in or outside the experimental laboratory. For good examples, see Sunstein, Hastie, Payne, Schkade and Viscusi (2002). In addition, consider evidence that supports prospect theory “in the wild” (Camerer, 2000) or empirical studies in behavioral finance (Barberis & Thaler, 2003).

Download full text article here


Monday, December 14, 2009

Psychological Pressure in Competitive Environments: Evidence from a Randomized Natural Experiment

Abstract

Much like cognitive abilities, emotional skills can have major effects on performance and economic outcomes. This paper studies the behavior of professional subjects involved in a dynamic competition in their own natural environment. The setting is a penalty shoot-out in soccer, where two teams compete in a tournament framework taking turns in a sequence of five penalty kicks each. As the kicking order is determined by the random outcome of a coin flip, the treatment and control groups are determined via explicit randomization. Therefore, absent any psychological effects, both teams should have the same probability of winning regardless of the kicking order. Yet, we find a systematic first-kicker advantage. Using data on 2,731 penalty kicks from 262 shoot-outs over a three-decade period, we find that teams kicking first win the penalty shoot-out 60.5% of the time. A dynamic panel data analysis shows that the psychological mechanism underlying this result arises from the asymmetry in the partial score. As most kicks are scored, kicking first typically means having the opportunity to lead in the partial score, whereas kicking second typically means lagging in the score and having the opportunity to, at most, get even. Having a worse prospect than the opponent hinders subjects' performance. Further, we also find that professionals are self-aware of their own psychological effects. When a recent change in regulations gives winners of the coin toss the chance to choose the kicking order, they rationally react to it by systematically choosing to kick first. A survey of professional players reveals that when asked to explain why they prefer to kick first, they precisely identify the psychological mechanism for which we find empirical support in the data: they want “to lead in the score in order to put pressure on the opponent.”

At least since Hume (1739) and Smith (1759), psychological elements have been argued to be as much a part of human nature, and possibly as important for understanding human behavior, as the strict rationality considerations included in economic models that adhere to the rational man paradigm. Clearly then, any study of human behavior that omits these elements can yield results of unknown reliability.

Much as the rationality principle has successfully accommodated social attitudes, altruism, values and other elements (see, e.g., Becker (1976, 1996), Becker and Murphy (2000)), behavioral economics attempts to parsimoniously incorporate psychological motives not traditionally included in economic models. Theoretical models in this area rely firmly for empirical support on the observation of human decision making in laboratory environments. Laboratory experiments have the important advantage of providing a great deal of control over relevant margins. In these settings, observed behavior often deviates from the predictions of standard economic models. In fact, at least since the 1970s, a great deal of experimental evidence has been accumulated demonstrating circumstances under which strict rationality considerations break down and other patterns of behavior, including psychological considerations, emerge. Thus, an important issue is how applicable the insights gained in laboratory settings are for understanding behavior in natural environments. This challenge, often referred to as the problem of “generalizability” or “external validity,” has taken a central role in recent research in the area.

The best and perhaps only way to address this concern is by studying human behavior in real-life settings. Unfortunately, however, Nature does not always create the circumstances that allow a clear view of the psychological principles at work. Furthermore, naturally occurring phenomena are typically too complex to be empirically tractable in a way that lets us discern psychological elements from within the characteristically complex behavior exhibited by humans.

Download full text paper here


Friday, December 11, 2009

PROMISING FAILURE: Political and Company Rhetoric as a Determinant of Success

Abstract

I show that firms that make sparing use of the future tense in their annual reports significantly outperform those that use it more. Similarly, in all of the U.S. presidential elections from 1960 through 2004, the candidate who made less use of the future tense during the televised debates won the popular vote. I show that the frequency of using future-tense sentences is strongly correlated with the frequency of making promises and that the latter can be modeled within a game-theoretic framework.

What’s a promise worth? At the start of Shakespeare’s Love’s Labour’s Lost, four men swear to forgo all worldly pleasures and, in particular, sex. A group of French ladies quickly helps them to reconsider. One by one, the men break their promises. The tension between their oaths and the temptations offered by a group of attractive young women is responsible for much of the entertainment, and it allows us to gain insight into each man’s character. That solemn oaths are not good predictors of future behavior has been noted throughout the ages by writers and philosophers alike. Jean-Jacques Rousseau argued that those who make promises keep few of them: “He who is slowest in making a promise is most faithful in keeping it.”

In this paper, I ask if Shakespeare’s and Rousseau’s instincts are true and whether we should trust those who make commitments for the future. In particular, I examine whether the extensive use of promises about one’s future actions predicts poor performance and/or failure. I explore this issue in two distinct areas: corporate financial success and the popular vote in U.S. presidential elections. By studying the language structure used in corporate reports, I show that the frequency of the verbs “will”, “shall”, and “going to” represents a good proxy for the frequency of promises in these reports. Therefore, I will use the terms future tense and promises interchangeably. It turns out that companies that use the future tense less frequently in their reports systematically outperform companies that use it more often. This relationship is not restricted to financial markets; a similar pattern exists in the context of political rhetoric. The U.S. presidential candidates who consistently make more statements about the future tend to lose the subsequent popular vote. Both of these findings are consistent with theoretical predictions on the existence of an equilibrium with inflated talk in the model of Kartik, Ottaviani, and Squintani (2007) (hereafter KOS), which is built on a classic “cheap talk” game (Crawford and Sobel 1982). To the best of my knowledge, this is the first study connecting game-theoretic models of talk with a feature of real everyday language.
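
The following is a minimal sketch of the kind of future-tense frequency measure described above, applied to a made-up snippet of report text; the study itself works with complete annual reports and debate transcripts, and its exact counting rules may differ.

```python
import re

FUTURE_MARKERS = re.compile(r"\b(will|shall|going to)\b", flags=re.IGNORECASE)

def future_tense_share(text):
    """Share of sentences containing 'will', 'shall', or 'going to'."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    future = sum(1 for s in sentences if FUTURE_MARKERS.search(s))
    return future / len(sentences) if sentences else 0.0

report = ("We will expand into three new markets next year. "
          "Revenue grew 4 percent. We are going to double capacity. "
          "Costs fell in the fourth quarter.")
print(f"future-tense share: {future_tense_share(report):.2f}")  # 0.50 for this snippet
```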

The finance literature has recognized that quantitative data in corporate reports may be important for pricing (Fama and French 1993; Chan, Jegadeesh, and Lakonishok 1996; Sloan 1996; Franzoni and Marin 2006). The impact of language use on performance and price formation in financial markets has also been studied previously. Tetlock, Saar-Tsechansky, and Macskassy (2007) showed that the fraction of negative words in company-specific news stories forecasts low earnings. Moreover, company earnings briefly underreact to the information embedded in negative words. Similarly, Tetlock (2007) studied how the proportion of negative words in popular news columns on the stock market is incorporated into aggregate market valuation. The work of Antweiler and Frank (2004) is similar in spirit. The authors constructed an algorithm to assign a “bullish”, “neutral”, or “bearish” rating to more than 1.5 million messages posted on the Yahoo! Finance website about various companies, and found that these messages not only help predict market volatility but also have a statistically significant effect on stock returns. These studies analyze the wording of messages about companies, or about stocks in general.

Download full text paper here


Tuesday, December 8, 2009

Optimal Contracts, Adverse Selection, and Social Preferences: An Experiment

Abstract

It has long been standard in agency theory to search for incentive-compatible mechanisms on the assumption that people care only about their own material wealth. However, this assumption is clearly refuted by numerous experiments, and we feel that it may be useful to consider nonpecuniary utility in mechanism design and contract theory. Accordingly, we devise an experiment to explore optimal contracts in an adverse-selection context. A principal proposes one of three contract menus, each of which offers a choice of two incentive-compatible contracts, to two agents whose types are unknown to the principal. The agents know the set of possible menus, and choose either to accept one of the two contracts offered in the proposed menu or to reject the menu altogether; a rejection by either agent leads to lower (and equal) reservation payoffs for all parties. While all three possible menus favor the principal, they do so to varying degrees. We observe numerous rejections of the more lopsided menus, and behavior approaches an equilibrium in which one of the more equitable menus (which one depends on the reservation payoffs) is proposed and agents accept a contract, selecting actions according to their types. Behavior is largely consistent with all recent models of social preferences, strongly suggesting there is value in considering nonpecuniary utility in agency theory.

The classic ‘lemons’ paper (Akerlof 1970) illustrated the point that asymmetric information led to economic inefficiency, and could even destroy an efficient market. Research on mechanism design has sought ways to minimize or eliminate this problem. Seminal research includes the auction results of Vickrey (1961) and the optimal taxation study by Mirrlees (1971). Applications include public and regulatory economics (Laffont and Tirole 1993), labor economics (Weiss 1991, Lazear 1997), financial economics (Freixas and Rochet 1997), business management (Milgrom and Roberts 1992), and development economics (Ray 1998).

It has long been standard in agency theory to search for incentive-compatible mechanisms on the assumption that people care only about their own material wealth. However, while this assumption is a useful point of departure for a theoretical examination, economic interactions are frequently associated with social approval or disapproval. In dozens of experiments, many people appear to be motivated by some form of social preferences, such as altruism, difference aversion, or reciprocity. Recently, contract theorists such as Casadesus-Masanell (1999) and Rob and Zemsky (1999) have expressed the view that contract theory could be made more descriptive and effective by incorporating some form of nonpecuniary utility into the analysis.

We consider the explanatory power of recent social preference models (e.g., Bolton and Ockenfels 2000, Fehr and Schmidt 1999, and Charness and Rabin 1999) in our contractual environment. Our aim is to investigate whether incorporating social preferences into contract theory could lead to a better understanding of how work motivation and performance are linked, and to thereby improve firms’ contract and employment choices, as well as productivity and efficiency.
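To make concrete the kind of nonpecuniary utility these models formalize, the sketch below (in Python) evaluates the standard two-player Fehr and Schmidt (1999) inequity-aversion utility for an agent weighing a lopsided contract against the equal reservation payoffs triggered by rejection. The payoff numbers and the alpha and beta parameters are hypothetical and are not taken from the experiment.

def fehr_schmidt_utility(own: float, other: float,
                         alpha: float = 0.8, beta: float = 0.3) -> float:
    """Two-player Fehr-Schmidt (1999) utility: own payoff minus penalties for
    disadvantageous (alpha) and advantageous (beta) inequality."""
    return own - alpha * max(other - own, 0.0) - beta * max(own - other, 0.0)

# Hypothetical numbers: a lopsided menu gives the agent 20 and the principal 80;
# rejection gives every party an equal reservation payoff of 10.
accept = fehr_schmidt_utility(own=20, other=80)   # 20 - 0.8 * 60 = -28
reject = fehr_schmidt_utility(own=10, other=10)   # 10, no inequality penalty
print(accept, reject)  # a sufficiently inequity-averse agent rejects the offer

A purely self-interested agent (alpha = beta = 0) would accept any contract paying more than the reservation payoff, so observed rejections of the more lopsided menus are precisely the kind of behavior that motivates adding such terms to the agency model.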

Download full text paper here


Saturday, December 5, 2009

On Commercial Media Bias

Abstract

Within the spokes model of Chen and Riordan (2007), which allows for non-localized competition among arbitrary numbers of media outlets, we quantify the effect of concentration of ownership on the quality and bias of media content. A main result shows that too few commercial outlets, or, more precisely, too few separate owners of commercial outlets, can lead to substantial bias in equilibrium. Increasing the number of outlets (commercial and non-commercial) tends to bring down this bias, but the strongest effect occurs when the number of owners is increased. Allowing for free entry provides lower bounds on fixed costs above which substantial commercial bias occurs in equilibrium.

Motivated by the recent media policy debate in the United States and ongoing attempts by the Federal Communications Commission (FCC) to loosen ownership rules there (see, e.g., McChesney, 2004, for a description of the events around the 2003 attempt; another such episode occurred in 2007), we develop a model of media competition that allows for a somewhat detailed study of the quality and bias of media content under a number of different ownership structures. The analysis builds on the spokes model of Chen and Riordan (2007), a Hotelling-type model of spatial competition that allows for arbitrary numbers of media firms and outlets (commercial and non-commercial) competing against each other in a non-localized fashion.

We show that media markets concentrated beyond a certain cut-off can exhibit substantial bias in media content. Increasing the number of separately owned media firms in the market helps reduce the bias; increasing the number of commercial outlets, while keeping the number of owners fixed, can also help, but to a clearly lesser extent.

The channel through which bias arises in our model is the funding of commercial media outlets by advertisers and the outlets’ internalization of the effect of their content on advertisers’ sales and advertising budgets. A motivating example for our analysis is the coverage of tobacco-related health hazards in the US. For decades, despite hundreds of thousands of deaths a year, serious statistics and medical information about the health hazards of smoking were kept away from mainstream commercial media (see, e.g., Baker, 1994, and Bagdikian, 2004, for chronologies as well as references documenting the statistical impact of advertising on the coverage of tobacco-related health hazards; see also Ellman and Germano, 2009, for further discussion and references). Bagdikian (2004, pp. 250-252) summarizes: “there were still more stories in the daily press about the causes of influenza, polio, and tuberculosis than about the cause of one in every seven deaths in the United States,” so that, in the 1980s, some “64 million Americans, obviously already addicted, smoked an average of 26 cigarettes a day,” with surveys indicating that half of the general population and two-thirds of smokers did not think smoking made a great difference in life expectancy (Baker, 1994, p. 51). Our model implies that, alongside advertising, concentration in media markets plays an important role in explaining such bias.

Download full text article here


Wednesday, December 2, 2009

Legal Enforcement, Public Supply of Liquidity and Sovereign Risk

Abstract

Sovereign debt crises in emerging markets are usually associated with liquidity and banking crises within the economy. This connection is suggested by both anecdotal and empirical evidence. The conventional view is that the domestic financial turmoil is caused by foreign creditors’ retaliation. Yet, there is no clear-cut evidence supporting the existence of “classic” default penalties (e.g., trade sanctions or exclusion from international capital markets). This paper then proposes a novel mechanism linking sovereign defaults with liquidity and banking crises without any intervention of foreign creditors. The model considers a standard unwillingness-to-pay problem assuming that: (i) the enforcement of private contracts is limited and, as a result, public debt represents a source of liquidity; (ii) the government cannot discriminate between domestic and foreign agents. In this setting, the prospect of drying up the private sector’s liquidity restores the government’s ex-post incentive to pay without any need to assume foreign penalties. Nonetheless, liquidity crises might arise when economic conditions deteriorate and the government chooses opportunistically to default in order to avoid repaying foreign agents. The interaction between the enforcement friction and sovereign risk is then exploited to study the implications for international capital flows and for domestic legal and institutional reforms.

Log Value Added (y). Log of value added in US dollars at the 3-digit ISIC classification for manufacturing sectors. Data are sourced from the UNIDO INDSTAT 2005 database. Original data are deflated using the United States GDP deflator from the World Bank's World Development Indicators 2006 CD-ROM.

Default Dummy (DEF). Dummy variable taking a value of one in the first year of a default episode. Data on default episodes are sourced from the Standard and Poor's sovereign default database, as reported in Beers and Chambers (2002). This database includes all sovereign defaults on loans or bonds with private agents between 1975 and 2002, and reports the period during which the debtor government remained in default.

Financial Dependence (FinDep). An index constructed as the median share of capital expenditures not financed with cash flow from operations (capital expenditures minus cash flow from operations, divided by capital expenditures) for US-based, publicly listed firms. The index is sourced from Kroszner et al. (2007), who provide a 3-digit ISIC-based reclassification of the data originally constructed by Rajan and Zingales (1998) for a mixture of 3-digit and 4-digit ISIC sectors. The data refer to the period 1980-1999 and originally range from -1.14 (Tobacco) to 0.72 (Transport equipment), with a higher number indicating greater financial dependence. To ease statistical inference, I normalize the index such that it ranges from 0 to 1.

Liquidity Needs (Liq). An index constructed as the median ratio of inventories to total sales for US-based, publicly listed firms. This index was initially proposed by Raddatz (2006) as a measure of an industry's financing needs that focuses on short-term liquidity. The data are sourced from Kroszner et al. (2007), who compute the Raddatz index for the 3-digit ISIC manufacturing sectors. The data refer to the 1980s and originally range from 0.07 (Tobacco) to 0.72 (Plastic Products), with a higher number indicating greater liquidity needs. To ease statistical inference, I normalize the index such that it ranges from 0 to 1.

Tangibility (Tangs). An index constructed as the median ratio of net property, plant and equipment to total assets for US-based, publicly listed firms during the period 1980-1999 in each 3-digit ISIC manufacturing sector. The data are sourced from Kroszner et al. (2007). The original data range from 0.12 to 0.62, and are normalized such that they range from 0 to 1.
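For concreteness, the following sketch (in Python) rescales raw sector medians to the unit interval, assuming the normalization used above is a simple min-max rescaling over the sectors in the sample; the exact procedure is not spelled out here, so this is an assumption. The extreme values are the ones reported for the financial dependence index, while the middle sector value is made up for illustration.

def minmax_normalize(values: dict) -> dict:
    """Rescale a sector -> raw index mapping so it ranges from 0 to 1."""
    lo, hi = min(values.values()), max(values.values())
    return {sector: (v - lo) / (hi - lo) for sector, v in values.items()}

# Reported extremes of the raw financial dependence index (Kroszner et al. 2007):
# Tobacco = -1.14, Transport equipment = 0.72. The middle value is illustrative.
fin_dep_raw = {"Tobacco": -1.14, "Food products": -0.10, "Transport equipment": 0.72}
print(minmax_normalize(fin_dep_raw))
# Tobacco maps to 0, Transport equipment to 1, other sectors fall in between.

The same rescaling would apply to the liquidity needs and tangibility indexes, using their own reported minima and maxima.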

Download full text paper here