Tuesday, December 29, 2009

Strategy Communication and Measurement Systems

Abstract

Organizations often face the challenge of communicating their strategies to local decision makers. The difficulty presents itself in finding a way to measure performance which meaningfully conveys how to implement the organization's strategy at local levels. I show that organizations solve this communication problem by combining performance measures in such a way that performance gains come closest to mimicking value-added as defined by the organization's strategy. I further show how organizations rebalance performance measures in response to changes in their strategies. Applications to the design of performance metrics, gaming, and divisional performance evaluation are considered. The paper also suggests several empirical ways to evaluate the practical importance of the communication role of measurement systems.

Performance measures communicate objectives to local decision makers, yet these objectives often do not perfectly represent the true strategy, or goal, of the organization (Baker, 1992). For example, performance measures often induce unintended behaviors such as gaming (Lawler, 1990; Courty and Marschke, 1997). This, however, only partially illustrates what I mean by imperfect performance measures. More generally, performance measures rarely represent contributions to the firm's value perfectly. Because no perfect and universal proxy exists, organizations must constantly adjust their measurement systems to keep them aligned with their strategies.

Organizations typically measure performance on several dimensions and balance these dimensions against one another. For example, depending on the objective being communicated, measures of short-term financial performance such as return on capital employed or project profitability are sometimes combined with measures of less tangible assets such as market share, a customer satisfaction index, or even employee commitment. This balance between different performance measures is explicit when agents are offered formal performance metrics or scorecards, and implicit when the emphasis placed on each measure is communicated to agents informally.

Often, performance measurement systems are coupled with incentive systems that reward performance either financially or through other decisions (e.g., promotion, training). Surprisingly, the fraction of the agent’s compensation paid as a financial award is usually low, although it varies widely across compensation systems (Baker, Jensen and Murphy, 1988). This observation suggests two distinct concepts of weights on performance measures: (a) the relative weights determine the emphasis put on each performance measure; (b) the absolute weights determine how strongly high performers are rewarded and/or low performers punished. These two concepts correspond to the distinction between performance measurement systems and performance incentive systems, which plays a key role in this paper. Most of this paper studies how performance measures are chosen and how they are balanced (the relative weights on performance measures). Toward the end of the paper, I take into account the incentive dimension of some performance measurement systems (the absolute weights on performance measures).
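The distinction can be made concrete with a small numerical sketch. The measures, weights, and bonus scale below are hypothetical, not taken from the paper: the relative weights fix the emphasis across measures, while the absolute scale determines how strongly the resulting composite score is rewarded.

```python
# A hypothetical illustration of relative vs. absolute weights.
relative_weights = {"roce": 0.5, "market_share": 0.3, "satisfaction": 0.2}
scores = {"roce": 0.7, "market_share": 0.4, "satisfaction": 0.9}  # each normalized to [0, 1]

# Relative weights: the emphasis placed on each performance measure.
composite = sum(relative_weights[m] * scores[m] for m in relative_weights)

# Absolute weight: how strongly the composite performance is rewarded.
bonus_scale = 10_000  # reward units per unit of composite score
print(f"composite score {composite:.2f}, award {bonus_scale * composite:,.0f}")
```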

Download full text paper here


Saturday, December 26, 2009

SPATIAL MARKET EXPANSION THROUGH MERGERS

Abstract

In this paper we present a model that studies firm mergers in a spatial setting. A new model is formulated that addresses the issue of finding how many branches a firm should eliminate after merging with another in order to maximize profits. The model is then applied to an example of bank mergers in the city of Barcelona. Finally, a variant of the formulation that introduces competition is presented together with some conclusions.

Several studies have analyzed the economic and financial consequences of bank mergers. Rhoades (1998) looked at nine large bank mergers with substantial market overlap in the early 1990s. He found that all produced significant cost cutting in line with the pre-merger projections, due to branch reductions. Piloff (1996) looked at 48 bank mergers in the 1980s, relating announcement-period abnormal returns to accounting-based performance measures. He found higher abnormal returns for the mergers that offered the greatest potential for cost reductions (measured by geographic overlap and pre-merger cost measures). Piloff also found that industry-adjusted profitability of the merged banks does not change, that the ratio of total expenses to assets increases, and that revenues rise in the five-year period around the merger. Houston, et al. (2001) looked at analysts' estimates of projected cost savings and revenue enhancements associated with bank mergers. They found that analysts’ estimates of increases in combined bank value associated with a merger are due mainly to estimated cost savings rather than projected revenue enhancements. Finally, Avery, et al. (1999) looked at mergers during the period 1975 through 1998 involving banks with significant geographic overlap (measured by the number of branches in a ZIP code per capita). They found that these mergers resulted in a significant decrease in branches per capita.

In this paper we present a model that addresses the issue of mergers in a spatial setting. In the next section we formulate a new model for finding how many branches a firm should eliminate after merging with another in order to maximize profits. The model is then applied to an example in the city of Barcelona. Finally, a variant of the formulation is presented together with some conclusions.
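To fix ideas, the sketch below brute-forces a toy version of this branch-closure problem. All names and numbers (branch locations, fixed costs, demand points, the travel cutoff) are hypothetical, and exhaustive enumeration stands in for the paper's actual formulation, which at city scale would call for an integer programming model.

```python
from itertools import combinations

# A toy post-merger branch-closure problem (illustrative only).
# Demand points patronize the nearest open branch, and demand is lost
# if that branch is farther away than `max_travel`.
branches = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (2, 2)}  # merged network
fixed_cost = {"A": 5.0, "B": 4.0, "C": 4.5, "D": 3.0}
demand_points = [((0.2, 0.1), 6.0), ((0.9, 0.2), 5.0),
                 ((0.1, 1.1), 4.0), ((1.8, 1.9), 7.0)]  # (location, revenue)
max_travel = 1.5

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def profit(open_branches):
    revenue = sum(rev for loc, rev in demand_points
                  if min(dist(loc, branches[b]) for b in open_branches) <= max_travel)
    return revenue - sum(fixed_cost[b] for b in open_branches)

# Enumerate every non-empty subset of branches to keep open.
best_profit, best_set = max(
    (profit(s), s)
    for k in range(1, len(branches) + 1)
    for s in combinations(branches, k))
print(f"best profit {best_profit:.1f} keeping branches {best_set}")
```

In this toy network the optimum keeps only two of the four branches: closing overlapping branches saves fixed costs while the survivors still capture the demand.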

Download full text paper here


Wednesday, December 23, 2009

SECURITIES ANALYSTS AS FRAME-MAKERS

Abstract

In this paper we explore the mechanisms that allow securities analysts to value companies in contexts of Knightian uncertainty, that is, in the face of information that is unclear, subject to unforeseeable contingencies or to multiple interpretations. We address this question with a grounded-theory analysis of the reports written on Amazon.com by securities analyst Henry Blodget and rival analysts during the years 1998-2000. Our core finding is that analysts’ reports are structured by internally consistent associations that include categorizations, key metrics and analogies. We refer to these representations as calculative frames, and propose that analysts function as frame-makers – that is, as specialized intermediaries that help investors value uncertain stocks. We conclude by considering the implications of frame-making for the rise of new industry categories, analysts’ accuracy, and the regulatory debate on analysts’ independence.

Despite the extensive academic attention bestowed upon analysts, existing treatments provide a limited account of their intermediary role. Extant work is best understood as three broad streams. One approach, rooted in the finance and accounting literatures, views analysts as information processors and stresses their activities of search, assembly and communication of information. Another approach, based on neo-institutional sociology and behavioral finance, documents the tendency of analysts to mimic each other. We refer to it as the imitation perspective. Finally, a more recent sociological approach has started to outline the role of analysts as critics.

Analysts as information processors. The information processing literature on analysts rests on a remarkable finding: securities analysts, long regarded as valuation experts, are unable to provide accurate forecasts of stock prices. Beginning with Cowles’ (1933) seminal piece, titled “Can Stock Market Forecasters Forecast?”, numerous finance and accounting theorists have documented the failure of analysts’ recommendations to produce abnormal returns and accurate forecasts of earnings and price targets (Lin and McNichols, 1998; Hong and Kubick, 2002; Michaely and Womack, 1999; Lim, 2001; Boni and Womack, 2002; Schack, 2001).

Two complementary explanations have been put forward to account for this failure. One view, based on the efficient market hypothesis (EMH), argues that accurate financial forecasting is simply impossible in an efficient capital market (Samuelson, 1965; Malkiel, 1973). According to the EMH, stock prices in a competitive capital market capture all relevant information about the value of a security and follow a random walk. There are no mispricings, no possibility for any actor to find extraordinary profit opportunities, and, indeed, no scope for financial intermediaries to help their clients do so (Fama, 1965, 1991; Samuelson 1967; Jensen, 1968, 1970; Malkiel, 1973). The bleak implication for analysts is that accurate forecasting and lucrative advice are impossible.

An additional explanation for analysts’ inaccuracies, based on agency theory, is that the fiduciary relationship between analyst and investor is distorted by a variety of conflicts of interest, producing dysfunctional biases in analysts’ forecasts and recommendations. These distortions include investment banking ties (Lin and McNichols, 1998; Hong and Kubick, 2002; Michaely and Womack, 1999), access to company information (Lim, 2001), brokerage interests of the bank employing the analyst (Boni and Womack, 2002), investment interests of the clients of the bank (Sargent, 2000), and the investment interests of the analysts themselves (Schack, 2001). Analysts, in short, come across from this literature as conflict-ridden intermediaries.

The aforementioned conflicts became particularly prominent following the Wall Street scandals of 2000-2001. During these years, top-ranked Internet analysts (including Henry Blodget) resisted downgrading their recommendations even as prices fell from record highs toward zero (Boni and Womack, 2002). Other analysts were recorded privately criticizing companies they publicly recommended (Gasparino, 2005). Such was the public uproar against analysts that the Securities and Exchange Commission even issued explicit guidelines urging retail investors to treat analyst reports with caution (Securities and Exchange Commission, 2002).

Whether in the form of market efficiency or conflicts of interest, the approaches to analysts presented so far share a common premise: both assume that the core intermediary function performed by security analysts is to forecast the future and provide recommendations. Analysts are accordingly presented as engaged in search, assembly and diffusion of information. To highlight this common focus on information, we refer to this literature as the information processing approach.

Download full text paper here


Sunday, December 20, 2009

The Importance of Relative Performance Feedback Information: Evidence from a Natural Experiment using High School Students

Abstract

We study the effect of providing relative performance feedback information on performance under piece-rate incentives. A natural experiment that took place in a high school offers an unusual opportunity to test this effect in a real-effort setting. For one year only, students received information that allowed them to know whether they were above (below) the class average, as well as their distance from this average. We exploit a rich panel data set and find that the provision of this information led to an increase of 5% in students’ grades. Moreover, the effect was significant across the whole distribution. However, once the information was removed, the effect disappeared. To rule out the concern that the effect may be driven by teachers within the school, we verify our results using national-level exams (externally graded) for the same students, and the effect remains.

Improving students’ performance has been an important concern for academics and educational policy makers alike. Given the recent introduction of the OECD-coordinated Programme for International Student Assessment (PISA), improvement in students’ performance, measured by their grades, is at the heart of governmental reform. The education literature has focused on school inputs as the principal means to improve students’ performance, in particular, reduction in the pupil/teacher ratio, improved teacher quality (experience and education), and extended term length (see Krueger (1999) and Card and Krueger (1992)). There is, however, a lively debate regarding the effectiveness of school inputs, largely due to their associated costs (Hanushek (1996, 2003)). Moreover, the PISA reports do not show a strong positive relationship between the amount spent per student and performance in the standardised tests in mathematics, science and reading. For example, the US ranks second in expenditure per pupil ($91,770) but only twenty-second (out of 30) in performance (see OECD PISA report, 2006).

More recently, there has been interest in analyzing the relevance of performance evaluations and of feedback information regarding these evaluations. The effect of interim feedback information about own performance on subsequent performance has been studied mostly in labour settings. The importance of interim feedback information for students’ performance has been empirically studied by Bandiera et al. (2008), who find that providing university students with interim feedback information about their own performance has a positive effect on their final performance. However, feedback information involving relative performance has received less attention. The provision of relative performance feedback information allows for social comparison: individuals can evaluate their own performance by comparing themselves to others (Festinger (1954)). While this has been extensively studied in the management and psychology literatures (see Festinger (1954), Locke and Latham (1990) and Suls and Wheeler (2000) for an overview), it has not been fully explored in economics.
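The treatment information described in the abstract is simple to state precisely. The sketch below, with purely hypothetical grades, reproduces the feedback each student received: whether they stand above or below the class average, and by how much.

```python
# Hypothetical grades; the actual data are a rich student-level panel.
grades = {"ana": 6.5, "ben": 8.0, "carla": 5.0, "david": 7.5}
class_average = sum(grades.values()) / len(grades)

# Each student learns their position relative to the class average and
# the distance from it -- the information provided in the treatment
# year and withdrawn afterwards.
for student, grade in sorted(grades.items()):
    side = "above" if grade >= class_average else "below"
    print(f"{student}: {side} the class average by {abs(grade - class_average):.2f}")
```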

Download full text article here


Thursday, December 17, 2009

The challenge of representative design in psychology and economics

Abstract

The demands of representative design, as formulated by Egon Brunswik (1956), set a high methodological standard. Both experimental participants and the situations with which they are faced should be representative of the populations to which researchers claim to generalize results. Failure to observe the latter has led to notable experimental failures in psychology from which economics could learn. It also raises questions about the meaning of testing economic theories in “abstract” environments. Logically, abstract tests can only be generalized to “abstract realities” and these may or may not have anything to do with the “empirical realities” experienced by economic actors.

Economists are adept at handling the requirements of classic, factorial experimental designs. However, a major problem with factorial experiments is that their basic logic (i.e., the orthogonal variation of variables) precludes generalizing results outside the laboratory (Brunswik, 1956). The rationale does seem impeccable: by varying one variable at a time and holding all others constant, one can isolate its effects. However, outside the laboratory other variables are neither constant nor orthogonal. Thus, estimates of effect sizes based on the experiment are subject to other forces. One can design factorial worlds within an experiment, but these may not have much to do with what happens outside it.
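The orthogonality at issue is mechanical, as a small sketch can show. In a full factorial design every combination of factor levels appears equally often, so the factor columns are uncorrelated by construction; in field data the same variables are typically correlated. The design below is a generic illustration, not one taken from the paper.

```python
from itertools import product

# A full 2^3 factorial design: 8 runs covering every combination of
# three two-level factors, coded -1/+1.
design = list(product([-1, 1], repeat=3))

def corr(i, j):
    """Pearson correlation between factor columns i and j."""
    n = len(design)
    xi = [row[i] for row in design]
    xj = [row[j] for row in design]
    mi, mj = sum(xi) / n, sum(xj) / n
    cov = sum((a - mi) * (b - mj) for a, b in zip(xi, xj)) / n
    sd_i = (sum((a - mi) ** 2 for a in xi) / n) ** 0.5
    sd_j = (sum((b - mj) ** 2 for b in xj) / n) ** 0.5
    return cov / (sd_i * sd_j)

# Orthogonal by construction: every pairwise correlation is exactly zero.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    print(f"corr(factor {i}, factor {j}) = {corr(i, j):.1f}")
```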

On the positive side, it should be noted that economists have examined whether results that looked surprising within experimental laboratories (from the viewpoint of economic theory) can also be observed in more realistic environments; that is, attempts have been made to demonstrate external validity. As a case in point, consider judgments of willingness-to-pay and willingness-to-accept, on issues ranging from ones as trivial as small gambles to ones as consequential as compensation awards in civil trials. Whereas the response mechanisms can be justified by economic theory, people are not machines that necessarily produce appropriate responses. Indeed, most people have limited experience with such mechanisms, and responses are often sensitive to normatively irrelevant considerations whether in or outside the experimental laboratory. For good examples, see Sunstein, Hastie, Payne, Schkade and Viscusi (2002). In addition, consider evidence that supports prospect theory “in the wild” (Camerer, 2000) or empirical studies in behavioral finance (Barberis & Thaler, 2003).

Download full text article here


Monday, December 14, 2009

Psychological Pressure in Competitive Environments: Evidence from a Randomized Natural Experiment

Abstract

Much like cognitive abilities, emotional skills can have major effects on performance and economic outcomes. This paper studies the behavior of professional subjects involved in a dynamic competition in their own natural environment. The setting is a penalty shoot-out in soccer, where two teams compete in a tournament framework taking turns in a sequence of five penalty kicks each. As the kicking order is determined by the random outcome of a coin flip, the treatment and control groups are determined via explicit randomization. Therefore, absent any psychological effects, both teams should have the same probability of winning regardless of the kicking order. Yet, we find a systematic first-kicker advantage. Using data on 2,731 penalty kicks from 262 shoot-outs over a three-decade period, we find that teams kicking first win the penalty shoot-out 60.5% of the time. A dynamic panel data analysis shows that the psychological mechanism underlying this result arises from the asymmetry in the partial score. As most kicks are scored, kicking first typically means having the opportunity to lead in the partial score, whereas kicking second typically means lagging in the score and having the opportunity to, at most, get even. Having a worse prospect than the opponent hinders subjects' performance. Further, we find that professionals are aware of these psychological effects. When a recent change in regulations gave winners of the coin toss the chance to choose the kicking order, they rationally reacted to it by systematically choosing to kick first. A survey of professional players reveals that when asked to explain why they prefer to kick first, they precisely identify the psychological mechanism for which we find empirical support in the data: they want “to lead in the score in order to put pressure on the opponent.”

At least since Hume (1739) and Smith (1759), psychological elements have been argued to be as much a part of human nature, and possibly as important for understanding human behavior, as the strict rationality considerations included in economic models that adhere to the rational man paradigm. Clearly then, any study of human behavior that omits these elements can yield results of unknown reliability.

Much as the rationality principle has successfully accommodated social attitudes, altruism, values and other elements (see, e.g., Becker (1976, 1996), Becker and Murphy (2000)), behavioral economics attempts to parsimoniously incorporate psychological motives not traditionally included in economic models. Theoretical models in this area rely heavily for empirical support on observations of human decision making in laboratory environments. Laboratory experiments have the important advantage of providing a great deal of control over relevant margins. In these settings, observed behavior often deviates from the predictions of standard economic models. In fact, at least since the 1970s, a great deal of experimental evidence has accumulated demonstrating circumstances under which strict rationality considerations break down and other patterns of behavior, including psychological considerations, emerge. Thus, an important issue is how applicable the insights gained in laboratory settings are for understanding behavior in natural environments. This challenge, often referred to as the problem of “generalizability” or “external validity,” has taken a central role in recent research in the area.

The best and perhaps only way to address this concern is by studying human behavior in real-life settings. Unfortunately, however, Nature does not always create the circumstances that allow a clear view of the psychological principles at work. Furthermore, naturally occurring phenomena are typically too complex to let us empirically discern psychological elements within the characteristically complex behavior that humans exhibit.
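The headline figure in the abstract, 60.5% of 262 shoot-outs won by the team kicking first, is strongly inconsistent with the no-advantage null even before any panel analysis. The back-of-the-envelope check below uses a normal approximation to the binomial; it is only an illustrative sanity check, not the paper's dynamic panel estimation.

```python
import math

# Under the null of no first-kicker advantage, wins ~ Binomial(n, 0.5).
n = 262                      # shoot-outs in the sample
wins = round(0.605 * n)      # roughly 158-159 first-kicker wins
p0 = 0.5

# Normal approximation: z-statistic and two-sided p-value.
z = (wins - n * p0) / math.sqrt(n * p0 * (1 - p0))
p_two_sided = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, two-sided p = {p_two_sided:.4f}")  # z around 3.5, p < 0.001
```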

Download full text paper here


Friday, December 11, 2009

PROMISING FAILURE: Political and Company Rhetoric as a Determinant of Success

Abstract

I show that firms that make sparing use of the future tense in their annual reports significantly outperform those that use it more. Similarly, in all of the U.S. presidential elections from 1960 through 2004, the candidate who made less use of the future tense during the televised debates won the popular vote. I show that the frequency of using future-tense sentences is strongly correlated with the frequency of making promises and that the latter can be modeled within a game-theoretic framework.

What’s a promise worth? At the start of Shakespeare’s Love’s Labour’s Lost, four men swear to forgo all worldly pleasures and, in particular, sex. A group of French ladies quickly helps them to reconsider. One by one, the men break their promises. The tension between their oaths and the temptations offered by a group of attractive young women is responsible for much of the entertainment, and it allows us to gain insight into each man’s character. That solemn oaths are not good predictors of future behavior has been noted throughout the ages by writers and philosophers alike. Jean-Jacques Rousseau argued that those who make promises keep few of them: “He who is slowest in making a promise is most faithful in keeping it.”

In this paper, I ask whether Shakespeare’s and Rousseau’s intuitions are correct and whether we should trust those who make commitments for the future. In particular, I examine whether the extensive use of promises about one’s future actions predicts poor performance and/or failure. I explore this issue in two distinct areas: corporate financial success and the popular vote in U.S. presidential elections. By studying the language structure used in corporate reports, I show that the frequency of the verbs “will”, “shall”, and “going to” represents a good proxy for the frequency of promises in these reports; I therefore use the terms future tense and promises interchangeably. It turns out that companies that use the future tense less frequently in their reports systematically outperform companies that use it more often. This relationship is not restricted to financial markets; a similar pattern exists in the context of political rhetoric. The U.S. presidential candidates who consistently make more statements about the future tend to lose the subsequent popular vote. Both of these findings are consistent with theoretical predictions on the existence of an equilibrium with inflated talk in the model of Kartik, Ottaviani, and Squintani (2007) (hereafter KOS), which is built on a classic “cheap talk” game (Crawford and Sobel 1982). To the best of my knowledge, this is the first study connecting game-theoretic models of talk with a feature of real everyday language.
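The proxy itself is easy to operationalize. The sketch below counts the share of sentences containing one of the three future-tense markers named above; the sentence splitting, regular expression, and sample text are simplified stand-ins for whatever pipeline the paper actually uses.

```python
import re

# Future-tense markers used as a proxy for promises in the text.
FUTURE = re.compile(r"\b(will|shall|going to)\b", re.IGNORECASE)

def future_tense_rate(text):
    """Share of sentences containing a future-tense marker."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    hits = sum(1 for s in sentences if FUTURE.search(s))
    return hits / len(sentences) if sentences else 0.0

report = ("We will expand into three new markets. Revenue grew 4% last year. "
          "We are going to double our research spending.")
print(f"share of future-tense sentences: {future_tense_rate(report):.2f}")  # 0.67
```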

The finance literature has recognized that quantitative data in corporate reports may be important for pricing (Fama and French 1993; Chan, Jegadeesh, and Lakonishok 1996; Sloan 1996; Franzoni and Marin 2006). The impact of language use on performance and price formation in financial markets has also been studied previously. Tetlock, Saar-Tsechansky, and Macskassy (2007) showed that the fraction of negative words in company-specific news stories forecasts low earnings; moreover, company earnings briefly underreact to the information embedded in negative words. Similarly, Tetlock (2007) studied how the proportion of negative words in popular news columns on the stock market is incorporated into aggregate market valuation. The work of Antweiler and Frank (2004) is similar in spirit. The authors constructed an algorithm to assign a “bullish”, “neutral”, or “bearish” rating to more than 1.5 million messages posted on the Yahoo! Finance website about various companies, and found that these messages not only help predict market volatility but also have a statistically significant effect on stock returns. These studies analyze the wording of messages about companies, or about stocks in general.

Download full text paper here


Tuesday, December 8, 2009

Optimal Contracts, Adverse Selection, and Social Preferences: An Experiment

Abstract

It has long been standard in agency theory to search for incentive compatible mechanisms on the assumption that people care only about their own material wealth. However, this assumption is clearly refuted by numerous experiments, and we feel that it may be useful to consider nonpecuniary utility in mechanism design and contract theory. Accordingly, we devise an experiment to explore optimal contracts in an adverse-selection context. A principal proposes one of three contract menus, each of which offers a choice of two incentive-compatible contracts, to two agents whose types are unknown to the principal. The agents know the set of possible menus, and choose to either accept one of the two contracts offered in the proposed menu or to reject the menu altogether; a rejection by either agent leads to lower (and equal) reservation payoffs for all parties. While all three possible menus favor the principal, they do so to varying degrees. We observe numerous rejections of the more lopsided menus, and approach an equilibrium where one of the more equitable contract menus (which one depends on the reservation payoffs) is proposed and agents accept a contract, selecting actions according to their types. Behavior is largely consistent with all recent models of social preferences, strongly suggesting there is value in considering nonpecuniary utility in agency theory.

The classic ‘lemons’ paper (Akerlof 1970) illustrated the point that asymmetric information led to economic inefficiency, and could even destroy an efficient market. Research on mechanism design has sought ways to minimize or eliminate this problem. Seminal research includes the auction results of Vickrey (1961) and the optimal taxation study by Mirrlees (1971). Applications include public and regulatory economics (Laffont and Tirole 1993), labor economics (Weiss 1991, Lazear 1997), financial economics (Freixas and Rochet 1997), business management (Milgrom and Roberts 1992), and development economics (Ray 1998).

It has long been standard in agency theory to search for incentive-compatible mechanisms on the assumption that people care only about their own material wealth. However, while this assumption is a useful point of departure for a theoretical examination, economic interactions frequently are associated with social approval or disapproval. In dozens of experiments, many people appear to be motivated by some form of social preferences, such as altruism, difference aversion, or reciprocity. Recently, contract theorists such as Casadesus-Masanell (1999) and Rob and Zemsky (1999) have expressed the view that contract theory could be made more descriptive and effective by incorporating some form of nonpecuniary utility into the analysis.

We consider the explanatory power of recent social preference models (e.g., Bolton and Ockenfels 2000, Fehr and Schmidt 1999, and Charness and Rabin 1999) in our contractual environment. Our aim is to investigate whether incorporating social preferences into contract theory could lead to a better understanding of how work motivation and performance are linked, and to thereby improve firms’ contract and employment choices, as well as productivity and efficiency.
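Of the social-preference models just cited, Fehr and Schmidt's (1999) inequity-aversion specification is the easiest to state compactly. The sketch below gives its two-player form with hypothetical parameter values; it illustrates why an agent may rationally reject a lopsided menu in favor of equal but lower reservation payoffs.

```python
# Fehr-Schmidt (1999) inequity-aversion utility for two players:
# U_i = x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0).
# The parameter values below are hypothetical.

def fehr_schmidt(own, other, alpha=0.8, beta=0.3):
    envy = alpha * max(other - own, 0)    # disutility from being behind
    guilt = beta * max(own - other, 0)    # disutility from being ahead
    return own - envy - guilt

# A lopsided split can be worth less, in utility terms, than an equal
# but lower reservation payoff, rationalizing observed rejections.
print(f"{fehr_schmidt(own=4, other=12):.1f}")  # 4 - 0.8*8 = -2.4
print(f"{fehr_schmidt(own=3, other=3):.1f}")   # equal payoffs: 3.0
```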

Download full text paper here


Saturday, December 5, 2009

ON COMMERCIAL MEDIA BIAS

Abstract

Within the spokes model of Chen and Riordan (2007), which allows for non-localized competition among arbitrary numbers of media outlets, we quantify the effect of concentration of ownership on the quality and bias of media content. A main result shows that too few commercial outlets, or, more precisely, too few separate owners of commercial outlets, can lead to substantial bias in equilibrium. Increasing the number of outlets (commercial and non-commercial) tends to bring down this bias, but the strongest effect occurs when the number of owners is increased. Allowing for free entry provides lower bounds on fixed costs above which substantial commercial bias occurs in equilibrium.

Motivated by the recent media policy debate in the United States and ongoing attempts by the Federal Communications Commission (FCC) to loosen ownership rules there (see e.g., McChesney, 2004, for a description of the events around the 2003 attempt; another such episode occurred in 2007), we develop a model of media competition that allows for a somewhat detailed study of the quality and bias of media content for a number of different ownership structures. The analysis builds on the spokes model of Chen and Riordan (2007), which is a Hotelling type model of spatial competition that allows for arbitrary numbers of media firms and outlets (commercial and non-commercial) that compete against each other in a non-localized fashion.

We show that excessively concentrated media markets, beyond a certain cut-off, can result in substantial bias of media content. Increasing the number of separately owned media firms in the market helps to reduce the bias; increasing the number of commercial outlets, while keeping the number of owners fixed, can also help, but clearly to a lesser extent.

In our model, the bias operates through the funding of commercial media outlets by advertisers and the outlets' internalization of the effect of their content on the advertisers' sales and advertising budgets. A motivating example for our analysis is the coverage of tobacco-related health hazards in the US. For decades, despite hundreds of thousands of deaths a year, serious statistics and medical information about the health hazards of smoking were kept away from mainstream commercial media (see, e.g., Baker, 1994, and Bagdikian, 2004, for chronologies as well as references documenting the statistical impact of advertising on the coverage of tobacco-related health hazards; see also Ellman and Germano, 2009, for further discussion and references). Bagdikian (2004, pp. 250-252) summarizes: “there were still more stories in the daily press about the causes of influenza, polio, and tuberculosis than about the cause of one in every seven deaths in the United States,” so that, in the 1980s, some “64 million Americans, obviously already addicted, smoked an average of 26 cigarettes a day,” with surveys indicating that half of the general population and two-thirds of smokers did not think smoking made a great difference in life expectancy (Baker, 1994, p. 51). Our model suggests that, alongside advertising, concentration in media markets plays an important role in explaining such bias.

Download full text article here


Wednesday, December 2, 2009

Legal Enforcement, Public Supply of Liquidity and Sovereign Risk

Abstract

Sovereign debt crises in emerging markets are usually associated with liquidity and banking crises within the economy. This connection is suggested by both anecdotal and empirical evidence. The conventional view is that the domestic financial turmoil is caused by foreign creditors' retaliation. Yet, there is no clear-cut evidence supporting the existence of “classic” default penalties (e.g., trade sanctions or exclusion from international capital markets). This paper therefore proposes a novel mechanism linking sovereign defaults with liquidity and banking crises without any intervention by foreign creditors. The model considers a standard unwillingness-to-pay problem assuming that: (i) the enforcement of private contracts is limited and, as a result, public debt represents a source of liquidity; (ii) the government cannot discriminate between domestic and foreign agents. In this setting, the prospect of drying up the private sector's liquidity restores the ex-post incentive of the government to pay without any need to assume foreign penalties. Nonetheless, liquidity crises might arise when economic conditions deteriorate and the government chooses opportunistically to default in order to avoid repaying foreign agents. The interaction between the enforcement friction and sovereign risk is then exploited to study the implications for international capital flows and for legal and institutional domestic reforms.

Log Value Added (y). Log of value added in US dollars at the 3-digit ISIC classification for manufacturing sectors. Data are sourced from the UNIDO INDSTAT 2005 database. Original data are deflated using the GDP deflator for the United States from the World Bank's World Development Indicators 2006 CD-ROM.

Default Dummy (DEF). Dummy variable taking the value one in the first year of a default episode. Data on default episodes are sourced from the Standard and Poor's sovereign default database, as reported in Beers and Chambers (2002). This database includes all sovereign defaults on loans or bonds with private agents between 1975 and 2002, and reports the period during which the debtor government remained in default.

Financial Dependence (FinDep). An index constructed as the median share of capital expenditures not financed with the cash flow from operations (capital expenditures minus cash flow from operations, divided by capital expenditures) for US-based, publicly listed firms. The index is sourced from Kroszner et al. (2007), who provide a 3-digit ISIC based reclassification of the data originally constructed by Rajan and Zingales (1998) for a mixture of 3-digit and 4-digit ISIC sectors. The data refer to the period 1980-1999 and originally range from -1.14 (Tobacco) to 0.72 (Transport equipment), with a higher number indicating greater financial dependence. To ease statistical inference, I normalize the index so that it ranges from 0 to 1 (a rescaling sketched after these variable definitions).

Liquidity Needs (Liq). An index constructed as the median ratio of inventories over total sales for US-based, publicly listed firms. This index was initially proposed by Raddatz (2006) as a measure of an industry's financial needs that focuses on short-term liquidity. The data are sourced from Kroszner et al. (2007), who compute the Raddatz index for the 3-digit ISIC manufacturing sectors. The data refer to the 1980s and originally range from 0.07 (Tobacco) to 0.72 (Plastic Products), with a higher number indicating greater liquidity needs. To ease statistical inference, I normalize the index so that it ranges from 0 to 1.

Tangibility (Tangs). An index constructed as the median ratio of net property, plant and equipment to total assets for US-based, publicly listed firms during the period 1980-1999 in each 3-digit ISIC manufacturing sector. The data are sourced from Kroszner et al. (2007). The original data range from 0.12 to 0.62, and are normalized so that they range from 0 to 1.
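The rescaling applied to all three indices is presumably the standard min-max normalization over the reported sector range; the text does not spell out the formula here, so the sketch below is an assumption, shown with the endpoints reported for the financial-dependence index.

```python
def min_max(x, lo, hi):
    """Rescale x linearly from [lo, hi] to [0, 1]."""
    return (x - lo) / (hi - lo)

# Reported range of the raw financial-dependence index.
lo, hi = -1.14, 0.72  # Tobacco and Transport equipment, respectively

for sector, raw in [("Tobacco", -1.14), ("Transport equipment", 0.72),
                    ("hypothetical mid-range sector", -0.21)]:
    print(f"{sector}: {min_max(raw, lo, hi):.2f}")  # 0.00, 1.00, 0.50
```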

Download full text paper here