Tuesday, December 29, 2009

Strategy Communication and Measurement Systems

Abstract

Organizations often face the challenge of communicating their strategies to local decision makers. The difficulty lies in finding a way to measure performance that meaningfully conveys how to implement the organization's strategy at the local level. I show that organizations solve this communication problem by combining performance measures in such a way that performance gains come closest to mimicking value-added as defined by the organization's strategy. I further show how organizations rebalance performance measures in response to changes in their strategies. Applications to the design of performance metrics, gaming, and divisional performance evaluation are considered. The paper also suggests several empirical ways to evaluate the practical importance of the communication role of measurement systems.

Performance measures communicate objectives to local decision makers, and these objectives often do not perfectly represent the true strategy, or goal, of the organization (Baker, 1992). For example, performance measures often induce unintended behaviors such as gaming (Lawler, 1990 and Courty and Marschke, 1997). This, however, only partially illustrates what I mean by imperfect performance measures. More generally, performance measures rarely represent contributions to the firm's value perfectly.5 Because of this lack of a perfect and universal proxy, organizations must constantly adjust their measurement systems so that they remain aligned with their strategies.

Organizations typically measure performance on several dimensions and balance each of these dimensions. For example, depending on the objective that is communicated, measures of short-term financial performance such as return on capital employed or project profitability are sometimes combined with measures of less tangible assets such as market share, a customer satisfaction index, or even employee commitment. This balance between different performance measures is explicit when agents are offered formal performance metrics or scorecards, and implicit when the emphasis placed on each measure is communicated to agents informally.

Performance measurement systems are often coupled with incentive systems which reward performance either financially or through other decisions (e.g. promotion, training). Surprisingly, the fraction of the agent’s compensation paid as a financial award is usually low, although it varies widely across compensation systems (Baker, Jensen and Murphy, 1988). This observation suggests two distinct concepts of weights on performance measures: (a) the relative weights determine the emphasis put on each performance measure; (b) the absolute weights determine how strongly high performers are rewarded and/or low performers punished. These two concepts correspond to the distinction between performance measurement systems and performance incentive systems, which plays a key role in this paper. Most of this paper studies how performance measures are chosen and how they are balanced (the relative weights on performance measures). Toward the end of the paper, I take into account the incentive dimension of some performance measurement systems (the absolute weights on performance measures).
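To make the distinction between relative and absolute weights concrete, here is a stylized numerical sketch (my own illustration, not taken from the paper): two noisy performance measures are combined by least squares so that the weighted score tracks value-added as closely as possible. The resulting relative weights are the emphasis placed on each measure, while the overall scale of rewards attached to the score is a separate, absolute choice.

import numpy as np

# Stylized example (not from the paper): an organization observes two noisy
# performance measures m1, m2 for each local unit and wants weights w1, w2 so
# that the score w1*m1 + w2*m2 tracks value-added v as closely as possible.
rng = np.random.default_rng(0)
n = 200
effort_quality = rng.normal(size=n)        # drives long-run value
effort_volume = rng.normal(size=n)         # drives short-run output
v = 1.0 * effort_quality + 0.5 * effort_volume           # "true" value-added
m1 = effort_volume + rng.normal(scale=0.3, size=n)       # e.g. project profitability
m2 = effort_quality + rng.normal(scale=0.3, size=n)      # e.g. customer satisfaction

M = np.column_stack([m1, m2])
weights, *_ = np.linalg.lstsq(M, v, rcond=None)
rel = weights / weights.sum()              # relative weights: the emphasis on each measure
print("relative weights on (profitability, satisfaction):", rel.round(2))
# The absolute scale (e.g. a multiplier b in pay = a + b * score) is a
# separate choice: it sets how strongly the score is rewarded or punished.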

Download full text paper here


Saturday, December 26, 2009

SPATIAL MARKET EXPANSION THROUGH MERGERS

Abstract

In this paper we present a model of firm mergers in a spatial setting. The model addresses the problem of finding the number of branches a firm has to eliminate after merging with another in order to maximize profits. The model is then applied to an example of bank mergers in the city of Barcelona. Finally, a variant of the formulation that introduces competition is presented together with some conclusions.

Several studies have analyzed the economic and financial consequences of bank mergers. Rhoades (1998) looked at nine large bank mergers with substantial market overlap in the early 1990s. He found that all produced significant cost cutting in line with the pre-merger projections, due to branch reductions. Piloff (1996) looked at 48 bank mergers in the 1980s, relating announcement-period abnormal returns to accounting-based performance measures. He found higher abnormal returns for mergers that offer the greatest potential for cost reductions (measured by geographic overlap and pre-merger cost measures). Piloff also found that industry-adjusted profitability of the merged banks does not change, that the ratio of total expenses to assets increases, and that revenues rise in the five-year period around the merger. Houston, et al. (2001) looked at analysts' estimates of projected cost savings and revenue enhancements associated with bank mergers. They found that analysts’ estimates of increases in combined bank value associated with a merger are due mainly to estimated cost savings rather than projected revenue enhancements. Finally, Avery, et al. (1999) looked at mergers during the period 1975 through 1998 involving banks with significant geographic overlap (measured by the number of branches in a ZIP code per capita). They found that these mergers resulted in a significant decrease in branches per capita.

In this paper we present a model that addresses the issue of mergers in a spatial setting. In the next section, a model is formulated for finding the number of branches a firm has to eliminate after merging with another in order to maximize revenues. The model is then applied to an example in the city of Barcelona. Finally, a variant of the formulation is presented together with some conclusions.
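As a purely hypothetical illustration of the branch-closure trade-off (the paper's actual formulation is not reproduced here), the toy enumeration below assumes made-up revenues, fixed costs and demand-recapture rates, and searches over which branches of a merged network to close.

from itertools import combinations

# Toy illustration only: closing a branch saves its fixed cost but retains
# only a fraction of its demand (recaptured by branches that stay open).
branches = {          # branch: (revenue, fixed_cost, recapture_rate_if_closed)
    "A": (120, 30, 0.5), "B": (100, 30, 0.6), "C": (90, 50, 0.6),
    "D": (80, 50, 0.5),  "E": (70, 30, 0.3),  "F": (60, 55, 0.8),
}

def profit(closed):
    open_branches = set(branches) - set(closed)
    total = 0.0
    for name, (rev, cost, recapture) in branches.items():
        if name in closed:
            # demand is recaptured only if some branch remains open to serve it
            total += rev * recapture if open_branches else 0.0
        else:
            total += rev - cost
    return total

best = max(
    (set(c) for k in range(len(branches) + 1) for c in combinations(branches, k)),
    key=profit,
)
print("close:", sorted(best), "profit:", round(profit(best), 1))
# In a richer spatial model, closures interact through geographic overlap,
# which is what makes an explicit optimization model worthwhile.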

Download full text paper here


Wednesday, December 23, 2009

SECURITIES ANALYSTS AS FRAME-MAKERS

Abstract

In this paper we explore the mechanisms that allow securities analysts to value companies in contexts of Knightian uncertainty, that is, in the face of information that is unclear, subject to unforeseeable contingencies or to multiple interpretations. We address this question with a grounded-theory analysis of the reports written on Amazon.com by securities analyst Henry Blodget and rival analysts during the years 1998-2000. Our core finding is that analysts’ reports are structured by internally consistent associations that include categorizations, key metrics and analogies. We refer to these representations as calculative frames, and propose that analysts function as frame-makers – that is, as specialized intermediaries that help investors value uncertain stocks. We conclude by considering the implications of frame-making for the rise of new industry categories, analysts’ accuracy, and the regulatory debate on analysts’ independence.

Despite the extensive academic attention bestowed upon analysts, existing treatments provide a limited account of their intermediary role. Extant work is best understood as three broad streams. One approach, rooted in the finance and accounting literatures, views analysts as information processors and stresses their activities of search, assembly and communication of information. Another approach, based on neo-institutional sociology and behavioral finance, documents the tendency of analysts to mimic each other. We refer to it as the imitation perspective. Finally, a more recent sociological approach has started to outline the role of analysts as critics.

Analysts as information processors. The information processing literature on analysts rests on a remarkable finding: securities analysts, long regarded as valuation experts, are unable to provide accurate forecasts of stock prices. Beginning with Cowles’ (1933) seminal piece, titled “Can Stock Market Forecasters Forecast?” numerous finance and accounting theorists have documented the failure of analysts’ recommendations to produce abnormal returns and accurate forecasts of earnings and price targets (Lin and McNichols, 1998; Hong and Kubick, 2002; Michaely and Womack, 1999; Lim, 2001; Boni and Womack, 2002; Schack, 2001).1

Two complementary explanations have been put forward to account for this failure. One view, based on the efficient market hypothesis (EMH), argues that accurate financial forecasting is simply impossible in an efficient capital market (Samuelson, 1965; Malkiel, 1973). According to the EMH, stock prices in a competitive capital market capture all relevant information about the value of a security, following a random walk. There are no mispricings, no possibility for any actor to find extraordinary profit opportunities and indeed, no scope for financial intermediaries to help their clients do so (Fama, 1965, 1991; Samuelson 1967; Jensen, 1968, 1970; Malkiel, 1973). The bleak implication for analysts is that accurate forecasting and lucrative advice are impossible.

An additional explanation for analysts’ inaccuracies, based on agency theory, is that the fiduciary relationship between analyst and investor is distorted by a variety of conflicts of interest, producing dysfunctional biases in analysts’ forecasts and recommendations. These distortions include investment banking ties (Lin and McNichols, 1998; Hong and Kubick, 2002; Michaely and Womack, 1999), access to company information (Lim, 2001), brokerage interests of the bank employing the analyst (Boni and Womack, 2002), investment interests of the clients of the bank (Sargent, 2000), or the investment interests of the analysts themselves (Schack, 2001). Analysts, in short, come across from this literature as conflict-ridden intermediaries.

The aforementioned conflicts have become particularly prominent following the Wall Street scandals of 2000-2001. During these years, top-ranked Internet analysts (including Henry Blodget) resisted downgrading their recommendations even as prices fell from record highs to zero (Boni and Womack, 2002). Other analysts were recorded privately criticizing companies they publicly recommended (Gasparino, 2005). Such was the public uproar against analysts that the Securities and Exchange Commission even issued explicit guidelines for retail investors to use analyst reports with caution (Securities and Exchange Commission, 2002).

Whether in the form of market efficiency or conflicts of interest, the approaches to analysts presented so far share a common premise: both assume that the core intermediary function performed by security analysts is to forecast the future and provide recommendations. Analysts are accordingly presented as engaged in search, assembly and diffusion of information. To highlight this common focus on information, we refer to this literature as the information processing approach.

Download full text paper here


Sunday, December 20, 2009

The Importance of Relative Performance Feedback Information: Evidence from a Natural Experiment using High School Students

Abstract

We study the effect of providing relative performance feedback information on performance under piece-rate incentives. A natural experiment that took place in a high school offers an unusual opportunity to test this effect in a real-effort setting. For one year only, students received information that allowed them to know whether they were above (below) the class average as well as the distance from this average. We exploit a rich panel data set and find that the provision of this information led to an increase of 5% in students’ grades. Moreover, the effect was significant for the whole distribution. However, once the information was removed the effect disappeared. To rule out the concern that the effect may be driven by teachers within the school, we verify our results using national level exams (externally graded) for the same students, and the effect remains.

Improving students’ performance has been an important concern for academics and educational policy makers alike. Given the recent introduction of the OECD-coordinated Programme for International Student Assessment (PISA), improvements in students’ performance, measured by their grades, are at the heart of governmental reform.1 The education literature has focused on school inputs as the principal means to improve students’ performance, in particular reductions in the pupil/teacher ratio, improved teacher quality (experience and education), and extended term length (see Krueger (1999) and Card and Krueger (1992)). There is, however, a lively debate regarding the effectiveness of school inputs, largely due to their associated costs (Hanushek (1996, 2003)). Moreover, the PISA reports do not show a strong positive relationship between the amount spent per student and performance in the standardised tests in mathematics, science and reading. For example, the US ranks second in expenditure per pupil ($91,770) but only twenty-second (out of 30) in performance (see OECD PISA report, 2006).

More recently, there has been interest in analyzing the relevance of performance evaluations and of feedback information regarding these evaluations. The effect of interim feedback information about own performance on subsequent performance has been studied mostly in labour settings.2 The importance of interim feedback information for students’ performance has been empirically studied by Bandiera et al. (2008). The authors find that providing university students with interim feedback information about their own performance has a positive effect on their final performance. However, feedback information involving relative performance has received less attention. The provision of relative performance feedback information allows for social comparison: individuals can evaluate their own performance by comparing themselves to others (Festinger (1954)). While this has been extensively studied in the management and psychology literatures (see Festinger (1954), Locke and Latham (1990) and Suls and Wheeler (2000) for an overview), it has not been fully explored in economics.3

Download full text article here


Thursday, December 17, 2009

The challenge of representative design in psychology and economics

Abstract

The demands of representative design, as formulated by Egon Brunswik (1956), set a high methodological standard. Both experimental participants and the situations with which they are faced should be representative of the populations to which researchers claim to generalize results. Failure to observe the latter has led to notable experimental failures in psychology from which economics could learn. It also raises questions about the meaning of testing economic theories in “abstract” environments. Logically, abstract tests can only be generalized to “abstract realities” and these may or may not have anything to do with the “empirical realities” experienced by economic actors.

Economists are adept at handling the requirements of classic, factorial experimental designs. However, a major problem with factorial experiments is that the basic logic (i.e., the orthogonal variation of variables) precludes generalizing results outside the laboratory (Brunswik, 1956). The rationale does seem impeccable. By varying one variable at a time and holding all others constant, one can isolate the effects. However, outside the laboratory all other variables are not constant and variables are not orthogonal. Thus, estimates of the sizes of effects (based on the experiment) are subject to other forces. You can design factorial worlds within an experiment but this may not have much to do with what happens outside it.

On the positive side, it should be noted that economists have looked at whether results that looked surprising within experimental laboratories (from the viewpoint of economic theory) can also be observed in more realistic environments, i.e., attempts have been made to demonstrate external validity. As a case in point, consider judgments of willingness-to-pay and willingness-to-accept on issues ranging from ones as trivial as small gambles to ones as consequential as compensation awards in civil trials. Whereas the response mechanisms can be justified by economic theory, people are not machines that necessarily produce appropriate responses. Indeed, most people have limited experience with such mechanisms, and responses are often sensitive to normatively irrelevant considerations whether in or outside the experimental laboratory. For good examples, see Sunstein, Hastie, Payne, Schkade and Viscusi (2002). In addition, consider evidence that supports prospect theory “in the wild” (Camerer, 2000) or empirical studies in behavioral finance (Barberis & Thaler, 2003).

Download full text article here


Monday, December 14, 2009

Psychological Pressure in Competitive Environments: Evidence from a Randomized Natural Experiment

Abstract

Much like cognitive abilities, emotional skills can have major effects on performance and economic outcomes. This paper studies the behavior of professional subjects involved in a dynamic competition in their own natural environment. The setting is a penalty shoot-out in soccer, where two teams compete in a tournament framework taking turns in a sequence of five penalty kicks each. As the kicking order is determined by the random outcome of a coin flip, the treatment and control groups are determined via explicit randomization. Therefore, absent any psychological effects, both teams should have the same probability of winning regardless of the kicking order. Yet, we find a systematic first-kicker advantage. Using data on 2,731 penalty kicks from 262 shoot-outs over a three-decade period, we find that teams kicking first win the penalty shoot-out 60.5% of the time. A dynamic panel data analysis shows that the psychological mechanism underlying this result arises from the asymmetry in the partial score. As most kicks are scored, kicking first typically means having the opportunity to lead in the partial score, whereas kicking second typically means lagging in the score and having the opportunity to, at most, get even. Having a worse prospect than the opponent hinders subjects' performance. Further, we also find that professionals are aware of these psychological effects. When a recent change in regulations gives winners of the coin toss the chance to choose the kicking order, they rationally react to it by systematically choosing to kick first. A survey of professional players reveals that when asked to explain why they prefer to kick first, they precisely identify the psychological mechanism for which we find empirical support in the data: they want "to lead in the score in order to put pressure on the opponent."
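As a rough back-of-the-envelope illustration (not the paper's dynamic panel analysis), one can ask how unlikely a 60.5% first-kicker win rate over 262 shoot-outs would be if the coin toss conferred no advantage at all; the exact win count below is an assumption inferred from the reported percentage.

from scipy.stats import binomtest

n_shootouts = 262
first_kicker_wins = round(0.605 * n_shootouts)   # approx. 159; inferred, not reported directly

# Two-sided exact binomial test against the 50-50 null implied by the coin flip
result = binomtest(first_kicker_wins, n_shootouts, p=0.5, alternative="two-sided")
print(f"wins = {first_kicker_wins}/{n_shootouts}, p-value = {result.pvalue:.4f}")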

At least since Hume (1739) and Smith (1759), psychological elements have been argued to be as much a part of human nature, and possibly as important for understanding human behavior, as the strict rationality considerations included in economic models that adhere to the rational man paradigm. Clearly then, any study of human behavior that omits these elements can yield results of unknown reliability.

Much as the rationality principle has successfully accommodated social attitudes, altruism, values and other elements (see, e.g., Becker (1976, 1996), Becker and Murphy (2000)), behavioral economics attempts to parsimoniously incorporate psychological motives not traditionally included in economic models. Theoretical models in this area rely firmly on the observation of human decision making in laboratory environments for empirical support. Laboratory experiments have the important advantage of providing a great deal of control over relevant margins. In these settings, observed behavior often deviates from the predictions of standard economic models. In fact, at least since the 1970s, a great deal of experimental evidence has been accumulated demonstrating circumstances under which strict rationality considerations break down and other patterns of behavior, including psychological considerations, emerge. Thus, an important issue is how applicable the insights gained in laboratory settings are to understanding behavior in natural environments. This challenge, often referred to as the problem of “generalizability” or “external validity,” has taken a central role in recent research in the area.1

The best and perhaps only way to address this concern is by studying human behavior in real life settings. Unfortunately, however, Nature does not always create the circumstances that allow a clear view of the psychological principles at work. Furthermore, naturally occurring phenomena are typically too complex to be empirically tractable in a way that we can discern psychological elements from within the characteristically complex behavior exhibited by humans.2

Download full text paper here


Friday, December 11, 2009

PROMISING FAILURE: Political and Company Rhetoric as a Determinant of Success

Abstract

I show that firms that make sparing use of the future tense in their annual reports significantly outperform those that use it more. Similarly, in all of the U.S. presidential elections from 1960 through 2004, the candidate who made less use of the future tense during the televised debates won the popular vote. I show that the frequency of using future-tense sentences is strongly correlated with the frequency of making promises and that the latter can be modeled within a game-theoretic framework.

What’s a promise worth? At the start of Shakespeare’s Love’s Labour’s Lost, four men swear to forgo all worldly pleasures and, in particular, sex. A group of French ladies quickly helps them to reconsider. One by one, the men break their promises. The tension between their oaths and the temptations offered by a group of attractive young women is responsible for much of the entertainment, and it allows us to gain insight into each man’s character. That solemn oaths are not good predictors of future behavior has been noted throughout the ages by writers and philosophers alike. Jean-Jacques Rousseau argued that those quickest to make promises are the least likely to keep them: “He who is slowest in making a promise is most faithful in keeping it.”

In this paper, I ask whether Shakespeare’s and Rousseau’s instincts are right and whether we should trust those who make commitments for the future. In particular, I examine whether the extensive use of promises about one’s future actions predicts poor performance and/or failure. I explore this issue in two distinct areas: corporate financial success and the popular vote in U.S. presidential elections. By studying the language structure used in corporate reports, I show that the frequency of the verbs “will”, “shall”, and “going to” represents a good proxy for the frequency of promises in these reports. Therefore, I will use the terms future tense and promises interchangeably. It turns out that companies that use the future tense less frequently in their reports systematically outperform companies that use it more often. This relationship is not restricted to financial markets; a similar pattern exists in the context of political rhetoric. The U.S. presidential candidates who consistently make more statements about the future tend to lose the subsequent popular vote. Both of these findings are consistent with theoretical predictions on the existence of an equilibrium with inflated talk in the model of Kartik, Ottaviani, and Squintani (2007) (hereafter KOS), which is built on a classic “cheap talk” game (Crawford and Sobel 1982). To the best of my knowledge, this is the first study connecting game-theoretic models of talk with a feature of real everyday language.
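A minimal sketch of the kind of counting this approach implies, on made-up text and with illustrative tokenization rules (the paper's actual text-processing procedure may differ):

import re

# Share of sentences containing "will", "shall" or "going to"; the sample
# text and the simple sentence splitter are illustrative assumptions.
FUTURE_MARKERS = re.compile(r"\b(will|shall|going to)\b", flags=re.IGNORECASE)

def future_tense_share(text: str) -> float:
    """Fraction of sentences that contain at least one future-tense marker."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    hits = sum(bool(FUTURE_MARKERS.search(s)) for s in sentences)
    return hits / len(sentences)

sample = ("We delivered record revenue this quarter. "
          "We will expand into three new markets and we are going to double capacity.")
print(f"future-tense share: {future_tense_share(sample):.2f}")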

The finance literature has recognized that quantitative data in corporate reports may be important for pricing (Fama and French 1993; Chan, Jegadeesh, and Lakonishok 1996; Sloan 1996; Franzoni and Marin 2006). The impact of language use on performance and price formation in financial markets has also been studied previously. Tetlock, Saar-Tsechansky, and Macskassy (2007) showed that the fraction of negative words in company-specific news stories forecasts low earnings. Moreover, company earnings briefly underreact to the information embedded in negative words. Similarly, Tetlock (2007) studied how the proportion of negative words in popular news columns on the stock market is incorporated into aggregate market valuation. The work of Antweiler and Frank (2004) is similar in spirit. The authors constructed an algorithm to assign a “bullish”, “neutral”, or “bearish” rating to more than 1.5 million messages posted on the Yahoo! Finance website about various companies, and found that these messages not only help predict market volatility but also have a statistically significant effect on stock returns. These studies analyze the wording of messages about companies, or about stocks in general.

Download full text paper here


Tuesday, December 8, 2009

Optimal Contracts, Adverse Selection, and Social Preferences: An Experiment

Abstract

It has long been standard in agency theory to search for incentive-compatible mechanisms on the assumption that people care only about their own material wealth. However, this assumption is clearly refuted by numerous experiments, and we feel that it may be useful to consider nonpecuniary utility in mechanism design and contract theory. Accordingly, we devise an experiment to explore optimal contracts in an adverse-selection context. A principal proposes one of three contract menus, each of which offers a choice of two incentive-compatible contracts, to two agents whose types are unknown to the principal. The agents know the set of possible menus, and choose either to accept one of the two contracts offered in the proposed menu or to reject the menu altogether; a rejection by either agent leads to lower (and equal) reservation payoffs for all parties. While all three possible menus favor the principal, they do so to varying degrees. We observe numerous rejections of the more lopsided menus, and behavior approaches an equilibrium in which one of the more equitable contract menus (which one depends on the reservation payoffs) is proposed and agents accept a contract, selecting actions according to their types. Behavior is largely consistent with all recent models of social preferences, strongly suggesting there is value in considering nonpecuniary utility in agency theory.

The classic ‘lemons’ paper (Akerlof 1970) illustrated the point that asymmetric information led to economic inefficiency, and could even destroy an efficient market. Research on mechanism design has sought ways to minimize or eliminate this problem. Seminal research includes the auction results of Vickrey (1961) and the optimal taxation study by Mirrlees (1971). Applications include public and regulatory economics (Laffont and Tirole 1993), labor economics (Weiss 1991, Lazear 1997), financial economics (Freixas and Rochet 1997), business management (Milgrom and Roberts 1992), and development economics (Ray 1998).

It has long been standard in agency theory to search for incentive-compatible mechanisms on the assumption that people care only about their own material wealth. However, while this assumption is a useful point of departure for a theoretical examination, economic interactions frequently are associated with social approval or disapproval. In dozens of experiments, many people appear to be motivated by some form of social preferences, such as altruism, difference aversion, or reciprocity. Recently, contract theorists such as Casadesus-Masanell (1999) and Rob and Zemsky (1999) have expressed the view that contract theory could be made more descriptive and effective by incorporating some form of nonpecuniary utility into the analysis.

We consider the explanatory power of recent social preference models (e.g., Bolton and Ockenfels 2000, Fehr and Schmidt 1999, and Charness and Rabin 1999) in our contractual environment. Our aim is to investigate whether incorporating social preferences into contract theory could lead to a better understanding of how work motivation and performance are linked, and to thereby improve firms’ contract and employment choices, as well as productivity and efficiency.

Download full text paper here


Saturday, December 5, 2009

ON COMMERCIAL MEDIA BIAS

Abstract

Within the spokes model of Chen and Riordan (2007), which allows for non-localized competition among arbitrary numbers of media outlets, we quantify the effect of concentration of ownership on the quality and bias of media content. A main result shows that too few commercial outlets, or, more precisely, too few separate owners of commercial outlets, can lead to substantial bias in equilibrium. Increasing the number of outlets (commercial and non-commercial) tends to bring down this bias, but the strongest effect occurs when the number of owners is increased. Allowing for free entry provides lower bounds on fixed costs above which substantial commercial bias occurs in equilibrium.

Motivated by the recent media policy debate in the United States and ongoing attempts by the Federal Communications Commission (FCC) to loosen ownership rules there (see e.g., McChesney, 2004, for a description of the events around the 2003 attempt; another such episode occurred in 2007), we develop a model of media competition that allows for a somewhat detailed study of the quality and bias of media content for a number of different ownership structures. The analysis builds on the spokes model of Chen and Riordan (2007), which is a Hotelling type model of spatial competition that allows for arbitrary numbers of media firms and outlets (commercial and non-commercial) that compete against each other in a non-localized fashion.

We show that excessively concentrated media markets, beyond a certain cut-off, can result in substantial bias of media content. Increasing the number of separately owned media firms in the market helps towards reducing the bias; increasing the number of commercial outlets, while keeping the number of owners fixed, can also help, but clearly to a lesser extent.1

The channel through which the bias occurs in our model is the funding of commercial media outlets by advertisers and the outlets' internalization of the effect of their content on advertisers' sales and advertising budgets. A motivating example for our analysis is the coverage of tobacco-related health hazards in the US. For decades, despite hundreds of thousands of deaths a year, serious statistics and medical information about the health hazards of smoking were kept away from mainstream commercial media (see e.g., Baker, 1994, and Bagdikian, 2004, for chronologies as well as references documenting the statistical impact of advertising on the coverage of tobacco-related health hazards; see also Ellman and Germano, 2009, for further discussion and references). Bagdikian (2004, pp. 250-252) summarizes that "there were still more stories in the daily press about the causes of influenza, polio, and tuberculosis than about the cause of one in every seven deaths in the United States," so that, in the 1980s, some "64 million Americans, obviously already addicted, smoked an average of 26 cigarettes a day," with surveys indicating that half of the general population and two-thirds of smokers did not think smoking made a great difference in life expectancy (Baker, 1994, p. 51).2 Our model claims that, alongside advertising, concentration in media markets plays an important role in explaining such bias.

Download full text article here


Wednesday, December 2, 2009

Legal Enforcement, Public Supply of Liquidity and Sovereign Risk

Abstract

Sovereign debt crises in emerging markets are usually associated with liquidity and banking crises within the economy. This connection is suggested by both anecdotal and empirical evidence. The conventional view is that the domestic financial turmoil is caused by foreign creditors' retaliation. Yet, there is no clear-cut evidence supporting the existence of "classic" default penalties (e.g., trade sanctions or exclusion from international capital markets). This paper therefore proposes a novel mechanism linking sovereign defaults with liquidity and banking crises without any intervention by foreign creditors. The model considers a standard unwillingness-to-pay problem assuming that: (i) the enforcement of private contracts is limited and, as a result, public debt represents a source of liquidity; (ii) the government cannot discriminate between domestic and foreign agents. In this setting, the prospect of drying up the private sector's liquidity restores the government's ex-post incentive to pay without any need to assume foreign penalties. Nonetheless, liquidity crises might arise when economic conditions deteriorate and the government chooses opportunistically to default in order to avoid repaying foreign agents. The interaction between the enforcement friction and sovereign risk is then exploited to study the implications for international capital flows and for legal and institutional domestic reforms.

Log Value Added (y). Log of value added in US dollars at the 3-digit ISIC classification for manufacturing sectors. Data are sourced from the UNIDO INDSTAT 2005 database. Original data are deflated using the GDP deflator for the United States from the World Bank's World Development Indicators 2006 CD-ROM.

Default Dummy (DEF). Dummy variable taking a value one in the first year of a default episode. Data on default episodes are sourced from the Standard and Poor's sovereign default database, as reported in Beers and Chambers (2002). This database includes all sovereign defaults on loans or bonds with private agents between 1975 and 2002, and reports the period during which the debtor government remained in default.

Financial Dependence (FinDep). An index constructed as the median share of capital expenditures not financed with cash flow from operations (capital expenditures minus cash flow from operations, divided by capital expenditures) among US-based, publicly listed firms. The index is sourced from Kroszner et al. (2007), who provide a 3-digit ISIC based reclassification of the data originally constructed by Rajan and Zingales (1998) for a mixture of 3-digit and 4-digit ISIC sectors. The data refer to the period 1980-1999 and originally range from -1.14 (Tobacco) to 0.72 (Transport equipment), with a higher number indicating greater financial dependence. To ease statistical inference, I normalize the index so that it ranges from 0 to 1.

Liquidity Needs (Liq). An index constructed as the median ratio of inventories over total sales for US-based, publicly listed firms. This index was initially proposed by Raddatz (2006) as a measure of an industry's financial needs that focuses on short-term liquidity. The data are sourced from Kroszner et al. (2007), who compute the Raddatz index for the 3-digit ISIC manufacturing sectors. The data refer to the 1980s and originally range from 0.07 (Tobacco) to 0.72 (Plastic Products), with a higher number indicating greater liquidity needs. To ease statistical inference, I normalize the index so that it ranges from 0 to 1.

Tangibility (Tangs). An index constructed as the median ratio of net property, plant and equipment to total assets for US-based, publicly listed firms during the period 1980-1999 in each 3-digit ISIC manufacturing sector. The data are sourced from Kroszner et al. (2007). The original data range from 0.12 to 0.62 and are normalized so that they range from 0 to 1.
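A small pandas sketch of how sector-level indices of this kind might be built and normalized to the unit interval, using hypothetical firm-level data and assumed column names (not the original data sources):

import pandas as pd

# Hypothetical firm-level data: 3-digit ISIC code plus the accounting items
# needed for the three indices defined above.
firms = pd.DataFrame({
    "isic3":       [311, 311, 313, 313, 321, 321],
    "capex":       [100.0, 80.0, 60.0, 90.0, 50.0, 70.0],
    "cash_flow":   [40.0, 90.0, 30.0, 20.0, 55.0, 35.0],
    "inventories": [20.0, 15.0, 30.0, 25.0, 10.0, 12.0],
    "sales":       [200.0, 180.0, 150.0, 160.0, 120.0, 140.0],
    "net_ppe":     [60.0, 50.0, 40.0, 45.0, 30.0, 35.0],
    "assets":      [150.0, 140.0, 120.0, 130.0, 100.0, 110.0],
})

firms["findep"] = (firms["capex"] - firms["cash_flow"]) / firms["capex"]
firms["liq"] = firms["inventories"] / firms["sales"]
firms["tang"] = firms["net_ppe"] / firms["assets"]

# Median across firms within each 3-digit ISIC sector, then min-max
# normalization to [0, 1] as described in the text.
sector = firms.groupby("isic3")[["findep", "liq", "tang"]].median()
normalized = (sector - sector.min()) / (sector.max() - sector.min())
print(normalized.round(2))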

Download full text paper here


Sunday, November 29, 2009

On the Accuracy of Latin American Trade Statistics: a Nonparametric Test for 1925

Abstract

This paper proposes a nonparametric test in order to establish the level of accuracy of the foreign trade statistics of 17 Latin American countries when contrasted with the trade statistics of their main partners in 1925. The Wilcoxon Matched-Pairs Ranks test is used to determine whether the differences between the data registered by exporters and importers are meaningful, and if so, whether the differences are systematic in any direction. The paper tests the reliability of the data registered for two homogeneous products, petroleum and coal, both in volume and value. The conclusion of the several exercises performed is that in most cases we cannot accept the existence of statistically significant differences between the data provided by the exporters and those registered by the importing countries. The qualitative historiography of Latin America describes its foreign trade statistics as mostly unusable. Our quantitative results contest this view.
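As an illustration of the matched-pairs logic described in the abstract, the sketch below runs the Wilcoxon test on a handful of hypothetical exporter- versus importer-reported figures; the numbers are invented and only show how the test is applied.

from scipy.stats import wilcoxon

# Each pair is one trade flow (e.g. petroleum exports to a given partner) as
# reported by the exporter and as reported by the importer; values are made up.
exporter_reported = [12.1, 8.4, 30.2, 5.5, 17.8, 9.9, 22.3, 4.1]
importer_reported = [11.7, 8.9, 29.5, 5.8, 18.4, 9.6, 23.0, 4.0]

stat, pvalue = wilcoxon(exporter_reported, importer_reported)
print(f"Wilcoxon statistic = {stat:.1f}, p-value = {pvalue:.3f}")
# A large p-value means we cannot reject that the paired differences are
# centered at zero, i.e. no systematic discrepancy in either direction.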

The general mistrust placed in trade statistics, particularly those of underdeveloped countries, represents a heavy burden on economic history research, since trade statistics are one of the oldest and most complete economic series available for analysis. For instance, a research project such as the one described in Carreras et al. (2003) or Carreras et al. (2004), which aims at estimating the level of economic modernization in Latin American and Caribbean countries before World War II by making systematic use of the trade statistics of these countries as well as of their principal trading partners in the developed world, immediately falls under suspicion.

From the seminal work of Morgenstern (1963) to the present day, the users of trade figures have been aware of the divergence that exists between exporters’ and importers’ figures. The impression from the economic literature is that the researcher should be even more suspicious of the data the more underdeveloped the country. Among others, the studies of Naya and Morgan (1969), Yeats (1990), Rozansky and Yeats (1994) and Makhoul and Otterstrom (1998) show that the accuracy of trade statistics provided by developed countries is higher than that of developing countries. For instance, Makhoul and Otterstrom (1998) found that the quality of OECD trade statistics is much better than that of non-OECD countries even in a relatively recent period such as 1980 to 1994. Rozansky and Yeats (1994) also found that discrepancies between importers’ and exporters’ reports appear especially important for the less developed countries.

That underdeveloped countries misreport statistics more often than developed nations is not much of a surprise. Many of the alleged causes of misreporting have to do with a lack of means for data collection, statistics deliberately distorted for a specific purpose (to improve creditworthiness, or to collect or avoid higher taxes), simple corruption, smuggling, and so on, all of which seem to occur more often in low-income countries (see Yeats (1990)). Following such a line of reasoning, the straightforward solution seemed to be to use the statistics of the more developed trade partners instead, which are expected to be of higher quality. However, Yeats (1995) concluded that ‘the partner country gap filling procedures have little or no potential for improving the general coverage or quality of international trade data’. His final remark points at the need for ‘improved procedures for data collection and reporting at the country level’.

In fact, a wide array of issues would need to improve in order to reduce the differences between the quantities and, above all, the values recorded at the port of origin and those registered at destination: different accounting methods (CIF versus FOB, general versus special trade), different times of recording (goods movement versus money movement, fiscal versus calendar years), prices used (declared prices versus official prices), different units of measurement (currencies and exchange rates; units, dozens, weight, volume, length, etc.), misclassification of products (thousands of subcategories versus ‘all others’ type categories), and geographical misallocation (country of consignment versus country of origin/destination), just to name the most relevant. A detailed explanation of these and other reasons for discrepancies can be found in Allen and Ely (1953) and in Federico and Tena (1991). Given the list of issues, the ample pessimism about the accuracy and usefulness of international trade statistics for economic analytical purposes is understandable.

Download full text paper here


Friday, November 27, 2009

Labor Market Information Acquisition and Downsizing

Abstract

We study the optimal mechanism for downsizing the public sector which takes into account different informational constraints (complete versus asymmetric information on each worker’s efficiency) and political constraints (mandatory versus voluntary downsizing). Under complete information, the optimal structure of downsizing (who is laid-off and who is not) does not depend on the political constraint and is determined by the (marginal) cost of retaining a worker in the public sector. Since this cost includes his opportunity cost in the private sector, information acquisition on opportunity costs affects the structure of downsizing. Under asymmetric information, the political constraints determine which workers obtain information rents and therefore affect the structure of downsizing. An increase in the precision of the information on workers’ opportunity costs may increase or decrease social welfare depending on its impacts on the information rents.

Public sector downsizing is an increasingly important element in economic reforms of developing countries and transition economies.1 Countries which followed state-led development strategies often exhibit bloated bureaucracy with overstaffed public enterprises. Severe labor redundancies in the public sector are common in transition economies, where the shift to a market economy requires a great number of workers to be relocated out of the public sector. In some other countries, the need for public sector downsizing comes from a fiscal crisis which requires a severe cutback in government expenditures.

While the gains from downsizing are potentially large, the chances of mishandling it are considerable as well. According to some recent cross-country studies of downsizing programs, adverse selection plagues downsizing programs, so that many exhibit the “revolving door” syndrome, whereby separated workers are subsequently rehired,2 and downsizing programs carried out by governments before privatization tend to reduce rather than increase privatization prices (Chong and López-de-Silanes, 2002). They also argue that a naïve mechanism using severance pay to induce voluntary separation is likely to fail in this respect since, when more able workers have better job opportunities in the private sector, such a mechanism induces good workers to leave and hence creates the subsequent need to rehire them.

The previous findings suggest that to be successful, a downsizing mechanism must carefully deal with adverse selection problems. For this purpose, we adopt a mechanism design approach and study the optimal mechanism for public sector downsizing which accounts for different informational and political constraints. Concerning informational constraints, we distinguish two kinds of information: one is about each worker’s productive efficiency in the public sector and the other is about each worker’s outside opportunity (i.e., the utility that he is expected to obtain in the private sector). Both kinds of information are necessary to determine the desirable size of downsizing and to successfully implement it.

Even if the government designs a mechanism properly accounting for the relevant informational constraint, the mechanism cannot be implemented if it is politically unfeasible.3 In this respect, we can distinguish two main forms of downsizing: mandatory and voluntary downsizing.4 Under mandatory downsizing, the government has the right to lay off any worker in the public sector and hence the political constraint is minimal. In contrast, under voluntary downsizing, any worker has the right to stay in the public sector with his current status and cannot be laid off against his will, and therefore the political constraint is maximal. In this paper, we consider these two extreme modes of downsizing although our analysis can be extended to an intermediate political constraint in which the government needs the approval of a majority of workers.

Download full text article here


Tuesday, November 24, 2009

Insurance and safety after September 11, 2001: Coming to grips with the costs and threats of terrorism

Abstract

This chapter, originally written as a consequence of the terrorist attacks of September 11, 2001, provides an elementary, everyday introduction to the concepts of risk and insurance. Conceptually, risk has two dimensions: a potential loss, and the chance of that loss being realized. People can, however, transfer risk to insurance companies against the payment of so-called premiums. In practice, one needs accurate assessments of both losses and probabilities to judge whether premiums are appropriate. For many risks, this poses little problem (e.g., life insurance); however, it is difficult to assess the risks of many other kinds of events, such as acts of terrorism. It is emphasized that, through evolution and learning, people are able to handle many of the common risks that they face in life. But when people lack experience (e.g., new technologies, threats of terrorism), risk can only be assessed through imagination. Not surprisingly, insurance companies demand high prices when risks are poorly understood. In particular, the cost of insurance against possible acts of terrorism soared after September 11. How should people approach risk after the events of that day? Clearly, the world needs to protect itself from the acts of terrorists and other disturbed individuals. However, it is also important to address the root causes of such antisocial movements. It is therefore suggested that programs aimed at combating ignorance, prejudice, and social inequalities may be more effective premiums for reducing the risk of terrorism than has been recognized to date.

There is clearly a need for insurance against terrorist attacks and other potentially catastrophic events. Moreover, in the developed world, markets typically arise to meet such needs. Why, then, is the market for catastrophe insurance such an exception, and what, if anything, can or should be done about this? The fundamental reason probably lies in the fact that, in order to face potentially catastrophic risks, insurance companies need to maintain large amounts of liquid capital. However, according to Dwight Jaffee and Thomas Russell,12 holding such capital is discouraged in the US by institutional factors that involve, inter alia, accounting regulations, tax laws, and the threat of takeover of companies with large cash reserves. Indeed, Jaffee and Russell state that the failure of the insurance market to provide coverage against catastrophic events is due to idiosyncrasies or failures in the capital markets as opposed to problems with insurance per se. On the other hand, Howard Kunreuther has pointed out that even if potential investors are offered the chance to buy what seem like quite profitable catastrophe bonds (i.e., so-called “cat bonds”), there is considerable reluctance to do so.13 In other words, there is considerable aversion to investing in companies or financial instruments that depend on events characterized by large potential losses and unknown probabilities (however small the reasonable upper bounds of these probabilities might be).

In fact, many countries in the Western world now have policies whereby governments have effectively agreed to become the insurers of last resort in the case of catastrophes. This is the case, for example, in Great Britain, France and Israel, although what differs between countries are the methods governments use to build up the necessary funds across time, e.g., by imposing a levy on all private insurance contracts (France), a “pool” or risk-sharing approach (UK), or specific taxes on property (Israel). As can be seen, however, the final costs of insuring against catastrophes are borne by the citizens of each country; what varies is how such costs are distributed across different segments of the population. Currently (September 2002), it is this issue that is being debated in the US.

Finally, we referred above to the fact that the share prices of insurance companies rebounded quickly in the aftermath of September 11, 2001. Subsequently (through September 2002), share prices have dropped considerably along with the share prices of almost all sectors of business activity. However, it would be foolish to attribute this drop in prices to the events of September 11, 2001. Instead, it is much more indicative of the general malaise in share prices that has swept over the world economy in the last year.

Download full text article here


Saturday, November 21, 2009

How does product market competition shape incentive contracts?

Abstract

This paper studies the effects of product market competition on the explicit compensation packages that firms offer to their CEOs, executives and workers. We use a large sample of both traded and non-traded UK firms and exploit a quasi-natural experiment associated with an increase in competition. The sudden appreciation of the pound in 1996 implied different changes in competition for sectors with different degrees of openness. We provide difference-in-differences estimates, and our results show that a higher level of product market competition increases the performance-pay sensitivity of compensation schemes, in particular for executives.

Three different compensation measures are used as dependent variables. These are derived from the annual company statements. The first is total compensation of the highest paid director, which contains all of the firm’s payments to the highest paid director in a particular year, including both fixed and variable compensation elements, such as stock options.12 Although occasionally it may be the chairman, in most cases the highest paid director is the CEO.13 This is the only publicly available measure of top executive pay for the UK, and the one used in virtually all related studies.14 In fact, the amount of information provided on each company varies; in particular, many firms do not report the pay of the highest paid director explicitly.

Secondly, we use a measure of average executive pay, which contains the average remuneration received by the board members. Given that individual data are not available, this measure is calculated as the ratio of total board compensation to the number of directors. Board members include the top executives of the firm, including the CEO, but also a proportion of non-executive directors. Ideally one would like to separate these two types of directors, as their roles are not exactly the same; however, this is not possible in our sample. In any case, even though non-executive directors do not make direct management decisions, they do influence the strategic decisions of the firm and can be seen as agents of the shareholders, in a way similar to executive directors. Furthermore, the presence of non-executive directors in the UK is quite low compared with the US. Previous studies estimate that the proportion of non-executive directors on the board is about 40-50% for large quoted firms. However, among non-quoted firms, the percentage of firms with at least one non-executive director is between 33% and 47% for large firms, and 19% for small and medium-sized firms (fewer than 50 employees). Given the predominance of small and medium-sized firms in our sample, it is likely that more than three quarters of the firms have no non-executive director at all.15 The pay measure is the average total remuneration of all board members, so it includes the total remuneration that executive directors receive for their executive and board activities, and the remuneration associated with being a member of the board for non-executive directors.

Finally, we use the average wage in the firm, constructed as total wages paid divided by the total number of employees.16 The density of information on these three compensation variables is not constant. For the variable covering the highest paid director there is an average of 2.1 observations per firm, while for the variables on average executive pay and average wages there are means of 3.7 and 4.1 observations per firm, respectively. We exclude from the sample firms with fewer than 5 employees, for which CEOs and directors are hardly comparable with the rest of the sample. We also drop observations where the pay variable is zero, because this appears to come from mis-coding. Table 2 contains the summary statistics of the relevant variables.

The performance measure used is earnings before interest and taxes. Most of the firms in the sample are not publicly traded. This has the advantage that it is a very broad sample of firms, representative of the whole economy. It also implies that one cannot use stock-market-based performance measures. Much of the existing literature focuses on executive compensation in publicly traded companies and uses stock market returns as the measure of performance. The fact that the vast majority of our firms are not listed on the stock market implies that the only performance measure we can use is accounting based. Existing research supports the use of accounting profits as a relevant measure of performance when examining compensation packages (Bushman and Smith, 2001).

To allow for nonlinearities (such as minimum profit thresholds to qualify for a bonus, or caps), we also include profits squared in the regressions. Size is measured as the logarithm of total assets. Year dummies, firm fixed effects and a sector-specific time trend (at 3-digit SIC) are also included in all the regressions. All monetary variables are in constant 1987 pounds.

The measures of openness are import penetration and the export share of output, measured at the sector level (defined by the 3-digit SIC classification) as proportions of total output plus net imports and of total output, respectively. Since openness itself may be endogenous to changes in the exchange rate, the measures of openness are defined at the sector level as the average openness in the years before 1996 (1993 to 1995), which is kept constant for the whole sample.17
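A simplified sketch of how a specification along these lines might be estimated, using a synthetic firm-year panel with assumed column names; the actual model described above also includes firm fixed effects, year dummies, size and sector-specific trends, which are omitted here for brevity.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-year panel standing in for the data described above; the
# column names and the data-generating process are illustrative assumptions.
rng = np.random.default_rng(1)
rows = []
for firm in range(200):
    openness = rng.uniform(0, 1)             # pre-1996 sector openness, held fixed
    for year in range(1993, 2000):
        profits = rng.normal(100, 30)
        post96 = int(year >= 1996)
        # pay responds more strongly to profits after 1996 in more open sectors
        pay = 50 + (0.2 + 0.1 * post96 * openness) * profits + rng.normal(0, 5)
        rows.append((firm, year, openness, post96, profits, pay))
df = pd.DataFrame(rows, columns=["firm", "year", "openness", "post96", "profits", "pay"])
df["profits_sq"] = df["profits"] ** 2

# Stripped-down difference-in-differences: the coefficient on
# profits:post96:openness captures the change in pay-performance sensitivity.
model = smf.ols("pay ~ profits * post96 * openness + profits_sq", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(result.params.filter(like="profits"))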

Finally, the distribution of total pay is highly skewed and contains several extreme values in the upper tail. For this reason we eliminate as outliers observations in which the pay variable exceeds the 99th percentile of the sample.18
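The sample restrictions described above (dropping tiny firms, zero-pay observations and the top 1% of pay) can be sketched as follows, with assumed column names and invented numbers:

import pandas as pd

# Illustrative data only; column names are assumptions, not the paper's.
df = pd.DataFrame({
    "employees": [3, 120, 45, 800, 60, 10],
    "pay":       [0.0, 35.0, 28.0, 950.0, 31.0, 26.0],
})

df = df[df["employees"] >= 5]            # firms with fewer than 5 employees excluded
df = df[df["pay"] > 0]                   # zero pay treated as mis-coded
cutoff = df["pay"].quantile(0.99)        # 99th percentile of the remaining sample
df = df[df["pay"] <= cutoff]             # trim extreme values in the upper tail
print(df)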

Download full text paper via ziddu


Wednesday, November 18, 2009

How Costly is Diversity? Affirmative Action in Light of Gender Differences in Competitiveness

Abstract

Recent research documents that while men are eager to compete, women often shy away from competitive environments. A consequence is that few women enter and win competitions. Using experimental methods we examine how affirmative action affects competitive entry. We find that when women are guaranteed equal representation among winners, more women and fewer men enter competitions, and the response exceeds that predicted by changes in the probability of winning. An explanation for this response is that under affirmative action the probability of winning depends not only on one’s rank relative to other group members, but also on one’s rank within gender. Both beliefs about rank and attitudes towards competition change when moving to a more gender-specific competition. The changes in competitive entry have important implications when assessing the costs of affirmative action. Based on ex-ante tournament entry, affirmative action is predicted to lower the performance requirement for women and thus result in reverse discrimination towards men. Interestingly, this need not be the outcome when competitive entry is not payoff-maximizing. The response in entry implies that it may not be necessary to lower the performance requirement for women to achieve a more diverse set of winners.

Despite decades of striving for gender equality, large differences still remain between men and women in the labor market. Perhaps most noteworthy is the gender segregation across different types of jobs. While there is substantial horizontal segregation, with women more likely to hold clerical or nurturing jobs and men more visible in manufacturing, the vertical segregation within a sector is particularly striking (Weeden, 2004; Grusky and England, 2004; Ander, 1998). Across fields, men are disproportionately allocated to professional and managerial occupations. In a large sample of US firms, Bertrand and Hallock (2001) show that women account for only 2.5 percent of the five highest paid executives.1 While it may be argued that such segregation is a result of past history, and that these differences will diminish over time, it is noteworthy that women are underrepresented among the people who have the minimum training frequently required for senior management. Only 30 percent of students at top-tier business schools are women, and, relative to their male counterparts, female MBAs are more likely to work in the non-profit sector, work part time, or drop out of the work force entirely.2

It is commonly argued that discrimination, preference differences for child rearing, and ability differences can explain the absence of women in upper level management.3 Recent research suggests that an additional explanation is that women are more reluctant to put themselves in a position where they have to compete against others (see, e.g., Gneezy and Rustichini, 2005; Gupta, Poulsen and Villeval, 2005; and Niederle and Vesterlund, 2007, henceforth NV).4 For example, NV examine compensation choices in an environment where men and women are equally good at competing. They find that the majority of men select the competitive tournament whereas the majority of women select the non-competitive piece rate. While low-ability men are found to compete too much, high-ability women compete too little, and few women succeed in and win the tournament.

From the firm’s perspective it is particularly costly if the upper tail of the performance distribution does not enter competitions for jobs or promotions. As explained by B. Joseph White, president of the University of Illinois, “Getting more women into MBA programs means better access to the total talent pool for business.”5 An additional argument for increasing the number of women in top managerial positions is that diversity in and of itself may benefit the firm.6 Indeed, US corporations are concerned by their inability to retain and recruit women, and they are increasingly developing programs to increase the number of women employees.7

When instituting programs to alter the gender composition of certain jobs it is of course important that we understand how these programs influence behavior. To begin this process, we use experiments to investigate how affirmative action may affect participants’ willingness to compete. Specifically, we consider a quota system which requires that, out of two winners of a tournament, at least one must be a woman.8 We examine the consequences such a system may have on the individual’s decision to compete and thereby on the resulting gender composition of the applicant pool. Accounting for this response, we ask how costly it is to ensure that women are equally represented among those who win competitions. In particular, how much lower will the performance threshold be for women? How many better-performing men will have to be passed over to hire a woman? To what extent will reverse discrimination arise? These questions are particularly interesting in light of the non-payoff-maximizing tournament-entry decisions documented by NV.
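To fix ideas, the quota rule can be illustrated with a toy simulation. The sketch below assumes groups of three men and three women with two winners per group; the group size, the performance process, and the tie handling are assumptions of the sketch, not the experimental design itself. It shows how, under the quota, winning depends on one’s rank within gender as well as on one’s overall rank.

```python
# Toy illustration of a quota rule: two winners per group, at least one a woman.
# Group composition and scores are placeholders, not the experiment's parameters.
import random

def pick_winners(group):
    """group: list of (name, gender, score); returns the two winners under the quota."""
    ranked = sorted(group, key=lambda m: m[2], reverse=True)
    top_two = ranked[:2]
    if any(g == "F" for _, g, _ in top_two):
        return top_two                      # quota already satisfied
    best_woman = next((m for m in ranked if m[1] == "F"), None)
    if best_woman is None:
        return top_two                      # no woman entered; quota cannot bind
    return [ranked[0], best_woman]          # replace second-best man with best woman

# One simulated group of three men and three women with random scores.
group = [(f"M{i}", "M", random.gauss(10, 2)) for i in range(3)] + \
        [(f"F{i}", "F", random.gauss(10, 2)) for i in range(3)]
print(pick_winners(group))
```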

Download full text article via ziddu


Sunday, November 15, 2009

Testing Calibrated General Equilibrium Models

Abstract

This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models, the details of the procedure, and the advantages and disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and to obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature, i.e. whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second-order properties of aggregate saving and aggregate investment in an open economy.

The task of this chapter was to illustrate how simulation techniques can be used to evaluate the quality of a model's approximation to the data, where the basic theoretical model design is one which fits into what we call a calibration exercise. In section 2 we first provide a definition of what calibration is and then describe in detail the steps needed to generate time series from the model and to select relevant statistics of actual and simulated data. In section 3 we review four different formal evaluation approaches recently suggested in the literature, comparing and contrasting them on the basis of what type of variability they use to judge the closeness of the model's approximation to the data. In section 4 we describe how to undertake policy analysis with models which have been calibrated and evaluated along the lines discussed in the previous two sections. Section 5 presents a concrete example, borrowed from Baxter and Crucini (1993), where we design four different simulation-based statistics which allow us to shed some light on the quality of the model's approximation to the data, in particular on whether the model is able to reproduce the main features of the spectral density matrix of saving and investment for the US and Europe at business cycle frequencies. We show that, consistent with Baxter and Crucini's claims, the model qualitatively produces a high coherence of saving and investment at business cycle frequencies in the two continental blocs, but it also has the tendency to generate a highly skewed simulated distribution for the coherence of the two variables. We also show that the model is less successful in accounting for the volatility features of US and European saving and investment at business cycle frequencies and that taking into account parameter uncertainty helps in certain cases to bring the properties of simulated data closer to those of the actual data.
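As a purely illustrative sketch of one ingredient of such an exercise, the coherence of saving and investment over business cycle frequencies can be computed from a simulated sample as below. The series here are placeholders; in the actual evaluation they would come from repeated simulations of the calibrated model, and the resulting statistics would be compared with their data counterparts. The 6-32 quarter band is a conventional choice, not necessarily the one adopted in the paper.

```python
# Average coherence of two (placeholder) simulated series over business cycle frequencies.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
T = 200                                   # length of a simulated quarterly sample
common = rng.standard_normal(T).cumsum()  # placeholder common component
saving = common + rng.standard_normal(T)
investment = common + rng.standard_normal(T)

freqs, coh = coherence(saving, investment, fs=1.0, nperseg=64)
bc_band = (freqs >= 1 / 32) & (freqs <= 1 / 6)   # periods of 6 to 32 quarters
print("average coherence at business cycle frequencies:", coh[bc_band].mean())
```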

Overall, the example shows that simulation-based evaluation techniques are very useful for judging the quality of the approximation of fully specified general equilibrium models to the data and may uncover features of the model which are left hidden by simpler but more standard informal evaluation techniques.

Download full text paper via ziddu


Thursday, November 12, 2009

FOREIGN OWNERSHIP AND PRODUCTIVITY DYNAMICS

Abstract

In analyzing the distinctive contributions of foreign subsidiaries and domestic firms to productivity growth in aggregate Belgian manufacturing, this paper shows that foreign ownership is an important source of firm heterogeneity affecting productivity dynamics. Foreign firms have contributed a disproportionately large share of aggregate productivity growth, but, more importantly, reallocation processes differ significantly between the groups of foreign subsidiaries and domestic firms.

In recent years a large number of studies have demonstrated the importance of firm heterogeneity for productivity growth, in contrast to earlier growth accounting that traditionally started from the presumption of an aggregate production function based on the representative firm (Bartelsman and Doms (2000)). Theoretical models of firm dynamics have formalized the concept of firm heterogeneity and discussed the effects of learning, innovation, investment, entry and exit on firms’ productivity level and evolution (Jovanovic (1982), Pakes and Ericson (1987), Hopenhayn (1992)). Accordingly, recent empirical work has decomposed aggregate productivity into the effects of intra-firm productivity changes, market share reallocations among firms with different levels of productivity, and changes in the population of firms. A common finding of this line of research is that large-scale ongoing reallocation of outputs and inputs across individual firms, including the entry and exit of firms, contributes to a large extent to productivity growth in industries and countries. Additionally, it is found that this reallocation mainly reflects within- rather than between-industry reallocation (Baily et al (1992), Bartelsman and Dhrymes (1994), Griliches and Regev (1995), Olley and Pakes (1996), Haltiwanger (1997), Foster et al (1998), Levinsohn and Petrin (1999)).
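For concreteness, a decomposition in this spirit can be sketched as follows, splitting the change in share-weighted aggregate productivity between two periods into within-firm, between-firm (reallocation), cross, entry and exit terms. The column names and the particular variant are assumptions for illustration; running the same function separately on foreign subsidiaries and on domestic firms would give their distinctive contributions.

```python
# Sketch of a within/between/cross/entry/exit productivity decomposition.
# Inputs: two firm-level DataFrames with columns firm, share, prod (assumed names).
import pandas as pd

def decompose(df0, df1):
    """Decompose the change in share-weighted aggregate productivity between two periods."""
    agg0 = (df0["share"] * df0["prod"]).sum()          # base-period aggregate productivity
    merged = df0.merge(df1, on="firm", suffixes=("_0", "_1"), how="outer", indicator=True)
    cont = merged[merged["_merge"] == "both"]
    entrants = merged[merged["_merge"] == "right_only"]
    exiters = merged[merged["_merge"] == "left_only"]

    within = (cont["share_0"] * (cont["prod_1"] - cont["prod_0"])).sum()
    between = ((cont["prod_0"] - agg0) * (cont["share_1"] - cont["share_0"])).sum()
    cross = ((cont["prod_1"] - cont["prod_0"]) * (cont["share_1"] - cont["share_0"])).sum()
    entry = (entrants["share_1"] * (entrants["prod_1"] - agg0)).sum()
    exit_ = -(exiters["share_0"] * (exiters["prod_0"] - agg0)).sum()

    # The five terms sum exactly to the change in share-weighted aggregate productivity.
    return {"within": within, "between": between, "cross": cross,
            "entry": entry, "exit": exit_}

# Example: call decompose() separately on the foreign and domestic subsets to compare groups.
```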

Alternative decompositions have been used to assess the contributions of different categories of firms to aggregate productivity growth (Baldwin (1995), Baily et al (1996)); surprisingly, however, the distinctive contributions of foreign firms and domestic firms have not yet been analyzed. Productivity dynamics within the groups of foreign firms and domestic firms can be expected to differ, given that foreign subsidiaries in host countries are typically found to be more productive than domestic firms (Dunning (1993), Caves (1996)), and that firm dynamics, especially entry and exit, are reported to differ considerably between foreign and domestic firms (Siegfried and Evans (1994), Geroski (1995)). This paper introduces foreign ownership as an additional source of firm heterogeneity in the analysis of productivity growth and illustrates its importance with reference to a small open country that has attracted large inflows of foreign direct investment.

Download full text paper via ziddu


Monday, November 9, 2009

THE ELUSIVE COSTS AND THE IMMATERIAL GAINS OF FISCAL CONSTRAINTS

Abstract

We study whether and how fiscal restrictions alter the business cycle features of macrovariables for a sample of 48 US states. We also examine the 'typical' transmission properties of fiscal disturbances and the implied fiscal rules of states with different fiscal restrictions. Fiscal constraints are characterized with a number of indicators. There are similarities in the second moments of macrovariables and in the transmission properties of fiscal shocks across states with different fiscal constraints. The cyclical response of expenditure differs in size and sometimes in sign, but heterogeneity within groups makes point estimates statistically insignificant. Creative budget accounting is responsible for this pattern. Implications for the design of fiscal rules and the reform of the Stability and Growth Pact are discussed.

Why is it that fiscal constraints appear to make so little macroeconomic difference? We show that the main reason is the ability of state governments to work around the rules and transfer expenditure items either to less restricted accounts or to less constrained portions of the government. In addition, rainy day funds, which are available to all state governments by the end of the sample, effectively allow states to limit current expenditure cuts at times when the constraints become binding. Given that constraints apply only to a portion of the total budget, that no formal provision for the enforcement of the constraints exists, and that rainy day funds play a buffer-stock role, it is not surprising to find that tight fiscal constraints do not statistically alter the magnitude and the nature of macroeconomic fluctuations.

Our results have important implications for the design of fiscal restrictions. If constraints are imposed to keep government behavior under control, tight restrictions may be the wrong way to go, since they simply lead to more creative accounting practices, unless they come together with clearly stated and easily verifiable enforcement requirements. That is to say, tight fiscal constraints are neither a necessary nor a sufficient condition for good government performance. On the other hand, if constraints are imposed to reduce default probabilities or to limit the effects that local spending has on average area-wide inflation, and given that their negative macroeconomic effects appear to be marginal, tight constraints with some carefully selected escape route could be preferable.

Is there a lesson to be learned from these results for the reform of the SGP? While Canova and Pappa (2003) have shown that the response of macroeconomic variables to fiscal shocks in the two monetary unions shares a number of important similarities, care should be exercised in using our evidence for that purpose. There are at least three reasons which make most of our conclusions dubious in a European environment. First, US state labor markets are sufficiently flexible: people move across states, and other margins (such as relative prices) adjust quickly to absorb macroeconomic shocks. Europe is different in this respect, and the imposition of tighter fiscal restrictions in the EMU may have completely different effects. Second, since fiscal constraints in the US almost always exclude capital account expenditures, the conclusions we reach are not necessarily applicable to situations where non-golden-rule types of constraints are in place. Third, social security, medical and welfare expenditures constitute the largest portion of the current account expenditure of European countries, while they are a tiny portion of the expenditure of US states (less than four percent). Given that such expenditures are inflexible and, to a large extent, acyclical, direct extension of our conclusions to the European arena should be avoided. Nevertheless, we would like to stress that, while the presence of strict fiscal constraints does not make an important difference for cyclical fluctuations, some fiscal restriction is present in all but one US state. Therefore, none of our conclusions implies that legislated fiscal restraint of some kind should be abandoned.

The rest of the paper is organized as follows. The next section describes the empirical model, explains our methodology and compares it with those typically used in the literature. Section 3 presents the procedure used to identify fiscal shocks and to construct fiscal rules. Section 4 describes how indicators capturing deficit and debt restrictions are constructed. Section 5 presents the results and section 6 compares our results to the existing literature. Section 7 concludes.

Download full text paper via ziddu