Entries tagged with “economics”.


Nick Kristof’s piece decrying the distance between academia and the rest of society has, predictably, triggered a screaming firestorm in academia. That’s what you get when you poke the (over)educated, seriously literate beast. A lot of the criticism is very well written and thought out (outstanding examples here and here). But I fear that Kristof’s central message, that society needs a more engaged academia, is getting lost here. My main problem was not that Kristof was arguing for a more engaged academy, but that his prescriptions for how to bring about that engagement did not address the real incentives and barriers that academics negotiate when they try to engage with public debate.

So, in the interest of constructive criticism, I have some suggestions for things that Mr. Kristof might consider looking into – investigating these challenges would highlight the real, and often absurdly unnecessary, barriers between the academy and society. This is obviously just a tiny sample of potential topics, drawn from my own experiences in a top-tier department in a large, Research-1 state institution.

  1. Examine the system by which departments are “ranked” in the United States: The National Research Council (NRC) ranks departments at (not so) regular intervals, creating a sort of BCS ranking of departments, with about the same amount of accuracy and certainty. By and large, academics know these rankings are garbage, but administrations love to trot them out to demonstrate the excellence of their institution, and therefore justify the institutional budget/tuition/etc. But here’s a fun fact: if you dig into what counts in the rankings, you can quickly see why university administrations don’t necessarily care for academic outreach. For example, did you know that authoring an NRC report (which is seriously prestigious) DOES NOT COUNT AS A MEASURABLE PUBLICATION IN THE NRC RANKINGS? I know this because my department ran into this problem the last time around, with at least three members of our faculty losing multiple publications because the NRC did not count ITS OWN PUBLICATIONS. If those pubs were excluded, you can imagine that basically all reports in all contexts were excluded. So if administrations love rankings, and rankings hate outreach, you’re not going to get much outreach.
  2. Consider how academic evaluation’s over-focus on the number of articles produced creates less interesting, more arcane academic outputs: The production of knowledge in academia has, for some time, been driven by expectations of ever-greater output (as measured in research dollars and publications) with less input (fewer faculty members). These expectations govern everything from the evaluation of departments to individual tenure decisions. As a result, the publication requirements for tenure have become ever-more challenging, with expectations for the number of publications produced rising so steeply that many who recently got tenure might have published more articles than their very senior colleagues published to become full professors even two decades ago. This is driven by everything from departmental-level politics to the NRC rankings themselves, though I suspect a strong trickle-down effect here. In any case, this has created a crisis of knowledge production in which professors are incentivized to produce what my colleague Carl Dahlman once called the minimum publishable unit (MPU). Because expectations of performance are more and more heavily based on quantitative output (thanks, NRC!), as opposed to the quality of that output, it makes sense for faculty to shy away from “big question” articles that might chew up a lot of their data and interesting ideas, and instead package that same set of ideas as two or three smaller, much more arcane publications. This is a very real pressure: when I put out my retheorization of livelihoods approaches a year ago, more than one colleague suggested that I would have been better off cutting its 15,000 words into two 8,500-word pieces, as it would have counted for more in my annual evaluation. Nothing has driven us toward a proliferation of small, specialized journals carrying tiny, arcane articles quite like this drive for quantification and greater production.
Undoing this really awful trend would help a lot, as academics would be freed up to think big thoughts again, both in journals and in other fora. One way to help: publicize the alt-metrics movement (start at the LSE Impact Blog and work from there) that attempts to move beyond a system of academic assessment that reflects a long-dead era of publication and communication.
  3. Focus on how for-profit academic publishers wall off knowledge from the public: Academics must publish to survive professionally, and the best journals in nearly every field are the last profitable properties for a number of publishing houses. These publishers benefit from free labor on the part of authors, reviewers, and the nearly-free labor of editors, and often the subsidy of taxpayer-funded research, yet charge exorbitant amounts for subscriptions to their journals – in the case of public universities, bleeding the taxpayer once again. Academics are absolutely responsible for this situation – after all, we collectively define what the good journals are, and as I’ve argued before we could change our minds if we wanted to. But academia takes time to change, and could use a push. Where is the push from the federal government to demand that the results of taxpayer-funded research be made available to the taxpayers immediately? What happened to the initial push from the Obama White House on this issue? It seems to be a topic ripe for a good investigative journalist.

And, for good measure, an interesting trend that will likely lead to a more engaged academia:

  1. The shift in acceptable academic funding: Until very recently, academic grants from traditional agencies like the National Science Foundation or the National Institutes of Health were given exalted status, with all other forms of funding occupying lesser rungs on the great chain of funding. Thus, to get tenure, many (biophysical science/social science) academics really had to land one of these grants. The programs associated with these grants very often rewarded pure research and actively discouraged “applied” work, and even today the NSF’s requirements for “impact” are fairly superficial. Contracts were very second-tier, and often not taken seriously in one’s academic review. Now, thanks to funding crunches in both universities and the funding agencies, any research-looking dollars have started looking good to university administrations, and contracts are more and more being evaluated alongside more traditional academic grants. There is a tremendous opportunity here to engage academia through this mechanism. [Full disclosure: I’ve been funded in the past by NSF and by the National Geographic Society, but today roughly 90% of my funding comes directly or indirectly from development donors like USAID in the form of contracts or grants]

This is hardly a comprehensive list of things on which a serious journalist could shed light, and perhaps help leverage change. I’m just typing quickly here. If you have other ideas for things that journalists should be examining, please leave them in the comments or email them to me: ed at edwardrcarr.com   I will append them to this post as they come in, attributing them (or not, depending on the wishes of contributors) in the post.

Edit 17 February: If you want to move beyond criticism (and snark), join me in thinking about things that Mr. Kristof should look into/write about if he really wants a more engaged academia here.

In his Saturday column, Nick Kristof joins a long line of people, academics and otherwise, who decry the distance between academia and society. While I greatly appreciate his call to engage more with society and its questions (something I think I embody in my own career), I found his column to be riddled with so many misunderstandings/misrepresentations of academia that, in the end, he contributes nothing to the conversation.

What issues, you ask?

1) He misdiagnoses the problem

If you read the column quickly, it seems that Kristof blames academic culture for the lack of public engagement he decries. This, of course, ignores the real problem, which is more accurately diagnosed by Will McCants’s (oddly marginalized) quotes in the column. Sure, there are academics out there with no interest in public engagement. And that is fine, by the way – people can make their own choices about what they do and why. But to suggest that all of academia is governed by a culture that rejects public engagement deeply misrepresents the problem. The problem is the academic rewards system, which currently gives us job security and rewards for publishing in academic journals, and nearly nothing for public outreach. To quote McCants:

If the sine qua non for academic success is peer-reviewed publications, then academics who ‘waste their time’ writing for the masses will be penalized.

This is not a problem of academic culture, this is a problem of university management – administrations decide who gets tenure, and on what standard. If university administrations decided to halve the number of articles required for tenure, and replaced that academic production with a demand that professors write a certain number of op-eds, run blogs with a certain number of monthly visitors, or participate in policy development processes, I assure you the world would be overrun with academic engagement. So if you want more engagement, go holler at some university presidents and provosts, and lay off the assistant professors.

2) Kristof takes aim at academic prose – but not really:

 …academics seeking tenure must encode their insights into turgid prose.

Well, yes. There is a lot of horrific prose in academia – but Kristof seems to suggest that crap writing is a requirement of academic work. It is not – I guarantee you that the best writers are generally cited a lot more than the worst. So Kristof has unfairly demonized academia as willfully holding the public at bay with its crappy writing, which completely misdiagnoses the problem. The problem is that the vast majority of academics aren’t trained in writing (beyond a freshman composition course), that there is no money in academia for the editorial staff that professional writers (and columnists) rely on to clean up their own turgid prose, and that we all tend to write like what we read. Because academic prose is mostly terrible, people who read it tend to write terrible prose. This is why I am always reading short fiction (Pushcart Prize, Best American Short Stories, etc.) alongside my work reading…

If you want better academic prose, budget for the same editorial support, say, that the New York Times or the New Yorker provide for their writers. I assure you, academic writing would be fantastic almost immediately.

Side note: Kristof implicitly sets academic writing against all other sources of writing, which leads me to wonder if he’s ever read a policy document. I helped author one, and I read many, while at USAID. The prose was generally horrific…

3) His implicit prescription for more engaged writing is a disaster

Kristof notes that “In the late 1930s and early 1940s, one-fifth of articles in The American Political Science Review focused on policy prescriptions; at last count, the share was down to 0.3 percent.” In short, he sees engagement as prescription. Which is exactly the wrong way to go about it. I have served as a policy advisor to a political appointee. I can assure you that handing a political appointee a prescription is no guarantee they will adopt it. Indeed, I think they are probably less likely to adopt it because it isn’t their idea. Policy prescriptions preclude ownership of the conclusion and needed responses by the policymaker. Better to lay out clear evidence for the causes of particular challenges, or the impacts of different decisions. Does academia do enough of this? Probably not. But for heaven’s sake, don’t start writing prescriptive pieces. All that will do is perpetuate our marginality through other means.

4) He confuses causes and effects in his argument that political diversity produces greater societal impact.

Arguing that the greater public engagement of economists is about their political diversity requires ignoring most of the 20th century history of thought within which disciplines took shape. Just as geography became a massive discipline in England and other countries with large colonial holdings because of the ways that discipline fit into national needs, so economics became massive here in the US in response to various needs at different times that were captured (for better or for worse) by economics. I would argue that the political diversity in economics is a product of its engagement with the political sphere, as people realized that economic thought could shift/drive political agendas…not the other way around.

5) There is a large movement underway in academia to rethink “impact”.

There is too much under this heading to cover in a single post. But go visit the LSE Impact Blog to see the diversity of efforts to measure academic impact currently in play – everything from rethinking traditional journal metrics to looking at professors’ reach on Twitter. Mr. Kristof is about 4 years late to this argument.

In short, Kristof has recognized a problem that has been discussed…forever, by an awful lot of people. But he clearly has no idea where the problem comes from, and therefore offers nothing of use when it comes to solutions. All this column does is perpetuate several misunderstandings of academia that have contributed to its marginalization – which seems to be the opposite of the column’s intent.

There is a lot of hue and cry about the issue of loss and damage at the current Conference of the Parties (COP-19). For those unfamiliar with the topic, in a nutshell the loss and damage discussion is one of attributing particular events and their impacts on poorer countries to climate variability and change that has, to this point, been largely driven by activities in the wealthier countries. At a basic level, this question makes sense and is, in the end, inevitable. Those who have contributed the most (and by the most, I mean nearly all) to the anthropogenic component of climate change are not experiencing the same level of impact from that climate change – either because they see fewer extreme events, more attenuated long-term trends, or simply have substantially greater capacity to manage individual events and adapt to longer-term changes. This is fundamentally unfair. But it is also a development challenge.

The more I work in this field, and the more I think about it, the more I am convinced that the future of development lies in creating the strong, stable foundations upon which individuals can innovate in locally-appropriate ways. These foundations are often tenuous in poorer countries, and the impacts of climate change and variability (mostly variability right now) certainly do not help. Most agrarian livelihoods systems I have worked with in sub-Saharan Africa are massively overbuilt to manage climate extremes (i.e. flood or drought) that, while infrequent, can be catastrophic. The result: in “good” or “normal” years, farmers are hedging away very significant portions of their agricultural production, through such decisions as the siting of farms, the choice of crops, or the choice of varieties. I’ve done a back-of-the-envelope calculation of this cost of hedging in the communities I’ve worked with in Ghana, and the range is between 6% and 22% of total agricultural production each year. That is, some of these farmers are losing 22% of their total production because they are unnecessarily siting their fields in places that will perform poorly in all but the most extreme (dry or wet) years. When you are living on the local equivalent of $1.25/day, this is a massive hit to one’s income, and without question a huge barrier to transformative local innovations. Finding ways to help minimize the cost of hedging, or the need for hedging, is critical to development in many parts of the Global South.
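The back-of-the-envelope logic behind that hedging cost is simple enough to sketch in code. The numbers below are purely illustrative (they are not the Ghana figures), and the `expected_yield` function is a hypothetical simplification – the point is just to show how forgone normal-year production adds up when fields are sited for the extreme year:

```python
# Illustrative sketch of the cost of hedging in field siting.
# All figures are hypothetical, NOT drawn from the Ghana data.

def expected_yield(normal_yield, extreme_yield, p_extreme):
    """Expected annual yield given a probability of an extreme (flood/drought) year."""
    return (1 - p_extreme) * normal_yield + p_extreme * extreme_yield

p_extreme = 0.1  # assume one extreme year in ten

# Best land: high yield in normal years, near-total failure in extremes.
best_site = expected_yield(normal_yield=100, extreme_yield=10, p_extreme=p_extreme)

# Hedged land: lower yield in normal years, but it survives the extremes.
hedged_site = expected_yield(normal_yield=80, extreme_yield=60, p_extreme=p_extreme)

# The "cost of hedging" is the production forgone in the normal years,
# which arrive nine years out of ten under these assumptions.
cost_of_hedging = (100 - 80) / 100

print(f"expected yield, best site:   {best_site:.1f}")
print(f"expected yield, hedged site: {hedged_site:.1f}")
print(f"normal-year cost of hedging: {cost_of_hedging:.0%}")
```

With these made-up numbers the hedged field gives up 20% of normal-year production – squarely inside the 6–22% range estimated above – in exchange for a much softer landing in the one-in-ten catastrophic year.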

Therefore, a stream of finance attached to loss and damage could be a really big deal for those in the Global South, something perhaps as important as debt relief was to the MDRI countries. We need to sort out loss and damage. But NOT NOW.

Why not? Simply put, we don’t have the faintest idea what we are negotiating right now. The attribution of particular events to anthropogenic climate change and variability is inordinately difficult (it is somewhat easier for long-term trends, but this has its own problem – it takes decades to establish the trend). However, for loss and damage to work, we need this attribution, as it assigns responsibility for particular events and their costs to those who caused those events and costs. Also, we need means of measuring the actual costs of such events and trends – and we don’t have that locked down yet, either. This is both a technical and a political question: what we can measure, and how we should measure it, are technical questions that remain unanswered. But what we should measure is a political question – just as certain economic stimuli have multiplier effects through an economy, disasters and long-term degradation have radiating “multipliers” through economies. Where do we stop counting the losses from an event or trend? We don’t have an answer to that, in part because we don’t yet have attribution, nor do we have the tools to measure costs even if we had attribution.

So, negotiating loss and damage now is a terrible idea. Rich countries could find themselves facing very large bills without the empirical evidence to justify the size of the bills or their responsibility for paying them – which will make such bills political nonstarters in rich countries. In short, this process has to deliver a bill that everyone agrees should be paid, and that the rich countries agree can be paid. At the same time, poorer countries need to be careful here – because we don’t have strong attribution or measurements of costs, there is a real risk that they could negotiate for too little – not enough to actually invest in the infrastructure and processes needed to ensure a strong foundation for local innovation. Either outcome would be a disaster. And these are the most likely outcomes of any negotiation conducted blindly.

I’m glad loss and damage is on the table. I hope that more smart people start looking into it in their research and programs, and that we rapidly build an evidence base for attribution and costing. That, however, will take real investment by the richest countries (who can afford it), and that investment has not been forthcoming.  If we should be negotiating for anything right now, it should be for funds to push the frontiers of our knowledge of attribution and costing so that we can get to the table with evidence as soon as humanly possible.

Update: 11/22: So, after seeing Tom Murphy’s Storify of the twitter exchange, it is now clear that Sachs was on fire – the man was engaged in several conversations at once along the lines below…and he seems to have been responding to all of them pretty coherently, and in real time. I admit to being impressed (No, seriously, click on the Storify link there and just scroll. It is boggling). So recognize that what you see below is what I saw in my feed (his other conversations were with people I don’t follow, so I didn’t realize they were ongoing). Still, glad to get geography’s foot back in the door…

So, quite by surprise, I found myself on the end of an extended twitter exchange with Jeff Sachs.  I’ve hassled him via twitter before, and never had a response. So, I was a bit taken aback to see my feed light up about 30 seconds after I tweeted with @JeffDSachs at the front end! To give Sachs credit, he stayed quite engaged and did seem to be taking on some of my points. Granted, 140 characters is hardly enough to really convey the issues at hand, but I did the best I could to represent contemporary human geography. Y’all be the judge – this is the feed, slightly rejiggered to clarify that at times Sachs and I were crossing each other’s messages – he was clearly responding to a previous message sometimes when he tweeted back after one of my tweets. Samuel Danthine was also part of the conversation, and I kept him in the timeline as it seems he and I were coming from the same place:

I just witnessed a fascinating twitter exchange that beautifully summarizes the divide I am trying to bridge in my work and career.  Ricardo Fuentes-Nieva, the head of research at Oxfam GB, after seeing a post on GDP tweeted by Tim Harford (note: not written by Harford), tweeted the following:

To which Harford tweeted back:

This odd standoff between two intelligent, interesting thinkers is easily explained.  Bluntly, Harford’s point is academic, and from that perspective mostly true.  Contemporary academic thinking on development has more or less moved beyond this question.  However, to say that it “never has been” an important question ignores the history of development, where there is little question that in the 50s and 60s there was significant conflation of GDP and well-being.

But at the same time, Harford’s response is deeply naive, at least in the context of development policy and implementation.  The academic literature has little to do with the policy and practice of development (sadly).  After two years working for a donor, I can assure Tim and anyone else reading this that Ricardo’s point remains deeply relevant. There are plenty of people who are implicitly or explicitly basing policy decisions and program designs on precisely the assumption that GDP growth improves well-being. To dismiss this point is to miss the entire point of why we spend our time thinking about these issues – we can have all the arguments we want amongst ourselves, and turn up our noses at arguments that are clearly passé in our world…but if we ignore the reality of these arguments in the policy and practice world, our thinking and arguing will be of little consequence.

I suppose it is worth noting, in full disclosure, that I found the post Harford tweeted to be a remarkably facile justification for continuing to focus on GDP growth. But it is Saturday morning, and I would rather play with my kids than beat that horse…

Marc Bellemare at Duke has been using Delivering Development in his development seminar this semester.  On Friday, he was kind enough to blog a bit about one of the things he found interesting in the book: the finding that women were more productive than men on a per-hectare basis.  As Marc notes, this runs contrary to most assumptions in the agricultural/development economics literature, especially some rather famous work by Chris Udry:

Whereas one would expect men and women to be equally productive on their respective plots within the household, Udry finds that in Burkina Faso, men are more productive than women at the margin when controlling for a host of confounding factors.

This is an important finding, as it speaks to our understanding of inefficiency in household production . . . which, as you might imagine given Udry’s findings, is often assumed to be a problem of men farming too little and women farming a bit too much land.  So Marc was a bit taken aback to read that in coastal Ghana the situation is actually reversed – women are more productive than men per unit area of land, and therefore to achieve optimal distributions of agricultural resources (read: land) in these households we would actually have to shift land out of men’s production into women’s production.

I knew that this finding ran contrary to Udry and some other folks, but I did not think it was that big a deal: Udry worked in the Sahel, which is quite a different environment and agroecology than coastal Ghana.  Further, he worked with folks of a totally different ethnicity engaged with different markets.  In short, I chalked his findings up to the convergence of any number of factors that had played out somewhat differently in my research context.  I certainly don’t see my findings as generalizable much beyond Akan-speaking peoples living in rural parts of Ghana . . .

All of that said, Marc points out that with regard to my findings:

Of course, this would need to be subjected to the proper empirical specification and to a battery of statistical tests . . .

Well, that is an interesting question.  So, a bit of transparency on my data (it is pretty transparent in my refereed pubs, but the book didn’t wade into all of that):

Weaknesses:

  • The data was gathered during the main rainy season, typically as the harvest was just starting to come in.  This required folks to make some degree of projection about the productivity of their fields at least a month, and often several months, into the future.
  • The income figures for each crop, and therefore for total agricultural productivity, were self-reported. I was not able to cross-check these reported figures by counting the actual amount of crop coming off each farm.
    • I also gathered information on expenses, and when I totaled up expenses and subtracted them from reported income, every household in the village was running in the red.  I know that is not true, having lived there for some 18 months of my life.
    • There is no doubt in my mind that production figures were underestimated, and expenses overestimated, in my data – this fits into patterns of income reporting among the Akan that are seen elsewhere in the literature.
    • Therefore, you cannot trust the reported figures as accurate absolute measures of farm productivity.

Strengths:

  • The data was replicated across three field seasons.  In the first two field seasons, I conducted all data collection with my research assistant.  However, in the final year of data collection, I led a team of four interviewers from the University of Cape Coast, who worked with local guides to identify farms and farmers to interview – in the last year, we interviewed every willing farmer in the village (nearly 100% of the population).
    • It turns out that my snowball sample of households in the first two years of data collection actually covered the entire universe of households operating under non-exceptional household circumstances (i.e. they are not samples, they are reports on the activities of the population).
      • In other words, you don’t have to ask about my sampling – there was no sampling.  I just described the activities of the entire relevant population in all three years.
      • This removes a lot of concerns people have about the size of my samples – some household strategies only had 7 or 8 households working with them in a given year, which makes statistical work a little tricky :)  Well, turns out there is no real need for stats, as this is everyone!
      • The only exception to this: female-headed households.  I grossly underinterviewed them in years 1 and 2 (inadvertently), and the women I did interview do not appear to be representative of all female-headed households.  I therefore can only make very limited claims about trends in these households.
    • Even with completely new interviewers who had no preconceived notions about the data, the income findings came in roughly the same as when I gathered the data. That’s replicability, folks! Well, at least as far as qualitative social science gets in a dynamic situation.
    • Though the data was gathered at only one point in the season, at that point farmers were already seeing how the first wave of the harvest was doing and could make reasonable projections about the rest of the harvest.

I’m probably forgetting other problems and answers . . . Marc will remind me, I’m sure!  In any case, though, Marc asks a really interesting question at the end of his post:

Assuming the finding holds, it would be interesting to compare the two countries given that Burkina Faso and Ghana share a border. Is the change in gender differences due to different institutions? Different crops?

The short answer, for now, has to be a really unsatisfying “I don’t know.”  Delivering Development lays out in relatively simple terms a really complex argument I have been building for some time about livelihoods: that they are motivated by and optimized with reference to a lot more than material outcomes.  The book builds a fairly simple explanation for how men balanced the need to remain in charge of their households with the need to feed and shelter those households . . . but I have elaborated on this in a piece in review at Development and Change.  I will send them an email to figure out where this is in review – they have been struggling mightily with reviewers (last I heard, they had gone through 13!?!) – and will put up a preprint as soon as I am able.  This is relevant here because I would need a lot more information about the Burkina setting to work through my new livelihoods framework before I could answer Marc’s question.

Stay tuned!


OK, a last thought on the development initiatives and markets thread: let’s leave the predictive markets thing aside for the moment, and get to what I think is a more serious question for development initiatives – do we use all the information we might to evaluate the likely impact of our programs?  I think a lot of folks misread the intent of my initial post – I was NOT suggesting we bet on mortality rates and other direct measures of project effectiveness.  That is something I could see as an academic exercise, but is way too morbid for my tastes, even in that setting.

But everyone who lunged in that direction seemed to miss the point that any major development initiative will, if it succeeds, have radiating impacts through different markets.  That is, a successful food security initiative will change harvest sizes of different crops, thereby influencing commodities markets.  A successful public health intervention might increase the size of the workforce, or its efficiency.  And so on.  My simple thought was that any fund investor worth his/her salt should be examining these initiatives and their expected outcomes to decide 1) if the initiative worked, what markets might be affected, how and when and 2) do they think the initiative will actually work.

If there is no movement around these initiatives, it seems to me that these two factors might be important – at the first step in this decision-making, investors might decide that in the event of a successful intervention, the markets affected might not be accessible or profitable, or the timeframe of any movement in the market might be so long as to make immediate response unnecessary.  Thus, we would see no market response to the announcement of an intervention.  At that point, it doesn’t matter if the intervention will work or not – that assessment never comes into the picture.

However, in at least some cases, I have to think that there are initiatives out there (in a world of rising food prices, I am a bit fixated on food security at the moment) that would affect significant markets, and not only at a national scale (where markets might be illiquid or otherwise inaccessible).  Take the case of cocoa and Cote d’Ivoire this past winter: the civil conflict in CIV cut off a significant amount of global supply, and futures markets got skittish over the further constriction of trade, driving cocoa prices upward.  This is a niche crop, heavily produced by only a few countries, but the price movement could have meant big dollars for a fund that correctly anticipated this trend.  Surely there are (or will be) food security initiatives that could similarly affect the overall supplies of and access to particular (perhaps niche) crops for entire regions, or even shift global availability/perception enough to shift commodities prices in much larger, more transparent markets in the short term. Don’t fixate on national markets for these initiatives – what about really big development movers that could affect global supplies of grain in an era where all the slack has been taken out of various global grain markets?  You can’t tell me that everyone at these trading desks is simply ignoring the food security world . . . surely they are at least assessing through step 1) above.  So if there is no market response to these initiatives, either the timeframe of movement is too distant to warrant interest, or the traders simply don’t think these initiatives will succeed enough to significantly influence the markets in which they trade.  Perhaps the price of oil and its impact on transport is much, much more important than increasing harvest size when it comes to shaping food commodities prices . . . in which case, it would probably be good for those designing food security initiatives to know this at the outset and address it in project design (for example by thinking about transportation issues as integral to the initiative).

Of course, there is option 3): traders have no idea what sorts of initiatives are out there, and are operating in ignorance of these potential large drivers.  This is entirely possible, but a bit hard to believe . . .



Lots of comments pouring in via Twitter regarding my earlier post on development initiatives and markets.  First, I found it interesting that readers went in two directions – they either took the post to be about prediction markets alone, or they caught the reference to hedge funds and realized that I was talking about “betting” in a much more general sense: that is, in the sense of hedge fund investment, which is really a set of (ideally) well-researched, carefully hedged bets on the direction of particular stocks, commodities, and sometimes whole segments of the market.
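To make that general sense of “betting” concrete, here is a toy sketch of a hedged (long/short) position – every price, position size, and the two hypothetical cocoa processors are invented for illustration: the fund goes long the firm it expects to outperform and shorts a correlated one, so market-wide moves largely cancel and the payoff rides on the relative view.

```python
# Toy long/short ("hedged") bet; all numbers here are hypothetical.

def pnl(entry_price, exit_price, shares):
    """Profit on a position; negative shares means a short."""
    return (exit_price - entry_price) * shares

# View: cocoa processor A will outperform correlated processor B.
long_a = pnl(entry_price=50.0, exit_price=55.0, shares=100)    # long A gains 500
short_b = pnl(entry_price=40.0, exit_price=42.0, shares=-100)  # short B loses 200

# Because the positions are paired, a sector-wide rally or slump
# largely nets out; what remains is the relative call on A vs. B.
print(long_a + short_b)  # prints 300.0
```

The same structure could, in principle, be applied to the initiatives discussed above: long the crop an intervention is expected to boost, short a substitute, so the bet pays off only if the initiative actually moves relative supply.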

For now, let’s take up the issue of prediction markets.  I love Bill Easterly’s response tweet, asking what development initiative I (or anyone else) would bet my own money on.  I think prediction markets are interesting tools.  They are hardly perfect – like other markets, they are subject to bubbles and manipulation – but there is some evidence to suggest that they do yield interesting information under the right conditions.  It would be interesting to set up parallel prediction markets, populate one with development professionals at agencies and NGOs, one with development academics, and one that blends the two, and then have them start to buy and sell the likelihood of success (as defined by the initiative, both in terms of outcomes and timeframe) for any number of development initiatives.  While I doubt these parallel markets would move in lockstep, I wonder if they would come to radically different assessments of these initiatives.  And we could examine how well they worked as predictive devices.  I’m pretty sure most academics would have started shorting the Millennium Village Project at its inception (academic paper here) . . . so what would the development blogosphere/twittersphere short today?  What would you go long on (that is, what would you hold in the expectation that it would succeed and rise in value)?  Have at it in the comments . . .
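One standard way to run the buying and selling in such a market is Hanson’s logarithmic market scoring rule (LMSR), an automated market maker widely used for prediction markets.  Here is a minimal sketch of a binary “initiative succeeds / initiative fails” contract – the liquidity parameter and the trade below are invented for illustration:

```python
import math

# Minimal LMSR market maker for a binary contract:
# "initiative succeeds" vs. "initiative fails".
# b is the liquidity parameter: higher b = prices move more slowly per trade.

class BinaryMarket:
    def __init__(self, b=100.0):
        self.b = b
        self.q_yes = 0.0  # outstanding "success" shares
        self.q_no = 0.0   # outstanding "failure" shares

    def cost(self, q_yes, q_no):
        """LMSR cost function: C(q) = b * log(e^(q_yes/b) + e^(q_no/b))."""
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        """Current price of the success contract ~ market probability of success."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        """Charge a trader the cost difference for buying success shares."""
        old = self.cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self.cost(self.q_yes, self.q_no) - old

m = BinaryMarket(b=100.0)
print(round(m.price_yes(), 2))  # 0.5: no information in the market yet
m.buy_yes(50)                   # a trader bets on the initiative succeeding
print(round(m.price_yes(), 2))  # price (implied probability) rises above 0.5
```

The appeal for the parallel-markets experiment above is that the price doubles as a live, aggregated probability estimate, so the professional, academic, and blended markets could be compared directly on the same initiative.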

I’ll address the wider meaning of “betting” that I was also aiming at later . . .



Welcome to a new feature of Open the Echo Chamber: a quick post on something that interests me.  Yes, I am capable of writing fewer than 1,000 words in a post, but most of the time I take on subjects that need a lot of attention.  Going forward, I am going to try to intersperse some “quick thoughts” on the blog for those who lack the 15 minutes and headspace to deal with my longer fare . . .

I’ve been doing a lot of reading about hedge funds lately, and it recently hit me: does anyone in the markets bet for or against development initiatives?  It seems to me that you could – after all, a big initiative from either a multilateral or large bilateral donor will often come with quite a bit of money attached (at least initially), a lot of publicity, and some clearly stated goals that are almost always tied to economic growth or diversification.  So, do investors look at these initiatives and bet for or against them?  I’m not saying they bet directly on an initiative, but on its outcome: for example, do funds look at large food security initiatives in a particular country and bet on the prices of the crops involved in that initiative?

Here is why I care: if nobody is betting on them, it pretty much signals that these initiatives are largely irrelevant.  Either they are not large enough to move any market in the short or long term, or they are not aimed at anything likely to induce a transformation of economy and society through some set of cascading impacts in the long term.  If this is the case, it seems to me we ought to back out of those initiatives right away.  This is not to say that we should not be addressing the needs of the most vulnerable people in the world, but to suggest that an absence of interest in these initiatives might mean that our efforts to address these needs are not likely to come to much.

On the other hand, if we see significant betting on the outcomes of initiatives, it seems to me we might start to look at the direction of this betting (short or long) to get a sense of how things are likely to play out, and start looking for problems/leveraging opportunities as soon as possible.

Just a quick thought . . .



Yesterday, I took the relief community to task for not spending more time seriously thinking about global environmental change.  To be clear, this is not because that community pays no attention, or is unaware of the trend toward increasing climate variability and extreme weather events in many parts of the world that seems to be driving ever-greater needs for intervention.  That part of the deal is pretty well covered by the humanitarian world, though some folks are a bit late to the party (and it would be good to see a bit more open, informal discussion of this – most of what I have seen is in very formal reports and presentations).  I am more concerned that the humanitarian community gives little or no thought to the environmental implications of its interventions – in the immediate rush to save lives, we are implementing projects and conducting activities that have a long-term impact on the environment at scales ranging from the community to the globe.  We are not, however, measuring these impacts in really meaningful ways, and therefore run the risk of creating future problems through our current interventions.  This is not a desirable outcome for anyone.

But what of the development community, those of us thinking not in terms of immediate, acute needs as much as we are concerned with durable transformations in quality of life that will only be achieved on a generational timescale?  You’d think that this community (of which I count myself a part) would be able to grasp the impact of climate change on people’s long-term well-being, as both global environmental changes (such as climate change and ecosystem collapse) and development gains unfold over multidecadal timescales.  Yet the integration of global environmental change into development programs and research remains preliminary and tentative – and there is great resistance to such integration from many people in this community.

Sometimes people genuinely don’t get it – they either don’t think that things like climate change are real problems, or fail to grasp how those problems impact their programs.  These are the folks who would lose at the “six degrees of Kevin Bacon” game – I’ve said it before, and I will say it again: global environmental change is development’s Kevin Bacon: I can link environmental change to any development challenge in three steps or less.  Sometimes the impacts are really indirect, which can make them hard to see.  For example, take education: in some places, climate change will alter growing seasons such that farm productivity will be reduced, forcing families to use more labor to get adequate food and income, which might lead parents to pull their kids from school to supply that labor.  Yep, at least some education programs are impacted by climate change, an aspect of global environmental change.

Other times, though, I think that the resistance comes from a very legitimate place: many working in this field are totally overtaxed as it is.  They know that various aspects of global environmental change are problems in the contexts in which they work, but lack the human and financial resources to accomplish most of their existing tasks. Suddenly they hear that they will have to take something like climate change into account as they do their work, which means new responsibilities that will entail new training, but often come without new personnel or money.  It is therefore understandable when these folks, faced with these circumstances, greet the demand for the integration of global environmental change considerations into their programs with massive resistance.

I think the first problem contributes to the second – it is difficult to prioritize people and funding for a challenge that is poorly understood in the development community, and whose impacts on the project or initiative at hand might be difficult to see.  But we must do this – various forms of global environmental change are altering the future world at which we are aiming with our development programs and projects.  While an intervention appropriate to a community’s current needs may result in improvements to human well-being in the short term, the changes brought on by that intervention may be maladaptive in ten or twenty years and end up costing the community much more than it gained initially.

Global environmental change requires us to think about development like a fade route in football (American), or the through ball in soccer (the other football).  In both cases, the key is to put the ball where the target (the receiver of the pass) is going to be, not where they are now.  Those who can do this have great success.  Those who cannot have short careers.  Development needs to start working on its timing routes, thinking about where our target communities are going to be ten and twenty years from now as we design our programs and projects.

So, how do we start putting our projects through on goal?  One place to start would be by addressing two big barriers: the persistence of treating global environmental change as a development sector like any other, and the failure of economics to properly cost the impacts of these changes.

First, global environmental change is not a sector.  It is not something you can cover in a section of your project plan or report, as it impacts virtually all development sectors.  Climate change alters the range and number of vectors for diseases like malaria.  Overfishing to meet the demands of consumers in the Global North can crush the food security of poor coastal populations in the Global South.  Deforestation can intensify climate change, lead to soil degradation that compromises food security, and even distort economic policy (you can log tropical hardwoods really quickly and temporarily boost GNP in a sort of “timber bubble”, but eventually you run out of trees, and those 200-500 year regrowth times mean that the bubble will pop and a GNP downturn is the inevitable outcome of such a policy).  If global environmental change is development’s Kevin Bacon, it is pretty much omnipresent in our programs and projects – we need to be accounting for it now.  That, in turn, requires us to start thinking much longer term – we cannot design projects with three-to-five-year horizons, as that is really the relief-to-recovery horizon (see part 1 for my discussion of global environmental change in that context).  Global environmental change means thinking about our goals on a much longer timescale, and at a much more general (and perhaps ambitious) scale.  The uncertainty bars on the outcomes of our work get really, really huge on these timescales . . . which to me is another argument for treating development as a catalyst aimed at triggering changes in society by facilitating the efforts of those with innovative, locally appropriate ideas, as opposed to imposing and managing change in an effort to achieve a narrow set of measurable goals at all costs.  My book lays out the institutional challenges to such a transformation, such as rethinking participation in development, which we will have to address if this is ever to work.

Second, development economics needs to catch up to everyone else on the environment.  There are environmental economists, but not that many – and there are virtually no development economists that are trained in environmental economics.  As a result, most economic efforts to address environmental change in the context of development are based on very limited understandings of the environmental issues at hand – and this, in turn, creates a situation where much work in development economics either ignores or, in its problematic framings of the issue, misrepresents the importance of this challenge to the development project writ large. Until development economists are rewarded for really working on the environment, in all its messiness and uncertainty (and that may be a long way off, given how marginal environmental economists are to the discipline), I seriously doubt we are going to see enough good economic work linking development and the environment to serve as a foundation for a new kind of thinking about development that results in durable, meaningful outcomes for the global poor.  In the meantime, it seems to me that there is a huge space for geographers, anthropologists, sociologists, political scientists, new cultural historians, and others to step up and engage this issue in rich, meaningful ways that both drive how we do work now and slowly force new conversations on both economics and the practice of development.

I do admit, though, that my expanding circle of economics colleagues (many of whom I connected with via this blog and Twitter) has given me entrée into a community of talented people who give me hope – they are interested and remarkably capable, and I hope they continue to engage me and my projects going forward . . . I think there is a mutual benefit there.

Let me be clear: the disconnect between development studies and environmental studies is closing, and there are many, many opportunities to continue building connections between these worlds.  This blog is but one tiny effort in a sea of efforts, which gives me hope – with lots of people at work on this issue, someone is bound to succeed.

In part three, I will take up why global environmental change means that we have to rethink the RCT4D work currently undertaken in development – specifically, why we need much, much better efforts at explanation if this body of work is to give us meaningful, long-term results.