A while back, I wrote a blog post on a report for ActionAid, written by Alex Evans, on critical uncertainties for development between the present and 2020. One of the big uncertainties Alex identified was environmental shocks, though in that version of the report he limited these to climate-driven shocks. In my post, I suggested that he widen his scope, as environmental shocks might also include ecosystem collapse, such as in major global fisheries – collapses that are not really related to climate change, but are still of great importance. The collapse of the Gulf of Guinea large marine ecosystem (largely due to commercial overfishing from places other than Africa) has devastated local fish hauls, lowering the availability of protein in the diets of coastal communities and driving enormous pressure on terrestrial fauna as these communities seek to make up for the lost protein. Alex was quite generous with my comments, and agreed with this observation wholeheartedly.
And then today, I stumbled on this – a simple visualization of Atlantic Fisheries in 1900 and 2000, by fish haul. The image is striking (click to expand):
Now, I have no access to the datasets used to construct this visualization, and therefore I can make no comments on its accuracy (the blog post on the Guardian site is not very illuminating). However, this map could be off by quite a bit in terms of how good hauls were in 1900, and how bad they are now, and the picture would still be very, very chilling. As I keep telling my students, all those new, “exotic” fish showing up in restaurants are not delicacies – they are just all that is left in these fisheries.
This is obviously a development problem, as it compromises livelihoods and food supplies. Yet I don’t see anyone addressing it directly, even aid organizations engaged with countries on the coast of the Gulf of Guinea, where this impact is most pronounced. And how long until even the rich really start to feel the pinch?
Go here to see more visualizations – including one of the reach of the Spanish fishing fleet that makes clear where the pressure on the Gulf of Guinea is coming from.
And now everyone is implicated . . .
Updated 7 June 2011: I can find no evidence that any of my TIAA-CREF funds are holding Glencore. So far, so good . . .
aaannnnddd
No Glencore in my Vanguard 2025 Fund (kid’s college fund). Sadly, though, there is Gazprom. And probably a hell of a lot of other problematic stuff . . . nobody is clean, I tell you.
As a geographer, I spend a lot of time thinking about interconnections – how events and processes in one place influence events and processes in other places. I use these interconnections as a teaching tool in my courses, to help students understand how, for example, our levels of consumption here in the US preclude similar levels of consumption for the rest of the world (there are not enough resources out there to make that happen). I am always careful to make sure that the students understand that I am as bound up in these linkages as they are – I certainly do not live off the grid, walking/riding a bike everywhere and eating only food I grow (or that is grown locally). But it still hurts every time I find a new way in which I am bound to, and therefore a cause of, some of the processes I find most frustrating in the world.

So, this excellent post from FairPensions was a bit tough. Simply put, Glencore, a well-known problem company that trades heavily in the food commodities markets (and appears to be making those markets, as it were, to its own advantage), has been fast-tracked into the FTSE 100, and is therefore now likely part of a lot of the mutual funds and pension plans to which we all contribute. I’m going to have to check on this, and pray that TIAA-CREF has some sense, but . . . dammit.
For earlier discussions of food insecurity and the commodities markets, see here, here and here.
Academic Adaptation and "The New Communications Climate"
Andrew Revkin has a post up on Dot Earth that suggests some ways of rethinking scientific engagement with the press and the public. The post is something of a distillation of a more detailed piece in the WMO Bulletin. Revkin was kind enough to solicit my comments on the piece, as I have appeared on Dot Earth before in an effort to deal with this issue as it applies to the IPCC, and this post expands on my initial rapid response to him.
First, I liked the message of these two pieces a lot, especially the push for a more holistic engagement with the public through different forms of media, including the press. As Revkin rightly states, we need to “recognize that the old model of drafting a press release and waiting for the phone to ring is not the path to efficacy and impact.” Someone please tell my university communications office.
A lot of the problem stems from our lack of engagement with professionals in the messaging and marketing world. As I said to the very gracious Rajendra Pachauri in an email exchange back when we had the whole “don’t talk to the media” controversy:
I am in no way denigrating your [PR] efforts. I am merely suggesting that there are people out there who spend their lives thinking about how to get messages out there, and control that message once it is out there. Just as we employ experts in our research and in these assessment reports precisely because they bring skills and training to the table that we lack, so too we must consider bringing in those with expertise in marketing and outreach.
I assume that a decent PR team would be thinking about multiple platforms of engagement, much as Revkin is suggesting. However, despite the release of a new IPCC communications strategy, I’m not convinced that the IPCC (or much of the global change community more broadly) yet understands how desperately we need to engage with professionals on this front. In some ways, there are probably good reasons for the lack of engagement with pros, or with the “new media.” For example, I’m not sure Twitter will help with managing climate change rumors and misinformation as they are released, if only because we are now too far behind the curve – things are so politicized that it is too late for “rapid response” to misinformation. I wish we’d been on this twenty years ago, though . . .
But this “behind the curve” mentality does not explain our lack of engagement. Instead, I think there are a few other things lurking here. For example, there is the issue of institutional politics. I love the idea of using new media/information and communication technologies for development (ICT4D) to gather and communicate information, but perhaps not in the ways Revkin suggests. I have a section later in Delivering Development that outlines how, using existing mobile tech in the developing world, we could both get better information about what is happening to the global poor (the point of my book is that, as I think I demonstrate in great detail, we actually have a very weak handle on what is going on in most parts of the developing world) and empower the poor to take charge of efforts to address the various challenges – environmental, economic, political and social – that they face every day.

It seems to me, though, that the latter outcome is a terrifying prospect for some in development organizations, as it would create a much more even informational playing field that might force these organizations to negotiate with, and take seriously the demands of, the people with whom they are working. Thus we get a sort of ambiguity about ICT4D in development practice, where we seem thrilled by its potential yet continue to ignore it in our actual programming. This is not a technical problem – after all, we have the tech, and if we want to do this, we can – it is a problem of institutional politics.

I did not wade into a detailed description of the network I envision in the book because I meant to present it as a political challenge to the continued reticence of many development organizations and practitioners to really engage the global poor (as opposed to telling them what they need and dumping it on them). But my colleagues and I have a detailed proposal for just such a network . . . and I think we will make it real one day.
Another, perhaps more significant barrier to major institutional shifts with regard to outreach is a chicken-and-egg situation of limited budgets and a dominant academic culture that does not understand media/public engagement or politics very well, and sees no incentive for engagement. Revkin nicely hits on the funding problem as he moves past simply beating up on old-school models of public engagement:
As the IPCC prepares its Fifth Assessment Report, it does so with what, to my eye, appears to be an utterly inadequate budget for communicating its findings and responding in an agile way to nonstop public scrutiny facilitated by the Internet.
However, as much as I agree with this point (and I really, really agree), the problem here is not funding in itself – it is the way in which a lack of funding erases an opportunity for cultural change that could have a positive feedback effect on the IPCC, global assessments, and academia more generally, radically altering all three. The bulk of climate science, as well as social impact studies, comes from academia – which has a very particular culture of rewards. Virtually nobody in academia is trained to understand that they can be rewarded for being a public intellectual, for making their work accessible to a wide community – and if I am really honest, there are many places that actively discourage this engagement. But there is a culture change afoot in academia, at least among some of us, that could be leveraged right now – and this is where funding could trigger a positive feedback loop.
Funding matters because once you get a real outreach program going, productive public engagement yields significant personal, intellectual and financial benefits for the participants – benefits that I believe could drive very rapid culture change. My Twitter account has done more for the readership of my blog, and for my awareness of the concerns and conversations of the non-academic development world, than anything else I have ever done – this has been a remarkable personal and intellectual benefit of public engagement for me. As universities continue to retrench, faculty find themselves ever more vulnerable to downsizing, temporary appointments, and a staggering increase in administrative workload (lots of tasks distributed among fewer and fewer full-time faculty). I fully expect that without some sort of serious reversal soon, I will retire thirty-odd years hence as an interesting and very rare historical artifact – a professor with tenure. Given these pressures, I have been arguing to my colleagues that we must engage with the public and with the media to build constituencies for what we do beyond our academic communities. My book and my blog are efforts to do just this – to become known beyond the academy such that I, as a public intellectual, have leverage over my university, and not the other way around. And I say this as someone who has been very successful in the traditional academic model. I recognize that my life will need to be lived on two tracks now – public and academic – if I really want to help create some of the changes in the world that I see as necessary.
But this is a path I started down on my own, for my own idiosyncratic reasons – to trigger a wider change, we cannot assume that my academic colleagues will easily shed the value systems in which they were intellectually raised, and to which they have been held for many, many years. Without funding to get outreach going, and to demonstrate to this community that changing our model is not only worthwhile but enormously valuable, I fear that such change will come far more slowly than the financial bulldozers knocking on the doors of universities and colleges across the country. If the IPCC could get such an effort going, demonstrating how public outreach improved the reach of its results, enhanced the visibility and engagement of its participants, and created a path toward the progressive politics necessary to address the challenge of climate change, it would be a powerful example for other assessments. Further, the participants in these assessments would return to their campuses with evidence for the efficacy and importance of such engagement . . . and many of these participants are senior members of their faculties, in a position to midwife major cultural changes in their institutions.
All this said, this culture change will not be birthed without significant pain. Some faculty and members of these assessments want nothing to do with the murky world of politics, and prefer to continue operating under the illusion that they just produce data and bear no responsibility for how it is used. And certainly the assessments will fear “politicization” . . . to which I respond “too late.” The question is not whether the findings of an assessment will be politicized, but whether those who best understand those findings will engage in these very consequential debates and argue for what they feel is the most rigorous interpretation of the data at hand. Failure to do so strikes me as a dereliction of duty. On the other hand, just as faculty might come to see why public engagement is important for their careers and the work they do, universities will be gripped by contradictory impulses – a publicly engaged faculty will serve as a great justification for faculty salaries, increased state appropriations, new facilities, etc. Then again, nobody likes to empower the labor, as it were . . .
In short, in thinking about public engagement and the IPCC, Revkin is dredging up a major issue related to all global assessments, and indeed the practices of academia. I think there is opportunity here – and I feel like we must seize this opportunity. We can either guide a process of change to a productive end, or ride change driven by others wherever it might take us. I prefer the former.
The Qualitative Research Challenge to RCT4D: Part 2
Well, the response to part one was great – really good comments, and a few great response posts. I appreciate the efforts of some of my economist colleagues/friends to clarify the terminology and purpose behind RCTs. All of this has been very productive for me – and hopefully for others engaged in this conversation.
First, a caveat: on the blog I tend to write quickly and with minimal editing – so I get a bit fast and loose at times . . . well, faster and looser than I intend. To be clear: I did not mean to suggest that nobody was doing rigorous work in development research – in fact, the rest of my post clearly set out to refute that idea, at least in the qualitative sphere. But I see how Marc Bellemare might have read me that way. What I should have said was that there has always been work, both in research and implementation, where rigorous data collection and analysis were lacking. In fact, there is quite a lot of such work. I think we can all agree this is true . . . and I should have been clearer.
I have also learned that what qualitative social scientists/social theorists mean by theory and what economists mean by theory seem to be two different things. Lee defined theory as “formal mathematical modeling” in a comment on part 1 of this series, which is emphatically not what a social theorist might mean. When I say theory, I am talking about a conjectural framing of a social totality such that complex causality can at least be contained, if not fully explained. This framing should have reference to some sort of empirical evidence, and therefore should be testable and refinable over time – perhaps through various sorts of ethnographic work, perhaps through formal mathematical modeling of the propositions at hand (I do a bit of both, actually). In other words, what I mean by theory (and what I focus on in my work) is the establishment of a causal architecture for observed social outcomes. I am all about the “why it worked” part of research, and far less about the “if it worked” questions – perhaps mostly because I have researched unintended “development interventions” (e.g. unplanned road construction, or the establishment of a forest reserve that alters access to livelihoods resources) that did not have a clear goal, a clear “it worked!” moment to identify. All I have been looking at are the outcomes of particular events, and trying to establish the causes of those outcomes. Obviously, this can be translated to an RCT environment, because we could control for the intervention and expected outcome, and then use my approaches to get at the “why did it work/not work” issues.
It has been very interesting to see the economists weigh in on what RCTs really do – they establish, as Marc puts it, “whether something works, not in how it works.” (See also Grant’s great comment on the first post.) I don’t think I would get a lot of argument if I noted that without causal mechanisms, we can’t be sure why “what worked” actually worked, or whether the causes of “what worked” are in any way generalizable or transportable. We might have some idea, but I would have low confidence in any research that ended at this point. This, of course, is why Marc, Lee, Ruth, Grant and any number of other folks see a need for collaboration between quant and qual – so that we can get the right people, with the right tools, looking at different aspects of a development intervention to rigorously establish the existence of an impact, and then establish an equally rigorous understanding of the causal processes by which that impact came to pass. Nothing terribly new here, I think. Except, of course, for my continued claim that the qualitative work I do see associated with RCT efforts is mostly awful, tending toward bad journalism (see my discussion of bad journalism and bad qualitative work in the first post).
But this discussion misses a much larger point about epistemology – what I intended to write about in this second part of the series all along. I do not see the dichotomy between measuring “if something works” and establishing “why something worked” as analytically valid. Simply put, without some (at least hypothetical) framing of causality, we cannot rigorously frame research questions around either question. How can you know if something worked if you are not sure how it was supposed to work in the first place? Qualitative research provides the interpretive framework for the data collected via RCT4D efforts – a necessary framework if we want RCT4D work to be rigorous. By separating qualitative work from the quant-oriented RCT work, we are assuming that we can somehow pull data collection apart from the framing of the research question. We cannot – nobody is completely inductive, which means we all work from some sort of framing of causality. The danger comes when we don’t acknowledge this simple point – under most RCT4D work, those framings are implicit and completely uninterrogated by the practitioners. Even where they do come to the fore (Duflo’s “three I’s”), they are not interrogated – they are assumed as framings for the rest of the analysis.
If we don’t have causal mechanisms, we cannot rigorously frame research questions to see if something is working – we are, as Marc says, “like the drunk looking for his car keys under the street lamp when he knows he lost them elsewhere, because the only place he can actually see is under the street lamp.” Only I would argue that we are the drunk looking for his keys under the street lamp with no idea whether they are there or not.
In short, I’m not beating up on RCT4D, nor am I advocating for more conversation – no, I am arguing that we need integration, teams with quant and qual skills that frame the research questions together, that develop tests together, that interpret the data together. This is the only way we will come to really understand the impact of our interventions, and how to more productively frame future efforts. Of course, I can say this because I already work in a mixed-methods world where my projects integrate the skills of GIScientists, land use modelers, climate modelers, biogeographers and qualitative social scientists – in short, I have a degree of comfort with this sort of collaboration. So, who wants to start putting together some seriously collaborative, integrated evaluations?
The Qualitative Research Challenge to RCT4D: Part 1
Those following this blog (or my twitter feed) know that I have some issues with RCT4D work. I’m actually working on a serious treatment of the issues I see in this work (i.e. journal article), but I am not above crowdsourcing some of my ideas to see how people respond. Also, as many of my readers know, I have a propensity for really long posts. I’m going to try to avoid that here by breaking this topic into two parts. So, this is part 1 of 2.
To me, RCT4D work is interesting because of its emphasis on rigorous data collection – certainly, the lack of such rigor has long been a problem in development research, and I have no doubt that the data they are gathering is valid. However, part of the reason I feel confident in this data is that, as I raised in an earlier post, it is replicating findings from the qualitative literature . . . findings that are, in many cases, long-established with rigorously gathered, verifiable data. More on that in part 2 of this series.
One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity. However, in the qualitative realm we spend a lot of time thinking about rigor and validity, and how we might achieve both – and there are tools we use to this end, ranging from discursive analysis to cross-checking interviews with focus groups and other forms of data. Certainly, these are different means of establishing rigor and validity, but they are still there.
Without rigor and validity, qualitative research devolves into bad journalism. As I see it, good journalism captures a story or an important issue, and illustrates that issue through examples. These examples are not meant to rigorously explain the issue at hand, but to clarify it or ground it for the reader. When journalists attempt to move to explanation via these same few examples (as columnists like Kristof and Friedman far too often do), they start making unsubstantiated claims that generally fall apart under scrutiny. People mistake this sort of work for qualitative social science all the time, but it is not. Certainly there is some really bad social science out there that slips from illustration to explanation in just the manner I have described, but this is hardly the majority of the work found in the literature. Instead, rigorous qualitative social science recognizes the need to gather valid data, and therefore requires conducting dozens, if not hundreds, of interviews to establish understandings of the events and processes at hand.
This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement. For all of the effort devoted to data collection under these efforts, there is stunningly little time and energy devoted to explaining the patterns seen in the data. In short, RCT4D often reverts to bad journalism when it comes time for explanation. Patterns gleaned from meticulously gathered data are explained in an offhand manner. For example, in her (otherwise quite well-done) presentation to USAID yesterday, Esther Duflo suggested that some problematic development outcomes could be explained by a combination of the “three I’s”: ideology, ignorance and inertia. This is a boggling oversimplification of why people do what they do – ideology is basically nondiagnostic (you need to define and interrogate it before you can do anything about it), and ignorance and inertia are (probably unintentionally) deeply patronizing assumptions about people living in the Global South that have been disproven time and again (my own work in Ghana has demonstrated that people operate with really fine-grained information about incomes and gender roles, and know exactly what they are doing when they act in a manner that limits their household incomes – see here, here and here). Development has claimed to be overcoming ignorance and inertia since . . . well, since we called it colonialism. Sorry, but that’s the truth.
Worse, this offhand approach to explanation is often “validated” through reference to a single qualitative case that may or may not be representative of the situation at hand – horribly ironic for an approach that is trying to move development research past the anecdotal. This is not merely external observation – I have heard from people working inside J-PAL projects that the overall program puts little effort into serious qualitative work, and has little understanding of what rigor and validity might mean in the context of qualitative methods or explanation. In short, the bulk of the explanation for the interesting patterns of behavior that emerge from these studies rests on uninterrogated assumptions about human behavior that do not hold up to empirical reality. What RCT4D has identified are patterns, not explanations – explanation requires a contextual understanding of the social.
Coming soon: Part 2 – Qualitative research and the interpretation of empirical data
On explanation in development research
I was at a talk today where folks from Michigan State were presenting research and policy recommendations to guide the Feed the Future initiative. I greatly appreciate this sort of presentation – it is good to get real research in the building, and to see USAID staff who have so little time turn out in large numbers to engage. Once again, folks, it’s not that people in the agencies aren’t interested or don’t care, it’s a question of time and access.
In the course of one of the presentations, however, I saw a moment of “explanation” for observed behavior that nicely captures a larger issue that has been eating at me as the randomized controlled trials for development (RCT4D) movement gains speed . . . there isn’t a lot of explanation there. There is really interesting data, rigorously collected, but explanation is another thing entirely.
In the course of the presentation, the presenter put up a slide showing a wide dispersion of prices around the average price received by farmers for their maize crops in a single market area (near where I happen to work in Malawi). Nothing too shocking there, as this happens in Malawi, and indeed in many places. From a policy and programming perspective, however, it’s important to know that the average price is NOT the same thing as what a given household is taking home. But then the presenter explained this dispersion by noting (in passing) that some farmers were more price-savvy than others. Two problems here:
1) There is no evidence at all to support this claim, either in his data or in the data I have from an independent research project nearby; and
2) This offhand explanation has serious policy ramifications.
This explanation is a gross oversimplification of what is actually going on here. In Mulanje (near the Luchenza market area analyzed in the presentation), price information is very well communicated within villages. So while some farmers might indeed be more savvy than others, the prices they are able to get are communicated throughout the village, distributing that information to everyone else. The dispersion of prices is therefore the product of other factors. Desperation selling is probably part of the issue (another offhand explanation offered later in the presentation). However, what we really need, if we want a rigorous understanding of the causes of this dispersion and how to address it, is a serious effort to grasp the social component of agriculture in this area – how gender roles, for example, shape household power dynamics, farm roles, and the prices people will sell at (a social consideration that exceeds explanation via markets), or how social networks connect particular farmers to particular purchasers in a manner that facilitates or inhibits price maximization at market. These considerations are both causes of the phenomena the presenter described and the points of leverage on which policy might act to actually change outcomes. If farmers aren’t “price-savvy,” this suggests the need for a very different sort of intervention than what would be needed to address gendered patterns of agricultural strategy tied to long-standing gender roles and expectations.
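To make the average-versus-household point concrete, here is a minimal sketch in Python – the prices, units, and household count are entirely made up for illustration, drawn from neither the presentation nor my own data – showing how a single “average price” can mask exactly the kind of dispersion at issue:

```python
# Purely illustrative: why the average price tells us little about what
# any given household actually takes home. All numbers are invented.
import statistics

# Hypothetical farmgate maize prices (kwacha/kg) for ten households
# selling through the same market area.
prices = [18, 22, 25, 27, 30, 33, 38, 45, 52, 60]

mean_price = statistics.mean(prices)      # 35.0
median_price = statistics.median(prices)  # 31.5

print(f"mean: {mean_price}, median: {median_price}, "
      f"range: {min(prices)}-{max(prices)}")
# A program built around the 35 kwacha/kg "average" misreads the situation
# for most of these households: half sold for less than 32, and the lowest
# seller received less than a third of what the highest did.
```

Nothing here requires code, of course – the point is simply that reporting the mean without the distribution hides most of what matters for policy and programming.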
This is a microcosm of what I am seeing in the RCT4D world right now – really rigorous data collection, followed by really thin interpretations of the data. It is not enough to point out interesting patterns and then start throwing explanations at them – we must turn from the rigorous quantitative identification of significant patterns of behavior to the qualitative exploration of the causes of those patterns and their endurance over time. I’ve been wrestling with these issues in Ghana for more than a decade now, an effort that has most recently led me to a complete reconceptualization of livelihoods (shifting from understanding livelihoods as a means of addressing material conditions to seeing them as a means of governing behaviors through particular ways of addressing material conditions – the article is in review at Development and Change). However, the empirical tests of this approach (with admittedly tiny-n samples in Ghana, and very preliminary looks at the Malawi data) suggest that it offers better explanatory resolution for observed behaviors than existing livelihoods approaches (which would end up dismissing a lot of choices as illogical or the products of incomplete information) – and therefore a better foundation for policy recommendations than is available without this careful consideration of the social.
See, for example, this article I wrote on how we approach gender in development (also a good overview of the current state of gender and development, if I do say so myself). In it, I empirically demonstrate that taking seriously how gender is constructed in particular places has large material consequences for whose experiences we can understand, and therefore for the sorts of interventions we might program to address particular challenges. We need more rigorous wrestling with “the social” if we are going to learn anything meaningful from our data. Period.
In summary, explanation is hard. Harder, in many ways, than rigorous data collection. Until we start spending at least as much effort on the explanation side as we do on the collection side, we will not really change much of anything in development.
On field experience and playing poor
There is a great post up at Good on “Pretending to be Poor” experiments, where participants try to live on tiny sums of money (e.g. $1.50/day) to better understand the plight of the global poor. Cord Jefferson refers to this sort of thing as “playing poor,” at least in part because participants don’t really live on $1.50 a day . . . after all, they are probably not abandoning their secure homes, and probably not working the sort of dangerous, difficult job that pays such a tiny amount. Consuming $1.50/day is one thing. Living on it is entirely another. (h/t to Michael Kirkpatrick at Independent Global Citizen for pointing out the post.)
This, for me, brings up another issue – the “authenticity” of the experiences many of us have had while doing fieldwork (or working in field programs), an issue that has been amplified by what seems to be the recent discovery of fieldwork by the RCT4D crowd (I still can’t get over the idea that they think living among the poor is a revolutionary idea). The whole point of participant observation is to better understand what people do and why they do it by experiencing, to some extent, their context – I find it inordinately difficult to understand how people even begin to meaningfully parse social data without this sort of grounding. Having malaria while living in a village does help one come to grips, in a rather visceral way, with the challenges the disease poses to making a living via agriculture. So too, living in a village during a drought that decimated a portion of the harvest – which put me in a position where I had to go a couple of (intermittent) days without food, and with inadequate food for quite a few more – gave me some sense of both the capacity and the limitations of the livelihoods strategies in the villages I write about in Delivering Development, and at least a limited understanding of the feelings of frustration and inadequacy that can arise when things go wrong in rural Africa, even as livelihoods strategies work to prevent the worst outcomes.
But the key part of that last sentence was “at least a limited understanding.” Being there is not the same thing as sharing the experience of poverty, development, or disaster. When I had malaria, I knew which clinics to go to, and I knew that I could afford the best care available in Cape Coast (and that care was very good) – I was not a happy guy on the morning I woke up with my first case, but I knew where to go, and that the doctor there would treat me comprehensively and I would be fine. So too with the drought – the villages I was living in were, at most, about 5 miles (8 km) from a service station with a food mart attached. Even as I went without food for a day, and went a bit hungry for many more, I knew in the back of my mind that if things turned dire, I could walk that distance and purchase all the food I needed. In other words, I was not really experiencing life in these villages because I couldn’t, unless I was willing to throw away my credit card, empty my bank account, and renounce all of my upper-class and government colleagues and friends. Only then would I have been thrown back on what I could earn in a day in the villages and on the (mostly appalling) care available in the rural clinic north of Eguafo. I was always critically aware of this fact, both in the moment and when writing and speaking about it since. Without that critical awareness, and a willingness to downplay our own (or others’) desire to frame our work as a heroic narrative, there is a real risk of creating our own versions of “playing poor” as we conduct fieldwork.
I'm a talking head . . .
Geoff Dabelko, Sean Peoples, Schuyler Null and the rest of the good folks at the Environmental Change and Security Program at the Woodrow Wilson International Center for Scholars were kind enough to interview me about some of the themes in Delivering Development. They’ve posted the video on the ECSP’s blog, The New Security Beat (you really should be checking it out regularly). So, if you want to see/hear me (as opposed to reading me), you can go over to their blog, or just click below.
Little milestones
A quick thank you to everyone who has stopped by this blog over the past 10 months. Google Analytics tells me that my 10,000th individual visitor arrived at some point earlier today . . . which sort of blows my mind. Yeah, some of you all seem to crap 10,000 visitors on a Monday – I know. But hey, I started the blog largely at the behest of my publisher, as a means of getting myself and Delivering Development out there. It has become a lot more than that for me – it lets me vent to a really interesting readership, and helps me manage my lecturing withdrawal while I am on leave from academia. I appreciate all the comments, emails, tweets and retweets – I’ve learned a lot from this effort, and from the community it seems to have brought me. I will attempt to remain suitably entertaining/intelligent going forward . . .
Now, I want every single one of you to go out and buy a copy of my book. Pronto.