[In a first for this blog, this post is co-authored with my colleague Rob Johnston, Director of the George Perkins Marsh Institute and Professor of Economics at Clark University. I’m grateful for his willingness to work with me on this]
A blog post by NYU marketing professor Scott Galloway has been making the rounds recently, not only via his Twitter feed and extensive mailing list, but also as a column in Business Weekly and as the source material for articles in media outlets like MassLive (we’re not providing links because, as will become clear, we don’t want to reward this work with clicks). Ostensibly a post on the risk Covid-19 presents for higher education, this work is best characterized as pseudo-science designed to generate splashy media headlines. Nothing generates clicks quite like existential doom, and in this sense, both Galloway and media outlets win with content like this. If you are into marketing, and you subscribe to the maxim “there’s no such thing as bad publicity,” this is an interesting case study. Look closer, however, and it becomes obvious that this is much ado about nothing—Galloway’s study is nothing more than unverified, unvalidated speculation that appears driven by his particular view of fragility in higher ed.
It’s unfortunate that media attention which might have gone to meaningful research on the pandemic was instead devoted to a blog post such as this one. We would rather not validate the “eyeballs at all costs” approach behind Galloway’s post and thereby give him more of the attention he so clearly desires. However, the media stories that have emerged from his analysis have cast the well-being of our institution, and several others, into question. We, and we suspect others across higher ed, are hearing expressions of concern from students, parents, and alumni. It is therefore important to demonstrate that the analysis in question is appallingly bad (analogous analysis would not receive a passing grade in either of our classes), to explain why this is the case, and to make clear why nobody should take it seriously.
While there are many ways to challenge Galloway’s “analysis” (scare quotes are the only way to dignify this work with that term), we focus on three categories of problem. First, there are flaws with the data used and its connection to the conclusions he attempts to draw. Second, for an analysis that is supposedly centered on Covid-19 risk, there is nothing in it that reflects the different epidemiological situations of institutions around the United States. Third and most importantly, this “analysis” is not anchored in any empirical testing or validation, though it easily could be. As a result, it is entirely self-referential and open to manipulation to provide any result desired by the author (or anyone else), including clickbait narratives of doom.
First, the data. The data Galloway employed were what he could find easily, rather than the data he needed to create a valid and reliable analysis. While all analyses are constrained by the data at hand, this appears to be a particularly lazy case of the streetlight effect. As a result, the data chosen for this analysis are entirely ad hoc and lacking in clear links to actual institutional outcomes (e.g., whether institutions have failed in the past or are failing now). Their predictive value is unknown. Some examples of the data issues that pervade this analysis:
- Galloway’s analysis attempts to tie the value of an institutional credential to the institution’s average monthly online search volume. Whether this is valid in any context is debatable, but even if we accept this premise, there are many things that distort that volume. Power Five conference schools have substantially higher search volumes than smaller universities and liberal arts colleges because they have nationally ranked football teams and are often home to more than ten times the undergrads, not necessarily because people think those degrees are higher value. There is no evidence that Galloway attempted any sort of normalization for these effects before simply linking search volume to brand value (which seems ironic for a marketing professor).
- It appears that this analysis treats higher instructional wages per full-time student as a good thing without explaining why. Salaries are shaped by a range of factors, including regional markets and the composition of the institution (institutions with law schools, medical schools, and engineering schools will have much higher instructional wages than those without). Again, without an effort to address these differences, the crude use of salary as a proxy for institutional quality is somewhere between problematic and meaningless.
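The normalization point above is easy to illustrate. The sketch below uses entirely invented numbers (both institutions and figures are hypothetical) to show how raw search volume and per-student search volume can rank the same two schools in opposite orders:

```python
# Hypothetical illustration only: all names and numbers are invented.
# Raw search volume favors the large school; dividing by enrollment
# (one crude normalization) can reverse the ordering.
schools = {
    # name: (monthly_search_volume, undergrad_enrollment)
    "Big Football State U": (900_000, 45_000),
    "Small Liberal Arts College": (40_000, 1_800),
}

for name, (volume, enrollment) in schools.items():
    per_student = volume / enrollment
    print(f"{name}: raw volume {volume:,}, per-student {per_student:.1f}")

# Here the large school dominates on raw volume (900,000 vs. 40,000),
# but the small college comes out ahead per student (22.2 vs. 20.0) --
# which is why tying "brand value" to raw search volume, with no
# normalization at all, is so questionable.
```

This is only one of many possible adjustments (enrollment, athletics coverage, and regional media markets all inflate volume); the point is that the analysis attempted none of them.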
Looking across the data he employs, Galloway doesn’t seem to understand what he is comparing across institutions, or the underlying financial situations of those institutions. For example:
- The analysis makes no distinction between all-undergrad institutions and institutions with graduate and professional programs, despite the fact that these institutions have very different sources of reputation and revenue. These distinctions produce very different opportunities and challenges across these institutions. Indeed, enrollments in graduate professional programs are in some ways countercyclical to undergraduate admissions, offering financial hedges to institutions that house them. This seems to be one of many critical oversights in this analysis.
- Incredibly, the analysis makes no distinction between state institutions and private institutions, which for reasons of revenue and politics will have very different fragilities and pathways to success.
Second, we note that for a piece purported to be an analysis of the risks Covid-19 poses to the health of institutions of higher ed, this work displays a shocking lack of data that measure Covid-19 risks or impacts. The actual rates of infection, stress on local and regional health systems, and the likely future trends in both are critical indicators for such an analysis. So too are the different guidelines in place for institutions in different states, and the different procedures that institutions might (or might not) be taking to attenuate risks. This sort of data speaks to the likelihood of different institutions having to close all face-to-face instruction, or even to close completely for a term or more. None of this information is in Galloway’s analysis, despite its wide availability on a range of platforms. Instead, Galloway assumes an oddly uniform risk across the United States at a time when extraordinarily clear regional disparities are emerging.
Third, this ad hoc “analysis” is self-referential: there is no attempt to validate it empirically. Galloway might have taken his proposed index and applied it to schools that have closed in the past year, or perhaps applied it to those that have closed over the past decade (as Covid-19 is largely cast as a stressor exacerbating existing issues for institutions) to check its explanatory power. This is standard practice in modeling exercises, used both to tune models and to assess their validity.
Instead, a range of critical questions go unanswered. Are appropriate variables included in the index? Are these variables weighted in the right way? Without careful validation against real data (e.g., whether institutions have thrived, perished, etc., or are doing so now), one cannot determine whether the proposed index has any predictive validity whatsoever.
Without testing, the implicit model behind this analysis is self-referential and easily engineered to produce any desired outcome. Do the results support the position you want to advocate? If not, just remove a variable or two, substitute new variables, or change the weighting (importance) given to different variables. Eventually you can get an analysis that tells a story that you like – even if that story is detached from reality. Because Galloway’s analysis has not been validated or ground-truthed in any way, there is no way to determine whether it tells us anything useful.
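The manipulation problem described above can be made concrete. In the sketch below (all institutions, variables, and values are invented for illustration), the same two schools are scored on the same variables; only the arbitrary weights change, and the ranking flips:

```python
# Hypothetical illustration only: all names, variables, and values are
# invented. An ad hoc index is, implicitly, a weighted sum of variables;
# with no empirical validation, the weights are free parameters.
institutions = {
    # name: [endowment_score, search_volume_score, cost_score]
    "College A": [0.9, 0.2, 0.3],
    "College B": [0.3, 0.9, 0.8],
}

def index_score(values, weights):
    """Weighted sum -- the implicit model behind an unvalidated index."""
    return sum(v * w for v, w in zip(values, weights))

weights_1 = [0.7, 0.2, 0.1]  # emphasize endowment
weights_2 = [0.1, 0.8, 0.1]  # emphasize search volume

for weights in (weights_1, weights_2):
    ranking = sorted(institutions,
                     key=lambda name: -index_score(institutions[name], weights))
    print(weights, "->", ranking)

# Under weights_1, College A "thrives" (0.70 vs. 0.47); under weights_2,
# College B does (0.83 vs. 0.28). Absent validation against real outcomes
# (e.g., which institutions actually closed), nothing privileges either
# set of weights, so the index can be tuned to tell any story at all.
```

Standard practice would be to fit or check those weights against observed outcomes, such as historical closures, rather than choosing them by fiat.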
This is not research. This is embarrassing and irresponsible. Perhaps even more disappointing, shoddy “analysis” of this type threatens to erode the long-term confidence that the public and policy-makers have in all research—including careful research that applies valid, reliable, and peer-reviewed methods. In his original blog post, Galloway used an aside noting that his analysis has not been peer reviewed, and a weak caveat that he sees this work as starting a conversation (of which, of course, he is the center), as a fig leaf to excuse this shoddy, irresponsible work. We view this as an astonishing abdication of responsibility by a person with a large audience who had to know his half-baked “analysis” would create significant concern at a number of institutions. As academics, we see his stipulation as an acknowledgement that this analysis is so poor that there is no chance it would survive peer review, or indeed be recoverable with revisions in a manner that would make it so. Bluntly, this analysis is so badly flawed that institutions it has slated to “thrive” or “survive” should perhaps consider their own situations carefully before feeling good about how they were characterized here.
This is not to say that Covid-19 poses no challenge to higher education, or even to suggest that higher education might not benefit from some reflection about its goals and practices at this time. We welcome careful research to address these important issues. However, we feel that identifying and addressing such challenges requires serious research and analysis, not the headline-grabbing dumpster fire that lurks beneath Galloway’s post and the resultant media attention.
If something useful can come of this absurd offering, it is to demonstrate the value of higher education when well-executed. We feel quite confident that any good undergraduate at Clark, and certainly all of our graduate students, would have received the training in critical thinking, research, and analysis necessary not only to identify many, if not all, of the problems we point to here, but to conduct such an analysis in a more effective, productive manner.