Let’s say you work on housing policy in an American city, and you want to reduce homelessness. You know there’s an evidence base on the question but you’re short on time and can’t read every study; you also think your city has unique qualities that mean research done elsewhere doesn’t necessarily apply.
Maybe you have unusually high rents, or weird zoning laws, or cold weather that makes homelessness in winter months particularly brutal. Maybe there’s some factor specific to your city you haven’t even considered that means evidence from elsewhere doesn’t apply.
Right now, a person in that situation is stuck. They can rely on their gut and intuition with all the subconscious biases and limitations that entails. Or they can rely on research that may or may not be applicable.
So a team of economists has another idea: get a group of forecasters together and ask them to predict what would happen in your city if you enacted the policy you have in mind.
In a new op-ed in the journal Science, Berkeley's Stefano DellaVigna, UChicago's Devin Pope, and Australian National University's Eva Vivalt explain why they’ve created a prediction platform for social science studies. The idea is to ask forecasters — a group that includes social scientists with relevant knowledge, but also policymakers, government workers, nonprofit employees, and interested laypeople — to predict the results of studies in progress: whether a homelessness program will reduce homelessness, whether it will reduce or increase stress, reduce or increase arrests, and so on. Over time, as predictions accumulate, the best forecasters will be identified, and they will get better and better at forecasting results.
The team behind the site has a few rationales for the project. One is that registering predictions makes it easier for null results — studies finding that an intervention or historical event didn’t matter — to get published. Currently, some journals regard such results as boring, but if there are registered predictions from researchers who expected real effects, a null result becomes much more interesting.
Another is that rigorous statistical thinking depends on priors: what you thought about a given question before a study is conducted. Optimally, we hold a prior about whether, say, a rapid rehousing program will effectively reduce homelessness in the long term, and we update that prior as more research on the question becomes available.
We update more based on better research, and less based on shoddier research — and how much we update depends on how good our prior was. “If I came to you with a study on how smoking is actually really healthy for you, you’d probably be a bit skeptical, and rightly so,” Vivalt explained to me in a Skype call. “We should start to take priors a bit more seriously.”
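This kind of updating has a standard textbook form. As a rough illustration (not anything from the op-ed itself, and with all numbers invented for the example), here is the normal-normal conjugate update, where a prior belief about an effect size and a new study's estimate are averaged with weights given by their precisions — so a well-powered study moves the prior a lot, and a noisy one barely moves it:

```python
# Minimal sketch of Bayesian prior updating (normal-normal conjugate model).
# All figures below are hypothetical, purely for illustration.

def update_prior(prior_mean, prior_var, study_estimate, study_var):
    """Combine a prior belief about an effect size with a new study's
    estimate, weighting each by its precision (inverse variance)."""
    prior_precision = 1.0 / prior_var
    study_precision = 1.0 / study_var
    posterior_var = 1.0 / (prior_precision + study_precision)
    posterior_mean = posterior_var * (
        prior_precision * prior_mean + study_precision * study_estimate
    )
    return posterior_mean, posterior_var

# Suppose forecasters expect a rehousing program to cut homelessness
# by about 5 percentage points, with wide uncertainty (variance 9).
prior_mean, prior_var = -5.0, 9.0

# A precise, well-powered study estimating a 10-point reduction
# (variance 1) pulls the posterior most of the way toward the study.
mean_precise, var_precise = update_prior(prior_mean, prior_var, -10.0, 1.0)

# A noisy study with the same point estimate (variance 25)
# leaves the posterior close to the original prior.
mean_noisy, var_noisy = update_prior(prior_mean, prior_var, -10.0, 25.0)
```

The asymmetry between the two cases is the point of Vivalt's smoking example: the same surprising estimate should shift beliefs far less when it comes from a shaky study than from a strong one.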
But there’s a problem: Often, scientists’ priors are unstated. They don’t actually say ahead of time if, for instance, they expect a cash program in Rwanda to reduce child mortality, or if they think a day care program in Quebec will increase crime. That makes the scientific process of updating priors very difficult, and provides an incentive for ex post facto claims that a given study result was obvious or inevitable. DellaVigna, Vivalt, and Pope argue that predictions allow us to register scientists’ priors ahead of time, and update more effectively from new studies.
In the long run, they hope to do what UPenn psychologist Philip Tetlock has been doing for decades, and try to develop a team of “superforecasters” who can reliably outperform the average when predicting how future studies will go.
Such forecasters, if their track record is strong enough and they’re sufficiently versed in a given question, can serve as a kind of living research literature summary for someone like a city policymaker working on homelessness. They can offer a prediction tailored to the specific city considering the policy — something a written research summary, prepared without that city in mind, cannot.