Any questions? Crowdsourcing research priorities for greater impact

Good research is as much about framing good questions as ferreting out answers. But if the answers are meant to be applied, what counts as a good question is not always easy to agree on. Academics, practitioners, and policy-makers tend to have different ideas.

Solution? Get them together to thrash out an agreed agenda. A recent example in palaeoecology shows how a carefully orchestrated, bottom-up discussion can help forge consensus in a complex, inherently multidisciplinary area. A paper just published in the Journal of Ecology details 50 questions that those who delve into palaeoecological records like fossil pollen or seeds could try to answer. They represent research priorities agreed at a meeting in Oxford at the end of 2012, sponsored by the Past Global Changes project, which whittled down a list of 900 possible questions posed by 130 different people and research groups.

The meeting followed a procedure that has now been tried out in a range of fields, after its prototype was established in 2005 by Professor Bill Sutherland of Cambridge University. He began with an exercise designed to produce the 100 ecological research questions of most interest to policy-makers. A small meeting of a few dozen people worked on distilling more than 1,000 candidate questions compiled beforehand. As in subsequent efforts, the questions were sifted by the organisers, then gradually reduced to an agreed set of priorities in several rounds of voting and discussion. In Sutherland’s pioneering effort, which was aimed mainly at policy-relevance, only the policy-makers present voted. Subsequent exercises – which have covered areas such as conservation, agriculture and food, plant science, mineralogy, and forestry – have usually allowed everyone to vote, but still focussed on reconciling the sometimes different interests of knowledge producers and users.

Sutherland didn't know what to expect when they had finished. “When we put the first of these question prioritising exercises together, we had no idea what to do with it. It was completely different from everything else. Was it publishable?” It turned out that it was, and the result attracted lots of attention from researchers, the media and funding agencies. He reckons that it and subsequent exercises (e.g. 100 Questions to Conserve Global Biodiversity) have been influential in their fields.

The meetings are intense, the pruning of lists ruthless. Jacqueline Gill of the University of Maine, who blogged about the palaeoecology meeting, said the process was “fascinating, often brutal, always challenging, and – I think – really, really revealing”. It was also, she added, pretty fun.

Keys to success, according to Sutherland, include narrowing down the topic to make it manageable. Individual questions also have to be kept in bounds, and you have to watch out for group discussions’ tendency to merge or expand questions, which then command more votes. One answer to that is to link questions to the size of effort that might be needed to answer them – and go for those which, say, might be tackled by a reasonably sized research group.

The interaction between producers and users, especially policy-makers, is vital for these discussions to bear fruit, and relates to Future Earth’s ambitions to encourage ‘co-design’ of research. “Typically, the academics say: that’s not a research question. The policy-makers say: that question is not interesting or useful”, says Sutherland. The trick is to look for overlap, and find what is both researchable and useful – that becomes the core question that drives the exercise. Canvassing broadly for questions and making the winnowing process transparent, ideally by sharing rounds of voting on the web and inviting comment, both add legitimacy to the results.

The general approach is adaptable to other topics and disciplines. And it has already been used to generate a list of questions to address on the causes and prevention of poverty and even – turning it inward to the area where it is typically applied – to questions in science policy research. The poverty-related exercise, in the UK, was a first foray into social research and proved just as productive as its predecessors. Moreover, according to Sutherland, “it was as technical and evidence-based as any I’ve done. There was a really intense, high-level debate about what you need to understand”.

So there may be enough examples now to conclude that we have a widely applicable approach – iterative, carefully monitored, democratic – to framing lists of priority questions that appeal to researchers and are useful to policy-makers. It certainly seems potentially relevant to a large proportion of the complex, multidisciplinary concerns that fall under the banner of Future Earth. Some further exercises that will be relevant here are already planned or under way. For example, the Scientific Committee on Antarctic Research recently completed the second round of submissions for its “horizon scanning” initiative to identify 100 priority questions for research – looking ahead to 2035. The first round threw up 751 possible questions, and the second is still being processed. After reviewing all the suggestions, the final list will be distilled at an invitation-only workshop in New Zealand in April, whose participants have been selected partly by nomination from their research colleagues. Like the poverty inquiry, this one is highly multidisciplinary and includes questions on governance, for example, alongside ones on ocean-atmosphere interaction or ecology.

The same will be true of another potentially significant initiative about to get under way. February 17th sees the launch of an effort to agree on the 100 most important questions in international development. This project, which is co-ordinated by the Sheffield Institute for International Development in the UK, is closely linked to the current global conversation aimed at formulating a set of Sustainable Development Goals to succeed the Millennium Development Goals in 2015. The organisers are calling for contributions from NGOs, academics, think tanks, and governmental and international organisations. This one is aimed at policy-makers as well as researchers and needs to get a move on to have any influence on the SDG discussion. The online consultation will be followed by the usual workshop.

Questions are invited that relate to any of the eleven themes identified by the global consultation on SDGs led by the United Nations, The World We Want, though questioners can suggest a new theme if they wish. As usual, they have to be framed in a way that could lead to actual research projects. The criteria in this case are that questions:

  • Must be answerable through a realistic research design
  • Must be of a spatial and temporal scope that could reasonably be addressed by a research team
  • Must not be formulated as a general topic area
  • Must not be answerable with ‘it all depends’
  • Should not be answerable by ‘yes’ or ‘no’, except when interrogating a precise statement (e.g. ‘does the earth go round the sun?’)
  • If related to impact and interventions, must contain a subject, an intervention, and a measurable outcome

One hundred questions like that could keep a lot of researchers busy on world-improving work for a good while.


Further reading

A Collaboratively-Derived Science-Policy Research Agenda

Scientists devise list of potential threats to UK