At the top of every scientific paper, in small, neat typography, is a roll call of the researchers who helped make the study happen.
This can be just a few researchers, or – for larger collaborations – more than 100. But a recent Cancer Research UK study takes this to a new level, with almost 100,000 extra people to thank for the findings. And those people are our Citizen Scientists.
Three years ago we launched the Cell Slider project. And like a scientific version of spot the difference, members of the public were asked to identify breast cancer cells and gauge how brightly they were coloured by looking at images on the project’s website.
98,293 Citizen Scientists took up the challenge. And now the results are in.
It turns out that they did a pretty good job of spotting the cancer cells. In fact, when the data were averaged across all those who took part, the combined analysis from the volunteers was almost as accurate as a trained pathologist.
But what does this mean?
Well, it’s the first time we’ve shown the true power of the crowd in helping our researchers in this way. And it’s an important milestone for our Citizen Science work. But does it signal the end of professional pathologists?
The short answer is no.
Citizen Science isn’t designed to replace expertise; its aim is to share the workload of a particular research challenge.
And it all comes down to time.
“Asking pathologists to evaluate large numbers of tumours in the lab is a problem because it’s so time consuming,” says senior author on the study, Professor Paul Pharoah. “And an expert pathologist’s time is at a premium.”
So the project was set up based on the idea that the collective efforts of the public could free up this time. And by bringing as many people together as possible to look at the same problem, we hoped a reliable scientific result would emerge.
In the case of Cell Slider, the images the public analysed were samples of breast cancer tumours from previous studies. So these women had already been treated for their disease.
The images themselves could each contain a mixture of different-looking cells. Some might be cancer cells; others healthy breast cells; and some might be the supporting tissue surrounding them. The challenge for the Citizen Scientists, after a short tutorial when they signed up, was to spot the cancer cells.
The images were also coloured based on how much of an important molecule, called the oestrogen receptor, the cells produce.
Pathologists check its levels in patient samples to help decide what treatment a woman should receive. So the volunteers were also asked to gauge how bright this colouring was – if there was any there at all.
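To picture how many individual brightness ratings could be combined into a single call, here is a minimal sketch. It assumes a made-up 0–3 rating scale and a simple average – purely illustrative, not the project's actual scoring scheme:

```python
from statistics import mean

# Hypothetical brightness scale (illustrative only, not the
# project's real scheme): 0 = no staining, 1 = weak,
# 2 = moderate, 3 = strong.
LABELS = {0: "none", 1: "weak", 2: "moderate", 3: "strong"}

def consensus_intensity(ratings):
    """Average the volunteers' ratings and round to the nearest
    category on the 0-3 scale."""
    return LABELS[round(mean(ratings))]

# Four volunteers rate the same image: three say moderate, one strong.
print(consensus_intensity([2, 3, 2, 2]))  # prints "moderate"
```

The idea is simply that individual errors in either direction tend to cancel out once enough independent ratings are pooled.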
On the surface it may seem odd to ask the public to do the job of trained scientists. But it turns out that we are naturally suited to this type of analysis – even better than a complicated computer program.
“The task is fundamentally one of pattern recognition,” explains Pharoah. “And humans are good at this.”
“So I expected Citizen Scientists to do reasonably well, but I wasn’t sure how well.
“I was also interested to find out whether the public would engage with a project like this, which was the first biomedical science project to use Citizen Scientists on a large scale.”
So how did they do?
The Cell Slider site was loaded up with 180,172 images from 12,326 samples that had originally come from 6,378 breast tumours.
Some people checked, or ‘scored’ lots of these images; others only looked at a handful. But, collectively, once the project was finished, our researchers were left with almost two million scores to help test how accurate the public had been.
To do this the researchers looked at the volunteers’ responses when asked whether they could see cancer cells in the images or not. And, crucially, when the team checked these scores against 3,000 of the samples that had also been analysed by a pathologist, they found the public gave the same answer as the pathologist in nine out of 10 cases.
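One simple way to picture this kind of crowd aggregation – a sketch of a per-sample majority vote checked against the pathologist's call, not the study's actual statistical method – looks like this:

```python
def consensus(scores):
    """Majority vote: call a sample 'cancer' if more than half of
    its volunteer scores (1 = cancer seen, 0 = not seen) say so."""
    return sum(scores) > len(scores) / 2

def concordance(volunteer_scores, pathologist_labels):
    """Fraction of samples where the crowd consensus matches the
    pathologist's call."""
    agree = sum(
        consensus(volunteer_scores[s]) == pathologist_labels[s]
        for s in pathologist_labels
    )
    return agree / len(pathologist_labels)

# Toy data: per-sample lists of volunteer yes/no (1/0) calls,
# plus the pathologist's verdict for the same samples.
volunteer_scores = {
    "sample_a": [1, 1, 0, 1, 1],   # most volunteers saw cancer cells
    "sample_b": [0, 0, 1, 0],      # most did not
    "sample_c": [1, 0, 1, 1],
}
pathologist_labels = {"sample_a": True, "sample_b": False, "sample_c": False}

print(round(concordance(volunteer_scores, pathologist_labels), 2))  # prints 0.67
```

In the real study each image attracted many more scores than this, which is what lets the averaged answer approach the pathologist's accuracy.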
According to Pharoah, this is a good level of accuracy, and shows just how powerful this approach could be in helping our scientists accelerate the progress of their research.
“We are asking Citizen Scientists to do the same thing as a pathologist,” he says. “But the availability of tens or hundreds of thousands of Citizen Scientists ought to make it possible to do research on a much larger scale than is possible when relying on expert pathologists.”
But there is always room for improvement.
“While the Citizen Scientists were accurate, they weren’t as good at classifying some tumours,” according to Pharoah. And because so many people got involved with the project, it’s actually taken a while to reach this conclusion, something he acknowledges as “frustrating”.
But that’s how science works. And this is real science. So the methods need to be tweaked and improved, which Pharoah, along with the other scientists involved in the project, is keen to do next.
“We need to do further work to find out if we can improve the performance of the Citizen Scientists by changing the brief training that they are given,” he says. And that’s exactly what our latest project focuses on.
The results from Cell Slider show that the public can accurately spot cancer cells in pathology samples, which is great. But what if each volunteer could do more?
One of the things our Citizen Scientists struggled with in Cell Slider was distinguishing between the different cells in the images. This meant that in some cases they overestimated the number of cancer cells.
So would it be possible to boost accuracy by giving smaller groups of people more training to analyse more complicated samples, really saving our researchers’ time? That’s the question our new ‘Trailblazer’ project seeks to answer.
We’ll be working with smaller groups of volunteers to see how different types of training and tutorials could help improve the accuracy of their scores.
Then, once these tutorials are shown to be effective, they can be rolled out to larger groups of people. And this is something Pharoah believes will be really important for getting the most out of Citizen Scientists.
“The scoring of pathology images by the public should be tested in the same way that one would test any new method,” he says.
“First, try it out on a reasonable number to get an idea of how well it works. And then, depending on this first answer, either carry on with a larger number of people to get a definitive answer, or tweak the method to get improvements.”
In addition to the breast cancer samples analysed through Cell Slider, we’ve also started looking at lung, bladder and oesophageal cancer samples with researchers from across the UK. And we’re planning to extend this to other cancers too.
But we need your help.
If you fancy joining our Citizen Science community and taking part in the Trailblazer project then drop us an email: [email protected]
Your lab coat awaits.
- Find out more about our other Citizen Science projects on our website
Candido dos Reis, F., et al. (2015). Crowdsourcing the general public for large scale molecular pathology studies in cancer. EBioMedicine, 2(7), 681–689. DOI: 10.1016/j.ebiom.2015.05.009