
Research with Integrity – the madness of short methods

by Cancer Research UK | Analysis

7 December 2022

2 comments

The next in our series focussing on research integrity finds Dr Andrew Porter exploring the dangers of publishing with a limited methods section, asking how we can reverse this trend and, incredibly, taking a lesson from the Great British Bake Off…

This entry is part 6 of 14 in the series Research Integrity

This may strike you as an odd way to start a column about research integrity – but I think there is something interesting to learn about communication of research methods from The Great British Bake Off…

The competition features a round called the ‘technical challenge’. The contestants are given all the ingredients they need to produce a classic bakery product, with the twist that many of the steps have been removed from the recipe. A typical instruction is “make the dough”. It’s not how recipes normally work – removing this information tests the baker’s knowledge and instinct, and they often do amazingly well. The details are hidden for entertainment – in real life you’d be very frustrated if you picked up a recipe book so light on detail.

But that’s not dissimilar to how many scientists write the method sections of their research papers. Long, complex and technically challenging processes are frequently reduced to short statements. “Samples were prepared according to standard procedures” is a common one – or “Images were acquired using a confocal microscope”. These statements – while true – provide very little help to anyone looking either to understand what the authors have done or to attempt the same experiments themselves.

“To write out an experiment as it actually occurred almost feels like breaking a taboo.”

So why are method sections so often like this? Why are they the equivalent of “make the dough”?

I think there are several reasons, and it’s not generally because scientists are trying to hide anything.

Precedent and space

Journals often have strict guidelines in terms of space and word limits. Historically these are linked to printing processes and the cost of distributing and storing information in physical form. While publications have largely moved online, requirements for authors – such as maximum word limits for method sections – often remain. This hangover from an age of paper has a restrictive effect on how much detail can be added.

And that simple issue has an important knock-on effect. Scientists learn to be authors by reading what has gone before and acquainting themselves with the literature. It’s clear, then, that researchers pick up the unspoken message that method sections should be short and minimal on detail.

Authors often assume that others know their work as well as they do, so adding in lots of detail can feel like stating the obvious. And this is reinforced by reading traditionally short methods in the literature. To write out an experiment as it actually occurred almost feels like breaking a taboo.

Another fine mess

Another issue is that science is messy. Sometimes in a Bake Off technical challenge a process has gone so badly that the baker scraps their first attempt and starts again. It’s often the same in research, yet papers tend to present the perfect scenario in their methods and leave out the trial and error. Details of repeats are often excluded because they didn’t work out, and information revealing the degree of uncertainty in the analysis process is scant.

Again, if researchers don’t see these details in the literature, it’s a very reasonable assumption that they are not supposed to be there. Researchers do share this information, but usually in informal spaces like conferences or scientific meetings, through social media or by personal communication. However, these channels tend to be hidden from the wider scientific community.

This all has consequences. In the recent reports from the Cancer Reproducibility Project in eLife, a major barrier to even attempting to replicate a published experiment was the lack of detail in the original publications. This multi-year project attempted to repeat 193 experiments selected from 53 highly cited papers published between 2010 and 2012. By the end of the project, the team had only been able to repeat 50 experiments from 23 papers, and many could not even be started due to a lack of information. When approached, the original researchers were able, in around half of cases, to supply important – and sometimes essential – information not included in the published manuscript.

Method in the madness

How might we begin to address this issue? During the pre-submission checks at the CRUK Manchester Institute – very similar to those Catherine described last month at the CRUK Beatson Institute – I read the methods of each paper in detail, and encourage the authors to include more relevant information wherever possible.

What about page limits and space restrictions? There are other ways to include more detail, such as adding supplemental methods in an online-only part of the paper. Here the authors are free to add as much detail as they wish, and even if the journal doesn’t explicitly ask for it, this is a great way to increase the reproducibility and transparency of the work. It can even include videos of a process – capturing the steps for preparing a sample using a smartphone, for instance.

“If you read someone’s work, and there’s something you don’t understand about their process, ask them to add some clarification.”

If this isn’t an option, platforms such as Protocols.io can help. Full, detailed methods can be published on the platform, stored securely in a freely accessible way, and given a DOI (digital object identifier) that allows them to be cited. Versions of the protocol can be updated and tracked over time, and I’d argue that giving the method its own space adds visibility to your particular approach.

Finally, everyone can play a part in normalising the inclusion of longer, more detailed methods. If a colleague asks you to look over a manuscript, give them feedback on whether or not you’d be confident repeating their experiments. If you read someone’s work, and there’s something you don’t understand about their process, ask them to add some clarification.

And don’t be afraid to include the gory details of your method – there will undoubtedly be someone out there who will thank you for them!


Takeaway tips

  • If you’re reviewing a paper, include a look at the methods in your review (not everyone does) and ask whether the authors could include more information.
  • If space limits method detail, look for alternative models such as Protocols.io, Figshare or OSF to expand your methods.
  • Don’t be inhibited by reading short method sections in older papers – remember their authors had limited space and were themselves influenced by a trend towards short method writing. Instead, look for good examples as a guide and share these with your colleagues.
  • If you’re referencing another paper as the source of your method, ensure it contains sufficient detail. If not, add a brief summary of the process and then give all the relevant details and updates from your work.
  • A common request from the Cancer Reproducibility Team is for details of materials. Using a Research Resource Identifier (RRID) can be a good way of specifying the exact reagents used in your study.

Keep your eyes peeled for the next instalment of this series from Dr Catherine Winchester in Jan 2023.

Author:
Dr Andrew Porter is Research Integrity and Training Adviser at Cancer Research UK Manchester Institute.


    Comments

  • Andrew Porter
    4 January 2023

    Dear Marcel, thanks for your positive comments, and for the link to the survey, which I hadn’t seen but am now working my way through.

    That’s a very helpful point about how repositories can provide a better service than the SI materials – I hadn’t considered that, but it makes a lot of sense when thinking about all the data locked in PDFs and non-standardised Excel spreadsheets!

    I will certainly consider this – I’m actually working with one of our researchers who has a specific interest in the FAIR principles to run a workshop later this month and I will discuss this with her as a topic to include.

  • Marcel LaFlamme
    9 December 2022

    Great piece, Andrew, and thanks for calling attention to this important issue! A number of the barriers to sharing detailed methods information that you mention are borne out in the results of our recent survey:

    https://osf.io/preprints/metaarxiv/7jxav/

    One small point: from my perspective, sharing these details in supplemental information is less preferable than using a repository because material in SI often ends up being neither discoverable nor persistent.
