Research with integrity – What you need to know about generative AI

by Cancer Research UK | Analysis

7 November 2023



Almost exactly a year on from ChatGPT being made publicly available, the next in our series focussing on research integrity finds Dr Andrew Porter deep in the weeds of generative AI. Here he talks us through its potential impact on research integrity and gives some tips on how we might keep up with the incredible pace of AI tools, regulations, guidance and policy…

This entry is part 10 of 12 in the series Research Integrity


I keep having ‘am I living in the future now?’ moments in relation to large language models like ChatGPT.

In the last few weeks, I’ve been offered the option to ‘rephrase my email to sound more professional’, generate text to make my Eventbrite page more appealing, and keep my Canva content ‘on brand’. From producing whole research papers and writing peer reviews to helping with computer coding, you can see why they are also attracting scientists’ attention. In the recent Nature Postdoc Survey, 31% of employed respondents were using Generative AI.

Generative AI tools are trained on vast amounts of written text so that they ‘learn’ patterns in how words and phrases are used. They respond to prompts – inputs written by the user in normal language – to generate streams of coherent text. (For an explainer of how they work, as well as some of their limitations, I recommend this article by UKRIO’s Research Integrity Manager Matt Hodgkinson and this helpful Financial Times piece for a visual guide).
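For the curious, here is a minimal sketch of what ‘responding to a prompt’ looks like in practice. It assumes the openai Python package and an API key; the model name is only an example, and any comparable service or locally installed model would behave in much the same way.

```python
# Minimal, illustrative sketch of prompting a large language model via an API.
# Assumes the openai Python package (v1+) and an API key in the OPENAI_API_KEY
# environment variable; the model name below is an example, not a recommendation.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = "Rewrite this sentence in a more formal register: 'We ran the assay twice and it mostly worked.'"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# The reply is text predicted from patterns in the training data; nothing here
# checks that it is accurate, so the output still needs human judgement.
print(response.choices[0].message.content)
```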

They hold the alluring promise of discovery – perhaps finding a result that has been overlooked, a hidden pattern or a novel research question. They also appear to be a solution to dealing with many of the repetitive and seemingly arbitrary tasks connected to research careers. Need a better lay summary for your grant application, a letter of recommendation or some extra copy for your researcher profile? Want to turn your bullet point list into a 5-year research engagement proposal or a data management plan? Trying to find yet another way to describe your research impact, write minutes for a meeting or compose a mass email to the rest of your department?

With so many demands on researchers’ time at all career levels, it is entirely understandable that a system to speed up these tasks would be welcome, especially for those who struggle with writing in English. Whether you’re already using generative AI in your work, testing out its potential, or just encountering it as companies like Microsoft and Google add it to their software, there are aspects of this technology which raise questions for researchers.

New guidance for researchers

In response, Cancer Research UK have just published guidance for the use of generative AI in research, alongside a new policy for its use in relation to funding applications. Broadly, they highlight two areas of potential concern for researchers: considerations about what goes into generative AI programs, and concerns around what comes back out.

The major concern around inputs is data protection. Many platforms use inputs to train their software, and so this new policy prevents CRUK-funded researchers from using a generative AI tool to assess a funding application, because uploading any content from that application would be a breach of confidentiality. A joint funding statement has also been published by The Research Funders Policy Group, so all major funders of biomedical research in the UK are covered by these same requirements. I think this is a sensible move which provides clarity and consistency for researchers. It protects them from inadvertently breaking confidence and falling foul of data protection legislation. CRUK’s wider guidance also states that generative AI should not be used to handle any sort of sensitive, confidential or personal data, as this would be contrary to data protection legislation.

AI companies are responding to these criticisms; the commercial version of ChatGPT launched with additional data privacy settings to prevent users’ inputs from being used to train the AI. Installing software locally, so that data is held and processed appropriately by the user, might be another approach. While these adaptations will need to be thoroughly investigated by information governance and copyright experts, it may be that confidentiality and privacy objections relating to inputs can be overcome in time.


GIGO – garbage in, garbage out

Concerns over the outputs of generative AI may be harder to address.

Some of these relate to the generated content itself – is it accurate, up-to-date and meaningful?  Generative AI tools can produce coherent, persuasive but fundamentally wrong information because they don’t understand (in a human sense) what they are generating. They generally lack a framework for assessing accuracy or offering qualifiers on the truthfulness of their answers. Researchers should exercise caution in the use of these outputs and take responsibility for their accuracy.

There are also issues around copyright, fair use and plagiarism. Companies have been scraping the internet and other digital sources for all the content they can find. Because the tools provide no means of crediting the ideas of others that surface in their outputs, researchers who use those outputs risk committing plagiarism. Without a radical redesign of the tools it is hard to see how this can be overcome (although people are working on this problem).

Other ethical issues surround the use of these programs. The CRUK guidance highlights the environmental costs of running them, as they require vast, energy-hungry data centres to operate. Further questions concern the work of those who moderate and train these generative AI tools, who are often paid very little for the work they do. (You can find more examination of the practical and ethical issues here.)

A question that arises is whether the benefits of generative AI outweigh these costs. It seems like it is in the interests of those producing the tools to diminish these concerns, while those who want to engage with these ethical questions may feel like they are swimming against the tide.

Using Generative AI with integrity

So how can researchers keep up with this fast-changing landscape of tools, regulations, guidance and policy? While the generative AI tools may be new – and there are entirely novel challenges associated with them – I believe we can still apply good research principles to address their use.

The Concordat to Support Research Integrity is structured on five principles underpinning good research practice: honesty; rigour; transparency and open communication; care and respect; and accountability. These principles can provide a framework for thinking about issues that may seem distant, complex, or completely new, including Generative AI.

By declaring when and where an AI has been used and for what purposes, researchers can demonstrate honesty in their approach. Some organisations – including the World Conference on Research Integrity – are already adding declarations on AI use to their submission forms.  Researchers need to honestly assess the costs and benefits of using these tools, otherwise they are in danger of fooling themselves over the value of these outputs and contributing to the hype and competition around the use of Generative AI.

By developing a greater understanding of how these tools work, researchers can apply academic rigour and use them appropriately for each task. Rigorous checking of outputs can help avoid errors in the use of the content they generate. This is in line with guidance from the Russell Group about engaging with AI tools in an appropriate, scholarly way.

We are all learning about Generative AI, so sharing specific details of how they have been used – the prompts, the edits, the failures – is in line with open research practices. Being open and transparent about the processes of using AI, communicating learnings and sharing findings, can help the whole field advance.
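As an illustration only, such a record could be as simple as a small script that appends each prompt and output to a log file; the file name and fields below are hypothetical rather than any prescribed format.

```python
# Illustrative sketch of logging generative AI use for transparency.
# The file name and field names are hypothetical, not a prescribed standard.
import datetime
import json


def log_ai_use(prompt, output, tool, purpose, logfile="ai_use_log.jsonl"):
    """Append one record of a generative AI interaction to a JSON Lines file."""
    record = {
        "timestamp": datetime.datetime.now().isoformat(),
        "tool": tool,              # e.g. "ChatGPT"
        "purpose": purpose,        # e.g. "draft lay summary for grant application"
        "prompt": prompt,
        "output": output,
        "edited_by_human": True,   # flag that the text was checked and revised
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example usage:
# log_ai_use("Summarise this abstract for a lay audience: ...",
#            "<model output here>", tool="ChatGPT", purpose="lay summary draft")
```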

Researchers who take seriously the concerns of those whose intellectual property may have been compromised by its use in an AI’s training set, and who wish to avoid filling the scientific literature with error or possible plagiarism, are demonstrating care and respect for those conducting and participating in research. Behind the scenes of many AI platforms are workers who are often paid very small amounts of money to moderate, check and train these models. By engaging with the debate about the ethical and environmental issues behind generative AI, and making moral choices about which organisations to support and which tools to use, researchers can make sure that they and their funding are being used positively.

And by taking ownership of the final content produced using generative AI – checking for error, bias, originality and consistency – researchers can show accountability for its use. By ensuring that inputs are handled appropriately, researchers can show they are accountable and working in line with the relevant frameworks and guidance of their funders and institutions. And by being prepared to ask for help if they encounter issues – and through funders and institutions being alert to the needs and the potential for error in this space, and handling issues sensitively – researchers can now help develop better frameworks for the future.

Generative AI is something we are all likely to be using, so learning about it now and establishing good principles is important. How much do we think about the technology behind search engines, route planners, web browsers and email, for instance, compared with when they first became available? Perhaps this learning process is particularly important for researchers. Those working long hours in high-pressure environments, juggling multiple commitments and the pressure to publish and secure funding, may be particularly drawn to the time-saving elements of this software.

Take a typical researcher writing a grant application. They understandably turn to Generative AI to help write more persuasive content for their application, feeding in examples of their own writing, refining their research questions, drawing data from previous papers.

From their individual perspective this is a perfectly sensible use of a new tool, and early adopters of these techniques might well get a boost. But once these tools become commonplace (which arguably they already are), will this dilute the benefits as AI drives a homogenisation of written language, a reversion to the mean? Or perhaps it will produce an AI arms race amongst researchers striving to write better prompts to make their applications more likely to succeed, in turn creating AI haves and have-nots, with inequalities based on institutional support, access to training and the funds to access the latest AI tools?

Consider the others engaging with this imaginary grant application. If privacy issues can be overcome, might not overworked grant reviewers receiving verbose AI-assisted and AI-generated content use AI to create summaries and digests, and help expand bullet-pointed notes into full reviewer reports? Might not committee members value having AI assistants to help compare applications and even assist in framing clearer questions for applicants (which applicants might anyway be able to predict using AI trained on their grant, profiles of panel members, and the content of the awarding body website)? Are these sensible uses of technology to enhance writing and synthesis skills, or do they begin to erode the human agency and community that is part of the social fabric of research life?

ChatGPT’s Razor

Just as universities and schools are wrestling with the potential for students to use ChatGPT shortcuts in assessments, so funding bodies, universities and others who are asking for content from researchers should consider the impact of generative AI on the tasks they are setting.

Perhaps one benefit of generative AI can be obtained without even switching on a computer. Instead, generative AI can power a thought experiment.

If Generative AI can produce an answer to a question on a form as well as a human being, what is the value of that question? Is it possible that we don’t actually need to ask the question anymore?  Or do we need a different way to assess the underlying goal behind the question?

I propose that this ‘ChatGPT Razor’ could be a useful tool – one that has no environmental cost, breaches of privacy or risk of plagiarism – for identifying and trimming unnecessary bureaucracy.  Such a razor might help reduce the workload on researchers, free up reviewers’ time and help those making decisions focus on the key information, ultimately improving research culture, and relieving some of the pressure to use AI tools in the first place.

How to use Gen AI like a scientist

Author

Dr Andrew Porter

Andrew is Research Integrity and Training Adviser at the Cancer Research UK Manchester Institute.
