Integrating AI
April 29, 2025

As AI becomes a more integrated part of our daily lives, it is vital that we consider all stakeholder perspectives so we can better foster collaboration for effective AI integration in scientific publishing. In this episode we explore AI's transformative impact on the creation and dissemination of scientific content, addressing the real-world challenges and diverse perspectives needed to harness its full potential. By considering the opportunities and barriers (real and perceived) to AI adoption, we can see how these challenges vary among stakeholders from the pharma, publisher, and patient advocate perspectives. Joining us for this conversation are Stephen Griffiths, Publications Head at GSK; Stephanie Preuss, Director of Content Innovation at Springer Nature; and Stephen Rowley, Patient Advocate and Director at Artension.

To join ISMPP, visit our website at https://www.ismpp.org/ 

This episode is generously sponsored by Avalere Health.

Downloadable transcript here



Rob: Whether we're ready for it or not, AI is revolutionizing how we approach research, communication, and even decision making in patient care. But as the technology rapidly evolves, how well are we understanding the implications of AI? What challenges do we face and what opportunities does it bring to our industry?

This is In Plain Cite, a podcast exploring the biggest questions and trends facing medical publication and communication professionals. I'm your host, Rob Matheis, president and CEO of ISMPP. Today's episode is generously sponsored by Avalere Health. 

In this episode, we're diving into some of the questions you're probably grappling with right now—what AI means for your career in medical publications, its potential to enhance processes. And how can we navigate the ethical challenges it raises? To help us unpack these questions, we're joined by Stephen Griffiths, Publications Head at GSK; Stephanie Preuss, Director of Content Innovation at Springer Nature; and Stephen Rowley, Patient Advocate and Director at Artension.

Rob: I thought maybe we'd start right from the very beginning for our listeners and just ask, what is the role of AI, um, generative AI, in medical publishing these days?

Stephanie: I think AI plays an increasingly important role all across the publication process. Researchers started to use it in their free time, but now we see more and more activities related to the publication process as well. They use it to discover content, to read content, to look for references, but also more and more in the writing process, for copyediting and for generating different kinds of outputs: sometimes the manuscript text itself, or smaller pieces to communicate their research. So I think it's all over the workflow, actually.

And then of course there is a publisher's perspective where we, on the other hand, use AI to do integrity checks, for example, scanning images, journal and review recommendations. I guess it's all over the publication process already. 

Rob: Um, do you all feel that medical publishing is made more efficient overall by the use of generative AI?

Stephanie: I think there might be an initial stage where it takes more work, because you really need to make sure that everything's correct. You need to review every output that AI is generating. So in the beginning you may even have a little bit of additional effort, but I think once a stable process is established, once that human-machine handshake and collaboration between the AI and the human is in place, you can really see those efficiency gains coming through.

So I guess the role simply changes. People are sometimes less in an authoring role and more in a reviewer role nowadays, but I think the efficiency increases will still need some time to really come through. A lot of times you see fancy things out there, but with implementation and doing things in practice, I think there's sometimes still a little bit of a way to go.

Rob: I'm wondering from your points of view, is there a best case use case for AI? If someone hadn't started using it yet and was like, well, now, my boss wants me to use AI and I really gotta do something about this, it’s in my goals this year. What use case would you recommend that perhaps they start with?

Stephanie: For me, it would be everything around content conversion; that's where it's strongest. If you look at writing a paper, for example, that's something still to be explored. But what AI is really good at is summarizing things, translating things from one language to another, putting something in simpler words.

I mean, we've seen some interesting posters and presentations at the last ISMPP conference where people were comparing AI summaries, plain language summaries, and human-written summaries, and a lot of them didn't perform that badly. So I think what AI can really do well is translate content into really easy language. That's something a lot of authors and researchers struggle with, because they're so into the topic that for them it's crystal clear what it all means, and they can't really relate to the perspective of a patient, or of somebody seeing it for the first time from outside the field. But for authors, it seems to me, it's really easy to do a scientific correction. So you give them a plain language summary and ask, is this still factually correct? That's a faster and easier task for some people than actually translating something into easy language.

Stephen R.: Yeah, and I think the translation aspect, from a patient perspective, is really important.

So I am involved in an organization called Digestive Cancers Europe, and we represent people with cancers across the continent. For a lot of people, being able to find the research that might help them is a significant challenge, you know, if your first language isn't English. So being able to get to the source information in your own language is absolutely a game changer for them.

Stephen G.: And another example we use at my company is responding to queries and questions around policy because, as you know, it's a very complex environment and we've got lots of SOPs. Even if, for example, you had a question on authorship, you may not know what's in ICMJE or GPP, but you can ask your AI copilot to retrieve the information, even draft a response. That's something we are using quite frequently, and it's an easy thing that someone can start doing straight away, rather than, you know, trying to write a whole paper based on some huge report. That's something that's happening day to day and can bring efficiencies for them.

Rob: It sounds like the most popular use cases are around synthesis, taking large amounts of complex information and making it simpler. When we think about that, though, it makes me wonder, do there need to be any safeguards? Is it time to turn this conversation over to, you mentioned the word SOP, Stephen, are there SOPs that need to be developed? What are your thoughts on that?

Stephen G.: Yes. I mean, sometimes I've queried an SOP and got a strange response. So you always have to have the human in the loop and check where it came from. Even ask, you know, where are you getting that information? How did you come up with that response?

Stephen R.: And is this actually based on data that somebody has checked really does say that?

I think you need to be clear about transparency. You know, is this AI generated, or is this what somebody really thinks? There's also the scale. A big problem for patients is that we get content pushed at us all the time. If somebody has cancer, then from the moment they're diagnosed, when they try to search for something about their cancer, the engines behind Google and the like push them stuff to do with cancer. And there are people behind that who are pushing things, often cures. Some of these are presented in a scientific manner, even looking like real scientific papers, but they're fake scientific papers. Or they might push a genuine scientific paper or article that's based on only one study. But people will take that as proof that this works. They've got no way of knowing what's real or not. And when they look at genuine papers and find that a particular thing was successful, they don't really see the picture of whether it was successful in one study or in a hundred studies.

Stephanie: I think there's also a role for the publisher to play, because this human-machine handshake and quality control is very important, and of course patients can't do it themselves; if they could read the full article and understand it in the first place, then we wouldn't need the plain language summary. So I think it's up to the authors and probably to the publishers. Open access is becoming more and more important, so everybody has access to the paper, but not everybody can understand what's actually in there. So the idea is to work together with authors and AI to create that kind of human-machine interaction where we have a piece that can be produced at scale, very efficiently, without causing too much effort on the author side, but that is author approved and fact checked, so people know they can rely on it and it's actually true information, rather than people downloading open access papers, uploading them into some random AI tool, and getting a summary that might or might not be correct.

Rob: So we're getting into a really interesting area and I was waiting for someone to use the word transparency and Stephen you did just a moment ago. How transparent do we feel we need to be when we're using AI in medical publishing? What's the threshold, the litmus test for that? 

Stephanie: I would say let's overdo it in the beginning, and then we will probably find a balance. I guess at some point using AI will be so normal that it probably won't be indicated everywhere. But so far, at least our perspective is that if our staff is using it for any of the editorial pieces that are out there, we indicate it together with the piece, and we clearly say what the role of the AI was and what the role of the human was. Like: this was a hand-selected paper; it was first summarized by AI, and then a human did the fact checking and polishing and brought it up to our standards, something like that. And we put that with every piece at the moment. So I think transparency is key. And then, similar to the AI-based spell check or grammar check that everybody's using now and of course nobody's declaring, there might be a time when we as a community all decide together that it's not necessary for certain things anymore.

Stephen G.: In the development of a publication, for example, there are lots of steps, lots of comments, lots of rounds and drafts. We don't upload the full story when we submit something. So we've got to be practical as well, and I think that's a good way: how have you used it? Which sections? And then the safeguards, right? The authors are accountable for everything that's been done there, and the data and the references are checked properly. I think that's the best way we can disclose it.

Again, I don't think we can upload all the series of prompts and things that we've used, but if we indicate where it's been used, and maybe which tool, I think that's a good way to start. And I think this is something journals or groups of journals can help people with when they submit, by giving a few guidelines on the key things that need to be in there, because that's something we get quite a lot of questions about. And I think there is still some variation between different journals.

Stephen R.: I run a large peer support group, and we've got over 6,000 members. So I see what people are doing every day, and now the first thing they do is Google, where you've got the AI Overview, which has taken over from going to the top article that came up in the results. They're now going to the AI Overview, so they read that, and you've got the "Learn More" panel beside it, which I presume, I don't know, I presume reflects the main sources that the AI went to. And of course you can go to those, and people do, but they're not really ranked in any way, so you've got no understanding of which had more influence. I think that's an area where there's an opportunity to have some kind of quality measurement. You could use AI to do that quality measurement. That would be valuable.

Stephanie: And I think AI is already evolving in that direction, right? 

Stephen R.: Mm-hmm. 

Stephanie: If you look at the "Deep Research" mode of OpenAI, for example, or DeepSeek, you can already see that there is something like inline citation, similar to what you would see in a scientific paper. In the beginning, I think it was just querying the internet and spitting out a response. But now, probably because of the demand, the technology is evolving in a way where, similar to a paper, for every claim and every sentence you have a [inaudible] reference. Sometimes you can even click through and see what paragraph out of that webpage or that paper was actually used to generate those sentences.

So I think as more and more people are looking for those kinds of features, the technology will probably also evolve in the right direction to give us a little bit more security and make it easier to trace back where the information actually comes from.

Rob: I want to ask a question on behalf of our listeners. I think we can all probably agree that if you're using AI to draft an article, it's obvious that you'd have to be transparent in that regard. But what about some of the less obvious things that maybe our listeners are thinking about? Like, if you're using AI to correspond with authors, drafting emails back and forth, or for literature synthesis. Steph, originally you said we should be over-disclosing, and I don't necessarily disagree with you, but I'm just wondering how far we should take it with those particular use cases as examples?

Stephanie: I think even more important than the transparency is the accountability. So if you send an email, you are fully accountable for what's in that email; if you have read it and made sure that you're really responding and it's not automated, then for an email, at least, my perspective would be that it's fine not to declare it.

People are also sometimes using it just for copyediting, for example, if they're not native speakers. I do that sometimes: if I have something complex and long to say, I just say, polish this for me. I think in those cases, if you read through it afterwards and make sure everything is still correct, you don't need to declare it. In an official publication, I would say do it anyway. Put it in the methods section or in the acknowledgements section and say, I have used AI for final language polishing, or whatever. But the most important piece is who's accountable for the content, and I think for a publication and for all scientific pieces, that needs to be very clear.

Rob: So do you think that the use of disclosure or disclosing generative AI gives the perception of lower quality to readers? 

Stephen G.: I mean, we are worried, right? That we're going to submit something, disclose that it's gen AI to one of the journals, or top-tier journals, and they might think it's of lower quality, that there might be more of a chance of rejection. Actually, there have been a few studies where people blindly reviewed plain language summaries or abstracts and thought the lower-quality ones were gen AI generated, when in fact some of them were human generated. Humans make mistakes as well, but it's that bias that's going to be difficult to get rid of. And I think people who lean into it more, or use it more, are maybe less biased. So it's a really difficult one.

Stephen R.: One reason why I might be suspicious of it is data bias, knowing where that data came from. Of course there is always data bias with a researcher, but I'd probably be more worried about data manipulation.

Stephanie: I think it's also a little bit of an arms race, right? Paper [inaudible] gets easier; data manipulation and image manipulation get easier. But then again, there are AI tools to detect those things. So it's a balancing act, and I think there are different perceptions of it. There's still a population of people who will think something is lower quality if it is AI generated. But we also see more and more that it actually helps researchers translate their findings into a good publication, especially if they're not native speakers. It's almost democratizing the publication process a little bit, because there are differences in quality if you look at language and how people are able to present their results. I'm not talking about research output quality, just the quality of the paper. AI can help researchers on that end of the spectrum move up and present their research in a way that lets people really see the results, instead of just seeing bad language and assuming the research is bad as well. Somebody might be a really good biologist, but just not such a good English speaker.

Rob: It's an interesting perspective because a lot of times researchers are very good, as you're saying, at their science, but maybe don't have the corresponding skill to be able to communicate it effectively. And hopefully we'll come to a day and age where AI will help to bridge that gap. But it does come back to doing it effectively. And also with that transparency we're talking about. 

Stephen R.: I think there's also an element there of widening the audience. One of the issues is that scientific research involves stakeholders of many different kinds, yet often it's only written in the language of the scientist, and the key stakeholder in medical research is the patient. So I feel very strongly that research should be accessible to the patient, written in such a way that it can be understood by the patient, even very complex things. There are actually patients on the panels that review funding proposals; they need to be able to understand the proposal to decide whether to fund it or not, and that's got to be reflected right the way through into the output. Is the output in a format which is understandable by the stakeholders for whom this research is important? Otherwise, the research might never get implemented in any way. It might be good research, but if the right people aren't listening to it, it's not going to happen.

Stephen G.: I guess one way we prompt is, you know, you can say write it in this style, or I am a…, and assign a persona, and then it writes it that way. You could say, I am this stakeholder, please explain this to me. But then the source for that, if it's a paper, is going to be limited. So could there be a future where there's a larger data piece behind it, something quite big, and you can say, describe it to me as a scientist or a medic or a patient? Again, that's something that may come in the AI world, and it could be done so rapidly and so easily with large amounts of data that it could be possible.

Rob: So a really interesting discussion here. So if I made the bold statement that the patients were gonna get better care by physicians and other healthcare practitioners, would that be true based on your perspectives? Due to generative AI?

Stephen R.: Possibly. I don't think you can say definitely yes right now.

Stephanie: I think so. If we help everybody to stay on top of things, if we help them to understand what's out there, the patients themselves, to inform themselves, but also the medical professionals, I think it will be possible.

The other thing is that I think AI can help put things into practice faster, to get results out of the lab and into the real world faster, and I think that's a big advantage for everyone. It will accelerate discovery; it will speed up the process. And for somebody who has cancer, for example, that can be mission critical. I think that's another way AI could have a positive impact, but of course we need to make sure that people get the right information. As mentioned, producing something that looks like science but isn't science has become easier than ever, and I think we really need to make sure that there are guardrails and safety measures against misinformation, because producing misinformation has never been as easy as it is today with AI technology.

Stephen G.: And speed is a really good topic as well, because we do know it takes a very long time to draft papers, comment on papers, get them submitted, and get them published. During the COVID pandemic, there were quite a few preprints coming in, and that was getting out information that was needed at the time. But then there were some that were retracted later, or amended. So there's a balance between getting information to patients quickly and helping development while trying to keep that safeguard on the data. It may be difficult to get completely right, but I think the advantages are clear: we want to get data out to inform patients and doctors faster, and this is definitely an advantage of gen AI from a content development perspective.

Stephanie: The other thing is, if you need less time as a researcher for publication development, you have more time that you can actually spend doing the thing you love: doing research, being in the lab, producing those new results to be published.

The same goes for reviewing, for example. If at some point AI can support reviewers by taking away some of the quality checks and the tedious work that people have to do to process a manuscript and push it forward for review, I think that would be a great opportunity to free up people's time to actually do more research.

Rob: So the jury is still out as to whether patients will ultimately benefit from generative AI in medical publishing, but what I hear you all saying is that we have a lot of hope. So that's very positive.

Rob: Well, that's us for today. Thank you all for listening. Please take a minute to subscribe to In Plain Cite on your favorite podcast app. Share with your colleagues and rate our show highly if you like what you heard today.

In Plain Cite is a production of ISMPP, the International Society for Medical Publication Professionals. This episode is generously sponsored by Avalere Health.

Our production partner is CitizenRacecar. Our producer and editor is Hajar Eldaas. Post production by Alex Brouwer. Publication and promotion by Candice Chantalou.

To join ISMPP today, go to ismpp.org. Becoming a member means you can participate in value packed webinars and receive instant access to exclusive tools and resources. If you're interested, just go to ismpp.org, that's I S M P P dot org, to learn more.

© 2025 CitizenRacecar