
Demystifying AI Adoption for Medical Writing: An Interview with Núria Negrão, Part 1


Núria Negrão is a medical writer and AI adoption strategist specializing in continuing medical education. She has presented at numerous national conferences including the American Medical Writers Association, Alliance for Continuing Education in the Health Professions, and Applied AI Summit, and has given workshops for numerous academic and professional organizations. She offers specialized AI consultancy, including strategic blueprints, advisory services, competency development, and tailored training for the medical education industry.


We sat down with her to discuss why she chose to pursue this unique career path, her passion for helping medical writing professionals navigate AI, and where she sees this technology taking the industry in the future.


What first drew you to generative AI? Did you immediately see its potential for the medical writing field, or were you interested in this technology on a more personal level?


My brother is a software developer, and he introduced me to some of the early generative AI models for things like image generation. Honestly though, I've always wanted to “talk” to my computer. When ChatGPT was released, I was primed and ready to start figuring out how to use it to my advantage. My first question was: Can I use this to write for me?


The answer was yes, but ChatGPT also had some pretty obvious limitations. It could edit paragraphs, add punctuation, and proofread, but it was missing other important features like track changes. I also became aware of the privacy and copyright concerns associated with generative AI, so I stopped using it that way and instead started using it for other tasks.


I remember I had this huge transcript from a meeting I attended, and as an experiment I asked ChatGPT to summarize it for me. It did a great job, and I thought, ok, this could actually be really helpful! And that’s how I got started using AI for medical writing. For a long time I only used it for my work, but increasingly I find myself using it for other purposes. That’s a relatively new development, though; my primary interest was always in how it could make my writing more efficient and effective.


In your opinion, how should medical writers be using generative AI?


I'll start with how medical writers shouldn't use AI: we should never use it to outsource our thinking. Or at the very least, if there are places where we do want to outsource some decisions and analysis, then we need to be very methodical and open about what we are automating, what we are outsourcing, and how we are designing those roles so that they produce good outcomes. In other words, I think medical writers still need to do the big job of reading and understanding the literature. AI can help with the writing, but not with the reading and understanding.


Having said that, I do think the lowest-hanging fruit is using AI to speed up research. AI can enhance internet searches, and tools like NotebookLM can speed up the process of reading those documents by allowing the user to query them. You can even do things like put a bunch of papers into a model and ask for an audio summary to listen to while you go for a walk. For medical writers who are new to AI, using it this way is a good entry point.


A lot of your work these days is focused on training other medical writers to use AI effectively. How do you approach this type of continuing education?


I started with an approach grounded in my experience in science communication. I saw (and still see) generative AI as a science and an academic discipline, not just a product. When I first started teaching people about AI, I approached it from that point of view: explaining the history of the field, the research, the breakthroughs, the different subfields, and so on. I still sometimes do that, or go into some of the “under the hood” concepts, but now I approach it with two specific goals in mind:


  1. I want people to know how to use these tools for science writing and medical writing and to know how to use them well and within legal and ethical boundaries. 

  2. I want people to learn how to incorporate AI into their workflows, which involves shifting their thinking around how they do writing, research, and communication. 


I use this approach because I believe AI is going to fundamentally transform the way we do work. Right now, we are at a pivotal moment where we have an opportunity to decide what that future will look like. Overwork and burnout are notorious issues in medical writing, and we finally have the tools to reduce the workload without sacrificing productivity.


I think rather than just helping people understand AI as a science, today the more important thing is to help people navigate questions like, "Okay, now that these tools are here, how is this going to transform the reality of work? What does that mean for me? How do I position myself in this new reality?"


Of course, that also raises some even bigger questions for our field, and a lot of anxieties as well. If we have technology that can automatically do something that used to take a human a lot of thought, time, and energy, how do we make sure that we don’t lose our cognitive capacities? We used to teach critical thinking by making people write. But if people are no longer going to write, and instead are going to ask a tool to write for them, then what exactly should we be teaching them? That gets you to the level of metacognition, or “thinking about thinking,” which has become a big part of what I do now.


Those are great points, and you also touched on some issues that many professionals in our field are concerned about. Do you have any fears or concerns of your own about the future of generative AI in the context of medical writing?


Yes, my biggest fear is that we fumble our implementation of AI in a way that has negative consequences, similar to what happened when we started using social media. There are a lot of “unknown unknowns,” and that’s always concerning. 


Another concern is that if we embrace AI uncritically, we risk giving up all of our decision-making abilities to AI. I think humans should still be in control, or if we are outsourcing some decisions to AI, then we need to design those decision-making processes with safeguards in place.


In general, I think we need to do a lot more deliberate thinking and deliberate design rather than just letting AI do everything. Increasingly, it feels like people are at one extreme or the other: there are people who want nothing to do with AI, and people who want to rush into AI without thinking about the consequences. Neither approach is likely to produce good results.


Check back next week for Part 2!



© 2026 by M&D Science Consulting and Communications, LLC
