Elisabeth Bik’s work as a leading authority on scientific image analysis was frequently in the spotlight over the past year, in the wake of a high-profile investigation into allegations of research misconduct against former Stanford president Marc Tessier-Lavigne that ultimately led to his resignation. Bik was among the experts who first raised concerns about image manipulation in papers Tessier-Lavigne had co-authored, and she was interviewed by the outside panel behind the investigation. It’s now clear that she is helping to usher in an era in which scientific research will be subject to greater scrutiny.
STAT’s Deborah Balthazar spoke with Bik about her work and the challenges of identifying research misconduct in the age of AI:
Have there been any other really high-profile things that you’ve been looking at since [the Tessier-Lavigne investigation]?
I’ve been working with Charles Piller and some other sleuths in discovering some fraud cases in the Alzheimer’s space. We worked on a case: Berislav Zlokovic. Charles Piller wrote about it in Science [in November]. That is sort of a big case, because this is a big lab with lots of money. This researcher works in Alzheimer’s, but also on stroke. There was a clinical trial that he was getting involved in, for a drug that was a result of his research. And I think the FDA halted the clinical trial because of his articles. So that is a pretty big and very immediate action. I don’t think it has happened very frequently that a clinical trial gets paused because of these misconduct investigations.
From the perspective of the researchers, I can see that AI would make their jobs easier. But from your perspective, would it make your job harder in trying to determine what exactly is a real image? Or can you still detect the pattern?
I don’t think I will be able to recognize a good AI-generated image anymore. We have found some images generated two, three, four years ago, which we believe are AI-generated. But this was by a paper mill, and I think they made the error of putting all these AI-generated western blot bands on the same background. And so because they all have the exact same background, we could recognize that pattern of noise. But I think there’s probably a lot of papers being produced right now that we can no longer recognize as fake.
Read the full conversation.