EMUG 2024 Discovery Digest Part 4: Signals for Research and Editorial Integrity Investigations
Advancements in technology, the introduction of flexible business models, and the emergence of fraud schemes orchestrated by bad actors have made maintaining integrity in scholarly publishing more critical, and more challenging, than ever. As scholarly journals receive an ever-increasing influx of submissions, it becomes more difficult for editorial teams to screen out suspicious or unethical activity. With manual screening processes, potential biases in play, evolving fraud techniques, limited resources, and outdated policies or gaps in expertise, the threat to quality and trust in global research is high. Publishing problematic content (or publishing under questionable means) not only harms a journal’s credibility, but also puts at risk the communities this research is intended to serve.
Research integrity is a collective responsibility across the multi-faceted publishing ecosystem, and championing it through innovative solutions and strategic partnerships remains a priority for Aries Systems. To further support this commitment, we sought insights from industry publishers and societies on their experiences fostering integrity through the Discovery Roundtables session at the 2024 Editorial Manager User Group (EMUG) meeting held this past summer. An interactive workshop designed to help inform our market research and product strategy, the EMUG 2024 Discovery Roundtables session divided attendees into focus groups dedicated to four key topics led by the Aries team. This digest serves as the final installment in a four-part blog series on insights gained during the workshop.
To better understand the realities of battling integrity dilemmas in scholarly publishing today, our user community highlighted which concerns they face most often, shared the indicators they use to detect integrity mishaps, and described how they currently manage (or safeguard against) these issues. Leveraging the Rapid Ideation technique, members of the Aries team solicited ideas in lightning-round discussions, grouped like comments into themes, and brainstormed potential solutions with our user community. To support the discussion, our teams asked attendees to consider the following:
- What integrity issues occur/arise most often for your journal(s)? Are there any situations that you notice are becoming more frequent?
- What signals do you look out for regarding different integrity dilemmas? How do you identify them or determine if something is shady or improper?
- How do these signals differ between roles? Consider how integrity issues can be detected/flagged from the Author, Editor, and Reviewer perspectives.
- Do you use any internal or external tools to detect or safeguard against integrity dilemmas? How can EM help more in these areas?
- What do you do when something is flagged as a potential risk? How do you conduct a deeper investigation? What players are involved?
- How are editorial teams trained to detect, investigate, and resolve integrity issues? What educational resources/practices do you have in place? Do you ever bring in or outsource to consultants?
- After issues are resolved, what actions are taken to ensure they are prevented in the future? Are end-users contacted or educated on these matters before, during, or after their involvement?
- Where are the pain points regarding research integrity, and why? What part of the journey can be improved?
- How do your teams determine what is an accident versus intentional, and does that factor matter in investigations? Where are the grey areas and how can EM make them more black and white?
There are many signals and tools clients use to identify, tackle, and even prevent integrity-related challenges on a day-to-day basis, such as integrated solutions, manual technical checks and screening, custom workflow configurations and restrictions, and adherence to industry policies. Common examples include Duplicate Submission Check, Identity Confidence Check, reporting, Similarity Check, Reviewer statistics, the STM Integrity Hub, conflict of interest checks, tracking Author changes, and disallowing Author-suggested Reviewers. However, these channels only go so far as instances of fraud become more sophisticated and prevalent. The integrity concerns most noted by attendees at EMUG include paper mills, duplicate submissions, duplicate or fake user accounts, citation omission or tampering, plagiarism, fraudulent peer review, ghost Editors, false or exaggerated authorship, inconsistent or non-disclosed conflicts of interest, image manipulation, and unauthorized use of artificial intelligence (AI) by Authors and Reviewers. There was consensus that technical checks are manual and burdensome, and that Reviewers often fail to pinpoint or report suspicious activity. One attendee described needing to retract papers that had flown under the radar because iThenticate Similarity Check did not detect plagiarism when the content had been translated into another language. With these ethical challenges at the forefront for every journal, it is important to understand and optimize how integrity concerns are identified, investigated, and resolved.
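Many of the screening signals above ultimately reduce to fuzzy text matching. As a rough, hypothetical sketch of that underlying technique (not a description of how EM’s Duplicate Submission Check actually works; the threshold and function names here are invented for illustration), a duplicate submission screen might compare an incoming title against recently received submissions:

```python
from difflib import SequenceMatcher

def normalize(title: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide a match."""
    return " ".join(title.lower().split())

def find_possible_duplicates(new_title: str, recent_titles: list[str],
                             threshold: float = 0.9) -> list[tuple[str, float]]:
    """Return (title, similarity) pairs that exceed the flagging threshold."""
    new_norm = normalize(new_title)
    hits = [(t, SequenceMatcher(None, new_norm, normalize(t)).ratio())
            for t in recent_titles]
    return sorted([(t, round(s, 3)) for t, s in hits if s >= threshold],
                  key=lambda pair: pair[1], reverse=True)

# A resubmission with minor title edits is still flagged:
recent = ["A Survey of Peer Review Fraud Detection",
          "Deep Learning for Image Forensics"]
print(find_possible_duplicates("A survey of peer-review fraud detection", recent))
```

Production tools compare far more than titles (abstracts, Author lists, full text, figures), which is partly why translated plagiarism of the kind the attendee described can slip past text-only matching.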
To see the bigger picture, the Aries team worked with groups to organize these challenges under major themes. This activity highlighted both concern and room for improvement in three key areas: user identities, the (mis)use and potential of technology, and the creation and updating of policies. With the challenges tied to these three themes, groups shifted their focus to brainstorming potential solutions that Aries and publishers can implement to bridge integrity gaps in publishing.
Duplicate, dishonest, and ambiguous users are difficult to spot, raising unease about fake Authors and Reviewers and other bad actors. In response to identity concerns, attendees proposed various solutions to promote confidence in registered users of the system. Clients requested ways for the system to automatically flag duplicate user accounts, co-authors acting under the guise of an Editor, and accounts with dubious email addresses, potentially using technologies such as AI. Requiring ORCID verification, or implementing frequent pop-up reminders for those who have not yet linked it, could help verify users, and a more comprehensive history of changes to Author metadata could help Editors spot discrepancies. Groups also strongly suggested improving identity security through multi-factor authentication tied to valid institutional email addresses. Another proposal from the workshop was an EM report surfacing how many papers a user has submitted across all titles in a publisher’s portfolio, helping Editors monitor the activity of “frequent flyers” in the system. Many attendees also expressed interest in refining their integrity policies. This includes updating existing policies, creating new policies to address “new norms” such as AI, communicating the “dos and don’ts” for Authors and Reviewers, following guidelines set by ethics committees, requiring education for Editors on best practices, conducting internal exercises on conscious and unconscious bias in editorial teams, and more.
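To make the duplicate-account request concrete, here is a minimal, hypothetical Python sketch of the kind of email canonicalization that could surface aliased registrations (the schema, domain rules, and function names are assumptions for illustration, not EM functionality):

```python
from collections import defaultdict

def canonical_email(address: str) -> str:
    """Collapse common aliasing tricks: case, '+tag' suffixes, and Gmail dots."""
    local, _, domain = address.strip().lower().partition("@")
    local = local.split("+", 1)[0]          # 'jane+rev@x.com' -> 'jane@x.com'
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # Gmail ignores dots in the local part
    return f"{local}@{domain}"

def flag_duplicate_accounts(accounts: list[dict]) -> list[list[str]]:
    """Group account IDs that collapse to the same canonical email address."""
    groups = defaultdict(list)
    for acct in accounts:
        groups[canonical_email(acct["email"])].append(acct["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

# Two registrations that are almost certainly the same person:
print(flag_duplicate_accounts([
    {"id": "u1", "email": "jane.doe+reviews@gmail.com"},
    {"id": "u2", "email": "JaneDoe@gmail.com"},
]))  # [['u1', 'u2']]
```

A real implementation would combine this with name similarity, affiliation matching, and the ORCID and multi-factor checks attendees requested, since email canonicalization alone catches only the laziest aliases.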
Much of the discussion during the Roundtables focused on technology and its impact on research integrity, including its shortcomings, its risks, and its potential. Many ideas were shared at EMUG on what new kinds of tools would be of interest, as well as potential enhancements to existing tools within the Aries ecosystem. This feedback included expanding Duplicate Submission Check to function across publications, extending Similarity Check plagiarism detection to images and graphics, enabling Identity Confidence Check to scrutinize more detailed criteria, having Reference Check detect citation stacking or retractions, conflict of interest disclosure tools that can also verify disclosures, and automatic image manipulation detection. Many expressed a need to harness these kinds of services earlier in the workflow to address integrity concerns upfront, rather than downstream in peer review or after acceptance. The topic of AI technology dominated the discussion as groups debated the desire for the system to detect use of generative AI by Authors and Reviewers, while also exploring how the system itself could leverage AI to raise red flags. Clients also requested more standard reports related to integrity, such as reporting on custom submission/people flags, anomalies in the speed or length of a review, how many submissions are received from the same individual, how many times the same Reviewer is recommended by Authors, and more.
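As one illustration of what such a report might compute, consider review turnaround time: a review returned implausibly fast relative to a journal’s norm is a classic red flag. The sketch below is a hypothetical outlier test (the field names and z-score cutoff are assumptions for illustration, not existing EM reporting):

```python
from statistics import mean, stdev

def flag_review_time_outliers(reviews: list[dict], z_cutoff: float = 2.0) -> list[str]:
    """Return IDs of reviews whose turnaround time is an outlier for this journal."""
    times = [r["hours_to_complete"] for r in reviews]
    if len(times) < 3:
        return []                           # too little history to define "normal"
    mu, sigma = mean(times), stdev(times)
    if sigma == 0:
        return []
    return [r["id"] for r in reviews
            if abs(r["hours_to_complete"] - mu) / sigma > z_cutoff]

# Nine ordinary turnaround times and one one-hour "review":
history = [{"id": f"r{i}", "hours_to_complete": h}
           for i, h in enumerate([72, 96, 60, 84, 78, 66, 90, 80, 70, 1])]
print(flag_review_time_outliers(history))   # ['r9']
```

The same pattern, establishing a per-journal baseline and surfacing outliers, applies equally to review length, repeated Reviewer recommendations, and submission volume per individual.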
“During the Signals for Integrity Investigations roundtable, a customer revealed they received a review generated by ChatGPT that, despite violating their policies, was more thorough and accurate than those by human Reviewers,” stated Elysia Williams, Aries Client Services Manager. “This raises an intriguing question: could AI-generated manuscript reviews supplement traditional ones in the future? As AI technology advances, I look forward to more conversations with our customers about where – or whether – they see a role for it in their editorial workflows.”
“I was thrilled to not only attend EMUG this year, but also lead discussions to spotlight valuable client and industry insights – which in turn make me a stronger developer of our software solutions,” said Matt Van Voorhies, Aries Software Engineer. “Hearing directly from our users on the pain points and policies of an industry-wide priority such as integrity allows Aries to create collaborative opportunities for a more sound and ethical publishing experience.”
Check out our high-level recap of the recent user group meeting and the previous posts in the four-part 2024 Discovery Digest blog series!