A recent international survey concluded that medical researchers hold a positive attitude toward using AI chatbots, but that ethical and accuracy concerns require further intervention to establish systematic, unified rules.

This international cross-sectional survey was published in January 2026 in Cureus.

Applying AI Guidelines in the Research Journey – Differing Stances

Artificial intelligence (AI)-driven large language models (LLMs) such as Google Bard, Gemini, Bing AI, and ChatGPT are designed to generate human-like responses. They assist
the scientific community with literature reviews, writing, data analysis, and
citations. While guidelines exist for AI chatbot use in research, acceptance
varies among publishers: Springer Nature and Science reject ChatGPT as a
coauthor, while many Elsevier journals permit its disclosed use. Studies have
shown that ChatGPT produces coherent writing with low plagiarism but faces
challenges with accuracy, fabricated references, and ethical concerns.

Study Overview

An observational, cross-sectional survey was conducted to assess the use and
perceptions of AI chatbots among 434 medical researchers. The survey was
administered online and targeted participants across multiple countries.
Medical researchers who had either published at least one study or were
currently involved in a medical research project and resided in Saudi Arabia,
Nigeria, Tunisia, or the United Kingdom (England), regardless of nationality,
were included. Those who had never conducted or contributed to a research
project, those outside the countries mentioned, and those not in the medical
field were excluded. The primary outcomes included self-reported use of AI
chatbots in research (binary: yes/no) and perceptions of AI chatbots’ impact on
research. Additional outcome measures included participants’ ethical stances
(e.g., whether they believe guidelines are needed) and future intentions
regarding AI chatbot use. Key explanatory variables included participants’
demographic characteristics, such as age group, gender, country, and
professional role.

Key Findings

Of the 434 participants, 175 (40.3%) reported using AI chatbots in their research. Use varied by country (32.8%-45.9%); however, neither gender nor country was significantly associated with use. Older age and more senior roles were associated with lower odds of use (odds ratio (OR): ages 41-50 years, 0.32; residents, 0.31; consultants, 0.17; P ≤ 0.009), where an OR below 1 indicates lower odds of use relative to the reference group. Awareness of AI chatbots strongly predicted use (OR 15.53), as did awareness of guidelines (OR 2.47), trust (P = 0.005), hypothesis formation (P = 0.001), willingness to cite (P = 0.003), and intention of future use (P < 0.001); intention to declare use during submission did not differ significantly (P = 0.468).

Possible Implications for Medical Researchers and Stakeholders

AI has gained attention in scientific publishing for its ability to generate human-like text. Its strength lies in processing large volumes of
text quickly, potentially reducing researchers’ workloads. ChatGPT and similar
tools trained on vast text corpora support language tasks at scale. These
models can automate previously manual tasks, such as reviewing papers and
extracting key elements. AI chatbots can benefit authors when used responsibly.
While they cannot replace subject-matter expertise, they may assist in drafting
descriptions, organising manuscripts, supporting literature tasks, and refining
research questions. However, the risks include a lack of context, inaccuracy,
and bias in the outputs.

Reference: Alturaiki HM, Al Khamees MM, Alradhi HA, et al. The Use and Perceptions of AI Chatbots in Medical Research: An International Cross-Sectional Survey. Cureus. 2026;18(1):e100908. Published January 6, 2026. DOI: 10.7759/cureus.100908.
