AI-CHECK: testing the potential of AI in training healthcare professionals

AI-CHECK, a Zadig research project that is, to our knowledge, currently unique of its kind, aims to assess the potential and limitations of using AI in training courses for healthcare professionals, with the goal of defining guidance and best practices for this application. The project began by querying ChatGPT, with plans to extend the analysis to other tools, in order to evaluate their ability to generate reliable training materials. Initial findings suggest that AI cannot operate independently, but can only support certain phases of the process.

29 Jan. 2024

by Maria Rosa Valetto
News
Science
Training
AI

Image created with DALL·E, an artificial intelligence algorithm capable of generating images from textual descriptions.

For many years, Zadig, as a national provider of ECM (continuing medical education) courses, has adopted a method based on presenting healthcare professionals with real-life professional practice cases and decision-making questions.

A new Zadig project, AI-CHECK, which has recently attracted the attention of Il Sole 24 Ore, now aims to explore how artificial intelligence (AI) can operate within this training model.

AI-CHECK: a project to define how AI can be used in training

The aim of the project – unique to our knowledge – is to identify best practices for training healthcare professionals using artificial intelligence, defining its limits, benefits and risks.
It is a wide-ranging project designed to assess the application of AI to training through successive stages. An initial set of data may be available within the first months of 2024.

First aspect to be assessed: content reliability

The first step involves asking ChatGPT, and subsequently other similar tools currently available on the market, to independently develop the content of a training course, which is then evaluated in terms of clarity, currency of sources, reliability with respect to the scientific literature, and consistency with clinical guidelines.
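Purely as an illustration (this is not the project's actual methodology, and all names below are hypothetical), the four criteria mentioned above could be recorded as reviewer scores and aggregated along these lines:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical rubric: each reviewer scores the AI-generated draft
# on the four criteria named in the article, on a 1-5 scale.
CRITERIA = (
    "clarity",
    "currency_of_sources",
    "literature_reliability",
    "guideline_consistency",
)

@dataclass
class Review:
    scores: dict  # criterion name -> score from 1 (poor) to 5 (excellent)

def aggregate(reviews):
    """Average each criterion across reviewers, plus an overall mean."""
    per_criterion = {c: mean(r.scores[c] for r in reviews) for c in CRITERIA}
    overall = mean(per_criterion.values())
    return per_criterion, overall

# Two hypothetical reviewers scoring one AI-generated draft.
reviews = [
    Review({"clarity": 4, "currency_of_sources": 2,
            "literature_reliability": 3, "guideline_consistency": 3}),
    Review({"clarity": 5, "currency_of_sources": 3,
            "literature_reliability": 3, "guideline_consistency": 4}),
]
per_criterion, overall = aggregate(reviews)
```

A structured rubric of this kind would make it easy to see, for example, that a draft reads clearly while still scoring poorly on currency of sources, which is exactly the distinction the evaluation sets out to capture.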

If the outcome were a draft course of appreciable quality, this would indicate that the system has access to the same sources Zadig would have used to develop it, and that it can organise information at least as well as training specialists.

When it comes to content reliability, the main obstacle is that it is still unclear which data and sources underlie systems such as ChatGPT. Until such transparency is ensured, biases in how information is collected and processed cannot be ruled out.

The AI-CHECK project evaluation group is composed of Zadig’s training team, Eugenio Santoro, Director of the Medical Informatics Laboratory at the Mario Negri Institute for Pharmacological Research in Milan, and Luigi Naldi, a dermatologist with one of the highest levels of scientific output internationally, long involved in education and, since 2017, Director of the Complex Operative Unit of Dermatology at San Bortolo Hospital in Vicenza.

For the evaluation, a dermatological condition of broad interest was selected, relevant not only to specialists but also to general practitioners, and whose knowledge base and management have not changed radically in recent years. This choice was made to work around one of ChatGPT's limitations, namely that its training data are not updated in real time with the pace of scientific publication.

AI appears to work only as a support for experts

The project will certainly continue for some time, and it is still too early to assess its outcomes. However, early indications suggest that the development of training materials of adequate quality cannot be delegated to AI alone.