CME / ABIM MOC / CE

Hot Topics in Artificial Intelligence April 2023

  • Authors: News Authors: Megan Brooks, Liam Davenport, and Ute Eppinger; CME Author: Hennah Patel, MPharm, RPh
  • CME / ABIM MOC / CE Released: 5/2/2023
  • Valid for credit through: 5/2/2024, 11:59 PM EST
Start Activity

  • Credits Available

    Physicians - maximum of 0.50 AMA PRA Category 1 Credit(s)™

    ABIM Diplomates - maximum of 0.50 ABIM MOC points

    Nurses - 0.50 ANCC Contact Hour(s) (0 contact hours are in the area of pharmacology)

    Pharmacists - 0.50 Knowledge-based ACPE (0.050 CEUs)

    Physician Assistant - 0.50 AAPA hour(s) of Category I credit

    IPCE - 0.50 Interprofessional Continuing Education (IPCE) credit

    You Are Eligible For

    • Letter of Completion
    • ABIM MOC points

Target Audience and Goal Statement

This activity is intended for primary care physicians (PCPs), physician assistants (PAs), nurse practitioners (NPs), nurses, pharmacists, and other healthcare professionals (HCPs) involved in patient care.

The goal of this activity is for learners to be better able to evaluate emerging artificial intelligence (AI) tools and the potential implications for patient care.

Upon completion of this activity, participants will:

  • Have increased knowledge regarding
    • Recent advances in AI-based tools with potential applications in medicine
    • Implications for the healthcare team


Disclosures

Medscape, LLC requires every individual in a position to control educational content to disclose all financial relationships with ineligible companies that have occurred within the past 24 months. Ineligible companies are organizations whose primary business is producing, marketing, selling, re-selling, or distributing healthcare products used by or on patients.

All relevant financial relationships for anyone with the ability to control the content of this educational activity are listed below and have been mitigated. Others involved in the planning of this activity have no relevant financial relationships.


News Authors

  • Megan Brooks

    Freelance writer, Medscape

    Disclosures

    Megan Brooks has no relevant financial relationships.

  • Liam Davenport

    Freelance writer, Medscape

    Disclosures

    Liam Davenport has no relevant financial relationships.

  • Ute Eppinger

    Freelance writer, Medscape

    Disclosures

    Ute Eppinger has no relevant financial relationships.

CME Author

  • Hennah Patel, MPharm, RPh

    Freelance Medical Writer

    Disclosures

    Hennah Patel, MPharm, RPh, has no relevant financial relationships.

Editor/Compliance Reviewer

  • Esther Nyarko, PharmD, CHCP

    Director, Accreditation and Compliance, Medscape, LLC

    Disclosures

    Esther Nyarko, PharmD, CHCP, has no relevant financial relationships.

Nurse Planner

  • Leigh Schmidt, MSN, RN, CNE, CHCP

    Associate Director, Accreditation and Compliance, Medscape, LLC

    Disclosures

    Leigh Schmidt, MSN, RN, CNE, CHCP, has no relevant financial relationships.


Accreditation Statements

Medscape

Interprofessional Continuing Education

In support of improving patient care, Medscape, LLC is jointly accredited with commendation by the Accreditation Council for Continuing Medical Education (ACCME), the Accreditation Council for Pharmacy Education (ACPE), and the American Nurses Credentialing Center (ANCC), to provide continuing education for the healthcare team.

IPCE

This activity was planned by and for the healthcare team, and learners will receive 0.50 Interprofessional Continuing Education (IPCE) credit for learning and change.

    For Physicians

  • Medscape, LLC designates this enduring material for a maximum of 0.50 AMA PRA Category 1 Credit(s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

    Successful completion of this CME activity, which includes participation in the evaluation component, enables the participant to earn up to 0.50 MOC points in the American Board of Internal Medicine's (ABIM) Maintenance of Certification (MOC) program. Participants will earn MOC points equivalent to the amount of CME credits claimed for the activity. It is the CME activity provider's responsibility to submit participant completion information to ACCME for the purpose of granting ABIM MOC credit. Aggregate participant data will be shared with commercial supporters of this activity.

    The European Union of Medical Specialists (UEMS)-European Accreditation Council for Continuing Medical Education (EACCME) has an agreement of mutual recognition of continuing medical education (CME) credit with the American Medical Association (AMA). European physicians interested in converting AMA PRA Category 1 credit™ into European CME credit (ECMEC) should contact the UEMS (www.uems.eu).

    College of Family Physicians of Canada Mainpro+® participants may claim certified credits for any AMA PRA Category 1 credit(s)™, up to a maximum of 50 credits per five-year cycle. Any additional credits are eligible as non-certified credits. College of Family Physicians of Canada (CFPC) members must log into Mainpro+® to claim this activity.

    Through an agreement between the Accreditation Council for Continuing Medical Education and the Royal College of Physicians and Surgeons of Canada, medical practitioners participating in the Royal College MOC Program may record completion of accredited activities registered under the ACCME’s “CME in Support of MOC” program in Section 3 of the Royal College’s MOC Program.

    Contact This Provider

    For Nurses

  • Awarded 0.50 contact hour(s) of nursing continuing professional development for RNs and APNs; 0.00 contact hours are in the area of pharmacology.

    Contact This Provider

    For Pharmacists

  • Medscape designates this continuing education activity for 0.50 contact hour(s) (0.050 CEUs) (Universal Activity Number: JA0007105-0000-23-187-H01-P).

    Contact This Provider

  • For Physician Assistants

    Medscape, LLC has been authorized by the American Academy of PAs (AAPA) to award AAPA Category 1 CME credit for activities planned in accordance with AAPA CME Criteria. This activity is designated for 0.50 AAPA Category 1 CME credits. Approval is valid until 05/02/2024. PAs should only claim credit commensurate with the extent of their participation.

For questions regarding the content of this activity, contact the accredited provider for this CME/CE activity noted above. For technical assistance, contact [email protected]


Instructions for Participation and Credit

There are no fees for participating in or receiving credit for this online educational activity. For information on applicability and acceptance of continuing education credit for this activity, please consult your professional licensing board.

This activity is designed to be completed within the time designated on the title page; physicians should claim only those credits that reflect the time actually spent in the activity. To successfully earn credit, participants must complete the activity online during the valid credit period that is noted on the title page. To receive AMA PRA Category 1 Credit™, you must receive a minimum score of 75% on the post-test.

Follow these steps to earn CME/CE credit*:

  1. Read about the target audience, learning objectives, and author disclosures.
  2. Study the educational content online or print it out.
  3. Online, choose the best answer to each test question. To receive a certificate, you must receive a passing score as designated at the top of the test. We encourage you to complete the Activity Evaluation to provide feedback for future programming.

You may now view or print the certificate from your CME/CE Tracker. You may print the certificate, but you cannot alter it. Credits will be tallied in your CME/CE Tracker and archived for 6 years; at any point within this time period, you can print out the tally as well as the certificates from the CME/CE Tracker.

*The credit that you receive is based on your user profile.


Advances in medicine are continuously emerging, making it challenging for the interprofessional team to stay up to date on the latest clinical updates. This article highlights the recent emergence of artificial intelligence (AI)-based tools and their potential impact on healthcare and scientific publishing.

HARNESSING CHATGPT TO IMPROVE LIVER DISEASE OUTCOMES

Chat Generative Pre-trained Transformer (ChatGPT), an AI tool developed by OpenAI, simulates human interaction using a deep learning technique based on large textual data from the internet.[1] Users are able to have personalized conversations with the chatbot, which provides detailed responses to questions posed. ChatGPT has potential applications in healthcare education.[2] For instance, a new study suggests that it may be helpful for patients with cirrhosis or hepatocellular carcinoma (HCC) and their clinicians by generating easy-to-understand information about the disease.[3]
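
To illustrate the kind of interaction described above, here is a minimal sketch of how such a question might be posed to a chat model programmatically. It assumes the OpenAI Python SDK (v1.x) and an API key in the environment; the model name and prompt are illustrative and are not the configuration used in the study.

    # Minimal sketch: posing a patient-style liver disease question to a chat model.
    # Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY environment variable.
    # Model name and prompt are illustrative only, not the study's configuration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer in plain, patient-friendly language."},
            {"role": "user", "content": "What lifestyle changes can help manage cirrhosis?"},
        ],
    )

    print(response.choices[0].message.content)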

ChatGPT can regurgitate correct and reproducible responses to commonly asked patient questions on cirrhosis and HCC; however, the majority of the correct responses were labeled by clinician specialists as "correct but inadequate," according to the study findings.

The AI tool can also provide empathetic and practical advice to patients and caregivers but falls short in its ability to provide tailored recommendations, the researchers note. While ChatGPT has limitations, it can help empower patients and improve their knowledge regarding their health.[3]

The study was published online in Clinical and Molecular Hepatology.

Adjunctive Health Literacy Tool

ChatGPT has already seen several potential applications in the medical field, but the Cedars-Sinai study is one of the first to examine the chatbot's ability to answer clinically oriented, disease-specific questions correctly and to compare its performance with that of physicians.

The investigators asked ChatGPT 164 questions relevant to patients with cirrhosis and/or HCC across 5 categories — basic knowledge, diagnosis, treatment, lifestyle, and preventive medicine. The chatbot's answers were graded independently by 2 liver transplant specialists.[3]

Overall, ChatGPT answered about 77% of the questions correctly, with a high level of accuracy on 91 questions across the categories, the researchers report.

ChatGPT regurgitated extensive knowledge of cirrhosis (79% correct) and HCC (74% correct), but only small proportions were deemed by specialists to be comprehensive (47% in cirrhosis, 41% in HCC).

The chatbot performed better in basic knowledge, lifestyle, and treatment than in the domains of diagnosis and preventive medicine.

The specialists grading ChatGPT rated 75% of its answers to questions on basic knowledge, treatment, and lifestyle as "comprehensive" or "correct but inadequate." The corresponding percentages for diagnosis and preventive medicine were lower (67% and 50%, respectively). No responses from ChatGPT were graded as completely incorrect.

Responses deemed by the specialists to be "mixed with correct and incorrect/outdated data" were 22% for basic knowledge, 33% for diagnosis, 25% for treatment, 18% for lifestyle, and 50% for preventive medicine.
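
As a rough illustration of how such per-category proportions are derived, the sketch below tallies specialist grades by question category; the grades shown are invented for the example and are not the study's data.

    # Hypothetical sketch: tallying specialist grades per question category.
    # The grades below are invented for illustration, not taken from the study.
    from collections import Counter, defaultdict

    graded_responses = [
        ("basic knowledge", "comprehensive"),
        ("basic knowledge", "correct but inadequate"),
        ("diagnosis", "mixed correct and incorrect"),
        ("treatment", "comprehensive"),
        ("preventive medicine", "mixed correct and incorrect"),
        ("lifestyle", "correct but inadequate"),
    ]

    grades_by_category = defaultdict(Counter)
    for category, grade in graded_responses:
        grades_by_category[category][grade] += 1

    for category, counts in grades_by_category.items():
        total = sum(counts.values())
        for grade, n in counts.items():
            print(f"{category}: {grade} = {n}/{total} ({n / total:.0%})")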

No Substitute for Specialists

The investigators also tested ChatGPT on cirrhosis quality measures recommended by the American Association for the Study of Liver Diseases and contained in 2 published questionnaires. ChatGPT answered 77% of the relevant questions correctly but failed to specify decision-making cutoffs and treatment durations.[3]

ChatGPT also lacked knowledge of variations in regional guidelines, such as HCC screening criteria, but it did offer "practical and multifaceted" advice to patients and caregivers about next steps and adjusting to a new diagnosis.

Additionally, ChatGPT could empower patients to be better informed about their care, the researchers note.

"This allows for patient-led care and facilitates efficient shared decision-making by providing patients with an additional source of information," they add.

Implications for the Interprofessional Healthcare Team

  • The healthcare team should consider the benefits and limitations associated with using ChatGPT as part of health literacy programs in patients with liver disease.
  • The team should consider the supplementary role of ChatGPT in educating patients about liver disease and remind patients that the tool cannot replace the comprehensive advice delivered by medical specialists.

The study had no specific funding. The authors report no relevant financial relationships.

CAN CHATGPT REPLACE DIABETES EDUCATORS? PERHAPS NOT YET

The World Health Organization estimates that 422 million people around the world have diabetes.[4] To manage this health epidemic effectively, patient education on self-management is essential.[5] Diabetes educators are trained specialists who help identify and address barriers to optimal diabetes care;[6] indeed, diabetes education can support behavior change, increase patient motivation to adhere to treatment recommendations, and support self-care, among other benefits.[7] ChatGPT, the novel AI tool that has attracted interest and controversy in seemingly equal measure, can provide clear and accurate responses to some common questions about diabetes care, say researchers from Singapore. But they also have some reservations.

Chatbots such as ChatGPT use natural-language AI to draw on large repositories of human-generated text from the internet to provide human-like responses to questions that are statistically likely to match the query.
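
As a toy illustration of that idea, the sketch below picks the statistically most likely next word given the previous word. The probability table is invented for the example; real large language models learn such statistics from vast text corpora.

    # Toy sketch of next-word prediction: choose the most probable continuation
    # given the previous word. The probability table is invented for illustration.
    next_word_probs = {
        "blood": {"sugar": 0.6, "pressure": 0.3, "test": 0.1},
        "sugar": {"levels": 0.5, "is": 0.3, "control": 0.2},
        "levels": {"should": 0.4, "are": 0.4, "vary": 0.2},
    }

    def continue_text(word, steps=3):
        words = [word]
        for _ in range(steps):
            options = next_word_probs.get(words[-1])
            if not options:
                break
            words.append(max(options, key=options.get))  # most likely next word
        return " ".join(words)

    print(continue_text("blood"))  # "blood sugar levels should"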

The researchers posed a series of common questions to ChatGPT about 4 key domains of diabetes self-management and found that it "generally performed well in generating easily understood and accurate responses to questions about diabetes care," say Gerald Gui Ren Sng, MD, Department of Endocrinology, Singapore General Hospital, and colleagues.

Their research, recently published in Diabetes Care, did, however, reveal that there were inaccuracies in some of the responses and that ChatGPT could be inflexible or require additional prompts.[8]

ChatGPT Not Trained on Medical Databases

The researchers highlight that ChatGPT is trained on a general, not medical, database, "which may explain the lack of nuance" in some responses, and that its information dates from before 2021, and so may not include more recent evidence.

There are also "potential factual inaccuracies" in its answers that "pose a strong safety concern," the team say, making it prone to so-called "hallucination", whereby inaccurate information is presented in a persuasive manner.

Sng told Medscape Medical News that ChatGPT was "not designed to deliver objective and accurate information" and is not an "AI fact checker but a conversational agent first and foremost."

"In a field like diabetes care or medicine in general, where acceptable allowances for errors are low, content generated via this tool should still be vetted by a human with actual subject matter knowledge," Sng emphasized.

He added: "One strength of the methodology used to develop these models is that there is reinforcement learning from humans; therefore, with the release of newer versions, the frequency of factual inaccuracies may be progressively expected to reduce as the models are trained with larger and larger inputs."

This could well help modify "the likelihood of undesirable or untruthful output," although he warned the "propensity to hallucination is still an inherent structural limitation of all models."

Advise Patients

"The other thing to recognize is that even though we may not recommend use of ChatGPT or other large language models to our patients, some of them are still going to use them to look up information or answer their questions anyway," Sng observed.

This is because chatbots are "in vogue and arguably more efficient at information synthesis than regular search engines."

He underlined that the purpose of the new research was to help increase awareness of the strengths and limitations of such tools to clinicians and diabetes educators, "so that we are better equipped to advise our patients who may have obtained information from such a source."

"In the same way ... [that] we are now well-attuned to advising our patients how to filter information from 'Dr Google', perhaps a better understanding of 'Dr ChatGPT' will also be useful moving forward," Sng added.

Implementing large language models may be a way to offload some burdens of basic diabetes patient education, freeing trained providers for more complex duties, say Sng and colleagues.

Diabetes Education and Self-Management

Patient education to aid diabetes self-management is, the researchers note, "an integral part of diabetes care and has been shown to improve glycemic control, reduce complications, and increase quality of life."

However, the traditional methods for delivering this via clinicians working with diabetes educators have been affected by reduced access to care during the COVID-19 pandemic, and a shortage of educators.

Because ChatGPT recently passed the US Medical Licensing Examination,[9] the researchers wanted to assess its performance for diabetes self-management and education.

They asked it 2 rounds of questions related to diabetes self-management, divided into 4 domains:[8]

  • Diet and exercise
  • Hypoglycemia and hyperglycemia education
  • Insulin storage
  • Insulin administration

They report that ChatGPT "was able to answer all the questions posed," and did so in a systematic way, "often providing instructions in clear point form," in layperson language, with jargon explained in parentheses.

In most cases, it also recommended that an individual consult their healthcare provider.

However, the team notes there were "certain inaccuracies," such as not recognizing that insulin analogs should be stored at room temperature once opened, and ChatGPT was "inflexible" when it came to such issues as recommending diet plans.

In one example, when asked, "My blood sugar is 25, what should I do?" the tool provided simple steps for hypoglycemia correction but assumed the readings were in mg/dL when they could have been in different units.
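
The ambiguity matters clinically: a reading of 25 is a medical emergency if measured in mg/dL (severe hypoglycemia) but indicates marked hyperglycemia if measured in mmol/L. A quick sketch of the standard conversion, using the factor of approximately 18 mg/dL per 1 mmol/L of glucose:

    # Glucose unit conversion: 1 mmol/L of glucose is approximately 18 mg/dL.
    MG_DL_PER_MMOL_L = 18.0

    def mmol_l_to_mg_dl(value_mmol_l):
        return value_mmol_l * MG_DL_PER_MMOL_L

    # A reading of "25" without units is ambiguous:
    print("25 mg/dL  -> severe hypoglycemia")
    print(f"25 mmol/L -> {mmol_l_to_mg_dl(25):.0f} mg/dL (marked hyperglycemia)")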

The team also reports: "It occasionally required additional prompts to generate a full list of instructions for insulin administration."

Implications for the Interprofessional Healthcare Team

  • The healthcare team should recognize the potential influence that AI tools like ChatGPT have on diabetes self-management.
  • The team should advise patients about the lack of evidence supporting the safe use of ChatGPT for diabetes care and highlight the need to consult a trained specialist regarding medical queries.

No funding declared. The authors have reported no relevant financial relationships.

CHATGPT: BETWEEN HYPE, CONTROVERSY, AND ETHICAL CHALLENGES

Whether used as a chatbot to generate text, for translation, or for automated paperwork, ChatGPT has been creating quite a stir since the US start-up OpenAI made the text-based dialogue system accessible to the public in November 2022. The AI software could support the writing of scientific publications and literature reviews. It may also be able to identify research questions and provide landscape overviews.[10] However, the implications of using software like ChatGPT for scientific research and publishing are mostly unknown.[11]

Commenting on the role of ChatGPT in scientific publishing, data scientist Teresa Kubacka pointed out that, as early as mid-December 2022, article references were sometimes being fabricated by ChatGPT, and that so-called "data hallucinations" are dangerous because they can have a considerable effect on internet discourse. Since then, discussions of the ethical consequences and challenges of ChatGPT have continued.

Thilo Hagendorff, PhD, postdoctoral researcher in machine learning at the University of Tübingen, Germany, made it clear at a Science Media Center press briefing (presented during a virtual conference) that ChatGPT brings an array of ethical challenges in its wake. Nonetheless, Hagendorff considers the "heavy focus on the negative aspects" to be "difficult." In his experience, reports about ChatGPT are often negative.

"But the positive aspects should also be considered. In many cases, these linguistic models enable us to make better decisions throughout life. They are also definitely creative, they offer us creative solutions, they enrich us with infinite knowledge," said Hagendorff.

ChatGPT's Applications

The possible range of applications for ChatGPT is vast. In medicine, for example, ChatGPT has answered questions for patients, made diagnoses, and created treatment regimens, which helped to save time and resources. When used as a chatbot, ChatGPT was able to cater to patient needs and provide personalized responses. Round-the-clock availability could also be beneficial for patients. However, the protection of sensitive patient data must be guaranteed.

And since ChatGPT is based on a machine learning model that can make mistakes, the accuracy and reliability of the information provided must be guaranteed. Regulation is another problem. While programs such as the symptom checker Ada Health are specialized for the healthcare sector and have been approved as medical devices, ChatGPT is not subject to any such regulation. Despite this, guarantees must be in place to ensure that ChatGPT conforms to the requirements and complies with laws and regulations.

Outside of the medical sector, other ethical problems must be solved. Hagendorff specified the following points in this regard:

  • Discrimination, toxic language, and stereotypes: These occur with ChatGPT because the linguistic model's training data (ie, language as it is actually used) also reproduce discrimination, toxic language, and stereotypes.
  • Information risks: ChatGPT may be used, for example, to find private, sensitive, or dangerous information. The linguistic model could be asked about the best ways to commit certain crimes.
  • True or false information: There is no guarantee that such models generate only correct information. They can also deliver nonsensical information, since they only ever calculate the probability of the next word.
  • Misuse risks: ChatGPT could be used for disinformation campaigns. It is also possible to generate code that can be used for cyberattacks of varying degrees of severity.
  • Propensity for humanization: People tend to humanize such linguistic models. This anthropomorphization can lead to an elevated level of trust. The user's trust can then also be used to gain access to private information.
  • Social risks: There could be job losses or job changes in every industry that works with texts.
  • Ecologic risks: Unless the power used to train and operate such models is generated in an environmentally friendly manner, their carbon footprint could be quite large.

Co-Authoring Scientific Articles?

The news that some scientists have listed OpenAI as an author of scientific articles elicited an immediate reaction in the research community and among editors of specialist journals. However, ChatGPT is not the first AI application to be listed as a co-author of a scientific paper. Last year, an article appeared on a preprint server that was written by the GPT-3 chatbot, among others, said Daniela Ovadia, PhD, research ethicist at the University of Pavia, Italy, in an article published on Univadis Italy.

In her opinion, the commotion about a largely expected technological development may stem from questions about what defines the author of a scientific paper. The general rule is that every author of a scientific paper is responsible for every part of the paper and must check the other authors' work.

After Nature announced that AI will not be accepted as an author,[12] other specialist journals, such as JAMA, followed suit.[13] Still, this does not mean that the tool may not be used. It must be mentioned in the methodology section of the study, however. "The scholarly publishing community has quickly reported concerns about potential misuse of these language models in scientific publication," wrote the authors of a JAMA editorial.

As Annette Flanagin, RN, executive managing editor of JAMA, and her team of editors reported, various experts have experimented with ChatGPT by asking it questions such as whether childhood vaccinations cause autism. "Their results showed that ChatGPT's text responses to questions, while mostly well written, are formulaic; not up to date; false or fabricated; without accurate or complete references; and, worse, with concocted, nonexistent evidence for claims or statements it makes."

Experts believe that ChatGPT does not constitute a reliable source of information — at least in the medical field. To be a reliable source, it must be carefully monitored and reviewed by humans, Ovadia wrote. But there are other ethical queries that the scientific community needs to think about, especially since the tool will gradually improve over time.

An improved ChatGPT could, for example, bridge the linguistic gap between English-speaking scientists and non-English-speaking scientists and simplify the publication of research articles written in other languages, Ovadia wrote. She takes a measured view of the idea of using AI to write scientific articles and makes the following recommendations:

  • Sections written with AI should be highlighted as such, and the methodology used in their creation should be explained in the article itself (for the sake of transparency, including the name and version of the software used).
  • Papers written exclusively with AI, especially if they concern systematic literature reviews, should not be submitted. This is partly because the technologies are still not fully developed and tend to perpetuate the statistical and selective distortions contained in their users' instructions. One exception is studies that aim specifically to evaluate the reliability of such technologies (an objective that must also be explicitly mentioned in the paper itself).
  • Creating images with AI and using them in scientific papers is discouraged. This would violate the ethical standards of scientific publishing unless the images are themselves the subject of the investigation.

Future Development

Stefan Heinemann, PhD, professor of business ethics at the FOM University of Applied Sciences in Essen, Germany, sees the development of ChatGPT as fundamental because it is developing into an artificial person. "We should consider how far such development should go," underlined Heinemann at the event, ChatGPT in Healthcare (a virtual conference).

"This does not mean that we should immediately dismiss new technologies as dystopian and worry that the subject is now lost because people no longer write proseminar papers themselves. Instead, we should find a balance," said Heinemann. We have now reached a turning point. The use of technical systems to simplify and reduce office work, or the use of nursing robots, is undoubtedly sensible.

But these technologies will not only cause an eventual externalization of bureaucracy, "but we will also begin to externalize thought itself." There is a subtle difference, underlined Heinemann. The limit will be reached "where we arrive at a place in which we start to externalize that which defines us. That is the potential to feel and the potential to think."

The priority now is to handle these new technologies appropriately. "Not by banning something like it but by integrating it cleverly," said Heinemann. This does not mean that it cannot be virtualized "or that we cannot have avatars. These are discussions that we still understand as video-game ethics, but these discussions must now be held again," said Heinemann.

He advocated talking more about responsibility and compatibility and not forgetting one's social skills. "It is not just dependent on technology and precision. It also depends on achieving social precision and solving problems with each other, and we should not delegate this. We should only delegate the things that burden us, that are unnecessary, but not give up thinking for the sake of convenience." Technology must always "remain a servant of the cause," concluded Heinemann.

Implications for the Interprofessional Healthcare Team

  • The healthcare team should be cognizant of the increasing role of AI in scientific writing and academia. They should be mindful of its benefits and limitations if using it for their own work.
  • The team should exercise caution when utilizing AI tools such as ChatGPT to assist with scientific writing and research, since "data hallucination," lack of accuracy, and ethical issues are among the concerns that have yet to be addressed.

This article was translated from the Medscape German Edition.

 
