Diagnostic Strategy in Medicine in the Era of Digital Assistance and Machine Learning

A special issue of International Journal of Environmental Research and Public Health (ISSN 1660-4601). This special issue belongs to the section "Digital Health".

Deadline for manuscript submissions: closed (30 June 2021) | Viewed by 22292

Special Issue Editor

Dr. Taro Shimizu
Diagnostic and Generalist Medicine, Dokkyo Medical University Graduate School of Medicine, Tochigi 321-0293, Japan
Interests: diagnostic strategy; clinical reasoning; diagnostic error; medical education

Special Issue Information

Dear Colleagues,

Diagnosis is an essential part of clinical expertise in medicine. Mastery of diagnostic (clinical) reasoning is a key mission of physicians, since ascertaining the correct diagnosis is vital to proper treatment and the health management of patients. The Institute of Medicine concluded that most people will experience at least one diagnostic error in their lifetime. This phenomenon is observed in every clinical setting, from rural to urban, clinic to tertiary hospital, and community to university hospital. Today, physicians can draw on digital assistance and artificial intelligence in daily clinical practice. These cutting-edge technologies allow us to access needed information swiftly and ultimately uncover hidden or unknown diagnoses. Such “artificial” systems are seemingly powerful diagnostic tools, but they have disadvantages. For example, they are often not designed to work independently; rather, they rely on information gathered and entered by human medical professionals.

It is very challenging for a digital system to capture subtle signs and history information, because every patient has a unique history, background, and personality. When taking a clinical history and performing a physical examination, physicians must account for the patient’s clinical context, perception of symptoms, and potentially hidden complaints or signs that are not apparent through routine data-gathering methods. Identifying and eliciting such case-by-case information can be a hurdle for the statistical patterns induced by current machine learning systems. Another disadvantage of artificial intelligence is that machine learning systems cannot reproduce humans’ intuitive diagnostic process. Dual process theory (DPT) has been the main theory of human diagnostic decision making; it comprises an intuitive process (system 1) and an analytical process (system 2). Machine learning falls within system 2, whereas human clinical reasoning covers both systems 1 and 2. For learning and practicing diagnostic thinking, a diagnostic thinking principle known as “diagnostic strategy (DS)” has been advocated and clinically applied by frontline physicians internationally. This strategy is designed for humans but can also be implemented in digital systems, augmenting diagnostic accuracy. In this manner, humans and machines can collaborate to achieve better diagnostic outcomes.

In this Special Issue, authors report on the current concepts and scope of human diagnostic thinking systems and machine-learning-assisted diagnostic systems, elucidating future perspectives on how the two systems can augment each other.

Dr. Taro Shimizu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. International Journal of Environmental Research and Public Health is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Diagnostic strategy
  • Clinical reasoning
  • Diagnostic error
  • Artificial intelligence
  • Machine learning
  • Deep learning
  • Medical education
  • Pivot and cluster strategy

Published Papers (6 papers)


Research


14 pages, 3388 KiB  
Article
Balanced Convolutional Neural Networks for Pneumoconiosis Detection
by Chaofan Hao, Nan Jin, Cuijuan Qiu, Kun Ba, Xiaoxi Wang, Huadong Zhang, Qi Zhao and Biqing Huang
Int. J. Environ. Res. Public Health 2021, 18(17), 9091; https://doi.org/10.3390/ijerph18179091 - 28 Aug 2021
Cited by 13 | Viewed by 2178
Abstract
Pneumoconiosis remains one of the most common and harmful occupational diseases in China, leading to huge economic losses to society owing to its high prevalence and costly treatment. Diagnosis of pneumoconiosis still depends strongly on the experience of radiologists, which hinders rapid screening of large populations. Recent research has focused on computer-aided detection based on machine learning, which has achieved high accuracy; among such methods, artificial neural networks (ANNs) show excellent performance. However, due to imbalanced samples and a lack of interpretability, widespread utilization in clinical practice remains difficult. To address these problems, we first establish a pneumoconiosis radiograph dataset including both positive and negative samples. Second, deep convolutional diagnosis approaches are compared for pneumoconiosis detection, and balanced training is adopted to improve recall. Comprehensive experiments conducted on this dataset demonstrate high accuracy (88.6%). Third, we explain the diagnosis results by visualizing suspected opacities on pneumoconiosis radiographs, which could provide a solid diagnostic reference for surgeons.
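The abstract mentions that balanced training was adopted to promote recall on imbalanced samples, but does not specify the scheme. As a hedged illustration only (not the paper's actual method), one common approach is to weight each class's loss term inversely to its frequency, as in scikit-learn's "balanced" heuristic:

```python
from collections import Counter

def balanced_class_weights(labels):
    # "Balanced" heuristic: w_c = n_samples / (n_classes * n_c),
    # so examples from a rare class are up-weighted in the loss.
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Toy imbalance: 90 negative radiographs, 10 positive.
labels = [0] * 90 + [1] * 10
w = balanced_class_weights(labels)
# w[0] == 100 / (2 * 90), w[1] == 100 / (2 * 10); the rare positive
# class is weighted 9x more heavily than the common negative class.
```

In a CNN training loop, such weights would typically be passed to the loss function (e.g., as per-class weights in a weighted cross-entropy), so that missed positives cost far more than missed negatives and recall improves.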
8 pages, 279 KiB  
Article
Effects of a Differential Diagnosis List of Artificial Intelligence on Differential Diagnoses by Physicians: An Exploratory Analysis of Data from a Randomized Controlled Study
by Yukinori Harada, Shinichi Katsukura, Ren Kawamura and Taro Shimizu
Int. J. Environ. Res. Public Health 2021, 18(11), 5562; https://doi.org/10.3390/ijerph18115562 - 23 May 2021
Cited by 13 | Viewed by 2733
Abstract
A diagnostic decision support system (DDSS) is expected to reduce diagnostic errors. However, its effect on physicians’ diagnostic decisions remains unclear. Our study aimed to assess the prevalence of artificial intelligence (AI)-generated diagnoses in physicians’ differential diagnoses when using an AI-driven DDSS that generates a differential diagnosis from information entered by the patient before the clinical encounter. In this randomized controlled study, an exploratory analysis was performed. Twenty-two physicians were required to generate up to three differential diagnoses per case by reading 16 clinical vignettes. The participants were divided into two groups, an intervention group and a control group, with and without the AI differential diagnosis list, respectively. The prevalence of physician diagnoses identical to the AI differential diagnosis (primary outcome) was significantly higher in the intervention group than in the control group (70.2% vs. 55.1%, p < 0.001). The primary outcome was significantly more than 10% higher in the intervention group than in the control group, except among attending physicians and physicians who did not trust AI. This study suggests that at least 15% of physicians’ differential diagnoses were affected by the differential diagnosis list of the AI-driven DDSS.
10 pages, 674 KiB  
Article
Efficacy of Artificial-Intelligence-Driven Differential-Diagnosis List on the Diagnostic Accuracy of Physicians: An Open-Label Randomized Controlled Study
by Yukinori Harada, Shinichi Katsukura, Ren Kawamura and Taro Shimizu
Int. J. Environ. Res. Public Health 2021, 18(4), 2086; https://doi.org/10.3390/ijerph18042086 - 21 Feb 2021
Cited by 11 | Viewed by 4018
Abstract
Background: The efficacy of artificial intelligence (AI)-driven automated medical-history-taking systems with AI-driven differential-diagnosis lists on physicians’ diagnostic accuracy has been shown. However, considering the negative effects of AI-driven differential-diagnosis lists such as omission (physicians reject a correct diagnosis suggested by AI) and commission (physicians accept an incorrect diagnosis suggested by AI) errors, the efficacy of AI-driven automated medical-history-taking systems without AI-driven differential-diagnosis lists on physicians’ diagnostic accuracy should be evaluated. Objective: The present study was conducted to evaluate the efficacy of AI-driven automated medical-history-taking systems with or without AI-driven differential-diagnosis lists on physicians’ diagnostic accuracy. Methods: This randomized controlled study was conducted in January 2021 and included 22 physicians working at a university hospital. Participants were required to read 16 clinical vignettes in which the AI-driven medical history of real patients generated up to three differential diagnoses per case. Participants were divided into two groups: with and without an AI-driven differential-diagnosis list. Results: There was no significant difference in diagnostic accuracy between the two groups (57.4% vs. 56.3%, respectively; p = 0.91). Vignettes that included a correct diagnosis in the AI-generated list showed the greatest positive effect on physicians’ diagnostic accuracy (adjusted odds ratio 7.68; 95% CI 4.68–12.58; p < 0.001). In the group with AI-driven differential-diagnosis lists, 15.9% of diagnoses were omission errors and 14.8% were commission errors. Conclusions: Physicians’ diagnostic accuracy using AI-driven automated medical history did not differ between the groups with and without AI-driven differential-diagnosis lists.

9 pages, 1673 KiB  
Article
The Utility of Virtual Patient Simulations for Clinical Reasoning Education
by Takashi Watari, Yasuharu Tokuda, Meiko Owada and Kazumichi Onigata
Int. J. Environ. Res. Public Health 2020, 17(15), 5325; https://doi.org/10.3390/ijerph17155325 - 24 Jul 2020
Cited by 21 | Viewed by 5332
Abstract
Virtual Patient Simulations (VPSs) have been cited as a novel learning strategy, but there is little evidence that VPSs yield improvements in clinical reasoning skills and medical knowledge. This study aimed to clarify the effectiveness of VPSs for improving clinical reasoning skills among medical students, and to compare improvements in knowledge and clinical reasoning skills relevant to specific clinical scenarios. We enrolled 210 fourth-year medical students in March 2017 and March 2018 to participate in a real-time pre-post experimental design conducted in a large lecture hall using a clicker. A VPS program (Body Interact®, Portugal) was implemented for one two-hour class session using the same methodology during both years. A pre–post 20-item multiple-choice questionnaire (10 knowledge and 10 clinical reasoning items) was used to evaluate learning outcomes. A total of 169 students completed the program. Participants showed significant increases in average total post-test scores, both on knowledge items (pre-test: median = 5, mean = 4.78, 95% CI (4.55–5.01); post-test: median = 5, mean = 5.12, 95% CI (4.90–5.43); p-value = 0.003) and clinical reasoning items (pre-test: median = 5, mean = 5.30, 95% CI (4.98–5.58); post-test: median = 8, mean = 7.81, 95% CI (7.57–8.05); p-value < 0.001). Thus, VPS programs could help medical students improve their clinical decision-making skills without lecturer supervision.

Review


14 pages, 567 KiB  
Review
Clinical Decision Support Systems for Diagnosis in Primary Care: A Scoping Review
by Taku Harada, Taiju Miyagami, Kotaro Kunitomo and Taro Shimizu
Int. J. Environ. Res. Public Health 2021, 18(16), 8435; https://doi.org/10.3390/ijerph18168435 - 10 Aug 2021
Cited by 23 | Viewed by 3913
Abstract
Diagnosis is one of the crucial tasks performed by primary care physicians; however, primary care is at high risk of diagnostic errors due to the characteristics and uncertainties associated with the field. Prevention of diagnostic errors in primary care requires urgent action, and one possible method is the use of health information technology. Health information technologies such as clinical decision support systems (CDSSs) have been demonstrated to improve the quality of care in a variety of medical settings, including hospitals and primary care centers, though their usefulness in the diagnostic domain is still unknown. We conducted a scoping review to confirm the usefulness of CDSSs in the diagnostic domain in primary care and to identify areas that need to be explored. Search terms were chosen to cover the three dimensions of interest: decision support systems, diagnosis, and primary care. A total of 26 studies were included in the review. As a result, we found that CDSSs and reminder tools have significant effects on screening for common chronic diseases; however, CDSSs have not yet been fully validated for the diagnosis of acute and uncommon chronic diseases. Moreover, there were few studies involving non-physicians.

Other


6 pages, 265 KiB  
Perspective
A Perspective from a Case Conference on Comparing the Diagnostic Process: Human Diagnostic Thinking vs. Artificial Intelligence (AI) Decision Support Tools
by Taku Harada, Taro Shimizu, Yuki Kaji, Yasuhiro Suyama, Tomohiro Matsumoto, Chintaro Kosaka, Hidefumi Shimizu, Takatoshi Nei and Satoshi Watanuki
Int. J. Environ. Res. Public Health 2020, 17(17), 6110; https://doi.org/10.3390/ijerph17176110 - 22 Aug 2020
Cited by 9 | Viewed by 3236
Abstract
Artificial intelligence (AI) has made great contributions to the healthcare industry. However, its effect on medical diagnosis has not been well explored. Here, we examined a trial comparing the thinking process between a computer and a master of diagnosis at a clinical conference in Japan, with a focus on general diagnosis. Consequently, not only was the AI unable to exhibit its thinking process, it also failed to include the final diagnosis. The following issues were highlighted: (1) input information to the AI could not be weighted in order of importance for diagnosis; (2) the AI could not deal with comorbidities (see Hickam’s dictum); (3) the AI was unable to consider the timeline of the illness (depending on the tool); (4) the AI was unable to consider patient context; (5) the AI could not obtain input information by itself. This comparison of the thinking process uncovered a future perspective on the use of diagnostic support tools.