Imaging Pearls ❯ Deep Learning ❯ Deep Learning and the GI Tract

  • “Abdominal cancers continue to pose daily challenges to clinicians, radiologists and researchers. These challenges are faced at each stage of abdominal cancer management, including early detection, accurate characterization, precise assessment of tumor spread, preoperative planning when surgery is anticipated, prediction of tumor aggressiveness, response to therapy, and detection of recurrence. Technical advances in medical imaging, often in combination with imaging biomarkers, show great promise in addressing such challenges. Information extracted from imaging datasets owing to the application of radiomics can be used to further improve the diagnostic capabilities of imaging. However, the analysis of the huge amount of data provided by these advances is a difficult task in daily practice. Artificial intelligence has the potential to help radiologists in all these challenges. Notably, the applications of AI in the field of abdominal cancers are expanding and now include diverse approaches for cancer detection, diagnosis and classification, genomics and detection of genetic alterations, analysis of tumor microenvironment, identification of predictive biomarkers and follow-up. However, AI currently has some limitations that need further refinement for implementation in the clinical setting. This review article sums up recent advances in imaging of abdominal cancers in the field of image/data acquisition, tumor detection, tumor characterization, prognosis, and treatment response evaluation.”
    CT and MRI of abdominal cancers: current trends and perspectives in the era of radiomics and artificial intelligence
    Maxime Barat · Anna Pellat · Christine Hoeffel · Anthony Dohan · Romain Coriat · Elliot K. Fishman · Stéphanie Nougaret · Linda Chu · Philippe Soyer
    Japanese Journal of Radiology (2024) 42:246–260
  • “Radiomics can be used for several tasks such as characterizing indeterminate liver lesions in patients with cirrhosis or pancreatic tumors, grading HCC or pancreatic neuroendocrine tumors (pNETs), identifying microvascular invasion (MVI) in HCC, and predicting response to transarterial chemoembolization (TACE) or transarterial radioembolization. More recently, radiomics has demonstrated utility for the assessment of tumor microenvironment, which is a relatively new concept that refers to an assemblage of multiple elements contained in tissues that surround the tumor. Tumor microenvironment is a dynamic and heterogeneous assemblage made of precursor cells, fibroblasts, immune cells, endothelial cells, signaling molecules, and extracellular matrix components that play a major role in cancer biology. Tumor microenvironment is involved in tumor growth, invasion, metastasis but also in response or resistance to systemic therapies and local therapies such as thermal ablation.”
    CT and MRI of abdominal cancers: current trends and perspectives in the era of radiomics and artificial intelligence
    Maxime Barat · Anna Pellat · Christine Hoeffel · Anthony Dohan · Romain Coriat · Elliot K. Fishman · Stéphanie Nougaret · Linda Chu · Philippe Soyer
    Japanese Journal of Radiology (2024) 42:246–260
  • “Radiomics models have been developed to predict response to local intra-arterial therapy of HCC. Park et al. found that HCCs with complete response after TACE show lower homogeneity at CT-texture analysis than those with partial response. A hybrid model combining CT-based radiomic features and three clinical factors (Child–Pugh score, α-fetoprotein level, and HCC size) was a strong predictor of longer survival in patients with HCC treated using TACE (hazard ratio 19.88; 95% CI 6.37–62.02) (p < 0.0001). Interestingly, in a study by Aujay et al., MRI-based radiomics outperformed RECIST criteria and the Liver Imaging Reporting and Data System treatment response algorithm for the assessment of early response of locally advanced HCC to yttrium-90 transarterial radioembolization.”
    CT and MRI of abdominal cancers: current trends and perspectives in the era of radiomics and artificial intelligence
    Maxime Barat · Anna Pellat · Christine Hoeffel · Anthony Dohan · Romain Coriat · Elliot K. Fishman · Stéphanie Nougaret · Linda Chu · Philippe Soyer
    Japanese Journal of Radiology (2024) 42:246–260
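    The texture features referenced above (for example, lower homogeneity on CT-texture analysis of HCC after TACE) are typically computed from a tumor segmentation with a radiomics toolkit. The snippet below is only an illustrative sketch using the open-source pyradiomics package with placeholder file names; the cited studies do not state that this software or these settings were used.

      # Hedged example: extracting first-order and GLCM texture features (including
      # homogeneity-type measures) from a CT image and a tumor mask with pyradiomics.
      from radiomics import featureextractor

      extractor = featureextractor.RadiomicsFeatureExtractor()
      extractor.disableAllFeatures()
      extractor.enableFeatureClassByName('firstorder')   # intensity statistics
      extractor.enableFeatureClassByName('glcm')         # gray-level co-occurrence texture features

      # Placeholder paths: any SimpleITK-readable image and segmentation mask.
      features = extractor.execute('hcc_portal_venous_ct.nrrd', 'hcc_segmentation.nrrd')
      for name, value in features.items():
          if name.startswith('original_'):               # skip diagnostic metadata keys
              print(name, value)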
  • “Owing to the development of radiomics, CT and MRI can now be used as predictive tools to better estimate response to treatment. Although major advances have been made in abdominal cancer imaging with promising results, these results are still at an early stage and often obtained with local algorithms. Although AI helps extract a huge number of features and classify them, there is a need to bring together all the information to use it in a more efficient way. The next step should be to investigate how all these advances can be implemented in the real-life setting and how they can positively influence care and outcomes in patients with abdominal cancers. State of the art imaging is forcing radiologists to rethink what they do and how they should do it. Current challenges to implementation include reimbursement issues and well-designed translational trials for AI validation that need large volumes of high-quality and representative data for the development of robust AI algorithms.”
    CT and MRI of abdominal cancers: current trends and perspectives in the era of radiomics and artificial intelligence
    Maxime Barat · Anna Pellat · Christine Hoeffel · Anthony Dohan · Romain Coriat · Elliot K. Fishman · Stéphanie Nougaret · Linda Chu · Philippe Soyer
    Japanese Journal of Radiology (2024) 42:246–260
  • Rationale and objectives: Automated evaluation of abdominal computed tomography (CT) scans should help radiologists manage their massive workloads, thereby leading to earlier diagnoses and better patient outcomes. Our objective was to develop a machine-learning model capable of reliably identifying suspected bowel obstruction (BO) on abdominal CT.
    Conclusion: The 3D mixed convolutional neural network developed here shows great potential for the automated binary classification (BO yes/no) of abdominal CT scans from patients with suspected BO.
    Clinical relevance statement: The 3D mixed CNN automates bowel obstruction classification, potentially automating patient selection and CT prioritization, leading to an enhanced radiologist workflow.
    Deep learning for automatic bowel-obstruction identification on abdominal CT  
    Quentin Vanderbecq et al.
    European Radiology https://doi.org/10.1007/s00330-024-10657
  • Key Points  
    • Bowel obstruction’s rising incidence strains radiologists. AI can aid urgent CT readings.  
    • Employed 1345 CT scans, neural networks for bowel obstruction detection, achieving high accuracy and sensitivity on external testing.  
    • 3D mixed CNN automates CT reading prioritization effectively and speeds up bowel obstruction diagnosis.
    Deep learning for automatic bowel-obstruction identification on abdominal CT  
    Quentin Vanderbecq et al.
    European Radiology https://doi.org/10.1007/s00330-024-10657 
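    The paper frames the task as a single binary label (bowel obstruction: yes/no) per CT volume. The authors' "3D mixed" architecture is not reproduced here; the sketch below is a generic, illustrative 3D CNN binary classifier in PyTorch, with input size and layer widths that are our own assumptions.

      import torch
      import torch.nn as nn

      # Minimal 3D CNN binary classifier for a CT volume (illustrative only; not the
      # architecture used by Vanderbecq et al.). Input: one-channel resampled volume.
      class Simple3DCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                  nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                  nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool3d(1),           # global pooling -> (B, 64, 1, 1, 1)
              )
              self.classifier = nn.Linear(64, 1)     # single logit: obstruction yes/no

          def forward(self, x):
              x = self.features(x).flatten(1)
              return self.classifier(x)

      model = Simple3DCNN()
      volume = torch.randn(2, 1, 64, 128, 128)       # batch of 2 placeholder CT volumes
      logits = model(volume)
      probs = torch.sigmoid(logits)                  # probability of bowel obstruction
      loss = nn.BCEWithLogitsLoss()(logits, torch.tensor([[1.0], [0.0]]))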
  • “Photon-counting CT is an emerging technology with a high potential to optimize spectral imaging and generate opportunities for quantitative imaging. Photon-counting CT uses energy-resolving detectors (i.e., photon counting detectors) and photon-counting CT provides a more comprehensive and accurate sampling of the energy dependence found in the CT images by comparison with DECT. As a result, photon-counting CT has multiple advantages over conventional CT. Of these, photon-counting allows individual photon counting and photon energy discrimination, nulls the electronic noise, and improves spatial resolution and energy weighting of the low-energy photons leading to greater contrast within the tissues. In spite of all these theoretical advantages, limited attention has been placed to the clinical applications of photon-counting CT in abdominal cancers.”
    CT and MRI of abdominal cancers: current trends and perspectives in the era of radiomics and artificial intelligence.  
    Barat M, Pellat A, Hoeffel C, Dohan A, Coriat R, Fishman EK, Nougaret S, Chu L, Soyer P.  
    Jpn J Radiol. 2023 Nov 6. Epub ahead of print. 
  • “Another remarkable advance in the evaluation of abdominal cancers is the recent development of cinematic rendering. Cinematic rendering is a three-dimensional visualization technique based on a new lighting model that provides photorealistic images with enhanced internal and surface details. Cinematic rendering now has multiple applications in real life in the field of abdominal cancers. It is used as an important adjunct to traditional CT images in the evaluation of pancreatic cancers, gastric and small bowel masses and liver tumors. It has been suggested that cinematic rendering increases the conspicuity of hypervascular HCC, helps identify the feeding arteries before TACE and appreciate internal tumor architecture.”
    CT and MRI of abdominal cancers: current trends and perspectives in the era of radiomics and artificial intelligence.  
    Barat M, Pellat A, Hoeffel C, Dohan A, Coriat R, Fishman EK, Nougaret S, Chu L, Soyer P.  
    Jpn J Radiol. 2023 Nov 6. Epub ahead of print. 
  • Knowing the gene mutation status is often crucial to best select treatment in a variety of abdominal cancers. As a result, imaging is now at the forefront to help identify a specific mutation. GISTs represent a major category of abdominal cancers for which detection of gene mutation is of major importance because it has a direct impact on management and therapy. In this regard, GISTs with KIT proto-oncogene mutations (representing 80−85% of all GISTs) favorably respond to imatinib (i.e., a selective TKI of the KIT receptors), whereas those with platelet-derived growth factor receptor alpha (PDGFRA) mutation (representing 5−7% of all GISTs) do not. Moreover, the affected exon (exon 11 vs. exon 9) responsible for KIT mutation also has a major impact on tumor response to imatinib. GISTs with KIT exon 11 mutation are more responsive to imatinib, whereas GISTs with KIT exon 9 mutation are more responsive to sunitinib or may require a higher dose of imatinib.
    CT and MRI of abdominal cancers: current trends and perspectives in the era of radiomics and artificial intelligence.  
    Barat M, Pellat A, Hoeffel C, Dohan A, Coriat R, Fishman EK, Nougaret S, Chu L, Soyer P.  
    Jpn J Radiol. 2023 Nov 6. Epub ahead of print. 
  • Artificial intelligence (AI) is a transformative technology that is capturing popular imagination and can revolutionize biomedicine. AI and machine learning (ML) algorithms have the potential to break through existing barriers in oncology research and practice such as automating workflow processes, personalizing care, and reducing healthcare disparities. Emerging applications of AI/ML in the literature include screening and early detection of cancer, disease diagnosis, response prediction, prognosis, and accelerated drug discovery. Despite this excitement, only a few AI/ML models have been properly validated and fewer have become regulated products for routine clinical use. In this review, we highlight the main challenges impeding AI/ML clinical translation.
    Translation of AI into oncology clinical practice
    Issam El Naqa et al.
    Oncogene; https://doi.org/10.1038/s41388-023-02826-z
  • AI and ML are transforming the field of oncology with the potential to automate its workflow processes, personalize cancer care, accelerate research discoveries, and reduce health disparities. These potential applications have been reported across a wide spectrum of oncology practice from screening and early detection, cancer diagnosis, prognosis and prediction, radiological imaging and therapeutics, to disruptive applications in drug discovery and immunotherapy. Despite this enthusiasm about AI in the literature, its impact on the bedside is yet to be felt. Several factors have impeded AI clinical translation, including algorithmic issues (e.g., lack of proper validation, lack of transparency, ability to make inferences), data modeling (e.g., limited sample size, bias, shift, drift), lack of consensus standard, and commercial hype to name a few.
    Translation of AI into oncology clinical practice
    Issam El Naqa et al.
    Oncogene; https://doi.org/10.1038/s41388-023-02826-z 
  • “Evaluation of class activation maps can help radiologists gain confidence in how an AI algorithm assesses the imaging findings that contribute to the tool’s final decision. In the present study, the activation maps indicated that the AI results were based on alterations in pericolonic fat (e.g., extracolonic tumor extension, fluid collection, and pericolic fat stranding), consistent with classic secondary findings, rather than on bowel wall thickening. This explanation is crucial for radiologists’ uptake of CADx tools. However, the authors do not indicate the time required by the AI algorithm to analyze each CT examination, and they acknowledge that clinical use of the tool requires a user interface for the radiologist to manually define the pathologic colon segment in 3D. Depending on implementation, this step could be too time-consuming to be accepted by radiologists in a clinical setting. Nonetheless, the present work should inspire radiologists and engineers to create their own AI solutions to address such issues.”
    Beyond the AJR: Artificial Intelligence Helps Radiologists to Improve Their Performance in Differentiating Colon Carcinoma From Acute Diverticulitis on CT.  
    Martín-Noguerol T, Luna A.  
    AJR Am J Roentgenol. 2023 Sep 20:1. doi: 10.2214/AJR.23.29466. 
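    Class activation maps of the kind discussed in this editorial are commonly generated with Grad-CAM-style methods, in which gradients of the class score are pooled and used to weight the last convolutional feature maps. The editorial does not describe the underlying study's implementation; the sketch below is a generic Grad-CAM illustration on a stock torchvision 2D classifier, purely to show the mechanics.

      import torch
      import torch.nn.functional as F
      from torchvision.models import resnet18

      # Generic Grad-CAM sketch (illustrative; not the tool evaluated in the study).
      model = resnet18(weights=None).eval()
      activations, gradients = {}, {}

      def fwd_hook(module, inp, out):
          activations['value'] = out

      def bwd_hook(module, grad_in, grad_out):
          gradients['value'] = grad_out[0]

      layer = model.layer4[-1]                     # last convolutional block
      layer.register_forward_hook(fwd_hook)
      layer.register_full_backward_hook(bwd_hook)

      x = torch.randn(1, 3, 224, 224)              # placeholder image tensor
      scores = model(x)
      scores[0, scores.argmax()].backward()        # gradient of the predicted class score

      weights = gradients['value'].mean(dim=(2, 3), keepdim=True)   # pooled gradients per channel
      cam = F.relu((weights * activations['value']).sum(dim=1))     # weighted sum of feature maps
      cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode='bilinear')
      cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalized heatmap to overlay on the image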
  • “Use of an AI algorithm trained to differentiate CC and AD may significantly improve radiologists’ performance in the assessment of patients with focal large-bowel wall thickening.”
    Beyond the AJR: Artificial Intelligence Helps Radiologists to Improve Their Performance in Differentiating Colon Carcinoma From Acute Diverticulitis on CT.  
    Martín-Noguerol T, Luna A.  
    AJR Am J Roentgenol. 2023 Sep 20:1. doi: 10.2214/AJR.23.29466. 
  • BACKGROUND. Splenomegaly historically has been assessed on imaging by use of potentially inaccurate linear measurements. Prior work tested a deep learning artificial intelligence (AI) tool that automatically segments the spleen to determine splenic volume.
    OBJECTIVE. The purpose of this study is to apply the deep learning AI tool in a large screening population to establish volume-based splenomegaly thresholds.
    METHODS. This retrospective study included a primary (screening) sample of 8901 patients (4235 men, 4666 women; mean age, 56 ± 10 [SD] years) who underwent CT colonography (n = 7736) or renal donor CT (n = 1165) from April 2004 to January 2017 and a secondary sample of 104 patients (62 men, 42 women; mean age, 56 ± 8 years) with end-stage liver disease who underwent contrast-enhanced CT performed as part of evaluation for potential liver transplant from January 2011 to May 2013. The automated deep learning AI tool was used for spleen segmentation, to determine splenic volumes.
    Automated Deep Learning Artificial Intelligence Tool for Spleen Segmentation on CT: Defining Volume-Based Thresholds for Splenomegaly
    Alberto A. Perez, Victoria Noe-Kim, Meghan G. Lubner, et al.
    AJR 2023; 221:1–9
  • RESULTS. In 8853 patients included in analysis of splenic volumes (i.e., excluding a value of 0 mL or error values), the mean automated splenic volume was 216 ± 100 [SD] mL. The weight-based volumetric threshold (expressed in milliliters) for splenomegaly was calculated as (3.01 × weight [expressed as kilograms]) + 127; for weight greater than 125 kg, the splenomegaly threshold was constant (503 mL). Sensitivity and specificity for volume-defined splenomegaly were 13% and 100%, respectively, at a true craniocaudal length of 13 cm, and 78% and 88% for a maximum 3D length of 13 cm. In the secondary sample, both observers identified segmentation failure in one patient. The mean automated splenic volume in the 103 remaining patients was 796 ± 457 mL; 84% (87/103) of patients met the weight-based volume-defined splenomegaly threshold.
    CONCLUSION. We derived a weight-based volumetric threshold for splenomegaly using an automated AI-based tool.
    CLINICAL IMPACT. The AI tool could facilitate large-scale opportunistic screening for splenomegaly.
    Automated Deep Learning Artificial Intelligence Tool for Spleen Segmentation on CT: Defining Volume-Based Thresholds for Splenomegaly
    Alberto A. Perez, Victoria Noe-Kim, Meghan G. Lubner, et al.
    AJR 2023; 221:1–9
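    Because the reported threshold is a simple linear function of body weight (plateauing at 503 mL above 125 kg), it is easy to apply once a splenic volume is available from any segmentation method. The sketch below restates that arithmetic in Python; the function names and example values are ours, not part of the published tool.

      # Weight-based volumetric splenomegaly threshold reported by Perez et al. (AJR 2023).
      # The splenic volume itself would come from a segmentation tool; here it is just a number.

      def splenomegaly_threshold_ml(weight_kg: float) -> float:
          """Return the splenic-volume threshold (mL) above which splenomegaly is suggested."""
          if weight_kg > 125:            # threshold plateaus for weight > 125 kg
              return 503.0
          return 3.01 * weight_kg + 127.0

      def is_splenomegaly(splenic_volume_ml: float, weight_kg: float) -> bool:
          return splenic_volume_ml > splenomegaly_threshold_ml(weight_kg)

      # Example: a 70 kg patient has a threshold of about 338 mL.
      print(splenomegaly_threshold_ml(70))   # 337.7
      print(is_splenomegaly(400, 70))        # True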
  • Key Finding
    - A previously tested automated deep learning AI tool was used to calculate splenic volumes from the CT examinations of 8853 patients from an outpatient screening population. Splenic volume was most strongly associated with weight among a range of patient factors, and a weight-based volume-defined threshold for splenomegaly was derived.
    Importance
    - Use of the automated deep learning AI tool and weight-based volumetric thresholds could allow large-scale evaluation for splenomegaly on CT examinations performed for any indication.
    Automated Deep Learning Artificial Intelligence Tool for Spleen Segmentation on CT: Defining Volume-Based Thresholds for Splenomegaly
    Alberto A. Perez, Victoria Noe-Kim, Meghan G. Lubner, et al.
    AJR 2023; 221:1–9
  • “In conclusion, we derived a simple weight-based volumetric threshold for determining the presence of splenomegaly using an automated AI-based tool for determining splenic volume from CT examinations. Standard linear splenic measurements (which historically have been used as a surrogate for splenic volume) had suboptimal performance in detecting volume-based splenomegaly, and the weight-based volumetric thresholds indicated the presence of splenomegaly in most patients who underwent pre–liver transplant CT. The AI tool could be applied for more robust evaluation for splenomegaly in comparison with linear measurements as well as for large-scale opportunistic screening for splenomegaly.”  
    Automated Deep Learning Artificial Intelligence Tool for Spleen Segmentation on CT: Defining Volume-Based Thresholds for Splenomegaly
    Alberto A. Perez, Victoria Noe-Kim, Meghan G. Lubner, et al.
    AJR 2023; 221:1–9
  • “The future of infectious-disease surveillance will feature emerging forms of technology, including but not limited to biosensors, quantum computing, and augmented intelligence. Recent advances in large language models (e.g., Generative Pre-trained Transformer 4 [GPT-4]) hold great promise for the future of infectious-disease surveillance because these models can process and analyze vast amounts of unstructured text and may enhance our ability to streamline labor-intensive processes and spot hidden trends. Other types of technology, not yet invented, will surely make a difference. However, over the course of the Covid-19 pandemic, our current methods have been put to the test, and their performance has been highly variable. The success of the next generation of AI-driven surveillance tools will depend heavily on our ability to unravel the shortcomings of our algorithms, recognize which of our achievements are generalizable, and incorporate the many lessons learned into our future behavior.”
    Advances in Artificial Intelligence for Infectious-Disease Surveillance
    John S. Brownstein et al.
    N Engl J Med 2023;388:1597-607.
  • “Early-warning systems for disease surveillance have benefitted immensely from the incorporation of AI algorithms and analytics. At any given moment, the Web is flooded with disease reports in the form of news articles, press releases, professional discussion boards, and other curated fragments of information. These validated communications can range from documentation of cases of innocuous infections well known to the world to the first reports of emerging viruses with pandemic potential. However, the volume and distributed nature of these reports constitute much more information than can be made sense of promptly by even highly trained persons, making early warning of emerging viruses nearly impossible.”
    Advances in Artificial Intelligence for Infectious-Disease Surveillance
    John S. Brownstein et al.
    N Engl J Med 2023;388:1597-607.
  • “Enter AI-trained algorithms that can parse, filter, classify, and aggregate text for signals of infectious-disease events with high accuracy at unprecedented speeds. HealthMap, just one example of these types of systems, has done so successfully for more than a decade. This Internet-based infectious-disease surveillance system provided early evidence of the emergence of influenza A (H1N1) in Mexico and was used to track the 2019 outbreak of vaping-induced pulmonary disease in the United States.”
    Advances in Artificial Intelligence for Infectious-Disease Surveillance
    John S. Brownstein et al.
    N Engl J Med 2023;388:1597-607.
  • “Artificial intelligence using computer-aided diagnosis (CADx) in real time with images acquired during colonoscopy may help colonoscopists distinguish between neoplastic polyps requiring removal and nonneoplastic polyps not requiring removal. In this study, we tested whether CADx analyzed images helped in this decision-making process.”
    Real-Time Artificial Intelligence–Based Optical Diagnosis of Neoplastic Polyps during Colonoscopy
    Ishita Barua, et al.
    NEJM Evid 2022; 1 (6)
  • “Real-time polyp assessment with CADx did not significantly increase the diagnostic sensitivity of neoplastic polyps during a colonoscopy compared with optical evaluation without CADx.”
    Real-Time Artificial Intelligence–Based Optical Diagnosis of Neoplastic Polyps during Colonoscopy
    Ishita Barua, et al.
    NEJM Evid 2022; 1 (6)
  • “Implementation of AI in cancer screening and clinical diagnosis requires proof of benefits from high-quality clinical studies. Our international multicenter study assessed the incremental gain of a specific CADx AI system for real-time polyp assessment during colonoscopy. Our study indicates that real-time AI with CADx may not significantly increase the sensitivity for small neoplastic polyps. However, CADx may improve specificity for optical diagnosis of small neoplastic polyps and increase colonoscopist confidence with visual diagnosis of polyps.”
    Real-Time Artificial Intelligence–Based Optical Diagnosis of Neoplastic Polyps during Colonoscopy
    Ishita Barua, et al.
    NEJM Evid 2022; 1 (6)
  • “In conclusion, real-time assessment with CADx did not significantly increase sensitivity for neoplastic polyps during colonoscopy. There are promising signals for increased specificity and improved confidence of optical diagnosis, but our statistical approach precludes us from making any definitive statements about the identification and removal of small rectosigmoid polyps using the colonoscopy system we employed.”
    Real-Time Artificial Intelligence–Based Optical Diagnosis of Neoplastic Polyps during Colonoscopy
    Ishita Barua, et al.
    NEJM Evid 2022; 1 (6)
  • Purpose: The aim of the study was to develop a prediction model for closed-loop small bowel obstruction integrating computed tomography (CT) and clinical findings.
    Conclusions: A random forest model found clinical factors including prior surgery, age, lactate, and imaging factors including whirl sign, fecalization, and U/C-shaped bowel configuration are helpful in improving the prediction of CLSBO. Individual CT findings in CLSBO had either high sensitivity or specificity, suggesting that accurate diagnosis requires systematic assessment of all CT signs.
    Machine Learning Based Prediction Model for Closed-Loop Small Bowel Obstruction Using Computed Tomography and Clinical Findings
    Goyal, Riya et al
    J Comput Assist Tomography: 3/4 2022 - Volume 46 - Issue 2 - p 169-174
  • Results: Surgery confirmed CLSBO in 185 of 223 patients with clinically suspected CLSBO. Age greater than 52 years showed 2.82 (95% confidence interval = 1.13–4.77) times higher risk for CLSBO (P = 0.021). Sensitivity/specificity of CT findings included proximal dilatation (97/5%), distal collapse (96/2%), mesenteric edema (94/5%), pneumatosis (1/100%), free air (1/98%), and portal venous gas (0/100%). The random forest model combining imaging/clinical findings yielded an area under receiver operating curve of 0.73 (95% confidence interval = 0.58–0.94), sensitivity of 0.72 (0.55–0.85), specificity of 0.8 (0.28–0.99), and accuracy of 0.73 (0.57–0.85). Prior surgery, age, lactate, whirl sign, U/C-shaped bowel configuration, and fecalization were the most important variables in predicting CLSBO.
    Machine Learning Based Prediction Model for Closed-Loop Small Bowel Obstruction Using Computed Tomography and Clinical Findings
    Goyal, Riya et al
    J Comput Assist Tomography: 3/4 2022 - Volume 46 - Issue 2 - p 169-174
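    As an illustration of the modeling approach described above (a random forest over combined CT signs and clinical variables, evaluated with an area under the ROC curve and variable importance), the sketch below uses scikit-learn on synthetic placeholder data; the feature list mirrors predictors named in the abstract, but the data and settings are our own assumptions, not the study's dataset.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 223
      X = np.column_stack([
          rng.integers(18, 90, n),        # age
          rng.integers(0, 2, n),          # prior surgery (0/1)
          rng.normal(2.0, 1.0, n),        # lactate
          rng.integers(0, 2, n),          # whirl sign on CT (0/1)
          rng.integers(0, 2, n),          # fecalization (0/1)
          rng.integers(0, 2, n),          # U/C-shaped bowel configuration (0/1)
      ])
      y = rng.integers(0, 2, n)           # surgically confirmed CLSBO (0/1), synthetic labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

      print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
      print("variable importance:", rf.feature_importances_)   # ranks predictors, as reported in the paper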
  • “Major advances in medical AI have had a tremendous impact at two main levels: (1) image recognition and (2) big data analysis. AI can detect very small changes that are difficult for humans to perceive. For example, AI can detect lung cancer up to a year before a physician [3], and AI can correctly diagnose skin cancer with superior diagnostic performance compared to that of a physician [4]. In addition, AI can reach the desired output within seconds and with more “consistent” performance. Doctors may have “inconsistent” performance due to insufficient training or exhaustion from busy clinical demands. A visual assessment by imaging physicians is qualitative, subjective, and prone to errors, and subject to intra-observer and inter-observer variability. AI may have better performance than physicians in some cases [5], and it has great promise to reduce clinician workload and the cost of medical care. However, it is necessary for clinicians to verify the output from AI for patient care.”
    A New Dawn for the Use of Artificial Intelligence in Gastroenterology, Hepatology and Pancreatology
    Akihiko Oka  , Norihisa Ishimura and Shunji Ishihara  
    Diagnostics 2021, 11, 1719. https://doi.org/10.3390/diagnostics11091719 
  • “In conclusion, there is little doubt that AI technology will benefit almost all medical personnel, ranging from specialty physicians to paramedics, in the future. Furthermore, patients should benefit from AI technology directly via mobile applications. Physicians should collaborate with the different stakeholders within the AI ecosystem to provide ethical, practical, user-friendly, and cost-effective solutions that reduce the gap between research settings and applications in clinical practice. Collaborations with regulators, patient advocates, AI companies, technology giants, and venture capitalists will help move the field forward.”
    A New Dawn for the Use of Artificial Intelligence in Gastroenterology, Hepatology and Pancreatology
    Akihiko Oka  , Norihisa Ishimura and Shunji Ishihara  
    Diagnostics 2021, 11, 1719. https://doi.org/10.3390/diagnostics11091719 
  • “Early detection of postoperative complications, including organ failure, is pivotal in the initiation of targeted treatment strategies aimed at attenuating organ damage. In an era of increasing health-care costs and limited financial resources, identifying surgical patients at a high risk of postoperative complications and providing personalised precision medicine-based treatment strategies provides an obvious pathway for reducing patient morbidity and mortality. We aimed to leverage deep learning to create, through training on structured electronic health-care data, a multilabel deep neural network to predict surgical postoperative complications that would outperform available models in surgical risk prediction.”
    Assessing the utility of deep neural networks in predicting postoperative surgical complications: a retrospective study
    Alexander Bonde et al.
    Lancet Digit Health 2021; 3: e471–85
  • "In this retrospective study, we used data on 58 input features, including demographics, laboratory values, and 30-day postoperative complications, from the American College of Surgeons (ACS) National Surgical Quality Improvement Program database, which collects data from 722 hospitals from around 15 countries. We queried the entire adult (≥18 years) database for patients who had surgery between Jan 1, 2012, and Dec 31, 2018. We then identified all patients who were treated at a large midwestern US academic medical centre, excluded them from the base dataset, and reserved this independent group for final model testing. We then randomly created a training set and a validation set from the remaining cases.”
    Assessing the utility of deep neural networks in predicting  postoperative surgical complications: a retrospective study  
    Alexander Bonde  et al.
    Lancet Digit Health 2021; 3: e471–85 Lancet Digit Health 2021; 3: e471–85 
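    A minimal sketch of the kind of multilabel network described above (58 structured input features, one sigmoid output per 30-day complication) is shown below in PyTorch. The layer sizes, dropout, and number of complication labels are our own assumptions; the authors' actual architecture and training pipeline are not reproduced here.

      import torch
      import torch.nn as nn

      # Hedged sketch of a multilabel feed-forward network over structured EHR features.
      N_FEATURES, N_COMPLICATIONS = 58, 18          # label count is an assumption

      model = nn.Sequential(
          nn.Linear(N_FEATURES, 256), nn.ReLU(), nn.Dropout(0.3),
          nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.3),
          nn.Linear(128, N_COMPLICATIONS),          # one logit per complication
      )
      loss_fn = nn.BCEWithLogitsLoss()              # multilabel: independent sigmoid per label

      x = torch.randn(32, N_FEATURES)               # a batch of preoperative feature vectors
      y = torch.randint(0, 2, (32, N_COMPLICATIONS)).float()
      loss = loss_fn(model(x), y)
      probs = torch.sigmoid(model(x))               # per-complication risk estimates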
  • “We have developed unified prediction models, based on deep neural networks, for predicting surgical postoperative complications. The models were generally superior to previously published surgical risk prediction tools and appeared robust to changes in the underlying patient population. Deep learning could offer superior approaches to surgical risk prediction in clinical practice.”
    Assessing the utility of deep neural networks in predicting  postoperative surgical complications: a retrospective study  
    Alexander Bonde  et al.
    Lancet Digit Health 2021; 3: e471–85 
  • Implications of all the available evidence  
    “Our deep learning models were superior to previously published surgical risk prediction tools, despite the increasingly rigorous standards for model validation. Our algorithms might be used by clinicians to help guide future preoperative, intraoperative, and postoperative risk management, serving as an important step towards personalised medicine in surgery. A clinical trial is required to identify whether the use of deep learning models can help to reduce the incidence of surgical postoperative complications.”
    Assessing the utility of deep neural networks in predicting  postoperative surgical complications: a retrospective study  
    Alexander Bonde  et al.
    Lancet Digit Health 2021; 3: e471–85 
  • Background/Purpose: Pancreatic ductal adenocarcinoma (PDAC) is a leading cause of mortality in the world with the overall 5-year survival rate of 6%. The survival of patients with PDAC is closely related to recurrence and therefore it is necessary to identify the risk factors for recurrence. This study uses artificial intelligence approaches and multi-center registry data to analyze the recurrence of pancreatic cancer after surgery and its major determinants.
    Results: Based on variable importance from the random forest, major predictors of disease-free survival after surgery were tumor size (0.00310), tumor grade (0.00211), TNM stage (0.00211), T stage (0.00146) and lymphovascular invasion (0.00125). The coefficients of these variables were statistically significant in the Cox model (p < 0.05). The C-Index averages of the random forest and the Cox model were 0.6805 and 0.7738, respectively.  
    Conclusions: This is the first artificial-intelligence study with multi-center registry data to predict disease-free survival after the surgery of pancreatic cancer. The findings of this methodological study demonstrate that artificial intelligence can provide a valuable decision-support system for treating patients undergoing surgery for pancreatic cancer. However, at present, further studies are needed to demonstrate the actual benefit of applying machine learning algorithms in clinical practice.  
    Usefulness of artificial intelligence for predicting recurrence following surgery for pancreatic cancer: Retrospective cohort study  
    Kwang-Sig Lee et al.
    International Journal of Surgery 93 (2021) 106050 
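    The study benchmarks a random forest against a Cox proportional-hazards model using the concordance index (C-index). The sketch below shows only the Cox/C-index side, using the lifelines package on synthetic placeholder data; the column names and value distributions are our own assumptions, not the registry data.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(0)
      n = 200
      df = pd.DataFrame({
          "tumor_size_cm": rng.normal(3.0, 1.0, n),
          "tumor_grade": rng.integers(1, 4, n),
          "lymphovascular_invasion": rng.integers(0, 2, n),
          "dfs_months": rng.exponential(18.0, n),          # disease-free survival time
          "recurrence": rng.integers(0, 2, n),             # event indicator
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="dfs_months", event_col="recurrence")
      cph.print_summary()                                  # hazard ratios and p-values per predictor
      print("C-index:", cph.concordance_index_)            # analogous to the 0.77 reported for the Cox model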
  • Background: Image recognition using artificial intelligence with deep learning through convolutional neural networks (CNNs) has dramatically improved and been increasingly applied to medical fields for diagnostic imaging. We developed a CNN that can automatically detect gastric cancer in endoscopic images.
    Methods: A CNN-based diagnostic system was constructed based on Single Shot MultiBox Detector architecture and trained using 13,584 endoscopic images of gastric cancer. To evaluate the diagnostic accuracy, an independent test set of 2296 stomach images collected from 69 consecutive patients with 77 gastric cancer lesions was applied to the constructed CNN.
    Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images
    Toshiaki Hirasawa et al.
    Gastric Cancer (2018) 21:653–660
  • Results: The CNN required 47 s to analyze 2296 test images. The CNN correctly diagnosed 71 of 77 gastric cancer lesions with an overall sensitivity of 92.2%, and 161 non-cancerous lesions were detected as gastric cancer, resulting in a positive predictive value of 30.6%. Seventy of the 71 lesions (98.6%) with a diameter of 6 mm or more as well as all invasive cancers were correctly detected. All missed lesions were superficially depressed and differentiated-type intramucosal cancers that were difficult to distinguish from gastritis even for experienced endoscopists. Nearly half of the false-positive lesions were gastritis with changes in color tone or an irregular mucosal surface.
    Conclusion: The constructed CNN system for detecting gastric cancer could process numerous stored endoscopic images in a very short time with a clinically relevant diagnostic ability. It may be well applicable to daily clinical practice to reduce the burden of endoscopists.
    Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images
    Toshiaki Hirasawa et al.
    Gastric Cancer (2018) 21:653–660
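    The headline metrics above follow directly from the reported counts (71 of 77 lesions detected, with 161 false-positive detections), as this short check illustrates:

      # Reproducing the reported detection metrics from the raw counts in the abstract.
      true_positives = 71      # gastric cancer lesions correctly detected
      total_lesions = 77       # all gastric cancer lesions in the test set
      false_positives = 161    # non-cancerous lesions flagged as cancer

      sensitivity = true_positives / total_lesions
      ppv = true_positives / (true_positives + false_positives)

      print(f"sensitivity: {sensitivity:.1%}")   # 92.2%
      print(f"PPV: {ppv:.1%}")                   # 30.6%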
  • “In conclusion, we developed a CNN system for detecting gastric cancer using stored endoscopic images, which processed extensive independent images in a very short time. The clinically relevant diagnostic ability of the CNN offers a promising applicability to daily clinical practice for reducing the burden of endoscopists as well as telemedicine in remote and rural areas as well as in developing countries where the number of endoscopists is limited.”
    Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images
    Toshiaki Hirasawa et al.
    Gastric Cancer (2018) 21:653–660
  • AI and Medicine: Changing the Workflow
    - GI Endoscopy
    - Acute findings in CT (pneumothorax, intracranial bleed, PE)
    - Goal is to increase physician accuracy and perhaps increase workflow/throughput
  • “Artificial intelligence (AI) is coming to medicine in a big wave. From making diagnosis in various medical conditions, following the latest advancements in scientific literature, suggesting appropriate therapies, to predicting prognosis and outcome of diseases and conditions, AI is offering unprecedented possibilities to improve care for patients. Gastroenterology is a field that AI can make a significant impact. This is partly because the diagnosis of gastrointestinal conditions relies a lot on image-based investigations and procedures (endoscopy and radiology). AI-assisted image analysis can make accurate assessment and provide more information than conventional analysis. AI integration of genomic, epigenetic, and metagenomic data may offer new classifications of gastrointestinal cancers and suggest optimal personalized treatments.”
    Artificial intelligence in gastroenterology: where are we heading?
    Joseph JY Sung, Nicholas CH Poon
    Front. Med. https://doi.org/10.1007/s11684-020-0742-4
  • “AI integration of genomic, epigenetic, and metagenomic data may offer new classifications of gastrointestinal cancers and suggest optimal personalized treatments. In managing relapsing and remitting diseases such as inflammatory bowel disease, irritable bowel syndrome, and peptic ulcer bleeding, convoluted neural network may formulate models to predict disease outcome, enhancing treatment efficacy. AI and surgical robots can also assist surgeons in conducting gastrointestinal operations. While the advancement and new opportunities are exciting, the responsibility and liability issues of AI-assisted diagnosis and management need much deliberations.”
    Artificial intelligence in gastroenterology: where are we heading?
    Joseph JY Sung, Nicholas CH Poon
    Front. Med. https://doi.org/10.1007/s11684-020-0742-4
  • “AI has another major potential in healthcare: to predict the clinical outcome of patients on the basis of clinical data set, genomic information, and medical images. Cardiologists have developed algorithms to assess the risk of cardiovascular disease and claimed that their prediction is superior to existing scoring systems. By analyzing echocardiograms, deep CNN model claimed to predict the mortality of patients with heart failure. Risk assessment and prediction of outcome have always been a challenge in public health and clinical medicine. Now, AI is offering a new direction to these challenges.”
    Artificial intelligence in gastroenterology: where are we heading?
    Joseph JY Sung, Nicholas CH Poon
    Front. Med. https://doi.org/10.1007/s11684-020-0742-4
  • “However, when AI-assisted endoscopy and surgery are put into daily use, who should take the responsibility of clinical decisions? When a malignant polyp is missed or misdiagnosed as benign hence left unresected, when the depth of invasion is assessed to be superficial and surgery is not offered, and when the resection margin of endoscopic dissection is wrongly assessed and follow-up operations are not performed, these scenarios might lead to disastrous outcomes or even medical–legal consequences. Where should medical liability rest on?”
    Artificial intelligence in gastroenterology: where are we heading?
    Joseph JY Sung, Nicholas CH Poon
    Front. Med. https://doi.org/10.1007/s11684-020-0742-4
  • Objective: Sepsis remains a costly and prevalent syndrome in hospitals; however, machine learning systems can increase timely sepsis detection using electronic health records. This study validates a gradient boosted ensemble machine learning tool for sepsis detection and prediction, and compares its performance to existing methods.
    Results: The MLA achieved an AUROC of 0.88, 0.84, and 0.83 for sepsis onset and 24 and 48 h prior to onset, respectively. These values were superior to those of SIRS (0.66), MEWS (0.61), SOFA (0.72), and qSOFA (0.60) at time of onset. When trained on UCSF data and tested on BIDMC data, sepsis onset AUROC was 0.89.
    Discussion and conclusion: The MLA predicts sepsis up to 48 h in advance and identifies sepsis onset more accurately than commonly used tools, maintaining high performance for sepsis detection when trained and tested on separate datasets.
    Evaluation of a machine learning algorithm for up to 48-hour advance prediction of sepsis using six vital signs
    Barton C et al.
    Computers in Biology and Medicine 109 (2019) 79-84
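    As a rough illustration of the model class evaluated here (a gradient-boosted ensemble over six vital signs, scored by AUROC), the sketch below uses scikit-learn on synthetic placeholder data; the vital-sign list and all settings are our own assumptions rather than the validated algorithm.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5000
      # Assumed feature order: heart rate, respiratory rate, temperature, systolic BP, diastolic BP, SpO2
      X = rng.normal(loc=[85, 18, 37.0, 120, 75, 97], scale=[15, 4, 0.6, 15, 10, 2], size=(n, 6))
      y = rng.integers(0, 2, n)                      # sepsis onset within the prediction window (0/1), synthetic

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = GradientBoostingClassifier().fit(X_tr, y_tr)
      print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))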
  • “The machine learning algorithm assessed in this study is capable of predicting sepsis up to 48 h in advance of onset with an AUROC of 0.83. This performance exceeds that of commonly used detection methods at time of onset, and may in turn lead to improved patient outcomes through early detection and clinical intervention.”
    Evaluation of a machine learning algorithm for up to 48-hour advance prediction of sepsis using six vital signs
    Barton C et al.
    Computers in Biology and Medicine 109 (2019) 79-84
