Classification of Arteriovenous Vascular Access Aneurysms With AI

EVIDENCE-BASED INSIGHT

Automatic Classification of Arteriovenous Vascular Access Aneurysms Using Artificial Intelligence and Smartphones

September 21, 2020 • 4 min read

PETER KOTANKO, MD, FASN • ZUWEN KUANG, MS • MURAT SOR, MD •
HANJIE ZHANG, PhD


Researchers at the Renal Research Institute (RRI) are creating a mobile application that uses vascular access images to identify and classify aneurysms that pose a high risk to patients. This app is an example of how artificial intelligence can help recognize patterns in data, automatically flag potential problems, and improve personalized patient care. The app is designed to quickly analyze smartphone images taken by a caregiver, classify the severity of the patient’s vascular access problem, and immediately send a message to the clinical team. This innovative tool is currently being piloted in 20 RRI clinics in the United States.

A well-functioning vascular access is a necessity for the delivery of hemodialysis. There are two broad categories of vascular access: central venous catheters and arteriovenous (AV) access, such as AV fistula (AVF) and AV graft (AVG).

Conventional AVF monitoring requires visual inspection by a healthcare professional capable of providing a diagnosis and treatment recommendation. A complication of AVF is the development of an aneurysm, a widening that occurs when part of the AVF wall weakens, allowing it to balloon out and expand abnormally. Depending on the stage, aneurysms can pose a lethal risk to the patient in the event of a rupture. Patient mortality due to hemorrhage from an AV shunt for hemodialysis is quite high. For example, between January 1, 2003 and August 15, 2011, the New York City Office of Chief Medical Examiner reported 100 deaths caused by AV shunt hemorrhages.1

LEVERAGING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR VASCULAR ACCESS PRECISION CARE

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to “think” like humans and mimic human actions. AI brings together concepts from fields such as computer science, statistics, information processing, and data science. Machine learning (ML) is a subdiscipline of AI in which a system learns to perform a specific task from training examples rather than explicit instructions, processing large amounts of data and recognizing patterns that are associated with a given outcome measure. Deep learning is a subfield of ML that uses neural networks to, for example, recognize patterns. A convolutional neural network (CNN) is a class of deep neural network widely used for image classification. A CNN comprises an input layer (the image data), multiple hidden layers (convolution, pooling, and fully connected layers), and an output layer (the classification output).
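To make the layer structure concrete, here is a minimal, illustrative sketch (not the RRI model) of a CNN forward pass in plain numpy: one convolution with a ReLU activation, one max-pooling layer, and one fully connected layer that produces a probability per aneurysm stage. All sizes and weights are arbitrary toy values.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Downsample by taking the maximum over non-overlapping windows."""
    h, w = x.shape
    h, w = h - h % size, w - w % size          # trim to a multiple of the pool size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Map raw scores to probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((8, 8))                                        # toy grayscale input
feat = np.maximum(conv2d(image, rng.standard_normal((3, 3))), 0)  # convolution + ReLU
pooled = max_pool(feat).ravel()                                   # pooling layer, flattened
weights = rng.standard_normal((4, pooled.size))                   # fully connected layer
probs = softmax(weights @ pooled)                                 # one probability per stage 0-3
```

In a real CNN the kernel and fully connected weights are learned from the adjudicated training images rather than drawn at random, and many convolution/pooling layers are stacked.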

Given the clinical importance of aneurysm staging, RRI built a CNN to automatically classify AVF aneurysm stages. The stage classification was developed in close collaboration with Azura Vascular Care (Figure 1).

Figure 1 | Stages of AVF aneurysms and recommended actions (staging system used based on the British Renal Society’s scoring system)

To classify AVF aneurysm stages, the following process is performed (Figure 2). First, images of a diverse range of AVF accesses are collected using mobile devices such as iPhones and iPads. The following guidelines assist in image capture:

  • Take a picture of an AVF that includes the surrounding skin area.
  • Use the default resolution on the iPhone, Android phone, or tablet, as it is sufficient.
  • Use a white background.
  • Collect pictures in a diverse patient population (skin tone, age, etc.). Diversity is important for the CNN to improve its diagnostic acumen.

Figure 2 | The process for automatic AVF aneurysm classification

Azura’s vascular access experts review these images and adjudicate AVF aneurysm stages. RRI uses 80 percent of the patients’ images for training purposes to optimize the CNN; the remaining 20 percent are used for CNN validation. Since images collected from different devices may have different resolutions, images are standardized before the next step. The CNN analyzes the images and, within a fraction of a second, computes a probability for each aneurysm stage (Figure 3).
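The two preprocessing steps just described can be sketched as follows. This is an illustrative assumption of how the pipeline might look, not RRI's actual code: a crude nearest-neighbor resize to a common resolution, and an 80/20 split performed at the patient level so that all of one patient's images land on the same side of the split.

```python
import numpy as np

def standardize(img, out_h=540, out_w=960):
    """Nearest-neighbor resize so every image enters the CNN at the same
    resolution. A production pipeline would use a proper image library;
    this is only a sketch."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h       # source row for each output row
    cols = np.arange(out_w) * w // out_w       # source column for each output column
    return img[rows][:, cols]

def split_by_patient(patient_ids, train_frac=0.8, seed=0):
    """80/20 split at the *patient* level, so one patient's images never
    appear in both the training and the validation set."""
    ids = sorted(set(patient_ids))
    rng = np.random.default_rng(seed)
    rng.shuffle(ids)
    n_train = int(len(ids) * train_frac)
    return set(ids[:n_train]), set(ids[n_train:])

train_ids, val_ids = split_by_patient(range(30))
resized = standardize(np.zeros((1080, 1920, 3)))   # e.g., a full-HD photo with 3 color channels
```

Splitting by patient rather than by image is the standard safeguard against leakage: near-duplicate images of the same access must not straddle the training/validation boundary, or validation accuracy becomes optimistic.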

Figure 3 | The mobile app solution 

RRI has already collected and analyzed 15-to-20-second “panning” videos from 30 patients with aneurysms: 23 in stage 2 and seven in stage 3. The video frames that comprise the image set were extracted, each with three color channels and a resolution of 960 x 540 pixels. In this early phase, RRI collected video rather than still pictures because each video frame helps to build a large training and validation image data set. Eighty percent of the patients’ videos were used for CNN training and the remainder for validation. The CNN was built on Amazon Web Services’ SageMaker machine learning platform and achieved more than 90 percent classification accuracy on the validation images.
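The bookkeeping behind this frame-expansion step can be sketched as below. The frame rate and clip length are assumptions for illustration (30 fps, 17 s); the point is that every extracted frame inherits its patient's adjudicated stage, and the 80/20 cut is made over patients before the frames are expanded.

```python
import random

# Hypothetical inventory mirroring the cohort described above:
# 30 patients, 23 adjudicated as stage 2 and 7 as stage 3, one panning video each.
patients = [{"id": i, "stage": 2 if i < 23 else 3} for i in range(30)]

def expand_to_frames(cohort, fps=30, seconds=17):
    """Label every extracted frame with the patient's adjudicated stage,
    turning 30 short videos into thousands of training images."""
    return [(p["id"], p["stage"]) for p in cohort for _ in range(fps * seconds)]

random.seed(0)
random.shuffle(patients)
cut = int(len(patients) * 0.8)                 # 80 percent of *patients* for training
train_frames = expand_to_frames(patients[:cut])
val_frames = expand_to_frames(patients[cut:])
```

Under these assumed numbers, 30 clips yield roughly 15,000 labeled frames, which is why short videos were a faster route to a usable training set than one still image per visit.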

IMPLEMENT AI/ML MODEL FOR NONINVASIVE PERSONALIZED PATIENT CARE

RRI built an application to facilitate image capture and seamless image transfer to Amazon’s cloud in a secure, HIPAA-compliant manner. The app is built for Android and iOS tablets and smartphones.

The solution architecture has two process flows: model training and model execution (Figure 4).

Figure 4 | Architecture of the app solution

Model training (process flow 1): Image collection via the mobile app is based on one image per patient per month. Vascular access specialists provide an aneurysm classification, which is used for CNN training and validation. The CNN is trained to classify images on a scale of 0 (no aneurysm) to 3 (severe aneurysm).

Model execution (process flow 2): The CNN relays the probabilities for each of the four aneurysm stages almost instantaneously to the healthcare professional (see Figure 3).
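A hypothetical sketch of this last step, with made-up logits and an arbitrary alerting threshold (neither is from the RRI system): the CNN's raw outputs are converted to per-stage probabilities, and higher-stage findings above the threshold are flagged for the care team.

```python
import numpy as np

def softmax(z):
    """Map raw CNN outputs to probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(logits, threshold=0.5):
    """Return per-stage probabilities, the most likely stage, and whether the
    care team should be alerted. Threshold and flag rule are illustrative."""
    probs = softmax(np.asarray(logits, dtype=float))
    stage = int(probs.argmax())
    report = {f"stage {s}": round(float(p), 3) for s, p in enumerate(probs)}
    flag = stage >= 2 and probs[stage] >= threshold
    return report, stage, flag

# Toy logits for stages 0-3; stage 2 dominates, so the access is flagged.
report, stage, flag = classify([0.1, 0.4, 2.6, 0.2])
```

Reporting the full probability distribution rather than only the top stage lets the clinician weigh a borderline stage 1/stage 2 result differently from an unambiguous one.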

OUTLOOK

In the future, additional clinical data elements (e.g., prolonged bleeding time, pain) could be collected alongside the images by enabling a clinician feature in the mobile app interface.

This innovative solution is designed to minimize the burden on patients and the care team, further advance AI/ML approaches in Fresenius Medical Care, and provide precision and personalized care for dialysis patients. Furthermore, image analysis is a technology that can be leveraged beyond aneurysm classifications.

After the solution has been appropriately tested and cleared for use, the goal is to make it available to stakeholders in the United States and abroad. As identified by the Standardized Outcomes in Nephrology (SONG) initiative, the most important patient-reported outcomes focus on quality of life, maintaining lifestyle, and self-management. RRI is confident that this new aneurysm classification app has the potential to be meaningful to patients, their families, and health professionals.

Meet The Experts

Peter Kotanko, MD, FASN
Research Director, Renal Research Institute

Zuwen Kuang, MS
Vice President, BI Analytics and Data Management, Fresenius Medical Care North America

Murat Sor, MD
Chief Medical Officer, Azura Vascular Care

Hanjie Zhang, PhD
Supervisor, Biostatistics and Applied Artificial Intelligence/Machine Learning, Renal Research Institute

References

  1. Gill JR, Storck K, Kelly S. Fatal exsanguination from hemodialysis vascular access sites. Forensic Sci Med Pathol 2012 Sep;8(3):259-62. doi: 10.1007/s12024-011-9303-0.