In addition to healthcare, sound can be used to determine something about every environment. Sound provides a signaling mechanism about what may be happening in the environment. If you walked into a Starbucks with your eyes closed, you could tell whether it was busy from the level of noise. You could discern whether it was the Christmas season from the type of music playing. The loudness of voices might suggest whether customers are frustrated with wait times.
In fact, every environment carries its own unique noise, or "acoustic fingerprint."
An acoustic fingerprint is a condensed digital summary generated from an audio signal. It can identify audio samples or locate similar sounds in an audio database.5 Typically, acoustic fingerprinting is used to identify songs, melodies, tunes, or advertisements. However, it has the potential to also be used in the clinic environment (Figure 2).
FIGURE 2 | Acoustic fingerprinting
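One common way such systems condense audio is to keep only landmark features, such as the dominant spectral peak of each short frame. The NumPy sketch below is illustrative only (production fingerprinting systems hash constellations of peak pairs rather than single peaks):

```python
import numpy as np

def acoustic_fingerprint(samples, sr, frame_len=1024, hop=512):
    """Condense an audio signal into a compact fingerprint:
    the dominant frequency bin of each analysis frame.
    (Illustrative; real systems hash constellations of peaks.)"""
    n_frames = 1 + (len(samples) - frame_len) // hop
    window = np.hanning(frame_len)
    peaks = []
    for i in range(n_frames):
        frame = samples[i * hop : i * hop + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        peaks.append(int(np.argmax(spectrum)))
    return peaks

# Demo: a pure 440 Hz tone fingerprints to the bin nearest 440 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
fp = acoustic_fingerprint(tone, sr)
print(fp[0] * sr / 1024)  # ≈ 440 (bin resolution is sr/frame_len ≈ 7.8 Hz)
```

Two recordings of the same audio produce matching peak sequences, which is what makes lookup against a fingerprint database possible.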
One challenge of using acoustic fingerprinting in a clinical setting is determining how to detect sound without recording identifiable information about the people or interactions in the clinic. To address this challenge, FMCNA developed an acoustic fingerprinting sensor, a device designed to collect the unique audio signature of an environment without collecting any discernible or interpretable sound such as individual voices or conversations. The device converts the acoustic fingerprint of a geographic location into an unidentifiable digital stream of numbers at the point of "impact" as the sound is recorded on the device, so that the numbers can never be reversed to identify individual sounds.
FIGURE 3 | Taking the acoustic fingerprint
Among these 21 data elements are:
- Mean volume
- 13 mel-frequency cepstral coefficients (numbered 0 to 12)
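Only these 14 of the 21 elements are enumerated here. As a rough illustration of how such features reduce audio to irreversible numbers, the NumPy sketch below computes a mean volume (RMS) plus 13 mel-frequency cepstral coefficients for one window; the sensor's actual pipeline is not described in the source, so every parameter value is an assumption:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def fingerprint_features(samples, sr, n_fft=2048, n_mfcc=13):
    """Reduce a raw audio window to 14 numbers (mean volume + MFCCs 0-12).
    The raw samples can then be discarded, so no speech content survives."""
    mean_volume = float(np.sqrt(np.mean(samples ** 2)))
    frame = samples[:n_fft]
    frame = frame * np.hanning(len(frame))
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    log_mel = np.log(mel_filterbank(26, n_fft, sr) @ power + 1e-10)
    # DCT-II of the log mel energies; keep the first 13 coefficients.
    n = len(log_mel)
    basis = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :]
                   * np.arange(n_mfcc)[:, None])
    return np.concatenate([[mean_volume], basis @ log_mel])

# Demo on a synthetic 440 Hz tone standing in for clinic sound.
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
feats = fingerprint_features(tone, sr)
print(feats.shape)  # (14,)
```

The key privacy property is that this mapping is many-to-one: countless different waveforms share the same 14 summary numbers, so the stream cannot be inverted back into intelligible audio.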
The data that results from this process can be used in a variety of analytics. These frequently collected data points create a numeric fingerprint of a given environment that can be used for artificial intelligence, specifically machine learning-based algorithms. These algorithms can be used to flag clinics with potential real-time patient safety concerns.
Thanks to the clinicians at the Fresenius Kidney Care Stoneham Dialysis Clinic, the development team has been able to get a first glimpse of what the typical acoustic fingerprint of a dialysis clinic looks like. The acoustic fingerprinting sensor was installed in the clinic for a period of 24 hours starting at 3 p.m. on day one and finishing at 3 p.m. on day two. The device collected numeric data reflecting the acoustic fingerprint of the environment every 10 minutes, in 10-second intervals (Figure 4).
FIGURE 4 | Acoustic fingerprint of the Stoneham Dialysis Clinic
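That sampling schedule works out to 144 ten-second windows across the 24-hour period. A small sketch of the window arithmetic (the calendar date is illustrative; only the 3 p.m. start and the 10-minute/10-second cadence come from the pilot):

```python
from datetime import datetime, timedelta

# One 10-second recording window every 10 minutes, from 3 p.m. on
# day one to 3 p.m. on day two.
start = datetime(2019, 3, 1, 15, 0)   # illustrative date
end = start + timedelta(hours=24)

windows = []
t = start
while t < end:
    windows.append((t, t + timedelta(seconds=10)))
    t += timedelta(minutes=10)

print(len(windows))  # 144 ten-second windows in 24 hours
```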
Using this acoustic fingerprint of the dialysis clinic and XGBoost, a machine learning algorithm, FMCNA data scientists were able to predict when patients leave their dialysis chairs. Based on the observation at Stoneham, several factors were important in predicting when patients leave their dialysis chairs, including changes in the centroid of the frequency spectrum and mel-frequency cepstral coefficients 0, 1, 4, 7, and 8. Other important predictors of when patients leave the dialysis chairs were also identified (Figure 5, bottom figure).
FIGURE 5 | Predictive model to determine patient changeover using acoustic fingerprint
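The shape of such a model can be sketched in a few lines. The source used XGBoost on the Stoneham data; neither is published, so the example below substitutes scikit-learn's GradientBoostingClassifier (a comparable gradient-boosted tree model) and entirely synthetic data, with a made-up labeling rule:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per 10-second window, with 14
# fingerprint features (mean volume + MFCCs 0-12).
# Label 1 marks windows where a patient left the chair; the rule
# tying it to features 0 and 5 is entirely hypothetical.
X = rng.normal(size=(400, 14))
y = (X[:, 0] + X[:, 5] > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=50, random_state=0)
model.fit(X[:300], y[:300])          # train on the first 300 windows
accuracy = model.score(X[300:], y[300:])  # evaluate on the held-out 100

# Tree ensembles also report which features drove the predictions,
# analogous to the importance ranking shown in Figure 5.
top = int(np.argmax(model.feature_importances_))
```

On this toy data the model recovers the planted rule: the held-out accuracy is well above chance and the top-ranked feature is one of the two the labels actually depend on.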
While FMCNA is in the early stages of determining whether acoustic fingerprinting can help ascertain whether a clinic has a higher risk of patient safety issues, this technology can have other implications. For example, the ability to convert sound data into non-discernible numeric data can potentially be used to classify various heart sounds or the sound waves associated with the thrill generated by an arteriovenous (AV) fistula. Some artificial intelligence-based sound-processing algorithms already exist to identify heart murmurs and even gastrointestinal issues. Image-based artificial intelligence algorithms have also been used in radiation oncology and diabetic retinopathy classification.7,8,9,10,11 Using artificial intelligence to identify a wide range of cardiac abnormalities may help reverse the problem of "forgetting the art of listening to the patient."12
In addition to sound data, physicians rely on other senses to understand the patient's condition. They rely on visual data for patient physical evaluations. They may use touch and skin pressure to ascertain whether patients have peripheral edema. Uremic breath has been reported to be associated with reduced kidney function.13
Converting these senses into numeric data streams will allow for application of artificial intelligence and machine learning algorithms to help deliver truly precise personalized care to patients. This technology may allow FMCNA to obtain unique signatures of dialysis patients—as unique as the DNA or microbiome of each patient.
A BRIEF OVERVIEW OF SOUND
Sound is a pressure wave created by a vibrating object, such as a musical instrument, a person speaking, an airplane flying, or the many sources in a dialysis clinic. The vibrating object sets particles in the surrounding medium, usually air, into vibrational motion. Because the particles move parallel to the direction the wave travels, the sound wave is referred to as a longitudinal wave. Longitudinal waves create compressions and rarefactions within the air, which are typically described by the graph of a sine wave.
The frequency of a wave is measured as the number of complete back-and-forth vibrations of a particle in the air per unit of time, while the amplitude is the fluctuation or displacement of a wave from its mean value (Figure 6).14 The human eardrum vibrates in response to air vibrations, and the brain translates these different waves into comprehensible sound.
FIGURE 6 | Sample sound waves
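Both quantities can be read directly off a digitized wave. A short NumPy example, using a made-up 5 Hz tone with amplitude 2 (values chosen purely for illustration):

```python
import numpy as np

sr = 1000                      # samples per second
t = np.arange(sr) / sr         # one second of sample times
wave = 2.0 * np.sin(2 * np.pi * 5.0 * t)   # 5 Hz tone, amplitude 2

# Amplitude: peak displacement from the mean (zero) value.
amplitude = wave.max()

# Frequency: location of the strongest FFT bin (1 Hz resolution here,
# since the record is exactly one second long).
spectrum = np.abs(np.fft.rfft(wave))
frequency = np.argmax(spectrum) * sr / len(wave)

print(amplitude, frequency)
```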
However, if two different musical instruments make sound waves with the same amplitude and frequency, why do they sound so different? It's because an instrument (or a human voice, for that matter) produces a whole mixture of different waves at the same time.15 These different sine waves are overlaid on top of each other simultaneously.
To translate different waves occurring at the same time, a mathematical technique called "spectrum analysis" is used. Spectrum analysis is the technical process of decomposing a complex signal into simpler parts. One of the most common approaches to decomposing complex sound waves into simpler components is Fourier analysis, named for Joseph Fourier, a French mathematician and physicist who lived from 1768 to 1830.
An example of these complex sound waves is shown in Figure 7, along with the three individual pure tones that constitute this sample. Fourier analysis is used to determine which sine waves constitute a given signal, i.e., to deconstruct the signal into its individual sine waves. The result is expressed as sine wave amplitude as a function of frequency.16 This is what the acoustic fingerprinting sensor uses to transform sound data into numeric data streams.
FIGURE 7 | Fourier analysis (complex wave at the bottom and series of individual sine waves on top)
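The decomposition in Figure 7 can be reproduced with a discrete Fourier transform. The example below builds a complex wave from three pure tones (frequencies and amplitudes chosen arbitrarily) and recovers all three from the spectrum:

```python
import numpy as np

sr = 1000                       # samples per second
t = np.arange(sr) / sr          # one second of samples
# A complex wave built from three pure tones, as in Figure 7.
signal = (1.00 * np.sin(2 * np.pi * 50 * t)
          + 0.50 * np.sin(2 * np.pi * 120 * t)
          + 0.25 * np.sin(2 * np.pi * 300 * t))

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
# Scale FFT magnitudes back to sine-wave amplitudes.
amplitudes = 2 * np.abs(spectrum) / len(signal)

# The constituent tones fall out as the three spectral peaks.
peaks = freqs[amplitudes > 0.1]
print(peaks)  # [ 50. 120. 300.]
```

The output is exactly the representation described above: sine wave amplitude as a function of frequency.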
We wish to thank our data scientists, Tommy Blanchard and Andy Long; our IT experts, Mike Ryder and Mehran Fattahy; and Scott Ash from the Fresenius Kidney Care Strategic Analytics team for the extensive work they put into the creation of the acoustic fingerprinting sensor and regulatory risk model, as well as the staff at the Stoneham Dialysis Clinic for allowing us to use this device in their clinic.
Meet Our Experts
LEN USVYAT, PhD
Vice President, Applied Advanced Analytics, Fresenius Medical Care
Len Usvyat chairs Fresenius Medical Care's Advanced Analytics Steering Committees and works closely with the global MONitoring Dialysis Outcomes (MONDO) initiative, an international consortium of dialysis providers. His team provides analytical support and serves as a liaison to Fresenius's integrated care assets, such as its pharmacy, vascular care centers, urgent care facilities, and the Fresenius health plan. Globally, he is responsible for connecting various advanced analytics partners inside and outside Fresenius Medical Care to the Medical Office Clinical Agenda and building out FMC's capabilities in applied advanced analytics endeavors. These efforts vary and include activities such as routine and custom reporting, predictive modeling, outcomes analysis, and research. Len has published over 70 manuscripts in peer-reviewed journals. He holds a master's degree from the University of Pennsylvania and a doctorate from the University of Maastricht in the Netherlands.
WENDY MILLETTE
Vice President, Regulatory Affairs, Fresenius Kidney Care
Wendy Millette oversees the Regulatory Affairs Department for Fresenius Kidney Care, which assists Fresenius clinics in maintaining compliance with state and federal regulations and staying apprised of new and developing policies. Prior to her role in Regulatory Affairs, Wendy managed litigation in the Fresenius Legal Department from 2007 to 2014 after a career as a litigation attorney with a Boston law firm.
1. Alexander S. They decide who lives, who dies: medical miracle puts a moral burden on a small committee. LIFE, Nov. 9, 1962.
2. Corbett E. Standardized care vs. personalization: can they coexist? Health Catalyst, Apr. 12, 2016. https://www.healthcatalyst.com/standardized-care-vs-personalization-can-they-coexist.
3. Jiwa M, Millett S, Meng X, Hewitt VM. Impact of the presence of medical equipment in images on viewers' perceptions of the trustworthiness of an individual on-screen. J Med Internet Res 2012;14(4):e100.
4. "Sound medicine." American Institute of Physics news release, June 16, 2008. https://www.eurekalert.org/pub_releases/2008-06/aiop-sm062508.php.
5. "Acoustic fingerprint." Wikipedia. Accessed March 1, 2019. https://en.wikipedia.org/wiki/Acoustic_fingerprint.
6. "Internet of things." Wikipedia. Last updated March 6, 2019. https://en.wikipedia.org/wiki/Internet_of_things.
7. Thompson WR, Reinisch AJ, Unterberger MJ, Schriefl AJ. Artificial intelligence-assisted auscultation of heart murmurs: validation by virtual clinical trial. Pediatr Cardiol 2019 Mar;40(3):623-9. doi: 10.1007/s00246-018-2036-z.
8. Thompson RF, Valdes G, Fuller CD, et al. The future of artificial intelligence in radiation oncology. Int J Radiat Oncol Biol Phys 2018;102(2):247-8.
9. Thompson RF, Valdes G, Fuller CD, et al. Artificial intelligence in radiation oncology: a specialty-wide disruptive transformation? Radiother Oncol 2018;129(3):421-6.
10. Allwood G, Du X, Webberley M, Osseiran A, Marshall BJ. Advances in acoustic signal processing techniques for enhanced bowel sound analysis. IEEE Rev Biomed Eng 2019;12:240-53.
11. Sayres R, Taly A, Rahimy E, et al. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology 2019 Apr;126(4):552-64. doi: 10.1016/j.ophtha.2018.11.016.
12. Hajar R. The art of listening. Heart Views 2012;13(1):24-5.
13. Bargman JM, Skorecki K. "Chronic kidney disease." Chapter 280 in Harrison's Principles of Internal Medicine, 18th ed. New York: McGraw-Hill, 2011.
14. "What is sound?" PowerPoint slides. Accessed March 1, 2019. http://www.cs.toronto.edu/~gpenn/csc401/soundASR.pdf.
15. Woodford C. Sound. Explain That Stuff, last updated Feb. 20, 2019. https://www.explainthatstuff.com/sound.html.
16. Staab W. Fourier analysis and its role in hearing aids. Hearing Health and Technology Matters, June 17, 2012. https://hearinghealthmatters.org/waynesworld/2012/fourier-analysis-and-its-role-in-hearing-aids/.