Episode 52: Generative AI / Benefits and Risks in Healthcare with Hanjie Zhang


About This Episode

Generative AI represents a paradigm shift in the landscape of healthcare, offering unprecedented potential to revolutionize patient care, diagnosis, and treatment methodologies. However, amid its promising advancements, substantial and daunting risks remain. Dr. Hanjie Zhang, Senior Director of Computational Statistics and Artificial Intelligence at the Renal Research Institute in New York, discusses the benefits and risks of generative AI in healthcare.

Featured Guest:
Hanjie Zhang, MSc, PhD
Sr. Director of Computational Statistics & Artificial Intelligence 
Renal Research Institute

Dr. Hanjie Zhang joined the Renal Research Institute in 2014. She has been involved in the design of several large cluster-randomized clinical trials and in complex statistical analyses. She is also involved in designing, developing, and deploying enterprise solutions across the artificial intelligence spectrum, such as machine learning and deep learning. In the past ten years, Dr. Zhang has authored more than 30 research articles in leading kidney journals. She holds a master’s degree in statistics from Columbia University and a PhD in medical science from the University of Maastricht, The Netherlands.


Listen to This Episode


Episode Transcript:

Frank Maddux: The development of generative artificial intelligence has created excitement and vigorous debate across various industries, including health care. This novel technology transcends conventional rule-based systems, data analysis and predictions. Departing from the familiar confines of traditional AI, generative AI ventures into uncharted territory where machines wield the power of creativity without human intervention. Doctor Hanjie Zhang, senior director of computational statistics and artificial intelligence at the Renal Research Institute in New York, joins us to discuss the benefits and risks of generative AI in healthcare.

Welcome, Hanjie.

Hanjie Zhang: Thank you, Dr. Maddux.

Frank Maddux: Talk a little bit about how AI is used in health care today.

Hanjie Zhang: Broadly speaking, traditional AI is normally used in diagnosis, prognosis, and treatment recommendations. For diagnosis, for example, at RRI we have worked on the aneurysm classification app, which you’ve heard about. We just take a picture of the arteriovenous fistula or graft, and then our algorithm is able to identify the aneurysm and whether it is advanced or not advanced.

Recently we have done a prospective clinical study in which we analyzed 120 patients. We had our algorithm do the classification, and we also had a vascular access surgeon do an in-person physical evaluation. The results are very promising, even though our algorithm only sees an image while the surgeon performs a full physical evaluation.

Another example for diagnosis is that we are also working on an AI algorithm to identify the sawtooth pattern of arterial oxygen saturation during the hemodialysis treatment, which is possibly related to sleep apnea. And for prognosis, we have worked on an IDH prediction model, where we try to predict intradialytic hypotension in the next 15 to 75 minutes, in real time.

And then for treatment recommendations, I think in the last GMO Dialogues Luca talked about a particular AI algorithm for anemia management using reinforcement learning.

Frank Maddux: Our world is expanding in the types of data that we use. Right now, there's data that comes from the traditional sources: the treatments we deliver, the blood we draw and analyze. But there's also data from the environment, data that can be created elsewhere. Describe the data types that go into various AI applications, and how that has changed.

Hanjie Zhang: Currently, the data used most is still structured data like patient demographics, treatment variables, laboratory variables, and also social determinants of health for a given patient. And for our hemodialysis patients, in recent years we have had a lot of focus on intradialytic bio-signals: we measure the blood pressure, pulse rate, oxygen saturation, relative blood volume, and hematocrit every ten seconds during the hemodialysis treatment.

Besides that, we also have a lot of free text and notes. Our clinicians write a pre-treatment evaluation note, an intra-treatment note, and a post-treatment evaluation note, and all of this can be used for natural language processing. And in recent years there is also a lot of data related to wearable devices.

Like the Fitbit, the Apple Watch, and also some rings or patches. All of this really helps us to understand more about the interdialytic period, because otherwise we don't have any information from that period. And another point I want to mention is omics data. At RRI, our labs work on a lot of metabolomics data.

Frank Maddux: How is generative AI different from traditional AI, in your view?

Hanjie Zhang: Generative AI is a subset of artificial intelligence that mostly focuses on creating new content, which is different from traditional AI. I think the purpose and the output are both different between traditional AI and generative AI. Traditional AI mostly focuses on tasks like classification, prediction, and regression, and the output is most likely a probability of something or the value of something.

But generative AI mostly focuses on generating new content: let's say question answering, or composing music based on a specific description, or generating an image based on a particular description. It is about new content generation.

Frank Maddux: Give us a little bit of background about how generative AI came about, how the models are trained, and how these tools are actually developed, and some sense of how that's happened. Because really, these generative AI tools sort of just popped up about a year and a half ago and have made a huge difference in how people look at understanding questions and developing answers, as well as dealing with language and multiple-language issues.

Hanjie Zhang: Generative AI is trained mainly on a large amount of text; it can be articles, books, online resources. The main task is predicting the next word: given the sequence of previous words, what will the next word be? In 2017, there was a paper called “Attention Is All You Need” that outlined a new deep learning architecture built around the attention mechanism.

The attention mechanism allows you to look at a word in the context of the rest of the sequence, and it's really good at capturing the dependencies and relationships between words, regardless of the distance between them. I want to show a made-up example to illustrate this a little better. In this sentence it says a “blue fluffy creature roamed in the verdant forest.”

Each word, or normally I would say each token, but for simplification I'll just say each word, is encoded as a high-dimensional vector. You can imagine that in this high-dimensional space all the words have their own location, and words with similar meanings are usually closer together. So, for this example, we have the blue fluffy creature, and the word creature has its own meaning.

It’s just a general creature. But because of the attention mechanism, there is a strong relationship between "blue fluffy" and "creature," so the meaning of the word creature gets updated: it's not just a creature, it's a blue, fluffy creature. The vector representation of the word is updated, and its meaning is enriched by the context.

This is just a simple example where the words are very close to each other, but even when the relevant context is far away in a long sequence or paragraph, attention can still capture it. For example, when we say Harry and we also talk about J.K. Rowling and wizards, we know we are talking about Harry Potter. But if we talk about Harry in a context with Duke, William, and Meghan Markle, then we know this Harry is Prince Harry.

So, although it's only one word, because of the context this word has a different meaning. And back to the question of training: GPT-3 has about 175 billion parameters, and GPT-4 is said to have close to 1 trillion parameters. It takes several months and a lot of money to train.

For GPT-4, it reportedly took about $100 million to train the model. So right now, it is mostly the large tech companies that work on developing such foundation models.
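To make the attention idea above a little more concrete, here is a minimal sketch, not from the episode, of scaled dot-product self-attention in Python with NumPy. The word vectors, dimensions, and projection matrices are made up for illustration; in a real transformer they are learned during training.

```python
import numpy as np

# Toy embeddings for the words in "blue fluffy creature" (made-up 4-dim vectors).
# In a real model these come from a learned embedding table over tokens.
words = ["blue", "fluffy", "creature"]
X = np.array([
    [0.9, 0.1, 0.0, 0.2],   # blue
    [0.8, 0.2, 0.1, 0.0],   # fluffy
    [0.1, 0.9, 0.7, 0.3],   # creature
])

d = X.shape[1]
rng = np.random.default_rng(0)

# Query, key, and value projections are learned in a real transformer; random here.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Scaled dot-product attention: each word attends to every word in the sequence,
# regardless of distance, and mixes in their value vectors.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # softmax over the sequence

contextual = weights @ V   # "creature" is now enriched by "blue" and "fluffy"

for w, row in zip(words, weights):
    print(w, "attends to", dict(zip(words, row.round(2))))
```

The architecture from “Attention Is All You Need” stacks many such attention layers, with multiple heads, and the whole network is trained on the next-word prediction task she describes.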

Frank Maddux: Do you see that we would be in a position as a company to ultimately, at some point, have models trained on data that's more directly related to the work that we've done, either in our products business or our services business?

Hanjie Zhang: Yes, I believe so. There are different ways to do this. For example, you can do fine-tuning of a foundation model: basically, we take a model that another company developed and we train it further on our own data, so we refine the model to capture the meaning and context of our world. That's one way. Another way to do it is retrieval-augmented generation.

We can build our own database, and every time we ask a question, we can have the model look through the database to find the most relevant information. Then, based on this context, it can answer the question much better and in a much more informed way.
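As a rough illustration of the retrieval-augmented generation workflow described here, the following Python sketch retrieves the most relevant snippets from a small in-memory document store and assembles a grounded prompt. The embed function, the document snippets, and the prompt wording are hypothetical placeholders; a real deployment would use an actual embedding model, a vector database, and whichever chat model the organization has approved.

```python
import numpy as np

# Hypothetical embedding function: in practice, call a real embedding model
# (for example a sentence-transformer or a hosted embedding API).
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

# Step 1: build our own database of internal documents (made-up snippets).
documents = [
    "Pre-treatment evaluation note template for hemodialysis patients.",
    "Anemia management protocol summary.",
    "Vascular access aneurysm classification guidance.",
]
doc_vectors = np.stack([embed(d) for d in documents])

# Step 2: for each question, retrieve the most relevant documents by cosine similarity.
def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    scores = doc_vectors @ q           # unit-length vectors, so this is cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# Step 3: assemble a grounded prompt; the chat model then answers using this context.
question = "How do we classify an access aneurysm?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the chat model
```

The key design choice is that the model answers from retrieved context rather than from memory alone, which is what makes the answers more informed.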

Frank Maddux: GPT-4 looks to me like it's trained on data through 2021, but it also goes to today's internet, and it kind of has layers of how it looks for information: it looks into its own training and it looks into current information. And so it can actually answer questions, it seems, that are quite immediate, about things that have just happened.

How do you see that evolving over time? Today it's through this text interface or voice interface that we have with it. What are some of the other ways in which we, both as a company and as individuals, will interact with generative AI as these tools mature?

Hanjie Zhang: Previously, most models worked on a single modality. You have text, image, video, and audio, and most models worked on a single modality, either text or image. But now there are single models that actually work end to end across all modalities: text, image, video, and audio. They just read all the information in, which means all the input and output are processed by one single large neural network. This really mimics how we work as humans: we take in all the information, think it through, and come up with an output. This is definitely a very exciting field, combining all the input types.

Frank Maddux: I was reading recently that there have been some individuals in history who have looked at whether their individual responses to many different things over time could be part of the training of a foundation model for an instance of a generative AI tool, which could effectively make them somewhat immortal. In the sense that if you ask a question of a tool trained on, let's say, Franklin Roosevelt or somebody for whom there was a large body of writings or audio or speeches and so forth, then you could get the tool to answer questions as if it were that individual. Do you think that's realistic? What do you think is going to happen in that field?

Hanjie Zhang: I think it's realistic, and there is one example I have actually seen. You may know the founder of LinkedIn, Reid Hoffman. He actually did it, or not himself, but his team built a model of him. They took all the online resources, his books, his videos, his talks, his papers, all these things, and trained a model particularly for Reid. It's called Reid AI, and he actually did an interview with himself, which is very fun. It's on YouTube if you want to watch it. It's very nice.

Frank Maddux: It’s interesting: if we take famous people from history who had a large body of public information that you could train the tool on, it makes me wonder if there are opportunities to create a generative AI engine that would, in the future, answer or look at a problem in the way that they had looked at problems.

Hanjie Zhang: Yeah, it can think in the way they would normally think, which is very fascinating.

Frank Maddux: How do you think this will impact health care in the future? What do you think the opportunities are?

Hanjie Zhang: I think this will impact health care in a lot of areas. For example, routine administrative tasks: it can help patients with scheduling, canceling, and rescheduling appointments, and it also helps with billing and insurance claims. And I think a big field is ambient recording.

If you have a video or audio recording of the interaction between the patient and the health care professional, it can really record everything so that nothing is missed. And if that information is built into the electronic system in a good format, it really can reduce a lot of the documentation workload for physicians. It can reduce staff burnout and also make sure they are really focused on patient care, rather than, you know, when you go to see a doctor and they are just typing on the computer and barely look at you.

So, I think this really helps with the administrative tasks. I would also imagine that in the future a medical chatbot will eventually be on the market. Let's say you can just chat with the chatbot and it gives you general information about diseases, treatment options, medication options. It can really help with patient education.

It can communicate in any language and at any level of communication, because if you talk to people in terms that are too technical or too medical, it's really hard for them to understand all these terms.

Frank Maddux: One of the things that I remember from when I was practicing, and this has been a few years now, is that when I was in active practice, I would have greatly benefited from a tool that was like an AI assistant, one that would give me the opportunity to make sure that I was managing an individual conversation at a level where I understood the guidelines I needed to follow.

And these tools have a huge opportunity to be part of the output stream for our predictive modeling, for our insight analysis, for how we engage in new training materials, and for how we overcome language differences that might occur. It just seems like that kind of personal assistant, as a medical provider, would have been extremely helpful in a lot of settings. Do you think that's realistic, or am I just dreaming?

Hanjie Zhang: Yeah, I think that's definitely realistic. And for some companies, maybe they already have it. The idea is straightforward, and I think the application is really doable. Basically, if you have particular training materials, you feed them into the generative AI, let it learn them first, and then it can really follow them well. We actually did one with the bone mineral medication algorithm.

Frank Maddux: It's a very long physical algorithm.

Hanjie Zhang: The bone mineral algorithm about cinacalcet is, let's say, about ten pages of different rules: when your calcium is this and your phosphorus is that, what should you do. It's very complicated. We tried it using that algorithm, and then we gave it the patient information, and it actually did pretty well. It's very interesting and very fascinating.
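A minimal sketch of how an experiment like the one described here could be wired up, assuming an OpenAI-style chat completion client. The file name, patient values, and model name are placeholders, and this is only an illustration of the prompting pattern; any real clinical use would need validation, privacy review, and clinician oversight.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key in the environment

client = OpenAI()

# Placeholder: the ~10-page bone mineral / cinacalcet dosing algorithm would be loaded here.
algorithm_text = open("bone_mineral_algorithm.txt").read()

# Placeholder patient labs, for illustration only.
patient = "Calcium 9.2 mg/dL, phosphorus 6.1 mg/dL, PTH 650 pg/mL, current cinacalcet 30 mg daily."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Follow the dosing algorithm below exactly. "
                    "Cite the rule you applied, and say 'insufficient information' if no rule matches.\n\n"
                    + algorithm_text},
        {"role": "user", "content": patient},
    ],
    temperature=0,  # reduce variability when the task is strict rule-following
)

print(response.choices[0].message.content)
```

Setting the temperature to zero and instructing the model to cite the rule it applied makes it easier to check the answer against the written algorithm.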

Frank Maddux: So that brings up the question of how long it will take to overcome some of the risks that this might not be right, so we can get to the benefit of that simpler implementation of the algorithm without worrying about hallucinations or bias or privacy or other things. What do you think the maturing process is going to be, to where we can overcome our anxieties over these risks and begin to actually use this?

Hanjie Zhang: I think we should start with use cases where the hallucination rate is really low, and we should have a lot of evaluation: not only deployment-phase evaluation but also pre-deployment evaluation, and I think clinical trials, which are really important. And we should have real-world evidence to earn people's trust, along with transparency.

Also, a lot of research initiatives right now are working on the reasoning capability of AI. When we ask a question, we should also ask the model why it came to that decision. If it can explain itself well enough for us to really believe it's true, that also helps to reduce hallucination, because by the time we look at the answer, a self-evaluation has already been done. So, I think this could really help.

Frank Maddux: Kidney care has a lot of opportunities for how these tools might be used. Their lowest-level use is language; they're really good at different languages. And I'm just curious, with your background, and I know you speak multiple languages, have you tested it between English and Mandarin, for example? Is it an effective translation device?

Hanjie Zhang: We have a use case on nutritional guidance for the patient, and for that use case we asked it to generate recipes and instructions. Besides that, we also evaluated it in different languages, asking the same thing for recipes and cooking instructions. At RRI we have people who speak different languages, so we did Chinese, German, Hungarian, and Dutch: the same recipe, but in different languages, each evaluated by a native speaker. In our evaluation it was very reliable. I feel it's very good.

Frank Maddux: Interesting. Well, these are remarkable tools. In our world, where we're dealing with both chronic kidney disease and end-stage kidney disease, and a therapeutic set of interventions like hemodialysis or hemodiafiltration, where do you see the potential for generative AI having a positive impact on patient care?

Hanjie Zhang: I think it can be used in many different ways. For example, I'm working in the research field, and generative AI can already help me a lot in research, in many different aspects. Right now, we have a vast amount of literature in front of us.

When you search a topic, you see thousands of abstracts and manuscripts, and retrieval-augmented generation is already helping me distill insights from all of this. It can help me review the literature faster and more effectively. It can also help with, for example, protocol development. And if you have inclusion and exclusion criteria, it can help you screen patients based on those criteria and on the database you have.

It could also help with data analysis. It is pretty good at writing in a lot of languages like SAS and Python, so it helps with data analysis and with manuscript writing. For example, normally when you have results output directly from a programming language, you still need someone to interpret them.

But right now, if you give it the output, it can actually interpret the results pretty well, and then you can incorporate that into your manuscript, which is very easy. So, this is very promising. It can also be used for patient mental health support. First, we have a shortage of mental health professionals.

It's really hard to have a timely conversation with them. But with generative AI, you can have a conversation anytime, anywhere, and in any language you want. And you probably cannot find anyone more patient than a machine. Also, sometimes when people talk about mental health, they feel that if they say too much, other people may judge them; now you don't need to worry about whether it will judge you.

So, you are more open. I feel this should help when discussing mental health issues with a chatbot.

Frank Maddux: It will be interesting to see how the integration of these tools changes the way that we do clinical research; the ability to interview patients who are research participants may change dramatically through interactions with a generative AI engine. Let me ask you one other technical question. ChatGPT came from OpenAI, Microsoft uses Copilot, Google just announced Gemini, and there's Bard out there. Are there going to be competing generative AI tools, or are they all still based on the same trained model?

Hanjie Zhang: I think they are trained in different ways. Originally, when OpenAI released their earlier models, the code was open source, but for later versions they don't release any details of the code or the model architecture anymore. So, the big tech companies actually build their models independently.

Their models may differ slightly in architecture and in the training materials they use, so you also see slight differences in performance. Actually, I would say that sometimes the performance in certain areas is really different, and some companies may focus more on particular fields. For example, Google focuses a large proportion of its effort on the medical field; they train medical-domain large language models called Med-PaLM, and they also have Med-Gemini. OpenAI still mostly builds general-purpose models. They all release their earlier-version models for you to fine-tune, so if we want to do fine-tuning, we wouldn't be able to use the most recent version of the model, but we can use an earlier version.

Frank Maddux: I've been talking with Doctor Hanjie Zhang about generative AI, and AI in general, in both health care and kidney care. Any final comments today, Hanjie, on things you want to discuss or bring up for our audience?

Hanjie Zhang: I want to talk about the use of generative AI in protein folding. I know right now we are mostly talking about text or image or video generation, but it also has a different kind of application, for example the protein folding problem. We have millions of proteins, and their 3D structure determines what they do, but for years it took a really long time to determine, based on the sequence of amino acids, what the 3D structure will be. With generative AI, they are now also building models for this: they train a model on amino acid sequences and the corresponding known 3D structures, and it is now able to predict the 3D structure of a protein from its amino acid sequence.

And the results are very encouraging. In the later versions, they can model not only the 3D structure but also the interactions between proteins, and even interactions with RNAs and DNAs. I think this is a very new field, and it can have a transformative impact on the whole of biology and health care.
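To illustrate the training setup she outlines, sequence in and 3D coordinates out, here is a deliberately tiny sketch: a linear model fit by gradient descent to map a one-hot amino acid sequence to per-residue coordinates. Everything here, the sequence, the "known" structure, and the model, is made up for illustration; real systems such as AlphaFold use far richer inputs and attention-based architectures.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def one_hot(seq: str) -> np.ndarray:
    """Encode an amino acid sequence as a (length, 20) one-hot matrix."""
    X = np.zeros((len(seq), len(AMINO_ACIDS)))
    for i, aa in enumerate(seq):
        X[i, AMINO_ACIDS.index(aa)] = 1.0
    return X

# Toy training example: a short sequence and made-up "known" coordinates per residue.
seq = "MKTAYIAK"
X = one_hot(seq)                                   # (8, 20)
rng = np.random.default_rng(0)
true_coords = rng.normal(size=(len(seq), 3))       # stand-in for a solved structure

# Model: per-residue linear map from one-hot encoding to (x, y, z),
# trained by gradient descent on the squared-error loss.
W = np.zeros((len(AMINO_ACIDS), 3))
lr = 0.1
for _ in range(500):
    pred = X @ W                                   # predicted coordinates, (8, 3)
    grad = X.T @ (pred - true_coords) / len(seq)   # gradient of the squared-error loss
    W -= lr * grad

print("final MSE:", float(np.mean((X @ W - true_coords) ** 2)))
```

In practice, a residue's position depends on the whole sequence context rather than on the residue identity alone, which is why attention-style models, rather than per-residue lookups like this toy, are what made the breakthrough possible.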

Frank Maddux: If you understand where particular molecules may be able to interact with other membranes or other molecular structures, it seems like that's going to enhance the ability of target identification and drug development and these kinds of areas quite significantly. It also makes me think that the opportunity to build novel compounds, chemical compounds, will be very different if you understand how they're going to be in three dimensions after you begin to put them together. So that's a huge opportunity from a research standpoint and a genomics standpoint I would think.  

Thanks so much for being here. I really appreciate your insights and how you think this may develop for our company in the future.

Hanjie Zhang: Thank you so much for having me here today.