This educational activity is intended for an international audience of non-US ophthalmologists.
The goal of this activity is to increase clinicians' knowledge of the latest data on the use of deep learning (DL) systems in the diagnosis and prognosis of eye diseases such as diabetic retinopathy and age-related macular degeneration, as well as the challenges and ethical considerations surrounding the use of artificial intelligence (AI) in ophthalmology.
Upon completion of this activity, participants will:
WebMD Global requires each individual who is in a position to control the content of one of its educational activities to disclose any relevant financial relationships occurring within the past 12 months that could create a conflict of interest.
The Faculty of Pharmaceutical Medicine of the Royal Colleges of Physicians of the United Kingdom (FPM) has reviewed and approved the content of this educational activity and allocated it 0.25 continuing professional development (CPD) credits.
For questions regarding the content of this activity, contact the accredited provider for this CME/CE activity noted above. For technical assistance, contact [email protected]
There are no fees for participating in or receiving credit for this online educational activity. For information about your eligibility to claim credit, please consult your professional licensing board.
This activity is designed to be completed within the time designated on the title page; physicians should claim only those credits that reflect the time actually spent participating in the activity. To successfully earn credit, participants must complete the activity online during the credit eligibility period that is noted on the title page.
Follow these steps to claim a credit certificate for completing this activity:
We encourage you to complete an Activity Evaluation to provide feedback for future programming.
You may now view or print the certificate from your CME/CE Tracker. You may print the certificate but you cannot alter it.
Credits will be tallied in your CME/CE Tracker and archived for 6 years; at any point within this time period you can print out the tally as well as the certificates by accessing "Edit Your Profile" at the top of your Medscape homepage.
*The credit that you receive is based on your user profile.
CPD Released: 6/30/2020
Valid for credit through: 6/30/2021, 11:59 PM EST
Richard F. Spaide, MD: Hello, everyone. Welcome to what will be a very interesting discussion concerning a hot topic, "Overcoming Challenges in Ophthalmology: Using Artificial Intelligence to Personalize Care." My name is Richard Spaide and I'm with the Vitreous, Retina, Macula Consultants in New York.
I'm pleased to introduce 2 superstars in artificial intelligence (AI) who will be joining us as panel members. The first person is so sharp his wife won't even let him touch balloons at his kid's birthday party. That's Aaron Lee. Aaron is from the University of Washington. The second one is so smart he has more letters after his name than he has in his name, and that's Tariq Aslam.
We're going to be learning about deep learning (DL) approaches today in this panel session. We'll talk about advances in imaging, because that's how we get personal information about the patient. Finally, we're going to talk about some of the current challenges in applying DL to ophthalmology and some of the ethical considerations. Let's start with the basis of AI. Tariq, can you walk us through this topic?
Tariq Aslam DM, FRCSEd, MBChB, PhD: Thanks, Rick. That's one of the best introductions I've ever had. AI we've all heard of: it's where computer systems essentially appear to emulate thought processes that we recognize in humans. It emerged as an academic discipline as long ago as the 1950s, with techniques such as expert systems, where a specialist's knowledge would be programmed directly into the system. Now, what machine learning (ML) did was change that around a little bit. Rather than having preprogrammed rules, in ML the computer learns from example data and experience to develop its own rules. We essentially feed it data and what the real-world answer is, and it works out algorithms to predict that answer from any new input data.
One of the great advantages of that is the ability to learn from lots of data. It's very well suited to modern times, where we have masses of images and information on tap and masses of processing power. If the rules aren't known by humans, it can create those rules itself, even abstract ones. There are many types of ML, but one of the key ones I will talk about is the neural net. A simple neural net consists of basically a layer of units, or neurons, taking in data. One of the greatest breakthroughs that has led to this real explosion in AI is the ability to train many layers of neural net, and that's called a DL system.
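[Editor's note: to make the idea of a network learning its own rules from example data concrete, here is a minimal illustrative sketch, not from the discussion itself. A tiny network with one hidden layer is trained in pure Python on the XOR problem, a rule that a single layer of units cannot represent, which is exactly why stacking layers matters. All numbers and settings are illustrative.]

```python
import math
import random

random.seed(0)

# Example data plus the real-world answer: XOR, a rule that no single
# layer of units can represent on its own.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.1, 1.0], 0.0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 4  # hidden units; a "deep" system just stacks more such layers
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()
lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)  # gradient of squared error at the output
        for j in range(H):
            d_hid = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_hid
            w1[j][0] -= lr * d_hid * x[0]
            w1[j][1] -= lr * d_hid * x[1]
        b2 -= lr * d_out
err_after = total_error()
print(f"squared error: {err_before:.3f} -> {err_after:.4f}")
```

No rule was ever programmed in: the weights are adjusted until the network's own rule reproduces the answers it was shown.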
Now, this all sounds fantastic, and it is fantastic, but there are challenges. One of the key things to understand is that algorithms only work for the exact questions they're trained for. So if you train one to detect hemorrhages in a diabetic population that happens to be all Caucasian, it might not work in Black patients. And there's no common sense to fall back on, the way humans have; it can basically only deal with what it's learned from the training dataset. It's not integrated with prior knowledge and it doesn't have any contextual understanding to fall back on. It's data hungry.
So that can be a problem in rare conditions. It's fine for age-related macular degeneration (AMD) and diabetes, but if we have a patient with a rare mucopolysaccharidosis we might need to fall back on other systems of imaging in order to identify or diagnose things. Finally, it's not always transparent in terms of why the result is as it is. So, there's a challenge in making sure that there's some explainability for AI, and that's going to be really important when clinicians are going to be using that or interacting with it on a daily basis.
Dr Spaide: Can I ask you one really simple question? What's the difference between ML and AI?
Dr Aslam: It depends on whom you ask; there are more definitions and opinions than there are terms. But in my mind, AI is the overall umbrella term for emulating human thought processes, and ML is only one aspect of AI. You can produce systems that appear intelligent which haven't learned anything but were preprogrammed by experts. A lot of those have gone out of favor now, but they were the predominant form up to the 1980s -- systems, for example, designed to diagnose or to determine the bacterial nature of infections. Those systems were produced in the 1970s and 1980s; they existed, but they didn't involve ML, they involved expert-programmed rules.
Aaron Lee, MD, MSc: Just to give my 2 cents, I think that one way to think about this is that all ML is AI, but not all AI is ML.
Dr Spaide: Very good. OK, let's go through an overview of some of the applications of AI in ophthalmology. On the next slide we can see how AI augments our ability to extract information from imaging. For this I used a new high-resolution optical coherence tomography (OCT) device that's not yet on the market but is capable of 3-micron resolution. I extracted information about the intermediate and deep capillary plexuses from this, and then I volume rendered it. You can see in this video tremendous clarity in both of those vascular planes, and you can see how they're interlinked by vessels. Those vessels have been seen in histology but never imaged before in humans. Aaron, you've used AI to automate testing. Can you give us a little clue about that?
Dr Lee: Yeah. So this is really the work of Ted Spaide, who's working as a postdoc in our lab.
Dr Spaide: He must be brilliant. How did you get this guy?
Dr Lee: Well, he happens to be somebody's son. This was a really amazing body of work. Right now when we take Goldmann tonometry there's a subjective component to lining up the mires. And we actually showed that humans have a bias towards even numbers on tonometry readings because of the markings on the tonometer. What we wanted to do was to take out some of the subjectivity using an automated approach. Ted designed a custom DL network to do the segmentation task. What you can see in the video is that not only are we able to precisely locate the mires, but you can actually see ocular pulsation. So this is like what you were saying, Rick: We were able to extract data that has been around us all the time in a new way using AI.
Dr Spaide: Very good. AI can also be used to help make diagnoses, and in this slide I've illustrated a number of different papers in which it's been used to diagnose glaucoma. Glaucoma, as you know, is a difficult thing to diagnose anyway. What do you think about that, Aaron?
Dr Lee: This is one of the major advances, in my opinion, in applying AI to the medical field. These DL models are able to make diagnoses in a fully automated fashion, integrate data from multiple different sources, and reach human expert-level decision making. That's really amazing, right? It's really baffling and amazing to think that, just by giving it images and consensus grading, a computer model can learn the inherent relationships to make that kind of complex decision. The Achilles heel of this is what Tariq mentioned: the behavior outside of what you trained them to do is very unclear, and it may be completely undefined. That's a significant limitation of the current technology.
Dr Spaide: Can I ask you a question? Suppose we gave a computer visual field data over an extended period of time, OCT data over an extended period of time. Could it connect the dots and make a diagnosis earlier than what humans can make?
Dr Lee: Yes, I think that's possible. We did a little bit of work trying to build models to predict a future visual field given a single visual field. And we showed that DL models were able to do that. I do believe that is possible, but that's a very emergent field of research right now.
Dr Spaide: What's interesting to me, in addition to being able to extract data from images and make diagnoses, is that we can predict function just from imaging. Tariq, can you tell us about your work in this field?
Dr Aslam: Yes. We did a relatively simple study where we developed a neural net to model the impact of OCT changes on visual acuity. Essentially, we trained the neural net by inputting features of 1000 or so OCT scans along with the corresponding visual acuity, which acted as the target for the algorithm's final output. Ultimately, the neural net developed an algorithm that would output visual acuity from any OCT scan you input. So, you could potentially put in the features of an OCT scan that you'd see, and it would tell you what the expected visual acuity is. If you've got somebody with cataracts, for example, with some changes on OCT, you could use the algorithm to predict what the expected vision would be. Or you can change small features on the OCT and see what sort of impact they have on visual function. And that gives you more insight into how structure and function are related.
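[Editor's note: an illustrative sketch of the kind of model described, not the study's actual network or data. A single linear unit is trained by gradient descent to map hypothetical OCT-derived features to visual acuity on synthetic data; the feature names and ground-truth relationship are invented for the example.]

```python
import random

random.seed(1)

# Synthetic stand-in data: each "scan" is summarized by two hypothetical
# OCT-derived features paired with a measured visual acuity (logMAR).
# In the study described, roughly 1000 real scans played this role.
def make_scan():
    thickness = random.uniform(200, 500)  # central retinal thickness, um
    fluid = random.uniform(0, 1)          # subretinal fluid score, 0-1
    # Invented ground-truth relationship, just so there is something to learn.
    acuity = 0.1 + 0.002 * (thickness - 250) + 0.4 * fluid + random.gauss(0, 0.05)
    return (thickness / 500, fluid), acuity  # scale thickness into ~[0, 1]

train = [make_scan() for _ in range(800)]
held_out = [make_scan() for _ in range(200)]

# A single linear unit trained by stochastic gradient descent: the simplest
# possible stand-in for a neural net mapping OCT features to acuity.
w, b = [0.0, 0.0], 0.0
lr = 0.05
for _ in range(300):
    for (x0, x1), target in train:
        err = w[0] * x0 + w[1] * x1 + b - target
        w[0] -= lr * err * x0
        w[1] -= lr * err * x1
        b -= lr * err

mae = sum(abs(w[0] * x0 + w[1] * x1 + b - t) for (x0, x1), t in held_out) / 200
print(f"mean absolute error on held-out scans: {mae:.3f} logMAR")
```

Perturbing one input feature and re-running the prediction is the structure-function probe described in the discussion: change the fluid score slightly and see how the predicted acuity moves.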
I mean, in one of the more recent papers, we're using this and extending it, developing more advanced algorithms to try to look at the issue of subretinal fluid. When does it have an impact on visual acuity, and when does it not? What clinical trials can do is take 100 or 200 people with set criteria for subretinal fluid and tell you what impact that might have on vision. What they can't do, or not very easily at least, is look at what the impact would be if you also have some hyperreflective material, or different degrees of atrophy. When you get to that level of complexity, something like a neural net or ML system is actually quite useful.
Dr Spaide: I get a number of referrals from neuro-ophthalmologists. Patients who have vision loss or vision difficulties and they're hard to explain, they either go to a retina doctor first, then neuro-ophthalmologist, or a neuro-ophthalmologist first, then a retina doctor. How do you think this is going to fit into that scheme?
Dr Aslam: I get the same issue. One of the things that's tricky is that we can train ML to work on certain modalities, but at the moment it's limited to very specific questions. The machines don't really have the ability to think outside the box or translate one narrow ability into something more advanced. For the issue you're describing, I think we're going to need to look at algorithms across multiple modalities -- maybe autofluorescence and optical coherence tomography angiography (OCTA), maybe electrodiagnostics -- and ultimately try to put all those things together. But we're not quite there yet. I think we're still at the level of supervised learning, where we have a specific question and try to get a good level of accuracy on that specific question.
Dr Spaide: AI can help predict the prognosis of disease. Aaron, what are your insights into this matter?
Dr Lee: I think this is where DL can really start to take off. Everything that Tariq and I have talked about so far is recapitulating human or expert behavior: we apply some classification of disease in our human framework of understanding, and then we try to get models to recapitulate that. That's true of the models we've discussed up to this point. But where I think DL can really start to learn new things about diseases is when you tie it to something that happens objectively in the future. That's what these 2 bodies of work demonstrate: there is some hope that DL models can learn new ideas about diseases in a way that we haven't been able to do before.
Dr Spaide: So let me address the elephant in the room. If you talk to ophthalmologists about this, they immediately jump to the idea that they're going to be out of a job because AI is going to take over everything. You're going to need 3 ophthalmologists and everybody else is going to be unemployed. Is that true? Or is it just that AI is going to augment our ability to do our job?
Dr Lee: I really hope that DL and AI will elevate the standard of care and democratize expertise. If we could have a million Rick Spaides in the world looking at all of our OCTs, wouldn't that be wonderful? I think everybody would learn so much more from taking care of their patients. So one, I really hope it's going to make us better doctors. Second, I do think there is an element of what these models can't take into consideration that falls into the art of medicine. There are times when you don't want to listen to what the model is telling you to do. There are things about the socioeconomic status of patients -- whether they can come into the clinic, whether they can afford another injection -- that an AI model cannot take into consideration and a doctor can and is equipped to. So I really hope for those 2 things: one, that it'll give us more time to take care of patients in the manner that is effective for them, and two, that it'll make us better doctors.
Dr Aslam: Can I just interject as well? A great question, Rick. I have a talk about this, and there's a website you can go to: willrobotstakemyjob.com. You can type in your job -- you should try it sometime -- and it works out the percentage likelihood of your job being automated. I think we should be OK. But I think that what will happen is that AI won't replace clinicians; rather, clinicians who know how to use AI properly may end up replacing clinicians who don't.
And it is true, isn't it? Because actually interpreting these studies will take quite a bit of skill. Using the information an AI system gives you will require you to understand what training was done, how the system was developed, what sort of network it is, and how valid it is for your patient group.
The other important thing, I think, is that a few years ago we started doing some studies on patient anxiety and depression. I was finding patients in my clinic actually had quite high levels of anxiety and depression. In AMD patients overall, the levels of anxiety and depression aren't that much better than they were before anti-vascular endothelial growth factor (VEGF) therapy. So something's going on here: patients still have a lot of unanswered questions and unmet needs.
Some of the work we've done has really shown us that there is a lot of scope for a greater level of interaction. That's what patients want, I think: greater explanation of what's going to happen and what's not going to happen, and answering of their fears. And one of the great things I hope comes out of AI is that we actually have more time on a human-to-human basis, communicating with our patients, because I think that's a need of theirs that hasn't been answered enough. It's also something that is enjoyable and feels good to do when you have the time.
Dr Spaide: Let's talk about making a diagnosis. I'm a human and I'm a resident just starting out to make a diagnosis. First thing I learned is what's normal. And then I learned ways in which things can be abnormal, and then I put those pieces together to figure out how to make a diagnosis. That's not really how AI works, is it? In AI how does a diagnosis get made?
Dr Lee: I think when you're going to train an AI model, it's easier to understand in the context of training because it's similar to what you're describing with the human approach. Basically, what happens is that an AI model gets a stack of images that are AMD and a stack of images that are not AMD. It gets punished or rewarded depending on whether it gets the answer right. It's like how we train our residents, in a way, but over millions of images, right? So, when an AI model starts it knows absolutely nothing about the world: it doesn't understand what the eye is, it doesn't understand anything about imaging. It just knows that this class of images are zeros and this class of images are ones, and it has to try and learn the decision boundary between those 2 image sets.
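[Editor's note: the "punished or rewarded" loop described here corresponds to gradient descent on a loss function. A hedged sketch, with toy two-number "images" standing in for fundus photographs and a logistic model standing in for a deep network; the clusters and labels are invented for illustration.]

```python
import math
import random

random.seed(2)

# Toy stand-in: each "image" is reduced to 2 numbers. Class 1 ("AMD")
# clusters away from class 0 ("not AMD"). The model starts knowing nothing
# about eyes; it only ever sees feature vectors and 0/1 labels.
def sample():
    label = random.random() < 0.5
    cx = 2.0 if label else 0.0
    return (cx + random.gauss(0, 0.7), cx + random.gauss(0, 0.7)), int(label)

train = [sample() for _ in range(400)]

w, b = [0.0, 0.0], 0.0
lr = 0.1
for _ in range(200):
    for (x0, x1), y in train:
        p = 1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
        # The "punishment" is the loss gradient: a confident wrong answer
        # produces a large correction, a correct one almost none.
        err = p - y
        w[0] -= lr * err * x0
        w[1] -= lr * err * x1
        b -= lr * err

# The learned weights define the decision boundary between zeros and ones.
correct = sum(((w[0] * x0 + w[1] * x1 + b) > 0) == y for (x0, x1), y in train)
print(f"training accuracy: {correct / len(train):.2f}")
```

A deep network does the same thing with millions of parameters and raw pixels, but the punish-correct-repeat loop is identical in shape.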
Dr Spaide: So if we go along this further, people have compared the results from AI with humans and come up with the conclusion that AI is at least equivalent to humans. Is that the whole story?
Dr Lee: No, not at all. One thing that AI models really struggle with is anything beyond what they were trained to do. Within the context of what was in the training set, the models usually perform really well. But the moment you give them something else, they can do something completely random and totally nonsensical, whereas human experts usually will make a fail-safe choice that sort of makes sense. One of the big limitations of these DL models is that they can act in very strange ways the moment you deviate from the training set.
Dr Aslam: It's funny that some of the headlines around the accomplishments of DL, the sorts of things it does tend to be the sort of things that computers do really well. So, playing a game of chess, it would do really well. But actually, it'd be more impressive if you got the robot to walk out to the chess board, sit down, and then move the pieces physically, because that's the sort of thing that humans do well and machines still can't do well.
We quite often report on the AI through very narrow windows and forget, as Aaron said, that there's a much broader area it wouldn't see. So, if you have an algorithm that's trained to grade diabetic retinopathy and there's a massive melanoma there, unless there were melanomas in the training data, it wouldn't be able to recognize that.
Dr Lee: I think that's a great point. One of my favorite quotes I heard on this topic, it was from a Google engineer, and I'm blanking on his name, but he called AI models dumb-smart. So they're very dumb in the sense that they can't do anything reasonable beyond what they were trained to do, but they're very smart in that one task that you tell them to do. I love that analogy that they're both amazingly powerful, but also very limited.
Dr Spaide: The idiot savant in the computer world. So, let's go to something that's fit for AI, as I think it'll turn out: remote testing. With remote testing we use different kinds of devices. We can use smartphones: they're very bright, and you can put different tests on them and structure them like a game. We could use devices in the home, like a home OCT or something like that. Or we could have mobile units or kiosks: mobile units would go to the patient's house, or the kiosk could be in a shopping mall, and you'd have more sophisticated equipment inside that. Tell us about some of the trade-offs that we have with that, Aaron.
Dr Lee: I think a useful paradigm for thinking about these devices is to consider the trade-offs between something that is universally available everywhere, like a smartphone, vs the usefulness of all the different things that you can test. There are limitations to what you can do with a smartphone; you can't capture an OCT image with a smartphone, yet. There's a limit to the capacity and usefulness of what they can provide, yet they are literally everywhere, and there is some useful information that could be gleaned from a smartphone; that would be very useful in this context of remote testing.
On the flip side, if you have very complex systems that have an OCT device, an automated refractor, and a color fundus camera, and you're able to move those all around, then there is so much more useful data. But there's probably going to be an upper limit on how many of those you can deploy in the field.
Dr Spaide: So let me ask you a question. A lot of studies done so far seem to have unbelievably good results just by looking at a color fundus photograph. Do we really need to have an OCT?
Dr Lee: I think it's ironic that you, of all people, are asking that question. There has to be some limit, right? The null hypothesis is probably true in that the same amount of information embedded in an OCT is not present in a color fundus photo. There has to be some limit because that's the physics and information theory behind it. If you measure the entropy of a color fundus photograph vs the entropy of an OCT volume, they're vastly different. So we know that there must be an upper limit to how much you can glean; yet, it is amazing how much you can, especially with these papers that you read. It is quite amazing how much is possible with just a single color fundus photo.
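[Editor's note: the entropy comparison can be made concrete. A rough sketch, using synthetic pixel arrays as stand-ins for a real fundus photo and OCT volume, estimates an upper bound on information content as histogram entropy times pixel count. Real images have far lower entropy than random noise; the point here is only the dimensional gap between a 2D photo and a 3D volume.]

```python
import math
import random

random.seed(3)

def bits_per_pixel(pixels):
    """Shannon entropy of the pixel-intensity histogram, in bits per pixel."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Synthetic stand-ins; real images would be loaded from files.
fundus = [random.randrange(256) for _ in range(256 * 256)]           # 2D photo
oct_volume = [random.randrange(256) for _ in range(256 * 256 * 64)]  # 3D volume

# Upper bound on total information = bits per pixel x number of pixels.
fundus_bits = bits_per_pixel(fundus) * len(fundus)
oct_bits = bits_per_pixel(oct_volume) * len(oct_volume)
print(f"fundus upper bound ~{fundus_bits / 8 / 1e6:.2f} MB, "
      f"OCT volume ~{oct_bits / 8 / 1e6:.1f} MB")
```

Whatever the true per-pixel entropy of real images, the volume carries orders of magnitude more raw capacity, which is the information-theoretic limit Dr Lee alludes to.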
Dr Aslam: We do see some really good results from a lot of papers, but the fuller truth will come when those studies are repeated for external validation, showing how well they work on other patients and other patient groups. I think that will be the next step for AI: demonstrating its external validity in different units, how it links into our clinics, how we interpret it, and its utility there.
Dr Spaide: OK. We're going to go out and get a smartphone application. Maybe it does OCT through your iPhone or whatever. Then there's a mountain of data. Who reads that?
Dr Aslam: I think there are a lot of issues around remote testing that might not be easily apparent. We did some work developing a tablet to test visual acuity, just as an example, and it's actually relatively simple to produce something that should give you a good answer. But adapting it to something that will work in your older patients, and will work consistently in patients with poor physical mobility or in children -- that takes a lot of extra work. That's a much harder task. And then, as you say, even if you do get them to use it, when you're taking it to a home situation you need to get them to use it regularly.
Again, we did a study where we gave patients with AMD an app to log the foods they'd eaten over a week. In hospital, when we showed it to them, it worked very well, and they promised they would use it every day. But as soon as they got home they had other priorities. And in studies we've done looking at the reasons, a lot of patients, especially in the older groups, just don't like to use technology; they find it invasive, or they forget about it, or think it's not for them. Those are challenges we need to address. The long-term data on patients using these tools consistently and reliably in a home setting is still lacking from a lot of studies.
The next thing, which you're leading to, is that if you do show a result, if you do show that something's there, you need the infrastructure to be able to analyze it. There are still relatively few systems that do that, and they're not available globally. Finally, you need a system or infrastructure whereby that patient can come in very quickly. There's no point, in the UK, if a patient's smartphone flags that he needs assessment but it takes 6 weeks to get him into my clinic for treatment. I think remote testing has to be presented as part of a whole infrastructure, and only when we get trials of that whole thing over the long term will it be truly believable and properly validated.
Dr Spaide: Validation is the next topic, and it's a big one. We only have a few minutes left. How do you validate these tests?
Dr Lee: That's a great question. Right now the current paradigm is to do these proof of concept studies with retrospective data. And then when you go for regulatory approval, do them in a prospective fashion. One of the things that we haven't seen yet is something like a randomized controlled clinical trial in the field of AI and ophthalmology; I haven't seen any papers on that. I know that there are some trials registered to look at that question.
One of the things I want to emphasize is something that Tariq brought up earlier: it's often easy to train and validate a model on data gathered in the same place as the training data. But the moment you take that model out to a different hospital with a different population of patients, those models tend not to do as well. And that is something we really need to measure to understand the performance of these models.
The other thing I'll mention is that there are some aspects to real-world retrospective data that may be useful that are not captured in a prospective study. In a prospective study there are very rigid guidelines on how the image is captured, the experience of the photographer, and the quality of the images that are entered into the study. Those are rigidly defined so that the study can measure a definable outcome. But when these models are deployed in the real world those safeguards are removed. So, it's really important to understand how these models work in the real world, with the variation that you see out there.
Dr Spaide: How do you recommend people increase patient adherence to taking these tests at home?
Dr Lee: I don't know the answer to that. I think providing positive feedback, and thinking about how to bring AI models into the dialogue about people's healthcare, are going to be really important discussion points with the public. With self-driving cars out there in the world, the idea of using AI models for your health is becoming more accepted. I think that might increase patient adherence in the future.
Dr Spaide: People are intuitively economically sound, I think. Is there a reward system that we can put in place somehow? When they use it, they get points . . .
Dr Lee: I think you could be very creative. If you think about the health economics around AI models you might be able to increase adherence that way. There are other ways of using AI models to lower the barriers and make it easier for the patients to do these tests so that the adherence is higher.
Dr Aslam: Some of the propositions for home monitoring -- for children with amblyopia, for example -- include games for patients in the apps. One of the things we were trying to do for a period was to work out a system that would detect when children wore their patch for the right amount of time, and then reward them by unlocking games on an app. It might be fairly intuitive, I guess, but that sort of games-for-patients approach doesn't work too badly in adults or older people either, because everybody gets bored after a while and everybody loses concentration. So that might be one thing, but it's a challenge that is still there, I think, with these tools.
Dr Spaide: So, who's going to pay for all this AI stuff?
Dr Aslam: I would say our situation in the UK is maybe different from a lot of other places. We have a National Health Service (NHS), and part of the Department of Health is the National Institute for Health and Care Excellence (NICE). Basically, they publish guidelines on the use of all sorts of clinical treatments and, more recently, health technologies as well. The NHS becomes legally obliged to provide funding for treatments that are recommended by NICE's technology appraisal board. So for us in the UK, you would have to have an economic evaluation carried out and an assessment of the cost-effectiveness of the technology.
That might be a route, but there are lots of bigger questions that opens up, in terms of how the AI was trained: Was it trained using patients in the NHS, and if so, should we then be paying the price for that system? There are lots of legal issues around regulation of medical data and the right to use it, privacy, and security. What happens when it goes wrong? It's the same with driverless cars: Who's liable? And how can researchers and engineers be protected against liability for the system's decisions? So, I think we're in an interesting area where we've demonstrated the strength and the power of AI, but we really have got a way to go in terms of working out exactly how it's going to fit into medical care and society.
Dr Spaide: So we have a minute left, and in that last minute I want to know your wish list. What do you think we can gain? Or what do you want to do in the next 5 years? Or what do you think we can accomplish? That's a pretty open-ended question. Let us hear your creative thoughts.
Dr Lee: I would love to see inroads in defeating blindness. So if we can use these AI tools to diagnose diseases earlier, treat people earlier, find the early progressors, and find people who might respond better to drug A vs drug B, we might actually be able to bend the curve and allow people to see better for longer. I would love to see that kind of impact happen in our field using these AI tools.
Dr Aslam: I would echo that, Aaron. Also, it'd be nice, I think, from my side to have AI deal with not just the exciting stuff that makes headlines, but also some of the more mundane stuff: bringing up images, which takes ages; getting the right patients' images up; and all the software that crashes because the images are on different systems. Let's get AI to do some of that mundane stuff so that I can have more time talking to my patients and finding out exactly what makes them tick and what's making them unhappy, and go back to proper clinical medicine a little bit, trying to see them as patients and not just images that need an answer.
Dr Lee: What about you, Rick? I'd be curious to hear your thoughts on where you think all of this is going.
Dr Spaide: I think it's going to happen according to the layers that we built up in those slides. A lot of the mundane testing I think will either be done or incorporated with the help of AI. And that will maybe include prognostic indicators, or at least diagnostic indicators, maybe along the lines of something like a weather forecast. Right now, we diagnose glaucoma or no glaucoma or glaucoma suspect, whatever that is. Even to this day, it's hard to figure out what is meant by that. If I have a certain set of tests that I have assembled, I analyze that, and I get a 50% chance that this person is going to progress in the next 5 years, that's a valuable piece of information. I think that AI can easily do that.
I think we're going to have many opportunities to establish a prognosis for patients, and we'll know right from the first time we see them what their outcome is going to be 2 years from that point in time. I don't think that we're going to get replaced by computers anytime soon. There's always that Luddite kind of idea that every step in development is going to put people out of business, and it actually turns out that's not true. Instead, the human race gets educated to a higher level and each person accomplishes more. Each person becomes more productive.
I think if you apply those kinds of principles to ophthalmology in general, we're going to come to a place where we're able to save vision in people we couldn't before. And we're going to be able to at least plan out treatment strategies and see where the weaknesses are in our current levels of treatment. Then we'll be able to target those to enhance people's vision over a longer period of time.
With that, I'd like to thank both of you. This is an exciting, very interesting time. I think we had a really nice session here. In addition to thanking Tariq and Aaron, I'd like to thank you, the audience, for participating in this activity. Please continue to answer the questions that follow and complete the evaluation. Thank you.
This is a verbatim transcript and has not been copyedited.