EDITORIAL
Year : 2020  |  Volume : 8  |  Issue : 1  |  Page : 1-4

Artificial Intelligence


1 Department of Ophthalmology, Hindu Rao Hospital and NDMC Medical College, Delhi, India
2 Co-Founder - Rezofin, Mumbai, India

Date of Submission: 02-Mar-2020
Date of Decision: 02-Mar-2020
Date of Acceptance: 02-Mar-2020
Date of Web Publication: 06-Mar-2020

Correspondence Address:
Jatinder Bali
Department of Ophthalmology, Hindu Rao Hospital and NDMC Medical College, Delhi
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jcor.jcor_18_20


How to cite this article:
Bali J, Nayak S. Artificial Intelligence. J Clin Ophthalmol Res 2020;8:1-4

How to cite this URL:
Bali J, Nayak S. Artificial Intelligence. J Clin Ophthalmol Res [serial online] 2020 [cited 2020 Sep 28];8:1-4. Available from: http://www.jcor.in/text.asp?2020/8/1/1/280209



Artificial intelligence (AI), deep learning, and machine learning (ML) are quickly becoming buzzwords in ophthalmology and in technology in general. Several of their applications are now becoming commonplace in day-to-day use, such as verifying diagnoses, calculating intraocular lens power, reading images, and improving surgical outcomes. While AI is an extensive field and its applications are limitless, we are better served by understanding AI through two broad classifications, namely generalized AI and specific AI. Generalized AI is the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can, thus looking to replace humans; it can learn new techniques or tasks just as humans do. Specific AI, in contrast, is the narrower form of AI targeted at achieving a focused task. Its goal is to reach expert human-level proficiency in a particular task, and in some cases, it consistently surpasses human expertise in that task. While generalized AI attracts more attention when AI is spoken about and remains the ideal goal of AI research, the significant majority of research and development is done on specific AI, and there is plenty of headroom in focusing on specific AI alone.

ML algorithms for the detection and grading of diabetic retinopathy, developed by Google and approved by the Food and Drug Administration, are a prime example of specific AI. They have recently achieved near-human performance at the task, and it is exciting to consider the potential use of such technology in resource-limited environments. To better understand AI and its applications, let us first look at its basic concepts.
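For readers who want a concrete feel for specific AI, the short sketch below frames retinopathy screening as a two-class image classification problem using a tiny convolutional neural network in Python (PyTorch). It is a minimal illustration under assumed names and shapes, not the architecture of Google's system or of any approved device, and the random tensor merely stands in for a real fundus photograph.

```python
# A toy convolutional classifier for "referable vs non-referable"
# diabetic retinopathy. Illustrative only: not the architecture of
# Google's system or any approved device.
import torch
import torch.nn as nn

class TinyDRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global average pooling
        )
        self.classifier = nn.Linear(16, 2)  # two output classes

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

net = TinyDRNet()
fundus_batch = torch.randn(1, 3, 224, 224)  # stand-in for one fundus photo
print(net(fundus_batch).softmax(dim=1))     # untrained class probabilities
```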


  Artificial Intelligence


The Merriam-Webster dictionary defines AI as “a branch of computer science dealing with the simulation of intelligent behavior in computers or the capability of a machine to imitate intelligent human behavior.” This is different from the usual algorithms and programs, which are task specific and written to perform repetitive tasks. AI is “the ability of a digital machine or a computer to accomplish tasks that traditionally have required human intelligence.” AI has become ubiquitous in day-to-day life. While AI is used extensively by the finance, marketing, and travel industries, its application in medicine is more recent. AI-based applications have the potential to benefit all stakeholders in the health-care industry. This new generation of programming makes a computer system work without being explicitly programmed. On account of this, self-driving cars, highly effective web searches, practical speech recognition, and a deep understanding of the human genome have become a reality. However, this is still a primitive form of AI; when fully developed, AI should be capable of sentience (feeling emotions) and recursive or iterative self-improvement.


  Types of Artificial Intelligence


Depending on how far they emulate the human mind in thinking or even feeling like humans, AI and AI-enabled machines can be classified as reactive machine systems, such as Deep Blue, which defeated Kasparov at chess in 1997; limited-memory machine systems, which improve with experience, like chatbots; theory-of-mind systems, which recognize that other agents have their own beliefs, intentions, and emotions; and self-aware AI, which plans for its own self-preservation[1] [Figure 1].
Figure 1: Types of artificial intelligence




  Health-care Data Techniques


Health-care data are becoming easily available, and analytics techniques are becoming more refined. AI is capable of using various types of health-care data, structured and unstructured, to make decisions in an automated manner without human intervention. The commercialization of the automated external defibrillator, which differentiates between shockable and nonshockable rhythms, is a very practical example. AI's popularity in ophthalmology is more recent.

The core parts of narrow AI are ML, robotics, knowledge engineering, and machine perception. For the purpose of this overview, we will focus on the first only. ML refers to a computing machine or system's ability to teach or improve itself using experience, without being explicitly programmed for each improvement; the decision rules are induced from the data rather than hand coded. Deep learning, a subsection within ML, focuses on using artificial neural networks to address highly abstract problems.
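A minimal sketch of this “learning from experience” idea follows, assuming the scikit-learn and NumPy libraries: a classifier is fitted to a handful of invented feature vectors instead of being hand coded with a rule. The features, labels, and values are entirely hypothetical.

```python
# "Learning from experience": the decision rule is induced from data
# rather than hand coded. All feature values and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per eye: [intraocular pressure (mmHg), cup-to-disc ratio]
X_train = np.array([[14, 0.3], [16, 0.4], [24, 0.7], [28, 0.8]])
y_train = np.array([0, 0, 1, 1])  # 0 = not a glaucoma suspect, 1 = suspect

model = LogisticRegression()
model.fit(X_train, y_train)        # the "experience" step

print(model.predict([[26, 0.75]])) # prediction for an unseen case
```

The point of the sketch is that no threshold on pressure or cup-to-disc ratio is ever written by the programmer; the model induces its own decision boundary from the labeled examples.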

ML handles structured data such as images, electrophysiological data, and genetic data. For clinical notes, medical journals, and other unstructured medical texts, natural language processing (NLP) tools are used, although the two sets of techniques increasingly overlap. These algorithms are trained with health-care data, and the resulting systems aid doctors in diagnosing and treating diseases. The IBM Watson system, a pioneer, has both ML and NLP modules.[2] Ninety-nine percent of Watson's cancer treatment recommendations correlated with the decisions of the treating physicians, and by analyzing genetic data, Watson successfully identified a rare secondary leukemia caused by myelodysplastic syndromes in Japan [Figure 2].
Figure 2: A brief overview of applications in artificial intelligence

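As a toy illustration of how NLP tools convert unstructured notes into structured features, the following sketch builds a bag-of-words matrix from two fabricated clinical sentences using scikit-learn; real clinical NLP pipelines, including Watson's, are far more sophisticated.

```python
# Turning unstructured clinical text into structured numeric features
# with a bag-of-words representation. The note text is fabricated.
from sklearn.feature_extraction.text import CountVectorizer

notes = [
    "fundus shows dot and blot haemorrhages, refer for diabetic retinopathy",
    "clear media, healthy disc, no retinopathy seen",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(notes)        # documents -> term-count matrix

print(vectorizer.get_feature_names_out())  # vocabulary learned from the notes
print(X.toarray())                         # one structured feature row per note
```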



  The Patient Encounter as a Source


The data for health-care AI come from patient encounters or visits, each of which is collected as a transaction. A transaction is an office or clinic event that generates or modifies data stored in an information system. Transaction-processing systems were the first computerized information systems; familiar examples include automated teller machines and airline and railway reservation systems.
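A minimal sketch of such a transaction record, as it might be written to an information system, is shown below; all field names and values are hypothetical and not drawn from any particular product.

```python
# A minimal encounter "transaction" record. All field names and values
# are hypothetical, not drawn from any particular information system.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EncounterTransaction:
    patient_id: str
    event_type: str    # e.g. "visit", "imaging", "lab_result"
    payload: dict      # the data generated or modified by the event
    timestamp: datetime = field(default_factory=datetime.now)

txn = EncounterTransaction(
    patient_id="P-1042",
    event_type="imaging",
    payload={"modality": "fundus_photo", "eye": "OD"},
)
print(txn)
```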


  The Storage and Preservation


This data is stored on native systems, on servers, or in the cloud [Figure 3].
Figure 3: Data generation, storage, and flow in schematic representation



The commonly used health-care transaction and research databases have been relational and heterogeneous databases. With the advent of the Internet of Things and wearable technologies, the volume of data available for analysis is increasing, and techniques have moved toward larger, more complex datasets.
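For illustration, the sketch below stores and queries a toy encounter table with Python's built-in sqlite3 module, mirroring the relational model such transaction databases use; the table layout and sample row are invented.

```python
# A relational store for encounter data using Python's built-in sqlite3.
# The table layout and sample row are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE encounters (patient_id TEXT, visit_date TEXT, diagnosis TEXT)"
)
conn.execute(
    "INSERT INTO encounters VALUES (?, ?, ?)",
    ("P-1042", "2020-03-02", "diabetic retinopathy"),
)
for row in conn.execute(
    "SELECT * FROM encounters WHERE diagnosis LIKE '%retinopathy%'"
):
    print(row)
conn.close()
```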


  Current Sources of Data


The main areas where AI is being applied in health care are as follows [Figure 4]:
Figure 4: Training and test data in artificial intelligence



  • Mass screening
  • Clinical data
  • Diagnostic imaging
  • Electronic health records
  • Laboratory data
  • Electrodiagnosis
  • Genetic diagnosis
  • Operation notes
  • Records from wearable devices.



  The Recent Past


In 1996, IBM's Deep Blue demonstrated the ability to play grandmaster-level chess, and the next year, it defeated the World Champion Garry Kasparov in a full six-game match. More recently, IBM Watson and Google DeepMind's AlphaGo made headlines by outplaying humans at Jeopardy! and the ancient strategy game Go, respectively.

Currently, IBM's Watson Oncology at Memorial Sloan Kettering Cancer Center and the Cleveland Clinic has selected drugs for the treatment of cancer patients with equal or better efficiency than human experts; it also analyzes journal articles for insights on drug development in addition to tailoring therapy.[2] At Oregon Health and Science University's Knight Cancer Institute, Microsoft's Hanover Project has analyzed medical research and predicted the most effective cancer drug treatment for an individual patient with the same efficiency as a human expert. The UK National Health Service (NHS) has been using Google's DeepMind platform to detect certain health risks by analyzing data collected through a mobile app; it has also analyzed medical images collected from NHS patients, with the aim of building computer vision algorithms that detect cancerous cells. At Stanford, a radiology algorithm performed better than human radiologists at picking up pneumonia, while in a diabetic retinopathy challenge, the computer matched expert ophthalmologists in making referral decisions.[2],[3] In addition, these computers can be on duty day and night, all year round, unlike human experts.

Without going into too much detail, AI techniques have widely reported applications in several branches of medicine, such as:

  • Radiology
  • Oncology
  • Cardiology
  • Clinical pharmacology
  • Internal medicine
  • Mental health
  • Ophthalmology, including:


    • Diabetic retinopathy
    • Age-related macular degeneration
    • Retinal vein occlusion
    • Retinopathy of prematurity
    • Cataract
    • Glaucoma


AI has shown promising results in diabetic retinopathy detection and referral.[4] Recently, Indian data have shown that the new algorithms generalize to the Indian population as well.


  Main Artificial Intelligence Tools in Health Care


The main AI tools used in health care are summarized in [Table 1].
Table 1: Common artificial intelligence techniques used in health care




  The Future


In the future, will a hospital resemble the sick bay of the Starship Enterprise in “Star Trek”? Nobody can tell, but the computers will definitely move from being mere decision support tools for physicians to becoming more reliable than human beings. British mathematician and cryptologist I. J. Good coined the term “intelligence explosion” in his 1965 essay, “Speculations Concerning the First Ultraintelligent Machine,” to describe this situation, now called the “singularity” – the point at which humans are no longer the most intelligent beings on earth. This may create artificial general intelligence (AGI), where a system recursively improves itself by taking inputs from the environment, ultimately leading to artificial superintelligence (ASI). Such an AGI would understand its own design well enough to redesign itself or create a successor system, which would then redesign itself, and so on, with unknown limits. The problem arises if an ASI goes rogue and starts harming humans. This is the dystopian future that scientists fear. Now, let us examine this further in the case of medical setups.


  The Dystopian Debate


Will this ASI render doctors redundant? The answer appears to be no. Throughout history, technological advances have consistently made the majority of workers richer, not poorer. In the first industrial revolution, there was a movement from agriculture to industry. Agriculture is still practiced, but a whole array of new jobs and opportunities has arisen from the surplus created by industry. People have become prosperous and lead safer, more comfortable lives than before.


  The Conclusion Without Caveats


With so many advances in AI, there is a notion that machines will replace clinicians and render them redundant in ophthalmology. This is not the case; instead, we should harness AI's potential to become better clinicians, achieving expert-level performance in every aspect of our practice and supplementing it with our own knowledge. The current state-of-the-art AI tools and algorithms are specialists in specific tasks only. For example, an algorithm trained for diabetic retinopathy cannot detect any other disease; it will not report the anomaly but will instead try to fit it into the specific task it has been trained for. Hence, as clinicians, our judgment about an algorithm's applicability is important: blind faith in the algorithm cannot and must not exist. In today's scenario, these tools should be used as decision support systems that enhance our capacities and capabilities as clinicians. We would be best served by using AI's task-specific expertise, but in medicine, we still need to develop specific task expertise ourselves and, most importantly, be trained to understand the limitations of the algorithms. We have one more responsibility in these exciting times, as AI algorithms and, more importantly, their health-care applications develop: we, as clinicians, need to understand and guide that development and enunciate ethical principles in computational medical research for the protection of humans. AI may lead to the rise of new super-doctors who harness its potential and become pan-ophthalmologists rather than narrow specialists, able to take complete ophthalmological care to a wider audience, consistently and with fewer errors. As ophthalmologists, we must embrace this change and retrain ourselves.

Acknowledgment

The authors would like to thank Ojasvini Bali, MBBS, who designed the schematic diagrams.



 
  References

1. Mahendra S. Types of Artificial Intelligence (AI); 2019. Available from: https://www.cisin.com/coffee-break/technology/7-types-of-artificial-intelligence-ai.html. [Last accessed on 2019 Sep 28].
2. Hamilton JG, Genoff Garzon M, Westerman JS, Shuk E, Hay JL, Walters C, et al. “A tool, not a crutch”: Patient perspectives about IBM Watson for Oncology trained by Memorial Sloan Kettering. J Oncol Pract 2019;15:e277-88.
3. Krause J, Gulshan V, Rahimy E, Karth P, Widner K, Corrado GS, et al. Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. Ophthalmology 2018;125:1264-72.
4. Kumar A, Padhy S, Takkar B, Chawla R. Artificial intelligence in diabetic retinopathy: A natural step to the future. Indian J Ophthalmol 2019;67:1004.

