
This study explores Explainable Artificial Intelligence (XAI) in general and then discusses its potential use in the Indian healthcare system. It also demonstrates several XAI techniques on a diabetes dataset to show practical implementation and to encourage readers to consider further application areas. The limitations of the technology are highlighted, along with the future scope, in the discussion.
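To illustrate the kind of demonstration described above, the sketch below applies SHAP (Shapley additive explanations), one widely used XAI technique, to a diabetes dataset. This is a minimal sketch only: the scikit-learn diabetes data and the random-forest model are stand-ins, not the study's actual dataset or model.

```python
# Minimal sketch: SHAP explanations for a diabetes-prediction model.
# The scikit-learn diabetes data and random forest are illustrative
# stand-ins; the study's actual dataset and model are not reproduced here.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Load the diabetes data and hold out a test split to explain
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple tree-ensemble model on the training split
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes per-feature SHAP values for each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features drive the model's predictions overall
shap.summary_plot(shap_values, X_test)
```

A single patient's prediction can be inspected the same way by passing one row of `X_test` to `shap.force_plot`, which shows how each feature pushes that prediction above or below the model's average output.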

