Friday, May 6, 2022

 Model First Explainable Artificial Intelligence (XAI)

Overview

Explainable artificial intelligence, or XAI, has recently sparked much interest in the scientific community. This study addresses the problem that complex machines and algorithms provide little insight into the methods or paradigms that can be leveraged before or during data input to measure success through the behavior and thought processes behind otherwise unexplainable results. XAI makes aspects of users' and internal systems' decisions more transparent by offering granular explanations. Amparore et al. (2021) assert that these explanations are critical for guaranteeing an algorithm's fairness, detecting training data issues, and promoting robust algorithms for prediction.
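For illustration, a local linear explainer of the kind Amparore et al. (2021) evaluate (such as LIME) can produce this sort of granular, per-prediction explanation. The sketch below is not part of the study; it assumes the scikit-learn and lime packages are installed and uses an arbitrary public dataset and model purely for demonstration.

```python
# Minimal sketch: a granular, per-prediction explanation from a local linear
# explainer (LIME). Dataset, model, and parameters are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Which features pushed the model's prediction for this single record?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```

A model-first design would decide, before this point, which user group reads such a feature list and what level of granularity that audience can actually act on.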

On the other hand, the initial process, or technological "first step," that leads to interpretations is not standardized and has not been rigorously evaluated. Islam et al. (2021) find that framework-based methods do an excellent job of achieving explicit, pre-determined outputs on the training data but do not reflect nuanced, implicit design goals; design-modeling that integrates human components with the input data is extremely rare and has very little supporting literature. This research introduces basic structuring notions of modeling and illustrates how social constructs of human input and relevant data can be used to build best practices and uncover open challenges.

This study will also articulate why existing pre-emptive XAI paradigms for deep neural networks have significant advantages. Finally, the research will address possible future directions for pre-emptive XAI structuring and modeling that follow directly from its discoveries and findings (Chromik & Schuessler, 2020).

Why a Model First XAI Study Is Needed

XAI assumes that the end-user is given an explanation of the AI system's decision, recommendation, or operation. Fellous et al. (2019) find that few conceptual models or paradigms increase the likelihood of interpretability before or during a system's development and implementation. A computer analyst, doctor, or lawyer, for example, could be one of the participants. As Booth (2020) suggests, teachers or C-level executives may instead be expected to explain the system and grasp what has to be fixed, while yet another user could be biased against the system's fairness. Each user group can introduce bias that leads to preferential or non-preferential interpretations, negatively affecting both the information and the conclusion.

Before implementing XAI, a practical model can give preliminary consideration to a system's intended user group, fusing explainability with that group's background knowledge and content needs. Xu et al. (2019) note that XAI is a well-integrated framework, yet it falls short because of its reliance on "interior" frameworks that pursue explainability primarily through modeling. In addition, several third-party frameworks are available, each covering a particular, atomic scope of XAI, but none addresses collectively, through design modeling, the human-interaction, data, and science components that determine the level of success of the resulting interpretability (Zeng, 2021).
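As a concrete illustration of what such a model-first description might contain, the sketch below records an intended user group, its background knowledge, and its explanation needs before any explainer is chosen. Every name here (StakeholderModel, ExplanationNeed, the physician example) is a hypothetical construction, not an artifact of the cited frameworks.

```python
# Illustrative sketch only: one possible way to capture the intended user
# group and its explanation needs before an XAI technique is selected.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplanationNeed:
    question: str                     # e.g., "Why was this case flagged?"
    depth: str                        # "overview" for executives, "feature-level" for analysts
    formats: List[str] = field(default_factory=lambda: ["text"])

@dataclass
class StakeholderModel:
    role: str                         # doctor, lawyer, analyst, C-level executive, ...
    domain_knowledge: List[str]       # subject-matter concepts the user already holds
    needs: List[ExplanationNeed]
    bias_risks: List[str]             # known sources of preferential interpretation

# Example: a physician reviewing a diagnostic model, described before any
# evaluation framework is applied.
clinician = StakeholderModel(
    role="physician",
    domain_knowledge=["radiology", "patient history"],
    needs=[ExplanationNeed("Which findings drove this diagnosis?", "feature-level")],
    bias_risks=["over-trust in automated recommendations"],
)
```

Capturing these characteristics up front is what allows the later choice of explainer, and its evaluation, to be checked against the audience rather than against the training data alone.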

Conceptual Structure: Abstraction and Encapsulation

Many methods have been proposed to evaluate and measure the effectiveness of interpretation; however, as Ehsan and Riedl (2020) find, very few have been devised as model drivers that define interpretability valuation from the onset of XAI implementation, and there are no general modeling paradigms to measure whether XAI systems will be more interpretable from concept to deployment (Ehsan et al., 2021). From a modeling point of view, metrics could be derived from conceptually represented feelings or behaviors of participants, unlocking patterns in the subjective and non-subjective components of a description. Olds et al. (2019) state that abstract objective modeling can represent and communicate dependable, consistent measurements of XAI interpretation valuation. A research question therefore persists: how should the factors that directly affect interpretation be modeled to ascertain the output valuation that drives success or failure?
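One way to read the abstraction-and-encapsulation idea is as an abstract metric interface: the surrounding model depends only on the notion of an interpretation valuation, while each concrete measure (a subjective rating scale, a behavioral task, and so on) encapsulates how its own score is computed. The sketch below is a hypothetical reading of that structure; the class names and scoring rules are invented for illustration.

```python
# Sketch: abstraction (a common metric interface) plus encapsulation (each
# concrete measure hides how its score is computed). All names are hypothetical.
from abc import ABC, abstractmethod
from statistics import mean
from typing import List

class InterpretationMetric(ABC):
    """Abstract valuation of how well participants interpret an explanation."""

    @abstractmethod
    def score(self, responses: List[float]) -> float:
        ...

class SubjectiveRating(InterpretationMetric):
    # Encapsulates a participant self-report scale (e.g., 1-5 ratings).
    def score(self, responses: List[float]) -> float:
        return mean(responses) / 5.0

class TaskSuccess(InterpretationMetric):
    # Encapsulates an objective behavioral measure (1 = correct use, 0 = not).
    def score(self, responses: List[float]) -> float:
        return sum(responses) / len(responses)

def valuation(metrics: List[InterpretationMetric], data: List[List[float]]) -> float:
    # Combines measurements without knowing how each one is computed internally.
    return mean(m.score(d) for m, d in zip(metrics, data))

print(valuation([SubjectiveRating(), TaskSuccess()], [[4, 5, 3], [1, 1, 0, 1]]))
```

Because new measures can be added behind the same interface, the valuation model can be fixed at design time even while the concrete metrics remain an open research question.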

Design-first modeling fosters the examination and predictive structuring of the XAI component before any evaluation framework is applied, allowing common ground between the human participant and the training data. Gaur et al. (2020) assert that modeling capabilities and knowledge from a human-centered research perspective can enable XAI to go beyond explaining specific XAI systems and help its users determine appropriate trust roles. In the future, model-first XAI designs will play an essential role in the deterministic valuation of outputs. Islam (2020) examines the XAI principle that the behavior of artificial intelligence should be meaningful to humans; without model-first design, however, understanding and explaining that behavior in different ways may still become convoluted and cumbersome, especially for questions posed at different levels. Model-first AI ensures that human practitioners, who possess existing subject-matter knowledge of the injected data, can be factored in before XAI is implemented, establishing criteria for reliability through patterns. For example, incorporating a lawyer into the data-design characteristics model can determine how an interpretation attributes causality to the client's actions and weighs the relative contributions of several court cases, checking whether the defense conforms to legal guidelines.
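Continuing the purely hypothetical StakeholderModel sketch from the previous section, the lawyer scenario above might be captured like this before implementation, so the eventual explanation can be judged against legal guidelines rather than a generic feature list:

```python
# Hypothetical usage of the earlier StakeholderModel sketch for the legal example.
defense_counsel = StakeholderModel(
    role="defense attorney",
    domain_knowledge=["case law", "rules of evidence"],
    needs=[ExplanationNeed(
        "Which prior cases most influenced this assessment of the client's actions?",
        "case-level attribution",
    )],
    bias_risks=["selective reading that favors the client's position"],
)
```

Nothing in this snippet is prescribed by the cited work; it simply shows how a persona captured at design time gives the later interpretation a concrete standard to be checked against.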

References


Amparore, E., Perotti, A., & Bajardi, P. (2021). To trust or not to trust an explanation: Using LEAF to evaluate local linear XAI methods. PeerJ Computer Science. https://doi.org/10.7717/peerj-cs.479

Booth, S. L. (2020). Explainable AI foundations to support human-robot teaching and learning [Thesis, Massachusetts Institute of Technology].

Chromik, M., & Schuessler, M. (2020). A taxonomy for human subject evaluation of black-box explanations in XAI. ExSS-ATEC@IUI.

Ehsan, U., & Riedl, M. O. (2020). Human-centered explainable AI: Towards a reflective sociotechnical approach. International Conference on Human-Computer Interaction.

Ehsan, U., Wintersberger, P., Liao, Q. V., Mara, M., Streit, M., Wachter, S., Riener, A., & Riedl, M. O. (2021). Operationalizing human-centered perspectives in explainable AI. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems.

Fellous, J.-M., Sapiro, G., Rossi, A., Mayberg, H., & Ferrante, M. (2019). Explainable artificial intelligence for neuroscience: Behavioral neurostimulation. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2019.01346

Gaur, M., Desai, A., Faldu, K., & Sheth, A. (2020). Explainable AI using knowledge graphs. ACM CoDS-COMAD Conference.

Islam, M. A., Veal, C., Gouru, Y., & Anderson, D. T. (2021). Attribution modeling for deep morphological neural networks using saliency maps. 2021 International Joint Conference on Neural Networks (IJCNN).

Islam, S. R. (2020). Domain Knowledge Aided Explainable Artificial Intelligence (Publication Number 27835073) [Ph.D., Tennessee Technological University]. ProQuest One Academic. Ann Arbor. https://coloradotech.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/domain-knowledge-aided-explainable-artificial/docview/2411479372/se-2?accountid=144789

Olds, J. L., Khan, M. S., Nayebpour, M., & Koizumi, N. (2019). Explainable AI: A neurally-inspired decision stack framework. arXiv preprint arXiv:1908.10300.

Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019). Explainable AI: A brief survey on history, research areas, approaches and challenges. CCF International Conference on Natural Language Processing and Chinese Computing.

Zeng, W. (2021). Explainable Artificial Intelligence for Better Design of Very Large Scale Integrated Circuits (Publication Number 28719980) [Ph.D., The University of Wisconsin - Madison]. ProQuest One Academic. Ann Arbor. https://coloradotech.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/explainable-artificial-intelligence-better-design/docview/2572576626/se-2?accountid=144789
