EVALUATION OF LATTICE BASED XAI
Abstract
With multiple methods available to extract explanations from a black box model, it becomes important to evaluate the correctness of these Explainable AI (XAI) techniques themselves. While many XAI evaluation methods require manual intervention, we use computable evaluation methods, in order to remain objective, to test the basic nature and sanity of an XAI technique. We pick four basic axioms and three sanity tests from the existing literature that XAI techniques are expected to satisfy: axioms such as Feature Sensitivity, Implementation Invariance, and Symmetry Preservation, and sanity tests such as Model Parameter Randomization, Model-Outcome Relationship, and Input Transformation Invariance. After reviewing the axioms and sanity tests, we apply them to existing XAI techniques to check whether they are satisfied. Thereafter, we evaluate our lattice based XAI technique against these axioms and sanity tests using a mathematical approach. This work proves that the explanations extracted by our lattice based XAI technique satisfy these axioms and sanity tests, thereby establishing their correctness.
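As a concrete illustration of one of the checks named above, the following is a minimal Python sketch (not taken from the paper) of the Model Parameter Randomization sanity test. The callables explain_fn and randomize_fn, and the choice of Spearman rank correlation as the similarity measure, are assumptions for illustration only; the paper's own mathematical evaluation of the lattice based technique is not reproduced here.

import numpy as np
from scipy.stats import spearmanr

def parameter_randomization_test(model, explain_fn, randomize_fn, x):
    """Model parameter randomization sanity test (sketch).

    explain_fn(model, x) -> attribution scores for input x
                            (hypothetical caller-supplied function)
    randomize_fn(model)  -> a copy of the model with re-initialized,
                            random weights (hypothetical caller-supplied
                            function)

    If attributions remain highly rank-correlated after the learned
    weights are destroyed, the explainer is insensitive to the model
    and fails the test.
    """
    original = np.asarray(explain_fn(model, x)).ravel()
    randomized = np.asarray(explain_fn(randomize_fn(model), x)).ravel()
    rho, _ = spearmanr(original, randomized)
    # Low |rho|: explanations depend on the learned parameters (pass).
    # |rho| near 1: explanations ignore the model (fail).
    return rho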

Authors
Bhaskaran Venkatsubramaniam, Pallav Kumar Baruah
Sri Sathya Sai Institute of Higher Learning, India

Keywords
Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, XAI Evaluation
Published By:
ICTACT
Published In:
ICTACT Journal on Soft Computing (Volume: 14, Issue: 2, Pages: 3180-3187)
Date of Publication:
October 2023

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.