Starter Example

This is a basic example of using the open-source explainX library to explain a Random Forest model.

After successfully installing explainX, open your Python IDE or Jupyter Notebook and follow the code below.


The goal is to explain and debug the machine learning model we are building, with a focus on providing business-level explanations.


Make sure you have scikit-learn and explainX installed.
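If either library is missing, a typical install (assuming the PyPI package names explainx and scikit-learn) looks like this:

```shell
pip install explainx scikit-learn
```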

  1. Import the required modules.

from explainx import *
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
  2. Load your dataset into X_data and Y_data.

#Load Dataset: X_Data, Y_Data
#X_Data = Pandas DataFrame
#Y_Data = Numpy Array or List
X_data,Y_data = explainx.dataset_heloc()
  3. Split the dataset into training and testing sets.

X_train, X_test, Y_train, Y_test = train_test_split(X_data,Y_data, test_size=0.3, random_state=0)
  4. Train your model.

# Train a RandomForest model
model = RandomForestClassifier()
model.fit(X_train, Y_train)
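The four steps above can be sketched end to end without explainX, using a scikit-learn built-in dataset in place of explainx.dataset_heloc() (which requires explainX to be installed):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a built-in dataset standing in for the HELOC data
# (X_data is a pandas DataFrame, Y_data a pandas Series)
X_data, Y_data = load_breast_cancer(return_X_y=True, as_frame=True)

# Split into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(
    X_data, Y_data, test_size=0.3, random_state=0)

# Train a RandomForest model
model = RandomForestClassifier(random_state=0)
model.fit(X_train, Y_train)

# Sanity-check: the trained model can score the held-out set
print(round(model.score(X_test, Y_test), 2))
```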

After you're done training the model, you can either access the complete explainability dashboard or access individual techniques.

Complete Explainability Dashboard

To access the entire dashboard with all the explainability techniques under one roof, follow the code below. It is great for sharing your work with your peers and managers in an interactive and easy-to-understand way.

5.1. Pass your model and dataset into the explainX function:

explainx.ai(X_test, Y_test, model, model_name="randomforest")

5.2. Click on the dashboard link to start exploring model behavior:

App running on

Explainability Modules

In this latest release, we have also made the explainability techniques available individually, so users can choose the technique that fits their AI use case.

6.1. Pass your model, X_Data and Y_Data into the explainx_modules function:

explainx_modules.ai(X_test, Y_test, model)

As an upgrade, we have eliminated the need to pass in the model name: explainX is smart enough to identify the model type and the problem type (classification or regression) by itself.
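explainX's internal detection logic isn't shown here, but scikit-learn itself exposes helpers that make this kind of check straightforward; a minimal sketch, assuming scikit-learn estimators (the problem_type helper is hypothetical, for illustration only):

```python
from sklearn.base import is_classifier, is_regressor
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def problem_type(model):
    """Guess the problem type from the estimator itself."""
    if is_classifier(model):
        return "classification"
    if is_regressor(model):
        return "regression"
    return "unknown"

print(problem_type(RandomForestClassifier()))  # classification
print(problem_type(RandomForestRegressor()))   # regression
```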

You can access multiple modules:

Module 1: Dataframe with Predictions


Module 2: Model Metrics


Module 3: Global Level SHAP Values


Module 4: What-If Scenario Analysis (Local Level Explanations)


Module 5: Partial Dependence Plot & Summary Plot
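The explainX module call itself isn't reproduced here; as an illustration of what a partial dependence computation looks like, here is a sketch using scikit-learn's partial_dependence (an assumed stand-in for the explainX module, not its actual API):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

# Fit a model on a built-in dataset
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Average predicted response as the first feature varies over its grid
pd_result = partial_dependence(model, X, features=[0], kind="average")
print(pd_result["average"].shape)
```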


Module 6: Model Performance Comparison (Cohort Analysis)


To access the modules within your Jupyter Notebook as iframes, just pass the mode='inline' argument to each function.