TAZI Live for Classification
TAZI Live shows important metrics related to your Business Model in real time. Hence, every chart, table, and metric is refreshed while your model runs.
COUNTS
This pie chart shows the distribution of target labels across the whole dataset.
You can see the distribution of labels as a table in this section (in this case, CHURN and REMAIN). In TAZI, unknown represents instances without an assigned target label. By enabling the Show As Percentage switch, you can see the distribution of labels as percentages. The Time Range column shows how much time has passed since the last run of your model.
Using these charts, you can see the total number of processed instances per label over the runtime of your Business Model. In this case, the numbers are aggregated over 1-minute intervals. You can see the total number of target labels during a particular minute by clicking on the top chart. Moreover, by adjusting the time range in the bottom chart, you immediately see zoomed-in, updated results in the top chart.
When you enable the Show the Most Current Data switch, the time window slides to the rightmost side at every update and only the most recent results are shown in the top chart.
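As a rough illustration of the aggregation behind these charts, the following sketch buckets label counts into 1-minute windows with pandas. The DataFrame and its column names ("timestamp", "label") are assumptions for the example, not part of TAZI:

```python
import pandas as pd

# Illustrative stream of processed instances with their target labels.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-01-01 10:00:10", "2023-01-01 10:00:40",
        "2023-01-01 10:01:05", "2023-01-01 10:01:50",
    ]),
    "label": ["CHURN", "REMAIN", "REMAIN", "CHURN"],
})

# Count each label per 1-minute bucket, as the top chart aggregates.
per_minute = (
    events
    .groupby([pd.Grouper(key="timestamp", freq="1min"), "label"])
    .size()
    .unstack(fill_value=0)
)
print(per_minute)

# The "Show As Percentage" view: normalize each window to 100%.
print(per_minute.div(per_minute.sum(axis=1), axis=0) * 100)
```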
PERFORMANCE
In this tab, you can see all of the metrics related to your Business Model’s performance. The additional ALL, TRAIN, and TEST tabs show metrics for the whole dataset, for the train dataset used to run the model, and for the test dataset, respectively. Below, Performance Slices lets you inspect the metrics for one particular target class. Here, we only have two classes, CHURN and REMAIN. By default, the anomaly class is selected (CHURN in this case).
Top K% Table
Let’s explain this table using the churn problem our Business Model is based on. In the confusion matrix for ALL (that is, for the whole dataset), we can see the total number of customers that churned. Say we want to offer a discount to the customers that are likely to churn so that they change their mind and decide to stay. However, we cannot offer discounts to all of the customers that might churn, since we have a limited budget and not all customers bring the same value to our company (i.e., profit). Hence, to use our budget wisely, we decide to offer a discount to the customers that are most likely to churn. We start from the customers in the top 1% in terms of their probability to churn, and that is what 1% in the TOP K % TABLE represents. The table is sorted by the score column, i.e., by the probability that a customer will churn.
count: The number of customers in the top k %.
hitRate: For each row of the Top K % table, this ratio is count divided by the total number of customers that churned.
precision: For each row of the Top K % table, we first compute k % of all customers (k % of all instances in our dataset), then divide count by this number.
sum of benefit: This column is available if, while creating the Business Model, we chose a Benefit Feature from the variables in our dataset, i.e., the business KPI we want to maximize. It shows the total benefit that the customers in the top k % will bring to our company. Benefit could be profit or any other value we want to maximize. This column helps us decide which customers to offer our discounts to, since it makes sense to spend our budget mostly on the customers that would cost us the most money if they decided to churn.
% of benefit: The percentage of the total benefit captured by the top k % of customers.
confidence min: The minimum decision score within the instance group.
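The following sketch shows one way the Top K % rows could be computed from these definitions. It assumes a scored dataset with illustrative columns score (churn probability), churned (actual outcome as 0/1), and benefit (the chosen Benefit Feature); it is not TAZI’s implementation:

```python
import numpy as np
import pandas as pd

def top_k_row(df: pd.DataFrame, k_percent: float) -> dict:
    ranked = df.sort_values("score", ascending=False)    # "sorted by score"
    top = ranked.head(int(len(ranked) * k_percent / 100))
    count = int(top["churned"].sum())                    # churners in the top k %
    return {
        "k %": k_percent,
        "count": count,
        "hitRate": count / df["churned"].sum(),          # share of all churners caught
        "precision": count / len(top),                   # count over k % of all instances
        "sum of benefit": top["benefit"].sum(),
        "% of benefit": top["benefit"].sum() / df["benefit"].sum() * 100,
        "confidence min": top["score"].min(),            # lowest score in the group
    }

# Illustrative scored dataset: 1,000 customers with random values.
rng = np.random.default_rng(0)
scored = pd.DataFrame({
    "score": rng.random(1000),
    "churned": rng.integers(0, 2, 1000),
    "benefit": rng.uniform(10, 500, 1000),
})
print(pd.DataFrame([top_k_row(scored, k) for k in (1, 5, 10, 20)]))
```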
ROC Curve
You can see the ROC Curve of your model, as well as the AUC, in this tab:
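As an illustration, the curve and its area can be computed with scikit-learn; the y_true and y_score arrays below are assumptions for the example:

```python
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 1, 0]                   # 1 = CHURN, 0 = REMAIN
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5]  # predicted churn probability

fpr, tpr, thresholds = roc_curve(y_true, y_score)    # points of the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))
```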
When you scroll down in TAZI Live, you can see additional charts and tables:
1. This pie chart shows the distribution of true positives, true negatives, false positives, and false negatives, along with their counts:

When you hover over one of the slices, you can see the percentage and number of instances predicted as REMAIN when the actual label is also REMAIN (the slice label lists the actual label first, then the predicted label).
2. These tables represent the confusion matrices for the whole dataset, the train dataset, and the test dataset (a sketch after this walkthrough makes these concrete). When you click Show As Percentage, you will see percentages instead of counts.
3. This table shows the performance metrics of the combined model results. The combined individual models are the ones we chose in the AI Algorithms tab when we configured our Business Model. Metrics such as precision, f1score, and recall (also covered by the sketch after this walkthrough) are listed in a table format by default. However, we can change the table to a radar chart by clicking the left button above the table:

4. When you click Show Details, another window opens so you can see performance metrics over the run history of your Business Model:
When you change the time window in the bottom chart, you can see the updated performance metrics in the top chart.
When you click Show the Most Current Data, the time window slides to the rightmost side at every update and only the current performance metrics are shown in the top chart.
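To make the confusion matrices in item 2 and the metrics table in item 3 concrete, here is a minimal sketch using scikit-learn; the CHURN/REMAIN label arrays are illustrative assumptions, not data from TAZI:

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_actual    = ["CHURN", "REMAIN", "REMAIN", "CHURN", "REMAIN", "CHURN"]
y_predicted = ["CHURN", "REMAIN", "CHURN",  "CHURN", "REMAIN", "REMAIN"]

labels = ["CHURN", "REMAIN"]
cm = confusion_matrix(y_actual, y_predicted, labels=labels)
print(cm)                      # counts, rows = actual, columns = predicted
print(cm / cm.sum() * 100)     # the "Show As Percentage" view

# Metrics for the anomaly class (CHURN), as in the Performance Slices.
for name, fn in [("precision", precision_score),
                 ("recall", recall_score),
                 ("f1score", f1_score)]:
    print(name, round(fn(y_actual, y_predicted, pos_label="CHURN"), 3))
```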
Model Performances
In Model Performances, you can see all of the individual model performances along with the Explanation Model and the combined model (Output).
To compare individual models more closely, you can click Insights/Compare Models:
In this window, you can easily compare the performance metrics and results of the individual machine learning models you selected while configuring your Business Model. You can change both models that will be presented side by side as the left and right models. Available models appear in both the left model and right model drop-down lists. By default, the left model is Output, which is the combined result of the models that comes out of the Combiner. Let’s change the right model to Neural Network and compare the two models:
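Conceptually, the comparison boils down to computing the same metrics for two sets of predictions. Here is a minimal sketch with illustrative predictions for Output and Neural Network (all arrays are assumptions):

```python
from sklearn.metrics import accuracy_score, f1_score

y_actual = ["CHURN", "REMAIN", "CHURN", "REMAIN", "CHURN"]
predictions = {
    "Output":         ["CHURN", "REMAIN", "CHURN", "CHURN", "CHURN"],
    "Neural Network": ["CHURN", "REMAIN", "REMAIN", "REMAIN", "CHURN"],
}

# Same metrics, computed side by side for the left and right models.
for model, y_pred in predictions.items():
    print(model,
          "accuracy:", accuracy_score(y_actual, y_pred),
          "f1score:", round(f1_score(y_actual, y_pred, pos_label="CHURN"), 3))
```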
Live Model
When you click Live Model, you will be asked to inspect saved models:

If you click Go!, you will be directed to a new window in which you’ll see saved snapshots of your model throughout its run:
This is important for continuous learning, since you may want to check performance metrics and evaluations during the training of your model. On each snapshot, you can see the number of instances from your dataset that TAZI has used so far to train your model sequentially, along with the snapshot time and performance metrics. When you click one of the snapshots, you will be directed to Insights&Explore for the selected snapshot of your model only.
When you click Compare, a new window will open up for you to compare two different snapshots of your explanation model:
Let's select Model:10000 and Model:20000 and click the Compare button:
As List
We can see explanations (paths/rules) as a list. This list shows explanations added in the right model that were absent from the left model, as well as explanations removed, i.e., present in the left model but no longer in the right model. (The left and right models are Model:10000 and Model:20000, respectively, following the example above.) A sketch of this bookkeeping follows the definitions below.
LabelFirst: This shows the label of instances following a decision path that was in the left model but absent in the right model.
LabelLast: This is the label of instances grouped with an explanation that is added in the right model but was absent in the left one.
SizeFirst and SizeLast denote the number of instances contained in the groupings in the left and right models, respectively.
NodeType: It shows whether a node was added or removed. Added nodes (explanations) in the right model are listed with a green color next to them, while removed ones are listed with red.
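A minimal sketch of this added/removed bookkeeping, using plain dicts that map an explanation (decision path) to the label and size of its instance group; the rule strings and numbers are illustrative assumptions:

```python
# Snapshot rule sets: explanation -> (label, group size).
left  = {"tenure < 12 AND plan = basic": ("CHURN", 180)}   # Model:10000
right = {"tenure < 12 AND plan = basic": ("CHURN", 310),
         "support_calls > 3":            ("CHURN", 95)}    # Model:20000

for rule in right.keys() - left.keys():                    # NodeType: added (green)
    label_last, size_last = right[rule]
    print("added:", rule, "LabelLast:", label_last, "SizeLast:", size_last)

for rule in left.keys() - right.keys():                    # NodeType: removed (red)
    label_first, size_first = left[rule]
    print("removed:", rule, "LabelFirst:", label_first, "SizeFirst:", size_first)
```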
On Model
Here, you will see the combined patterns: both those added to the right model and those removed from the left model. Since the left model was the snapshot of your model at 10,000 instances and the right model at 20,000 instances, we see instances with a total count of 30,000. Gray nodes represent the patterns that exist in both models.
Highlights
In this tab, you can see the highlights in your model and take a closer look at them instance by instance:
Thanks to the continuous learning TAZI offers, you can see how your model progresses over time during training. You can sort the instances by the Time column in descending or ascending order.
In the left pane, you can show only instances with a minimum decision score by adjusting the slider, and choose which classes to display via the check boxes.
In the upper pane, you can slide the time window, and the table below will display only the trained instances that fall within it. When you enable the Show the Most Current Data switch, the time window slides to the rightmost side and only the latest instances are shown in the table below.
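A minimal sketch of the filters described above, applied to an illustrative highlights table (the column names time, label, and score are assumptions):

```python
import pandas as pd

highlights = pd.DataFrame({
    "time": pd.to_datetime(["2023-01-01 10:00", "2023-01-01 10:05", "2023-01-01 10:09"]),
    "label": ["CHURN", "REMAIN", "CHURN"],
    "score": [0.92, 0.55, 0.81],
})

min_score = 0.8                                  # slider in the left pane
classes = {"CHURN"}                              # check boxes in the left pane
window = (pd.Timestamp("2023-01-01 10:04"),      # time window in the upper pane
          pd.Timestamp("2023-01-01 10:10"))

view = highlights[
    (highlights["score"] >= min_score)
    & highlights["label"].isin(classes)
    & highlights["time"].between(*window)
].sort_values("time", ascending=False)           # Time column, descending
print(view)
```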
Now, let’s focus on a single instance and explain how to inspect it further:

You will see both the actual and predicted label (Will Churn) for your instance, along with the instance’s explanation. It is the last instance TAZI trained on, and the high decision score means we are quite confident in our prediction, which correctly guessed CHURN when the actual label was CHURN as well.
Moreover, you can see the explanation by clicking the icon in the Explanation column. You can also see all feature values of that particular instance by clicking the Inspect Instance icon, which displays the whole instance in JSON format.
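As an illustration of what Inspect Instance surfaces, the sketch below renders a hypothetical instance as JSON; every field name and value here is made up:

```python
import json

# A hypothetical instance with its labels and decision score.
instance = {
    "tenure_months": 4,
    "monthly_charges": 89.5,
    "support_calls": 5,
    "actual_label": "CHURN",
    "predicted_label": "CHURN",
    "decision_score": 0.97,
}
print(json.dumps(instance, indent=2))
```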