One of the great advantages of linear additive models is how easy it is to interpret the associations between the predictors and the target. More flexible models, on the other hand, tend to make more accurate predictions but sacrifice interpretability due to their higher complexity.
Can we have a highly-flexible model without losing interpretability?
This fun little project addresses this question using Local Interpretable Model-agnostic Explanations (LIME), a local surrogate method.
The project is divided into three parts:
Fit local approximations to the model's predictions around slightly perturbed versions of the original comments
By doing this, we can disentangle the association rules the model has learned.
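The perturb-and-fit idea above can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration, not the project's actual code: `toy_model` stands in for whatever black-box classifier scores the comments, and the kernel width and sample count are arbitrary choices.

```python
import numpy as np

def toy_model(texts):
    """Hypothetical stand-in for a black-box classifier: scores a comment
    higher when it contains certain flagged words."""
    flagged = {"terrible", "awful"}
    return np.array([sum(w in flagged for w in t.split()) / max(len(t.split()), 1)
                     for t in texts])

def lime_text(text, model, n_samples=500, seed=0):
    """Minimal LIME sketch for text: perturb the input by dropping words,
    query the black box, and fit a locally weighted linear surrogate."""
    rng = np.random.default_rng(seed)
    words = text.split()
    d = len(words)
    # Binary masks: 1 keeps a word, 0 drops it; first row is the original.
    Z = rng.integers(0, 2, size=(n_samples, d))
    Z[0] = 1
    perturbed = [" ".join(w for w, keep in zip(words, z) if keep) for z in Z]
    y = model(perturbed)
    # Proximity weights: perturbations closer to the original count more.
    distances = 1.0 - Z.mean(axis=1)            # fraction of words removed
    weights = np.exp(-(distances ** 2) / 0.25)  # exponential kernel
    # Weighted least squares with an intercept column: the surrogate's
    # coefficients are the per-word contributions to the prediction.
    X = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(weights)
    coef = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    # Note: repeated words collapse in the dict; fine for a sketch.
    return dict(zip(words, coef[1:]))

scores = lime_text("the service was terrible but the food was fine", toy_model)
```

Here `scores["terrible"]` comes out as the largest positive coefficient, since removing that word is what moves the toy model's prediction the most; that is exactly the kind of local, word-level attribution the project extracts from the real model.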
Ultimately, this project serves as an example of how to make sense of highly complex Machine Learning models.
While you're here:
Check out my Python class to apply LIME to NLP models
Read the paper in which I review this methodology