Developed an adaptive, interpretable ML framework that uses contextual bandits to leverage the Rashomon Effect, demonstrating that users develop distinct individual preferences for model complexity while the system maintains predictive accuracy and interpretability.
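To make the idea concrete, below is a minimal sketch of how a contextual bandit can adapt model choice within a Rashomon set. It is not the framework's actual implementation: the arm names, the two-dimensional context, the simulated satisfaction function, and the exploration parameter `ALPHA` are all illustrative assumptions. It uses disjoint LinUCB to pick, per interaction, one of several hypothetical models with near-identical accuracy but different complexity, learning each user's complexity preference from feedback.

```python
"""Minimal sketch (assumed, not the framework's code): a LinUCB contextual
bandit choosing from a hypothetical Rashomon set -- models with near-equal
accuracy but different complexity -- and adapting to a simulated user's
complexity preference, which here depends on the context."""
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Rashomon set: equally accurate arms, varying complexity.
ARMS = [("sparse linear", 0.1), ("shallow tree", 0.4),
        ("GAM", 0.6), ("boosted ensemble", 0.9)]
D = 2        # assumed context: (user expertise, task difficulty), each in [0, 1]
ALPHA = 1.0  # exploration strength (assumed value)

def feedback(ctx, complexity, noise=0.1):
    """Simulated satisfaction: here, more expert users tolerate more complexity."""
    preferred = 0.2 + 0.6 * ctx[0]  # assumed per-context preference model
    return 1.0 - abs(complexity - preferred) + rng.normal(0, noise)

# Disjoint LinUCB state per arm: A = X'X + I, b = X'r.
A = [np.eye(D) for _ in ARMS]
b = [np.zeros(D) for _ in ARMS]

for t in range(1000):
    ctx = rng.random(D)  # fresh user/task context each round
    scores = []
    for a in range(len(ARMS)):
        A_inv = np.linalg.inv(A[a])
        theta = A_inv @ b[a]  # ridge estimate of this arm's reward weights
        # Upper confidence bound: estimated reward + exploration bonus.
        scores.append(theta @ ctx + ALPHA * np.sqrt(ctx @ A_inv @ ctx))
    a = int(np.argmax(scores))
    r = feedback(ctx, ARMS[a][1])
    A[a] += np.outer(ctx, ctx)  # update sufficient statistics for chosen arm
    b[a] += r * ctx

for (name, c), Aa, ba in zip(ARMS, A, b):
    theta = np.linalg.solve(Aa, ba)
    print(f"{name:16s} complexity={c:.1f} learned weights={np.round(theta, 2)}")
```

Under these toy assumptions, the bandit learns which complexity level each kind of user rewards, which is the mechanism the framework exploits: since Rashomon-set models are interchangeable on accuracy, the system is free to personalize along the complexity axis.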
I empirically validate interpretable ML systems through rigorous behavioral experiments and collaboration with domain experts. Using pre-registered studies (N > 250) and inferential statistics, I test which properties of transparent systems actually drive user understanding and adoption. My work challenges common assumptions about interpretability, revealing that adjustability and personalization often matter more than static transparency alone.