PyData Global 2022

SHASHANK SHEKHAR

Shashank is a Data Sciences leader with diverse experience across verticals including Telecom, CPG, Retail, Hi-tech, and E-commerce. He currently heads the Artificial Intelligence Labs at Subex. He has previously worked at VMware, Amazon, Flipkart, and Target, where he solved a range of complex business problems using Machine Learning and Deep Learning. He has served on the program committees of several international conferences, including ICDM and MLDM, and was selected as a mentor for the Global Datathon 2018 organized by the Data Sciences Society. He holds multiple patents and has published on artificial intelligence, machine learning, deep learning, and image recognition in several reputed international journals. He has spoken at many summits and conferences, including PyData Global, the APAC Data Innovation Summit, the Big Data Lake Summit, and PlugIn. He has also published three open-source Python libraries and is an active contributor to the global Python community.

Sessions

12-01
08:00
30min
Generate Actionable Counterfactuals using Multi-objective Particle Swarm Optimization
Niranjan G S, SHASHANK SHEKHAR

Counterfactual explanations (CFEs) explain a machine learning model's prediction for a data point by finding minimal changes to its features that yield an alternate class prediction. In this talk, we describe a counterfactual (CF) generation method based on particle swarm optimization (PSO) and show how it gives greater control over the proximity and sparsity properties of the generated CFs.
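
The talk's method is not reproduced here; as a rough illustration of the idea, the sketch below searches for a counterfactual of a toy scikit-learn classifier with a plain PSO loop. The scalarized fitness (rather than a true multi-objective formulation), the objective weights, and the toy model are all assumptions for exposition, not the speakers' implementation.

# Minimal sketch of counterfactual search with particle swarm optimization.
# Assumptions for illustration only: a toy logistic-regression model and a
# scalarized fitness combining prediction flip, proximity (L2), and sparsity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

x0 = X[0]                                            # instance to explain
target = 1 - model.predict(x0.reshape(1, -1))[0]     # desired (flipped) class

def fitness(x):
    """Lower is better: reach the target class, stay close, change few features."""
    p_target = model.predict_proba(x.reshape(1, -1))[0, target]
    proximity = np.linalg.norm(x - x0)                    # L2 distance to original
    sparsity = np.count_nonzero(np.abs(x - x0) > 1e-3)    # number of features changed
    return (1.0 - p_target) + 0.5 * proximity + 0.1 * sparsity

# Standard PSO update (inertia + cognitive + social terms).
n_particles, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = x0 + rng.normal(scale=0.5, size=(n_particles, x0.size))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("original class:", model.predict(x0.reshape(1, -1))[0])
print("counterfactual class:", model.predict(gbest.reshape(1, -1))[0])
print("feature deltas:", np.round(gbest - x0, 3))

The weights on proximity and sparsity in the fitness are the knobs the abstract alludes to: raising them pulls the generated CF closer to the original point or restricts it to fewer changed features.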

Talk Track I
12-01
08:30
30min
Measurement of Trust in AI
SHASHANK SHEKHAR

For enterprises to adopt and embrace AI in their transformational journey, it is imperative to build Trustworthy AI, so that the AI products and solutions that are built, delivered, and acquired are responsible enough to drive trust and wider adoption. We look at AI Trust as a function of key constructs: Reliability, Safety, Transparency, Responsibility, and Accountability. These core constructs are the pillars of driving AI trust in our products and solutions. In this talk, I will explain how to enable each core construct and articulate how they can be measured in some real-world use cases.
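
The abstract names the constructs but not their metrics, so the snippet below is a purely hypothetical illustration of how per-construct scores might be rolled up into a single trust index; the example metrics and equal weighting are assumptions, not the measurement approach presented in the talk.

# Hypothetical illustration only: construct names follow the abstract, while the
# example metrics and equal weights are assumptions made for exposition.
from dataclasses import dataclass

@dataclass
class TrustScores:
    reliability: float     # e.g. prediction stability under retraining and drift
    safety: float          # e.g. rate of outputs passing safety guardrails
    transparency: float    # e.g. share of predictions shipped with an explanation
    responsibility: float  # e.g. fairness-audit pass rate
    accountability: float  # e.g. coverage of model lineage and audit logging

def trust_index(s: TrustScores, weights=(0.2, 0.2, 0.2, 0.2, 0.2)) -> float:
    """Weighted average of construct scores, each assumed to lie in [0, 1]."""
    values = (s.reliability, s.safety, s.transparency,
              s.responsibility, s.accountability)
    return sum(w * v for w, v in zip(weights, values))

# Example: a model strong on reliability and safety but weaker on transparency.
print(trust_index(TrustScores(0.90, 0.95, 0.70, 0.85, 0.60)))  # approx. 0.80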

Talk Track I