PyData Global 2022

Responsible AI - What, Why, How and Future!
12-02, 09:00–09:30 (UTC), Talk Track II

People mostly associate Artificial Intelligence with progress, intelligence, and productivity. But with it come unfair decisions, bias, the replacement of human workers, and a loss of privacy and security. To make matters worse, many of these problems are specific to AI, which means the rules and regulations in place are inadequate to deal with them. This is where Responsible AI comes into play: it seeks to resolve these problems and establish accountability for AI systems. In this talk I will cover what Responsible AI is, why it is needed, how it can be implemented, what the various Responsible AI frameworks are, and what the future holds.


I will cover what Responsible AI is, why it is needed, how it can be implemented, what the various frameworks are, and what the future holds, with a closer look at three major Responsible AI frameworks: those from Google, Microsoft, and IBM.

The development of AI is creating new opportunities to improve the lives of people around the world, from business to healthcare to education. It is also raising new questions about the best way to build fairness, interpretability, privacy, and security into these systems.

The first area is interpretability. When we interpret a model, we get an explanation of how it makes its predictions. An AI system could reject your mortgage application or diagnose you with cancer, and even if those decisions are correct, a user would likely demand an explanation. Some models are easier to interpret than others, which makes explanations easier to obtain. Responsible AI can define how we build interpretable models, and when it is acceptable to use a less interpretable one.
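
To make this concrete, here is a minimal interpretability sketch in Python (assuming scikit-learn; the dataset is just a stand-in, not one from the talk). A linear model's coefficients give a direct, per-feature explanation of its predictions:

    # Rank the features of a logistic regression by how strongly they
    # push a prediction up or down. The dataset is illustrative only.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    data = load_breast_cancer()
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(data.data, data.target)

    coefs = model.named_steps["logisticregression"].coef_[0]
    for name, weight in sorted(zip(data.feature_names, coefs),
                               key=lambda pair: -abs(pair[1]))[:5]:
        print(f"{name}: {weight:+.2f}")

Each printed weight shows how much a (standardized) feature moves the model's score, which is exactly the kind of explanation a less interpretable model cannot offer directly.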

Related to interpretability is model fairness. AI systems can make decisions that discriminate against certain groups of people, and this bias usually comes from bias in the data used to train the models. In general, the more interpretable a model is, the easier it is to ensure fairness and correct any bias. We still need a Responsible AI framework to define how we evaluate fairness and what to do when a model is found to make unfair predictions. This is especially important when using less interpretable models.
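
As one hedged example of what evaluating fairness can look like, the sketch below computes demographic parity, the gap in positive-prediction rates between groups, with plain pandas. The DataFrame and its column names are hypothetical:

    # Demographic parity difference: the gap in positive-prediction
    # rates between two groups. Columns are made up for illustration.
    import pandas as pd

    df = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
        "prediction": [ 1,   0,   1,   0,   0,   1,   0,   1 ],
    })

    rates = df.groupby("group")["prediction"].mean()
    print(rates)
    print("demographic parity difference:", rates.max() - rates.min())

A framework still has to decide which metric to use and what gap is acceptable; the code only measures it.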

Safety and security are another concern. Neither is new to software development, and both are addressed by techniques like encryption and software testing. The difference is that, unlike conventional software, an AI system's behaviour is learned rather than explicitly programmed: faced with new scenarios, it can make unexpected decisions, and it can even be manipulated into making incorrect ones. This is particularly concerning for physical systems such as robots and self-driving cars, where errors can cause injury or death.
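
To illustrate the manipulation point, here is a minimal sketch of a fast-gradient-sign attack on a toy logistic regression (a standard textbook example, not something specific to the talk; the weights and input are made up). A small, targeted change to the input flips the model's decision:

    # Fast gradient sign method against a toy logistic regression:
    # perturb the input in the direction that raises the loss.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.array([2.0, -3.0, 1.5])   # made-up trained weights
    x = np.array([0.5, -0.2, 0.4])   # input the model classifies as positive
    y = 1.0                          # its true label

    # Gradient of the logistic loss with respect to the input.
    grad_x = (sigmoid(w @ x) - y) * w

    eps = 0.5
    x_adv = x + eps * np.sign(grad_x)

    print("original score:   ", sigmoid(w @ x))      # ~0.90 -> positive
    print("adversarial score:", sigmoid(w @ x_adv))  # ~0.26 -> flipped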

The last aspect is privacy and data governance. The quality of the data used matters: if there are mistakes in the data an AI system learns from, the system may make incorrect decisions. In general, AI should also not be allowed to use sensitive data.
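
Here is a hedged sketch of what basic data governance can look like in code (the column names and threshold are hypothetical): declared-sensitive fields are stripped and obvious quality problems fail fast, before any model sees the data:

    # Strip declared-sensitive columns and reject low-quality data
    # before training. Names and the 5% threshold are illustrative.
    import pandas as pd

    SENSITIVE_COLUMNS = {"gender", "ethnicity", "religion"}

    def prepare_training_data(df: pd.DataFrame) -> pd.DataFrame:
        # Drop sensitive attributes so the model cannot use them directly.
        df = df.drop(columns=[c for c in df.columns if c in SENSITIVE_COLUMNS])

        # Quality gate: mistakes in the data lead to incorrect decisions.
        if df.isna().mean().max() > 0.05:
            raise ValueError("a column is more than 5% missing")
        return df.drop_duplicates()

Note that dropping sensitive columns alone does not guarantee fairness, since other features can act as proxies; it is a starting point, not a solution.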

Ultimately, what this all boils down to is trust. If users do not trust AI, they will not use your services. We won't trust systems that use information we are uncomfortable sharing, or that we think make biased decisions. We certainly won't trust a system we think could cause us physical harm. Explanations for decisions, and accountability for those decisions, go a long way toward building this trust. This need for trust is what is driving self-regulation among companies that use AI.


Prior Knowledge Expected

No previous knowledge expected

Dr Sonal Kukreja is a passionate academician and researcher. She holds a PhD in Computer Science and works as a Professor at Bennett University. She is also the founder of ChildrenWhoCode, which is on a mission to provide quality technical education to every child in India and prepare them for the upcoming tech market.