As AI permeates the home, workplace, and public life, it’s increasingly important to understand why and how it makes its decisions. Explainable AI isn’t just a matter of hitting a switch, though; experts from UC Berkeley, SRI, and Fiddler Labs will discuss how we should go about it on stage at TC Sessions: Robotics+AI on March 3.
What does explainability really mean? Do we need to start from scratch? How do we avoid exposing proprietary data and methods? Will there be a performance hit? Whose responsibility will it be, and who will ensure it is done properly?
On our panel addressing these questions and more will be two experts, one each from academia and private industry.
Trevor Darrell is a professor in UC Berkeley’s computer science department who helps lead many of the university’s AI-related labs and projects, especially those concerned with the next generation of smart transportation. His research group focuses on perception and human-AI interaction, and he previously led a computer vision group at MIT.
Krishna Gade has spent time at Facebook, Pinterest, Twitter, and Microsoft, and has seen firsthand how AI is developed privately — and how biases and flawed processes can lead to troubling results. He co-founded Fiddler to address problems of fairness and transparency by providing an explainable AI framework for enterprise.
Moderating and taking part in the discussion will be SRI International’s Karen Myers, director of the research outfit’s Artificial Intelligence Center and an AI developer herself focused on collaboration, automation, and multi-agent systems.
Save $50 on tickets when you book today. Ticket prices go up at the door, and tickets are selling fast. We have two (yes, two) Startup Demo Packages left — book your package now and get your startup in front of 1,000+ of today’s leading industry minds. Packages come with four tickets — book here.