Artificial intelligence systems – enabled by advancements in sensor and control technologies, data science, and machine learning – promise to deliver new and exciting applications to a broad range of industries. However, a fundamental trust in their application and execution must be established in order for them to succeed. People, by and large, do not trust a new entity or system in their environment without some evidence of trustworthiness. To trust an artificial intelligence system, we need to know which factors affect system behaviors, how those factors can be assessed and effectively applied for a given mission, and the risks assumed in trusting it.
This course aims to provide a foundation for building trust in artificial intelligence systems. A framework for evaluating trust is defined, highlighting three perspectives: data, artificial intelligence algorithms, and cybersecurity. An overview of the state of the art in research, methods, and technologies for achieving trust in AI is reviewed along with current applications.
Andrew Brethorst is the Associate Department Director for the Data Science and AI Department at The Aerospace Corporation. Mr. Brethorst completed his undergraduate degree in cybernetics at UCLA, and later completed his master’s degree in computer science with a concentration in machine learning at UCI. Much of his work involves applying machine learning techniques to image exploitation, telemetry anomaly detection, and intelligent artificial agents using reinforcement learning, as well as collaborative projects within the research labs.
Dr. Erik Linstead is a professor of AI at Chapman University. Dr. Linstead completed his undergraduate degree in computer science at Stanford University, and later went on to complete his PhD in artificial intelligence and machine learning at UC Irvine. He currently operates a research lab focused on using AI technology to enhance learning, as well as studying the effects of new treatments for autism.