Visual Understanding using Analogical Learning

Speaker: Kezhen Chen (Northwestern University)

Date and Time: Friday, October 22 at 11am CST


Analogical ability – the ability to make relational comparisons between objects, events, or ideas and to see common relational patterns across different sets of objects – is a core mechanism in human cognition. Computer scientists have developed computational models of this human analogical ability for learning and reasoning. Ample research has shown that analogical learning over qualitative representations provides high data efficiency and strong explainability. This talk will introduce how analogical learning is applied to qualitative visual representations of sketches and images, and describe a hybrid method that combines deep learning and analogical learning for visual understanding of real images.


Kezhen Chen is currently a sixth-year PhD student at Northwestern University, working in the Qualitative Reasoning Group under the supervision of Prof. Ken Forbus. His research focuses on building hybrid systems for visual understanding by combining qualitative reasoning and deep learning. His interests include qualitative reasoning, cognitive modeling, computer vision, multi-modal learning, and neural-symbolic methods. He has published at multiple top-tier conferences, including ICML, AAAI, and CogSci. He also received the Best Paper Award at the NeurIPS KR2ML workshop in 2019 for a paper written in collaboration with Microsoft Research.