ICML 2020 Workshop
Deep learning has achieved great success in a variety of tasks, such as recognizing objects in images, predicting the sentiment of sentences, and image/speech synthesis, by training on large amounts of data. However, most existing successes concern perceptual tasks, also known as System I intelligence. In the real world, many complex tasks, such as autonomous driving, public policy decision making, and multi-hop question answering, require understanding the relationships between high-level variables in the data and performing logical reasoning, which is known as System II intelligence. Integrating System I and System II intelligence lies at the core of artificial intelligence and machine learning.
Graphs are an important structure for System II intelligence: they provide a universal representation that captures the relationships between variables and supports interpretability, causality, and transferability / inductive generalization. Traditional logic and symbolic reasoning over graphs has relied on methods and tools that are very different from deep learning models, such as the Prolog language, SMT solvers, constrained optimization, and discrete algorithms. Is such a methodological separation between System I and System II intelligence necessary? How can we build a flexible, effective, and efficient bridge that smoothly connects these two systems and creates higher-order artificial intelligence?
Graph neural networks have emerged as the tool of choice for graph representation learning, leading to impressive progress in many classification and regression problems such as chemical synthesis, 3D vision, recommender systems, and social network analysis. However, prediction and classification tasks can be very different from logic/symbolic reasoning.
Bits and pieces of evidence can be gleaned from the recent literature, suggesting that graph neural networks may be a general tool for making such a connection. For example, (Battaglia et al., 2018; Barceló et al., 2019) viewed graph neural networks as tools to explicitly incorporate a logical reasoning bias, (Kipf et al., 2018) used graph neural networks to reason about interacting systems, (Yoon et al., 2018; Zhang et al., 2020) used neural networks for logic and probabilistic inference, (Hudson & Manning, 2019; Hu et al., 2019) used graph neural networks to reason over scene graphs for visual question answering, (Qu & Tang, 2019) studied reasoning on knowledge graphs with graph neural networks, and (Khalil et al., 2017; Xu et al., 2018; Velickovic et al., 2019; Sato et al., 2019) used graph neural networks for discrete graph algorithms. However, there is still a long way to go before we have satisfactory and definite answers on the ability of graph neural networks to automatically discover logic rules and to conduct long-range, multi-step complex reasoning in combination with perceptual inputs such as language, vision, and spatial and temporal variation. A minimal sketch of this message-passing / graph-algorithm analogy follows.
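As an illustration of the last point, the following minimal Python sketch (purely illustrative; the toy graph, edge weights, and the function message_passing_step are hypothetical constructions, not taken from any of the cited papers) shows how one min-aggregation message-passing step coincides with a Bellman-Ford shortest-path relaxation, so that several rounds of message passing amount to multi-step reasoning over a graph:

# Illustrative sketch only: graph, weights, and function name are hypothetical.
# One min-aggregation message-passing step coincides with a Bellman-Ford
# relaxation, so K rounds of message passing carry out K steps of
# shortest-path reasoning over the graph.
import math

def message_passing_step(h, edges):
    """One step: h'[v] = min(h[v], min over edges (u, v) of h[u] + w)."""
    new_h = dict(h)
    for (u, v), w in edges.items():
        new_h[v] = min(new_h[v], h[u] + w)
    return new_h

# Toy graph: path 0 -> 1 -> 2 (cost 1 + 1) plus a direct shortcut 0 -> 2 (cost 5).
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 5.0}
h = {0: 0.0, 1: math.inf, 2: math.inf}  # distance estimates from source node 0

for _ in range(2):  # two rounds suffice for paths of length two
    h = message_passing_step(h, edges)

print(h)  # {0: 0.0, 1: 1.0, 2: 2.0}: the two-hop route beats the shortcut

A learned graph neural network would replace the hand-coded min and + with trainable aggregation and message functions; when such learned updates align with classical algorithms is precisely the question studied in the works cited above (e.g., Xu et al., 2018; Velickovic et al., 2019).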
Can graph neural networks be the key bridge connecting System I and System II intelligence? Are there other, more flexible, effective, and efficient alternatives? For instance, (Wang et al., 2019) combined a maximum satisfiability solver with deep learning, (Manhaeve et al., 2018) combined directed graphical models and ProbLog with deep learning, (Skryagin, 2020) combined sum-product networks with deep learning, and (Silver et al., 2019; Alet et al., 2019) combined logic reasoning with reinforcement learning. How do these alternative methods compare with graph neural networks as a bridge?
The goal of this workshop is to bring researchers from previously separate fields, such as deep learning, logic/symbolic reasoning, statistical relational learning, and graph algorithms, under a common roof to discuss the potential interface and integration between System I and System II intelligence. By providing a venue for the confluence of new advances in theoretical foundations, models, and algorithms, as well as empirical discoveries, new benchmarks, and impactful applications, we hope this workshop can shed light on the bridge to higher-order intelligence and spark new ideas along this direction. The topics discussed in this workshop will include but are not limited to:
University of Montreal & Mila
DeepMind
CMU
MIT
MIT
KU Leuven
TU Darmstadt
Sponsored by Ant Financial
1. Email:
jian.tang@hec.ca
2. Research expertise:
Jian is an assistant professor at HEC Montreal and a core faculty member at Mila - Quebec AI Institute. He is a member of the first cohort of Canada CIFAR AI Chairs. His main research interests are graph representation learning and graph neural networks, with applications in knowledge graphs, drug discovery, and recommender systems. Before joining HEC Montreal, he was a postdoc at the University of Michigan and Carnegie Mellon University, and a researcher at Microsoft Research Asia. He has received several best paper awards, including the ICML'14 Best Paper Award, a WWW'16 Best Paper Nomination, and the Best Paper Award at the KDD'19 Workshop on Deep Learning Practice for High-Dimensional Sparse Data. He published one of the earliest works on graph representation learning---the LINE algorithm---which has been cited close to 1,900 times since it was published in 2015.
3. Previous experience:
Jian has co-organized workshops at SDM'19, CIKM'19, and AAAI'20.
1. Email:
lsong@cc.gatech.edu
2. Research expertise:
Le is an Associate Professor in the College of Computing and an Associate Director of the Center for Machine Learning at the Georgia Institute of Technology. He is also affiliated with the AI department at Ant Financial. His principal research directions are kernel methods, deep learning, and probabilistic graphical models. Before joining the Georgia Institute of Technology in 2011, he was a postdoc in the Department of Machine Learning at Carnegie Mellon University and then a research scientist at Google. He is the recipient of the NSF CAREER Award '14 and many best paper awards, including the NIPS'17 Materials Science Workshop Best Paper Award, the RecSys'16 Deep Learning Workshop Best Paper Award, the AISTATS'16 Best Student Paper Award, the IPDPS'15 Best Paper Award, the NIPS'13 Outstanding Paper Award, and the ICML'10 Best Paper Award. He has served as an area chair or senior program committee member for many leading machine learning and AI conferences, such as ICML, NIPS, AISTATS, AAAI, and IJCAI, and as an action editor for JMLR and IEEE TPAMI.
3. Previous experience:
Le has co-organized workshops at ICML 2015, the Simons Institute 2017, WWW 2015, NeurIPS 2019, and NIPS 2014 and 2012.
1. Email:
jure@cs.stanford.edu
2. Research expertise:
Jure Leskovec is an Associate Professor of Computer Science at Stanford University, Chief Scientist at Pinterest, and an investigator at the Chan Zuckerberg Biohub. His research focuses on machine learning and data mining of large social, information, and biological networks. Computation over massive data is at the heart of his research and has applications in computer science, the social sciences, marketing, and biomedicine. This research has won several awards, including a Lagrange Prize, a Microsoft Research Faculty Fellowship, an Alfred P. Sloan Fellowship, and numerous best paper and test-of-time awards. Leskovec received his bachelor's degree in computer science from the University of Ljubljana, Slovenia, his PhD in machine learning from Carnegie Mellon University, and postdoctoral training at Cornell University.
3. Previous experience:
Jure has co-organized workshops at NeurIPS'19, ICML'19, and ICLR'19, as well as multiple workshops at KDD and WWW.
1. Email:
lrjconan@gmail.com
2. Research expertise:
Renjie is a PhD student in the machine learning group at the University of Toronto, jointly supervised by Raquel Urtasun and Richard Zemel. He also works part-time as a senior research scientist in the Uber Advanced Technologies Group. His research interest is machine learning, with a recent focus on deep probabilistic models of graph-structured data and their applications. He is also interested in applying machine learning algorithms to various computer vision and self-driving problems. He has made several contributions to the field of graph neural networks, published at top-tier venues in the machine learning community (NeurIPS, ICLR, ICML) and the computer vision community (CVPR, ICCV). He is the lead developer of Graph Recurrent Attention Networks, LanczosNet, NerveNet, and Graph Partition Networks.
3. Previous experience:
Renjie has co-organized related workshops at NeurIPS 2019, ICML 2019, and KDD 2019.
1. Email:
yujiali@google.com
2. Research expertise:
Yujia Li is a senior research scientist at DeepMind. He obtained his Ph.D. from the University of Toronto. He has been working on graph neural networks since 2015 and is particularly interested in scaling up GNNs, graph generative models, and real-world applications of GNNs.
3. Previous experience:
Yujia has co-organized related workshops at ICML 2019 and ICLR 2019.
1. Email:
fidler@cs.toronto.edu
2. Research expertise:
Sanja Fidler is an Assistant Professor at the University of Toronto. Prior to coming to Toronto, in 2012/2013, she was a Research Assistant Professor at the Toyota Technological Institute at Chicago, an academic institute located on the campus of the University of Chicago. She did her postdoc with Prof. Sven Dickinson at the University of Toronto in 2011/2012. Sanja finished her PhD in 2010 at the University of Ljubljana in Slovenia. In 2010, she visited Prof. Trevor Darrell's group at UC Berkeley and ICSI. She received her BSc degree in Applied Math from the University of Ljubljana. Her work is in the area of computer vision. Her main research interests are 2D and 3D object detection, particularly scalable multi-class detection, object segmentation and image labeling, and (3D) scene understanding. She is also interested in the interplay between language and vision: generating sentential descriptions of complex scenes, as well as using textual descriptions for better scene parsing (e.g., in the scenario of human-robot interaction).
3. Previous experience:
Sanja has co-organized many workshops at ICCV and ECCV. She is a program chair of ICCV'21 and was a program chair of 3DV'16.
1. Email:
fzemel@cs.toronto.edu
2. Research expertise:
Richard Zemel is a Professor of Computer Science at the University of Toronto, where he has been a faculty member since 2000. Prior to that, he was an Assistant Professor in Computer Science and Psychology at the University of Arizona and a Postdoctoral Fellow at the Salk Institute and at Carnegie Mellon University. He received a B.Sc. degree in History & Science from Harvard University in 1984 and a Ph.D. in Computer Science from the University of Toronto in 1993. He is also the co-founder of SmartFinance, a financial technology startup specializing in data enrichment and natural language processing. His awards include an NVIDIA Pioneers of AI Award, a Young Investigator Award from the Office of Naval Research, a Presidential Scholar Award, two NSERC Discovery Accelerators, and seven Dean’s Excellence Awards at the University of Toronto. He is a Fellow of the Canadian Institute for Advanced Research and is on the Executive Board of the Neural Information Processing Society, which runs the premier international machine learning conference. His research contributions include foundational work on systems that learn useful representations of data without any supervision; methods for learning to rank and recommend items; and machine learning systems for automatic captioning and answering questions about images. He developed the Toronto Paper Matching System, a system for matching paper submissions to reviewers, which is being used in many conferences, including NIPS, ICML, CVPR, ICCV, and UAI. His research is supported by grants from NSERC, CIFAR, Microsoft, Google, Samsung, DARPA and iARPA.
3. Previous experience:
Richard is the Co-Founder and Director of Research at the Vector Institute for Artificial Intelligence.
1. Email:
rsalakhu@cs.cmu.edu
2. Research expertise:
Ruslan Salakhutdinov is a UPMC Professor of Computer Science in the Machine Learning Department, School of Computer Science, at Carnegie Mellon University. He received his PhD in machine learning (computer science) from the University of Toronto in 2009. After spending two post-doctoral years at the Massachusetts Institute of Technology Artificial Intelligence Lab, he joined the University of Toronto as an Assistant Professor in the Department of Computer Science and the Department of Statistics. In February 2016, he joined the Machine Learning Department at Carnegie Mellon University as an Associate Professor. Ruslan's primary interests lie in deep learning, machine learning, and large-scale optimization. His main research goal is to understand the computational and statistical principles required for discovering structure in large amounts of data. He is an action editor of the Journal of Machine Learning Research and has served on the senior program committee of several learning conferences, including NIPS and ICML. He is an Alfred P. Sloan Research Fellow, a Microsoft Research Faculty Fellow, a Canada Research Chair in Statistical Machine Learning, a recipient of the Early Researcher Award, the Connaught New Researcher Award, the Google Faculty Award, and Nvidia's Pioneers of AI Award, and is a Senior Fellow of the Canadian Institute for Advanced Research.
3. Previous experience:
Ruslan has co-organized workshops at ICML'13, NeurIPS'11, NeurIPS'10, NeurIPS'09, ICML'09, NeurIPS'07, and he was the program co-chair at ICML'19.