Overview

Deep learning has achieved great success in a variety of tasks, such as recognizing objects in images, predicting the sentiment of sentences, and synthesizing images and speech, by training on large amounts of data. However, most of these successes concern perceptual tasks, also known as System I intelligence. In the real world, many complicated tasks, such as autonomous driving, public policy decision making, and multi-hop question answering, require understanding the relationships between high-level variables in the data in order to perform logical reasoning, which is known as System II intelligence. Integrating System I and System II intelligence lies at the core of artificial intelligence and machine learning.

Graphs are an important structure for System II intelligence: they can universally represent the relationships between different variables, and they support interpretability, causality, and transferability / inductive generalization. Traditional logic and symbolic reasoning over graphs has relied on methods and tools very different from deep learning models, such as the Prolog language, SMT solvers, constrained optimization, and discrete algorithms. Is such a methodological separation between System I and System II intelligence necessary? How can we build a flexible, effective, and efficient bridge that smoothly connects these two systems and creates higher-order artificial intelligence?

Graph neural networks have emerged as the tool of choice for graph representation learning, which has led to impressive progress in many classification and regression problems such as chemical synthesis, 3D vision, recommender systems, and social network analysis. However, prediction and classification tasks can be very different from logic/symbolic reasoning.
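For readers less familiar with this model class, the sketch below illustrates the message-passing computation that most graph neural networks share: at each layer, every node aggregates its neighbours' features and combines them with its own, so that stacking k layers mixes information from k-hop neighbourhoods. This is only a minimal illustration in plain NumPy; the function names and the mean aggregator are our own illustrative choices, not those of any particular GNN library.

    # Minimal message-passing sketch (illustrative names, NumPy only).
    import numpy as np

    def gnn_layer(A, H, W_self, W_neigh):
        # A: (n, n) adjacency matrix; H: (n, d) node feature matrix.
        # Aggregate neighbour features by their mean (degree-normalized sum).
        deg = A.sum(axis=1, keepdims=True).clip(min=1)
        M = (A @ H) / deg
        # Combine each node's own features with its aggregated messages.
        return np.maximum(H @ W_self + M @ W_neigh, 0.0)  # ReLU

    # Toy usage: a 4-node cycle with 8-dimensional random features.
    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    H = rng.normal(size=(4, 8))
    W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
    H = gnn_layer(A, H, W1, W2)  # stack layers to reach k-hop neighbours

Whether this purely local, differentiable computation can be stretched to the discrete, multi-step operations of symbolic reasoning is precisely the question raised below.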

Bits and pieces of evidence can be gleaned from the recent literature suggesting that graph neural networks may be a general tool to make such a connection. For example, (Battaglia et al., 2018; Barceló et al., 2019) viewed graph neural networks as tools that explicitly incorporate a logical reasoning bias, (Kipf et al., 2018) used graph neural networks to reason about interacting systems, (Yoon et al., 2018; Zhang et al., 2020) used neural networks for logic and probabilistic inference, (Hudson & Manning, 2019; Hu et al., 2019) used graph neural networks over scene graphs for visual question answering, (Qu & Tang, 2019) studied reasoning on knowledge graphs with graph neural networks, and (Khalil et al., 2017; Xu et al., 2018; Velickovic et al., 2019; Sato et al., 2019) used graph neural networks for discrete graph algorithms. However, there is still a long way to go before we have a satisfactory and definitive answer on the ability of graph neural networks to automatically discover logic rules and to conduct long-range, multi-step, complex reasoning in combination with perceptual inputs such as language, vision, and spatial and temporal variation.

Can graph neural networks be the key bridge to connect System I and System II intelligence? Are there other more flexible, effective, and efficient alternatives? For instance, (Wang et al., 2019) combined a maximum satisfiability solver with deep learning, (Manhaeve et al., 2018) combined the probabilistic logic programming language ProbLog with deep learning, (Skryagin et al., 2020) combined sum-product networks with deep learning, and (Silver et al., 2019; Alet et al., 2019) combined logic reasoning with reinforcement learning. How do these alternative methods compare with graph neural networks as a bridge?

The goal of this workshop is to bring researchers from previously separate fields, such as deep learning, logic/symbolic reasoning, statistical relational learning, and graph algorithms, under a common roof to discuss the potential interface and integration between System I and System II intelligence. By providing a venue for the confluence of new advances in theoretical foundations, models, and algorithms, as well as empirical discoveries, new benchmarks, and impactful applications, we hope to shed light on the bridge to higher-order intelligence and to spark new ideas along this direction. The topics discussed in this workshop will include, but are not limited to:

  • Deep learning and graph neural networks for logic reasoning, knowledge graphs and relational data.
  • Deep relational and graph reasoning in computer vision.
  • Deep learning and graph neural networks for multi-hop reasoning in natural language and text corpora.
  • Deep learning for statistical relational modeling (e.g., Bayesian networks, Markov networks, and causal models).
  • Deep learning for graph and symbolic algorithms (e.g., combinatorial and iterative algorithms).
  • Deep learning for the induction of structures, such as logical and mathematical formulas and relational patterns.
  • Theoretical foundations of graph neural networks for logic reasoning and graph algorithms.
  • Applications in different fields such as computer vision, natural language processing, healthcare and other sciences.
  • Benchmark data sets and open source libraries.
  • Other mechanisms such as attention and consciousness.

Speakers

Yoshua Bengio

University of Montreal & Mila

Peter Battaglia

DeepMind

Zico Kolter

CMU

Ferran Alet

MIT

Tommi Jaakkola

MIT

Luc De Raedt

KU Leuven

Kristian Kersting

TU Darmstadt

Accepted Papers

  • [Spotlight] Generating Programmatic Referring Expressions via Program Synthesis
    Jiani Huang, Osbert Bastani, Calvin Smith, Rishabh Singh, Aws Albarghouthi, Mayur Naik
  • [Spotlight] Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning
    Qing Li, Siyuan Huang, Yining Hong, Yixin Chen, Ying Nian Wu, Song-Chun Zhu
  • [Spotlight] Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs
    Hongyu Ren, Jure Leskovec
  • [Spotlight] Barking up the right tree: an approach to search over molecule synthesis DAGs
    John A Bradshaw, Brooks Paige, Matt J Kusner, Marwin Segler, Jose Miguel Hernandez-Lobato
  • [Spotlight] Modeling the semantics of data sources with graph neural networks
    Giuseppe Futia, Giovanni Garifo, Antonio Vetrò, Juan Carlos De Martin
  • [Spotlight] SpatialSim: Recognizing Spatial Configurations of Objects with Graph Neural Networks
    Laetitia Teodorescu, Katja Hofmann, Pierre-Yves Oudeyer
  • [Spotlight] Enhancing Neural Mathematical Reasoning by Abductive Combination with Symbolic Library
    Yangyang Hu, Yang Yu
  • [Spotlight] Learning Retrosynthetic Planning with Chemical Reasoning
    Binghong Chen, Chengtao Li, Hanjun Dai, Le Song
  • [Poster] Neural Analogical Matching
    Maxwell Crouse, Constantine Nakos, Ibrahim Abdelaziz, Ken Forbus
  • [Poster] End-to-end permutation learning with Hungarian algorithm
    Yuta Kawachi, Teppei Suzuki
  • [Poster] KGNN: Distributed Framework for Graph Neural Knowledge Representation
    Binbin Hu, Zhiyang Hu, Zhiqiang Zhang, Jun Zhou, Chuan Shi
  • [Poster] Heterogeneous Graph Neural Network for Recommendation
    Jinghan Shi, Houye Ji, Chuan Shi, Xiao Wang, Zhiqiang Zhang, Jun Zhou
  • [Poster] Molecule Edit Graph Attention Network: Modeling Chemical Reactions as Sequences of Graph Edits
    Mikołaj Sacha, Mikołaj Błaż, Piotr Byrski, Paweł Włodarczyk-Pruszyński, Stanislaw Jastrzebski
  • [Poster] Hierarchical Relational Inference
    Aleksandar Stanic, Sjoerd van Steenkiste, Jürgen Schmidhuber
  • [Poster] Sum-Product Logic: Integrating Probabilistic Circuits into DeepProbLog
    Arseny Skryagin, Karl Stelzner, Alejandro Molina, Fabrizio G Ventola, Zhongjie Yu, Kristian Kersting
  • [Poster] Neural-Symbolic Modeling for Natural Language Discourse
    Maria Leonor Pacheco, Dan Goldwasser
  • [Poster] Towards Scale-Invariant Graph-related Problem Solving by Iterative Homogeneous Graph Neural Networks
    Hao Tang, Zhiao Huang, Jiayuan Gu, Bao-Liang Lu, Hao Su
  • [Poster] Towards Practical Multi-Object Manipulation using Relational Reinforcement Learning
    Richard Li, Pulkit Agrawal
  • [Poster] Performance Evaluation of Graph Convolutional Networks with Siamese Training for Few-Shot Classification of Nodes
    Nichita Uțiu
  • [Poster] Scenes and Surroundings: Scene Graph Generation using Relation Transformer
    Rajat Koner, Poulami Sinhamahapatra, Volker Tresp
  • [Poster] RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs
    Meng Qu, Louis-Pascal A.C. Xhonneux, Yoshua Bengio, Jian Tang
  • [Poster] Sparse Graph to Sequence Learning for Vision Conditioned Long Textual Sequence Generation
    Aditya Mogadala, Marius Mosbach, Dietrich Klakow
  • [Poster] Understanding Deep Learning with Reasoning Layer
    Xinshi Chen, Yufei Zhang, Christoph Reisinger, Le Song

Award

  • Best Paper Award: Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning
    Qing Li, Siyuan Huang, Yining Hong, Yixin Chen, Ying Nian Wu, Song-Chun Zhu
  • Best Paper Award Runner-up: Barking up the right tree: an approach to search over molecule synthesis DAGs
    John A Bradshaw, Brooks Paige, Matt J Kusner, Marwin Segler, Jose Miguel Hernandez-Lobato

Sponsored by Ant Financial

Schedule (EST)

8:50 a.m. - 9:00 a.m.          Opening Remarks: Jian Tang & Le Song
9:00 a.m. - 9:30 a.m.          Keynote: Yoshua Bengio
9:30 a.m. - 9:40 a.m.          Keynote: Yoshua Bengio (Q&A)
9:40 a.m. - 10:10 a.m.        Invited Talk: Peter Battaglia
10:10 a.m. - 10:20 a.m.      Invited Talk: Peter Battaglia (Q&A)
10:20 a.m. - 10:25 a.m.      Spotlight Talk (1): Generating Programmatic Referring Expressions via Program Synthesis
10:25 a.m. - 10:30 a.m.      Spotlight Talk (2): Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning
10:30 a.m. - 10:35 a.m.      Spotlight Talk (3): Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs
10:35 a.m. - 10:40 a.m.      Spotlight Talk (4): Barking up the right tree: an approach to search over molecule synthesis DAGs
10:40 a.m. - 11:30 a.m.      Morning Poster Session
11:30 a.m. - 12:00 p.m.      Invited Talk: Zico Kolter
12:00 p.m. - 12:10 p.m.      Invited Talk: Zico Kolter (Q&A)
1:30 p.m. - 2:00 p.m.          Invited Talk: Tommi Jaakkola
2:00 p.m. - 2:10 p.m.          Invited Talk: Tommi Jaakkola (Q&A)
2:10 p.m. - 2:40 p.m.          Invited Talk: Luc De Raedt
2:40 p.m. - 2:50 p.m.          Invited Talk: Luc De Raedt (Q&A)
2:50 p.m. - 2:55 p.m.          Spotlight Talk (5): Modeling the semantics of data sources with graph neural networks
2:55 p.m. - 3:00 p.m.          Spotlight Talk (6): SpatialSim: Recognizing Spatial Configurations of Objects with Graph Neural Networks
3:00 p.m. - 3:05 p.m.          Spotlight Talk (7): Enhancing Neural Mathematical Reasoning by Abductive Combination with Symbolic Library
3:05 p.m. - 3:10 p.m.          Spotlight Talk (8): Learning Retrosynthetic Planning with Chemical Reasoning
3:10 p.m. - 4:00 p.m.          Afternoon Poster Session
4:00 p.m. - 4:30 p.m.          Invited Talk: Ferran Alet
4:30 p.m. - 4:40 p.m.          Invited Talk: Ferran Alet (Q&A)
4:40 p.m. - 5:10 p.m.          Invited Talk: Kristian Kersting
5:10 p.m. - 5:20 p.m.          Invited Talk: Kristian Kersting (Q&A)
5:20 p.m. - 5:30 p.m.          Concluding Remarks

People

Organizers

  • Jian Tang (Google Scholar)
    Email: jian.tang@hec.ca
    Research expertise: Jian is an assistant professor at HEC Montreal and a core faculty member at the Mila-Quebec AI Institute. He is a recipient of the first cohort of the Canada CIFAR AI Chairs. His main research interests are graph representation learning and graph neural networks, with applications in knowledge graphs, drug discovery, and recommender systems. Before joining HEC Montreal, he was a postdoc at the University of Michigan and Carnegie Mellon University, and a researcher at Microsoft Research Asia. He has received several best paper awards, including the ICML'14 Best Paper Award, a WWW'16 Best Paper Nomination, and the Best Paper Award at the KDD'19 Workshop on Deep Learning Practice for High-Dimensional Sparse Data. He published one of the earliest works on graph representation learning, the LINE algorithm, which has been cited close to 1,900 times since its publication in 2015.
    Previous experience: Jian has co-organized workshops at SDM'19, CIKM'19, and AAAI'20.

  • Le Song (Google Scholar)
    Email: lsong@cc.gatech.edu
    Research expertise: Le is an Associate Professor in the College of Computing and an Associate Director of the Center for Machine Learning at the Georgia Institute of Technology. He is also affiliated with the AI department at Ant Financial. His principal research directions are kernel methods, deep learning, and probabilistic graphical models. Before joining the Georgia Institute of Technology in 2011, he was a postdoc in the Department of Machine Learning at Carnegie Mellon University and then a research scientist at Google. He is a recipient of the NSF CAREER Award '14 and many best paper awards, including the NIPS'17 Materials Science Workshop Best Paper Award, the RecSys'16 Deep Learning Workshop Best Paper Award, the AISTATS'16 Best Student Paper Award, the IPDPS'15 Best Paper Award, the NIPS'13 Outstanding Paper Award, and the ICML'10 Best Paper Award. He has served as an area chair or senior program committee member for many leading machine learning and AI conferences, such as ICML, NIPS, AISTATS, AAAI, and IJCAI, and as an action editor for JMLR and IEEE TPAMI.
    Previous experience: Le has co-organized workshops at ICML 2015, the Simons Institute 2017, WWW 2015, and NIPS 2019, 2014, and 2012.

  • Jure Leskovec (Google Scholar)
    Email: jure@cs.stanford.edu
    Research expertise: Jure Leskovec is an Associate Professor of Computer Science at Stanford University, Chief Scientist at Pinterest, and an investigator at the Chan Zuckerberg Biohub. His research focuses on machine learning and data mining of large social, information, and biological networks. Computation over massive data is at the heart of his research and has applications in computer science, social sciences, marketing, and biomedicine. This research has won several awards, including a Lagrange Prize, a Microsoft Research Faculty Fellowship, an Alfred P. Sloan Fellowship, and numerous best paper and test-of-time awards. Leskovec received his bachelor's degree in computer science from the University of Ljubljana, Slovenia, received his PhD in machine learning from Carnegie Mellon University, and completed postdoctoral training at Cornell University.
    Previous experience: Jure has co-organized workshops at NeurIPS'19, ICML'19, ICLR'19, and multiple workshops at KDD and WWW.

  • Renjie Liao (Google Scholar)
    Email: lrjconan@gmail.com
    Research expertise: Renjie is a PhD student in the machine learning group at the University of Toronto, jointly supervised by Raquel Urtasun and Richard Zemel. He also works part-time as a senior research scientist in the Uber Advanced Technologies Group. His research interest is machine learning, with a recent focus on deep probabilistic models of graph-structured data and their applications. He is also interested in applying machine learning algorithms to various computer vision and self-driving problems. He has made several contributions to the field of graph neural networks, published at top-tier venues in the machine learning community (NeurIPS, ICLR, ICML) and the computer vision community (CVPR, ICCV). He is the lead developer of graph recurrent attention networks, LanczosNet, NerveNet, and Graph Partition Networks.
    Previous experience: Renjie has co-organized related workshops at NeurIPS 2019, ICML 2019, and KDD 2019.

  • Yujia Li (Google Scholar)
    Email: yujiali@google.com
    Research expertise: Yujia Li is a senior research scientist at DeepMind. He obtained his Ph.D. from the University of Toronto. He has been working on graph neural networks since 2015 and is particularly interested in scaling up GNNs, graph generative models, and GNNs' real-world applications.
    Previous experience: Yujia has co-organized related workshops at ICML 2019 and ICLR 2019.

  • Sanja Fidler (Google Scholar)
    Email: fidler@cs.toronto.edu
    Research expertise: Sanja Fidler is an Assistant Professor at the University of Toronto. Prior to coming to Toronto, in 2012/2013 she was a Research Assistant Professor at the Toyota Technological Institute at Chicago, an academic institute located on the campus of the University of Chicago. She did her postdoc with Prof. Sven Dickinson at the University of Toronto in 2011/2012. Sanja finished her PhD in 2010 at the University of Ljubljana in Slovenia, and in 2010 she visited Prof. Trevor Darrell's group at UC Berkeley and ICSI. She received her BSc degree in Applied Math from the University of Ljubljana. Her work is in the area of computer vision. Her main research interests are 2D and 3D object detection, particularly scalable multi-class detection, object segmentation and image labeling, and (3D) scene understanding. She is also interested in the interplay between language and vision: generating sentential descriptions of complex scenes, as well as using textual descriptions for better scene parsing (e.g., in the scenario of human-robot interaction).
    Previous experience: Sanja has co-organized many workshops at ICCV and ECCV. She was a program chair of 3DV'16 and will be a program chair of ICCV'21.

  • Richard Zemel (Google Scholar)
    Email: zemel@cs.toronto.edu
    Research expertise: Richard Zemel is a Professor of Computer Science at the University of Toronto, where he has been a faculty member since 2000. Prior to that, he was an Assistant Professor in Computer Science and Psychology at the University of Arizona and a Postdoctoral Fellow at the Salk Institute and at Carnegie Mellon University. He received a B.Sc. degree in History & Science from Harvard University in 1984 and a Ph.D. in Computer Science from the University of Toronto in 1993. He is also the co-founder of SmartFinance, a financial technology startup specializing in data enrichment and natural language processing. His awards include an NVIDIA Pioneers of AI Award, a Young Investigator Award from the Office of Naval Research, a Presidential Scholar Award, two NSERC Discovery Accelerators, and seven Dean's Excellence Awards at the University of Toronto. He is a Fellow of the Canadian Institute for Advanced Research and is on the Executive Board of the Neural Information Processing Systems Foundation, which runs the premier international machine learning conference. His research contributions include foundational work on systems that learn useful representations of data without any supervision; methods for learning to rank and recommend items; and machine learning systems for automatic captioning and answering questions about images. He developed the Toronto Paper Matching System, a system for matching paper submissions to reviewers that is used by many conferences, including NIPS, ICML, CVPR, ICCV, and UAI. His research is supported by grants from NSERC, CIFAR, Microsoft, Google, Samsung, DARPA, and IARPA.
    Previous experience: Richard is the Co-Founder and Director of Research at the Vector Institute for Artificial Intelligence.

  • Ruslan Salakhutdinov (Google Scholar)
    Email: rsalakhu@cs.cmu.edu
    Research expertise: Ruslan Salakhutdinov is the UPMC Professor of Computer Science in the Machine Learning Department, School of Computer Science at Carnegie Mellon University. He received his PhD in machine learning (computer science) from the University of Toronto in 2009. After spending two postdoctoral years at the Massachusetts Institute of Technology Artificial Intelligence Lab, he joined the University of Toronto as an Assistant Professor in the Department of Computer Science and Department of Statistics. In February 2016, he joined the Machine Learning Department at Carnegie Mellon University as an Associate Professor. Ruslan's primary interests lie in deep learning, machine learning, and large-scale optimization. His main research goal is to understand the computational and statistical principles required for discovering structure in large amounts of data. He is an action editor of the Journal of Machine Learning Research and has served on the senior program committees of several learning conferences, including NIPS and ICML. He is an Alfred P. Sloan Research Fellow, a Microsoft Research Faculty Fellow, a Canada Research Chair in Statistical Machine Learning, a recipient of the Early Researcher Award, the Connaught New Researcher Award, a Google Faculty Award, and NVIDIA's Pioneers of AI Award, and is a Senior Fellow of the Canadian Institute for Advanced Research.
    Previous experience: Ruslan has co-organized workshops at ICML'13, NeurIPS'11, NeurIPS'10, NeurIPS'09, ICML'09, and NeurIPS'07, and he was a program co-chair of ICML'19.

References

  • Alet, F., Jeewajee, A. K., Bauza, M., Rodriguez, A., Lozano-Perez, T., & Kaelbling, L. P. (2019). Graph element networks: adaptive, structured computation and memory. arXiv preprint arXiv:1904.09019.
  • Skryagin, A., Stelzner, K., Molina, A., & Ventola, F. (2020). SPLog: Sum-Product Logic. In International Conference on Probabilistic Programming.
  • Barceló, P., Kostylev, E. V., Monet, M., Pérez, J., Reutter, J., & Silva, J. P. (2019). The Logical Expressiveness of Graph Neural Networks. In International Conference on Learning Representations.
  • Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., ... & Gulcehre, C. (2018). Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.
  • Bengio, Y. (2017). The consciousness prior. arXiv preprint arXiv:1709.08568.
  • Dornadula, A., Narcomey, A., Krishna, R., Bernstein, M., & Li, F. F. (2019). Visual Relationships as Functions: Enabling Few-Shot Scene Graph Prediction. In Proceedings of the IEEE International Conference on Computer Vision Workshops.
  • Hu, R., Rohrbach, A., Darrell, T., & Saenko, K. (2019). Language-conditioned graph networks for relational reasoning. In Proceedings of the IEEE International Conference on Computer Vision (pp. 10294-10303).
  • Hudson, D., & Manning, C. D. (2019). Learning by abstraction: The neural state machine. In Advances in Neural Information Processing Systems (pp. 5901-5914).
  • Ji, J., Krishna, R., Fei-Fei, L., & Niebles, J. C. (2019). Action Genome: Actions as Composition of Spatio-temporal Scene Graphs. arXiv preprint arXiv:1912.06992.
  • Khalil, E., Dai, H., Zhang, Y., Dilkina, B., & Song, L. (2017). Learning combinatorial optimization algorithms over graphs. In Advances in Neural Information Processing Systems (pp. 6348-6358).
  • Kipf, T., Fetaya, E., Wang, K. C., Welling, M., & Zemel, R. (2018). Neural relational inference for interacting systems. arXiv preprint arXiv:1802.04687.
  • Manhaeve, R., Dumancic, S., Kimmig, A., Demeester, T., & De Raedt, L. (2018). Deepproblog: Neural probabilistic logic programming. In Advances in Neural Information Processing Systems (pp. 3749-3759).
  • Qu, M., & Tang, J. (2019). Probabilistic logic neural networks for reasoning. In Advances in Neural Information Processing Systems (pp. 7710-7720).
  • Qu, M., Bengio, Y., & Tang, J. (2019). GMNN: Graph Markov neural networks. arXiv preprint arXiv:1905.06214.
  • Sato, R., Yamada, M., & Kashima, H. (2019). Approximation Ratios of Graph Neural Networks for Combinatorial Problems. In Advances in Neural Information Processing Systems (pp. 4083-4092).
  • Silver, T., Allen, K. R., Lew, A. K., Kaelbling, L. P., & Tenenbaum, J. (2019). Few-Shot Bayesian Imitation Learning with Logic over Programs. arXiv preprint arXiv:1904.06317.
  • Velickovic, P., Ying, R., Padovano, M., Hadsell, R., & Blundell, C. (2019). Neural execution of graph algorithms. arXiv preprint arXiv:1910.10593.
  • Wang, P. W., Donti, P. L., Wilder, B., & Kolter, Z. (2019). SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. arXiv preprint arXiv:1905.12149.
  • Xu, K., Hu, W., Leskovec, J., & Jegelka, S. (2018). How powerful are graph neural networks? arXiv preprint arXiv:1810.00826.
  • Yoon, K., Liao, R., Xiong, Y., Zhang, L., Fetaya, E., Urtasun, R., ... & Pitkow, X. (2018). Inference in probabilistic graphical models by graph neural networks. arXiv preprint arXiv:1803.07710.
  • Zhang, Y., Chen, X., Yang, Y., Ramamurthy, A., Li, B., Qi, Y., & Song, L. (2020). Efficient Probabilistic Logic Reasoning with Graph Neural Networks. arXiv preprint arXiv:2001.11850.