Introduction
Graph Representation Learning is an exciting and rapidly evolving field within machine learning and artificial intelligence, focusing on the development of algorithms that can learn meaningful representations of graph-structured data. Graphs are ubiquitous in the real world, representing complex systems in domains such as social networks, biological networks, transportation networks, and communication networks.
Key concepts of graph representation learning:
- Graph Structures: Graphs consist of nodes (or vertices) and edges connecting them. In graph representation learning, the goal is to encode these nodes and edges into low-dimensional vectors that capture the structure and features of the graph.
- Learning Techniques: This field leverages various machine learning techniques, particularly from deep learning. Graph Neural Networks (GNNs) are a pivotal advancement here, adapting neural network models to operate directly on graphs (a minimal code sketch follows this list).
- Challenges: Unlike traditional Euclidean data (like images and text), graph data is non-Euclidean and often lacks a fixed structure, posing unique challenges in learning meaningful representations.
- Applications: The learned representations are valuable for numerous tasks, including node classification, link prediction, graph classification, and clustering. This has implications in recommender systems, drug discovery, social network analysis, and many other areas.
- Innovations: The field is constantly evolving with new models and techniques that improve upon how these representations are learned, focusing on aspects like scalability, interpretability, and handling dynamic graphs.
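As a concrete anchor for the concepts above, here is a minimal sketch (NumPy only, not tied to any particular GNN library) of one message-passing step: each node's new low-dimensional embedding is a transformed, degree-normalized average of its own and its neighbors' features. The toy graph, features, and weights are placeholders for illustration.

```python
import numpy as np

# Toy undirected graph on 4 nodes with edges 0-1, 1-2, 2-3 (adjacency matrix)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = np.random.rand(4, 8)        # initial node features: 4 nodes, 8 dimensions
W = np.random.rand(8, 2) * 0.1  # weights projecting features to 2-D embeddings

# Add self-loops and apply symmetric degree normalization
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One GCN-style layer: aggregate neighbor features, transform, apply ReLU
H = np.maximum(A_norm @ X @ W, 0)
print(H)  # one 2-D embedding per node, reflecting both features and structure
```

In a trained model, W would be learned by backpropagation against a task-specific loss (e.g., node classification), and several such layers would be stacked so embeddings capture multi-hop neighborhoods.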
Graph representation learning is at the forefront of tackling complex, real-world problems by extracting insights from the intricate web of relationships and interactions represented by graphs. The ongoing research and development in this area continue to expand its potential and applications across diverse scientific and industrial domains.
Graph Representation Learning in bioinformatics:
Graph representation learning in bioinformatics represents a significant intersection of computational biology, bioinformatics, and machine learning. In bioinformatics, graph structures are inherently present in various forms, such as protein-protein interaction networks, gene regulatory networks, metabolic pathways, and more. The application of graph representation learning in this field aims to extract meaningful insights from these complex biological networks, leading to advancements in understanding biological processes and disease mechanisms.
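To make this concrete, the sketch below scores hypothetical protein-protein interactions with a simple dot-product decoder over node embeddings. This is a common link-prediction recipe shown for illustration only, not the specific method used in the papers listed below; the proteins and embedding values are made up.

```python
import numpy as np

# Hypothetical learned 2-D embeddings for four proteins p0..p3
# (e.g., the output of a GNN layer such as the one sketched above).
H = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9],
              [0.2, 0.8]])

def interaction_score(i, j):
    """Sigmoid of the embedding dot product: higher means a more likely interaction."""
    return 1.0 / (1.0 + np.exp(-H[i] @ H[j]))

print(interaction_score(0, 1))  # similar embeddings -> high score
print(interaction_score(0, 2))  # dissimilar embeddings -> low score
```

In practice, candidate interactions with high scores can be prioritized for experimental validation, which is how link prediction supports tasks such as interaction discovery and drug-target inference.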
Our lab's work:
- Wang, J., Ma, A., Chang, Y., Gong, J., Jiang, Y., Qi, R., … & Xu, D. (2021). scGNN is a novel graph neural network framework for single-cell RNA-Seq analyses. Nature Communications, 12(1), 1882.
- Gu, H., Cheng, H., Ma, A., Li, Y., Wang, J., Xu, D., & Ma, Q. (2022). scGNN 2.0: a graph neural network tool for imputation and clustering of single-cell RNA-Seq data. Bioinformatics, 38(23), 5322-5325.
- Ma, A., Wang, X., Li, J., Wang, C., Xiao, T., Liu, Y., … & Ma, Q. (2023). Single-cell biological network inference using a heterogeneous graph transformer. Nature Communications, 14(1), 964.
- Chang, Y., He, F., Wang, J., Chen, S., Li, J., Liu, J., … & Ma, Q. (2022). Define and visualize pathological architectures of human tissues from spatially resolved transcriptomics using deep learning. Computational and Structural Biotechnology Journal, 20, 4600-4617.
- Wang, X., Duan, M., Li, J., Ma, A., Xu, D., Li, Z., … & Ma, Q. (2023). MarsGT: Multi-omics analysis for rare population inference using single-cell graph transformer. bioRxiv, 2023-08.
Agenda
Date | Time | Topic | Presenter | Location | Details |
---|---|---|---|---|---|
2023-08-18 | 2h | Background, Representation, Learning and Application – Homogeneous graph (Part 1) | Hao Cheng, Dr. Ma | Lincoln 350 Conf Rm | |
2023-08-21 | 1h | Background, Representation, Learning and Application – Homogeneous graph (Part 2) | Hao Cheng, Yi Jiang | Lincoln 350 Conf Rm | |
2023-09-05 | 1h | Background, Representation, Learning and Application – Homogeneous graph (Part 3) | Hao Cheng, Yi Jiang | PaRC 3001 | |
2023-09-25 | 2h | Framework – Machine Learning Foundation and Self-supervised Learning | Yi Jiang, Hao Cheng, Anjun Ma | PaRC 3001 | Graph representation learning callback (extra meeting, application), 20 min; Machine learning foundation, supervised/unsupervised learning, loss function, 30 min; Self-supervised learning, 1 h 15 min |
2023-10-02 | 1.5h | Representation, Learning and Application – Heterogeneous graph | Hao Cheng, Xiaoying Wang | PaRC 3001 | Representation, 10 min; Learning, 1 h; Application, 20 min |
2023-10-16 | 1.5h | Learning and Application – Graph Transformer | Yi Jiang | PaRC 3001 | Transformer (video), 30 min; Homogeneous graph transformer, 30 min; Heterogeneous graph transformer, 30 min |
2023-10-30 | 1.5h | Framework – Graph generative model | Hao Cheng, Yi Jiang | PaRC 3001 | Callback and background, 30 min; Generative model, 30 min; Application, 30 min |
2023-11-13 | 2h | Representation, Learning and Application – Hypergraph and Line graph | Yi Jiang, Hao Cheng | PaRC 3001 | Callback, 10 min; Hypergraph and Line graph (video), 1 h 30 min; Application, 20 min |
2023-11-27 | 2h | Framework – Transfer Learning and Foundation models | Hao Cheng | PaRC 3001 | Callback (transformer), 10 min; Transfer learning models, 40 min; Foundation models, 40 min |
2023-12-13 | 2h | Representation, Learning and Application – Dynamic graph and De Bruijn Graph | Yi Jiang, Hao Cheng | PaRC 3001 | |
2023-12-27 | 1.5h | Framework – Reinforcement learning | Hao Cheng | PaRC 3001 | |
2024-01-13 | 2h | Causal – Causal discovery and Causal inference | Yi Jiang | PaRC 3001 | |
Related materials can be found in the Dropbox folder: BMBL Shared\BMBL-Resources\Graph representation learning.
Talks, courses, workshops and blogs
- Stanford CS224W: Machine Learning with Graphs, [link]
- ICLR 2021 Keynote – “Geometric Deep Learning: The Erlangen Programme of ML” – M Bronstein, [link]
- Graph Representation Learning: William L. Hamilton – 2021 McGill AI Learnathon, [link]
- Graph Neural Networks with Learnable Structural and Positional Representation, [link]
- Theoretical Foundations of Graph Neural Networks [pdf]
- Graph Representation Learning for Algorithmic Reasoning [pdf]
- A Gentle Introduction to Graph Neural Networks [link]
Surveys
- Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., … & Pascanu, R. (2018). Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.
- Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., & Yu, P. S. (2020). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4-24.
- Hamilton, W. L., Ying, R., & Leskovec, J. (2017). Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584.
- Xie, Y., Xu, Z., Zhang, J., Wang, Z., & Ji, S. (2022). Self-supervised learning of graph neural networks: A unified review. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Yi, H. C., You, Z. H., Huang, D. S., & Kwoh, C. K. (2022). Graph representation learning in bioinformatics: trends, methods and applications. Briefings in Bioinformatics, 23(1), bbab340.
- Li, M. M., Huang, K., & Zitnik, M. (2022). Graph representation learning in biomedicine and healthcare. Nature Biomedical Engineering, 1-17.
- Chen, F., Wang, Y. C., Wang, B., & Kuo, C. C. J. (2020). Graph representation learning: a survey. APSIPA Transactions on Signal and Information Processing, 9, e15.
Researchers
- Jure Leskovec, Associate Professor of Computer Science at Stanford University, Google Scholar
- Thomas Kipf, Senior Research Scientist, Google Brain, Google Scholar
- William L. Hamilton, Assistant Professor of Computer Science, McGill University and Mila, Google Scholar
- Petar Veličković, Staff Research Scientist, DeepMind, Google Scholar
- Michael Bronstein, DeepMind Professor of AI, University of Oxford, Google Scholar
Graph representation learning labs
- Shihua Zhang’s Lab
- James Zou’s lab
- Graph Deep Learning Lab
- Data Science and Engineering Lab
- Ming Li
- Yang-Yu Liu’s Lab
- Zitnik Lab
Textbooks
- Hamilton, W. L. (2020). Graph representation learning. Morgan & Claypool Publishers.
- Ma, Y., & Tang, J. (2021). Deep learning on graphs. Cambridge University Press.
- Wu, L., Cui, P., Pei, J., Zhao, L., & Guo, X. (2022, August). Graph neural networks: foundation, frontiers and applications. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 4840-4841).