Takashi Matsubara

Professor at
Faculty of Information Science and Technology
Hokkaido University

Inspired by the principles of differential geometry, analytical mechanics, and dynamical systems theory, our team is pioneering artificial intelligence innovations. We carefully design deep learning models to preserve the mathematical structures observed in target systems. This approach enables our models to adeptly capture physical laws for computer simulations, geometric symmetries in computer vision, causal relationships in medical data analysis, and more. See projects for more information.
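As a minimal illustration of this structure-preserving philosophy (a sketch only, not code from any specific paper of ours; it assumes PyTorch), one can parameterize a scalar energy H(q, p) with a small network and derive the vector field from Hamilton's equations, so the learned dynamics are Hamiltonian by construction:

```python
import torch
import torch.nn as nn

class HamiltonianNN(nn.Module):
    """Parameterizes a scalar energy H(q, p) and derives the vector field
    from Hamilton's equations, so the learned dynamics are Hamiltonian
    by construction."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        self.energy = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # x concatenates (q, p); dq/dt = dH/dp, dp/dt = -dH/dq.
        if not x.requires_grad:
            x = x.requires_grad_(True)
        H = self.energy(x).sum()
        grad = torch.autograd.grad(H, x, create_graph=True)[0]
        dHdq, dHdp = grad[..., :self.dim], grad[..., self.dim:]
        return torch.cat([dHdp, -dHdq], dim=-1)

# Hypothetical usage: time derivatives for a batch of 1-D oscillator states.
model = HamiltonianNN(dim=1)
x = torch.randn(8, 2)      # each row is a state (q, p)
dxdt = model(x)            # the vector field implied by the learned energy
```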

Keywords: geometric deep learning, AI for Science, scientific machine learning, trustworthy AI, generative AI

We are hiring! Prospective postdoctoral researchers and PhD candidates in related disciplines, please reach out to us.

News

2024
Dec. 2024
Our paper "Number Theoretic Accelerated Learning of Physics-Informed Neural Networks," co-authored with Prof. Yaguchi, Kobe University, has been accepted at The Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI2025).
Oct. 2024
Graduate students Razmik Arman Khosrovian and Yosuke Nishimoto have had their papers accepted for presentation at the NeurIPS 2024 Workshops on Machine Learning and the Physical Sciences and on Compositional Learning, respectively.
Jun. 2024
I will give an invited talk at the CAI 2024 Workshop on Scientific Machine Learning and Its Industrial Applications (SMLIA2024) in Singapore.
Apr. 2024
I have been appointed as a professor at the Faculty of Information Science and Technology, Hokkaido University.
Mar. 2024
I will give a tutorial talk at the International Conference on Scientific Computing and Machine Learning (SCML2024), held in Kyoto.
Feb. 2024
Our paper "Predicated Diffusion: Predicate Logic-Based Attention Guidance for Text-to-Image Diffusion Models," co-authored with Mr. Sueyoshi, has been accepted at IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR2024). (Update: Selected as a highlight.)
2023
Jul. 2023
Our paper "Good Lattice Training: Physics-Informed Neural Networks Accelerated by Number Theory," co-authored with Prof. Yaguchi, was posted to arXiv.
Jun. 2023
One paper was accepted to the ICML 2023 Workshop on SynS & ML, and two papers were accepted to the ICML 2023 Workshop on Frontiers4LCD.
Feb. 2023
Our paper "Deep Curvilinear Editing: Commutative and Nonlinear Image Manipulation for Pretrained Deep Generative Model," co-authored with Mr. Aoshima, has been accepted at IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR2023).
Feb. 2023
Our paper "The Symplectic Adjoint Method: Memory-Efficient Backpropagation of Neural-Network-Based Differential Equations," co-authored with Prof. Miyatake and Prof. Yaguchi, has been accepted at IEEE Transactions on Neural Networks and Learning Systems. This is an extended version of the paper accepted at NeurIPS 2021, demonstrating that the proposed method is generally effective in a wider range of situations.
Jan. 2023
Our paper "FINDE: Neural Differential Equations for Finding and Preserving Invariant Quantities," co-authored with Prof. Yaguchi, has been accepted at International Conference on Learning Representations (ICLR2023). The proposed deep learning method can automatically discover and preserve conserved quantities in dynamical systems. The method is inspired by projection and discrete gradient methods.
2022
Dec. 2022
"A Two-View EEG Representation for Brain Cognition by Composite Temporal-Spatial Contrastive Learning," authored by Dr. Chen, his collaborators, and me, has been accepted at SIAM International Conference on Data Mining (SDM23).
Oct. 2022
Our paper "Nonlinear and Commutative Editing in Pretrained GAN Latent Space," co-authored with Mr. Aoshima, has been accepted at NeurIPS 2022 Workshop on NeurReps.
Jun. 2022
"Automated Cancer Subtyping via Vector Quantization Mutual Information Maximization," authored by Dr. Chen, his collaborators, and me, has been accepted at European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD2022).
Jun. 2022
Our paper "Topology-Aware Flow-Based Point Cloud Generation," co-authored with Mr. Kimura and Prof. Uehara, has been accepted by IEEE Transactions on Circuits and Systems for Video Technology. This is an extended version of the paper accepted at ACMMM2021.
Mar. 2022
Our paper "Imbalance-Aware Learning for Deep Physics Modeling," co-authored with Mr. Yoshida and Prof. Yaguchi, has been accepted at ICLR2022 Workshop on AI for Earth and Space Science (ai4earth).
2021
Dec. 2021
Our paper "KAM Theory Meets Statistical Learning Theory: Hamiltonian Neural Networks with Non-Zero Training Loss," co-authored with Ms. Chen and Prof. Yaguchi, has been accepted at AAAI Conference on Artificial Intelligence (AAAI) as an oral presentation.
Sep. 2021
Our paper "Symplectic Adjoint Method for Exact Gradient of Neural ODE with Minimal Memory," co-authored with Prof. Miyatake and Prof. Yaguchi, has been accepted at Neural Information Processing Systems (NeurIPS). The proposed adjoint method based on a symplectic integrator obtains a gradient of an ODE with much less memory than the naive backpropagation algorithm and checkpointing schemes and faster than the ordinary adjoint method in practice.
Sep. 2021
Our paper "Neural Symplectic Form: Learning Hamiltonian Equations on General Coordinate Systems," co-authored with Ms. Chen and Prof. Yaguchi, has been accepted at Neural Information Processing Systems (NeurIPS) as a spotlight.
Jul. 2021
Our paper "ChartPointFlow for Topology-Aware 3D Point Cloud Generation," co-authored with Mr. Kimura and Prof. Uehara, has been accepted at ACM International Conference on Multimedia (ACMMM) as an oral presentation. Our proposed model assigns a conditioned map to each continuous subset of a point cloud, similarly to a chart of a manifold, thereby nicely generating point clouds with different topologies.
Apr. 2021
Our paper "Deep Discrete-Time Lagrangian Mechanics," co-authored with Mr. Aoshima and Prof. Yaguchi, is accepted at ICLR2021 Workshop on Deep Learning for Simulation (SimDL).
2020
Oct. 2020
Our paper "The Error Analysis of Numerical Integrators for Deep Neural Network Modeling of Differential Equations," co-authored with Mr. Terakawa and Prof. Yaguchi, has been accepted at NeurIPS2020 Workshop on Machine Learning and the Physical Sciences (ML4PS).
Sep. 2020
Our paper "Deep Energy-based Modeling of Discrete-Time Physics," co-authored with Dr. Ishikawa and Prof. Yaguchi, has been accepted at Neural Information Processing Systems (NeurIPS) as an oral presentation. For modeling physical dynamical systems by neural networks, this study proposes the automatic discrete differentiation algorithm, which ensures the energy conservation and dissipation laws in discrete time.
Sep. 2020
Our paper "Deep Generative Model using Unregularized Score for Anomaly Detection with Heterogeneous Complexity" has been accepted by IEEE Transactions on Cybernetics. This study proposes the unregularized score, which detects anomalous samples robustly to their intrinsic uncertainty.
Sep. 2020
Our paper "Exploring Uncertainty Measures for Image-Caption Embedding-and-Retrieval Task," co-authored with Prof. Cai at Monash University, has been accepted at ACM Transactions on Multimedia Computing, Communications, and Applications.
Jul. 2020
Our paper "Deep Generative Model of Individual Variability in fMRI Images of Psychiatric Patients" has been accepted by IEEE Transactions on Biomedical Engineering. This study proposes a deep generative model of fMRI images with psychiatric disorders and individual variability as causes, thereby giving a more accurate diagnosis of the disorders.