
Research in our group lies at the intersection of high performance computing (HPC) and machine learning (ML). We have access to some of the largest supercomputers in the world, and use this rich computational resource to tackle state-of-the-art distributed deep learning problems. Research spans the application level (scaling laws of neural networks), the algorithm level (fast linear algebra methods), and low-level implementation (CUDA/PTX, C++).

News

2025.01.23 Two papers by Qi Sun (1st year PhD) in collaboration with Sakana AI were accepted to ICLR2025
2025.01.23 Two papers by Satoki Ishikawa (2nd year Master) were accepted to ICLR2025
2025.01.23 Two papers by Taishi Nakamura (1st year Master) were accepted to ICLR2025
2024.10.09 Okazaki Lab, Yokota Lab, and AIST have released large language models Llama3.1-Swallow-8B and Llama3.1-Swallow-70B.
2024.07.01 Okazaki Lab, Yokota Lab, and AIST have released large language models Llama3-Swallow-8B and Llama3-Swallow-70B.
2024.03.15 The joint work by Kazuki Fujii (4th year Bachelor) and Taishi Nakamura (3rd year Bachelor) with Okazaki Lab won the Outstanding Paper Award at NLP2024.
2024.03.11 Okazaki Lab, Yokota Lab, and AIST have released large language models Swallow-MS and Swallow-MX.
2023.12.19 Okazaki Lab, Yokota Lab, and AIST have released a large language model “Swallow” with the highest Japanese capability among open models.
2023.08.11 Prof. Yokota appeared in the TV series "Gaia no Yoake"
2023.04.25 We will be hosting a special seminar by members of Stability AI Japan - David Ha, Jerry Chi, and Kamil Rocki.
2023.03.04 The work by Satoki Ishikawa (4th year Bachelor) won the Student Encouragement Award at the 85th National Convention of IPSJ
2023.03.02 The work by Hiroyuki Ootomo (3rd year PhD) won the best paper award at HPC Asia 2023
2023.02.28 The work by Sora Takashima (2nd year Master) in collaboration with AIST was accepted to CVPR2023
2023.02.01 Kazuki Osawa (graduated) has joined DeepMind
» More Info.

About Us

Tokyo Tech, GSIC, Advanced Computing Research Division, Advanced Applications of High-Performance Computing Group

» More Info.

Hierarchical Low-Rank Approximation

Dense matrices appear in many computational applications, such as boundary integral methods for solving homogeneous partial differential equations...

» More Info.
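The key observation behind hierarchical low-rank methods is that off-diagonal blocks of such dense kernel matrices, corresponding to well-separated point clusters, have rapidly decaying singular values and can be compressed. A minimal sketch of this property (the point sets, kernel, and tolerance below are illustrative assumptions, not the group's actual code):

```python
import numpy as np

# Two well-separated 1-D point clusters and a 1/r kernel block between
# them. This block is full-size but numerically low-rank, which is the
# property hierarchical low-rank (H-matrix) methods exploit.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 64)   # source points in [0, 1]
y = rng.uniform(2.0, 3.0, 64)   # target points in [2, 3] (well separated)
K = 1.0 / np.abs(x[:, None] - y[None, :])

# Truncated SVD: keep only singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(K)
k = int(np.sum(s > s[0] * 1e-10))        # numerical rank at tol 1e-10
K_k = (U[:, :k] * s[:k]) @ Vt[:k, :]     # rank-k approximation

rel_err = np.linalg.norm(K - K_k) / np.linalg.norm(K)
# Storage drops from O(n^2) to O(nk), since k is much smaller than n,
# while the relative error stays at the chosen tolerance.
```

In an actual H-matrix, this compression is applied recursively over a cluster tree of the matrix, and faster factorizations (e.g. adaptive cross approximation) replace the full SVD.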

Application to Deep Learning

Deep learning does not require very high precision, and this fact is exploited by the recent low-precision hardware from NVIDIA and Google...

» More Info.
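The tolerance of deep learning to reduced precision can be sketched with a simple experiment: a half-precision matrix multiply loses several digits relative to double precision, yet the error remains small enough for training. A minimal illustration (the matrix sizes and random data are assumptions for demonstration only):

```python
import numpy as np

# Compare a matrix multiply in float16 (the kind of low precision
# targeted by tensor-core style hardware) against a float64 reference.
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))

C_ref = A @ B                                                   # float64
C_fp16 = (A.astype(np.float16) @ B.astype(np.float16)).astype(np.float64)

rel_err = np.linalg.norm(C_ref - C_fp16) / np.linalg.norm(C_ref)
# rel_err is far above double-precision roundoff, but stochastic
# gradient descent typically tolerates this level of noise.
```

In practice, mixed-precision training combines such low-precision multiplies with higher-precision accumulation and loss scaling to avoid underflow.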