About the Lab

OUR MISSION

AT THE INTERSECTION OF SUPERCOMPUTING AND DEEP LEARNING

Our research group works on various applications of large-scale distributed parallelism using supercomputers. In recent years, deep learning has been the focal point of attention in fields such as image recognition, natural language processing, and reinforcement learning. The scale of the deep neural networks used in these fields is increasing exponentially, and training them is becoming impossible without supercomputers. However, simply running existing deep learning frameworks on supercomputers will not immediately improve training speed or the accuracy of the resulting model. The issues specific to large-scale distributed training must be solved one by one before we can investigate the scaling laws of deep neural networks. Scientific computing, which has been performed on supercomputers for a long time, also requires continuous research on algorithms and implementation methods to keep pace with ever-changing computer architectures. Furthermore, since computer performance continues to improve exponentially in line with Moore's Law, the calculations performed on today's supercomputers will be possible on a local desktop computer in 10 years. In other words, solving problems on today's supercomputers is equivalent to solving the research problems of 10 years from now in advance.

COMPUTATIONAL RESOURCES

In our laboratory, we have access to some of the largest supercomputers in Japan, including TSUBAME at Tokyo Institute of Technology, Miyabi at the University of Tokyo, ABCI at AIST, and Fugaku at RIKEN. By actively using the Grand Challenge System, which grants exclusive access to the entire system of these supercomputers, we are able to use one of the largest amounts of computing resources among academic research groups. In addition, through collaborative research agreements (MOUs) [https://adac.ornl.gov] with 14 major supercomputer centers around the world, we have access to the world's largest supercomputers, such as Frontier at ORNL, Aurora at ANL, LUMI at CSC, and Alps at ETH/CSCS. Computations that would take weeks in a typical research group's computing environment can be performed in a few hours in our group.

JOINT RESEARCH PROJECTS

Our expertise in large-scale computation on supercomputers and our vast computing resources are useful in many research fields, including deep learning and scientific computing. We are currently participating in many joint research projects with domestic and international research institutions and companies, both within and outside the university. Within our university, we collaborate with the Okazaki Group on the Japanese large language model Swallow, and with the Shinoda, Inoue, and Sato Groups on computer vision. Externally, we collaborate with the Khan Group at RIKEN AIP on Bayesian deep learning, with the AI Research Center at AIST on vision-language models, and with NII on the Japanese large language model llm-jp. Outside Japan, we collaborate with top supercomputer centers such as ORNL, ANL, LLNL, CSC, and ETH. This means you can choose from a wide range of research topics, and if you want to find a new research topic on your own, you are not limited to the expertise of your supervisor alone but can receive appropriate support through our collaborators.


HOW RESEARCH IS CONDUCTED IN OUR GROUP

GUIDANCE AND SUPPORT

Our group covers a wide range of research topics at the boundary between artificial intelligence and high-performance computing, from the development of large language models to recovering precision from low-precision matrix engines. For this reason, the monthly meetings of the whole lab are used to discuss administrative matters and events such as welcome and farewell parties, training camps, and the university festival, while research meetings are held not for the whole lab but for each subgroup. In addition, I hold one-on-one meetings with every student once a week so that I can provide guidance suited to each student's pace. The aim of these one-on-one meetings is to give students a place to discuss their problems rather than to report research progress. On top of this, we hold weekly meetings for each research topic and, depending on the topic, joint meetings with other laboratories or companies involved in the joint research. We do not have core hours, and while encouraging students to be active in a variety of ways, such as attending lectures, doing internships, joining clubs, working part-time, and job hunting, I also want to support their research so that they can get their papers accepted at top conferences and journals.

CAPTURING THE ESSENCE

Due to the recent publish-or-perish culture, an increasing number of papers exaggerate the advantages and significance of their proposed methods. This is a systemic problem, since papers written this way have a higher chance of being accepted by a journal or conference. However, it creates a situation where finding truly important information amid all the noise becomes increasingly difficult. Up to the undergraduate level, we did not have to pay much attention to the signal-to-noise ratio of the information we were given, since textbooks are the product of decades of distilled knowledge. At the graduate level and beyond, information is less distilled, and not everything that is published is true or important. It therefore becomes increasingly important to filter out the noise and capture the essence and fundamental concepts of what you read. These judgements should be based on whether the material is consistent with everything you have learned so far, not on the authority of the author or the institution publishing it. Superficially reading a large number of papers will not help you in this regard.

OTHER ACTIVITIES

STUDY ABROAD

Our laboratory encourages students to study abroad. We can introduce you to long-term and short-term study-abroad opportunities while you are enrolled, and since several of our alumni have gone on to graduate school abroad, we have accumulated know-how on how to prepare for such opportunities. The following is a list of past students' study-abroad and graduate-school destinations.

  • A*STAR (Singapore) 1 student
  • Carnegie Mellon University (USA) 2 students
  • University of Montreal (Canada) 1 student
  • University of Tennessee (USA) 1 student

INTERNSHIPS

Internships are not just part of job-hunting; we actively encourage them because practical experience at companies and research institutes often increases motivation for research. All of the students who have gone on to PhD programs in our group completed multiple internships during their master's degrees, and we hope that students will choose graduate study for positive reasons after seeing what top companies have to offer. Many of the students who have belonged to our group have experienced multiple internships, and we have a network of internship hosts that our students have cultivated. The following is a list of past student internship destinations, in alphabetical order.

  • AIST
  • Axon, Inc.
  • CoeFont Co.,Ltd.
  • CyberAgent, Inc.
  • Fixstars Corporation
  • Future Corporation
  • Google
  • IBM Research Tokyo
  • Kotoba Technologies, Inc.
  • Livesense Inc.
  • LY Corporation
  • Mercari Inc.
  • Nagase Brothers Inc.
  • Nefrok
  • NextSilicon
  • NII
  • Nomura Research Institute, Ltd.
  • NVIDIA
  • Panasonic Corporation
  • Preferred Networks, Inc.
  • Quansight Inc.
  • RIKEN
  • Sakana AI
  • SB Intuitions Corporation
  • Sony Corporation
  • SORACOM Inc.
  • Team Lab, Inc.
  • Techouse, Inc.
  • Telexistence, Inc.
  • Turing, Inc.
  • Yahoo! Japan

PROGRAMMING CONTESTS

As a laboratory in the field of high-performance computing, we have many students who compete in programming contests every year, achieving good results in competitions such as AtCoder, Kaggle, and ICPC. Our research in high-performance computing directly benefits from the implementation skills cultivated in these contests.