Main Conference Keynotes
We are delighted to announce that the esteemed speakers listed below have graciously accepted our invitation to deliver keynote speeches at the main conference of IJCNLP-AACL 2025:
Saturday, December 20th, 09:00 - 10:00

Sadao Kurohashi, National Institute of Informatics, In-Person Keynote
Title: LLM-jp: Building a Sovereign LLM Ecosystem through Open and Team Science
Abstract
In recent years, the research and development of large language models (LLMs) has been dominated by a small number of private organizations, largely conducted in a closed manner, and predominantly English-centric. Although some leading companies, such as Meta and DeepSeek, have adopted more open-source strategies than their competitors, their training data and development processes remain closed to the wider scientific community and society as a whole. This closedness, and at best limited transparency, prevents the great majority of us from investigating key scientific and societal challenges associated with this emerging technology, such as adapting and evaluating it for our own languages and addressing known issues such as biases and hallucinations. To address this, the LLM-jp project was launched in May 2023 under the leadership of the National Institute of Informatics as a sovereign, open, and collaborative initiative. Our aim is to develop Japanese-competent LLMs, elucidate the mechanisms behind their capabilities, openly release all models, datasets, and tools, and even share our discussions and failures with the public. Anyone who shares this vision is invited to participate, making LLM-jp an example of truly open and team science, with more than 2,400 members to date.
The project is structured as a Big Science effort, bringing together multiple working groups covering corpus construction, model training, tuning and evaluation, safety, multimodal processing, and real-world applications. More than a dozen leading Japanese researchers from both universities and research institutes collaborate closely, supported by large-scale, national computational infrastructure, in alignment with emerging global AI safety initiatives, and with governmental support from the Ministry of Education, Culture, Sports, Science, and Technology.
In this keynote, I will present the motivations, organization, and progress of LLM-jp, and discuss how building a sovereign LLM ecosystem can not only advance Japan’s academic and industrial prowess, but also contribute to a global movement toward open and transparent AI research.
Speaker’s Bio
Sadao Kurohashi received a PhD in Electrical Engineering from Kyoto University in 1994. He is currently the Director-General of the National Institute of Informatics, Japan, and a Specially Appointed Professor at the Graduate School of Informatics at Kyoto University. His research interests include natural language processing, knowledge infrastructure, and open science. He received the 10th and 20th anniversary best paper awards from the Journal of Natural Language Processing in 2004 and 2014, respectively, the 2009 IBM Faculty Award, the 2010 NTT DOCOMO Mobile Science Award, and the 2017 Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology.
Sunday, December 21st, 09:00 - 10:00

Diyi Yang, Stanford University, Remote Keynote
Title: Human–AI Collaboration in the Age of Large Language Models
Abstract
Recent advances in large language models (LLMs) have revolutionized how humans and AI systems work, learn, and interact, creating new opportunities for collaboration while also raising new challenges. In this talk, we explore the evolving landscape of human–AI collaboration from three perspectives. The first part focuses on teaming, rethinking human–AI collaboration through a large-scale audit of the future of work and highlighting mismatches between what humans want and current AI capabilities. We then discuss how general user models built from computer use can support robust memory modeling and proactive AI assistance during collaboration. The last part examines how to evaluate human–AI collaboration, moving away from exams toward studying how people work with LLMs on diverse tasks. Overall, this talk demonstrates how to develop AI systems that are not just tools, but meaningful collaborators working alongside us, helping us grow, and adapting to who we are.
Speaker’s Bio
Diyi Yang is an assistant professor in the Computer Science Department at Stanford University, also affiliated with the Stanford NLP Group, the Stanford HCI Group, and the Stanford Institute for Human-Centered AI. Diyi received her PhD from Carnegie Mellon University and her bachelor’s degree from Shanghai Jiao Tong University. Her research focuses on socially aware natural language processing, large language models, and human–AI interaction. She is a recipient of the IEEE “AI’s 10 to Watch” recognition (2020), a Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), an ONR Young Investigator Award (2023), and a Sloan Research Fellowship (2024). Her work has received multiple paper awards at top NLP and HCI conferences.
Monday, December 22nd, 09:00 - 10:00

Chenghua Lin, University of Manchester, In-Person Keynote
Title: Beyond Correctness: Evaluating the Social Intelligence of LLMs and Re-Evaluating their Role as Evaluators
Abstract
Like human intelligence, the intelligence of large language models is highly complex in nature, and evaluating it becomes especially challenging when models move beyond well-defined, STEM-style tasks into socially and culturally rich domains. The first part of this talk focuses on assessing social intelligence in LLMs, exploring their ability to handle phenomena where ambiguity, cultural difference, and subjectivity make “correctness” difficult to define, and where capabilities beyond text are required, such as omni-modal sensory understanding. The talk then examines the role of LLMs as evaluators, considering their reliability, biases, and prompt sensitivity, and concludes with reflections on building more robust and socially grounded evaluation frameworks.
Speaker’s Bio
Chenghua Lin is a Full Professor and Chair in Natural Language Processing in the Department of Computer Science at the University of Manchester. His research focuses on natural language generation, multimodal LLMs, and evaluation methods. He currently serves as Chair of the ACL SIGGEN Board, a member of the IEEE Speech and Language Processing Technical Committee, and Associate Editor for Computer Speech & Language. He has published over 160 papers in leading conferences and journals and has received several awards for his research and academic leadership, including the CIKM Test-of-Time Award, the INLG Best Paper Runner-up Award, and an Honourable Mention for the Scottish Informatics and Computer Science Alliance (SICSA) Supervisor of the Year Award. He has also held numerous program and chairing roles for *ACL conferences, including Tutorial Chair for EACL’26, Documentation Chair for ACL’25, Publication Chair for ACL’23, Workshop Chair for AACL-IJCNLP’22, and Program Chair for INLG’19.