ISMIR 2025 Satellite Workshop (submission deadline: 10 August 2025)
The LLM4MA workshop invites submissions of research and position papers that present early-stage ideas, empirical findings, or visionary perspectives at the intersection of large language models and music/audio. All submissions will undergo double-blind peer review; authors must anonymize their manuscripts by removing all identifying information. Submissions should follow the Paper Template for ISMIR 2025 LLM4Music Satellite Event.zip (adapted from the ISMIR 2025 style).
We welcome contributions on topics including, but not limited to:
Accepted papers will be presented as posters, with selected contributions featured in oral spotlight sessions. A small number of submissions may be highlighted for special mentions or invited for further discussion in our panel sessions.
We offer flexible presentation options for accepted papers:
Note: We will help with poster printing for both in-person and virtual presentations. Please contact the organizers for printing arrangements.
We accept submissions that are under review elsewhere or intended for future conference submission. However, submissions must not have been formally published at other venues. We strongly encourage open science practices — including code, datasets, checkpoints, and training pipelines — to enhance transparency and reproducibility. Accepted papers will be published on the website, but the workshop is non-archival.
We are inviting reviewers for workshop submissions. If you are interested in reviewing, please register through:
Time | Activity |
---|---|
08:00–08:30 | Registration |
08:30–08:35 | Opening Talk: Welcome address and workshop overview |
08:35–09:30 | Keynote: Science of AI and AI for Science – Prof. Noah Smith (University of Washington, Seattle) |
09:30–09:50 | Invited Talk: AI for Creators: Pushing Creative Abilities to the Next Level – Dr. Yuhki Mitsufuji (Sony AI) |
09:50–10:10 | Invited Talk: TBA – Liwei Lin (New York University, Shanghai) |
10:10–10:30 | Invited Talk: YuE: Scaling Open Foundation Models for Long-Form Music Generation – Ruibin Yuan (Hong Kong University of Science and Technology) |
10:30–11:00 | Coffee Break |
11:00–13:30 | Poster Session & Lunch |
13:30–13:50 | Invited Talk: TBA |
13:50–14:10 | Invited Talk: TBA – Dr. Elio Quinton (Universal Music Group) |
14:10–14:30 | Best Poster Award |
14:30–15:00 | Coffee Break |
15:00–16:00 | Keynote: TBA – Dr. Maria Eriksson (HUMAINT, led by Dr. Emilia Gómez, Joint Research Centre, European Commission) |
16:00–17:00 | Panel Discussion – Host: Dr. Gus Xia; Panelists: Ruibin Yuan, Dr. Elio Quinton (Universal Music Group), and others |
Online Zoom Meeting Link: https://zoom.us/j/99541677917
Affiliation: Amazon Professor at the University of Washington & Senior Director of NLP Research at the Allen Institute for AI
Talk Title: Science of AI and AI for Science
Abstract:
Neural language models with billions of parameters and trained on trillions of words are powering the fastest-growing computing applications in history and generating discussion and debate around the world. Yet most scientists cannot study or improve those state-of-the-art models because the organizations deploying them keep their data and machine learning processes secret. I believe that the path to models that are usable by all, at low cost, customizable for areas of critical need like the sciences, and whose capabilities and limitations are made transparent and understandable, is radically open development, with academic and not-for-profit researchers empowered to do reproducible science. In this talk, I'll discuss some of the work our team is doing to radically open up the science of language modeling and make it possible to explore new scientific questions and democratize control of the future of this fascinating and important technology. I'll then talk a bit about what open language models might do for the music technology community, highlighting opportunities and challenges.
Bio:
Noah A. Smith is a researcher in natural language processing and machine learning, serving as the Amazon Professor at the University of Washington and Senior Director of NLP Research at the Allen Institute for AI. He co-directs the OLMo open language modeling initiative. His current work spans language, music, and AI research methodology, with a strong emphasis on mentoring—his former mentees now hold faculty and leadership roles worldwide. Smith is a Fellow of the Association for Computational Linguistics and has received numerous awards for research and innovation.
Other invited speakers will be announced soon.
The LLM4MA workshop will be held at the Jung Geun Mo Conference Hall (5F), located at the Korea Advanced Institute of Science & Technology (KAIST), Daejeon, Korea. The venue is shared with the main ISMIR 2025 conference. The hall accommodates up to 150 participants and is equipped for both oral and poster sessions.
Venue views: Jung Geun Mo Conference Hall, KAIST (5F)
For inquiries, please email yinghao.ma@qmul.ac.uk and a43992899@gmail.com