The Launch Of Multimodal Art Projection
Yizhi Li / July 2022 (67 Words, 1 Minute)
In July 2022, Ruibin Yuan, Yinghao Ma, Ge Zhang, and I came together to initiate a new research community focused on machine learning for multimodal arts, including but not limited to music. Our goal was to create a space where researchers from different backgrounds could collaborate and share their expertise to advance our understanding of AIGC.
The day started with an introduction from me, outlining the vision for the research community and the goals we hoped to achieve. We then heard from each of the researchers present, who shared their own research interests and areas of expertise. From there, we moved on to brainstorming research questions and potential projects, discussing different approaches and methodologies, and identifying areas where we could collaborate. It was exciting to see the ideas flowing and to feel the energy, even though we were only chatting online.
As we wrapped up the day, we all left feeling energized and inspired by the possibilities ahead. We knew that there would be challenges and obstacles along the way, but we were confident that with our collective expertise and dedication, we could make a real impact in the fields of acoustic music modelling, large language models, and beyond.
Looking back on that day, I am grateful for the opportunity to have initiated this research community and to work alongside such talented and passionate researchers. I am excited to see where our collective efforts will take us, and I look forward to sharing our progress and discoveries with the wider research community.