Multimodal Art Projection
Multimodal Art Projection (M-A-P) is an open-source AI research community. Its members work on research topics across a wide spectrum, including but not limited to pre-training paradigms for foundation models, large-scale data collection and processing, and derived applications in coding, reasoning, and music generation. The community is open to researchers keen on any relevant topic. You are welcome to join us! See our released models on our Hugging Face organization page: https://huggingface.co/m-a-p.
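As a quick way to try the community's releases, here is a minimal sketch of loading one checkpoint with the `transformers` library. It assumes the `m-a-p/MERT-v1-95M` music-understanding model (see MERT below); preprocessing differs per project, so consult each model card.

```python
import numpy as np
import torch
from transformers import AutoModel, Wav2Vec2FeatureExtractor

# Assumed checkpoint from the m-a-p org; MERT-v1 models expect 24 kHz mono audio.
MODEL_ID = "m-a-p/MERT-v1-95M"

processor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)

# One second of silence as a stand-in for real 24 kHz audio.
waveform = np.zeros(24000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=24000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the frame-level features into a single clip-level embedding.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 768]) for the 95M model
```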
Projects
- OpenCodeInterpreter: Beats the GPT-4 code interpreter on HumanEval!
- MAP-Neo & Matrix: MAP-Neo is a fully open-source large language model released together with its pretraining data, a data processing pipeline (Matrix), pretraining scripts, and alignment code.
- MERT: MERT is a series of large-scale pre-trained models for acoustic music understanding.
- OmniBench: A challenging reasoning benchmark for multimodal LLMs spanning visual, audio, and textual inputs.
- Chinese Tiny LLM: A series of studies on Chinese LLMs.
- CMMMU: We release CMMMU for better evaluation of Chinese LMMs.
- MuPT: MuPT is a series of pre-trained models for symbolic music generation.
- ChatMusician: Understanding and generating music intrinsically with an LLM; a usage sketch follows this list.
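For ChatMusician, here is a minimal generation sketch. It assumes the checkpoint is published as `m-a-p/ChatMusician` and loads as a standard causal LM; the prompt and generation settings are illustrative, not the project's official usage.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name on the m-a-p Hugging Face org.
MODEL_ID = "m-a-p/ChatMusician"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# ChatMusician represents music as text (ABC notation), so generation is plain decoding.
prompt = "Develop a musical piece in ABC notation using the chord progression: C G Am F."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```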
News
- 2022-07-20: The launch of Multimodal Art Projection
Old Projects
- MARBLE: MARBLE is a benchmark that helps academia and industry study, compare, and select pre-trained models through comprehensive evaluation.
- COIG Series: Chinese Open Instruction Generalist (COIG) is a series of large-scale Chinese textual datasets for supervised fine-tuning.
- SciMMIR: SciMMIR (Scientific Multimodal Information Retrieval) is an image-text retrieval benchmark with 500K pairs extracted from scholarly papers.