Dear CataRT team and community,
I have a dataset of musical instrument sounds that were annotated by humans for timbral features, and I would like to use your wonderful tool to navigate the dataset using these descriptors.
For instance, I have brightness and nasality scores for each sound, stored in separate .csv files, with the sound files themselves alongside. How can I import these scores so that the sounds are organised accordingly (e.g. x-axis = brightness, y-axis = nasality)?
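For context, my per-descriptor CSVs look roughly like the inline samples below, and I could easily pre-merge them into one table per sound if that is the format CataRT prefers. A minimal Python sketch of such a merge (the file names, column names, and values here are purely illustrative, not my actual data):

```python
# Illustrative sketch: merge two per-descriptor CSV files into one
# table keyed by sound filename. All names and values are made up.
import csv
import io

# Stand-ins for brightness.csv and nasality.csv (hypothetical layout).
brightness_csv = io.StringIO(
    "filename,brightness\nflute_01.wav,0.82\noboe_01.wav,0.55\n"
)
nasality_csv = io.StringIO(
    "filename,nasality\nflute_01.wav,0.12\noboe_01.wav,0.74\n"
)

def read_scores(fh, column):
    """Read a descriptor CSV into a {filename: score} dict."""
    return {row["filename"]: float(row[column]) for row in csv.DictReader(fh)}

brightness = read_scores(brightness_csv, "brightness")
nasality = read_scores(nasality_csv, "nasality")

# One row per sound, with both descriptor scores side by side.
merged = {
    name: (brightness[name], nasality[name])
    for name in brightness
    if name in nasality
}
print(merged["flute_01.wav"])  # (0.82, 0.12)
```

If CataRT expects a particular column layout or header convention for imported descriptors, I am happy to reshape the data accordingly.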
Here is a toy dataset of 10 samples of the kind of data I’m treating.
sounds.zip (3.5 MB)
Thank you in advance for your kind help.