**UPDATE: I have now updated the code on github to include this workaround.**
For anyone having issues importing audiolazy you can use this workaround:
1. Comment out the line that imports audiolazy (so it looks like "#from audiolazy import str2midi" and is ignored).
2. Add this block of code somewhere before the str2midi function is used:
import itertools as it

MIDI_A4 = 69

def str2midi(note_string):
    """Given a note string name (e.g. "Bb4"), returns its MIDI pitch number. (From audiolazy)"""
    data = note_string.strip().lower()
    name2delta = {"c": -9, "d": -7, "e": -5, "f": -4, "g": -2, "a": 0, "b": 2}
    accident2delta = {"b": -1, "#": 1, "x": 2}
    accidents = list(it.takewhile(lambda el: el in accident2delta, data[1:]))
    octave_delta = int(data[len(accidents) + 1:]) - 4
    return (MIDI_A4 +
            name2delta[data[0]] +                          # Name
            sum(accident2delta[ac] for ac in accidents) +  # Accident
            12 * octave_delta                              # Octave
            )
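As a quick sanity check that the drop-in function behaves like the imported one, you can run it on a few note names (the expected numbers follow the standard MIDI convention where A4 = 69):

note_midis = [str2midi(n) for n in ['C2', 'C3', 'A4', 'Bb4']]
print(note_midis)   # should print [36, 48, 69, 70]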
Thanks for posting this workaround as it solves the issue with more recent versions of Python (3.10+) not being compatible with 'audiolazy'.
Love it- thanks for the fix. It works!
I put this down for a while but I'm ready to get back into it. I have been collecting monsoon rainfall and soil temperature data where I live in the desert southwest. I've observed that adding mulch to soil helps extend the cooling effect over time, so I'm looking forward to telling that story with sound. I couldn't have done this without you, Matt - will highlight your tutorial in a future video once the dataset is complete and the sound file is generated. Thank You! 😄
Man, we need part 2. It's not even that we want it. We need it.
I'm actually blown away, taking data and transforming it into music is so mesmerizing!! I have a few ideas if you're interested; I was hoping for a longer version of "One Sky", it's truly THE BEST ambient sound I've ever heard. Thank you for your work, you've found your biggest supporter here!!!!
Awesome, thanks! The code behind One Sky used the same procedure shown in the video (plus other steps to arrange the choreography and stereo positioning). Try your own star sonification; catalogs of visible stars with colours, magnitudes, and positions are easy to find online. You can make one that goes on forever!
Hello from Greece Matt, thanks for this! It's actually one of the coolest things I've seen in a while!
This is a very well done tutorial. I plan on checking out part two!
Wow I'm happy you are back. Love the work you do here...♡
Thanks!
Just in time man! I was searching everywhere to find out how to do this and saw your video. You're the best! (P.S. A code link would be useful but thanks anyway ;))
Awesome! Thanks for letting me know the links weren't appearing, they should both be visible now. Let me know if you have any questions and check out part 2 when you're done with this one!
Can't wait for the next parts!
As an astrophotographer, I would like to know if Part 2 of your Sonification with Python makes it possible to generate sonifications for 2D images. If so, it would be great to use the Python script or Jupyter Notebook to generate music that could accompany a slide show of images of astronomical objects. I would not hesitate then to contribute $ to acquire the software. Thanks.
Hi, Part 2 is focused on 1D data (mapping continuous rather than discrete data) but I'm actually working on an image sonification series right now. I'll post an update in this thread once it's live.
Thanks for sharing. Awesome video! I am looking to apply sonification to the data I am analyzing as part of my doctoral thesis. This was a super helpful tutorial. Looking forward to part 2. Any chance you have already uploaded it?
Awesome, much appreciated! I've posted part 2 but it's only available on Gumroad: astromattrusso.gumroad.com/l/data2music-part2. You can 'purchase' part 1 there too (it's free) to get a discount for part 2. Let me know if you have any questions and please share your sonified thesis data when it's done!
Hello!!! This tutorial is amazing and super well organized and comprehensible. Thank you so much for your work, it is truly mesmerizing!!
I am a total beginner, and I have this error message on the fourth step:
NameError Traceback (most recent call last)
Cell In[1], line 3
1 myrs_per_beat = 25 #number of Myrs for each beat of music
----> 3 t_data = times_myrs/myrs_per_beat #rescale time from Myrs to beats
5 t_data
NameError: name 'times_myrs' is not defined
Could you tell me how to define it? Did you already have it defined beforehand?
Thank you very very much
Thanks! That would occur if you didn't run the cell where the first plots are made. Within that cell is "times_myrs = max(ages) - ages" which is what defines the variable times_myrs. Do you still get the error if you run all of the cells in order?
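For anyone following along, here is a minimal sketch of how the two cells fit together (the ages values below are hypothetical placeholders; in the real notebook they come from the loaded dataset):

import numpy as np

# cell where the first plots are made
ages = np.array([150.0, 120.0, 80.0, 10.0])   # hypothetical ages in Myr, loaded from the data in the notebook
times_myrs = max(ages) - ages                 # time elapsed relative to the oldest event

# step 4 cell
myrs_per_beat = 25                            # number of Myrs for each beat of music
t_data = times_myrs / myrs_per_beat           # rescale time from Myrs to beats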
It worked. Thank you!! The block of code for fixing the audiolazy import issue worked too. This tutorial is incredible. Thank you again, it is taking me into a new dimension🙏🙏🙏🙏🙏 @@SYSTEMSounds
Is there gonna be part 2?
Best wishes from near Barcelona, Spain!!
@@helenarosredon3602 Awesome, thanks for letting me know! Part 2 is available on Gumroad astromattrusso.gumroad.com/l/data2music-part2 and I'm hoping to do a new series this year on image sonification.
I find your explanations of the how and why fantastic, very easy to understand. I did some sonification of weather data a few years ago using Sonic Pi and was very pleased with the results, but I find your explanations take it to a new level of understanding. I have tried to use Jupyter to follow along with your explanation but have found it difficult/impossible to import some of the modules you use so that Jupyter/Anaconda can use them. Do you have any information on how to get around this issue? I managed by writing my own note2midi function but it took a while, and I'm still pondering the next roadblock now.
Thank you so much for this tutorial! As a heads up though, writing the MIDI file without calling writeFile can leave it nonfunctional. I opted for:
with open(filename + '.mid', "wb") as f:   # open the output file in binary write mode
    midi_file_SAR.writeFile(f)             # write the MIDI data to the open file handle
I'm stuck on the choosing musical notes section because:
from audiolazy import str2midi
str2midi('C3')
is resulting in me getting "ImportError: cannot import name 'Sequence' from 'collections' (C:\Python310\lib\collections\__init__.py)."
From reading Stackoverflow this seems to be an issue with Python 3.10 correct? Should I post the entire traceback under issues on the Github page?
Yes, please post the traceback there and I'll look into it. In the meantime, if this is the only import causing the issue you could just copy the function below right into your notebook. Or you can make your note_midis list manually by looking at a note name/number chart. Thanks!
import itertools as it

MIDI_A4 = 69

def str2midi(note_string):
    """Given a note string name (e.g. "Bb4"), returns its MIDI pitch number. (From audiolazy)"""
    data = note_string.strip().lower()
    name2delta = {"c": -9, "d": -7, "e": -5, "f": -4, "g": -2, "a": 0, "b": 2}
    accident2delta = {"b": -1, "#": 1, "x": 2}
    accidents = list(it.takewhile(lambda el: el in accident2delta, data[1:]))
    octave_delta = int(data[len(accidents) + 1:]) - 4
    return (MIDI_A4 +
            name2delta[data[0]] +                          # Name
            sum(accident2delta[ac] for ac in accidents) +  # Accident
            12 * octave_delta                              # Octave
            )
@@SYSTEMSounds Will do! Thanks for the feedback. I might install Python 3.9 and see if that helps
@@SYSTEMSounds Thanks!
I am having the same issue @@SYSTEMSounds
Did you figure it out? @@i32504
This is marvelous!! How can I purchase the program/app? Can't wait to use it if it is available.
Peter
Given a flute music file, how can we convert the music to notes and then decompress the file back to audio blocks, using literally any method (trained spectrograms, any ML algorithm...)?
Thanks. I would like a sonification of space weather and human behaviour.
I followed this step by step but my note velocity is only mapped to the min_vel value. Could this be something related to the data I'm using? My CSV data goes from 0 to 749.
Hi, just checking, does the step where you do the mapping look like this: note_velocity = round(map_value(y_data[i], min(y_data), max(y_data), vel_min, vel_max))? In the script I set the original limits of the data to 0 and 1 because I had normalized it first.
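For context, here is a minimal sketch of a linear map_value helper consistent with how it's called above (an assumption about its exact form, not a copy of the notebook's function):

def map_value(value, min_value, max_value, min_result, max_result):
    # linearly rescale value from [min_value, max_value] to [min_result, max_result]
    return min_result + (value - min_value) / (max_value - min_value) * (max_result - min_result)

# velocity mapping for data that is not already normalized to 0-1
# note_velocity = round(map_value(y_data[i], min(y_data), max(y_data), vel_min, vel_max))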
FIXED IT! I was dividing a value by 100 when testing and forgot to remove it afterwards.
I tried to do this project myself but I'm using Python 3.11 and audiolazy is not supported on it. What can I do?
Thanks for letting me know, I'll need to post an update to the code. In the meantime you can add this block of code somewhere before the str2midi function is used:
import itertools as it

MIDI_A4 = 69

def str2midi(note_string):
    """Given a note string name (e.g. "Bb4"), returns its MIDI pitch number. (From audiolazy)"""
    data = note_string.strip().lower()
    name2delta = {"c": -9, "d": -7, "e": -5, "f": -4, "g": -2, "a": 0, "b": 2}
    accident2delta = {"b": -1, "#": 1, "x": 2}
    accidents = list(it.takewhile(lambda el: el in accident2delta, data[1:]))
    octave_delta = int(data[len(accidents) + 1:]) - 4
    return (MIDI_A4 +
            name2delta[data[0]] +                          # Name
            sum(accident2delta[ac] for ac in accidents) +  # Accident
            12 * octave_delta                              # Octave
            )
Wonderful work. Is there a way to reconstruct the original data from the MIDI files?
Thank you! Let's extract some data from 3-D hydrodynamical simulations of multiple supernova explosions and convert them into pretty sounds.
Go for it! That would be a great project for this method since each explosion is a discrete event happening in time, plus there's plenty of information on each to control the pitch, volume, pan, or any other parameter or effect.
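As a rough sketch of that kind of mapping (a sketch only, using the midiutil package for the MIDI writing and entirely made-up event values for time, energy, and x-position; not code from the tutorial):

from midiutil import MIDIFile

def map_value(value, min_value, max_value, min_result, max_result):
    # linearly rescale value from [min_value, max_value] to [min_result, max_result]
    return min_result + (value - min_value) / (max_value - min_value) * (max_result - min_result)

# hypothetical supernova events: (time in Myr, explosion energy in erg, x-position from -1 to 1)
events = [(0.0, 1.2e51, -0.4), (3.5, 0.8e51, 0.7), (7.1, 2.0e51, 0.1)]
times    = [e[0] for e in events]
energies = [e[1] for e in events]

midi = MIDIFile(1)
midi.addTempo(track=0, time=0, tempo=60)

for t, energy, x in events:
    beat     = map_value(t, min(times), max(times), 0, 16)                       # event time -> beat position
    pitch    = round(map_value(energy, min(energies), max(energies), 40, 90))    # energy -> pitch
    velocity = round(map_value(energy, min(energies), max(energies), 50, 127))   # energy -> loudness
    pan      = round(map_value(x, -1, 1, 0, 127))                                # x-position -> stereo pan
    midi.addControllerEvent(0, 0, beat, 10, pan)                                 # MIDI controller 10 is pan
    midi.addNote(0, 0, pitch, beat, duration=2, volume=velocity)

with open('supernovae.mid', 'wb') as f:
    midi.writeFile(f)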
I can't seem to use audiolazy. I installed it with pip but every time I import it in the Jupyter notebook, it says "ModuleNotFoundError: No module named 'audiolazy'". I'm new to programming so it might be something I haven't learned to do yet, but do you have any tips or ideas about what this means and why it won't recognize audiolazy?
Hi, after installing anything with pip on the command line, you'll need to restart the Jupyter notebook kernel. Did you already try that? Are you able to install anything else with pip? If you're using Python 3 (check with pip --version), you may need to install with pip3 install audiolazy.
If audiolazy is the only one not working, see my reply to i32504. You can just include the one function from audiolazy you need.
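One extra tip (a general Jupyter trick, not something from the video): installing from inside the notebook with the %pip magic targets the kernel's own Python environment, which avoids the environment mismatch entirely:

%pip install audiolazy   # installs into the same environment the notebook kernel is using
# after it finishes, restart the kernel and re-run the import cell:
# from audiolazy import str2midi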
@@SYSTEMSounds Thanks for the reply! This might be a very simple question but how do you restart the kernel? 😅 I tried the function below though and it worked so I might just use that one.
Never mind about the kernel, I found how to do it. However, I came across another issue. On step 7, line four, it says that the list index is out of range. Is it because I have too many data points? Or maybe I need to define note_index differently or add more notes?
Hmm, n_notes is set by n_notes = len(note_midis) and the map_value function on line 3 should only return indexes from 0 to n_notes-1. Can you check if those are happening correctly? You may as well replace n_notes -1 with len(note_midis)-1 to minimize possible 'index out of range' errors. You can also replace range(n_impacts) with range(len(y_data)) for the same reason. I used the more explicit variable names in the video just to make things a bit clearer.
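To make that concrete, here is a sketch of the note-picking step with the more defensive bounds (my paraphrase of the step described above, not the exact notebook code):

for i in range(len(y_data)):
    # map each data value onto an index into note_midis, then clamp to the valid range
    note_index = round(map_value(y_data[i], min(y_data), max(y_data), 0, len(note_midis) - 1))
    note_index = max(0, min(note_index, len(note_midis) - 1))   # guard against any rounding overshoot
    midi_note = note_midis[note_index]                          # this pitch then gets added to the MIDI file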
@@SYSTEMSounds Got it! It was out of range because I had written that my y_data went from 0 to 1 but it actually goes all the way to 8. And again, thanks for the tips!
Excellent!!