Joint Audio and Symbolic Conditioning for Temporally Controlled Text-to-Music Generation
I'm a computer science PhD student in the SLP Lab at the Hebrew University of Jerusalem, and a research scientist intern at FAIR (Meta). My research interests are in fundamental AI models for audio; I currently study methods for real-time, controllable music generation. I am fortunate to be advised by Dr. Yossi Adi.
Previously, I was a machine learning researcher at Riffusion (music generation) and Mobileye (autonomous vehicle perception).
I really like music!
Alongside computer science, I'm also a semi-professional musician. I play piano and drums, and I have some experience in music production.
alonzi at cs.huji.ac.il