<aside> 📖
https://github.com/ml5js/Intro-ML-Arts-IMA-F24/tree/main/04-face-and-hand-models
</aside>
<aside> 📖
Research and find a project (experiments, websites, art installations, games, etc) that utilises machine learning in a creative way. Consider the following:
</aside>
I found a project named FreddieMeter on the Google Creative Lab website, which sounded interesting. It is an “AI-powered singing challenge that rates how closely your singing matches the voice of Freddie Mercury”.
FreddieMeter by Google Research, Google Creative Lab, YouTube Music - Experiments with Google
What we know from the intro site is that the app uses TensorFlow.js. Digging a bit deeper into the documentation shows that under the hood it uses SPICE (Self-supervised Pitch Estimation), a system based on a CNN (Convolutional Neural Network) trained on a synthetically generated dataset combined with other manually annotated datasets. SPICE is a great model that lets you compare two samples across many different parameters, which makes it perfect for a project like this.
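Out of curiosity, here is a rough sketch of what driving SPICE from TensorFlow.js could look like. The model URL, input name, and output order follow the public TensorFlow.js pitch-detection example as I understand it and should be treated as assumptions; the `similarityScore()` helper is my own naive illustration, not FreddieMeter's actual scoring.

```js
import * as tf from '@tensorflow/tfjs';

// Hosted SPICE model (assumed URL, from the TF Hub tfjs listing).
const MODEL_URL = 'https://tfhub.dev/google/tfjs-model/spice/2/default/1';

// SPICE expects mono audio at 16 kHz as a flat array of floats.
// Returns per-frame pitch values (0..1), keeping only confident frames.
async function extractPitchTrack(audioSamples) {
  const model = await tf.loadGraphModel(MODEL_URL, { fromTFHub: true });
  const output = model.execute({ input_audio_samples: tf.tensor(audioSamples) });
  // Assumed output order: [uncertainty, pitch]; check the model's signature.
  const uncertainties = await output[0].data();
  const pitches = await output[1].data();
  return Array.from(pitches).filter((_, i) => 1 - uncertainties[i] > 0.7);
}

// Naive comparison: mean absolute difference between two pitch tracks,
// mapped onto a 0..100 "how close are you to Freddie" style score.
function similarityScore(trackA, trackB) {
  const n = Math.min(trackA.length, trackB.length);
  if (n === 0) return 0;
  let totalDiff = 0;
  for (let i = 0; i < n; i++) {
    totalDiff += Math.abs(trackA[i] - trackB[i]);
  }
  return Math.max(0, 100 * (1 - totalDiff / n));
}
```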
<aside> 📖
Pick one of the models above (FaceMesh or HandPose). Following the examples above and the ml5.js documentation, experiment with controlling elements of a p5.js sketch (color, geometry, sound, text, etc) with the output of the model. Try to create an interaction that is surprising or one that is inspired by the project you find.
</aside>
As for my own project, I wanted to create a more embodied version of a photo booth, where holding your hands up in a rectangle position makes the system take a photo:
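To give a sense of the setup, here is a minimal sketch of how the gesture check can be wired up with ml5.js HandPose in p5.js. The `frameGestureDetected()` heuristic (treating the thumb and index fingertips of both hands as the corners of a frame) is my own simplification for illustration, and the thresholds are placeholder values, not the exact check from my final version.

```js
let video;
let handPose;
let hands = [];

function preload() {
  // ml5.js HandPose model.
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Run detection continuously; results arrive in the callback.
  handPose.detectStart(video, (results) => { hands = results; });
}

// Naive "photo frame" heuristic: with two hands in view, treat the
// thumb and index fingertips as four corners and require the box
// they span to be reasonably large.
function frameGestureDetected() {
  if (hands.length < 2) return false;
  const tips = [];
  for (const hand of hands) {
    for (const kp of hand.keypoints) {
      if (kp.name === 'thumb_tip' || kp.name === 'index_finger_tip') {
        tips.push(kp);
      }
    }
  }
  if (tips.length < 4) return false;
  const xs = tips.map((p) => p.x);
  const ys = tips.map((p) => p.y);
  const w = Math.max(...xs) - Math.min(...xs);
  const h = Math.max(...ys) - Math.min(...ys);
  return w > 150 && h > 100; // thresholds are illustrative
}

function draw() {
  image(video, 0, 0, width, height);
}
```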
While building the system, I also realised that taking the photo instantly is quite inconvenient, so I added a countdown timer that gives you time to get into pose. The final code can be found below:
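Before the full listing, here is a minimal sketch of just the countdown-and-capture logic, replacing the `draw()` from the sketch above. It assumes the `frameGestureDetected()` helper defined earlier; the three-second duration is illustrative.

```js
let countdownStart = null; // millis() timestamp when the countdown began
const COUNTDOWN_MS = 3000; // three seconds to get into pose (illustrative)

function draw() {
  image(video, 0, 0, width, height);

  // Start the countdown when the frame gesture is first seen.
  if (countdownStart === null && frameGestureDetected()) {
    countdownStart = millis();
  }

  if (countdownStart !== null) {
    const remaining = COUNTDOWN_MS - (millis() - countdownStart);
    if (remaining <= 0) {
      saveCanvas('photo', 'png'); // take the photo
      countdownStart = null;      // reset for the next pose
    } else {
      // Draw the remaining seconds in the middle of the canvas.
      textSize(96);
      textAlign(CENTER, CENTER);
      fill(255);
      text(Math.ceil(remaining / 1000), width / 2, height / 2);
    }
  }
}
```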