<aside> 📖

https://github.com/ml5js/Intro-ML-Arts-IMA-F24/tree/main/03-body-models

</aside>

Reading Reflections (Task 01 - 02 - 04):


<aside> 📖

Read Mixing movement and machine by Maya Man.

Read Humans of AI by Philipp Schmitt.

Read Open Sourcing the Origin Stories: The ml5.js Model and Data Provenance Project by Ellen Nickles and reflect on the following questions:

</aside>

Notes from the Sources:

Final Thoughts:

The article “Mixing movement and machine” made me look at these experiments in a new light. At the heart of design and art there always has to be emotion, and any medium we use should only be there to convey that emotion. As technologists we tend to forget this, but the piece was a nice reminder to “make those dots make somebody cry”.

As far as “Humans of AI” was concerned, I found the “Declassifier” to be quite an interesting idea. It also raised some very difficult questions. As AI improves at such a rapid pace, policymakers are having a hard time keeping up with it. On the one hand, we all learned what an apple is by being shown many examples (some of them paid or copyrighted content in exhibitions, movies, and pieces of art), so when an AI learns the same way, I would argue it shouldn’t be copyright infringement either. On the other hand, I also felt quite distrustful when Adobe announced that they would use our designs and artworks to train their models.

On an unrelated note, I found the idea that “Every rectangle with the label ‘dog’ contains a little Gryfe.” quite heartwarming. These photos could live on and help shape the future.

While reading “Open Sourcing the Origin Stories”, I started to think about what makes AI feel unsafe or discriminatory in people’s eyes. I came to the conclusion that we sometimes assume AI is truly a form of intelligence, when all it really is is mathematical probability. As long as we feed it biased data, it will return that same bias to us. And just as we humans, after so many years, still can’t arrive at a single unbiased view of the world, I don’t see why we should hold AI to such a standard. I believe it is a great tool when used within its “safe zone”, by which I also mean that using it to determine whether someone has committed a crime based on their facial features is probably not within that safe zone.

This also leads me to my next question: can designers be held accountable for the mistakes an AI system makes? Can a teacher be held accountable for a mistake their student makes? Can a parent? I believe these are issues we haven’t fully figured out as a society, which makes it difficult to reach a consensus. The real problem is that these systems are improving so fast that we might not reach that consensus before it is “too late”.

Additionally, I found copyright to be quite an interesting question as well. Since some forms of copyright are location-based, how should we deal with that? Is it fine if the model learns in Italy? Or could it only be used in Italy if its copyrighted training content originated there?