ML is always passively gazing at us.
As a woman of color, the male gaze is always present in my life, but I wondered: what is the ML gaze like?
Is the gaze of machine learning different or familiar?
The project explores what it means to be gazed upon by machine learning in an era when these models are perpetually present.
Google’s Vision is a set of machine learning models that analyze images for objects, facial expressions, and the type of content (whether it is racy, adult, spoof, or medical).
I had Google’s Vision watch me from my laptop for a week. At random points during each day, I would ask Google's Vision models “what do you see” and record it. At the same time, I would write down what I was actually doing at that moment.
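A single "what do you see" check-in could be scripted roughly like this with the `google-cloud-vision` Python client. This is a minimal sketch, assuming the package is installed and a service-account credential is configured; the `LIKELIHOOD_NAMES` table and `summarize_face` helper are my own illustrative additions, not part of the original project.

```python
# Sketch of one "what do you see" check-in against Google's Vision models.
# Assumes the google-cloud-vision package and credentials are set up.

# Vision reports emotions and SafeSearch categories as integer
# likelihood codes (0-5); this table makes them readable.
LIKELIHOOD_NAMES = [
    "UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
    "POSSIBLE", "LIKELY", "VERY_LIKELY",
]

def summarize_face(sorrow: int, joy: int) -> dict:
    """Map Vision's likelihood codes for a face to readable labels."""
    return {
        "sorrow": LIKELIHOOD_NAMES[sorrow],
        "joy": LIKELIHOOD_NAMES[joy],
    }

def check_in(image_bytes: bytes) -> dict:
    """Ask Vision what it sees in one webcam frame (hypothetical helper)."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)

    faces = client.face_detection(image=image).face_annotations
    safe = client.safe_search_detection(image=image).safe_search_annotation

    result = {"racy": LIKELIHOOD_NAMES[safe.racy]}
    if faces:
        # Record the first detected face's expression likelihoods.
        result.update(summarize_face(
            faces[0].sorrow_likelihood, faces[0].joy_likelihood))
    return result
```

Running `check_in` on a webcam capture returns something like `{"racy": "UNLIKELY", "sorrow": "POSSIBLE", "joy": "VERY_UNLIKELY"}`, which could then be recorded alongside a note of what I was actually doing.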
The flags on the side of each page correspond to one of the following labels Google's Vision thought my image had: sorrow, fun, racy, smile. I selected these labels out of pure curiosity. The flags serve as a form of data visualization.
Behind the scenes:
The design of this piece was heavily inspired by Lauren McCarthy's piece LAUREN and lives in conversation with it. LAUREN is also a performative piece that explores life with an AI presence. It is a powerful piece that does not translate successfully in its current form of presentation (extensive and varying forms of documentation).
When designing the format of this piece, I knew I wanted to explore exactly how performance art should be documented, especially creative tech pieces, which desperately need to be accessible to those outside the creative tech world.
After extensive research, I found that I wanted this format to uphold the following constraints:
offer an intimate one-on-one experience
push away from reliance on digital mediums
not be layered with cliches about surveillance
show main points that I want to highlight for educational purposes while maintaining the "art" quality
Bringing it to life:
In order to make this piece, I began by finding blank books on Amazon to purchase.
I then used Figma to decide the layout of every page. I showed these layouts to 5 people, using their feedback to discover how to create an impactful layout.
I then sized the pages for the book and printed them out. I cut the pages to the appropriate size and glued them into the book. (Fun side fact: I had bought a spray-on glue that would reduce the rippling caused by typical glue sticks, but when I used it I broke out in a rash.) So I used regular glue and put in the pages individually.
Finally, I added the tags. The color association was essentially random, since I did not want there to be a clear notion of how the colors were selected for each label.