There’s always a question of data privacy and consent when people’s faces are used to train AI, so I’m curious whether that was taken into account for Berenson, the robotic art critic. If a museum deploys a robot that roams around analyzing visitors’ faces, is there (and should there be) some sort of consent form that visitors must sign upon entry? It’s also interesting that, as a humanoid robot, Pepper gets a lot of visitor interaction. I presume that, to some extent, Pepper receives more love than the average museum staff member or docent, since people don’t tend to be eager to take selfies with museum staff. Because Pepper isn’t human, there’s less of a barrier to interaction for people who aren’t especially outgoing. At the same time, the kind of ‘love’ and interaction people can have with Pepper is limited, as Pepper can’t hold the extensive discussions about artwork that people can. Although natural language processing is headed in that direction (GPT-3’s conversational ability is very impressive), there will always be a certain disconnect and lack of humanity in a conversation with a chatbot versus a real human.

I like the idea that, when used as a museum tool, the AI itself becomes a kind of museum exhibit. I once did a personal project where I trained a language model on scraped pairs of artworks and their descriptions to build what’s basically an AI art critic: given an input artwork, the model generates text ‘describing’ it (a rough sketch of the setup follows this paragraph). The results were nowhere near art-critic quality, but they were kind of funny (e.g., “this cropping is religious. Metaphysical secrets may guide the vivid color”). Art Selfie seems like a great tool for people to see themselves represented in museum pieces. Museums still connote status and exclusivity, since only a small subset of all artwork can ever be displayed in one, so it matters that people see figures who look like them portrayed in such an exclusive space. Art Selfie lends itself nicely to conversations about who is and isn’t represented in museums and who gets to decide that. However, I wonder whether Art Selfie makes any attempt to filter out caricatures and other discriminatory artwork. I would feel offended if it pulled up a Yellow Peril cartoon.
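For the curious, here’s a minimal sketch of what that kind of fine-tuning setup can look like, assuming GPT-2 via Hugging Face’s transformers library. The training pairs and the prompt format here are illustrative stand-ins, not my actual scraped data:

```python
# Minimal sketch: fine-tune GPT-2 to generate art-critic-style descriptions.
# The (artwork, description) pairs and prompt format below are hypothetical.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Each example concatenates an artwork's metadata with its human-written
# description, so the model learns to continue the prompt with critic text.
pairs = [
    ("Impression, Sunrise (Monet, 1872)",
     "Loose brushwork dissolves the harbor into light and haze."),
    # ... many more scraped (artwork, description) pairs ...
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for artwork, description in pairs:
    text = f"Artwork: {artwork}\nDescription: {description}{tokenizer.eos_token}"
    inputs = tokenizer(text, return_tensors="pt")
    # Standard causal-LM objective: labels are the input ids, shifted
    # internally by the model.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# At inference time, prompt with a new artwork and sample a "review".
model.eval()
prompt = tokenizer("Artwork: Untitled\nDescription:", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=40, do_sample=True,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```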

I think it’s important to stress that we’re “still in the phase of ‘training the toddler’” when it comes to AI. Teaching an AI “common sense” is a very, very difficult unsolved problem, and most AI systems have to (and, in my opinion, should) make decisions in tandem with humans. The article mentions running automated sentiment analysis on visitor feedback, but the MFA doesn’t think it adequately captures the nuance of people’s responses, so the museum still evaluates them by hand. And while AI decisions have already disproportionately hurt certain groups of people and perpetuated their oppression, I don’t see a full-on AI apocalypse happening anytime soon, certainly not within my lifetime.
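The sentiment-analysis point is easy to demonstrate. Here’s a toy example (not the MFA’s actual pipeline, which the article doesn’t detail) using NLTK’s rule-based VADER scorer on some made-up visitor comments:

```python
# Toy illustration of why off-the-shelf sentiment analysis misses nuance.
# The feedback strings are invented examples, not real MFA data.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

feedback = [
    "The exhibit was great!",                                    # clearly positive
    "Great, another room I couldn't get into.",                  # sarcastic
    "Not bad, but the labels assume a lot of prior knowledge.",  # mixed critique
]

for comment in feedback:
    # 'compound' ranges from -1 (most negative) to +1 (most positive).
    print(f"{sia.polarity_scores(comment)['compound']:+.2f}  {comment}")
```

Sarcasm like the second comment will likely come out with a positive compound score because of the word “great,” and the substantive critique in the third gets flattened into a single number, which is exactly the nuance the MFA’s staff catch by reading responses themselves.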