I’d forgotten how much I love describing things using metadata, and how hard it is to switch between thinking like a machine and thinking like a human being. It’s much harder than switching between human languages: although we may use different syntaxes and vocabularies, humans are linguistically flexible and can make leaps, inferences, and assumptions. Machines can’t.
This became very clear to me today whilst re-organizing some of my Pinterest boards. (I know; I also ran around the beach at Lake Monroe while the dog chewed on ice floes.) The kinds of associations (leaps and inferences) that bind the content of my respective Pinterest boards together are rarely the kinds that bind those images to others when, having selected an image, one scrolls down and keeps scrolling ’til the app produces a selection of “related” images. (The one exception is when I’ve pinned an image of mid-century design or, sometimes, a knitting pattern.) Pinterest tends to bind images by, say, what’s in the picture (a street) rather than by qualities less tangible to image-recognition software, like quality of light (if Pinterest even uses such software and doesn’t instead rely on user tags and on pins that are frequently pinned together).
Anyway. I wonder how one might build a training set to teach a machine to select for those less tangible qualities. Can metadata be used to make those inferences? Might one study the metadata associated with images that all share a particular quality of light, looking for a subconscious awareness of the light on the part of the human describers that seeps into the associated keywords? Or are my flowering cactus gardens, shot with wide-angle lenses in the afternoon, destined for human curation alone?
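One naive way to start answering that, assuming each image carries human-assigned tags: score every image's tags against a small lexicon of light-related words, and keep the high scorers as candidates for a training set. Everything below is a hypothetical sketch, not Pinterest's actual data model; the lexicon, threshold, and example records are all invented for illustration.

```python
# Sketch: mining human-assigned image tags for hints about quality of light.
# LIGHT_LEXICON, the threshold, and the sample records are all hypothetical.
LIGHT_LEXICON = {"golden", "glow", "dusk", "backlit", "hazy", "afternoon",
                 "soft", "luminous", "twilight", "sunbeam"}

def light_score(tags):
    """Fraction of an image's tags that hint at quality of light."""
    tags = {t.lower() for t in tags}
    if not tags:
        return 0.0
    return len(tags & LIGHT_LEXICON) / len(tags)

def candidate_training_set(images, threshold=0.25):
    """Select images whose describers may have (subconsciously) noticed the light."""
    return [name for name, tags in images.items()
            if light_score(tags) >= threshold]

images = {
    "cactus_garden.jpg": ["cactus", "afternoon", "golden", "wide-angle"],
    "street_scene.jpg":  ["street", "car", "crosswalk"],
    "lake_monroe.jpg":   ["dog", "ice", "beach", "dusk"],
}
print(candidate_training_set(images))  # ['cactus_garden.jpg', 'lake_monroe.jpg']
```

Of course, this only finds light the describers consciously named; catching the subconscious trace would mean looking at which other keywords co-occur unusually often with a hand-labeled "good light" seed set, which is a statistics problem rather than a lexicon lookup.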