This website compiles a series of stock-image timelapses of flowers blooming and decaying, overlaid with captions auto-generated by a custom-made program. Viewers can toggle between various stages of the machine vision model's training progress, and a new plant is presented each time the page is loaded.
The project explores the non-transferability of meaning between text and images, and the reproduction of taxonomic orders in both stock imagery and machine vision. A technical reenactment of NeuralTalk, an early model designed to generate sentences describing the contents of images, this adaptation for contemporary computers exposes the limits of object recognition technologies: its inaccurate outcomes make explicit the unstable relation between images and their conceptual representations.
The ability of computers to segment and operationalize visuals as textual data marks a major shift in the role of photographs today. In the case of NeuralTalk, and of this derivative, the algorithm over-identifies human forms due to its architecture and training datasets. By applying the software to images of plants in stages of transformation, this exploration makes visible the anthropocentric mischaracterization coded into machines and the limits of computer vision when confronted with information that falls outside a specific worldview.
Please view this website on your desktop computer.