The “Hues of Intelligence” project is an art installation that challenges us to examine the influx of machines into our daily lives, and to explore how their perspectives on the world compare to our own. Algorithms were tasked with sequencing a collection of colors; their attempts are presented as a series of prints and animations.

If machines are to make an ever-increasing proportion of our decisions, should we not ask what differences exist between our human perceptions and those of the decision-makers?

The “Hues of Intelligence” series speaks to the vast technological revolution going on all around us and the implications of machines playing the role of decision-makers in our world.

Each machine is presented with red, green, and blue intensities representing a collection of colors, mirroring the function of the cones in the human eye. It is asked to sort the colors in a way that preserves the relationships between them: like colors should stay together, but the arrangement itself is left for the machine to discover.
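The task can be sketched in a few lines of code. The snippet below is a minimal illustration under assumed details (a random toy palette, Euclidean distance in RGB space, a greedy nearest-neighbour chain), not the installation's actual method: it orders colors so that each one sits beside a similar one.

```python
import random

# Toy palette of random RGB triples standing in for the installation's
# (unknown) color collection; each channel is an intensity in [0, 1].
random.seed(0)
palette = [(random.random(), random.random(), random.random())
           for _ in range(50)]

def distance(a, b):
    """Euclidean distance between two RGB colors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Greedy nearest-neighbour chain: start from an arbitrary color and
# repeatedly append the closest color not yet placed, so that
# neighbouring colors in the final sequence are similar.
remaining = list(palette)
ordering = [remaining.pop(0)]
while remaining:
    nearest = min(remaining, key=lambda c: distance(ordering[-1], c))
    remaining.remove(nearest)
    ordering.append(nearest)
```

A greedy chain is only one of many ways to "preserve relationships": it keeps adjacent colors close but can still take large jumps when a local neighbourhood is exhausted, which is exactly the kind of imperfection the prints invite viewers to look for.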

Since nature has a preferred ordering of color, Hues of Intelligence in essence seeks to answer the question: “Can a machine discover the rainbow?”

A collection of attempts at this task is presented juxtaposed with a color wheel arranged by hue. The attempts are drawn from four algorithms: one learned to approximate the transformation from color to hue using labeled examples (supervised training), while the other three were left to discover the concept on their own (unsupervised). These three were never exposed to the true hue values; they were told only to keep similar colors close together.
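For readers curious about the reference ordering, the hue of an RGB color can be computed directly; this is the label a supervised learner would be trained to approximate. The sketch below is illustrative, not the project's code, using Python's standard `colorsys` conversion:

```python
import colorsys

# Four pure colors: red, blue, green, yellow (RGB channels in [0, 1]).
colors = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0),
          (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]

def hue(rgb):
    # colorsys.rgb_to_hsv returns (h, s, v) with hue h in [0, 1);
    # this hue value is the "ground truth" a supervised model learns.
    return colorsys.rgb_to_hsv(*rgb)[0]

# Sorting by hue recovers the rainbow ordering directly:
# red (0.0), yellow (~0.167), green (~0.333), blue (~0.667).
rainbow = sorted(colors, key=hue)
```

The unsupervised algorithms in the series never see these hue values; the question the prints pose is whether pairwise similarity alone is enough to rediscover this ordering.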

The audience is invited to interrogate the prints at close range and build a visual intuition for the reasoning of the machines. Can they identify the natural ordering amongst the sea of algorithmic imagery?

If machines struggle to discover human preference, then should we entrust them with more substantive decisions? Is their ability to learn to approximate our preferences a sufficient criterion for handing over control?

More images are available in the gallery, alongside project details in the docs.