

In 2017, FIELD started an extensive research project exploring the most relevant smart technologies in code-based illustrations. From an in-depth study of their logic and code, and with a new visual language, they created a series of illustrations that reveal the complexity and architecture of these technologies.
System Aesthetics
In 2018, algorithms will increasingly power every aspect of our lives, from voice recognition to self-driving cars. But it’s difficult for humans to understand such abstractions. “There’s a real lack of imagery and visual metaphors for all these new and very abstract things that we have in our lives,” explains Marcus Wendt, creative director at London-based art and technology studio FIELD. “We don’t have anything that will help us decide whether we can really trust these systems, or which one to go for, whenever there is more than one option.”

FIELD created five exclusive images for The World in 2018, WIRED magazine’s annual look at the technology that will impact our lives in the following year. Based on the structure of computing code, the images are part of a project to make algorithms much more accessible to all of us by developing a new visual language around them.
How Self-Driving Cars See The World
To navigate the world safely, autonomous vehicles must build a picture of it. To do this, an algorithm integrates real-time feeds from a multitude of sensors including video, infrared, radar and ultrasound. It then passes that data through up to 150 processing stages and filters informed by prior learning. This image is based on Inception, Google’s image recognition model, and shows the inputs (on the right) being pulled in and processed (top left) into a model of the road ahead. Other vehicles are represented by the red boxes.
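The fuse-then-filter pipeline described above can be sketched in a few lines of code. Everything here is illustrative: the sensor names come from the article, but the stage logic is a toy stand-in, not Inception’s actual architecture.

```python
# Toy sketch of a perception pipeline: merge several sensor feeds into
# one frame, then pass it through a chain of processing stages.
# The stages here are trivial placeholders, not real filters.

def fuse(feeds):
    """Merge real-time sensor readings into one combined frame."""
    return {name: reading for name, reading in feeds.items()}

def make_stage(i):
    """Each stage would refine the frame; here it just counts its pass."""
    def stage(frame):
        frame["stages_applied"] = frame.get("stages_applied", 0) + 1
        return frame
    return stage

# Invented example readings for the four sensor types named above.
feeds = {"video": [0.2, 0.9], "infrared": [0.4],
         "radar": [12.5], "ultrasound": [0.7]}
frame = fuse(feeds)

# The article mentions up to 150 processing stages; chain 150 trivial ones.
for i in range(150):
    frame = make_stage(i)(frame)

print(frame["stages_applied"])  # 150
```

The point is the shape of the computation, not the contents: raw feeds are combined first, and the combined frame is then refined stage by stage into a model of the road ahead.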
The Next Generation of Voice Assistants
Personal assistants like Alexa, Siri and Cortana will get even smarter in 2018. A computer science breakthrough called ‘dynamic program generation’ will allow them to understand more complex instructions and even the “intent” of the input. They will provide responses that tap into functionality and data from all the apps you use on your connected devices.
This illustration shows the natural language processing algorithm SyntaxNet. You can see the voice input, in the form of a soundwave, coming in at the bottom layer. It is parsed into phonemes, then processed across multiple, dynamically re-arranging layers to extract the user’s request and form a response.
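The layered flow described here can be caricatured as a tiny pipeline. The phoneme table and the intent rule below are invented for illustration; this is not SyntaxNet, only the shape of the process: sound becomes phonemes, phonemes become words, words become a request.

```python
# Toy illustration of the layered processing described above:
# words -> phonemes (lowest layer) and words -> intent (top layer).
# Every table and rule here is invented; real systems are far richer.

PHONEMES = {"play": ["p", "l", "ey"],
            "music": ["m", "y", "uw", "z", "ih", "k"]}

def to_phonemes(words):
    """The bottom layer: break recognised words into phonemes."""
    return [ph for w in words for ph in PHONEMES.get(w, ["?"])]

def extract_intent(words):
    """A crude rule standing in for the re-arranging upper layers."""
    if "play" in words:
        return {"action": "play", "object": words[-1]}
    return {"action": "unknown"}

utterance = ["play", "music"]
phonemes = to_phonemes(utterance)
intent = extract_intent(utterance)
print(intent)  # {'action': 'play', 'object': 'music'}
```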
Following The Money Trail
With initial coin offerings attracting attention and governments testing their own cryptocurrencies, digital money will continue to grow in influence in 2018. This image depicts transactions in Ethereum, an open-source computing system that allows developers to create blockchain-based applications.
Each square represents a line in the distributed ledger that makes up a blockchain, with each following on from the last. The squares’ colours are determined by the amount of money that is being moved.
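The two properties the image encodes, that each entry follows on from the last, and that colour is determined by the amount moved, can be sketched as a minimal hash-linked ledger. The colour buckets are invented for illustration; this is not Ethereum’s data structure.

```python
import hashlib

# Minimal sketch of a hash-linked ledger: each entry stores the previous
# entry's hash, so every square "follows on from the last". The colour
# mapping by amount is an invented bucketing, not the artwork's palette.

def entry_hash(prev_hash, amount):
    return hashlib.sha256(f"{prev_hash}:{amount}".encode()).hexdigest()

def colour_for(amount):
    # Illustrative: bigger transfers get "hotter" colours.
    return "red" if amount > 100 else "orange" if amount > 10 else "blue"

ledger = []
prev = "genesis"
for amount in [5, 42, 250]:  # invented transaction amounts
    h = entry_hash(prev, amount)
    ledger.append({"prev": prev, "hash": h,
                   "amount": amount, "colour": colour_for(amount)})
    prev = h

print([e["colour"] for e in ledger])  # ['blue', 'orange', 'red']
```

Because each entry embeds the previous hash, altering any earlier amount would change every hash after it, which is what makes a distributed ledger tamper-evident.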
Face Hackers
In 2017, researchers at the University of Washington managed to generate a believable video of President Obama, using only a forged audio recording – and a neural network trained on his public speeches. The lip sync is nearly perfect, and the possibilities of abuse are alarming. How much longer will we be able to trust what we see on camera?
This artwork, generating Obama’s likeness from a multitude of software modules, illustrates the way a neural network learns how different sounds correspond to the movement of lips, eyes and cheeks in minute detail.
Image Creation
Algorithms usually rely on feedback from humans to help them improve – but AI researchers are excited by generative adversarial networks.
Previously thought impossible, the idea is to pit two machine learning programs against each other: one creates something, the other acts as critic.
Amazon is testing an application in which networks analyse images and then create similar ones. Although they can currently only create tiny images, the technique might one day be used in film-making.
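The creator-versus-critic idea can be caricatured in a few lines. This is a deliberate simplification, assuming a one-number "generator", a fixed hand-written critic (a real GAN trains the critic too), and random hill-climbing instead of gradients, so it is the adversarial shape, not a working image model.

```python
import random

# Caricature of the adversarial idea: a "generator" proposes samples,
# a "critic" scores how real they look, and the generator keeps only
# the mutations that fool the critic better. All values are invented.

random.seed(0)
REAL_MEAN = 5.0  # the "real data" the generator must learn to imitate

def critic(x):
    """Score: 1.0 for perfectly real-looking, lower as x drifts away."""
    return 1.0 / (1.0 + abs(x - REAL_MEAN))

gen_param = 0.0  # generator's single parameter: the value it emits
for _ in range(2000):
    candidate = gen_param + random.uniform(-0.1, 0.1)
    if critic(candidate) > critic(gen_param):
        gen_param = candidate  # keep only mutations that score better

print(gen_param)  # ends up close to 5.0
```

In a real generative adversarial network both sides are neural networks trained alternately: the critic learns to tell real samples from generated ones while the generator learns to defeat it, which is what makes the resulting images progressively more convincing.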