Unsupervised Image to Sequence Translation with Canvas-Drawer Networks
Kevin Frans, Chin-Yi Cheng
Abstract
Encoding images as a series of high-level constructs, such as brush strokes or discrete shapes, can often be key to both human and machine understanding. In many cases, however, data is only available in pixel form. We present a method for generating images directly in a high-level domain (e.g. brush strokes), without the need for real pairwise data. Specifically, we train a "canvas" network to imitate the mapping of high-level constructs to pixels, followed by a high-level "drawing" network which is optimized through this mapping towards solving a desired image recreation or translation task. We successfully discover sequential vector representations of symbols, large sketches, and 3D objects, utilizing only pixel data. We demonstrate applications of our method in image segmentation, and present several ablation studies comparing various configurations.
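The two-stage training described in the abstract can be illustrated with a short sketch: first a differentiable "canvas" network learns to imitate a (possibly non-differentiable) renderer that turns stroke parameters into pixels, then a "drawer" network is trained through the frozen canvas so that reconstruction gradients reach the stroke parameters. The snippet below is a minimal PyTorch illustration of that idea; the stroke parameterization, network sizes, and the placeholder `render_strokes` function are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch of the canvas-drawer training scheme (illustrative assumptions only).
import torch
import torch.nn as nn

STROKE_DIM = 8   # assumed per-stroke parameters (e.g. endpoints, width, intensity)
N_STROKES = 5    # assumed number of strokes per image
IMG_SIZE = 32

class Canvas(nn.Module):
    """Differentiable stand-in for the renderer: stroke parameters -> pixels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STROKES * STROKE_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_SIZE * IMG_SIZE), nn.Sigmoid(),
        )
    def forward(self, strokes):
        return self.net(strokes.flatten(1)).view(-1, 1, IMG_SIZE, IMG_SIZE)

class Drawer(nn.Module):
    """Maps a target image to high-level stroke parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG_SIZE * IMG_SIZE, 256), nn.ReLU(),
            nn.Linear(256, N_STROKES * STROKE_DIM), nn.Sigmoid(),
        )
    def forward(self, img):
        return self.net(img).view(-1, N_STROKES, STROKE_DIM)

def render_strokes(strokes):
    """Placeholder for the real, non-differentiable renderer (hypothetical)."""
    flat = strokes.flatten(1)
    reps = (IMG_SIZE * IMG_SIZE) // flat.shape[1] + 1
    img = flat.repeat(1, reps)[:, : IMG_SIZE * IMG_SIZE]
    return img.view(-1, 1, IMG_SIZE, IMG_SIZE).clamp(0, 1)

canvas, drawer = Canvas(), Drawer()
opt_c = torch.optim.Adam(canvas.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(drawer.parameters(), lr=1e-3)

# Stage 1: train the canvas network to imitate the renderer on random strokes.
for _ in range(100):
    strokes = torch.rand(64, N_STROKES, STROKE_DIM)
    target = render_strokes(strokes)                  # ground-truth pixels
    loss_c = nn.functional.mse_loss(canvas(strokes), target)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

# Stage 2: freeze the canvas and optimize the drawer through it, so pixel
# reconstruction loss back-propagates to the high-level stroke outputs.
for p in canvas.parameters():
    p.requires_grad_(False)
for _ in range(100):
    target_imgs = render_strokes(torch.rand(64, N_STROKES, STROKE_DIM))
    recon = canvas(drawer(target_imgs))
    loss_d = nn.functional.mse_loss(recon, target_imgs)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```

The key design point, as the abstract notes, is that no paired (image, stroke-sequence) data is required: the canvas learns from randomly sampled strokes, and the drawer learns purely from pixel reconstruction through the learned canvas.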
Associated Researchers
Chin-Yi Cheng
Autodesk Research
Kevin Frans
Massachusetts Institute of Technology
Related Resources
Biologically Inspired Design (2011): This paper reviews research on biologically inspired design, and has…
Interactive Hatching and Stippling by Example (2006): We describe a system that lets a designer interactively draw patterns…
Towards Voxel-Based Algorithms for Building Performance Simulation (2014): This paper explores the design, coupling, and application of…