ACM Transactions on Evolutionary Learning and Optimization
Language Model Crossover
Variation through Few-Shot Prompting
Fig. 1. Language Model Crossover (LMX). New candidate solutions are generated by concatenating parents into a prompt, feeding the prompt through any pre-trained large language model (LLM), and collecting offspring from the output. Such an operator can be implemented in very few lines of code. The scale and breadth of the dataset on which the LLM was trained, along with its ability to perform in-context learning, enable LMX to generate high-quality offspring across a broad range of domains. Domains demonstrated in this paper include (a) binary strings, (b) mathematical expressions, (c) English sentences, (d) image-generation prompts, and (e) Python code; many more are possible. When integrated into an optimization loop, LMX serves as a general and effective engine of text-representation evolution.
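As the caption notes, the operator itself needs only a few lines of code. The following is a minimal sketch, assuming the Hugging Face transformers text-generation pipeline; the model name, the one-genotype-per-line prompt format, and the sampling parameters are illustrative assumptions, not choices prescribed by the paper.

```python
# Minimal sketch of an LMX-style variation operator (illustrative, not the
# authors' exact implementation). Assumes a Hugging Face causal LM pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/pythia-1.4b")  # any causal LLM

def lmx(parents, num_children=5):
    """Concatenate parent genotypes into a few-shot prompt and parse the
    model's continuations as offspring genotypes."""
    prompt = "\n".join(parents) + "\n"  # assumed format: one genotype per line
    outputs = generator(
        prompt,
        max_new_tokens=50,
        do_sample=True,
        temperature=0.8,
        num_return_sequences=num_children,
        return_full_text=False,  # keep only the newly generated continuation
    )
    children = []
    for out in outputs:
        # Treat the first non-empty generated line of each sample as one child.
        lines = [l.strip() for l in out["generated_text"].split("\n") if l.strip()]
        if lines:
            children.append(lines[0])
    return children
```

In this sketch the prompt format doubles as the parser: because parents are presented one per line, the model tends to continue the pattern, and splitting the output on newlines recovers offspring in the same representation.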
Abstract
Elliot Meyerson, Mark J. Nelson, Herbie Bradley, Adam Gaier, Arash Moradi, Amy K. Hoover, Joel Lehman
This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e., they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator: prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and parse its corresponding output as those genotypes’ offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it enables a simple mechanism to evolve semantically-rich text representations (with few domain-specific tweaks), and naturally benefits from current progress in language models. Experiments in this paper highlight the versatility of language model crossover, through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The conclusion is that language model crossover is a flexible and effective method for evolving genomes representable as text.
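To make the evolutionary framing concrete, the operator sketched above could be wrapped in an optimization loop along the following lines. This is a hypothetical skeleton: the fitness function, population size, parent count, and truncation selection are placeholder choices for illustration, not the experimental setup reported in the paper.

```python
# Hypothetical evolutionary loop driven by an LMX-style operator (lmx() from
# the sketch above). Fitness, selection, and hyperparameters are placeholders.
import random

def evolve(initial_population, fitness_fn, generations=20, num_parents=3):
    population = list(initial_population)
    for _ in range(generations):
        # Sample a few parents; any selection scheme could be substituted here.
        parents = random.sample(population, num_parents)
        # Generate offspring by few-shot prompting the language model.
        children = lmx(parents)
        # Truncation selection: keep the fittest individuals at a fixed size.
        population.extend(children)
        population.sort(key=fitness_fn, reverse=True)
        population = population[: len(initial_population)]
    return population
```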
Associated Researchers
Elliot Meyerson
Cognizant AI Labs
Mark J. Nelson
American University
Herbie Bradley
University of Cambridge & CarperAI
Arash Moradi
New Jersey Institute of Technology
Amy K. Hoover
New Jersey Institute of Technology
Joel Lehman
CarperAI