Large Language Models Show Promise as General-Purpose Optimizers

A new paper from researchers at Google DeepMind demonstrates the potential for large language models (LLMs) such as PaLM 2 and GPT-4 to act as general-purpose optimization algorithms. The approach, called OPRO (Optimization by PROmpting), describes an optimization problem to the LLM in a natural language prompt. The LLM then iteratively generates new candidate solutions that aim to improve on previous ones.

Figure: An overview of the OPRO framework. Given the meta-prompt as the input, the LLM generates new solutions to the objective function, then the new solutions and their scores are added into the meta-prompt for the next optimization step. The meta-prompt contains the solution-score pairs obtained throughout the optimization process, as well as a natural language description of the task and (in prompt optimization) a few exemplars from the task.
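The loop in the figure can be sketched in a few lines of Python. Everything here is a simplified illustration: `propose` is a toy stand-in for the LLM call (a real implementation would query a model and parse a candidate out of its text completion), and the objective is a one-dimensional toy function.

```python
import random

def build_meta_prompt(task_description, history, top_k=5):
    """Assemble the meta-prompt: the task description plus the best
    solution-score pairs seen so far, with the highest-scoring
    solutions placed last, closest to the generation point."""
    best = sorted(history, key=lambda pair: pair[1])[-top_k:]
    lines = [task_description, "", "Previous solutions and their scores:"]
    lines += [f"solution: {sol:.3f}  score: {val:.3f}" for sol, val in best]
    lines.append("Propose a new solution that achieves a higher score.")
    return "\n".join(lines)

def opro(task_description, propose, score, steps=100, seed=0.0):
    """Generic OPRO loop; `propose` stands in for the LLM call."""
    history = [(seed, score(seed))]
    for _ in range(steps):
        meta_prompt = build_meta_prompt(task_description, history)
        candidate = propose(meta_prompt, history)
        history.append((candidate, score(candidate)))
    return max(history, key=lambda pair: pair[1])

# Toy objective: maximize -(x - 3)^2, so the optimum is x = 3.
score = lambda x: -(x - 3.0) ** 2

# Stand-in "LLM": randomly perturb the best solution seen so far.
rng = random.Random(0)
def propose(meta_prompt, history):
    best, _ = max(history, key=lambda pair: pair[1])
    return best + rng.uniform(-1.0, 1.0)

best_x, best_val = opro("Maximize -(x - 3)^2 over real x.", propose, score)
```

The key design point is that the optimizer never sees gradients, only a text record of past solutions and their scores, which is what lets the same loop wrap any black-box objective.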

The researchers tested OPRO on small instances of classic optimization problems like linear regression and the traveling salesman problem. With no specialized training, the LLMs found solutions competitive with hand-designed heuristics on these small instances, though performance degrades as problem size grows.
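For the linear regression case, each optimization step describes the past candidate (w, b) pairs and their losses to the LLM as plain text. A minimal sketch of such an instance serialization follows; the prompt wording is illustrative, not the paper's exact template.

```python
def squared_error(points, w, b):
    """Objective the LLM is asked to minimize: sum of squared residuals."""
    return sum((y - (w * x + b)) ** 2 for x, y in points)

def regression_meta_prompt(points, history):
    """points: list of (x, y); history: list of ((w, b), loss) pairs.
    Worst candidates are listed first, best last."""
    lines = ["Find w and b minimizing the squared error of y = w*x + b",
             "on the points: " + ", ".join(f"({x},{y})" for x, y in points),
             "", "Previously proposed (w, b) pairs and their losses:"]
    for (w, b), loss in sorted(history, key=lambda h: -h[1]):
        lines.append(f"w={w}, b={b}  loss={loss:.2f}")
    lines.append("Give a new (w, b) pair with a lower loss.")
    return "\n".join(lines)

points = [(0, 1), (1, 3), (2, 5)]          # generated by y = 2x + 1
history = [((1.0, 0.0), squared_error(points, 1.0, 0.0))]
prompt = regression_meta_prompt(points, history)
```

Because the whole problem fits in the prompt, no problem-specific solver code is needed: changing the task description changes the optimizer.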

The most promising results came from optimizing prompts themselves. The goal was to find an instructional prompt that maximizes an LLM’s accuracy on reasoning tasks like mathematical word problems. Optimized prompts improved accuracy over human-designed prompts by up to 8% on GSM8K and up to 50% on Big-Bench Hard tasks. The prompts generalized to unseen test data, sometimes matching state-of-the-art few-shot performance.
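The scoring step in prompt optimization can be sketched as follows: a candidate instruction is scored by the fraction of training questions a "scorer" LLM answers correctly when the instruction is prepended. Everything below is a toy illustration; `fake_scorer` is a deterministic stand-in with made-up behavior (it only answers correctly when the prompt happens to contain "step by step", so the two candidates get different scores), not a real model.

```python
def score_instruction(instruction, train_set, scorer_llm):
    """Fraction of questions answered correctly with this instruction."""
    correct = 0
    for question, answer in train_set:
        prediction = scorer_llm(f"{instruction}\nQ: {question}\nA:")
        if prediction.strip() == answer:
            correct += 1
    return correct / len(train_set)

def fake_scorer(prompt):
    """Toy stand-in for the scorer LLM: evaluates the arithmetic
    expression in the question, but only when the (arbitrary) magic
    phrase is present, so that instruction quality affects the score."""
    question = prompt.split("Q:")[1].split("\nA:")[0].strip()
    expr = question.rstrip("?").split("is")[-1].strip()
    if "step by step" in prompt:
        return str(eval(expr))
    return "0"

train = [("What is 2 + 3?", "5"), ("What is 7 * 6?", "42")]
candidates = ["Answer the question.", "Let's think step by step."]
scores = {c: score_instruction(c, train, fake_scorer) for c in candidates}
best = max(scores, key=scores.get)
```

In the paper's actual setup, the optimizer LLM would now receive the instruction-score pairs in its meta-prompt and propose new instructions, closing the same loop used for the numeric problems.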

Table: Test accuracies on GSM8K, showing the instruction with the highest test accuracy for each scorer-optimizer pair.

Overall, the work shows that LLMs can perform iterative optimization simply by recognizing patterns in the previous solution attempts presented in natural language. With their vast knowledge and few-shot learning abilities, LLMs could prove to be versatile black-box optimizers, adapting to new problems simply by updating the prompt description. If the approach scales to larger instances, it could have widespread implications across fields like machine learning, engineering design, and operations research. Rather than designing custom algorithms, OPRO suggests we may one day simply describe optimization problems to a general-purpose LLM.
