Improving AI’s ability to follow complex human instructions


GPT-4: Researchers have developed a new dataset for training AI models on Vision-and-Language Navigation (VLN), enabling robots to better follow complex human instructions. The dataset, two orders of magnitude larger than existing ones, pairs diverse indoor environments with synthetic instructions and is used to train a pure imitation learning agent. This agent outperforms existing reinforcement learning agents on the VLN benchmark, paving the way for improved instruction-following capabilities in AI.
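
To make the "pure imitation learning" phrasing concrete, below is a minimal behavior-cloning sketch in PyTorch: the agent is trained with supervised cross-entropy to reproduce expert navigation actions, with no reinforcement learning reward signal. The model, dimensions, and random stand-in data are illustrative assumptions and do not reflect the actual architecture or dataset from the work summarized above.

```python
# Minimal behavior-cloning sketch for an instruction-following agent.
# All names, sizes, and the toy random batch are hypothetical stand-ins,
# not the dataset or model described in the summary above.
import torch
import torch.nn as nn

NUM_ACTIONS = 6      # e.g. forward, turn-left, turn-right, up, down, stop (assumed)
INSTR_VOCAB = 1000   # hypothetical instruction-token vocabulary size
OBS_DIM = 512        # hypothetical visual-observation feature size

class InstructionFollower(nn.Module):
    """Toy policy: encodes an instruction and a visual observation,
    then predicts a distribution over navigation actions."""
    def __init__(self):
        super().__init__()
        self.instr_embed = nn.EmbeddingBag(INSTR_VOCAB, 256)  # bag-of-tokens encoder
        self.obs_proj = nn.Linear(OBS_DIM, 256)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(512, NUM_ACTIONS))

    def forward(self, instr_tokens, obs_features):
        instr = self.instr_embed(instr_tokens)         # (B, 256)
        obs = self.obs_proj(obs_features)              # (B, 256)
        return self.head(torch.cat([instr, obs], -1))  # (B, NUM_ACTIONS) logits

policy = InstructionFollower()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in batch: random instruction tokens, observations, and expert actions.
instr = torch.randint(0, INSTR_VOCAB, (32, 20))
obs = torch.randn(32, OBS_DIM)
expert_actions = torch.randint(0, NUM_ACTIONS, (32,))

# One imitation-learning step: maximize the likelihood of the expert's action.
logits = policy(instr, obs)
loss = criterion(logits, expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"behavior-cloning loss: {loss.item():.4f}")
```

The key design point this illustrates is that imitation learning reduces navigation to supervised learning over expert trajectories, which is why a much larger dataset can substitute for the exploration that reinforcement learning agents rely on.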
Read more at Medium…