[Paper] Recursive Language Models: Using code + recursion to make LLMs read 10M+ tokens without huge context windows
TL;DR: A new MIT CSAIL paper proposes Recursive Language Models (RLMs). Instead of scaling context windows forever, the LLM treats its prompt as data inside a REPL (e.g., Python) and generates code to slice, search, and recursively call itself on sub-chunks. Results: Handles
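The core loop described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for a real model API (stubbed here with a simple line search so the example runs), and the chunking/recursion strategy is the simplest possible binary split over lines.

```python
def call_llm(prompt: str) -> str:
    # Stub for a real model call. For this sketch, it "answers" a
    # needle-in-haystack query by scanning the context for the needle.
    query, _, context = prompt.partition("\n---\n")
    needle = query.removeprefix("Find the line containing: ")
    for line in context.splitlines():
        if needle in line:
            return line
    return ""

def rlm_query(query: str, context: str, max_lines: int = 100) -> str:
    """Recursively narrow a huge context until it fits in one model call."""
    lines = context.splitlines()
    # Base case: context is small enough to hand to the model directly.
    if len(lines) <= max_lines:
        return call_llm(f"{query}\n---\n{context}")
    # Recursive case: split the context and recurse on each half,
    # returning the first non-empty sub-answer.
    mid = len(lines) // 2
    left = rlm_query(query, "\n".join(lines[:mid]), max_lines)
    if left:
        return left
    return rlm_query(query, "\n".join(lines[mid:]), max_lines)

# A 501-line "haystack" far larger than the per-call budget.
haystack = "\n".join(f"line {i}: filler" for i in range(500))
haystack += "\nline 500: the secret token is SWORDFISH"
print(rlm_query("Find the line containing: secret token", haystack))
# → line 500: the secret token is SWORDFISH
```

In the actual paper the model itself decides how to slice and when to recurse by writing code in the REPL; the fixed binary split here just makes the recursion visible.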