LITM
Lost in the Middle
Example
I wrote this super detailed prompt, but the model totally ignored half of the instructions in the middle
Classic LITM
Yeah, guess I need to break it into smaller chunks next time
LITM can cause an AI model to overlook instructions buried in the middle of a long prompt
Related Slang
| Term | Meaning |
|------|---------|
| AI | Artificial intelligence |
| LLM | Large Language Model |
| GenAI | Generative AI |
| CGPT | ChatGPT |
| C.AI | Character.AI |
| Algo | Algorithm |
| Centaur | A human aided by artificial intelligence |
| EAT | Expertise, Authoritativeness, and Trustworthiness |
| VR | Virtual reality |
| Deepfake | An authentic-looking depiction of a person that is not real |
LITM is an acronym used in AI and large language model (LLM) communities to describe what happens when key information in the middle of a long prompt gets ignored, misunderstood, or forgotten by the model. When a prompt is very long or packed with multiple instructions, LLMs tend to weight its beginning and end most heavily, leaving the middle content effectively "lost."
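One common workaround is to repeat the critical instructions at both the start and end of a long prompt, so they sit in the positions the model attends to most. Here is a minimal Python sketch of that idea; the function name and prompt labels are hypothetical illustrations, not part of any particular library or API:

```python
def sandwich_prompt(instructions: str, context: str) -> str:
    """Place critical instructions at both the start and end of a prompt.

    Because LLMs tend to weight the beginning and end of their context
    most heavily, restating the instructions after a long middle section
    helps keep them from being "lost in the middle."
    """
    return (
        f"INSTRUCTIONS:\n{instructions}\n\n"
        f"CONTEXT:\n{context}\n\n"
        f"REMINDER - follow these instructions exactly:\n{instructions}"
    )


if __name__ == "__main__":
    instructions = "Summarize the report in three bullet points, in plain English."
    context = "...thousands of words of report text..."  # the long middle section
    print(sandwich_prompt(instructions, context))
```

The other fix mentioned in the example above, breaking one giant prompt into smaller chunks sent across several turns, works for the same reason: each chunk is short enough that nothing in it sits deep in the middle of the context.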
The term emerged in online AI forums, Discord servers, and prompt engineering communities in the early 2020s as users swapped tips and frustrations about getting consistent, accurate outputs from generative models. It is used mostly by prompt engineers, AI hobbyists, and creators working with tools like ChatGPT, GPT-4, and other generative AI models.