A large language model (LLM) is a natural language processing (NLP) system trained on massive amounts of text data to predict the most probable next word, or token, in a sequence.

  • LLMs apply deep learning architectures, most notably the transformer, to ingest text and learn the statistical patterns and relationships between words and larger linguistic structures.
  • As training data and model scale grow, they become better at generating fluent, human-like text (a minimal next-token prediction sketch follows this list).
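That next-token objective can be made concrete in a few lines of code. The sketch below is one illustrative way to query a pretrained causal language model for its distribution over the next token; it assumes the Hugging Face transformers library and PyTorch are installed, and uses the small GPT-2 checkpoint purely as a convenient example.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 is an illustrative choice; any causal language model checkpoint works.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "A large language model is trained to"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

    # The logits at the last position define a probability distribution over the next token.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)

    for token_id, prob in zip(top.indices, top.values):
        print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")

Text generation is simply this step applied repeatedly: pick or sample a token from the distribution, append it to the input, and predict again.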

Sample uses of LLMs (a short prompting sketch follows the list):

  • Text completion – Autocomplete search queries, emails, or documents
  • Text generation – Create original essays, code, poetry, dialogue
  • Text summarization – Summarize and distill key information from documents
  • Translation – Translate text between languages
  • Question answering – Provide answers to fact-based questions
  • Text classification – Categorize documents by topic, sentiment, spam detection
  • Speech recognition – Transcribe spoken audio to text, typically by pairing the LLM with a dedicated speech-to-text model or by using a speech-enabled multimodal model
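Many of these uses rely on the same mechanism: a single general-purpose model is steered toward a task by the wording of the prompt. Below is a minimal sketch of that prompting pattern, assuming the Hugging Face transformers library; the GPT-2 checkpoint is only a runnable placeholder, and an instruction-tuned model would follow the task instructions far more faithfully.

    from transformers import pipeline

    # Placeholder checkpoint: GPT-2 runs anywhere but is not instruction-tuned,
    # so swap in an instruction-tuned model for genuinely useful task-following.
    generator = pipeline("text-generation", model="gpt2")

    tasks = {
        "summarization": "Summarize in one sentence: Large language models learn "
                         "statistical patterns from text and can be adapted to many tasks.",
        "translation": "Translate to French: The weather is nice today.",
        "classification": "Classify the sentiment as positive or negative: I loved this film.",
    }

    for name, prompt in tasks.items():
        result = generator(prompt, max_new_tokens=40, do_sample=False)
        print(f"--- {name} ---")
        print(result[0]["generated_text"])

The same pattern extends to completion, question answering, and dialogue: only the prompt changes, not the model.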

Well-known examples of large models include OpenAI's GPT-3.5 and GPT-4, DeepMind's Gopher, Meta's OPT, and Anthropic's Claude, which is trained with Anthropic's Constitutional AI method.

State-of-the-art LLMs can produce coherent, contextually relevant, and often factually accurate text, expanding what AI assistants can achieve. However, risks around bias, safety, and misuse remain active research frontiers.