A large language model (LLM), in the context of natural language processing and artificial intelligence, is a neural network trained on massive amounts of text data to understand and generate human-like language. These models are typically built on the transformer architecture.
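To make the idea concrete, the following is a minimal sketch of prompting a pretrained transformer-based LLM for text generation. It assumes the Hugging Face `transformers` library and the publicly available `gpt2` checkpoint, which are used here purely for illustration and are not prescribed by the text above.

```python
# Minimal sketch: text generation with a pretrained transformer LLM.
# Assumes the Hugging Face `transformers` library is installed and the
# public "gpt2" checkpoint can be downloaded.
from transformers import pipeline

# Load a pretrained text-generation pipeline (downloads weights on first run).
generator = pipeline("text-generation", model="gpt2")

# Prompt the model and let it continue the text.
prompt = "A large language model is"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Larger models follow the same usage pattern; the main differences are the size of the training corpus, the number of parameters, and the compute required to run them.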