
Types of LLMs

  • Writer: Suhitha rao Boinapally
  • Apr 14, 2024
  • 1 min read




Welcome, curious minds! Today, we set out to discover the two types of LLMs: Base LLMs and Instruction-Tuned LLMs.


Base LLMs:

These models undergo extensive training on vast amounts of text data sourced from the internet and other repositories. Their primary function is to predict the next word in a given context. For instance, when prompted with "What is the capital of France?", a base LLM might not answer at all; because it has seen many lists of quiz questions, it may simply continue the pattern and generate another question such as "What is the capital of India?". Examples of such base LLMs include GPT-3 and BLOOM.
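
To make this concrete, here is a minimal sketch of next-word completion with a base model. It assumes the Hugging Face transformers library is installed and uses the small, publicly available gpt2 checkpoint purely as a stand-in base LLM:

# A minimal sketch of next-word completion with a base LLM.
# Assumes the Hugging Face `transformers` library is installed;
# "gpt2" is used only as a small, publicly available stand-in base model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "What is the capital of France?"
# A base model simply continues the text, so it may echo the question
# style rather than answer it directly.
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])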


Instruction-Tuned LLMs:

Instruction-tuned LLMs are engineered to follow instructions more precisely. They start as base LLMs and then undergo fine-tuning with input-output pairs containing instructions. Reinforcement Learning from Human Feedback (RLHF) is often utilized to further refine these models, ensuring they become adept at being helpful, honest, and harmless. Consequently, instruction-tuned LLMs are less prone to generating problematic text and are better suited for practical applications. For instance, in response to the prompt "What is the capital of France?", an instruction-tuned LLM would likely provide "Paris" or "Paris is the capital of France". Notable examples of instruction-tuned LLMs include OpenAI's ChatGPT and Codex, as well as Open Assistant, which have found widespread use in various applications such as chatbots and content generation.
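
For comparison, here is a minimal sketch of asking the same question to an instruction-tuned model through a chat-style API. It assumes the openai Python client and an instruction-tuned model such as gpt-3.5-turbo; substitute whichever instruction-tuned model and API key you have access to:

# A minimal sketch of querying an instruction-tuned LLM via a chat API.
# Assumes the `openai` Python client and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any instruction-tuned chat model works here
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
# An instruction-tuned model answers the question directly, e.g. "Paris".
print(response.choices[0].message.content)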


Base LLMs versus Instruction-Tuned LLMs:

Base LLMs offer broad versatility and a vast store of general knowledge, making them well suited to open-ended exploration, whereas Instruction-Tuned LLMs follow prompts more reliably and deliver more accurate, tailored responses for specific tasks.

See you in the next blog!
