AI Glossary

Understanding GPTs and LLMs Terminology

Explore our comprehensive AI glossary page to demystify the terminology related to Generative Pre-trained Transformers (GPTs) and Large Language Models (LLMs). Enhance your understanding of these cutting-edge technologies with clear definitions and explanations.

A
AGI
Artificial General Intelligence (AGI) refers to AI systems that possess the ability to understand, learn, and apply knowledge across a wide range of tasks.
AI
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans.
C
Chain-of-Thought Prompting
Chain-of-Thought Prompting is a prompt engineering technique that encourages an AI model to generate intermediate reasoning steps before producing a final answer, which improves accuracy on multi-step problems.
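A minimal sketch of how such a prompt might be assembled. The exact wording ("Let's think step by step") is one commonly cited phrasing, not a fixed API:

```python
# Illustrative sketch: wrapping a question in a chain-of-thought style prompt.
# The phrasing below is one common convention, not a required format.

def make_cot_prompt(question: str) -> str:
    """Append an instruction that elicits intermediate reasoning steps."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )

prompt = make_cot_prompt("If a train travels 60 km in 1.5 hours, what is its average speed?")
print(prompt)
```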
ChatGPT
ChatGPT is a variant of the GPT model that is fine-tuned on conversational data to generate human-like responses in chat applications.
Chatbot
A chatbot is a computer program that simulates human conversation through voice commands or text chats.
Claude
Claude is a family of large language models developed by Anthropic, trained on a diverse range of text data.
Context Window
The context window is the maximum amount of text, measured in tokens, that an AI model can take into account at once when generating a response.
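A toy sketch of the practical consequence: conversation history must be trimmed to fit the window. Token counting here is a crude whitespace split purely for illustration; real models use subword tokenizers, so counts differ:

```python
# Illustrative sketch: keeping a conversation within a fixed token budget.
# count_tokens is a deliberate simplification (whitespace split).

def count_tokens(text: str) -> int:
    return len(text.split())

def trim_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest messages until the total fits the context window."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest message first
    return kept

history = ["hello there", "how are you today", "tell me about transformers"]
print(trim_to_window(history, max_tokens=8))
```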
E
Embedding
An embedding is a vector representation of a word or phrase that captures its meaning and context in a high-dimensional space.
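A toy sketch of why embeddings are useful: vectors for related words sit closer together, which can be measured with cosine similarity. The three-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

# Toy sketch: comparing embedding vectors with cosine similarity.
# The vectors are invented for illustration only.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

king = [0.9, 0.1, 0.4]
queen = [0.85, 0.15, 0.45]
apple = [0.1, 0.9, 0.2]

# Closer meanings score higher:
print(cosine_similarity(king, queen) > cosine_similarity(king, apple))
```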
F
Few-shot Learning
Few-shot learning is a machine learning paradigm that aims to train models on a small amount of data to perform a specific task.
Few-shot Prompt
A few-shot prompt is a prompt that is designed to guide an AI model to perform a specific task with only a few examples.
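A hypothetical sketch of how a few-shot prompt is typically assembled: a handful of labeled input/output examples followed by the new query. The `Input:`/`Output:` labels are an illustrative convention, not a requirement:

```python
# Sketch: assembling a few-shot prompt from example pairs.
# The label format is illustrative only.

def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Prefix the query with labeled input/output examples."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    [("cat", "animal"), ("rose", "plant")],
    "oak",
)
print(prompt)
```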
Fine-tuning
Fine-tuning is the process of training a pre-trained AI model on a specific dataset to adapt it to a particular task.
G
GPT
GPT stands for Generative Pre-trained Transformer. It is a type of large language model developed by OpenAI that is trained on a diverse range of internet text data.
H
Hallucinations
In the context of AI text generation, hallucinations refer to the generation of incorrect or nonsensical information by the model.
L
LLM Size
LLM size refers to the number of parameters in a Large Language Model, which determines the model's capacity and performance.
Large Language Model (LLM)
A Large Language Model (LLM) is a type of AI model that is trained on a large corpus of text data to generate human-like text.
Latency
Latency is the time delay between the input to a system and the corresponding output, often used to measure the responsiveness of a system.
M
Machine Learning
Machine Learning is a subset of AI that enables machines to learn from data and improve their performance without being explicitly programmed.
Multi-turn Conversation
A multi-turn conversation is a conversation between two or more participants that involves multiple exchanges of messages.
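A sketch of how multi-turn conversations are commonly represented for LLMs: an ordered list of role/content messages. The "system"/"user"/"assistant" role names follow a convention used by several chat APIs and are shown here as an illustration:

```python
# Sketch: a multi-turn conversation as an ordered list of messages.
# Role names follow a common chat-API convention.

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a token?"},
    {"role": "assistant", "content": "A token is a small unit of text."},
    {"role": "user", "content": "How are tokens created?"},  # later turns build on earlier ones
]

user_turns = [m for m in conversation if m["role"] == "user"]
print(len(user_turns))
```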
N
NLP
NLP stands for Natural Language Processing; see the full entry below.
Natural Language Processing
Natural Language Processing (NLP) is a branch of AI that enables machines to understand, interpret, and generate human language.
O
One-shot Learning
One-shot learning is a machine learning paradigm in which a model learns to perform a task from a single example.
One-shot Prompt
A one-shot prompt is a prompt that is designed to guide an AI model to perform a specific task with only a single example.
OpenAI
OpenAI is an artificial intelligence research lab that aims to ensure that artificial general intelligence (AGI) benefits all of humanity.
Overfitting
Overfitting is a common problem in machine learning where a model performs well on the training data but poorly on new, unseen data.
P
Pretrained Model
A pretrained model is a model that has been trained on a large dataset and can be fine-tuned on a specific task with a smaller dataset.
Prompt
In the context of AI and NLP, a prompt is a piece of text that is used to guide an AI model to generate a specific output.
Prompt Engineering
Prompt engineering is the practice of designing and refining prompts so that an AI model generates the desired output.
Prompt Optimization
Prompt optimization is the process of refining prompts to guide AI models to generate more accurate and relevant responses.
S
System Prompt
A system prompt is an instruction, typically set by the application rather than the end user, that defines an AI model's behavior, persona, or constraints for a conversation.
T
Temperature
In the context of AI text generation, temperature is a hyperparameter that controls the randomness of the generated text.
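A minimal sketch of how temperature works under the hood: the model's raw scores (logits) are divided by the temperature before the softmax, so low values sharpen the distribution and high values flatten it. The logits below are invented for illustration:

```python
import math

# Illustrative sketch: temperature scaling of logits before softmax.
# Lower temperature sharpens the distribution; higher flattens it.

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)

# Low temperature concentrates probability mass on the top token:
print(cold[0] > hot[0])
```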
Token
In the context of AI and NLP, a token refers to a single unit of text, such as a word or a punctuation mark.
Tokenization
Tokenization is the process of breaking down text into smaller units called tokens, such as words or subwords.
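A toy word-and-punctuation tokenizer to make the idea concrete. Production LLMs use subword schemes such as byte-pair encoding, which also split rare words into smaller pieces:

```python
import re

# Toy sketch: a naive word-and-punctuation tokenizer.
# Real LLM tokenizers use subword schemes (e.g. BPE) instead.

def tokenize(text: str) -> list[str]:
    # \w+ matches runs of word characters; [^\w\s] matches single punctuation marks
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Hello, world!"))  # → ['Hello', ',', 'world', '!']
```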
Top-k Sampling
Top-k sampling is a sampling technique used in AI text generation that restricts each generation step to the k most likely tokens and samples from that reduced set.
Top-p Sampling
Top-p sampling (also called nucleus sampling) is a sampling technique used in AI text generation that samples from the smallest set of most likely tokens whose cumulative probability exceeds a threshold p.
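The corresponding filtering step, again with made-up probabilities: walk down the ranked tokens until the cumulative probability reaches p, then renormalize the kept set:

```python
# Sketch of top-p (nucleus) filtering with invented probabilities.

def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest top-ranked set whose cumulative probability reaches p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for tok, prob in ranked:
        kept.append((tok, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {tok: prob / total for tok, prob in kept}

probs = {"the": 0.5, "a": 0.3, "zebra": 0.15, "qux": 0.05}
print(top_p_filter(probs, p=0.7))  # "the" + "a" reach 0.8 >= 0.7, so the set stops there
```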
Transformer
The Transformer is a deep learning model that is based on self-attention mechanisms and is used in natural language processing tasks.
U
User Prompt
A user prompt is a prompt that is provided by a user to guide an AI model to generate a specific type of response.
V
Vector Database
A vector database is a database that stores vector representations of data points, enabling efficient similarity search and retrieval.
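A minimal sketch of the core operation: store vectors and return the ones closest to a query. The documents and vectors below are invented; production systems use approximate indexes (e.g. HNSW) to scale to millions of vectors:

```python
import math

# Minimal sketch of a vector store: brute-force nearest-neighbor search.
# Real vector databases use approximate indexes for scale.

store = {
    "doc1": [0.1, 0.9],
    "doc2": [0.8, 0.2],
    "doc3": [0.15, 0.85],
}

def euclidean(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query: list[float], k: int = 2) -> list[str]:
    """Return the ids of the k vectors closest to the query."""
    return sorted(store, key=lambda key: euclidean(store[key], query))[:k]

print(nearest([0.2, 0.8]))
```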
Z
Zero-shot Learning
Zero-shot learning is a machine learning paradigm in which a model performs a task without having seen any task-specific labeled examples.
Zero-shot Prompt
A zero-shot prompt is a prompt that is designed to guide an AI model to perform a specific task without any labeled examples.