AI glossary of terminology

LAST week's article explored the various pathways available for studying Artificial Intelligence (AI). Perhaps the title did not quite capture the essence of the content; if you are keen on pursuing AI studies, the piece is worth another read. Click on the link or search Google for: The Independent AI Naison Bangure.

This week, we turn to the terminology we encounter while navigating the realm of AI. I tasked my team of 12 AI chatbot assistants with searching thoroughly and compiling a comprehensive AI glossary of terms.

Given that AI is a rapidly evolving field, new terms are constantly emerging. Here is our compilation of findings:

A/B testing: A controlled experiment to compare two variants of a system or model.
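
For readers who enjoy a little code, here is a minimal Python sketch of the idea, using made-up conversion counts and assuming the statsmodels library is available:

    # A/B test sketch: compare the conversion rates of two hypothetical variants
    # with a two-proportion z-test (the counts below are invented for illustration).
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [120, 145]    # successes for variant A and variant B
    visitors = [2400, 2380]     # visitors shown variant A and variant B

    z_stat, p_value = proportions_ztest(conversions, visitors)
    print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # a small p-value suggests a real difference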

Activation function: In neural networks, a function that generates output from the weighted sum of inputs from the previous layer.
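
As a small Python illustration (the inputs, weights and bias below are invented), an activation function such as ReLU is applied to the weighted sum to produce the node's output:

    # Activation function sketch: ReLU applied to a node's weighted sum of inputs.
    import numpy as np

    def relu(x):
        return np.maximum(0, x)

    inputs = np.array([0.5, -1.2, 3.0])
    weights = np.array([0.4, 0.1, 0.3])
    bias = -0.2

    weighted_sum = np.dot(inputs, weights) + bias
    print(relu(weighted_sum))  # the value passed on to the next layer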

Active learning: A type of semi-supervised learning where a model can query an oracle for labels on new data points.

AI bias: The presence of unfair, discriminatory, or prejudiced outcomes in AI systems, often resulting from biased training data or algorithms.

AI ethics: The study of ethical issues related to the development and deployment of artificial intelligence technologies.

AI explainability: The ability to understand and interpret the decisions and predictions made by AI systems, especially in critical or sensitive applications.

Algorithm: A step-by-step set of instructions a computer follows to perform a task, such as a calculation or data analysis.

Annotation: Metadata attached to data, typically provided by humans.

Artificial intelligence (AI): The simulation of human intelligence in machines to enable them to learn, reason, and solve problems.

Artificial neural network: A computing system inspired by biological neural networks, with interconnected nodes that process data.

Auto-encoder: An unsupervised neural network that learns efficient data representations by compressing inputs into a lower-dimensional form and then reconstructing them.

Automated speech recognition (ASR): Technology that transcribes spoken language into text.

Benchmark: A standardised test to evaluate and compare the performance of AI systems.

Bias: Assumptions made by an AI model about the data, which can lead to unfair or inaccurate results.

Chatbot: A computer programme that simulates human conversation using natural language processing.

Compute: The computational resources used to train or run AI models.

Computer vision: A field of AI that enables computers to interpret and understand visual information from the real world, such as images and videos.

Convolutional Neural Network (CNN): A deep learning model that processes grid-like data such as images.

Cross-validation: A technique to evaluate a model's ability to generalise by training on a subset of data and testing on the remaining data.
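
A minimal scikit-learn sketch, assuming the library is installed: the data is split into five folds, each used once as the held-out test set.

    # Cross-validation sketch: five folds, average accuracy reported.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, X, y, cv=5)
    print(scores.mean())  # mean accuracy across the five folds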

Data augmentation: The process of creating new training data from existing data through techniques like cropping, flipping, or adding noise.
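
A short NumPy sketch of the idea, using a random array as a stand-in for a real training image:

    # Data augmentation sketch: create extra training examples from one image.
    import numpy as np

    image = np.random.rand(32, 32)                            # stand-in for a real image
    flipped = np.fliplr(image)                                # horizontal flip
    noisy = image + np.random.normal(0, 0.05, image.shape)    # add a little Gaussian noise
    augmented_batch = [image, flipped, noisy]                 # three examples from one original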

Data mining: The process of discovering patterns and extracting useful information from large datasets.

Decision tree: A supervised learning algorithm that makes decisions based on a series of branched nodes and leaf nodes.
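
A brief scikit-learn sketch, assuming the library is installed and using its built-in iris dataset:

    # Decision tree sketch: learn branching rules from labelled examples.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(tree.predict(X[:5]))  # predicted classes for the first five flowers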

Deep learning: A subset of machine learning based on artificial neural networks with multiple layers.

Deep reinforcement learning: A type of reinforcement learning that uses deep neural networks as the function approximator.

Dimensionality reduction: The process of reducing the number of features in a dataset while retaining most of the relevant information.
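
One common example is principal component analysis (PCA), which can squeeze four features into two; a short scikit-learn sketch, assuming the library is installed:

    # Dimensionality reduction sketch: PCA keeps most of the variation in fewer features.
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, _ = load_iris(return_X_y=True)        # 150 rows, 4 features
    X_2d = PCA(n_components=2).fit_transform(X)
    print(X_2d.shape)                        # (150, 2)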

Ensemble learning: A technique that combines multiple models to improve predictive performance.
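
A small scikit-learn sketch of the idea, assuming the library is installed, combines three different models by majority vote:

    # Ensemble learning sketch: three models vote on each prediction.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    ensemble = VotingClassifier([
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier()),
        ("nb", GaussianNB()),
    ]).fit(X, y)
    print(ensemble.score(X, y))  # accuracy of the combined vote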

Ethical AI: The practice of designing and deploying AI systems that are fair, transparent, and accountable, and that consider the societal impacts of their use.

Explainable AI (XAI): A field focused on making AI models transparent and their decisions interpretable to humans.

Feature engineering: The process of selecting, transforming, and creating features (input variables) to improve the performance of machine learning models.

Federated learning: A machine learning approach where models are trained on decentralised data across multiple devices or silos.

Generative Adversarial Network (GAN): A neural network architecture in which two networks, a generator and a discriminator, are trained against each other in a competitive process to produce realistic new data samples such as images or text.

Generative AI: AI systems that can create new content like text, images, audio, or code based on training data.

Hyper-parameter: A parameter whose value is set before the learning process begins, as opposed to being learned from the training data.

Large Language Model (LLM): A type of AI model trained on massive text data to generate human-like language.

Machine learning: A subset of AI that allows systems to learn from data and improve performance on a specific task.

Natural Language Processing (NLP): A field of AI focused on enabling computers to understand, process, and generate human language. 

Neural Network: A computational model inspired by the structure and function of the human brain, composed of interconnected nodes (neurons) organised in layers.

Overfitting: A modelling error where a model learns the training data too well, including noise, leading to poor generalisation.

Reinforcement learning: A type of machine learning where an agent learns to make decisions by interacting with an environment to maximise a reward signal.

Responsible AI: The practice of developing and deploying AI systems that are ethical, transparent, fair, robust, and respect privacy.

Semi-supervised learning: A combination of supervised and unsupervised learning, where a small amount of labelled data is combined with a large amount of unlabelled data.

Sentiment analysis: The process of determining the emotional tone or attitude behind text data.
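
A toy Python sketch of the idea, using invented word lists rather than a trained model:

    # Sentiment analysis sketch: a tiny lexicon-based scorer (illustrative only).
    positive = {"good", "great", "love", "excellent"}
    negative = {"bad", "poor", "hate", "terrible"}

    def sentiment(text):
        words = text.lower().split()
        score = sum(w in positive for w in words) - sum(w in negative for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("I love this phone, the camera is excellent"))  # positive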

Supervised learning: A type of machine learning where models are trained on labelled data to learn a mapping function from inputs to outputs.
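
A minimal scikit-learn sketch, assuming the library is installed; the 70/30 train/test split is illustrative:

    # Supervised learning sketch: fit on labelled data, then score on unseen data.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = KNeighborsClassifier().fit(X_train, y_train)
    print(model.score(X_test, y_test))  # accuracy on the held-out examples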

Transfer learning: A technique where a model trained on one task is repurposed for a different but related task.

Transformer: A neural network architecture built around attention mechanisms, widely used for processing sequential data like text or time series.

Underfitting: A modelling error where a model is too simple to capture the underlying patterns in the data, leading to poor performance.

Unsupervised learning: A type of machine learning where models are trained on unlabelled data to discover inherent patterns or groupings.
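
A short scikit-learn sketch of the idea, assuming the library is installed; k-means is asked for three clusters without ever seeing the labels:

    # Unsupervised learning sketch: k-means groups unlabelled points into clusters.
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    X, _ = load_iris(return_X_y=True)                     # labels deliberately ignored
    clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
    print(clusters[:10])                                  # cluster index for the first ten rows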

Zero-shot learning: A capability of some AI models to perform tasks without being explicitly trained on them, by leveraging knowledge from pre-training.

This glossary covers some of the key terms in the field of artificial intelligence, but there are many more specialised terms and concepts depending on specific areas of study or application.

  • Bangure has extensive experience in print and electronic media production and management.  He is also a filmmaker. — [email protected]

 
