Published on Sun Mar 07 2021

Expert System Gradient Descent Style Training: Development of a Defensible Artificial Intelligence Technique

Jeremy Straub
Abstract

Artificial intelligence systems, which are designed with a capability to learn from the data presented to them, are used throughout society. These systems are used to screen loan applicants, make sentencing recommendations for criminal defendants, scan social media posts for disallowed content, and more. Because these systems do not assign meaning to their complex learned correlation networks, they can learn associations that do not equate to causality, resulting in suboptimal and indefensible decisions. Beyond making suboptimal decisions, these systems may create legal liability for their designers and operators by learning correlations that violate anti-discrimination and other laws governing what factors can be used in different types of decision making. This paper presents the use of a machine learning expert system, which is developed with meaning-assigned nodes (facts) and correlations (rules). Multiple potential implementations are considered and evaluated under different conditions, including different network error and augmentation levels and different training levels. The performance of these systems is compared to random and fully connected networks.
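The abstract above does not spell out the training mechanics, so the sketch below is one illustrative reading of the idea (the two-input rule structure, the weight-normalization scheme, and the update rule are assumptions, not the paper's exact formulation): facts carry values, rules carry meaning-assigned weights, and training adjusts those weights gradient-descent style to reduce error at a target fact.

```python
import random

# Illustrative sketch of gradient-descent-style expert system training
# (the two-input rule structure and the update rule are assumptions
# based on the abstract, not the paper's exact formulation).

class Rule:
    """A meaning-assigned rule: combines two input facts into an output fact."""
    def __init__(self, in_a, in_b, out):
        self.in_a, self.in_b, self.out = in_a, in_b, out
        self.w_a = random.random()   # relative contribution of in_a
        self.w_b = 1.0 - self.w_a    # the pair of weights sums to 1

    def apply(self, facts):
        facts[self.out] = self.w_a * facts[self.in_a] + self.w_b * facts[self.in_b]

def train(rules, facts, target, target_value, lr=0.1, epochs=200):
    """Nudge rule weights so the target fact approaches target_value."""
    for _ in range(epochs):
        for r in rules:
            r.apply(facts)
        error = facts[target] - target_value
        for r in rules:
            if r.out == target:
                # Gradient of 0.5 * error**2 w.r.t. w_a is error * (a - b);
                # step against it, then clamp and renormalize the pair.
                r.w_a -= lr * error * (facts[r.in_a] - facts[r.in_b])
                r.w_a = min(1.0, max(0.0, r.w_a))
                r.w_b = 1.0 - r.w_a

# Example with hypothetical fact names: learn how strongly each input
# fact should drive the decision fact.
facts = {"income_ok": 0.9, "debt_ok": 0.2, "approve_loan": 0.0}
train([Rule("income_ok", "debt_ok", "approve_loan")],
      facts, "approve_loan", target_value=0.8)
```

Because every node and weight keeps its assigned meaning, the trained network can be inspected rule by rule, which is what makes the technique defensible in the sense the abstract describes.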

Thu Oct 15 2020
Machine Learning
Neograd: Near-Ideal Gradient Descent
NeogradM is shown to outperform Adam on several test problems, easily reaching substantially smaller cost function values.
Sun Jun 24 2012
Machine Learning
Practical recommendations for gradient-based training of deep architectures
This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters. Overall, it describes elements of the practice used to successfully and efficiently train and debug large-scale and often deep neural networks.
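As a quick illustration of the kind of knobs the chapter covers, a minimal SGD-with-momentum step might expose them as below; the default values are common starting points offered as assumptions, not the chapter's prescriptions.

```python
import numpy as np

# Minimal SGD-with-momentum step exposing the usual hyper-parameters;
# the defaults are common starting points (assumptions, not the
# chapter's recommendations).

def sgd_momentum_step(params, grads, velocity,
                      learning_rate=0.01,   # often the single most important knob
                      momentum=0.9,         # classical momentum coefficient
                      weight_decay=1e-4):   # L2 regularization strength
    for k in params:
        g = grads[k] + weight_decay * params[k]
        velocity[k] = momentum * velocity[k] - learning_rate * g
        params[k] += velocity[k]
    return params, velocity

# Example: one step on a single weight matrix
params = {"W": np.ones((2, 2))}
velocity = {"W": np.zeros((2, 2))}
grads = {"W": 0.5 * np.ones((2, 2))}
params, velocity = sgd_momentum_step(params, grads, velocity)
```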
Sat Jun 20 2020
Machine Learning
Blind Descent: A Prequel to Gradient Descent
Sat Jan 27 2018
Machine Learning
Gradient descent revisited via an adaptive online learning rate
Any gradient descent optimization requires choosing a learning rate. With ever deeper models, tuning that learning rate can easily become tedious. We propose a variation of the gradient descent algorithm in which the learning rate is not fixed.
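The summary does not describe the adaptation rule, so the sketch below substitutes a simple "bold driver" heuristic to show what an online, non-fixed learning rate can look like: the rate grows while the loss keeps falling and shrinks after an overshoot.

```python
# Sketch of an online-adapted learning rate using a "bold driver"
# heuristic (a stand-in for the paper's method, which this summary
# does not describe): grow the rate on progress, shrink it on overshoot.

def adaptive_gd(grad, loss, x, lr=0.1, steps=100, grow=1.05, shrink=0.5):
    prev = loss(x)
    for _ in range(steps):
        candidate = x - lr * grad(x)
        cur = loss(candidate)
        if cur < prev:                 # progress: accept step, raise rate
            x, prev, lr = candidate, cur, lr * grow
        else:                          # overshoot: reject step, lower rate
            lr *= shrink
    return x

# Example: minimize f(x) = (x - 3)^2, starting far from the minimum
x_min = adaptive_gd(grad=lambda x: 2 * (x - 3),
                    loss=lambda x: (x - 3) ** 2,
                    x=20.0)
```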
Tue Jun 11 2019
Machine Learning
Power Gradient Descent
The development of machine learning is promoting the search for fast and stable minimization algorithms. We suggest a change in the current gradient descent methods that should speed up the motion in flat regions and slow it down in steep directions of the function to minimize.
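One way to read that idea (an illustrative guess, not the paper's actual algorithm) is to rescale each gradient component by a power p < 1 of its magnitude, which enlarges small gradients in flat regions and damps large ones in steep directions:

```python
import numpy as np

# Illustrative power-rescaled gradient step (an assumed reading of the
# abstract, not the paper's exact method): |g|**p with p < 1 magnifies
# small gradient components and damps large ones.

def power_gd(grad, x0, lr=0.05, p=0.5, steps=300):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        x = x - lr * np.sign(g) * np.abs(g) ** p
    return x

# Example: a quadratic that is steep along x[0] and nearly flat along x[1];
# the power rescaling moves faster than plain SGD in the flat direction.
x_min = power_gd(lambda x: np.array([20.0 * x[0], 0.02 * x[1]]),
                 x0=[1.0, 1.0])
```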
Tue Mar 02 2021
Machine Learning
Categorical Foundations of Gradient-Based Learning
We propose a categorical foundation of gradient-based machine learning algorithms. This includes lenses, parametrised maps, and reverse derivative categories. It encompasses a variety of gradient descent algorithms such as AdaGrad and Nesterov momentum.
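To make the lens picture concrete (the encoding below is a rough sketch of one common reading, not the paper's categorical construction), a layer can be packaged as a forward map paired with a reverse derivative, and a gradient step threads an error signal back through it:

```python
# Rough sketch of the lens view of gradient-based learning (one common
# reading of the abstract, not the paper's construction): a layer is a
# forward map paired with a reverse derivative.

class Lens:
    def __init__(self, fwd, rev):
        self.fwd = fwd   # forward: (param, x) -> y
        self.rev = rev   # backward: (param, x, dy) -> (dparam, dx)

# A scalar linear layer y = w * x with its reverse derivative
linear = Lens(fwd=lambda w, x: w * x,
              rev=lambda w, x, dy: (dy * x, dy * w))

def gd_step(lens, w, x, target, lr=0.1):
    """One gradient descent step for the squared loss (y - target)**2."""
    y = lens.fwd(w, x)
    dw, _dx = lens.rev(w, x, 2.0 * (y - target))
    return w - lr * dw

w = 0.0
for _ in range(50):
    w = gd_step(linear, w, x=2.0, target=6.0)   # learns w close to 3
```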