Published on Sat Sep 19 2020

Simplifying Reinforced Feature Selection via Restructured Choice Strategy of Single Agent

Xiaosa Zhao, Kunpeng Liu, Wei Fan, Lu Jiang, Xiaowei Zhao, Minghao Yin, Yanjie Fu

Feature selection aims to select a subset of features to optimize downstream predictive tasks. Multi-agent reinforced feature selection (MARFS) has been introduced to automate feature selection by creating an agent for each feature. We develop a single-agent reinforced feature selection approach integrated with a restructured choice strategy.

Abstract

Feature selection aims to select a subset of features to optimize the performance of downstream predictive tasks. Recently, multi-agent reinforced feature selection (MARFS) has been introduced to automate feature selection by creating an agent for each feature to select or deselect it. Although MARFS automates the selection process, it suffers not only from data complexity in terms of contents and dimensionality, but also from computational costs that increase exponentially with the number of agents. This concern raises a new research question: can we simplify the selection process of agents in the reinforcement learning context so as to improve the efficiency and reduce the costs of feature selection? To address this question, we develop a single-agent reinforced feature selection approach integrated with a restructured choice strategy. Specifically, the restructured choice strategy includes: 1) we exploit only one single agent to handle the selection task of multiple features, instead of using multiple agents; 2) we develop a scanning method that empowers the single agent to make multiple selection/deselection decisions in each round of scanning; 3) we exploit the relevance of features to the predictive labels to prioritize the scanning order of the agent over the features; 4) we propose a convolutional auto-encoder algorithm, integrated with the encoded index information of features, to improve state representation; 5) we design a reward scheme that takes into account both prediction accuracy and feature redundancy to facilitate the exploration process. Finally, we present extensive experimental results to demonstrate the efficiency and effectiveness of the proposed method.
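The five components above can be sketched end-to-end in a toy form. This is a minimal illustration under assumed toy numbers, not the authors' implementation: the learned single-agent policy and the convolutional auto-encoder state representation are replaced by a simple epsilon-greedy perturbation of the best subset found so far, and downstream prediction accuracy is replaced by a relevance-minus-redundancy proxy. All feature names and scores below are hypothetical.

```python
import random

random.seed(0)

# Toy setup (illustrative numbers, not from the paper): per-feature
# relevance to the label, and a pairwise redundancy penalty.
relevance = {"f1": 0.9, "f2": 0.7, "f3": 0.2, "f4": 0.1}
redundancy = {frozenset(("f1", "f2")): 0.8}  # f1 and f2 largely overlap

def reward(subset):
    """Stand-in for the reward scheme: accuracy proxy minus redundancy."""
    rel = sum(relevance[f] for f in subset)
    red = sum(v for pair, v in redundancy.items() if pair <= set(subset))
    return rel - red

# Scanning order prioritized by relevance to the predictive label.
order = sorted(relevance, key=relevance.get, reverse=True)

best, best_r = [], float("-inf")
for episode in range(200):
    # One round of scanning: the single agent visits each feature in order
    # and makes a select/deselect decision. A trained policy is replaced
    # here by flipping each decision of the incumbent subset w.p. 0.2.
    subset = [f for f in order if (random.random() < 0.2) ^ (f in best)]
    r = reward(subset)
    if r > best_r:
        best, best_r = subset, r
```

The XOR flip keeps the incumbent subset with probability 0.8 per feature, giving a crude exploration/exploitation balance; in the paper this role is played by the agent's learned policy over the auto-encoded state.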

Fri Oct 02 2020
Machine Learning
Interactive Reinforcement Learning for Feature Selection with Decision Tree in the Loop
We study the problem of balancing effectiveness and efficiency in automated feature selection. We propose a novel interactive and closed-loop architecture to simultaneously model interactive reinforcement learning (IRL) and decision tree feedback (DTF). In particular, the tree-structured feature hierarchy from the decision tree is leveraged to improve state representation.
Thu Aug 27 2020
Artificial Intelligence
AutoFS: Automated Feature Selection via Diversity-aware Interactive Reinforcement Learning
Feature selection is a fundamental technique for machine learning and predictive analysis. Traditional feature selection methods (e.g., mRMR) are mostly efficient, but struggle to identify the best feature subset. We propose an Interactive Reinforced Feature Selection (IRFS) framework that guides agents.
Tue Mar 03 2020
Machine Learning
Can Increasing Input Dimensionality Improve Deep Reinforcement Learning?
Deep reinforcement learning (RL) algorithms have recently achieved remarkable successes in various sequential decision making tasks. One natural question to ask is whether learning good representations for states and using larger networks helps in learning better policies. To answer this question, we propose an online feature extractor network (OFENet).
Mon Sep 18 2017
Machine Learning
Why Pay More When You Can Pay Less: A Joint Learning Framework for Active Feature Acquisition and Classification
We consider the problem of active feature acquisition, where we sequentially select the subset of features in order to achieve the maximum prediction performance in the most cost-effective way. We formulate this active feature acquisition problem as a reinforcement learning problem, and provide a novel framework for jointly learning both the RL
Thu Oct 05 2017
Artificial Intelligence
Exploration in Feature Space for Reinforcement Learning
The infamous exploration-exploitation dilemma is one of the oldest and most important problems in reinforcement learning. We present a new method for computing a generalized state visit-count, which allows the agent to estimate the uncertainty associated with any state. This method is simpler and less computationally expensive than previous proposals.
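A generic count-based exploration bonus conveys the idea behind visit-count methods like the one above. The sketch below is a standard textbook form, not that paper's specific generalized visit-count: states are mapped through a hypothetical feature abstraction `phi`, and the optimism bonus shrinks as the count for the abstracted state grows.

```python
from collections import Counter
from math import sqrt

visit_counts = Counter()

def phi(state):
    """Hypothetical feature abstraction: coarse discretization of a 2-D state,
    so that nearby states share one visit count."""
    x, y = state
    return (round(x, 1), round(y, 1))

def exploration_bonus(state, beta=0.5):
    """Optimism bonus that decays as the generalized visit count grows."""
    visit_counts[phi(state)] += 1
    return beta / sqrt(visit_counts[phi(state)])
```

Because `phi` buckets nearby states together, a second visit to a similar state already receives a smaller bonus, which is the uncertainty-estimation effect the snippet above describes.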
Mon Nov 02 2020
Artificial Intelligence
Reinforcement Learning with Efficient Active Feature Acquisition
Solving real-life sequential decision making problems under partial observability involves an exploration-exploitation problem. We propose a model-based reinforcement learning framework that learns an active feature acquisition policy. Key to the success is a novel sequential variational auto-encoder.