Published June 4, 2023 | Version v1
Conference paper

Understandable ReLU Neural Network for Signal Classification

Description

ReLU neural networks suffer from poor explainability because they partition the input space into a large number of polyhedra. This paper proposes a constrained neural network model that replaces these polyhedra with orthotopes (axis-aligned boxes): each hidden neuron processes only a single component of the input signal. When the number of hidden neurons is large, we show that our neural network is equivalent to a logistic regression whose input is a non-linear transformation of the processed signal. Hence, the training of our neural network always converges to a unique solution. Numerical simulations show that the loss of performance with respect to state-of-the-art methods is negligible, even though our neural network is strongly constrained for robustness and explainability.
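The architectural constraint described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the dimensions, the per-neuron component assignment `assign`, and all weight names are hypothetical. The key point it shows is that each hidden ReLU unit reads a single coordinate of the input, so every decision-boundary piece is axis-aligned and the induced input-space partition consists of orthotopes rather than general polyhedra.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: a d-dimensional signal and H hidden neurons.
d, H = 4, 16

# Constraint: each hidden neuron is assigned exactly one input component,
# so its activation depends on a single coordinate of x.
assign = rng.integers(0, d, size=H)   # component index per hidden neuron
w = rng.normal(size=H)                # one scalar weight per hidden neuron
b = rng.normal(size=H)                # one bias per hidden neuron
v = rng.normal(size=H)                # logistic output-layer weights
c = 0.0                               # output bias

def forward(x):
    """Constrained ReLU network: hidden unit i sees only x[assign[i]].

    Each ReLU kink lies on a hyperplane of the form x_j = -b_i / w_i,
    i.e. axis-aligned, so the linear regions are orthotopes.
    """
    h = relu(w * x[assign] + b)       # per-neuron scalar affine map + ReLU
    return sigmoid(v @ h + c)         # logistic output: p(class 1 | x)

x = rng.normal(size=d)
p = forward(x)
```

Seen this way, `h` is exactly the "non-linear transformation of the processed signal" mentioned in the description, and the output layer is a plain logistic regression on top of it.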

Audience

International audience

Additional details

Identifiers

URL
https://hal.science/hal-04195962
URN
urn:oai:HAL:hal-04195962v1

Origin repository

UNICA