Finite state space hidden Markov models are flexible tools to model phenomena with complex time dependencies: any process distribution can be approximated by a hidden Markov model with enough hidden states. We consider the problem of estimating an unknown process distribution using nonparametric hidden Markov models in the misspecified setting,...
July 9, 2018 (v1) Publication
Uploaded on: December 4, 2022
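As a quick illustration of how such a model assigns likelihood to data, here is a minimal sketch of the scaled forward algorithm for a finite-state HMM; the function name and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under a
    finite-state HMM, computed with the scaled forward algorithm.
    pi: (K,) initial distribution, A: (K, K) transition matrix,
    B: (K, M) emission probabilities (all illustrative)."""
    alpha = pi * B[:, obs[0]]          # joint prob. of state and first obs.
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()               # rescale to avoid underflow
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]  # propagate, then weight by emission
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

# toy 2-state, 2-symbol example
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
print(hmm_loglik([0, 0, 1, 1], pi, A, B))
```

Enlarging the number of hidden states K enlarges the family of process distributions the model can represent, which is the approximation property the abstract refers to.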
September 14, 2023 (v1) Journal article
We consider the situation in which cooperating agents learn to achieve a common goal based solely on a global return that results from all agents' behavior. The method proposed is based on taking into account the agents' activity, which can be any additional information that helps solve multi-agent decentralized learning problems. We propose a...
Uploaded on: November 25, 2023
June 29, 2024 (v1) Publication
No description
Uploaded on: July 2, 2024
September 12, 2024 (v1) Publication
In reinforcement learning, credit assignment with history-dependent rewards is a key problem to solve in order to model agents: (i) associating the returns from their environment with their past (series of) actions, and (ii) figuring out which past decisions are responsible for the current achievement of their goal. Usual approaches...
Uploaded on: September 17, 2024
September 12, 2024 (v1) Publication
No description
Uploaded on: September 19, 2024
February 13, 2021 (v1) Publication
This paper considers the deconvolution problem in the case where the target signal is multidimensional and no information is known about the noise distribution. More precisely, no assumption is made on the noise distribution and no samples are available to estimate it: the deconvolution problem is solved based only on the corrupted signal...
Uploaded on: December 4, 2022
April 19, 2023 (v1) Publication
We consider noisy observations of a distribution with unknown support. In the deconvolution model, it has been proved recently [19] that, under very mild assumptions, it is possible to solve the deconvolution problem without knowing the noise distribution and with no sample of the noise. We first give general settings where the theory applies...
Uploaded on: April 22, 2023
April 27, 2023 (v1) Publication
When fitting the learning data of an individual to algorithm-like learning models, the observations are so dependent and non-stationary that one may wonder what the classical Maximum Likelihood Estimator (MLE) could do, even though it is the usual tool in experimental cognition. Our objective in this work is to show that the estimation of...
Uploaded on: October 14, 2023
September 3, 2024 (v1) Publication
Recent advances have demonstrated the possibility of solving the deconvolution problem without prior knowledge of the noise distribution. In this paper, we study the repeated measurements model, where information is derived from multiple measurements of X perturbed independently by additive errors. Our contributions include establishing...
Uploaded on: September 7, 2024
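A toy numpy sketch of why repeated measurements carry distribution-free information about the noise: with two measurements Y1 = X + e1 and Y2 = X + e2 of the same X, and errors independent of each other and of X, Cov(Y1, Y2) = Var(X), so second-order properties of X are recoverable without knowing the error distribution. All distributions below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.exponential(scale=1.0, size=n)   # latent signal X, Var(X) = 1
y1 = x + rng.normal(0, 0.5, size=n)      # two measurements of the same X,
y2 = x + rng.normal(0, 0.5, size=n)      # corrupted by independent errors

# With independent additive errors, Cov(Y1, Y2) = Var(X):
var_x_hat = np.cov(y1, y2)[0, 1]
print(var_x_hat)  # close to 1
```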
May 2024 (v1) Publication
We prove oracle inequalities for a penalized log-likelihood criterion that hold even if the data are not independent and not stationary, based on a martingale approach. The assumptions are checked for various contexts: density estimation with independent and identically distributed (i.i.d.) data, hidden Markov models, spiking neural networks,...
Uploaded on: October 12, 2024
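As a stylized example of a penalized log-likelihood criterion for model selection, the sketch below picks the number of histogram bins for i.i.d. data by maximizing log-likelihood minus a BIC-style penalty. The paper's criterion and assumptions are more general (dependent, non-stationary data), and the penalty constant here is an arbitrary illustrative choice.

```python
import numpy as np

def penalized_loglik_select(x, max_bins=20, c=0.5):
    """Pick a histogram model by maximizing log-likelihood minus a
    penalty proportional to model dimension (illustrative BIC-style
    penalty, not the paper's)."""
    n = len(x)
    best_m, best_crit = None, -np.inf
    for m in range(1, max_bins + 1):
        counts, edges = np.histogram(x, bins=m)
        widths = np.diff(edges)
        p = counts / n
        mask = counts > 0                     # skip empty bins in the sum
        ll = np.sum(counts[mask] * np.log(p[mask] / widths[mask]))
        crit = ll - c * m * np.log(n)         # penalized log-likelihood
        if crit > best_crit:
            best_m, best_crit = m, crit
    return best_m

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(penalized_loglik_select(x))
```

The penalty balances fit against complexity: with no penalty the criterion always prefers the largest model, while too large a penalty collapses to a single bin.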
September 25, 2022 (v1) Publication
In this paper, we consider a bivariate process (X_t, Y_t)_{t ∈ Z} which, conditionally on a signal (W_t)_{t ∈ Z}, is a hidden Markov model whose transition and emission kernels depend on (W_t)_{t ∈ Z}. The resulting process (X_t, Y_t, W_t)_{t ∈ Z} is referred to as an input-output hidden Markov model or a hidden Markov model with external signals. We prove that this...
Uploaded on: December 3, 2022
February 15, 2021 (v1) Publication
We propose a novel self-supervised blind image denoising approach in which two neural networks jointly predict the clean signal and infer the noise distribution. Assuming that the noisy observations are conditionally independent given the signal, the networks can be jointly trained without clean training data. Therefore, our approach is...
Uploaded on: December 4, 2022
July 3, 2024 (v1) Publication
No description
Uploaded on: September 27, 2024