Our work is presented in three separate parts, which can be read independently. First, we propose three active learning heuristics that scale to deep neural networks: we scale query by committee, an ensemble active learning method. We speed up the computation by sampling a committee of deep networks through dropout applied to the trained...
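The abstract above describes sampling a committee from a single trained network by applying dropout at inference time. Below is a minimal, hedged sketch of that idea using only numpy: the tiny two-layer "trained" classifier, its weights, and the vote-entropy disagreement score are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend-trained weights for a 2-layer classifier: 4 inputs -> 8 hidden -> 3 classes.
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))

def committee_votes(x, n_members=20, p_drop=0.5):
    """One forward pass per committee member, each with a fresh dropout mask."""
    votes = []
    for _ in range(n_members):
        h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop      # inference-time dropout on hidden units
        votes.append((h * mask) @ W2)            # this member's logits
    return np.argmax(np.stack(votes), axis=-1)   # shape: (n_members, n_samples)

def vote_entropy(x):
    """Disagreement score per sample: entropy of the committee's class votes."""
    votes = committee_votes(x)
    scores = []
    for col in votes.T:                          # votes for one unlabeled sample
        p = np.bincount(col, minlength=3) / len(col)
        scores.append(-np.sum(p[p > 0] * np.log(p[p > 0])))
    return np.array(scores)

pool = rng.normal(size=(10, 4))                  # unlabeled pool
query_idx = int(np.argmax(vote_entropy(pool)))   # query the most disagreed-upon point
```

In a real deep-learning setting the loop would simply keep the framework's dropout layers active at prediction time, which is far cheaper than training an explicit ensemble.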
-
December 12, 2018 (v1) Publication
Uploaded on: December 4, 2022
-
April 30, 2018 (v1) Conference paper
The Wasserstein distance has recently received a lot of attention in the machine learning community, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction, and generative models. However, its use is still limited by a heavy...
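As a concrete illustration of the distance this abstract refers to, here is a minimal numpy sketch of the 1-D Wasserstein-1 distance between two equal-size empirical samples; in that special case the optimal transport plan matches order statistics, so the distance reduces to a mean absolute difference. The function name and data are illustrative, not from the paper.

```python
import numpy as np

def wasserstein_1d(x, y):
    """Wasserstein-1 distance between two equal-size 1-D empirical samples.

    For sorted samples of equal size, the optimal transport plan pairs the
    i-th smallest point of x with the i-th smallest point of y, so the
    distance is the mean absolute difference of the order statistics.
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "this sketch assumes equal sample sizes"
    return float(np.mean(np.abs(x - y)))

# Shifting every sample point by a constant c moves the distribution by exactly c.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # → 1.0
```

For distributions in higher dimensions or with unequal weights, one would instead solve the full optimal transport problem, which is exactly the "heavy" computation the abstract alludes to.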
Uploaded on: December 4, 2022
-
2017 (v1) Book section
International audience
Uploaded on: February 28, 2023
-
June 7, 2016 (v1) Conference paper
Author identification and text genesis have always been hot topics in the statistical analysis of textual data community. Recent advances in machine learning have seen the emergence of machines competing with state-of-the-art computational linguistics methods on specific natural language processing tasks (part-of-speech tagging, chunking and...
Uploaded on: February 28, 2023
-
July 15, 2018 (v1) Conference paper
In this paper, we propose a new strategy, called Text Deconvolution Saliency (TDS), to visualize the linguistic information detected by a CNN for text classification. We extend Deconvolution Networks to text in order to offer the linguistic community a new perspective on text analysis. We empirically demonstrate the efficiency of our Text...
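The following is a hedged, numpy-only sketch in the spirit of the deconvolution idea above: run a 1-D convolution over word embeddings, then project the activations back through the transposed filters to obtain a saliency score per word. The toy embeddings, filter weights, and the norm-based scoring are illustrative assumptions, not the authors' exact TDS method.

```python
import numpy as np

rng = np.random.default_rng(1)

n_words, emb_dim, n_filters, width = 6, 5, 4, 3
E = rng.normal(size=(n_words, emb_dim))            # word embeddings for one sentence
F = rng.normal(size=(n_filters, width, emb_dim))   # pretend-trained conv filters

def conv_forward(E, F):
    """Valid 1-D convolution over word positions with ReLU activations."""
    n_pos = E.shape[0] - F.shape[1] + 1
    A = np.zeros((F.shape[0], n_pos))
    for f in range(F.shape[0]):
        for t in range(n_pos):
            A[f, t] = np.sum(F[f] * E[t:t + F.shape[1]])
    return np.maximum(A, 0.0)

def word_saliency(E, F):
    """Deconvolve: send each activation back through its own filter
    (a transposed convolution), then score each word by the norm of
    the reconstructed signal at its position."""
    A = conv_forward(E, F)
    R = np.zeros_like(E)
    for f in range(F.shape[0]):
        for t in range(A.shape[1]):
            R[t:t + F.shape[1]] += A[f, t] * F[f]  # project activation back
    return np.linalg.norm(R, axis=1)               # one saliency score per word

scores = word_saliency(E, F)                       # shape: (n_words,)
```

Words with high scores are those whose embedding positions receive the largest reconstructed signal, a rough proxy for the linguistic features the convolutional filters respond to.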
Uploaded on: December 4, 2022