Published June 3, 2014
| Version v1
Conference paper
GOOFRE version 2
- Creators
- Brunet, Etienne
- Vanni, Laurent
- Others:
- BCL, équipe Logométrie : corpus, traitements, modèles ; Bases, Corpus, Langage (UMR 7320 - UCA / CNRS) (BCL) ; Université Nice Sophia Antipolis (1965 - 2019) (UNS) ; COMUE Université Côte d'Azur (2015-2019) (COMUE UCA) ; Centre National de la Recherche Scientifique (CNRS) ; Université Côte d'Azur (UCA)
- Emilie Née
- Jean-Michel Daube
- Mathieu Valette
- Serge Fleury
Description
The amount of data contained within Google Books has doubled over the last two years and now exceeds 500 billion words. A new treatment of the data has included a re-examination of the scanned images, offering more accurate recognition of the text. In addition, for the first time, the included texts have been subjected to disambiguation and lemmatisation. Finally, the Culturomics website has made tools available that facilitate access to the data. It therefore seemed worthwhile to carry out a new assessment and to create a new database, complete with all the necessary statistical tools, available online or locally, for exploiting such large corpora.
Audience
International
Additional details
- URL
- https://hal.science/hal-01196595
- URN
- urn:oai:HAL:hal-01196595v1
- Origin repository
- UNICA