WikiText
Text
NLP
License: CC BY-SA 3.0

Overview

The WikiText language modeling dataset is a collection of over 100 million tokens extracted
from the set of verified Good and Featured articles on Wikipedia.
Compared to the preprocessed
version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over
110 times larger. The WikiText dataset also features a far larger vocabulary and retains the
original case, punctuation, and numbers, all of which are removed in PTB. As it is composed
of full articles, the dataset is well suited for models that can take advantage of long-term
dependencies.
In comparison to the Mikolov-processed version of the Penn Treebank (PTB),
the WikiText datasets are larger. WikiText-2 aims to be of a similar size to the PTB, while
WikiText-103 contains all articles extracted from Wikipedia. The WikiText datasets also retain
numbers (as opposed to replacing them with N), case (as opposed to all text being lowercased),
and punctuation (as opposed to stripping it out).
[Table: dataset statistics]
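Since each split of the released archives is a plain text file of pre-tokenized articles, rough token counts such as those above can be reproduced in a few lines of Python. The sketch below is a minimal example, assuming the standard layout of the downloaded archive (for example wikitext-2/wiki.train.tokens); the local path is hypothetical and should point at wherever the archive was extracted.

# Minimal sketch: count whitespace-separated tokens in each WikiText split.
# Assumes the standard file layout of the released archives; adjust the path as needed.
from pathlib import Path

def count_tokens(path: Path) -> int:
    """Count whitespace-separated tokens in one WikiText split file."""
    total = 0
    with path.open(encoding="utf-8") as f:
        for line in f:
            total += len(line.split())
    return total

if __name__ == "__main__":
    root = Path("wikitext-2")  # hypothetical local path to the extracted dataset
    for split in ("train", "valid", "test"):
        split_file = root / f"wiki.{split}.tokens"
        if split_file.exists():
            print(split, count_tokens(split_file), "tokens")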

Data Collection

We selected only articles fitting the Good or Featured article criteria specified by editors
on Wikipedia. These articles have been reviewed by humans and are considered well written,
factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in
23,805 Good articles and 4,790 Featured articles. The text for each article was extracted using
the Wikipedia API. Extracting the raw text from Wikipedia mark-up is nontrivial due to the
large number of macros in use. These macros are used extensively and include metric conversion,
abbreviations, language notation, and date handling.
Once extracted, specific sections which primarily featured lists were removed by default. Other
minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed.
Mathematical formulae and LaTeX code were replaced with 〈formula〉 tokens. Normalization and
tokenization were performed using the Moses tokenizer, slightly augmented to further split
numbers (8,600 → 8 @,@ 600) and with some additional minor fixes. A vocabulary was constructed
by discarding all words with a count below 3. Words outside of the vocabulary were mapped to
the 〈unk〉 token, which is also part of the vocabulary.
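The preprocessing described above can be illustrated with a short Python sketch. This is not the authors' released script: it only mimics the number-splitting convention (8,600 → 8 @,@ 600), the count-based vocabulary cutoff, and the mapping of out-of-vocabulary words to the unknown token (written literally as <unk> in the released files); Moses normalization and tokenization are assumed to have already been applied.

# Illustrative sketch of the preprocessing steps described above (not the released script).
import re
from collections import Counter

def split_numbers(text: str) -> str:
    # 8,600 -> 8 @,@ 600 and 3.14 -> 3 @.@ 14, the marker convention used by WikiText.
    text = re.sub(r"(?<=\d),(?=\d)", " @,@ ", text)
    text = re.sub(r"(?<=\d)\.(?=\d)", " @.@ ", text)
    return text

def build_vocab(tokenized_lines, min_count=3):
    # Keep only words seen at least min_count times; <unk> is part of the vocabulary.
    counts = Counter(tok for line in tokenized_lines for tok in line)
    vocab = {tok for tok, c in counts.items() if c >= min_count}
    vocab.add("<unk>")
    return vocab

def map_oov(line, vocab):
    # Replace out-of-vocabulary words with the <unk> token.
    return [tok if tok in vocab else "<unk>" for tok in line]

if __name__ == "__main__":
    raw = ["The bridge carried 8,600 cars per day .", "The bridge was rebuilt ."]
    lines = [split_numbers(s).split() for s in raw]  # Moses tokenization omitted in this demo
    vocab = build_vocab(lines, min_count=2)          # lower cutoff than 3 for this tiny example
    print(map_oov(lines[0], vocab))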

Citation

Please use the following citation when referencing the dataset:

@article{DBLP:journals/corr/MerityXBS16,
  author    = {Stephen Merity and
               Caiming Xiong and
               James Bradbury and
               Richard Socher},
  title     = {Pointer Sentinel Mixture Models},
  journal   = {CoRR},
  volume    = {abs/1609.07843},
  year      = {2016},
  url       = {http://arxiv.org/abs/1609.07843},
  archivePrefix = {arXiv},
  eprint    = {1609.07843},
  timestamp = {Thu, 21 Mar 2019 11:19:44 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/MerityXBS16.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

License

CC BY-SA 3.0

Data Summary
Type: Text
Amount: --
Size: 373.28 MB
Provided by: Salesforce
Salesforce is a customer relationship management solution that brings customers and companies together.