TIMIT
Audio
NLP
License: CC BY-NC-SA 4.0

Overview

The TIMIT corpus of read speech has been designed to provide the speech research community
with a standardized corpus for the acquisition of acoustic-phonetic knowledge and for the development
and evaluation of automatic speech recognition systems. The creation of any reasonably sized
speech corpus is very labor intensive. With this in mind, TIMIT was designed to balance
utility and manageability, containing small amounts of speech from a relatively diverse speaker
population and a range of phonetic environments. This section provides more detailed information
on the contents of TIMIT and on the division of the TIMIT speech material into subsets for
training and testing purposes.

TIMIT contains a total of 6300 utterances: 10 sentences spoken
by each of 630 speakers from 8 major dialect divisions of the United States. The 10 sentences
represent roughly 30 seconds of speech material per speaker. In total, the corpus contains
approximately 5 hours of speech. All speakers are native speakers of American English and were
judged by a professional speech pathologist to have no clinical speech pathologies.
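
As a quick check on those figures, the short Python sketch below reproduces the utterance count and the approximate total duration directly from the numbers quoted above; no corpus files are needed.

SPEAKERS = 630              # speakers across the 8 dialect regions
SENTENCES_PER_SPEAKER = 10  # sentences read by each speaker
SECONDS_PER_SPEAKER = 30    # roughly 30 s of speech per speaker

utterances = SPEAKERS * SENTENCES_PER_SPEAKER        # 6300 utterances
total_hours = SPEAKERS * SECONDS_PER_SPEAKER / 3600  # ~5.25 hours

print(utterances, round(total_hours, 2))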

Data Collection

The speakers were primarily TI personnel, many of whom were new to TI and the Dallas area.
They were selected to be representative of different geographical dialect regions of the U.S.
A speaker's dialect region was defined as the geographical area of the U.S. where the speaker
lived during childhood (ages 2 to 10). The geographical areas correspond to recognized
dialect regions of the U.S. (Language Files, Ohio State University Linguistics Dept., 1982),
with two exceptions: the Western dialect region (dr7), whose dialect boundaries are not
known with any confidence, and "dialect region" 8, comprising speakers who moved around frequently
during childhood. The locale of each speaker's childhood is indicated by a color-coded marker
on the dialect map included with the original corpus documentation.
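
For reference, the eight dialect codes used in the corpus metadata and directory layout are dr1 through dr8. A small lookup table such as the sketch below is often handy when grouping speakers; the label strings are taken from the standard TIMIT documentation, so verify them against your copy.

# Dialect region codes as they appear in the TIMIT directory structure.
# The labels follow the standard corpus documentation; treat them as an
# assumption if you are working from a repackaged copy.
DIALECT_REGIONS = {
    "dr1": "New England",
    "dr2": "Northern",
    "dr3": "North Midland",
    "dr4": "South Midland",
    "dr5": "Southern",
    "dr6": "New York City",
    "dr7": "Western",
    "dr8": "Army Brat (moved around)",
}
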
Recordings were made in a noise-isolated recording booth at TI, using a semi-automatic
computer system (STEROIDS) to control the presentation of prompts to the speaker and the recording.
Two-channel recordings were made using a Sennheiser HMD 414 headset-mounted microphone and
a Brüel & Kjær 1/2" far-field pressure microphone (#4165).
The speech was directly digitized
at a sample rate of 20 kHz using a Digital Sound Corporation DSC 200 with the anti-aliasing
filter at 10 kHz. The speech was then digitally filtered, debiased, and downsampled to 16 kHz.
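
The distributed waveforms are the 16 kHz versions. For anyone who needs to reproduce a comparable 20 kHz-to-16 kHz conversion on their own material, a minimal SciPy sketch is given below; the debiasing step and the FIR filter built into resample_poly only approximate the processing described above, not the exact DSC 200 pipeline.

import numpy as np
from scipy.signal import resample_poly

def debias_and_downsample(x_20k: np.ndarray) -> np.ndarray:
    """Remove the DC offset from a 20 kHz signal and resample it to 16 kHz.

    resample_poly applies its own anti-aliasing FIR filter during the
    rational-rate conversion (20 kHz * 4/5 = 16 kHz).
    """
    x = x_20k - np.mean(x_20k)             # debias: remove any DC offset
    return resample_poly(x, up=4, down=5)  # 20 kHz -> 16 kHz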

Subjects were seated in the recording booth and prompts were presented on a monitor. The
subjects wore earphones through which low-level background noise (approximately 53 dB SPL)
was played to eliminate the unusual voice quality produced by the "dead room" effect. TI attempted
to keep both the recording gain and the level of noise in the subject's earphones constant
during the collection. At the beginning of each recording day, a standard calibration tone
was recorded from each microphone and the voltage at the subject's earphones was checked and
adjusted as necessary.

The speakers were given minimal instructions and asked to read the
prompts in a "natural" voice. The recordings were monitored, and any suspected mispronunciations
were flagged for verification. Verification consisted of both the monitor and the speaker
listening to the utterance. When a pronunciation error was detected, the sentence was re-recorded.
Variant pronunciations were not counted as mistakes.

Citation

Please use the following citation when referencing the dataset:

@article{garofolo1992timit,
  author  = {Garofolo, J. and Lamel, Lori and Fisher, W. and Fiscus, Jonathan and Pallett, D. and Dahlgren, N. and Zue, V.},
  year    = {1992},
  month   = {11},
  pages   = {},
  title   = {TIMIT Acoustic-phonetic Continuous Speech Corpus},
  journal = {Linguistic Data Consortium}
}

License

CC BY-NC-SA 4.0

Data Summary

Type: Audio
Amount: --
Size: 419.81 MB
Provided by: Linguistic Data Consortium, University of Pennsylvania

The Linguistic Data Consortium (LDC) is an open consortium of universities, libraries, corporations and government research laboratories.
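
To verify the size figure against an extracted copy of the corpus, a small script like the sketch below can walk the directory tree and tally file sizes and audio durations. The .wav extension and recursive walk are assumptions about how your copy is packaged; the original LDC release stores audio in NIST SPHERE format, which not every reader handles directly.

import os
import soundfile as sf  # libsndfile-backed reader; handles common WAV repackagings

def summarize(root: str) -> None:
    """Tally on-disk size and audio duration for every .wav file under root."""
    total_bytes, total_seconds, n_files = 0, 0.0, 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".wav"):
                path = os.path.join(dirpath, name)
                total_bytes += os.path.getsize(path)
                total_seconds += sf.info(path).duration
                n_files += 1
    print(f"{n_files} files, {total_bytes / 2**20:.2f} MiB, {total_seconds / 3600:.2f} hours")

# summarize("TIMIT")  # path to your extracted copy (hypothetical layout)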