THUCNews
Text
NLP
License: Custom

Overview

THUCTC (THU Chinese Text Classification) is a Chinese text classification toolkit released
by the Natural Language Processing Laboratory of Tsinghua University. It automatically and
efficiently performs training, evaluation, and classification on user-defined text
classification corpora. Text classification usually involves three steps: feature selection,
feature dimensionality reduction, and classification model learning. Choosing appropriate
text features and reducing their dimensionality are challenging problems in Chinese text
classification. Drawing on years of research experience in Chinese text classification, our
group chose character bigrams as the feature unit in THUCTC, chi-square as the feature
reduction method, and tf-idf as the term weighting scheme; the classification model uses
LibSVM or LibLinear. THUCTC generalizes well to open-domain long texts, does not depend on
the performance of any Chinese word segmentation tool, and offers high accuracy and fast
test speed.
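The pipeline described above (character-bigram features, chi-square feature selection, tf-idf weighting) can be sketched in plain Python. This is an illustrative re-implementation of the general technique, not the toolkit's actual code; the function names and formulas below are our own.

```python
import math
from collections import Counter

def char_bigrams(text):
    # Character-bigram features: no word segmentation required
    return [text[i:i + 2] for i in range(len(text) - 1)]

def chi_square(n11, n10, n01, n00):
    # 2x2 chi-square statistic for one (term, class) pair:
    # n11 = in-class docs with term,    n10 = out-of-class docs with term,
    # n01 = in-class docs without term, n00 = out-of-class docs without term
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den if den else 0.0

def tfidf(docs):
    # docs: list of token lists -> per-document {term: tf-idf weight}
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))
    return [{t: c * math.log(n / df[t]) for t, c in Counter(d).items()}
            for d in docs]
```

A full classifier would rank all bigrams by chi-square per class, keep only the top-scoring features (THUCTC keeps 5000 by default), and feed the resulting tf-idf vectors to LibSVM or LibLinear.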

Data Collection

THUCNews was generated by filtering historical data from the Sina News RSS subscription
channels between 2005 and 2011. It contains 740,000 news documents (2.19 GB), all in UTF-8
plain-text format. Based on the original Sina news taxonomy, we re-organized the data into
14 candidate categories: finance, lottery, real estate, stocks, home furnishing, education,
technology, society, fashion, current affairs, sports, horoscope, games, and entertainment.
Evaluating the THUCTC toolkit on this dataset, the accuracy reaches 88.6%.

Instructions

We provide two ways to run the toolkit:

  1. Use a Java development tool such as Eclipse to import the packages in the lib folder,
    including lib\THUCTC_java_v1.jar, into your own project, then call the functions by
    following the Demo.java program.

  2. Use THUCTC_java_v1_run.jar in the root directory to run the toolkit:

    java -jar THUCTC_java_v1_run.jar <program arguments>

Operating parameters

  • [-c CATEGORY_LIST_FILE_PATH] Read category information from a file; each line of the
    file contains exactly one category name.
  • [-train TRAIN_PATH] Perform training, with TRAIN_PATH pointing to the training corpus
    folder. The name of each subfolder under this folder corresponds to a category name,
    and the subfolder contains the training corpus for that category. If not set, no
    training is performed.
  • [-test EVAL_PATH] Perform evaluation, with EVAL_PATH pointing to the evaluation corpus
    folder, organized the same way as the training corpus. If not set, no evaluation is
    performed. -eval is accepted as a synonym.
  • [-classify FILE_PATH] Classify a single file.
  • [-n topN] Set the number of candidate categories to return, sorted by score. The
    default is 1, i.e. only the most probable category is returned.
  • [-svm libsvm|liblinear] Choose LibSVM or LibLinear for training and testing; LibLinear
    is the default.
  • [-l LOAD_MODEL_PATH] Set the path from which to load a model.
  • [-s SAVE_MODEL_PATH] Set the path at which to save the model.
  • [-f FEATURE_SIZE] Set the number of retained features; the default is 5000.
  • [-d1 RATIO] Set the proportion of files used for training; the default is 0.8.
  • [-d2 RATIO] Set the proportion of files used for testing; the default is 0.2.
  • [-e ENCODING] Set the encoding of training and test files; the default is UTF-8.
  • [-filter SUFFIX] Filter files by suffix. For example, with "-filter .txt", only files
    whose names end in .txt are considered during training and testing.
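The folder-per-category layout expected by -train and -test, and the -d1/-d2 split, can be mimicked in a few lines of Python. This is an illustrative sketch of the corpus convention, not part of the toolkit, and the helper names are our own.

```python
import os
import random

def load_corpus(root, suffix=".txt", encoding="utf-8"):
    # Each subfolder of `root` is one category; its files are that
    # category's documents (mirrors THUCTC's -train/-test layout).
    samples = []
    for category in sorted(os.listdir(root)):
        cat_dir = os.path.join(root, category)
        for name in sorted(os.listdir(cat_dir)):
            if name.endswith(suffix):  # mirrors -filter
                with open(os.path.join(cat_dir, name), encoding=encoding) as f:
                    samples.append((f.read(), category))
    return samples

def split(samples, d1=0.8, seed=0):
    # Mirrors -d1/-d2: a d1 fraction of the files for training, the rest for testing
    samples = samples[:]
    random.Random(seed).shuffle(samples)
    k = int(len(samples) * d1)
    return samples[:k], samples[k:]
```

Pointing load_corpus at a THUCNews-style directory yields (text, category) pairs ready for any classifier.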

Citation

@inproceedings{chen2015joint,
  title={Joint learning of character and word embeddings},
  author={Chen, Xinxiong and Xu, Lei and Liu, Zhiyuan and Sun, Maosong and Luan, Huanbo},
  booktitle={Twenty-Fourth International Joint Conference on Artificial Intelligence},
  year={2015}
}
@inproceedings{li2006comparison,
  title={A Comparison and Semi-Quantitative Analysis of Words and Character-Bigrams as Features in Chinese Text Categorization},
  author={Li, Jingyang and Sun, Maosong and Zhang, Xian},
  year={2006},
  volume={1},
  doi={10.3115/1220175.1220244}
}

License

Custom

Data Summary

Type: Text
Amount: --
Size: 1.45 GB
Provided by: Tsinghua University

Tsinghua University is a major research university in Beijing, and a member of the C9 League of Chinese universities. Since its establishment in 1911, it has graduated Chinese leaders in science, engineering, politics, business, academia, and culture.