Wordsim-297
Text
NLP
License: MIT

Overview

Most word embedding methods take a word as the basic unit and learn embeddings from words'
external contexts, ignoring the internal structure of words. However, in languages such as
Chinese, a word is usually composed of several characters and carries rich internal information,
and the semantic meaning of a word is also related to the meanings of its composing characters.
Hence, taking Chinese as an example, we present a character-enhanced word embedding model (CWE).
To address the issues of character ambiguity and non-compositional words, we propose
multiple-prototype character embeddings and an effective word selection method. We evaluate
the effectiveness of CWE on word relatedness computation and analogical reasoning. The results
show that CWE outperforms baseline methods that ignore internal character information.
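
For concreteness, the sketch below shows the core composition step of the base CWE model as
described in the paper: a word's representation is its word embedding plus the average of its
characters' embeddings, while non-compositional words fall back to the word vector alone. The
embedding tables here are random stand-ins for illustration only, not trained parameters.

import numpy as np

dim = 200
rng = np.random.default_rng(0)
word_emb = {"智能": rng.standard_normal(dim)}   # word embedding table (illustrative)
char_emb = {"智": rng.standard_normal(dim),     # character embedding table (illustrative)
            "能": rng.standard_normal(dim)}

def cwe_vector(word: str) -> np.ndarray:
    """Compose the CWE representation: word vector + mean of character vectors."""
    chars = [char_emb[c] for c in word if c in char_emb]
    if not chars:  # non-compositional word: keep the word vector alone
        return word_emb[word]
    return word_emb[word] + np.mean(chars, axis=0)

print(cwe_vector("智能").shape)  # (200,)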

Data Collection

For embedding learning, we select a human-annotated corpus of news articles from The People's
Daily. The corpus contains 31 million words; the word vocabulary size is 105 thousand and the
character vocabulary size is 6 thousand (covering 96% of the characters in the national standard
charset GB2312). We set the vector dimension to 200 and the context window size to 5. For
optimization, we use both hierarchical softmax and negative sampling with 10 negative samples.
We perform word selection for CWE and also use pre-trained character embeddings. We use CBOW,
Skip-Gram, and GloVe as baseline methods, with the same vector dimension and their default
parameters. We evaluate the effectiveness of CWE on word relatedness computation and analogical
reasoning.
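
As a rough illustration of this setup, the sketch below trains a CBOW baseline with gensim at
the stated hyperparameters (dimension 200, window 5, hierarchical softmax plus 10 negative
samples) and scores it on Wordsim-297 by the Spearman correlation between human ratings and
cosine similarities. The file names and the tab-separated pair format are assumptions, not part
of this dataset's specification.

from gensim.models import Word2Vec
from scipy.stats import spearmanr

# Corpus: one pre-segmented sentence per line, tokens separated by spaces (assumed layout).
sentences = [line.split() for line in open("peoples_daily_segmented.txt", encoding="utf-8")]

model = Word2Vec(
    sentences,
    vector_size=200,  # embedding dimension used in the paper
    window=5,         # context window size
    sg=0,             # CBOW baseline (sg=1 would give Skip-Gram)
    hs=1,             # hierarchical softmax ...
    negative=10,      # ... combined with 10-word negative sampling
)

# Word relatedness evaluation on Wordsim-297 (assumed format: word1 <tab> word2 <tab> rating).
human, predicted = [], []
for line in open("wordsim-297.txt", encoding="utf-8"):
    w1, w2, score = line.strip().split("\t")
    if w1 in model.wv and w2 in model.wv:
        human.append(float(score))
        predicted.append(model.wv.similarity(w1, w2))

rho, _ = spearmanr(human, predicted)
print(f"Wordsim-297 Spearman correlation: {rho:.3f}")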

Citation

@inproceedings{chen2015joint,
  title={Joint learning of character and word embeddings},
  author={Chen, Xinxiong and Xu, Lei and Liu, Zhiyuan and Sun, Maosong and Luan, Huanbo},
  booktitle={Twenty-Fourth International Joint Conference on Artificial Intelligence},
  year={2015}
}

License

MIT

Data Summary

Type: Text
Amount: --
Size: 7KB
Provided by: Tsinghua University
Tsinghua University is a major research university in Beijing, and a member of the C9 League of Chinese universities. Since its establishment in 1911, it has graduated Chinese leaders in science, engineering, politics, business, academia, and culture.
