Measuring Word Relatedness Using Heterogeneous Vector Space Models

Noticing that different information sources often provide complementary coverage of word sense and meaning, we propose a simple yet effective strategy for measuring lexical semantic relatedness. Our model consists of a committee of vector space models built on a text corpus, Web search results and thesauri, and measures semantic word relatedness using averaged cosine similarity scores. Despite its simplicity, our system correlates with human judgements as well as or better than existing methods on several benchmark datasets, including WordSim353.
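The committee approach described in the abstract can be sketched as follows: given several independently built vector spaces, score a word pair by the cosine similarity in each space that covers both words, then average. This is a minimal illustration with toy two-dimensional vectors standing in for the paper's corpus-, Web- and thesaurus-based models; the vector values and space names are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def committee_relatedness(word1, word2, spaces):
    """Average the cosine scores over the vector spaces covering both words."""
    scores = [cosine(vs[word1], vs[word2])
              for vs in spaces
              if word1 in vs and word2 in vs]
    return sum(scores) / len(scores) if scores else 0.0

# Toy stand-ins for heterogeneous vector space models.
corpus_model = {"car": [1.0, 0.2], "automobile": [0.9, 0.3]}
web_model    = {"car": [0.5, 0.5], "automobile": [0.4, 0.6]}

score = committee_relatedness("car", "automobile", [corpus_model, web_model])
```

Averaging only over the spaces that contain both words lets each model contribute where it has coverage, which is the complementarity the abstract appeals to.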


In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2012)

Publisher  Association for Computational Linguistics

Details

Type  Inproceedings