Cross-lingual text classification by mining multilingual topics from Wikipedia

Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen


This paper investigates how to perform cross-lingual text classification effectively by leveraging a large-scale, multilingual knowledge base: Wikipedia. Based on the observation that each Wikipedia concept is described by documents in different languages, we adapt existing topic modeling algorithms to mine multilingual topics from this knowledge base. Each extracted topic has multiple representations, one per language. In this work, we regard such topics extracted from Wikipedia documents as universal topics, since each topic carries the same semantic information across languages. New documents in different languages can thus be represented in a space spanned by a group of universal topics. We use these universal topics for cross-lingual text classification: given training data labeled in one language, we can train a text classifier to classify documents in another language by mapping all documents of both languages into the universal-topic space. This approach requires no additional linguistic resources, such as bilingual dictionaries, machine translation tools, or labeled data for the target language. Evaluation results indicate that our topic modeling approach is effective for building cross-lingual text classifiers.
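The core idea above can be sketched in a small toy example. This is our own illustration, not the paper's implementation: the authors mine topics from Wikipedia with adapted topic modeling algorithms, whereas here the per-language topic vocabularies, the overlap-based topic representation, and the nearest-centroid classifier are all hypothetical stand-ins chosen to keep the sketch self-contained. What it demonstrates is the pipeline the abstract describes: documents in either language map into the same topic space, so a classifier trained only on one language can label documents of the other.

```python
# Toy sketch of the universal-topic idea (an assumption-laden illustration,
# not the authors' LDA-based method): each topic has one word set per
# language, so both languages project into the SAME topic space.

# Hypothetical universal topics with per-language vocabularies.
TOPICS = {
    "sports":  {"en": {"match", "goal", "team", "player"},
                "es": {"partido", "gol", "equipo", "jugador"}},
    "finance": {"en": {"stock", "market", "bank", "price"},
                "es": {"bolsa", "mercado", "banco", "precio"}},
}
TOPIC_NAMES = sorted(TOPICS)

def to_topic_space(text, lang):
    """Represent a document as topic-overlap proportions (language-independent)."""
    words = set(text.lower().split())
    counts = [len(words & TOPICS[t][lang]) for t in TOPIC_NAMES]
    total = sum(counts) or 1
    return [c / total for c in counts]

def nearest_centroid_train(vectors, labels):
    """Average the topic vectors of each class to form its centroid."""
    grouped = {}
    for v, y in zip(vectors, labels):
        grouped.setdefault(y, []).append(v)
    return {y: [sum(col) / len(vs) for col in zip(*vs)]
            for y, vs in grouped.items()}

def nearest_centroid_predict(model, v):
    """Pick the class whose centroid is closest in the topic space."""
    return min(model, key=lambda y: sum((a - b) ** 2
                                        for a, b in zip(model[y], v)))

# Train on English documents only...
train_docs = [("the team scored a goal in the match", "sports"),
              ("the bank raised the stock price", "finance")]
model = nearest_centroid_train(
    [to_topic_space(doc, "en") for doc, _ in train_docs],
    [label for _, label in train_docs])

# ...then classify a Spanish document via the shared topic space.
print(nearest_centroid_predict(model, to_topic_space("el equipo marcó un gol", "es")))  # → sports
```

Because both `to_topic_space("...", "en")` and `to_topic_space("...", "es")` land in the same coordinate system, no dictionary lookup or machine translation is needed at classification time, which mirrors the resource-free property the abstract claims for the universal-topic approach.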


Publication type: Inproceedings
Published in: WSDM '11, Proceedings of the Fourth ACM International Conference on Web Search and Data Mining