Characters or Morphemes: How to Represent Words?

Ustun A., Kurfali M., Can Buğlalılar B.

3rd Workshop on Representation Learning for NLP (RepL4NLP), Melbourne, Australia, 20 July 2018, pp.144-153

  • Publication Type: Conference Paper / Full Text
  • City: Melbourne
  • Country: Australia
  • Page Numbers: pp.144-153
  • Hacettepe University Affiliated: Yes


In this paper, we investigate the effects of using subword information in representation learning. We argue that using syntactic subword units affects the quality of word representations positively. We introduce a morpheme-based model and compare it to word-based, character-based, and character n-gram level models. Our model takes a list of candidate segmentations of a word and learns the representation of the word based on the different segmentations, which are weighted by an attention mechanism. We performed experiments on Turkish, a morphologically rich language, and on English, which has comparatively poorer morphology. The results show that morpheme-based models are better at learning word representations of morphologically complex languages than character-based and character n-gram level models, since morphemes help incorporate more syntactic knowledge during learning, which makes morpheme-based models better at syntactic tasks.
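The core idea of the abstract, combining candidate segmentations of a word through attention weights, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the morpheme vocabulary, the random embeddings, and the dot-product attention query are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical morpheme embeddings (random, for illustration only).
morphemes = ["kitap", "lar", "im", "kitapl", "arim"]
emb = {m: rng.standard_normal(DIM) for m in morphemes}

def segmentation_vector(seg):
    """Represent one candidate segmentation as the sum of its morpheme embeddings."""
    return np.sum([emb[m] for m in seg], axis=0)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def word_representation(segmentations, attn_query):
    """Attention-weighted combination of candidate segmentation vectors."""
    segs = np.stack([segmentation_vector(s) for s in segmentations])  # (n, DIM)
    weights = softmax(segs @ attn_query)                              # (n,)
    return weights @ segs                                             # (DIM,)

# Two candidate segmentations of Turkish "kitaplarım" ("my books"):
# the linguistically correct one and a spurious alternative.
candidates = [["kitap", "lar", "im"], ["kitapl", "arim"]]
query = rng.standard_normal(DIM)  # assumed learnable attention parameter
vec = word_representation(candidates, query)
```

In the paper's setting the embeddings and the attention parameters would be trained jointly on a corpus; here they are random, so the sketch only shows the shape of the computation, not a meaningful representation.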