ABSTRACT

Contextual word representations built with neural language models have been shown to capture subtle differences between multiple meanings of the same word. However, because these representations are not linked to a semantic network, they leave word meanings undefined and thus overlook information available from knowledge bases. Word sense disambiguation (WSD), the task of assigning senses to words in context, has seen a surge of interest with the advent of neural models, along with considerable improvements in performance. Using syntagmatic information to bridge the performance gap between knowledge-based and supervised WSD is a potentially fruitful research path to pursue. At the same time, the great majority of the world's languages are believed to be too under-resourced for deep learning to be applied successfully. Graph learning is a successful extension of meta-learning methods that enables better and more consistent learning, setting a new state of the art for certain languages and performing on par with other methods while using only a minimal amount of labeled data; it is therefore a promising approach to WSD in low-resource settings. This chapter offers a comprehensive review of multilingual and cross-lingual graph-based WSD methods.