Published online by Cambridge University Press: 01 February 2021
Deaf people communicate naturally using visual-spatial languages, called sign languages (SL). Although SLs are recognized as languages in many countries, the problems Deaf people face in accessing information remain. As a result, they have difficulty exercising their citizenship and accessing information in SLs, which usually leads to delays in language and knowledge acquisition. Several scientific works have addressed these problems through machine translation from spoken languages to sign languages. However, the existing machine translation platforms have limitations, especially of a syntactic and lexical nature. Thus, this work aims to develop a mechanism for machine translation to Libras, the Brazilian Sign Language, with syntactic-semantic adequacy. It consists of an automatic translation component for Libras based on syntactic-semantic translation rules and a formal syntactic-semantic rule description language. As a proof of concept of the proposed approach, we created a specific grammar for Libras translation exploring these aspects and integrated these elements into the VLibras Suite, a service for machine translation of digital content in Brazilian Portuguese (BP) to Libras. We performed several tests using this modified version of VLibras to measure the level of comprehension of the output generated by the new translation mechanism. In the computational experiments, as well as in the tests with Deaf and hearing users, the proposed approach improved on the results of the current VLibras version.
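To make the idea of syntactic-semantic translation rules concrete, the sketch below shows, under stated assumptions, what a rule-based transfer step from a BP sentence to a Libras gloss sequence could look like. The data structures, rule names, and simplifications here are hypothetical and do not reproduce the formal rule description language used in VLibras; they only illustrate the general notion of transfer rules (e.g., dropping articles and the copula, which Libras does not use) in Python.

# Illustrative sketch only: hypothetical transfer rules for BP -> Libras glosses.
# This is NOT the VLibras rule language; names and structures are assumptions.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Token:
    lemma: str   # citation form of the BP word
    pos: str     # simplified part-of-speech tag (DET, NOUN, VERB, ADJ, ...)

# A transfer rule maps one token sequence to another.
Rule = Callable[[List[Token]], List[Token]]

def drop_articles(tokens: List[Token]) -> List[Token]:
    # Libras does not use articles, so determiners are removed.
    return [t for t in tokens if t.pos != "DET"]

def drop_copula(tokens: List[Token]) -> List[Token]:
    # The copula "ser"/"estar" is usually omitted in Libras glosses.
    return [t for t in tokens
            if not (t.pos == "VERB" and t.lemma in {"ser", "estar"})]

def apply_rules(tokens: List[Token], rules: List[Rule]) -> List[str]:
    # Apply each rule in order, then emit glosses in uppercase citation form.
    for rule in rules:
        tokens = rule(tokens)
    return [t.lemma.upper() for t in tokens]

if __name__ == "__main__":
    # "A menina é bonita" -> expected gloss roughly "MENINA BONITA"
    sentence = [Token("a", "DET"), Token("menina", "NOUN"),
                Token("ser", "VERB"), Token("bonita", "ADJ")]
    print(apply_rules(sentence, [drop_articles, drop_copula]))

In a full system, such rules would operate on a syntactic parse of the BP input rather than a flat token list, and would also handle reordering and semantic disambiguation; the sketch only conveys the rule-application pattern.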