
6 - Healthcare Accessibility for the Deaf

The BabelDr Case Study

Published online by Cambridge University Press:  31 August 2023

Meng Ji, University of Sydney
Pierrette Bouillon, Université de Genève
Mark Seligman, Spoken Translation Technology

Summary

Access to healthcare profoundly impacts the health and quality of life of Deaf people. Automatic translation tools are crucial in improving communication between Deaf patients and their healthcare providers. The aim of this chapter is to present the pipeline used to create the Swiss-French Sign Language (LSF-CH) version of BabelDr, a speech-enabled fixed-phrase translator that was initially conceived to improve communication in emergency settings between doctors and allophone patients (Bouillon et al., 2021). In order to do so, we start off by explaining how we ported BabelDr into LSF-CH using both human and avatar videos. We first describe the creation of a reference corpus consisting of video translations done by human translators, then we present a second corpus of videos generated with a virtual human. Finally, we relate the findings of a questionnaire on Deaf users’ perspectives on the use of signing avatars in the medical context. We show that, although respondents prefer human videos, the use of automatic technologies associated with virtual characters is not without interest to the target audience and can be useful to them in the medical context.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2023
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/


6.1 Introduction

According to the Swiss Federal Statistical Office, 9 percent of the population speaks a language not among the four national languages. Moreover, one-third of this 9 percent understands none of the national languages. If these people are ill and require treatment, language barriers can pose considerable obstacles to their care, from both clinical and ethical viewpoints. Clearly, this issue hugely impacts equal access to healthcare (Flores et al., 2003; Wasserman et al., 2014). One way to provide quality healthcare to all and to facilitate communication between doctors and patients is through the use of translation technologies – more specifically, by using fixed-phrase translators, now widely used in the medical field (see Chapter 5). Although ideal tools would provide the flexibility of full machine translation systems, various studies show that the fixed-phrase translation systems currently available can offer good alternatives to full machine translation for such safety-critical domains (Bouillon et al., 2017; Turner et al., 2019).

BabelDr is a flexible speech-enabled phraselator aimed at language barrier-related problems in emergency settings (Bouillon et al., 2021). BabelDr is now in use for immigrants speaking non-national languages; however, the application is also under development at present for the local Deaf linguistic minority. We refer here to Deaf patients who live in the French-speaking area of Switzerland and use Swiss-French Sign Language (LSF-CH) as their mother tongue or preferred language. Deaf LSF-CH users identify as members of a minority community with its own language and culture (Padden and Humphries, 1988; Preston, 1995). The use of the capital D in “Deaf” refers to their cultural identity.

Research in past years has shown that access to healthcare impacts the health and quality of life of Deaf people. Although the need for enhanced access to healthcare services has been highlighted (Emond et al., 2015; Kuenburg et al., 2016), the issue remains quite challenging, even in high-income countries (Pollard et al., 2014; Smeijers and Pfau, 2009). Much like ethnic minority groups, Deaf people encounter severe barriers when trying to communicate in a healthcare context. The associated miscommunication between patients and their healthcare providers can lead to potential misunderstandings of diagnosis and treatment (Scheier, 2009) and to a lack of trust. In England, a report by the Royal National Institute for Deaf People recounts the experiences of various Deaf people using health services: 66 percent of British Sign Language users find communication with staff difficult; 30 percent avoid visiting their family doctor for communication reasons; and 33 percent remain unsure about instructions or about the correct treatment following consultations with family doctors (Abou-Abdallah and Lamyman, 2021; Middleton et al., 2010). Similar results have been shown in the Netherlands, where a study found that 39 percent of Deaf patients who took part in the survey rated their communication with healthcare practitioners as moderate or bad (Smeijers and Pfau, 2009), and in the USA, where Deaf patients report great difficulties in communication with their physicians (Ralston, Zazove and Gorenflo, 1996).
In Switzerland, studies carried out by Tatjana Binggeli in 2015 and by Odile Cantero in 2016 also highlight similar barriers (Binggeli, 2015; Cantero, 2016). While additional projects have addressed the healthcare needs of Deaf people in Switzerland, communication barriers still remain today (Strasly, in preparation).

Improvements are certainly achievable. For instance, provision of specific training in cultural competency by knowledgeable community representatives could make healthcare professionals more aware of Deaf patients’ communication preferences. In recent years, rapid advances in the use and performance of information technology have also greatly benefited Deaf people, and have the potential to make healthcare more accessible to this community and thus enable them to receive adequate and equal care. The aim of this chapter is to focus on the development of speech-enabled fixed-phrase translators for sign languages and, more specifically, to present the pipeline used to create the LSF-CH version of the BabelDr system. While phraselators such as MediBabble (medibabble.com) and Universal Doctor (universaldoctor.com) are commonly used in medicine (Khander et al., 2018; Panayiotou et al., 2019), they rarely integrate sign language. Development of such a pipeline is therefore a necessary step toward collection of corpora and creation of useful translation tools.

In the following sections, we first give an outline of the current legal framework regarding the right to health and access to healthcare in Switzerland in order to elucidate the legal background favorable to our project’s emergence. A brief overview of the core principles of the “right to health” follows. We then describe existing sign language projects aimed at improving doctor-patient communication. While some translation tools do exist, they are always limited to very specific coverage, are often unsophisticated, and provide no general solutions for production of sign language resources and translation into sign language. We then explain how we ported BabelDr for LSF-CH using both human and avatar videos. Finally, we present the results of a questionnaire about Deaf users’ perspective on the use of signing avatars in the medical context.

6.2 Legal Framework in Switzerland

Currently, there are no precise or official statistics concerning the number of profoundly Deaf individuals in Switzerland. Current estimates are based on the following formula, established by the World Health Organization (WHO) and used worldwide: number of Deaf signing people = 0.1 percent of the total population, i.e., 1 per 1,000 inhabitants. Based upon this formula, and upon the numbers of (1) memberships of Deaf people in clubs and associations and (2) users of interpreting services, Deaf sign language users in all of Switzerland currently number approximately 10,000 people (Braem and Rathmann, 2010). Three different sign languages are used: Swiss-German Sign Language (DSGS) is used in the German-speaking area of Switzerland; LSF-CH in western Switzerland; and Swiss-Italian Sign Language (LIS-SI) in the Italian-speaking region.
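The WHO estimate reduces to simple arithmetic; the sketch below applies it, using a rough Swiss population figure that is our own assumption rather than a number from the text:

```python
def estimate_deaf_signers(total_population: int) -> int:
    """WHO rule of thumb: roughly one Deaf sign language user
    per 1,000 inhabitants (0.1 percent of the population)."""
    return total_population // 1000

# Applied to an assumed Swiss population of about 8.7 million, the rule
# yields an estimate of the same order as the ~10,000 cited in the text.
print(estimate_deaf_signers(8_700_000))  # → 8700
```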

Each region formerly had its own association of Deaf people. These were then federated in 2006 under an umbrella organization, the Swiss Federation of the Deaf, which strives to achieve equal rights for the Deaf and hard-of-hearing throughout the country. Per its new strategic plan for 2021–25, the Federation will undertake four key areas of action, as voted by members in October 2020: (1) inclusion in the labor market; (2) participation in direct democracy; (3) access to the healthcare system; and (4) inclusive education (SGB-FSS, 2021). Concerning access to the healthcare system, discrimination against Deaf people is not due to a lack of legislation (Binggeli and Hohenstein, 2020). In fact, Switzerland has signed international treaties and has enacted national and cantonal legislation that promotes the highest health standards for its population. Instead, the challenges probably stem from the country’s federal makeup (Marks-Sultan et al., 2016). There are twenty-six cantons in the Swiss Confederation, each with its own constitution, legislature, executive, and judiciary. Where health is concerned, Switzerland has a two-tier system built on the federal constitution and cantonal legislation, giving cantons the largest share of responsibilities. Cantons implement regulations in areas where the Federal State has adopted laws, but can also adopt their own health policies, laws, and regulations.

6.2.1 Overview of the Core Principles of the Right to Health

The right to health means that States must establish ethically and culturally suitable policies that address local needs, as well as plans for measures and resources for promotion of national health according to their individual capacities. Two principles that are key to this right are non-discrimination and equality. States must recognize and provide for groups having specific needs and generally facing health-related challenges. And since Deaf people are particularly vulnerable in terms of health, access to care is a major topic of discussion in the local Deaf community.

At the international level, the right to health was first recognized in the Preamble of the Constitution of the WHO in 1946 (WHO, 1948). Because this treaty is binding for Switzerland as a Member State, the country should ensure maximum health for its population by protecting and promoting appropriate measures. According to WHO (WHO, 1948, Preamble, §2), health is “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity”. The right to health is also recognized in the Universal Declaration of Human Rights of 1948, in Article 25, which states that “[e]veryone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care […]”. States must take active measures to assure suitable quality of life for all their citizens. Adequate health is also defined in Article 12 of the International Covenant on Economic, Social and Cultural Rights (ICESC, 1976) as “ … the right of everyone to the enjoyment of the highest attainable standard of physical and mental health.” This treaty was ratified by Switzerland in 1992.

At a national level, the 1999 Swiss Constitution is the most significant legal document. It views the right to health as a duty of the State (articles 41 and 118) and prohibits discrimination on the basis of origin, race, sex, age, language, social situation, way of life, religious, philosophical or political beliefs, or psychological and mental deficiencies (article 8). On January 1, 2004, the Disability Equality Act came into force at the federal level, stating that all disabled persons have the same right to barrier-free access to social services (article 2). However, French-speaking Switzerland currently lacks sign language interpreters. Thus the Deaf community’s access to health services can be enhanced by tools that can effectively bridge the gap between the need for language services in healthcare contexts and their actual availability.

The United Nations has developed three key documents that frame the understanding and promotion of accessibility: the World Programme of Action concerning Disabled Persons; the United Nations Standard Rules on the Equalization of Opportunities for Persons with Disabilities; and the Convention on the Rights of Persons with Disabilities. These require governments and the international community to ensure equal rights and opportunities for persons with disabilities. Particular attention is paid to access – first, to information and communication, and second, to public services such as healthcare. Of the three documents listed above, the Convention on the Rights of Persons with Disabilities (UNCRPD) (United Nations, 2006) is particularly important. In this document, which entered into force in 2008, the international community undertook a political and legal commitment to include people with disabilities in all aspects of society. Article 25 of the UNCRPD states that “persons with disabilities have the right to enjoy the highest attainable standard of health and that States Parties have to take all appropriate measures to ensure access for persons with disabilities to health services.” Switzerland ratified the UNCRPD on 15 April 2014, thus making the same commitment.

As we rely more on technologies, the impetus increases to build tools functional for Deaf sign language users to enhance their equal access to healthcare. Well-designed tools should have the potential to improve users’ quality of life and independence. Accordingly, we have reviewed the legal framework and the potential impact of technology on Deaf people’s access to healthcare to explain our decision to create a version of BabelDr for LSF-CH. We go on now to a general review of existing tools developed for hospitals, followed by a description of the BabelDr application for LSF-CH.

6.3 Sign Language Translation Tools for Hospitals

With increased mobility worldwide, an increasing number of patients require translation services in healthcare settings. In order to respond to this demand, many medical translation applications for mobile phones have been developed (Khander et al., 2018). However, resources for sign languages are still lacking, despite progress in machine translation and in automatic sign language processing, both in sign language recognition and sign language animation (Bragg et al., 2019; Ebling, 2017; Papastratis et al., 2021; Sáfár and Glauert, 2012).

Sites do exist that provide resources and popular explanations related to medical terminology for Deaf communities, such as Pisourd in Switzerland or World Health Sign (a Spanish/Italian project). One famous project for the collection of medical terminology was developed in Australian Sign Language (Auslan): the Medical Signbank project. In view of a perceived lack of health and medicine vocabulary, this project conducted linguistic research among Auslan users. The collected signs were made available on the Signbank site. Interpreters and the Deaf community could then provide feedback concerning them (Johnston and Napier, 2010).

Some text-to-sign phraselators using human-recorded videos also exist, but the number of sentences they translate is limited, and translation is often into American Sign Language (ASL) only. Moreover, the methodology used to produce sign language videos is often unclear, and information is often lacking concerning extension of the systems to other content or sharing of resources. In Europe, TraducMed, a French tool first used for the medical care of migrants, offers text-to-sign translation in LSF, to be used in medical practices or hospitals. More recently, at the Department of General Practice of the University Medical Center Göttingen, a multilingual application informing about COVID-19 vaccinations has been developed, aimed at vaccination candidates with limited proficiency in the local language. There are thirty-nine target languages, including a German Sign Language (DGS) module equipped with a set of videos (Noack et al., 2022).

Hybrid medical tools have also been developed for the medical sector that combine human-recorded videos and avatar generation. For example, in Romania, the Carol Davila University of Medicine and Pharmacy in Bucharest and the Faculty of Sociology in Pitesti, in collaboration with teachers of Romanian Sign Language (LSR), implemented a corpus of video recordings in LSR related to oral health. The corpus could also be augmented through online editing using the JASigning animated avatar (Chiriac et al., 2016). The team then worked on a comparative study of the two characters – human and avatar – with consideration of their advantages and disadvantages (Chiriac et al., 2015). A recent tool for medical use was built in the Netherlands (Roelofsen et al., 2021), where the research group conceived a modular text-to-sign system that allows healthcare professionals to translate interactively from written Dutch or English into Dutch Sign Language (NGT). The doctor enters a sentence or series of words in the search bar. He/she then chooses the closest match found within a database of written sentences. The system then shows prepared signed videos, some using recordings of human interpreters and some using a synthetic sign language module employing the JASigning virtual avatar. The team selected the human-recorded or the avatar videos according to the complexity or topic of the questions. (For example, videos with human interpreters were used when questions on ethical issues were asked.) (Figure 6.1).

Figure 6.1 Prototype of SignLab, Dutch Medical Application: human recording (left); avatar generation (right)

There are also tools that use sign language recognition to allow patients to answer. HospiSign, a Turkish interactive translation platform, was developed to assist Deaf patients in the hospital reception area on a daily basis (Süzgün et al., 2015). At the reception terminal, the HospiSign interface displays a written question with its corresponding video. The lower part of the screen displays various possible answers. The Deaf patient or his or her caregiver reproduces the corresponding signs. (Sign recognition is handled by a Microsoft Kinect v2 sensor, which has been configured to follow and recognize the hand movements of users when they respond (Süzgün et al., 2015, p. 82).) He or she then moves on to the next question. Once all the questions have been answered, a summary report is provided to the doctor.
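The interaction loop described above amounts to walking a tree of questions, where each recognized signed answer selects the next question and the collected answers form the final report. The sketch below illustrates that flow only; the question texts, answer labels, and data structure are our illustrative assumptions, not HospiSign’s actual implementation:

```python
# Minimal sketch of a HospiSign-style guided dialogue. Each node pairs a
# question with a map from possible (recognized) answers to the next node;
# None marks the end of the dialogue.
QUESTIONS = {
    "start": ("Why are you here today?",
              {"appointment": "department", "emergency": "symptom"}),
    "department": ("Which department?",
                   {"cardiology": None, "radiology": None}),
    "symptom": ("Where is the pain?",
                {"chest": None, "abdomen": None}),
}

def run_dialogue(recognized_answers):
    """Walk the question tree with a sequence of already-recognized answers,
    returning the (question, answer) pairs for the doctor's summary report."""
    node, report = "start", []
    for answer in recognized_answers:
        question, transitions = QUESTIONS[node]
        report.append((question, answer))
        node = transitions[answer]
        if node is None:
            break
    return report

print(run_dialogue(["appointment", "cardiology"]))
```

In the real system, each answer in `transitions` would be produced by the Kinect-based sign recognizer rather than passed in as a string.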

The last decade has seen growing interest in sign language translation systems that seek to empower Deaf people in hospital settings. However, prototypes remain limited to very specific domains. They offer only text interaction and provide no generic tools for developing sign language resources (Albrecht et al., 2015). It is sometimes unclear what methodology has been used to translate written sentences into sign language, and videos are rarely shared with the research community. And for LSF-CH in particular, there is no specific open-source tool for the medical sector apart from the above-mentioned Pisourd website. Clearly, then, new tools are sorely needed that can address the needs of Deaf patients and their caregivers to increase access to hospitals. In the following section, we present our approach to creation of speech-to-sign fixed-phrase translators with the BabelDr platform and to production of sharable resources in LSF.

6.4 BabelDr for Swiss-French Sign Language

In contrast with other fixed-phrase translators, BabelDr aims for easy portability to new domains and coverage: it should be possible to continually add new content. Adapting BabelDr to sign language therefore requires flexible solutions. Human videos recorded by sign language interpreters/translators are known to be ideal, but they pose many technical problems. In particular, they cannot be generated productively and cannot be changed once recorded. We therefore decided to combine human and avatar videos, as suggested by Roelofsen et al. (2021).

The translation of BabelDr content was carried out in two steps. First, a reference corpus consisting of video translations with human translators was created for a subset of sentences, in order to develop reference translations for many terms and typical structures. The first set of recorded videos was then annotated and used to develop a larger corpus of videos generated with a virtual human (an avatar). In the following sections, we explain (1) the methodology used to film the human translations and (2) how the avatar version was generated and integrated into BabelDr to develop a flexible speech-to-sign translator.

6.4.1 Recording Translations with Deaf Experts

We used a community participatory approach to translate a first set of sentences from written French into LSF-CH. The team working on the translation comprises a Deaf nurse and two Deaf sign language specialists (both working as sign language teachers and translators). Also in the exchange group is a doctor currently doing a specialization in Switzerland who – together with a translation researcher (a co-author of this work) – organizes sign language courses in hospitals in French-speaking Switzerland. As of March 2022, 2,661 medical questions have been translated and validated (1,552 for the hospital reception unit, 1,063 in the field of abdominal pain, and 46 specific to COVID-19).

Three main challenges were encountered by the translation work group. 1) The translation of medical jargon. The use of specific terminology in the medical context is well known to be a source of serious misunderstandings in medicine (e.g., Ong et al., 1995). Translation problems are frequent even for widely used languages (Major, 2012). In the Deaf community, the problem is compounded: specific medical terms are rarely used (see also Major et al., 2013) and often there is no sign that would be universally accepted by the community – as in the case of “spleen” or “bile ducts”. 2) The translation of proper names, such as the names of medications like Dafalgan®, for which there is no specific sign. Translators consider that using the manual alphabet to translate these names would cause excessive eye strain for Deaf people watching the video. 3) The recording medium. Videos require a switch to a two-dimensional presentation, which is especially challenging when sentences must be partially signed on the signer’s back.

Solutions that our translation work group found for these challenges were: 1) the use of paraphrases when a word was unambiguous and its meaning could be paraphrased with general concepts considered easily understandable by the patients; 2) the use of subtitles when the meaning of a word was ambiguous and a short paraphrase was not possible; 3) the use of images to clarify the meaning of a word (e.g., the image of a specific part of the body to ensure that the Deaf patient understands the intended location, or the image of a specific medicine). Table 6.1 displays a few sentences and the strategies employed to translate specific terminology.

Table 6.1 Sample sentences from the BabelDr corpus and the translation strategies applied for specific terminology

Sample sentence | Translation strategy
are you allergic to aspirin? | Subtitle
are you allergic to codeine? | Subtitle
have you taken anticoagulants today? | Paraphrase: AGAINST-BLOOD-MASS
have you stopped taking antiarrhythmics? | Paraphrase: MEDECINE-FOR-HEART-RHYTHM-STABILITY
have you taken any treatment for osteoporosis today? | Paraphrase: BONE-INSIDE-BRITTLE
do you also have pain in the upper left side of your back? | Image
do your shoulder blades also hurt? | Image

To record our translations in real time, we used the LiteDevTools online platform developed at the University of Geneva (https://regulus.unige.ch/litedevtools/client/#/login), designed to facilitate the recording of oral/video translations (Strasly et al., 2018).

6.4.2 Virtual Avatar Generation

One way to generate virtual animation is to rely heavily on humans throughout the whole production process, exploiting motion capture and/or animation by hand. This technique can make the final rendering quite realistic. Another way is to use automatic sign language processing (Ebling et al., 2017). For BabelDr, we opted for translation via a fully synthesized avatar developed by the School of Computing Sciences at the University of East Anglia (United Kingdom) – the JASigning avatar. The system’s main version (Ebling and Glauert, 2013) is freely available for research purposes and provides several virtual characters. It was developed in the context of the European Union-funded ViSiCAST (Bangham, 2000), eSIGN (Zwitserlood et al., 2005) and DictaSign (Efthimiou et al., 2012) projects. In the context of BabelDr, the avatar Françoise was selected for its realism, ethnic neutrality, and expressiveness.

The JASigning avatar is based upon a notation system called G-SiGML (Gestural SiGML, an extension of the Signing Gesture Markup Language), which enables the transcription of sign language gestures (Elliott et al., 2004). The application uses XML to encode the features of individual signs using the Hamburg Notation System for Sign Languages (HNS) (Prillwitz et al., 1989). HNS describes the physical form of the signs (Figure 6.2) and has been developed to support transcription of the hands’ activity: handshape, orientation, location, and movement (Table 6.2). G-SiGML also allows researchers to represent non-manual features: facial expressions, body expressions, and mouthing.
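To make the encoding idea concrete, the sketch below builds a SiGML-like XML fragment for one sign with Python’s standard library. The element and attribute names here are simplified approximations chosen for illustration, not the exact SiGML schema, and the HNS string is a placeholder:

```python
import xml.etree.ElementTree as ET

def sign_to_xml(gloss: str, hamnosys: str, mouthing: str) -> str:
    """Build a simplified SiGML-like fragment for one sign.

    The real schema encodes handshape, orientation, location, and movement
    as individual HNS symbols and supports several non-manual tiers; this
    sketch only shows the overall shape: one sign element carrying the
    gloss, a manual (HNS) description, and one non-manual channel.
    """
    sign = ET.Element("hns_sign", gloss=gloss)
    manual = ET.SubElement(sign, "hamnosys_manual")
    manual.text = hamnosys
    nonmanual = ET.SubElement(sign, "hamnosys_nonmanual")
    ET.SubElement(nonmanual, "mouthing_tier").text = mouthing
    return ET.tostring(sign, encoding="unicode")

print(sign_to_xml("NURSE", "(HNS symbol string)", "infirmiere"))
```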

Figure 6.2 HNS description of NURSE in LSF-CH: gloss (top); image with cross movement represented by arrows (middle); HamNoSys (HNS) notation (bottom)

Table 6.2 HNS symbols for NURSE in LSF-CH, based on Smith (2013)

Feature | Dominant hand (right hand) | Non-dominant hand (left hand)
Handshape | The hand forms a closed fist with the thumb extended. | The hand forms a closed fist.
Orientation | The extension of the index finger is oriented to the signer’s left and the orientation of the palm to the left and down the axis of rotation. | The extension of the index finger is directed toward the front of the body, to the right of the signer, and the orientation of the palm downwards.
Location | The right thumb touches the signer’s right shoulder. | The hand is located in front of the lower abdomen.
Movement | The hand moves down, forward, up slightly on the outside left of the signer and then moves to the right. | No movement is made.

To facilitate the production of the G-SiGML code, we developed the SigLa platform. Its aim is to generate G-SiGML from two main resources: 1) a lexicon that associates individual signs (named with glosses) with their HNS representation; and 2) a synchronous context-free grammar that productively maps source sentences into their corresponding sign tables.

Sign tables (Table 6.3, below) are intermediate representations of signed utterances (Rayner, 2016). They specify a sequence of glosses (the manual signs defined via HNS) and associate them with non-manual features. The tables consist of eight rows that represent the parallel channels of signed output. The first row, GLOSS, specifies the sequence of glosses. The second row, APERTURE, refers to the degree of openness of the eyes, for example, ClosedLeft or Small. The third row, BODY, describes the movement of the body, for example, RotateLeft or TiltRight. The fourth row, EYEBROWS, describes the movement of the eyebrows, for example, Up or LeftUp. The fifth row, GAZE, indicates where the signer is looking, for example, Down or LeftUp. The sixth row, HEAD, describes the movement of the head, for example, TurnRight or TiltedBack. The seventh row, SHOULDERS, refers to the movement of the shoulders, for example, RaiseLeft or HunchBothForward. The eighth and last row, MOUTHING, describes the movement of the lips, cheeks, tongue, or teeth. The associated grammar describes the link between these sign tables and generated sentences, using variables (terminal and non-terminal symbols) as described in Rayner (2016).

Table 6.3 Sign table for the sentence “I am a cardiologist”

Gloss     | BE_1SG   | DOCTOR    | SPECIALIST | HEART
Aperture  | Wide     | Wide      | Wide       | Wide
Body      | Straight | Straight  | Straight   | TiltBack
Eyebrows  | Neutral  | Neutral   | Up         | Neutral
Gaze      | Neutral  | Right     | Neutral    | Down
Head      | Neutral  | TurnRight | Neutral    | Neutral
Shoulders | Neutral  | Neutral   | Neutral    | Neutral
Mouthing  | null     | medsa     | spesialis  | kO:
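As a rough illustration, a sign table of this kind can be modelled as a mapping from channel names to equal-length rows. The values below are taken from Table 6.3 (the segmentation of the mouthing row follows our reading of the table); the dict layout itself is ours, not SigLa's internal representation.

```python
# Table 6.3 as a simple channel -> row mapping (illustrative layout only).
sign_table = {
    "GLOSS":     ["BE_1SG", "DOCTOR", "SPECIALIST", "HEART"],
    "APERTURE":  ["Wide", "Wide", "Wide", "Wide"],
    "BODY":      ["Straight", "Straight", "Straight", "TiltBack"],
    "EYEBROWS":  ["Neutral", "Neutral", "Up", "Neutral"],
    "GAZE":      ["Neutral", "Right", "Neutral", "Down"],
    "HEAD":      ["Neutral", "TurnRight", "Neutral", "Neutral"],
    "SHOULDERS": ["Neutral", "Neutral", "Neutral", "Neutral"],
    "MOUTHING":  ["null", "medsa", "spesialis", "kO:"],
}

# The eight parallel channels must stay synchronised with the gloss
# sequence: exactly one cell per sign in every row.
n_signs = len(sign_table["GLOSS"])
assert all(len(row) == n_signs for row in sign_table.values())

def sign_at(table: dict, i: int) -> dict:
    """Read one column: everything needed to animate the i-th sign."""
    return {channel: row[i] for channel, row in table.items()}

print(sign_at(sign_table, 3))  # the HEART sign with its non-manual features
```

Reading a column (rather than a row) gathers the manual sign and all its simultaneous non-manual features, which mirrors how the animation is assembled sign by sign.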

Once the lexicon and the synchronous grammar are ready, they can be uploaded to the SigLa platform and compiled. SigLa then produces the G-SiGML code for all sentences generated by the grammar, or for specific rules only. During generation, each element of the sign table is mapped to the corresponding G-SiGML element. SigLa also facilitates rule development: it stores all necessary resources, lets the grammar developer produce the signed animation for a sentence, and enables modification of the corresponding grammar rule if necessary. Table 6.4 shows the resulting G-SiGML representation of the sign NURSE in LSF-CH.
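The table-to-SiGML mapping step can be sketched as follows. This sketch emits a deliberately simplified XML skeleton: the tag names (`sign`, `manual`, `nonmanual`, etc.) and the placeholder HNS strings are illustrative only and do not reproduce the real G-SiGML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for the G-SiGML generation step:
# each column of the sign table becomes one <sign> element whose children
# carry the manual HNS string and the non-manual channel values.
NONMANUAL_CHANNELS = ("APERTURE", "BODY", "EYEBROWS", "GAZE",
                      "HEAD", "SHOULDERS", "MOUTHING")

def table_to_xml(sign_table: dict, hns_lexicon: dict) -> str:
    root = ET.Element("sigml")
    for i, gloss in enumerate(sign_table["GLOSS"]):
        sign = ET.SubElement(root, "sign", gloss=gloss)
        manual = ET.SubElement(sign, "manual")
        manual.text = hns_lexicon.get(gloss, "")   # HNS string for the gloss
        nonmanual = ET.SubElement(sign, "nonmanual")
        for channel in NONMANUAL_CHANNELS:
            ET.SubElement(nonmanual, channel.lower()).text = sign_table[channel][i]
    return ET.tostring(root, encoding="unicode")

# A two-sign table (values from Table 6.3) and placeholder HNS strings.
table = {
    "GLOSS": ["BE_1SG", "HEART"],
    "APERTURE": ["Wide", "Wide"], "BODY": ["Straight", "TiltBack"],
    "EYEBROWS": ["Neutral", "Neutral"], "GAZE": ["Neutral", "Down"],
    "HEAD": ["Neutral", "Neutral"], "SHOULDERS": ["Neutral", "Neutral"],
    "MOUTHING": ["null", "kO:"],
}
hns = {"BE_1SG": "HNS_BE_1SG", "HEART": "HNS_HEART"}
print(table_to_xml(table, hns))
```

The essential point the sketch captures is that every cell of the sign table maps to one element of the generated XML, so the avatar player receives the manual and non-manual channels in a single synchronised document.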

Table 6.4 G-SiGML code for the gloss NURSE in LSF-CH

6.4.3 Speech2sign Version of BabelDr

When new sentences are added to the BabelDr application, their G-SiGML code is generated with the SigLa platform. The code is imported into BabelDr with the metadata and stored with the other translation resources (SL human videos and written translations in other languages) so that it can be played directly in real time with JASigning in the BabelDr application. The two versions of BabelDr (with human videos and avatar generation) are accessible online (footnote 7), along with non-signed languages. Figure 6.3 shows the doctor and patient views for both versions.

Figure 6.3 Doctor and patient view of BabelDr with human and avatar videos

As of March 2022, the glossary consists of 608 HamNoSys entries: 370 nouns, 82 verbs/actions, 57 adjectives, 36 adverbs, 19 transfer signs (footnote 8), 15 pronouns, 8 prepositions, 5 forms of punctuation, 3 interjections, and 3 conjunctions. The grammar consists of 438 rules with 121 non-terminal and 381 terminal symbols, and can generate G-SiGML code for 1,234,828 sentences. For compliance with the FAIR principles (Findable, Accessible, Interoperable, and Reusable), the parallel corpus of human and avatar videos is now fully available on the Yareta Swiss repository in .webm and .mp4 formats (for the human recordings) and as G-SiGML files (for the avatar-based version) (footnote 9).
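The sentence count quoted above is a property that can be computed directly from a context-free grammar. The toy sketch below (an invented mini-grammar, not the BabelDr one) counts the finite language generated by a rule set, illustrating how a few hundred rules can multiply out to millions of generable sentences.

```python
from functools import lru_cache

# Toy acyclic CFG, invented for illustration. Non-terminals are keys;
# each production is a list of symbols; anything not in the dict is a
# terminal word.
grammar = {
    "S": [["INTRO", "QUESTION"]],
    "INTRO": [["hello"], ["good", "morning"]],
    "QUESTION": [["do", "you", "have", "SYMPTOM"],
                 ["is", "the", "pain", "LOCATION"]],
    "SYMPTOM": [["fever"], ["nausea"], ["a", "headache"]],
    "LOCATION": [["here"], ["in", "the", "chest"]],
}

@lru_cache(maxsize=None)
def count(symbol: str) -> int:
    """Number of distinct sentences derivable from `symbol`."""
    if symbol not in grammar:          # terminal word: exactly one string
        return 1
    total = 0
    for production in grammar[symbol]:
        combos = 1
        for sym in production:         # choices in a production multiply
            combos *= count(sym)
        total += combos                # alternative productions add up
    return total

print(count("S"))  # 2 intros * (3 symptoms + 2 locations) = 10
```

Because choices multiply along a production and add across alternatives, sentence counts grow combinatorially with the number of rules, which is why 438 rules can cover over a million sentences.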

6.5 Qualitative Evaluation on the Perception of Avatars and Human Videos

How do Deaf people in French-speaking European countries perceive the use of human and avatar videos in the BabelDr context? To find out, we created an online questionnaire (Bouillon et al., 2021). The survey, launched in four languages (LSF, LSF-CH, French Belgian Sign Language [LSFB], and written French), was implemented through LimeSurvey, an accessible online survey platform. A "snowball" sampling method was used to recruit respondents, who were given six weeks to participate. Thirty-two questions were divided into six sections on the following themes: 1) background of the videos; 2) additional images added to clarify content; 3) subtitles; 4) screen format and size; 5) advertisements and logos displayed on the screen; and 6) perception of the use of three-dimensional avatars.

The questionnaire, written in French, was made accessible through videos in LSF, LSFB, and LSF-CH, all made by Deaf professionals who are native signers of these sign languages. Each theme was introduced by a short video summarizing the topic covered. Responses were limited to "yes/no" or multiple-choice (Haug et al., 2015). The questionnaire is available on the Research outputs tab of the Swiss Centre for Barrier-free Communication (footnote 10).

We focus here on results concerning the appreciation of virtual characters, on which four questions were asked.

Past studies have determined that Deaf people may have problems understanding the signs performed by an avatar (Huenerfauth et al., 2008; Kipp et al., 2011). While our current work also demonstrated a certain preference for traditional human interpretation (Question 2: 64 percent; N=16/28), we find that the use of automatic technologies associated with virtual characters is not without interest for the target audience. Considering the abstention rate (Question 1: 9.7 percent; N=3/31) and the negative rate (Question 1: 12.9 percent; N=4/31), our study shows that most Deaf respondents (Question 1: 77.4 percent; N=24/31) do find the video information provided by a virtual character useful in the medical context (Figure 6.4).

Figure 6.4 Results of our online survey. “Question 1. Do you consider that videos with avatars can be useful?” (N=31)

Concerning the display of avatars on the screen, a majority of respondents (Question 3: 64 percent; N=18/28) prefer the signer to be shown front-on only. The options proposing multiple perspectives on the same screen or a 45° left/right perspective were almost unanimously rejected (Figure 6.5).

Figure 6.5 Results of our online survey. “Question 3. To better understand the signer, which video would you prefer?” (N=28)

Respondents could also leave personal comments if they wished. One of our participants was particularly conscious of the possibility of customizing avatars (e.g., with respect to physical appearance, age, sex, and origin), so that no patient need feel uneasy or excluded:

(…) l’avatar est intéressant, car on peut choisir enfant, homme, femme, blanc, noir, etc. selon l’éthique auquel certains peuvent s’identifier sans aucune discrimination.

(…) the avatar is interesting because we can choose a child, a man, a woman, white, black, etc., according to the ethnicity [our Deaf respondent wrote "éthique" (ethics), but "ethnicity" is clearly intended] with which people can identify, without any discrimination [our translation]

Some of our respondents provided suggestions for improving the avatar. In particular, they proposed emphasizing certain movements to make sentences more understandable, for example, adding shoulder shrugs, frowning eyebrows, and a more intense gaze:

Pour toute interrogation, on hausse les épaules quand il s’agit des questions.

(…) je trouve qu’il manque des expressions faciales pour montrer que c’est une question.

Every time we ask a question, we shrug our shoulders (…) I find facial expressions are missing to show that we are asking a question [our translation].

6.6 Conclusions and Future Work

This chapter examines the difficulties faced by Deaf people in gaining access to healthcare. In contrast to the few other existing translation tools for the Deaf, the BabelDr project aims to create a pipeline for productive development of quality sign language resources and to make those resources available to doctors for diagnosis via a flexible speech-enabled fixed-phrase translator, or phraselator. To produce the signed videos used in BabelDr, we have developed innovative platforms directly linked to BabelDr, including LiteDevTool and SigLa. Our results include several corpora: (1) a reference corpus of human LSF videos for medical questions and instructions, and (2) a large artificial corpus of SiGML representations.

An initial questionnaire for Deaf people concerning their perception of avatars showed that 77 percent of respondents found the information conveyed by the avatars useful in this context, although they preferred human videos. Even if avatars are far from perfect, this technology seems promising for emergency situations and for the production of sign language video corpora.

This research is pioneering in our field. To our knowledge, this is the first automatic speech translation system with sign language used in hospitals for diagnosis. The system will soon be evaluated on diagnostic tasks with the Deaf population. That evaluation will also allow us to validate the quality of the human and avatar videos and to compare patients' satisfaction levels on the diagnostic task. In another study, we showed that Arabic and Albanian patients have less confidence in their doctor when a speech translation system uses speech synthesis instead of human recordings for translated output (Gerlach et al., 2023). It would be interesting to replicate that study for Deaf patients with human and avatar videos.

The SigLa platform will soon be made available to researchers. Experiments will be conducted to involve Deaf people in the development of the SigLa grammar and the validation of the SiGML code. We also intend to evaluate the effort needed to port the grammar to another Swiss sign language, such as Italian Sign Language, or to a closely related sign language, such as LSFB. To facilitate the translation of medical sentences into real-human videos, we plan to employ Deaf people currently studying in a new training program developed at our Faculty, a Diploma in Advanced Studies (DAS) for Deaf translators.

6.7 Acknowledgments

The authors would like to thank the translation work group, especially Tanya Sebaï, Adrien Pelletier, Sonia Tedjani, Evelyne Rigot, and Valentin Marti; Jonathan Mutal for the development of the SigLa platform; and Johanna Gerlach for her comments on this chapter.

Footnotes

5 The Turkish Sign Language (TID) videos are from the medical corpus of the larger BosphorusSign corpus (Camgöz et al., 2016).

8 Specific signs used to explain by demonstration. There are several different sorts of transfer, such as size and shape transfer, situation transfer, character transfer, and so on (Cuxac, 1996; Tournadre and Hamm, 2018).

References

Abou-Abdallah, M., and Lamyman, A. 2021. Exploring Communication Difficulties with Deaf Patients. Clinical Medicine, 21(4), pages e380–e383. DOI: 10.7861/clinmed.2021-0111
Albrecht, U.-V., Jungnickel, T., and von Jan, U. 2015. ISignIT – Communication App and Concept for the Deaf and Hard of Hearing. In J. Mantas, A. Hasman, and M. S. Househ (Eds.), Enabling Health Informatics Applications, Studies in Health Technology and Informatics (Vol. 213, pages 283–286). Presented at the 13th International Conference, ICIMTH 2015, Athens, Greece, July 9–11, 2015. IOS Press. DOI: 10.3233/978-1-61499-538-8-283
Bangham, J. A. 2000. Signing for the Deaf Using Virtual Humans. IEE Seminar on Speech and Language Processing for Disabled and Elderly People (Vol. 2000, pages 4–4). London, UK: IEE. DOI: 10.1049/ic:20000134
Binggeli, T. 2015. "Für eine Welt ohne Barrieren". Besteht in der Schweiz ein barrierefreier Zugang zum Informationserwerb sowie zur Inanspruchnahme von medizinischen Dienstleistungen für Menschen mit Hörbehinderung? UFL Private Universität im Fürstentum Liechtenstein, Liechtenstein.
Binggeli, T., and Hohenstein, C. 2020. Deaf Patients' Access to Health Services in Switzerland: An Interview with Dr. Tatjana Binggeli, Medical Scientist and President of the Swiss Federation of the Deaf SGB-FSS. In Hohenstein, C., and Lévy-Tödter, M. (Eds.), Multilingual Healthcare, FOM-Edition (pages 333–347). Wiesbaden: Springer Fachmedien Wiesbaden. DOI: 10.1007/978-3-658-27120-6_13
Bouillon, P., David, B., Strasly, I., and Spechbach, H. 2021. A Speech Translation System for Medical Dialogue in Sign Language: Questionnaire on User Perspective of Videos and the Use of Avatar Technology. Barrier-free Communication (pages 46–54). Presented at the 3rd Swiss Conference, BFC 2020, Winterthur (online), June 29–July 4, 2020. Winterthur, Switzerland: ZHAW Zürcher Hochschule für Angewandte Wissenschaften.
Bouillon, P., Gerlach, J., Spechbach, H., Tsourakis, N., and Halimi, S. 2017. BabelDr vs Google Translate: A User Study at Geneva University Hospitals (HUG). 20th Annual Conference of the European Association for Machine Translation (EAMT) (pages 747–752). Prague, Czech Republic.
Braem, P. B., and Rathmann, C. 2010. Transmission of Sign Languages in Northern Europe. In Brentari, D. (Ed.), Cambridge Language Surveys: Sign Languages (1st ed., pages 19–45). Cambridge University Press. DOI: 10.1017/CBO9780511712203.003
Bragg, D., Koller, O., Bellard, M., et al. 2019. Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective. Presented at the 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'19), Pittsburgh, PA, USA, October 28–30, 2019. ACM. DOI: 10.1145/3308561.3353774
Camgöz, N. C., Kındıroğlu, A. A., Karabüklü, S., et al. 2016. BosphorusSign: A Turkish Sign Language Recognition Corpus in Health and Finance Domains. Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016) (pages 1383–1388). Portorož, Slovenia: European Language Resources Association (ELRA).
Cantero, O. 2016. Accès aux soins et communication : Vers une passerelle entre la communauté sourde et les soignants de Suisse romande. University of Lausanne, Lausanne, Switzerland.
Chiriac, I. A., Stoicu-Tivadar, L., and Podoleanu, E. 2015. Comparing Video and Avatar Technology for a Health Education Application for Deaf People. In R. Cornet, L. Stoicu-Tivadar, A. Hörbst, C. L. Parra Calderón, S. K. Andersen, and M. Hercigonja-Szekeres (Eds.), Digital Healthcare Empowering Europeans, Studies in Health Technology and Informatics (Vol. 210, pages 516–520). Presented at the 26th Medical Informatics in Europe Conference (MIE2015), Madrid, Spain, May 27–29, 2015. Amsterdam: IOS Press. DOI: 10.3233/978-1-61499-512-8-516
Chiriac, I. A., Stoicu-Tivadar, L., and Podoleanu, E. 2016. Romanian Sign Language Oral Health Corpus in Video and Animated Avatar Technology. In Balas, V. E., Jain, L. C., and Kovačević, B. (Eds.), Soft Computing Applications (SOFA 2014), Advances in Intelligent Systems and Computing (Vol. 356, pages 279–293). Cham: Springer International Publishing. DOI: 10.1007/978-3-319-18296-4_24
Cuxac, C. 1996. Fonctions et structures de l'iconicité des langues des signes : Analyse descriptive d'un idiolecte parisien de la langue des signes française (Thèse de doctorat en Linguistique). University Paris 5.
Ebling, S., and Glauert, J. 2013. Exploiting the Full Potential of JASigning to Build an Avatar Signing Train Announcements. Proceedings of the Third International Symposium on Sign Language Translation and Avatar Technology (SLTAT). Chicago. DOI: 10.5167/uzh-85716
Ebling, S., Johnson, S., Wolfe, R., et al. 2017. Evaluation of Animated Swiss German Sign Language Fingerspelling Sequences and Signs. In Antona, M., and Stephanidis, C. (Eds.), Universal Access in Human–Computer Interaction: Designing Novel Interactions (pages 3–13). Cham: Springer International Publishing. DOI: 10.1007/978-3-319-58703-5_1
Efthimiou, E., Fotinea, S.-E., Hanke, T., et al. 2012. The Dicta-Sign Wiki: Enabling Web Communication for the Deaf. In Miesenberger, K., Karshmer, A., Penaz, P., and Zagler, W. (Eds.), Computers Helping People with Special Needs (Vol. 7383, pages 205–212). Berlin-Heidelberg: Springer. DOI: 10.1007/978-3-642-31534-3_32
Elliott, R., Glauert, J., Jennings, V., and Kennaway, R. 2004. An Overview of the SiGML Notation and SiGMLSigning Software System. In O. Streiter and C. Vettori (Eds.), Proceedings of the Sign Language Processing Satellite Workshop of the Fourth International Conference on Language Resources and Evaluation (LREC 2004) (pages 98–104). Lisbon, Portugal.
Emond, A., Ridd, M., Sutherland, H., et al. 2015. Access to Primary Care Affects the Health of Deaf People. British Journal of General Practice, 65(631), pages 95–96. DOI: 10.3399/bjgp15X683629
Flores, G., Laws, M. B., Mayo, S. J., et al. 2003. Errors in Medical Interpretation and Their Potential Clinical Consequences in Pediatric Encounters. Pediatrics, 111(1), pages 6–14. DOI: 10.1542/peds.111.1.6
Gerlach, J., Bouillon, P., Troqe, R., Halimi Mallem, I. S., and Spechbach, H. 2023. Patient Acceptance of Translation Technology for Medical Dialogues in Emergency Situations. In Translating Crises (pages 253–272). Bloomsbury Academic. DOI: 10.5040/9781350240117.ch-15
Haug, T., Herman, R., and Woll, B. 2015. Constructing an Online Test Framework, Using the Example of a Sign Language Receptive Skills Test. Deafness and Education International, 17(1), pages 3–7. DOI: 10.1179/1557069X14Y.0000000035
Huenerfauth, M., Zhao, L., Gu, E., and Allbeck, J. 2008. Evaluation of American Sign Language Generation by Native ASL Signers. ACM Transactions on Accessible Computing, 1(1), pages 1–27. DOI: 10.1145/1361203.1361206
ICESC. 1976. International Covenant on Economic, Social and Cultural Rights. New York, NY, USA: United Nations; UN document A/6316.
Johnston, T., and Napier, J. 2010. Medical Signbank: Bringing Deaf People and Linguists Together in the Process of Language Development. Sign Language Studies, 10(2), pages 258–275.
Khander, A., Farag, S., and Chen, K. T. 2018. Identification and Evaluation of Medical Translator Mobile Applications Using an Adapted APPLICATIONS Scoring System. Telemedicine and e-Health, 24(8), pages 594–603. DOI: 10.1089/tmj.2017.0150
Kipp, M., Heloir, A., and Nguyen, Q. 2011. Sign Language Avatars: Animation and Comprehensibility. In H. H. Vilhjálmsson, S. Kopp, S. Marsella, and K. R. Thórisson (Eds.), Intelligent Virtual Agents, Lecture Notes in Computer Science (Vol. 6895, pages 113–126). Presented at the 10th International Conference, IVA 2011, Reykjavik, Iceland, September 15–17, 2011. Berlin, Heidelberg: Springer. DOI: 10.1007/978-3-642-23974-8_13
Kuenburg, A., Fellinger, P., and Fellinger, J. 2016. Health Care Access Among Deaf People. Journal of Deaf Studies and Deaf Education, 21(1), pages 1–10. DOI: 10.1093/deafed/env042
Major, G. 2012. Not Just "How the Doctor Talks": Healthcare Interpreting as Relational Practice (Unpublished Doctoral Thesis). Macquarie University, Sydney, Australia.
Major, G., Napier, J., Ferrara, L., and Johnston, T. 2013. Exploring Lexical Gaps in Australian Sign Language for the Purposes of Health Communication. Communication and Medicine, 9(1), pages 37–47. DOI: 10.1558/cam.v9i1.37
Marks-Sultan, G., Kurt, S., Leyvraz, D., and Sprumont, D. 2016. The Legal and Ethical Aspects of the Right to Health of Migrants in Switzerland. Public Health Reviews, 37(1), page 15. DOI: 10.1186/s40985-016-0027-2
Middleton, A., Niruban, A., Girling, G., and Myint, P. K. 2010. Communicating in a Healthcare Setting with People Who Have Hearing Loss. British Medical Journal, 341(7775), pages 726–729. DOI: 10.1136/bmj.c4672
Noack, E. M., Schäning, J., and Müller, F. 2022. A Multilingual App for Providing Information to SARS-CoV-2 Vaccination Candidates with Limited Language Proficiency: Development and Pilot. Vaccines, 10(3), page 360. DOI: 10.3390/vaccines10030360
Ong, L. M. L., de Haes, J. C. J. M., Hoos, A. M., and Lammes, F. B. 1995. Doctor–Patient Communication: A Review of the Literature. Social Science and Medicine, 40(7), pages 903–918. DOI: 10.1016/0277-9536(94)00155-m
Padden, C., and Humphries, T. 1988. Deaf in America: Voices from a Culture. Cambridge, MA: Harvard University Press.
Panayiotou, A., Gardner, A., Williams, S., et al. 2019. Language Translation Apps in Health Care Settings: Expert Opinion. JMIR mHealth and uHealth, 7(4), page e11316. DOI: 10.2196/11316
Papastratis, I., Chatzikonstantinou, C., Konstantinidis, D., et al. 2021. Artificial Intelligence Technologies for Sign Language. Sensors, 21(17), page 5843. DOI: 10.3390/s21175843
Pollard, R. Q., Betts, W. R., Carroll, J. K., et al. 2014. Integrating Primary Care and Behavioral Health with Four Special Populations: Children with Special Needs, People with Serious Mental Illness, Refugees, and Deaf People. American Psychologist, 69(4), pages 377–387. DOI: 10.1037/a0036220
Preston, P. 1995. Mother Father Deaf: The Heritage of Difference. Social Science and Medicine, 40(11), pages 1461–1467. DOI: 10.1016/0277-9536(94)00357-y
Prillwitz, S., Leven, R., Zienert, H., et al. 1989. Hamburg Notation System for Sign Language: An Introductory Guide. International Studies on Sign Language and the Communication of the Deaf, 5. Hamburg, Germany: Institute of German Sign Language and Communication of the Deaf, University of Hamburg.
Ralston, E., Zazove, P., and Gorenflo, D. W. 1996. Physicians' Attitudes and Beliefs about Deaf Patients. The Journal of the American Board of Family Practice, 9(3), pages 167–173.
Rayner, M. 2016. Using the Regulus Lite Speech2Sign Platform.
Roelofsen, F., Esselink, L., Mende-Gillings, S., and Smeijers, A. 2021. Sign Language Translation in a Healthcare Setting. In Mitkov, R., Sosoni, V., Giguère, J. C., Murgolo, E., and Deysel, E. (Eds.), Proceedings of the Translation and Interpreting Technology Online Conference (TRITON 2021) (pages 110–124). Shoumen: INCOMA Ltd. DOI: 10.26615/978-954-452-071-7_013
Sáfár, E., and Glauert, J. 2012. Computer Modelling. In Pfau, R., Steinbach, M., and Woll, B. (Eds.), Sign Language: An International Handbook, Handbooks of Linguistics and Communication Science (pages 1075–1101). Berlin: De Gruyter Mouton.
Scheier, D. B. 2009. Barriers to Health Care for People with Hearing Loss: A Review of the Literature. The Journal of the New York State Nurses' Association, 40(1), pages 4–10.
SGB-FSS. 2021. Stratégie 2021–2025. Fédération Suisse des Sourds.
Smeijers, A., and Pfau, R. 2009. Towards a Treatment for Treatment: On Communication between General Practitioners and Their Deaf Patients. The Sign Language Translator and Interpreter, 3(1), pages 1–14.
Smith, R. (Ed.). 2013. HamNoSys 4.0: User Guide. Blanchardstown: Institute of Technology.
Strasly, I. (upcoming). Accessibilité et langue des signes en Suisse romande : Le projet BabelDr et l'accès à la santé. Université de Genève, Faculté de traduction et d'interprétation, Geneva, Switzerland.
Strasly, I., Sebaï, T., Rigot, E., et al. 2018. Le projet BabelDr : Rendre les informations médicales accessibles en Langue des Signes de Suisse Romande (LSF-SR). Proceedings of the 2nd Swiss Conference on Barrier-free Communication: Accessibility in Educational Settings (BFC 2018) (pages 92–96). Geneva, Switzerland.
Süzgün, M. M., Özdemir, H., Camgöz, N. C., et al. 2015. HospiSign: An Interactive Sign Language Platform for Hearing Impaired. Journal of Naval Science and Engineering, 11(3), pages 75–92.
Tournadre, N., and Hamm, M. 2018. Une approche typologique de la langue des signes française. TIPA. Travaux interdisciplinaires sur la parole et le langage, 34. DOI: 10.4000/tipa.2568
Turner, A. M., Choi, Y. K., Dew, K., et al. 2019. Evaluating the Usefulness of Translation Technologies for Emergency Response Communication: A Scenario-Based Study. JMIR Public Health and Surveillance, 5(1), page e11171. DOI: 10.2196/11171
United Nations. 2006. Convention on the Rights of Persons with Disabilities. A/RES/61/106.
Wasserman, M., Renfrew, M. R., Green, A. R., et al. 2014. Identifying and Preventing Medical Errors in Patients with Limited English Proficiency: Key Findings and Tools for the Field. Journal for Healthcare Quality, 36(3), pages 5–16. DOI: 10.1111/jhq.12065
WHO. 1948. Constitution of the World Health Organization. International Health Conference, New York, June 19–22, 1946 (pages 1–18). New York, NY.
Zwitserlood, I., Verlinden, M., Ros, J., and van der Schoot, S. 2005. Synthetic Signing for the Deaf: eSign. Proceedings of the Conference and Workshop on Assistive Technologies for Vision and Hearing Impairment.
