Live Aquarium Show Information Support for Deaf and Hard of Hearing People

WAKATSUKI Daisuke1), KATO Nobuko1), KOBAYASHI Makoto2), MIYAGI Manabi3), KITAMURA Masami1), NAMATAME Miki1)

1) Department of Industrial Information, Faculty of Industrial Technology, Tsukuba University of Technology, Amakubo 4-3-15, Tsukuba, Ibaraki 305-8521, Japan
E-mail: waka@a.tsukuba-tech.ac.jp
2) Department of Computer Science, Faculty of Health Sciences, Tsukuba University of Technology
3) Division of Research on Support for the Hearing and Visually Impaired, Research and Support Center on Higher Education for the Hearing and Visually Impaired, Tsukuba University of Technology

Abstract: Deaf and hard of hearing (DHH) people often miss the opportunity to learn from and enjoy interactive live museum presentations that use voice and sound. We focused on an aquarium live show with audience cheering and attempted to deliver captions of the trainers' and narrators' speech to DHH visitors' smartphones. Automatic speech recognition was not usable because the venue was very noisy, so we typed the captions manually. Because typing Japanese with Kana-Kanji conversion is more complicated than typing English, we used a system of our own development that allows two or more people to take turns inputting text in real time. Captions can be viewed on participants' smartphones. Fourteen participants answered a questionnaire. The results suggest that participants were able to enjoy the live show more with the help of real-time captions; however, the usability of the captions was low. Several participants noted that they strongly felt the caption delay when the trainers and other spectators interacted, and that they did not have time to read long captions. Our future work is to optimize caption delay and length.

Keywords: Accessible Museum, Deaf and Hard of Hearing, Live Aquarium Show, Real-time Caption

1. Introduction

Many museums have researched and practiced universal design. For example, all exhibits at the Smithsonian Museum (Washington, DC, United States) are displayed in accordance with the museum's accessibility manual. Such attempts at universal design hold social significance. Because people spend most of their lives outside the school environment, learning in museums is important [1]. Moreover, because museums enhance educational effectiveness, they also function as centers for education undertaken outside school in cooperation with schools [2]. Museums should therefore promote educational and cultural activities in their role as lifelong learning facilities.

On April 1, 2016, the Disability Discrimination Law was enacted in Japan to help realize a symbiotic society [3]. It requires schools "to reasonably accommodate individuals." However, such requirements are currently limited to classroom-based learning and do not address education outside the classroom [4].

Many Japanese museums have introduced advanced technologies and offer rich experiences for able-bodied visitors [5]. However, insufficient attention has been paid to information accessibility, and learning opportunities are not equal for all museum visitors [6][7]. We therefore propose an information support system that allows all museum users to "learn together" in an inclusive environment.

We focused on an aquarium's dolphin and sea lion show. Our aim is to support the learning activities of deaf and hard of hearing (DHH) people and to improve information accessibility through participatory design, including design for all, universal design, and inclusive design.

Many attempts have been made to provide live speech-to-text captions in the classroom. However, it is difficult to provide live captions at shows in aquariums, which lack captioning facilities, and there are few reports on the effects of such captions. We have developed a web-based speech-to-text interpretation system, "captiOnline," which can produce and deliver captions anytime, anywhere [8]. Because the system operates entirely online, users and typists can view and type captions in a web browser on any smartphone or laptop connected to the Internet. In this paper, we describe a system that provides live captions to DHH visitors and analyze its use during a live aquarium show.

2. The System

Because aquariums generally lack the facilities required to present captions, such as monitors, power supplies, and in-house networks, we used laptops and tablets (iPad Pro) connected via a wide area network to produce and view captions. Users and typists accessed the captiOnline server through a web browser on their laptops and tablets. Figure 1 summarizes the system.

Fig. 1. Summary of the information support system for DHH visitors.

Typists listen to the live show and type the speech to create captions. For live speech-to-text caption production for DHH visitors, we adapted a captioning method called "Renkei-Nyuryoku" (combination input), in which two people create captions collaboratively: for example, one typist enters the first half of a sentence while the other enters the second half. The input speed is approximately 1.5 to 2.0 times that of a single typist. captiOnline is designed for Renkei-Nyuryoku.

Unlike classroom lessons, audience elements such as clapping and cheering are important to the enjoyment of live shows. Therefore, these sounds are shown as floating subtitles, which move from right to left in front of the captions (Fig. 2).

Fig. 2. Floating subtitle moving from right to left in front of the captions.
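To make the viewer behavior above concrete, the following is a minimal browser-side sketch in TypeScript. It is an illustration only: the WebSocket endpoint, the JSON message format, and the element IDs are assumptions of this sketch, not captiOnline's actual interface (see [8] for the system itself).

```typescript
// Hypothetical caption viewer sketch (not captiOnline's actual client).
// Assumes the page contains two elements:
//   <div id="captions"> - scrolling caption area
//   <div id="overlay">  - full-width layer in front of the captions

type CaptionEvent =
  | { kind: "caption"; text: string }   // typed speech
  | { kind: "floating"; text: string }; // audience sounds: clapping, cheering

const captionArea = document.getElementById("captions") as HTMLDivElement;
const overlay = document.getElementById("overlay") as HTMLDivElement;

// Append a caption line and keep the newest line visible.
function showCaption(text: string): void {
  const line = document.createElement("p");
  line.textContent = text;
  captionArea.appendChild(line);
  captionArea.scrollTop = captionArea.scrollHeight;
}

// Animate a floating subtitle from the right edge to the left edge,
// in front of the captions (cf. Fig. 2), then remove it.
function showFloating(text: string): void {
  const span = document.createElement("span");
  span.textContent = text;
  span.style.position = "absolute";
  span.style.left = "0";
  span.style.whiteSpace = "nowrap";
  span.style.top = `${10 + Math.random() * 70}%`; // vary height to reduce overlap
  overlay.appendChild(span);
  const travel = overlay.clientWidth + span.clientWidth;
  const anim = span.animate(
    [
      { transform: `translateX(${overlay.clientWidth}px)` },
      { transform: `translateX(${-span.clientWidth}px)` },
    ],
    { duration: (travel / 150) * 1000, easing: "linear", fill: "forwards" } // ~150 px/s
  );
  anim.onfinish = () => span.remove(); // each subtitle appears only once
}

// The endpoint and message shape are assumptions for this sketch.
const ws = new WebSocket("wss://example.org/live-captions");
ws.onmessage = (ev: MessageEvent<string>) => {
  const msg = JSON.parse(ev.data) as CaptionEvent;
  if (msg.kind === "caption") showCaption(msg.text);
  else showFloating(msg.text);
};
```

Note that each floating subtitle is removed after a single pass, so a viewer who looks away misses it; Section 5 returns to this limitation.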
3. Experiment

On September 26, 2017, we conducted an experiment at a live show (Live Performance of Dolphins and California Sea Lions, Aqua World Ibaraki Prefectural Oarai Aquarium). The show was approximately 25 minutes long and included a stage drama.

Fourteen DHH university students viewed the live show and read captions on tablets. Subjects placed the tablets on stands or held them in their hands, according to their preferences. Two typists listened to the trainers' speech and other announcements and produced captions using Renkei-Nyuryoku. Because there were many improvisations and interactions with the audience, the typists typed all captions live without scripts and added floating subtitles for cheering and clapping. After the live show, we conducted a questionnaire survey on this information support.
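Renkei-Nyuryoku implies that the two typists' segments can be submitted out of order, because the typist entering the second half of a sentence may finish first. The paper does not describe how captiOnline orders segments; the sketch below shows one plausible scheme, assuming hypothetical per-segment sequence numbers agreed between the typists.

```typescript
// Sketch of assembling Renkei-Nyuryoku segments in utterance order.
// The sequence-number scheme is an assumption of this example,
// not captiOnline's documented behavior.

interface Segment {
  seq: number;        // position in the utterance (hypothetical)
  typist: "A" | "B";  // which typist entered this segment
  text: string;
}

class SegmentMerger {
  private pending = new Map<number, Segment>();
  private nextSeq = 0;

  constructor(private emit: (text: string) => void) {}

  // Buffer segments until the next expected one arrives, then flush
  // every contiguous segment so captions appear in utterance order.
  push(segment: Segment): void {
    this.pending.set(segment.seq, segment);
    while (this.pending.has(this.nextSeq)) {
      const s = this.pending.get(this.nextSeq)!;
      this.pending.delete(this.nextSeq);
      this.emit(s.text);
      this.nextSeq++;
    }
  }
}

// Usage: typist B's second half arrives first but is held until
// typist A's first half has been emitted.
const out: string[] = [];
const merger = new SegmentMerger((t) => out.push(t));
merger.push({ seq: 1, typist: "B", text: "and the sea lion waved back." });
merger.push({ seq: 0, typist: "A", text: "The trainer waved, " });
console.log(out.join("")); // -> "The trainer waved, and the sea lion waved back."
```

Buffering until the next expected sequence number arrives keeps the displayed sentence in utterance order regardless of arrival order.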
4. Results

Subjects answered each questionnaire item on a 5-point scale and could optionally describe the reasons for their answers. The results are shown in Table 1.

Table 1. Questionnaire results.

All subjects enjoyed the live show, and approximately 71% said that this was because captions were available. Floating subtitles were helpful for approximately 43% of subjects, while 50% were neutral. A few subjects also stated that the floating subtitles were easy to read and did not obstruct the captions. Among subjects who could not accept the caption delay, the main reason was that they could not instantaneously follow the communication between the trainer and the audience or the timing of cheering and clapping. Subjects who felt fatigue attributed it to constantly shifting their gaze between the stage and the tablet and to holding the tablet by hand.

5. Discussion

Live captions enabled most DHH subjects to enjoy the live aquarium show. Subjects generally watched the live show and often checked the captions when they did not understand its content or between performances. Some subjects used the captions to remember the names of trainers and animals. These results suggest that summarized captions and supplementary information about the show may be more effective than verbatim captions.

Floating subtitles were helpful for understanding other elements of the show. However, because they appear only once after posting, some subjects missed them. We should therefore add a function that allows floating subtitles to appear multiple times or that allows users to view the subtitle history.

Other problems included fatigue in users' arms from holding the tablet and stands that could not fix the tablet in the optimal position. A smartphone or smart glasses could eliminate both problems.

Future work will be to make improvements based on the issues identified in this experiment and to provide better information support for the live aquarium show. Following this, we would like to study additional applications of information support, such as in other museums, allowing DHH users to "learn together" with other users.

References

[1] Bevan, B., Bell, P., Stevens, R., and Razfar, A. (eds.), LOST Opportunities: Learning in Out-of-School Time, Springer, 2013.
[2] Stocklmayer, S.M., Rennie, L.J., and Gilbert, J.K., The Roles of the Provision of Effective Science Education, Studies in Science Education, 46(1), 1–44, 2010.
[3] Cabinet Office, Government of Japan, Promotion of the Elimination of Discrimination on the Basis of Disability, 2014, http://www8.cao.go.jp/shougai/suishin/sabekai.html (in Japanese).
[4] Central Education Council, Ministry of Education, Culture, Sports, Science and Technology, Japan, Promotion of Special Support Education for Building an Inclusive Education System for the Formation of a Symbiotic Society, 2012, http://www.mext.go.jp/b_menu/shingi/chukyo/chukyo3/044/houkoku/1321667.htm (in Japanese).
[5] Yoshida, R., et al., Experience-Based Learning Support System to Enhance Child Learning in a Museum: Touching Real Fossils and "Experiencing" Paleontological Environment, Proceedings of the 12th International Conference on Advances in Computer Entertainment Technology (ACE '15), Article No. 25, 2015. doi:10.1145/2832932.2832977
[6] Murakami, Y., A Nationwide Survey on Attention to People with Visual and Hearing Impairment in Museums, Journal of the Faculty of Human Life Sciences, Prefectural University of Kumamoto, No. 4, 33–44, 1998 (in Japanese; title translated).
[7] Okuno, K., A Questionnaire Survey Result Report on Visually Handicapped People in Museums Nationwide, Bull. Kanagawa Prefect. Mus. (Nat. Sci.), No. 27, 95–106, 1998 (in Japanese).
[8] Wakatsuki, D., et al., Development of Web-Based Remote Speech-to-Text Interpretation System captiOnline, JACIII, 21(2), 310–320, 2017. doi:10.20965/jaciii.2017.p0310