Non-visual Interfaces and Network Games for Blind Users Satoshi INA Computer Science Department Tsukuba College of Technology Abstract: Visually impaired people have difficulty communicating graphical information, and it is even more difficult for them to work or play in cooperation with sighted people at a distance. We developed a non-visual method of accessing a graphical screen through the tactile and auditory senses, and applied it to network board and card games that serve as a joint workspace for blind and sighted users via communication of image, sound, and voice. We took an "IGO" type boardgame and a card game, "SEVENS", as sample subjects for a cooperative task in the workspace. We then proposed two types of human interface, both of which support direct access to graphical information on the PC screen. The first, new ActiveX-type driver software that works under Windows 95/98/NT, supports direct access to graphical information using synthetic speech and a touch panel (Nomad) with a tactile checkerboard. We introduce the details of this system and discuss its application to the interfaces of the "IGO" type boardgame and the card game "SEVENS". The second is an acoustic graphic interface. The acoustic graphic may be simply described as "graphics observed by finger and ear". It helps blind users recognize the colors, shapes, and sizes of displayed graphics via cooperative perception of the fingertip position and nonverbal feedback sound. We experimented with the acoustic graphic, and discuss our results herein. Finally, we describe a production system for 3D tactile objects (teaching materials) and its role as a teaching aid to be used with the acoustic graphic. Key Words: acoustic graphic, auditory interface, blind user, GUI, haptic interface, non-visual interface, Nomad, tactile graphic, visually impaired 1. 
INTRODUCTION Multi-media functions have rapidly become widespread and accessible, and network communication has been further popularized by the performance improvement of the PC (personal computer) and the progress of the GUI (Graphical User Interface). The forms of communication and human interface via computers are diversifying. Since its appearance, however, the vision-oriented GUI has been difficult for blind users to employ consistently. Fundamental GUI operability has improved dramatically and been widely utilized over the past several years owing to the availability of several screen readers for the GUI 2) and of homepage readers for the WWW (World Wide Web). But the popularization of the multi-media and network functions of PCs occurred so rapidly that the development of other non-visual interfaces could not keep pace. As a result, blind users have had few chances to utilize these new media and the network, and have received almost no practical benefit. It might also be said that the environmental gap between sighted users and blind users widened to some degree in the GUI era relative to the CUI (Character User Interface) era. In particular, blind users have had difficulty communicating graphical information. To date, several systems geared toward graphics presentation to blind users have been proposed; e.g., a tactile picture system 3) 4) for static pictures, and a 2.5D pin-matrix device 5) for dynamic pictures. These systems, however, were designed to be used in a stand-alone environment. Looking toward the future, based on these conditions, we foresee a CSCW (Computer Supported Cooperative Work) environment in which blind users are able to interact. Our goal was to construct an environment in which a workspace is shared with remote blind users via multi-modal communication of images, non-speech sound, voice, and graphical information to achieve a cooperative task. 
We selected a simple "IGO" type boardgame and a card game, "SEVENS", as sample cooperative tasks, then constructed and experimented with several new interfaces for blind users to communicate graphical information. 2. COMMUNICATION Here the word "communication" has two meanings: communication between humans, and communication between humans and machines, as shown in Fig.1. 2.1 Communication between humans The communication here is bi-directional and human-to-human at a distance via machines and the network, as shown in Fig.1, particularly between a blind user and a sighted user, or between two blind users. Our intention was to integrate multi-modal channels of images, non-speech sound, voice, and graphical information to realize a workspace shared cooperatively with the remote site. 2.2 Communication between human and machine (man-machine communication) We refer to man-machine communication as the human interface. The human interface is important not only for accessing the PC but also essential for blind users to realize cooperative work through communication. Later, we propose and discuss several haptic and acoustic interfaces, which map visual media into haptic and acoustic media for blind users. 2.3 Human interfaces To realize a low-cost visual aid system without large-scale facilities, we simply attached an inexpensive video camera and a touch panel to a PC with a standard configuration. We then proposed two types of human interface, both of which support direct access to graphical information. The first is new ActiveX-type driver software working under Windows 95/98/NT, which utilizes a touch panel (Nomad). We integrated our ActiveX driver into board and card games. The second is the acoustic graphic interface. Ample research has been conducted on interface design utilizing acoustic media for communicating graphical information to blind users. 
AUDIO GRAPH 8) used music and musical instrument timbres to present line drawings to blind users. The application of 3D spatial audio with depth feedback to interfaces for blind users has also been investigated 6), in which tonal, musical, and orchestral sounds were mapped to an (x, y, z) position in a 3D environment. These investigations adopted MIDI instruments for sound generation and utilized only the acoustic sense. Here we propose a new concept, the acoustic graphic, which presents CG (Computer Graphics) or color images via both the haptic and auditory senses. The acoustic graphic can be realized without adding any special equipment (other than a low-cost touch panel) to a standard PC, because it uses only Microsoft's DirectSound class library to generate and mix several 3D sounds concurrently in real time. We constructed an experimental system and examined whether recognition of graphic images on the screen was possible via the acoustic graphic. Lastly, we introduce a production system for 3D tactile objects (teaching materials) and its application to making 3D mazes, to examine its role as a teaching aid for the acoustic graphic. 2.4 Categories of human interface Although haptic interface devices (keyboard, mouse) and verbal interface devices (keyboard, character display, synthetic voice) have been popular until now, these devices have restricted the possibilities of human interfaces. We therefore considered adding nonverbal and non-haptic interfaces to them. The mapping between different modes is called cross-modal mapping. The acoustic graphic might be described as a cross-modal mapping from CG to sound, i.e., from the visual sense to the auditory sense, and the 3D tactile object might be described as a cross-modal mapping from CG to a tactile object, i.e., from the visual sense to the haptic sense. Fig.1 Diagram of communication 3. 
HUMAN TO HUMAN COMMUNICATION AT A DISTANCE 3.1 Communication media and environment We assumed one-to-one communication between individuals at a distance. To enhance the efficiency of cooperative work and create a sense of awareness, the environmental information at one site is transferred to the other site via continuously running non-speech sound, voice, and images, as shown in Fig.2. Using these communication media, remote support for blind users, such as distance education, guidance, monitoring, and consultation, becomes possible around the clock. 3.2 Communication via image and sound We utilized a standard set of loudspeakers, a microphone, and a full-duplex sound board to make a server/client program that communicates sound through the network in real time. Next, we adopted a remote-controlled video camera and constructed a server/client program, which could remotely control the zoom, tilt, pan, and focus of the camera from one site, to capture and send images in serial sequence to the opposite site via the network. Fig.3 shows the server/client for image and sound communication. The available image size is up to 640x480 pixels with 24 bits per pixel, restricted by our video board's specification. Fig.2 Human to human communication at a distance Fig. 3 Server/client for image and sound communication 4. DIRECT ACCESS AID TO THE WINDOWS SCREEN We propose a new use of the Nomad Pad 1) under the Windows PC system. The Nomad Pad is a piece of equipment originally developed in Australia to present map and graphic information to visually impaired users (see Fig.4). Technologically, the idea of combining synthetic voice with a tactile graphic was, at the time, groundbreaking. By putting a tactile graphic on the pad, tactile operation with the palms and fingers of both hands becomes possible, and the Nomad Pad detects input only when fingertip pressure is applied to the surface. 
However, no device driver that would work under the Windows system was forthcoming from the developer, and the Pad's usage had remained the conventional one of presenting static maps and graphics. We proposed a new use of the Nomad Pad as a direct access interface to the PC screen. 4.1 ActiveX-type driver software PadX for Nomad We made new ActiveX-type driver software called PadX for the Nomad Pad, which works under Windows 95/98/NT with the Visual Basic and Visual C++ language systems. When the Pad is pressed, PadX reports the absolute coordinates together with one of three depressed modes, where the depressed mode encodes the length of time the surface was pressed. We made a sample Visual Basic (VB) program that displays the depressed coordinates, the depressed mode, and a graphic of the polygonal line connecting the depressed points. Fig.5 shows the execution screen of the PadX sample program. The specifications of the PadX component are as follows. 1) Method functions: PadOpen() initializes Nomad's RS232C line; PadOn() starts the event loop for PadX detection; PadClose() closes Nomad's RS232C line. 2) Property variables: lower-left and upper-right coordinates of the Nomad Pad (LeftBx, LeftBy, RightTx, RightTy); depressed-time control variables (PushCnt0, PushCnt1, PushCnt2, PushCnt3). 3) Event handler functions: OnPadPush(x As Long, y As Long, mode As Integer), the pushed event handler, where (x, y) is the returned coordinate and mode is the returned depressed mode (1/2/3); OnPadDBclick(x As Long, y As Long), the double-clicked event handler, where (x, y) is the returned coordinate. Fig.4 Nomad Pad with tactile checkerboard Fig.5 Execution of PadX sample program Fig.6 Programming structure of "IGO" type boardgame 5. "IGO" TYPE NETWORK BOARDGAME We constructed an "IGO" type network boardgame called "RENJYU" as a sample cooperative workspace and PadX application. 
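As an illustration of the PadX event model, the following Python sketch is a hypothetical analogue, not the actual ActiveX component: the duration thresholds and the class name are our assumptions, while the three-mode classification and the OnPadPush-style callback follow the specification above.

```python
# Hypothetical Python analogue of the PadX event model.
# The thresholds below are illustrative assumptions, not values
# taken from the original ActiveX component.

class PadXSketch:
    """Classifies a press into one of three depressed modes by its
    duration, mimicking PadX's coordinate-plus-mode report."""

    def __init__(self, on_pad_push):
        # on_pad_push(x, y, mode) plays the role of OnPadPush
        self.on_pad_push = on_pad_push
        # Assumed duration thresholds in seconds:
        # mode 1 = short tap, mode 2 = medium press, mode 3 = long press
        self.thresholds = (0.3, 1.0)

    def report_press(self, x, y, duration):
        short, long_ = self.thresholds
        if duration < short:
            mode = 1
        elif duration < long_:
            mode = 2
        else:
            mode = 3
        self.on_pad_push(x, y, mode)

# Example: a handler that records events, as a game component might.
events = []
pad = PadXSketch(lambda x, y, mode: events.append((x, y, mode)))
pad.report_press(120, 80, 0.1)  # short tap   -> mode 1
pad.report_press(120, 80, 0.6)  # medium hold -> mode 2
```

In the real component the modes are tuned via the PushCnt properties; here a single threshold pair stands in for them.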
We also made GomokuOcx (ActiveX), which has PadX built in, to support the RENJYU game, and a client/server program, which has GomokuOcx built in, to support communication via the network, as shown in Fig.6. A blind user was able to play the game with another blind user or with a sighted user at a distance, connected through the network. An execution screen of the client/server is shown in Fig.7. By putting a tactile graphic of the checkerboard on the Nomad Pad, as shown in Fig.4, direct access to the screen image became possible. A synthetic voice guide was available throughout the game. 5.1 Specification of the GomokuOcx component A server and client program can easily be made by putting the GomokuOcx component onto the VB form and setting the following method and properties. 1) Method: no opening method exists; GClose() terminates the process. 2) Event: no event exists. 3) Property: LocalIP, LocalPort (client-side IP address and port number); RemoteIP, RemotePort (server-side IP address and port number); SocketPosition (discrimination of client/server, 1/0). 5.2 How to play After the connection has been completed from client to server, the players negotiate who puts down the first stone. Generally, the first player takes the black stones, and the white stones fall to the opponent. The two players take turns placing stones on empty points until one player wins by placing five stones in an unbroken horizontal, vertical, or diagonal line. Stones must be placed on the grid intersections of the checkerboard. A synthetic voice guide is also built in; it guides the whole operational process and warns against forbidden moves. The PadX depressed modes correspond to the following operations. Depressed mode 1: put a stone there. Depressed mode 2: confirm whether a stone already exists there. (Depressed mode 3: unused.) Fig.7 Client and server of "IGO" type network boardgame 6. 
MULTI-USER NETWORK CARD GAME ("SEVENS") "SEVENS" is a multi-user network game played by two to six blind or sighted players at a distance via the Internet. We utilized a tactile graphic of the "SEVENS" card layout attached to the Nomad Pad, as shown in Fig. 8, and Fig.9 shows the execution screen of the "SEVENS" game. The state of play, such as card positions and manipulations, is continuously announced through synthetic voice 9). 7. ACOUSTIC GRAPHIC We propose an idea, which we call the "acoustic graphic", formed by the cooperative perception of nonverbal sound and fingertip position. It may be simply called "graphics observed by finger and ear". It helps blind users access the graphic information on the PC screen and recognize the colors, shapes, and sizes of graphics. The acoustic graphic returns nonverbal sound feedback corresponding to the pixel color under the fingertip. We adopted a touch panel as a pointing device (shown in Fig. 10), and used Microsoft DirectSound software for programming, utilizing the standard built-in sound board. The acoustic graphic has three features: real-time responsiveness, portability, and color sensitivity. We assigned three kinds of sound to the three primary colors (RGB) as follows: red, the sound of a bell; green, the sound of a xylophone; and blue, the sound of spring water. These primary sounds are mixed in proportion to the intensity of each pixel's RGB components and emitted simultaneously through the Microsoft DirectSound software. We constructed an experimental learning system. Fig. 11 shows the execution screen of the system. 7.1 Outline of the experimental acoustic graphic system We made an experimental acoustic graphic system. It presented the basic figures (ellipse, square, triangle) in color on the screen, one after another in random order, and the examinee searched the screen in response to questions about color, position, and shape. The colors were restricted to seven: white, cyan, purple, yellow, blue, green, and red. 
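The color-to-sound mixing described above can be sketched as follows. The three sound assignments (bell, xylophone, spring water) come from the text; the linear 0.0-1.0 volume scale is our assumption about how "in proportion to the intensity" might be realized.

```python
# Illustrative sketch of the acoustic graphic's color-to-sound mapping:
# each primary sound's playback volume is set in proportion to the
# pixel's R, G, B intensity. The linear volume scale is an assumption.

def rgb_to_volumes(r, g, b):
    """Map a pixel's RGB components (0-255) to relative volumes
    (0.0-1.0) for the three primary sounds: bell (red),
    xylophone (green), spring water (blue)."""
    return {
        "bell": r / 255.0,
        "xylophone": g / 255.0,
        "spring_water": b / 255.0,
    }

# A pure yellow pixel mixes bell and xylophone at full volume,
# with no spring-water component.
yellow = rgb_to_volumes(255, 255, 0)
```

In the actual system these volumes would drive three concurrently playing DirectSound buffers; here the mapping alone is shown.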
Each shape was displayed with automatically varying aspect ratio, size, and position. There were two presentation modes, a fixed mode and a variable mode. In the fixed mode, a single figure with an arbitrary shape, size, and color was presented in the center of the screen. In the variable mode, multiple figures with different shapes, sizes, colors, and positions were displayed simultaneously on the screen. In this experiment, we restricted the number of figures to one or two for simplicity. There were two examinees, one blind and one sighted; the sighted person was blindfolded so as not to see the screen. 7.2 Results We found that it is difficult to move the fingertip straight and parallel to the edges of the touch panel. Even when the examinees intended to move their fingertip straight, horizontally, or vertically, it tended to move obliquely or bend. This was a negative factor in searching the acoustic graphic. Therefore, we attached a tactile graphic, which we call the "tactile guide", to the touch panel. It had an embossed checkerboard pattern and worked as a guide for fingertip movement. (1) Single-figure mode with fixed size and position Answers regarding color were easily completed within 6 seconds. But answers regarding shape were somewhat difficult for novice examinees. At first, it took them around 40 seconds to answer, and they often gave wrong answers. After trial and error, they acquired the knack of distinguishing characteristic parts of the shape, such as corners, tips, and concentrated overhangs. Once the examinees became accustomed to this process, they could reach the right answer within 30 seconds. (2) Single- or multiple-figure mode with variable size and position 1) Single-figure mode Answers regarding color and position were completed in around 20 seconds, and those regarding shape in around 30 seconds. 
In answers regarding shape, however, the trade-off between the figure size and the grid spacing of the tactile guide affected the recognition rate: when the grid spacing of the tactile guide was large compared to the figure's size, examinees were apt to overlook fine features. In that case, the recognition rate was improved by using another guide with smaller spacing. But when the spacing became small compared to the fingertip size, the guide no longer worked well. 2) Multiple-figure mode When two figures were displayed separately, the colors and positions were correctly answered within 30 seconds, and questions regarding each shape were answered in nearly the same time as for a single figure. When two figures overlapped, the answers regarding color and position were also given easily, provided the figures were of different colors; but it took examinees more than 60 seconds to answer questions regarding their shapes, even when the overlap was less than about 30%. With three overlapping figures, questions regarding rough positions and colors could be answered easily, but recognition of the shapes became considerably difficult. 7.3 Evaluation of the acoustic graphic 1) On application to the GUI interface Almost all of the components used in a GUI, such as windows, icons, and buttons, have rectangular shapes, unlike the more irregular ones used in the experimental system above. By mapping the GUI components into an acoustic graphic, blind users may gain an understanding of the layout of Windows' components. We therefore constructed a prototype application, which presents the current windows' layout as an acoustic graphic in real time using the Windows API libraries. Such acoustic graphics may help a blind programmer who intends to develop a visual GUI program to design and check the windows' layout. Fig. 12 shows a Windows screen including the four top windows (Explorer, Control Panel, My Computer, and AI Shogi: Japanese chess). Fig. 13 shows its acoustic graphic, indicating the top windows' status (position, size, shape, and overlay), excluding the desktop icons. This application has two display modes, an all-windows display mode and an individual-window display mode; the space key or function keys switch the modes. 2) On improvement of the touch panel With a graphical interface such as a touch panel to which a tactile graphic is attached, blind users usually perform two types of operation concurrently: a search operation along the tactile guide using the palms and fingers of both hands, and a pointing operation using a single fingertip. However, because most touch panels on the market do not permit an extra hand touch (in addition to that of the fingertip), blind users have to make a studied effort to restrict themselves to this single kind of touch; if they happen to make a second, accidental touch, the cursor jumps to an unexpected point. The Nomad Pad offered more latitude in this regard, but its drag operation was weak. 3) On the tactile guide We confirmed the effectiveness of the tactile guide for the acoustic graphic. With regard to its characteristics, we supposed that the material of the tactile paper would have some relationship to the answering speed. In our experiment, we used a microcapsule paper, which gave some friction to fingertip sliding, perhaps more than is desirable for a tactile guide. In fact, one examinee reported that it was a little difficult to move the fingertip smoothly at a constant speed along or across the guide's lines. We therefore consider that it would be more effective to construct a tactile guide from a material with very low friction, slight stickiness, and high durability. 
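As a rough illustration of the all-windows display mode, the sketch below resolves a fingertip position to the topmost window under it, which is the lookup such a prototype must perform before emitting that window's sound. The window titles and rectangles are hypothetical example values; the real system obtained the actual layout through the Windows API.

```python
# Sketch of the all-windows display mode: each top-level window would be
# assigned a distinct color (hence sound), and the fingertip position is
# resolved to the topmost window containing it. The layout data below is
# a hypothetical example, not taken from the paper.

def window_under_point(windows, x, y):
    """Return the title of the topmost window containing (x, y).
    `windows` is ordered front-to-back as (title, left, top, right, bottom)."""
    for title, left, top, right, bottom in windows:
        if left <= x < right and top <= y < bottom:
            return title
    return None  # fingertip is over the desktop background

layout = [  # front-to-back z-order (assumed example rectangles)
    ("AI Shogi",      300, 200, 600, 450),
    ("My Computer",   100, 100, 400, 300),
    ("Control Panel",  50,  50, 350, 250),
    ("Explorer",        0,   0, 640, 480),
]

print(window_under_point(layout, 350, 250))  # prints "AI Shogi"
```

Because the list is scanned front-to-back, overlap is handled the way the screen shows it: an occluded window never sounds where a foreground window covers it.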
Fig.8 Nomad Pad with an embossed tactile graphic of the "SEVENS" card layout Fig.9 Multi-user windows of "SEVENS" Fig. 10 Touch panel Fig. 11 Experimental and learning system for the acoustic graphic Fig. 12 Windows screen with four top windows Fig. 13 Acoustic graphic representing the windows' layout 8. 3D TACTILE OBJECTS Although 3D graphics and color images are very popular and common for sighted people in daily life, the same is not true for blind people. We aimed at making the virtual solid objects displayed on the PC screen tactile for blind users, because 3D objects are presumably more familiar to blind people, in terms of touch and feel, than 2D tactile pictures on paper. 8.1 System overview We constructed a desktop production system for 3D tactile materials 9), consisting of a low-cost 3D plotter connected to a PC. The 3D plotter is a small NC (Numerical Control) machine that can move along three axes: x, y, and z. The z-axis is assigned to the up/down motion of the blade. This means that objects are always carved to represent a single-valued function z = f(x, y); we call such objects half-solid (2.5D), not completely 3D. a. Allowable material size: width 150 mm x length 100 mm x height 40 mm. b. Available materials: chemical wood, plaster, polyurethane, modeling wax, wood, etc. c. Carving time: depends on the size and hardness of the material and on the carving depth. 8.2 3D-maze generation program We developed a 3D graphics program for designing 3D mazes. This program controls the degree of complexity, the wall height of the maze, and the viewpoint, and displays the 3D solid mazes on the PC screen. Fig. 14 shows a real 3D maze made using this program. Our intention was to use these mazes as an educational tool for blind users, to teach them the relation between 3D space and 2D space through the task of tracing a maze. 9. 
OVERALL DISCUSSION AND FUTURE WORK We created a joint workspace via the network for blind users at a distance, employing media such as video images, environmental sound, and voice. Applying our direct access interface, Nomad+PadX, blind users and sighted users could play an "IGO" type game or a card game, "SEVENS", within this integrated communication space. Consistent with the user awareness often said to be achieved with CSCW, this form of multi-channel communication gave us more information than single-channel communication did; e.g., it was possible to hold conversations, give instructions, ask and answer questions, teach the operations, and monitor the circumstances throughout the work. In the latter half of this report, we mainly described the acoustic graphic, which has not yet been applied in the above cooperative workspace but is expected to be in the near future. In the following sections, we discuss the individual themes introduced herein and their relation to future research. 9.1 Classification of interfaces and media At present, the generally available media and equipment in PCs are classified according to sense type as follows. 1) Visual sense: graphic, image, character (display screen). 2) Auditory sense: non-speech sound, voice, synthetic voice (microphone/loudspeaker). 3) Tactile sense: braille, tactile graphic (braille display, braille printer). It is also possible to classify these media interfaces into verbal/nonverbal and haptic/non-haptic categories. We show this classification in Fig. 15, where the dotted line shows the conventional general interface functions and the solid line shows the new communication aid functions proposed herein. Here, pointing devices include mice, tablets, and touch panels. Our interfaces cover some of the boundary region, as shown in Fig. 15. However, they are only the tip of the iceberg, and sufficient integration of these interfaces has not yet been achieved. 
In the future, new interfaces are anticipated to be multi-modal and to utilize multiple compound senses. Attempts at integration must focus on enhancing the integrity of these interfaces, aiming at a mode-free interface that would be available in any CSCW environment. 9.2 Direct access interface with PadX The "IGO" type game itself proved difficult for novice blind users; examinees found it difficult to memorize and recall the layout of the stones, and the assistance of the voice guide was required frequently. Novice blind users tended to put a stone in the wrong place late in the game and subsequently lost. However, we consider that training and habituation will lead blind users to enjoyment and occasional victories. Although the direct access interface in our sample "IGO" type game was a problem-oriented one, we could confirm that the attachment of a single communication aid tool improved the human interface and made a cooperative workspace possible for blind users. This kind of problem-oriented direct access interface can be easily constructed and modified by means of the Nomad Pad and our PadX, and such interfaces should open the gateway to CSCW for blind users in the near future. 9.3 Acoustic graphic We also examined the applicability of acoustic graphics to images that include color gradation. Fig. 16 shows a sample pattern of acoustic graphics with gradation. For simplicity, we applied a linear cross-modal mapping from luminous intensity to sound frequency: high intensity to high frequency, and low intensity to low frequency. It was possible to recognize the change of luminous intensity across the cylindrical surface as a change of sound frequency. Finding a better-justified mapping remains an open problem, because studies of the correspondence between the visual sense of luminous intensity and the auditory sense of frequency are scarce. Next, we applied acoustic graphics to the maze game. 
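As a minimal sketch, the linear intensity-to-frequency mapping discussed above for gradation images might look like the following fragment; the frequency range is an assumed example, not a value from the paper.

```python
# Sketch of the linear cross-modal mapping from luminous intensity
# (0-255) to sound frequency. The frequency range is an assumption
# chosen for illustration, not a value from the original system.

F_MIN, F_MAX = 200.0, 2000.0  # assumed audible range in Hz

def intensity_to_frequency(intensity):
    """Linear mapping: intensity 0 -> F_MIN, intensity 255 -> F_MAX."""
    return F_MIN + (intensity / 255.0) * (F_MAX - F_MIN)

# Across a shaded cylindrical surface, rising intensity yields a
# smoothly rising pitch:
print(intensity_to_frequency(0))    # 200.0
print(intensity_to_frequency(255))  # 2000.0
```

A perceptually motivated alternative would map intensity to pitch on a logarithmic scale, since pitch perception is roughly logarithmic in frequency; the linear form above matches the simple mapping the text describes.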
We confirmed that even novice blind users could play the acoustic maze game when it was simple enough, containing 5 paths (formed from a 3x3 checkerboard pattern) or 7 paths (formed from a 4x4 checkerboard pattern) (see Fig. 17). When the wall was too thin and an examinee moved the fingertip quickly, a tunnel effect (the fingertip abruptly passing through the wall) was observed. For more complex mazes, say those formed from checkerboard patterns larger than 5x5, we suppose that blind users would have more difficulty, since the clarity of the maze depends on the trade-off among the resolution of the touch panel, its size, the fingertip's movement speed, and so on. This is a problem for future work. 9.4 3D tactile objects The 3D-maze material enabled blind users to touch and recognize the real structure of the maze prior to playing the acoustic maze game. Further, this material prompted an understanding of how to play an acoustic maze game. We expect that, in the near future, the 3D tactile object will help blind users to understand 3D graphics via cross-modal mapping from the 3D object's image with gradation to an acoustic graphic. On the other hand, the following limitations still exist for some applications. The object size is limited to small pieces. The 3D plotter, while carving a material, produces some noise and dust. It takes a long time to produce an object. Tactile 3D objects cannot represent the colors and textures of the virtual graphic objects. The effect of the surface feel (soft/hard, smooth/rough) on recognition is also a pending subject. Fig. 14 Examples of tactile 3D-maze Fig. 15 Classification of media interfaces Fig. 16 Acoustic graphic sample with gradation Fig. 17 Acoustic graphic of a simple maze game 10. 
CONCLUSIONS We introduced four themes here, which may be roughly classified into two groups: the first half described a cooperative workspace using a haptic interface, and the latter half described new non-visual interfaces using acoustic graphics and 3D tactile objects, which we expect to integrate into the cooperative workspace in the near future. (1) Cooperative workspace via network and haptic interface for the Windows GUI computer We constructed the "IGO" and "SEVENS" games for visually impaired people to carry out communicative and cooperative tasks, integrating multimedia such as image, sound, and voice in conjunction with our new haptic interface. To this end, we developed a direct-access interface to the PC's graphical screen via haptic perception and applied it to human-to-human communication via the network, making a new device driver (PadX) to use the Nomad Pad with the Windows GUI system. Through experiments with the "IGO" type boardgame and the card game "SEVENS" as sample applications of PadX, we confirmed that the online real-time tactile/auditory aid could support blind users in communicating graphical information and executing a cooperative task with a sighted person at a distance. Our driver software for Nomad, PadX, proved to be easy and effective for creating new interactive non-visual and nonverbal interfaces for problem-oriented applications, such as our "IGO" type boardgame and card game "SEVENS", under the Windows GUI system. (2) Acoustic graphic and 3D tactile objects (3D-maze) We presented and experimented with the idea of the acoustic graphic using Microsoft DirectSound software. 
In our experiment, we found that attaching an appropriate tactile guide to the touch panel improved the results. The experiment also showed that our acoustic graphic required somewhat more time for recognizing graphic shapes than conventional embossed tactile pictures did, but it could present fairly simple color graphics to blind users interactively and continuously. Moreover, in order to examine the possibilities of the acoustic graphic, we developed a prototype system that could present the current GUI windows' layout interactively as an acoustic graphic in real time. The system demonstrated that the acoustic graphic could give blind users an easy, portable, and effective way to grasp a GUI screen image interactively and in a timely manner. In the near future, it might also serve blind programmers as an assistive tool for visual programming with the Windows GUI. Finally, we presented an acoustic maze as a sample application in conjunction with a 3D-maze tactile object. We adopted a tactile 3D-maze, produced by our desktop production system for 3D tactile objects and our 3D-maze generation program, to assist blind users in playing an acoustic maze. The tactile 3D-maze enabled blind users to touch and recognize the real structure of a maze prior to accessing an acoustic maze, and prepared them to play the acoustic maze game. From our experience, 3D tactile objects can be called very effective teaching materials for the blind. References 1) Quantum Technology, Pty Ltd. Touch Blaster Nomad Installation and User Guide for Nomad Pad and TouchBlaster Software, 1994. 2) Elizabeth D. Mynatt, Gerhard Weber. Nonvisual Presentation of Graphical User Interfaces: Contrasting Two Approaches, Conference Proceedings on Human Factors in Computing Systems (CHI '94), 166-172, 1994. 3) INA, S. Computer Graphics for the Blind, ACM SIGCAPH Newsletter, 55, 16-23, 1996. 4) INA, S. Presentation of Images for the Blind, ACM SIGCAPH Newsletter, 56, 10-16, 1996. 
5) SHINOHARA, M., SHIMIZU, Y., NAGAOKA, H. Experimental Study of a 3-D Tactile Display: A Step towards the Improvement, Proceedings of the ICCHP '96, 749-754, 1996. 6) Stephen W. Mereu, Rick Kazman. Audio-Enhanced 3D Interfaces for Visually Impaired Users, Conference Proceedings on Human Factors in Computing Systems (CHI '96), 72-78, 1996. 7) INA, S. Embodiment of 3D Virtual Object for the Blind, ACM SIGCAPH Newsletter, 60, 17-21, 1998. 8) James L. Alty, Dimitrios I. Rigas. Communicating Graphical Information to Blind Users Using Music: The Role of Context, Conference Proceedings on Human Factors in Computing Systems (CHI '98), 574-581, 1998. 9) INA, S. A Support System for the Visually Impaired Using a Windows PC and Multimedia, INTERACT '99, 2, 37-38, 1999.