Abstract

The authors analyze the impact of marginalizing discourses surrounding disability on the design of communication options in online and virtual worlds. The primary focus is on conflict between participants in an ongoing study in Second Life, based on audio versus textual communication needs. Although the participants in the study are diagnosed with Myalgic Encephalomyelitis/Chronic Fatigue Syndrome, the difficulties faced in facilitating communication between the two groups reveal serious problems with current modes of live chatting that privilege one sense over another, problems relevant to other disabled populations as well. The inability to blend audio and textual communication creates an additional barrier for participants in the ongoing study in Second Life, and for visually and hearing impaired individuals who wish to use technology as a means of communicating with one another.

Democratic access to the Internet has been the subject of much discussion and debate (Grimaldi 1990; Feenberg 1999; Best 2005; Blair 2006; Macnamara 2010), but how do accessibility issues translate into virtual worlds such as Second Life? An ongoing study testing the feasibility of Second Life as a social support platform for people with Myalgic Encephalomyelitis/Chronic Fatigue Syndrome provides insight into the necessity of accessibility and flexibility in virtual world design. This project began with a single research question: "How do virtual worlds assist ME/CFS sufferers in managing their illness, one characterized both by high levels of isolation and a type of cognitive and physical impairment which makes computer use extremely challenging?" Second Life was chosen for this study because of the element of virtual space. Each individual has an avatar representing them in the world, who is present within the rooms where the groups meet. We tested whether this additional closeness via avatar interaction and shared space would help foster a sense of community among participants and reduce social isolation.

Virtual worlds can supplement the lives of people who are isolated by providing them with outlets for personal creativity, jobs that offer real world payment, or by giving them opportunities to connect with other people. However, technological problems, particularly relating to communication needs, are an obstacle to positive outcomes. Cross-disability communication, which has been largely ignored even within digital disabilities scholarship, is central to the construction of positive and supportive spaces online for disabled individuals.

We begin with an explanation of relevant background information: the symptoms of ME/CFS, an overview of the Second Life study, information about the accessibility of Second Life, and an explanation of the authors' methodology. We then explain the communication conflicts between the audio and textual communicators which emerged during the study. When sites and virtual worlds are not designed to accommodate the divergent or overlapping technological needs of disabled individuals cross-categorically, they can become inaccessible to users. Individuals with conflicting disabilities (for example, hearing impaired and visually impaired individuals) then face further obstacles when attempting to communicate with one another.

Background

Methodology

Thirty-two individuals with ME/CFS from Australia and Canada participated in the study. Participants were interviewed in depth at the outset of the study to ascertain the severity of their symptoms; their level of social isolation; and their use of the Internet for researching their illness and for finding support. Quantitative surveys were administered to assess their comfort and experience with using computers and the Internet. Private tutorials were provided for creating a Second Life account and avatar, and for navigating the virtual world. Support groups were then offered three times a week at the ME/CFS Centre in Second Life. The Centre remained open at all times so participants could casually meet or explore the site. The support groups, and the Centre, were also open to people with ME/CFS, and their caregivers, who were not part of the original study. A further seventy-five people with ME/CFS from around the world found the Centre through our recruiting efforts and began to attend meetings; they were included in the focus groups and participant observation. Over the course of three years, participant observation of interactions within the virtual world and email correspondence with participants were used in data collection. In year two, focus groups were conducted with the aim of receiving feedback on the benefits and challenges participants faced in their engagement with Second Life. At the end of year three, in-depth interviews were conducted to further assess the participants' physical conditions and their experiences within Second Life. The focus groups and interviews were coded and analyzed with NVivo software based on emergent themes. Participant observation was chosen for information gathering because the "qualitative descriptions generated by participant observation" are useful in formulating "concepts for measurement, as well as generalizations and hypotheses that with further testing may be used to construct explanatory theories" (Jorgensen 1989). Participant observation allowed researchers to assess the benefits and drawbacks of the virtual world for people with ME/CFS.

The virtual support groups were participant directed. The researchers were merely observers, rather than facilitators, since the aim of the study was to see if Second Life could work as a social support platform. We observed what was of use in Second Life and what posed a challenge, and gathered information about any changes that were necessary to the design of the Centre. It was important to discover whether the Centre could be run by participants; eventually it would have to be operable without funding from a grant, or the organizational input of staff members.

ME/CFS and Disability

Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) impacts all of the body's systems, including the neurological, autonomic, neuroendocrine and immune systems. Cognitive impairment is particularly severe, and includes confusion, disorientation and vertigo, as well as difficulties with concentration, memory and vision (Carruthers 2003). In 2003 the Canadian Expert Consensus Panel developed a new clinical definition for ME/CFS that addresses all its physical indicators instead of focusing on the fatigue symptom. To be diagnosed with ME/CFS, a person must meet all the criteria set forth in the new definition: significant and persistent physical and mental fatigue; post-exertional malaise; sleep dysfunction; pain; at least two neurological/cognitive symptoms (such as memory impairment, disorientation and confusion); and at least one symptom from two of the additional bodily systems: autonomic (such as orthostatic intolerance), neuroendocrine (such as loss of thermostatic control), and immune (such as tender lymph nodes). People with ME/CFS face physical isolation because the need for periodic rest, as well as recuperation from periods of intensive worsening of symptoms and virtual immobility, leads to a dependence on the home.

In 2010 the Canadian government commissioned the Canadian Community Health Survey (CCHS). Respondents were asked if they had any of a number of chronic health conditions which had been diagnosed by a health professional. They were then asked a number of questions regarding their level of disability and socio-economic status. People with ME/CFS were primarily female (66%), with about half aged 25-44 and half aged 45-64. One primary measure of disability is the extent to which a population requires help with basic tasks. Forty-seven percent of the ME/CFS cohort required help with basic tasks, a proportion second only to that of people with Alzheimer's disease and stroke survivors. The survey also found that the populations experiencing the highest levels of social isolation were those with Alzheimer's disease, ME/CFS and mood/anxiety disorders. People with ME/CFS therefore suffer from high levels of both disability and social isolation.

Second Life: A Background

Second Life, a virtual world founded by Philip Rosedale, former RealNetworks chief technology officer, came online in June 2003. Second Life has become a locus of social networking, and hundreds of universities and corporations have purchased virtual property. Inability to access Second Life can place individuals at a disadvantage; they do not have access to the information and opportunities for connection offered within the virtual world. Second Life operates via a viewer which uses an Internet connection to load the main virtual world. The virtual world comprises 3D graphics of environments which users can explore and interact with via their avatars. Users create an avatar who becomes their representative in this world, and they can communicate with other individuals in-world through text or audio. In Second Life people can explore, connect with other people, create artwork, hold jobs, and attend or facilitate musical performances and academic studies.

By releasing the Second Life viewer source code in 2007, Linden Lab prompted user-driven content that encouraged interoperability. Although TextSL is a command-based interface that allows visually impaired individuals to access Second Life using a screen reader or built-in self-voicing, this interface was not developed until after the study had begun, and is still in its beta version. Participants were already overwhelmed by technological problems; adding an interface like TextSL, or implementing captioning or text-to-voice transcription, would have increased the technological strain on them. These adaptations to the technology are of benefit; however, locating these options and making them work properly is very difficult. Even the text-to-speech reader for Second Life viewer 2.0 is neither reliable nor easy to navigate.
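
To make concrete what a command-based, screen-reader-friendly interface involves, the sketch below shows a minimal text front end to a virtual world in the spirit of tools like TextSL. The commands and the World class are hypothetical illustrations written for this article, not TextSL's actual command set or API; the point is simply that every interaction is reduced to typed commands and plain-text responses that a screen reader can voice.

    # Minimal sketch of a command-based text front end to a virtual world,
    # in the spirit of TextSL. The commands and the World class are
    # hypothetical illustrations, not TextSL's actual interface.

    class World:
        """Stand-in for a connection to a virtual world server."""

        def say(self, text):
            print(f"You say: {text}")

        def nearby(self):
            # A real client would query the server for nearby avatars.
            return ["Jasmine", "Carter"]

    def run(world):
        print("Commands: 'say <message>', 'who', 'quit'.")
        while True:
            line = input("> ").strip()
            if line == "quit":
                break
            elif line == "who":
                # Plain-text output is what allows a screen reader to voice the world.
                print("Nearby: " + ", ".join(world.nearby()))
            elif line.startswith("say "):
                world.say(line[4:])
            else:
                print("Unknown command.")

    if __name__ == "__main__":
        run(World())

Because such a session is a stream of plain text, it can also be enlarged or read aloud, although, as noted above, learning and troubleshooting an additional interface is itself a burden for participants with ME/CFS.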

Communication Conflict in Second Life

During the study conflict developed between participants based on their preferred modes of communication. Although ME/CFS is one disability, the diversity of symptoms and their varying impact on each individual with the condition meant that participants had different, and often conflicting, communication needs. Some participants chose auditory communication because of the frustration and exhaustion resulting from the graphic stimulation of the virtual world, or because physical pain or fatigue interfered with manipulating the keyboard and using the textual communication option in Second Life. Others preferred textual communication because they found speaking to be more painful and physically draining than typing, or because they had difficulty disentangling and interpreting multiple streams of conversation. These participants often found that cognitive fatigue made them feel self-conscious about their ability to orally engage with and respond to their fellow participants; the text option allowed them to read the comments of other participants several times, and to take the time to formulate a cogent response, thereby reducing misunderstandings.

Textual communicators vs. audio communicators

At the extreme end of the spectrum, the disability of ME/CFS interacting with the inflexibility of Second Life's communication options meant some participants could not make use of the virtual world at all. One participant was unable to continue with the study because her physical symptoms were so severe that she required speech recognition software to interface with her computer. Ellen explained: "I use speech recognition on the computer because I can't type anymore." The keyboard is what is used to navigate the avatar, so she "couldn't do it" without "having a set back." Although she was impressed with the appearance of the virtual world, she was unable to interface with the viewer and was therefore excluded from engagement with the virtual world. Regardless of communication choice, whether textual or auditory, participants are required by the Second Life viewer to use the keyboard to select either option. Some people are thus automatically excluded from this virtual world because their symptoms prevent physical engagement with the keyboard as the viewer's interface.

Most participants, however, were able to mitigate their disability by focusing on one channel of communication, either text or audio, rather than bombarding their senses with information. One group preferred audio communication, because of physical fatigue associated with typing, or intolerance of excessive visual stimulation. The other group preferred text due to the sensory over-stimulation they experienced from group audio discussions.

The audio communicators in the group often chose to approximate Skype conferencing in their methods of communicating in Second Life. Jasmine opted to "treat it like a phone conversation—listening/talking, not looking/seeing," and usually chose to "turn off the screen," thus severing herself from the visual interface. She and other audio communicators preferred voice communication because of their negative physiological reactions to the excessive visual stimulation of Second Life's 3D graphics. Several of the participants explained that although they were not averse to textual communication or Internet Relay Chat (IRC), they preferred audio communication on Second Life because of the difficulty of reading and using the textual option in Second Life. Jasmine explained that "Skype is much simpler technically so we don't end up spending half of each discussion sorting out technical issues." For her, having the avatars added nothing to the experience. Another issue participants had with the Second Life textual communication option, as opposed to other programs like Skype, was that the text could not be enlarged to make it easier to read.

Participants cited their cognitive and physical symptoms as detrimental to their level of patience with the complicated Second Life communication options. Carter felt that typing was easier for him because of the "brain fog" he experienced with ME/CFS. In his experience people with ME/CFS "have problems with processing too much at one time." Jasmine agreed: "I am driven mad by the fact that I have to constantly use up precious energy on recurring tech issues," and Debra concurred that the excessive visuals of Second Life exacerbated her cognitive symptoms, which then increased her base level of confusion in communication. Jasmine also mentioned that having to accommodate textual communicators by looking at the screen drained her energy and reduced the amount of time she could spend online. Participants who preferred audio communication were annoyed by the energy expenditure and cognitive fatigue exacerbated by the visual elements of Second Life; this annoyance carried over into interactions with their fellow participants.

Ellen's reliance on speech recognition software led her to disengage from using Second Life; the physical difficulties inherent in using the technology were compounded by the difficulties she faced in communicating in-world. She felt that she was "struggling with the keyboard," and that if she "met somebody and tried to talk to them," she would "be disabled," that she'd "be like a mute person." She was unable to manage physically moving the avatar and using the communication options; consequently, she would be even more disabled within the virtual world than she was in the real world.

Participants' reasons for preferring text included technological issues that arose due to the relatively new audio function of Second Life, and fatigue related to oral communication. During her time in Second Life, Debra went through a period of "extremely bad mental fog and could not follow spoken discussion at all." She preferred typing responses because it afforded her extra time to organize her thoughts and to edit her comments before transmitting them to the rest of the group. She recalled: "I just get really confused and especially if there's kind of a lot of people talking I don't know what's going on, I can't follow it at all." When more than a few individuals attempted to communicate via the auditory option, Debra found it difficult to process, and respond to, their spoken dialogue. Other participants insisted that she communicate via the audio function instead of typing, so Debra "ended up leaving the group," because it was no longer accessible to her.

Technological difficulties with audio transmission further compounded the cognitive and physical difficulties members with ME/CFS experienced. Brenda and Theodora often encountered difficulty with audio within group settings, which disrupted the group dynamic. Theodora reported having to log off and back on again regularly to fix her sound, which she found extremely physically draining given her illness. She also insisted that in most groups there "is at least one person" whose sound is not working, which creates additional strain on the already-stressed participants. Many participants noted during group meetings that audio issues made them reluctant to attend further meetings; they feared the stress of having to wrestle with their audio equipment, draining themselves and potentially bringing on a "crash," a common occurrence in which a person's ME/CFS symptoms worsen greatly for an extended period of time.

Although both types of communication are afforded to users of Second Life, and both the audio and textual communicators have valid reasons for their specific needs, issues arose when they attempted to communicate with each other. Communication conflicts were exacerbated when the audio communicators dominated the group and expected all other participants to conform to their own specific need for audio communication. The lack of engagement with the visual cues used to indicate speech (the green dot over the avatar's head, which indicates that the person operating the avatar has engaged their talk button), and the inability to recognize text responses because of turned-off screens, meant that the audio communicators often did not acknowledge, or respond to, the input of the textual communicators. The lack of visual cues meant that some of the more outspoken audio communicators would continue to speak without engaging with their peers.

Consequences of divergent communication needs within a single disabled population

Conflicts between the two groups of participants may appear to be based on interpersonal differences, but the disability created by ME/CFS was the direct cause of these disputes. A task that is simple for a person without ME/CFS, such as typing or viewing 3D animation, can be almost intolerable for someone with the disability. The illness affects the brain's capacity to keep up with visual stimuli; the brain becomes hypersensitive to information overload in a way that would be imperceptible to someone unencumbered by the disability. For some people with ME/CFS, this visual burden, which can also lead to vertigo, profound disorientation and physical pain, was enough to lead them to turn off the monitor and engage with only the audio portion of the experience. This allowed them to converse with others and enjoy social interaction in real time, across the globe, without exacerbating their primary intolerance of visual stimulation.

Unfortunately, this solution does not work for everyone. For some people with ME/CFS, it is the cognitive burden of trying to decipher and disentangle multiple conversations which is the most debilitating. The illness affects the brain's capacity to keep track of multiple audio streams, sort them into a reliable order, and process the information. For these participants, the solution was to turn off all sounds and concentrate instead on managing the visual stream of information in their own time. The technical burden of the sound interface was also proportionately large for people with ME/CFS, whose cognitive "brain fog" meant that wrestling with these functions was not something they could do on a regular basis and still manage their illness.

Thus, the nature of this illness is that the person affected can manifest the disability in different ways, and what is a solution for one is anathema to another. In either of these instances (processing excessive visual or audio stimulation), the energy output required of someone with ME/CFS vastly exceeds that of someone without the illness. They feel both confused and utterly exhausted, and are prone to "crash" for an extended period during which they are reduced to an even greater level of disability and may even be bedbound.

Trying to manage the cognitive and physical symptoms of ME/CFS while navigating a 3D, full-sound virtual world is intensely difficult. The added stress of learning how to technically master the communication options available in Second Life, and then attempting to communicate with others across discrepant communication needs, created an additional, unmanageable burden for participants. This burden coloured interpersonal dynamics among participants with divergent communication needs.

The marginalized status of ME/CFS and of people with the condition also created a basic level of stress that contributed to disagreements among participants. All participants had at some point experienced lack of understanding and contempt for their illness from doctors, government officials and even their own families. For most this was an ongoing reality. They were often left alone by their familial circles, who showed irritation towards them for "pretending" to be ill. Most had financial difficulties and were struggling to get any recognition or support from the government. All were fully aware that their regular doctors were of no use to them in managing this illness. This backdrop of intolerance and the need to assert the illness's physical reality was ever-present. Although being around others who profoundly understood their bodily experience was reassuring to many who attended the meetings, there was also always a pressure to assert their specific physical needs, needs which so many others misunderstood or ridiculed. It was thus even more of a source of internal conflict and disappointment when people with ME/CFS, who expected complete empathy and support from others experiencing the same physical condition, discovered a lack of understanding and accord. This "single" disability was in fact many, with multiple and conflicting consequences, even for those experiencing the same illness.

Solutions to this communication clash which might have been available to able-bodied individuals were of course not available to members of these support groups. The physical and cognitive disability caused by ME/CFS (including extreme confusion, pain and exhaustion) prevented participants from collaborating to transcribe spoken dialogue into text to accommodate textual communicators, or from trouble-shooting audio issues. The disability similarly prevented the reading of text communication aloud to accommodate audio communicators. Time and energy management are non-negotiable for people with ME/CFS, and attempting to take on these types of additional responsibilities would lead to an intense worsening of disability and debilitating "crashes."

While researchers were not acting as group facilitators, efforts were made to conduct smaller meetings simultaneously in different areas of the ME/CFS Centre so as to reduce participants' stress levels and to encourage communication between people with similar communication needs. Although this was a partial solution, it left each group feeling even more isolated, an outcome the support groups were intended to help overcome. Participants longed to converse all together.

False Conceptions of Homogeneity Within Disabled Communities

Disability is a broad category with many different subdivisions of impairment that pose challenges for the development of accessible and user-friendly technologies. Our "single" illness, ME/CFS, has a number of cognitive and physical symptoms that manifest in different ways, all of which point to a central issue of information overload. Some individuals' symptoms are exacerbated by auditory stimuli, while for others visual stimuli are more damaging. Even within this one subcategory of disability there are multiple conflicting needs. More flexible and user-friendly design would improve the level of accessibility for most users. Bridging discrepancies such as text/audio necessitates a sophisticated yet simple and user-friendly device which also provides options for cross-compatibility of needs.

In previous scholarship, greater emphasis is placed on communication between disabled groups and the dominant culture. Goggin and Newell (2003) and Ellis and Kent (2011) explain the benefits of technology in facilitating communication between discrete categories of disabled peoples and the dominant able-bodied culture. However, little space is given to a consideration of communication between disabled groups. This limited focus inadvertently reproduces the centrality of the dominant able-bodied culture because it is assumed that at least one of the individuals in any given communication will be non-impaired. We are here interested in fostering supportive and egalitarian technological spaces that allow differently impaired individuals to communicate beyond the surveillance of the dominant culture.

Goggin and Newell's analysis of N.L. Groce's (1988) study of deafness on Martha's Vineyard provides an example of communication between hearing impaired people and able-bodied people which omits the visually impaired. Because sign language was situated as a dominant language, used by everyone "regardless of whether they were able to speak or hear" (Goggin and Newell, 27), individuals with hearing impairments were integrated into the community. Their mode of communication was streamlined rather than conceptualized as a special requirement. By centralizing the relation between a group of people with a specific impairment and the dominant able-bodied culture, Groce's study and the work of Goggin and Newell (2003) do not provide insights into communication among disabled individuals cross-categorically, meaning outside of their socially constructed categories of disability.

Groce's study provides a positive example of the incorporation of deaf people into mainstream society, one which also reveals the social component of disability; however, Goggin and Newell discuss hearing impaired individuals and visually impaired individuals as exclusive groups situated in conversation with able-bodied individuals and society. This approach reinforces the very centrality of the able-bodied that they seek to challenge. There is very little discussion of the ways in which technology can enable people with different impairments to interact with one another, outside of and around the able-bodied dominant culture. Goggin and Newell (2003) acknowledge the divergent technological needs of individuals with different disabilities in their discussion of universal design; however, no space is devoted specifically to communication between visually impaired and hearing impaired individuals. Ellis and Kent (2011) briefly mention how developments in technological accessibility for people with one type of disability may actually impede accessibility for people with different disabilities. However, if websites and virtual worlds are to become accessible to disabled groups, and if they are to become spaces in which disabled individuals can communicate with one another, more emphasis must be placed on assessing how accessibility requirements differ based on types of disability.

In his critique of previous analyses of Groce's study, Shakespeare (2006, 51) argued that the non-hearing Vineyarders were still at a disadvantage because their hearing impairments prevented them from trading off the island. Even within their community they missed out on social cues because they did not have as many communication options. Not all barriers may be removable, yet we assert that technology can become a means of increasing inclusion as long as alternative modes of communication and engagement are available to individuals, and these modes are made compatible with one another to ensure communication among people with divergent and even conflicting communication requirements.

Recent scholarship on the accessibility of websites for individuals with specific disabilities (Jackson-Sanborn, Odess-Harnish, and Warren 2002; Stock, Davies, and Wehmeyer 2004; Skylar, Higgins, and Boone 2007) provides important starting points for more thorough comparative analyses of website and virtual world accessibility. Although broader surveys of accessibility provide an overview of existing barriers on specific sites, they do not provide detailed analyses that compare and contrast the accessibility barriers experienced by individuals with differing disabilities or ranges of severity (Loiacono and McCoy 2004; Jaeger 2004a).

Paul T. Jaeger's more recent scholarship on disability and e-governance (2006a; 2006b; Wentz, Jaeger, and Lazar 2011) works against the tendency to amalgamate all disabilities under a larger heading. In his 2006 study of the accessibility of government websites and their compliance with Section 508 of the American Rehabilitation Act, Jaeger compared the results of expert analyses of accessibility to the assessments offered by disabled users/participants in his study. The participants of Jaeger's study consisted of individuals with visual and physical disabilities covering a broad range of severity within these categories of impairment. Although the results were consistent between the experts' assessments and those of the disabled users, a more detailed exegesis of the accessibility barriers was offered by the disabled users' descriptions of their experiences engaging with the sites. Jaeger was then able to record the accessibility issues that were common to individuals from both categories of impairment (visual and physical) and make note of the issues that were more pronounced in each category or among participants with more severe impairments.

Jaeger's emphasis on the differences between individuals with disabilities is particularly relevant to our study, and to our assertion that a more detailed and complex assessment of accessibility is necessary for improvements to accessibility. Jaeger asserts that "[a] site may be completely inaccessible for users with one type of disability and fully accessible for users with a different type of disability. Even within the same type of disability, persons with one level of severity of a disability may have different accessibility issues than persons with a different level of severity of a disability" (2006b, 171). Our study reveals that even within a specific category of disability, ME/CFS, the accessibility requirements of individuals differ based on the manifestation of symptoms, and the severity of the illness, at time of access. Although the modes of communication on Second Life offer choices between audio and text or a combination of both, what is accessible to one group becomes inaccessible to the other. Without accessible ways to translate text input/output in the Second Life chat option into aural output, and vice versa, the audio and textual communicators must either reach a compromise, separate into mutually exclusive groups, or face continued conflict. Those who use speech recognition software, like Ellen, are excluded from Second Life entirely unless they rely on another person to act as an intermediary.

In our study some individuals cited a preference for support groups offered by the GimpGirl Community, a group in Second Life geared toward women with disabilities. The GimpGirl Community offers an Internet Relay Chat (IRC) service connected to various points within their Second Life property. These IRC connections allow users who have slower Internet connections or less advanced computers, who are less technologically savvy, or who face accessibility issues in Second Life, to connect with users in-world for real-time chat. However, both the Second Life and IRC server options remain largely inaccessible to individuals with visual impairments (Cole et al. 2011, 10). Although there are cases of participants in the GimpGirl Community reading text aloud to others, this option would not work for the participants in our study, as it places additional strain on individuals with ME/CFS.
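
An IRC service of this kind works as a text relay between a chat channel and in-world chat. The sketch below is a minimal illustration of such a relay, written for this article rather than drawn from GimpGirl's actual setup: the IRC side follows the standard IRC line protocol, while the in-world side (send_to_world) is a hypothetical placeholder for whatever chat interface the virtual world exposes, and the server, channel, and nickname are invented.

    # Minimal sketch of a one-way text relay from an IRC channel to in-world
    # chat. The in-world side is a hypothetical placeholder; the IRC side uses
    # the standard IRC line protocol over a plain socket.

    import socket

    SERVER, PORT, CHANNEL, NICK = "irc.example.org", 6667, "#mecfs-centre", "relaybot"

    def send_to_world(author, text):
        # Placeholder: a real bridge would call the virtual world's chat API here.
        print(f"[to world] {author}: {text}")

    def run_relay():
        sock = socket.create_connection((SERVER, PORT))
        lines = sock.makefile("r", encoding="utf-8", errors="replace")
        sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :chat relay\r\n".encode())
        sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
        for line in lines:
            line = line.rstrip("\r\n")
            if line.startswith("PING"):
                # Answer server keep-alives so the connection stays open.
                sock.sendall(("PONG" + line[4:] + "\r\n").encode())
            elif line.startswith(":") and "PRIVMSG" in line:
                # Format: ":nick!user@host PRIVMSG #channel :message"
                author = line[1:].split("!", 1)[0]
                text = line.split(" :", 1)[1] if " :" in line else ""
                send_to_world(author, text)

    if __name__ == "__main__":
        run_relay()

Because such a relay deals only in lines of text, a user on a slow connection or an older computer can take part in the same conversation as users running the full 3D viewer, which is precisely what makes the option attractive to some participants and still inaccessible to others.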

Several of the participants in our study described their experiences attending GimpGirl meetings in Second Life. While Jasmine was positive about the GimpGirl meetings and preferred to access them via IRC to avoid Second Life, Debra insisted that she found the GimpGirl meetings "too confusing" because there were "too many lines of chat going at one time". Jasmine agreed, stating that "GG tends to have multiple simultaneous conversations so it's hard to figure out who's talking to who sometimes and it means the chat progresses fast". Although the GimpGirl meetings were viewed positively, the cognitive symptoms that posed a challenge in communications between participants in our study also extended to their interactions with the GimpGirl community. The communication problems we observed were therefore not caused solely by interpersonal dynamics, but were grounded in confusion and frustration resulting from the interaction of ME/CFS symptoms and communication options in-world.

Discourses of Digital Marginalization

Adaptive technologies are controversial because disabled individuals are not taken into consideration during the initial planning and development stages of communicative technologies (Ransom 1994; Goggin and Newell 2003; Kanayama 2003; Jaeger and Bowman 2005, 71). Although these devices, including screen readers and Braille technologies, function to include visually impaired users as consumers of information, by positioning these technologies as special, companies and programmers assume that the standard users of their technologies will be able-bodied (Keates and Clarkson 2003; Goggin and Newell 2003; Jaeger and Bowman 2005, 71).

In the case of our participants and their communicative conflicts, the technologies that regulate their access to the virtual world are their computers and accoutrements (mouse, keyboard, speakers, and microphone), broadband Internet connections, and Second Life viewers. The computer is the central technology, developed primarily with the able-bodied consumer in mind as the ideal user, while adaptive technologies, or ergonomic devices, are constituted as add-ons. Instead of accessibility being programmed into the device (whether hardware, software, or website and virtual world interfaces), the user with disabilities is not taken into consideration from the beginning, but is only factored into considerations of additional devices (Goggin and Newell 2003, 42-48). Incorporating the needs of disabled users into the programming and development of devices from the beginning would be more cost effective, yet developers focus instead on developing adaptive technologies, or on retrospective redesign, which some then fail to accomplish because of the projected additional cost of retrofitting the devices based on the needs of disabled consumers (Bowe 1993; Goggin and Newell 2003, 47-48; Lang 2000; Jaeger 2006b). This decentralization of the needs of disabled consumers recreates marginalizing social discourses in which disabled individuals are considered peripheral.

Virtual Ability runs additional avatar training for disabled users of Second Life, and IBM is developing an alternative virtual world viewer. However, the devices used, and the coding of the site, including the communication options, regulate the ability of the participants in this study to access and make use of Second Life. The programming language controls the manifestation of 3D images, avatar animations, and the landscapes into which the participants travel. In the case of audio versus textual communication, the programming of the Second Life viewer determines the modes by which users are able to communicate. Goggin and Newell (2003) argue that disability can be digitally constructed or reinforced. In this instance hearing and sight are positioned as the dominant and exclusive senses by which participants can communicate in Second Life. The communication options in the Second Life viewers recreate analog-world divisions between hearing impaired and vision impaired individuals at the level of computer language and functionality. Language functions palpably to exclude individuals from virtual spaces and to regulate what kind of access is available to them. Even their communications with one another are determined by design, and thereby streamed in particular directions (audio/visual, inbox, live chat, etc.). The computer language used to design the structure of the Second Life viewer determines manifestations of human-to-human discourse; the values of the dominant culture are linguistically re-inscribed in this programming.

Cross-disability Communication

Boulos and Burden (2007) describe how "voice communication in 3-D worlds" can "be seen as an accessibility enhancer for some people having difficulties dealing with/communicating with typed text," while it poses a "challenge for the hard of hearing and deaf" (14). Similar to our participants, visually and hearing impaired people may require different and conflicting modes of communication. However, Boulos and Burden note that "technologies are being developed that automatically convert the spoken word to sign language using speech recognition to animate an avatar" (14). Technological advances of this type could enable communication between non-impaired and hearing impaired individuals, without the need to resort to IRC server connections. Voice-activated avatar sign language is a first step; however, technologies would also need to be developed that allow a hearing impaired individual's communicative input in Second Life to be recognized by their visually impaired peers. These alternative interfaces and avatar animations could enable individuals with visual or hearing impairments to access Second Life, and to communicate with able-bodied individuals or those with non-conflicting impairments. But these technologies could not bridge the gap between hearing and visually impaired individuals, since visually impaired individuals would be unable to see the sign-language animations of the hearing impaired individual's avatar. Similarly, without a functionally reliable and easy-to-navigate means of converting text to audio or audio to text in the Second Life communication options, discussion between the audio and textual communicators is limited.
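
As a rough illustration of the kind of bridging this last sentence calls for, the sketch below delivers every message, whether typed or spoken, to each participant in the channel they can use. It is a sketch only: the speech_to_text and text_to_speech functions are hypothetical placeholders for recognition and synthesis engines, and nothing here reflects how the Second Life viewer is actually implemented.

    # Minimal sketch of cross-modal chat delivery: each message is converted,
    # where necessary, into the channel (text or audio) each participant prefers.
    # speech_to_text and text_to_speech are hypothetical placeholders.

    from dataclasses import dataclass

    def text_to_speech(text):
        # Placeholder for a speech synthesis engine.
        print(f"(spoken aloud) {text}")

    def speech_to_text(audio):
        # Placeholder for a speech recognition engine; here the "audio" is
        # already a transcript so the sketch stays self-contained.
        return audio

    @dataclass
    class Participant:
        name: str
        prefers: str  # "text" or "audio"

        def receive(self, author, text):
            if self.prefers == "audio":
                text_to_speech(f"{author} says: {text}")
            else:
                print(f"[{self.name}'s chat pane] {author}: {text}")

    def broadcast(author, message, participants, is_audio=False):
        # Normalize everything to text once, then deliver per preference.
        text = speech_to_text(message) if is_audio else message
        for p in participants:
            if p.name != author:
                p.receive(author, text)

    group = [Participant("Jasmine", "audio"), Participant("Debra", "text")]
    broadcast("Debra", "Hello everyone", group)             # typed message
    broadcast("Jasmine", "Good to see you", group, True)    # spoken message

In a design along these lines the textual communicators never need to listen, the audio communicators never need to read, and neither group has to leave the shared conversation; the names and preferences above are invented for the example.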

Conclusion

Virtual worlds can increase disabled individuals' social connections. However, greater emphasis must be placed on developing technologies that enable communication between differently impaired individuals, instead of simply focusing on facilitating communication between single categories of disabled users and the dominant able-bodied culture. What is accessible for some disabled users of technology may actually pose serious problems for other disabled users. Unless their conflicting needs can be reconciled, there will be a problematic privileging of one group over the other, or a sacrificing of communication between these groups, resulting in mutual isolation. By studying the accessibility requirements of individuals with divergent types of impairment, improvements can be made to the accessibility of existing sites to promote more successful communication between individuals with differing technological requirements.

References

  • Best, K. (2005). Rethinking the globalization movement: Toward a cultural theory of contemporary democracy and communication. Communication and Critical/Cultural Studies 2.3, 214-237.
  • Blair, J. (2006). A computer and Internet future: Enabling inclusion? Learning Disability Practice 9.9.
  • Boulos, M. N. K., Burden, David. (2007). Web GIS in practice V: 3-D interactive and real-time mapping in Second Life. International Journal of Health Geographics.
  • Bowe, F. G. (1993). Access to the information age: Fundamental decisions in telecommunications policy. Policy Studies Journal 21:765–74.
  • Canadian Community Health Survey. (2010). Statistics Canada.
  • Carruthers, B.M., Jain, A.K., & De Meirleir, K.L. (2003). Myalgic encephalomyelitis/chronic fatigue syndrome: Clinical working case definition, diagnostic and treatment guidelines: A consensus document. Journal of Chronic Fatigue Syndrome, 11.1, 7-115.
  • Cole, Jennifer., Nolan, Jason., Seko, Yukari., Mancuso, Katherine., Ospina, Alejandra. (2011). GimpGirl grows up: Women with disabilities rethinking, redefining, and reclaiming community. New Media and Society, 1-19.
  • Ellis, Kate., Kent, Mike. (2011). Disability and New Media. New York: Taylor and Francis.
  • Feenberg, A. (1999). Questioning Technology. London: Routledge.
  • Foucault and the Government of Disability. (2005). Edited by Shelley Tremain. Ann Arbor: U of Michigan Press.
  • Goggin, Gerard., Newell, Christopher (2003). Digital Disability: The Social Construction of Disability in New Media. New York: Rowman & Littlefield.
  • Grimaldi, C., Goette, T. (1990). The Internet and the independence of individuals with disabilities. Internet Research: Electronic Networking Applications and Policy 9.4, 272-80.
  • Groce, N.L. (1988). Everyone here spoke sign language. Cambridge, Mass.: Harvard U.P.
  • Jackson-Sanborn, E., Odess-Harnish, K., & Warren, N. (2002). Web site accessibility: A study of six genres. Library Hi Tech, 20.3, 308-317.
  • Jaeger, Paul T. (2004a). The social impact of an accessible e-democracy: Disability rights laws in the development of the federal e-government. Journal of Disability Policy Studies 15.1, 19-26.
  • Jaeger, Paul T. (2004b). Beyond section 508: The spectrum of legal requirements for accessible e-government Web sites in the United States. Journal of Government Information 30, 518-533.
  • Jaeger, Paul T., Bowman, Cynthia Ann (2005). Understanding Disability: Inclusion, Access, Diversity, and Civil Rights. Westport, Connecticut: Praeger Publishers.
  • Jaeger, Paul T. (2006a). Telecommunications policy and individuals with disabilities: Issues of accessibility and social inclusion in the policy and research agenda. Telecommunications Policy 30, 112-124.
  • Jaeger, Paul T. (2006b). Assessing section 508 compliance on federal e-government Web sites: A multi-method, user-centered evaluation of accessibility for persons with disabilities. Government Information Quarterly 23, 169-190.
  • Jorgensen, Danny L. (1989). Participant observation: A methodology for human studies. Thousand Oaks, CA: Sage Publications.
  • Kanayama, T. (2003). Leaving it all up to industry: People with disabilities and the Telecommunications Act of 1996. Information Society, 19, 185-194.
  • Keates, S., Clarkson, P.J. (2003). Countering design exclusion: Bridging the gap between usability and accessibility. Universal Access in the Information Society, 2, 215-225.
  • Lang, H. G. (2000). A phone of our own: The deaf insurrection against Ma Bell. Washington, DC: Gallaudet University Press.
  • Loiacono, E., & McCoy, S. (2004). Web site accessibility: An online sector analysis. Information Technology and People, 17.1, 87-101.
  • Macnamara, J. (2010). The 21st century media (r)evolution: Emergent communication practices. Washington, D.C.: Peter Lang.
  • Ransom, P. (1994). Public policy/legislative trends: Telecommunications access for people with disabilities. Technology and Disability, 3(3), 533-556.
  • Shakespeare, Tom. (2006). Disability rights and wrongs. New York: Routledge.
  • Skylar, A. A., Higgins, K., & Boone, R. (2007). Strategies for adapting webquests for students with learning disabilities. Intervention in School and Clinic, 43.1, 20-28.
  • Stock, S. E., Davies, D. K., & Wehmeyer, M. L. (2004). Internet-based multimedia tests and surveys for individuals with intellectual disabilities. Journal of Special Education Technology, 19.4, 43-47.
  • Wentz, Brian., Jaeger, Paul T., Lazar, Jonathan. (2011). Retrofitting accessibility: The legal inequality of after-the-fact online access for persons with disabilities in the United States. First Monday, 16.11.

Kirsty Best (primary author): Senior Lecturer at Murdoch University's School of Media, Communication, and Culture. This work is the product of a Discovery Grant from the Australian Research Council (ARC).

Stephanie Butler (secondary author): PhD student at the University of Waterloo, Department of English Language and Literature.
