
A large quantity of the information exchanged in a
social interaction is nonverbal, such as facial expressions, eye contact and
bodily gestures (Knapp, 1992). As blind individuals cannot access such visual information, they often
find themselves in embarrassing situations, such as responding to a question
that was not aimed at them (Everts, 2012). Over time, such difficulties can
leave them feeling socially isolated (Wiener & Lawson, 1997).
Assistive technology has been successfully developed to help blind individuals
overcome some of the other challenges they face, for example, navigational
difficulties (Velázquez, 2010). Because its users lack vision, assistive technology for
blind individuals transfers visual information to the user via
audio and tactile feedback, an approach referred to as sensory substitution
(Bach-y-Rita & Kercel, 2003). For assistive technology to successfully
support blind individuals, greater focus should be given to designing
technology that supports social interaction. It is crucial that the user’s
requirements, in terms of functionality, usability and aesthetic
characteristics, are at the forefront of the design (Krishna, Colbry, Black,
Balasubramanian & Panchanathan, 2008). Therefore, this report will
establish a set of criteria that should be satisfied when developing an
assistive device to support social interactions for the blind. A recently
developed prototype of such a device will then be critically analysed
against these criteria. This analysis will in turn yield important design
recommendations for future technology that aims to support social interactions.
Ultimately, this report proposes that appropriate design determines whether
assistive technology can overcome the challenges blind individuals face in
social interaction.

Firstly,
for technology to successfully support social interactions, a crucial design
criterion is that it provides the essential nonverbal information that blind
individuals require. For this criterion to be met, designers must recognise which
nonverbal information is of most importance to blind individuals in social
interactions. To this end, Krishna et al. (2008) carried out a study to
identify the most essential unmet needs of the blind population during social
interaction. From focus groups including blind individuals, blindness
specialists and families of blind individuals, they established the following
needs: knowing the number, location, identity and appearance of individuals in
the room; knowing where an individual’s attention is directed; knowing the facial
expressions and body gestures of individuals; and finally, knowing whether one’s
personal gestures align with sighted individuals’ expectations and hence are
appropriate. Thus, it is crucial that assistive technology
is designed to incorporate functional features that provide this important
visual information.

When
designing assistive technology, it is also vital to consider how and when
this visual information will be transferred to users without becoming too
cognitively demanding. As humans have a limited capacity to cognitively process
sensory information from their surroundings (Sweller & Chandler,
1991), it is crucial that a device does not place so much demand on users
that it interferes with their natural processing capabilities (Velázquez, 2010). For example, although a high
percentage of the information exchanged in social interaction is nonverbal
(Knapp, 1992), verbal information is still vital; a device that floods the
auditory channel with feedback risks masking the conversation itself. Therefore, the
level of cognitive demand should be carefully considered in the design and
assessment of technology that supports social interaction.

Furthermore,
assistive technology should be easy to use. One of the major factors driving
a blind individual’s decision to use an assistive device is whether it is easy to
understand and use (Gerber, 2003). If the device is neither, users end up
feeling frustrated (Gerber, 2003). While training users to operate the device
could overcome this frustration, it is important that technology is designed
from the outset with the difficulties that blind individuals face with technology in mind.

Finally,
the aesthetics and social acceptability of assistive technology must be considered
in the design. This is critical, as assistive devices have been found to be
abandoned at high rates if the device attracts undesirable attention from
sighted peers (Pape, Kim & Wiener, 2002), even when it has been found
to be functional and usable (Riemer-Reiss & Wacker, 2000). Undesirable
attention can lead to blind individuals feeling different and thus
self-conscious during social interaction (Shinohara & Wobbrock, 2011).
Furthermore, assistive technology has been found to highlight one’s disability,
in turn creating the very social barriers such devices were developed to
overcome (Shinohara & Wobbrock, 2011).
Therefore, as blind individuals have a desire to fit in (Profita, 2016),
it is important that technology is subtle and socially acceptable. Ultimately,
assistive technology should be designed so that blind individuals can interact
with sighted peers without those peers recognising their disability, thanks to the
functional benefits the device provides, or noticing the assistive technology itself,
thanks to how subtle and socially acceptable it is.

Last year,
Sarfraz, Constantinescu, Zuzej and
Stiefelhagen (2017) designed
a prototype of an assistive device aimed at supporting social interactions for the
blind population. The device is multimodal, providing important visual information
through both audio and haptic feedback. It comprises a computing
element, wireless bone-conducting headphones and a wireless vibrotactile belt
(see figure 1). The prototype has been designed to perform two specific
functions, namely the ‘GAZE’ and ‘ID’ functions. The ‘ID’ function uses face
detection and face recognition algorithms on the computer to analyse the video
stream from the camera. The name of a person, or ‘unknown’ if they have not
been enrolled in the system, is then announced through the headphones, followed
by a vibration on the belt in the direction of that person. The ‘GAZE’
function uses face analysis algorithms on the computer to deduce the gaze
direction of individuals in the video stream. ‘Eye contact’ and the name of
the person are then announced through the headphones; if more than two
people are looking at the user, the number of people is stated instead. The same
tactile feedback as with the ‘ID’ function subsequently follows. The device is
user-triggered: the user presses a key on the belt when they need visual
information.
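The decision logic behind the two functions can be sketched as follows. This is a minimal illustrative sketch, not the authors’ implementation: the data structures, the nearest-neighbour matching, the distance threshold and the toy embedding vectors are all assumptions; in a real device the embeddings would come from a trained face-recognition model running on the camera’s video stream.

```python
from dataclasses import dataclass

@dataclass
class DetectedFace:
    embedding: tuple    # feature vector from a face-recognition model (stubbed here)
    bearing_deg: float  # horizontal angle of the face relative to the user

def identify(face, enrolled, threshold=0.3):
    """Return the enrolled name whose embedding is nearest, or 'unknown'
    if no enrolled face is within the (assumed) distance threshold."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best_name, best_dist = "unknown", threshold
    for name, emb in enrolled.items():
        d = dist(face.embedding, emb)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

def id_function(faces, enrolled):
    """'ID' function: per detected face, an audio announcement (name or
    'unknown') paired with a belt-vibration bearing toward that person."""
    return [(identify(f, enrolled), f.bearing_deg) for f in faces]

def gaze_announcement(names_looking):
    """'GAZE' function: announce eye contact by name for up to two
    onlookers, otherwise just the count."""
    if not names_looking:
        return None
    if len(names_looking) <= 2:
        return "; ".join(f"eye contact: {name}" for name in names_looking)
    return f"{len(names_looking)} people are looking at you"

enrolled = {"Alice": (0.1, 0.9), "Bob": (0.8, 0.2)}
faces = [DetectedFace((0.12, 0.88), -30.0),  # close match: Alice, to the user's left
         DetectedFace((0.5, 0.5), 45.0)]     # no enrolled face within threshold
print(id_function(faces, enrolled))          # → [('Alice', -30.0), ('unknown', 45.0)]
print(gaze_announcement(["Alice", "Bob", "Carol"]))  # → 3 people are looking at you
```

The split between `identify` and `id_function` mirrors the description above: recognition produces a name (or ‘unknown’), and the spatial bearing is passed along separately so that the belt can vibrate toward the person after the audio announcement.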