I was struck by what seem to be two related tendencies in this week’s readings about technological interfaces. The first involves making devices increasingly portable, which, at least to me, results in an increasingly “personal” experience. The second, mentioned in passing by Krug at the end of his chapter, involves using voice for input. This seems inevitably to make your actions audible and therefore more public.
There seems to be a distinct tension between these two tendencies. Since my iPhone’s screen is legible almost exclusively to me, I feel comfortable typing almost whatever I want on it in public. When I’m at a concert at a noisy bar, for instance, I’ll type on it and hand it over to my friends so that we can snarkily comment on the band or any obnoxious people around us. Even though we’re standing in public, the interface allows rather private conversations to happen.
At the same time, as smaller devices like the Apple Watch make typing well-nigh impossible (unless used with a linked device), voice input would appear to be a good solution. I wonder whether the limited uptake of voice technology is due to people’s inhibitions about doing things in public. Something like Siri seems nice, until users realize that everyone around them will hear their search terms or input. To me, the only acceptable conditions for using it would be something like driving a car, when my hands are otherwise occupied. Perhaps I’m intensely private—I don’t particularly even like talking on a cell phone in public—but it seems like there will be a very profound split around this.
My parents teach deaf or hard-of-hearing people, and I’ve noticed that in deaf/Deaf culture, it’s almost the pinnacle of rudeness to “eavesdrop” by looking at other people’s conversations. Since ASL is a broadly gestural language, it’s close to impossible to contain one’s language the way hearing people do with whispering. As a result, there’s a cultural shift in expectations: you just *do not* eavesdrop. I wonder whether something like this will happen in the future, or whether interfaces will be further refined so that vibrations are picked up by a throat sensor (like a necklace) rather than requiring users to actually *sound* words in public. Even then, lip readers might still be able to deduce what you’re searching. Perhaps privacy as I know and expect it will become less important to future users of mobile devices, but I can’t help thinking about the challenges facing both designers and the public, who will be confronted with these new limitations and the “abilities” of voice input. It would behoove designers to consider not just the needs imposed by screens, but also to dwell seriously on the other needs of users when attempting to create the next innovation in interface design.