Thursday, December 2, 2010

Better start learning the sign language...

Or gesture language...because Kinect is here, and it's going to revolutionize everything. No more keyboards and no more mice...touch, sound, and gestures are the new modes of input. The screen will still remain the primary mode of output, but that too might soon change.
The possibilities of gesture are immense; it will be the primary mode of input in confined spaces like homes and offices. Imagine entering a room and just holding your fingers up so that they form a flower...the command for switching on the lights. Or imagine pointing towards the TV and then just swiping in the air, and the channels change automatically, because each room has Kinect-like devices embedded in the walls, all connected to a central machine (yep, the machine I described here), and that machine is controlling your TV and your microwave and the lights and everything else!!!
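The central-machine idea above boils down to a lookup from (room, gesture) to an appliance command. A toy sketch in Python, with every room, gesture, and command name made up purely for illustration:

```python
# Hypothetical gesture-to-command table for the "central machine".
# None of these names come from a real API; they just illustrate the idea.
GESTURE_COMMANDS = {
    ("living_room", "flower"): "lights.on",
    ("living_room", "swipe_right"): "tv.next_channel",
    ("kitchen", "fist"): "microwave.stop",
}

def dispatch(room, gesture):
    """Return the command a gesture triggers in a given room, or 'ignore'."""
    return GESTURE_COMMANDS.get((room, gesture), "ignore")

print(dispatch("living_room", "swipe_right"))  # tv.next_channel
```

The point of the table is that the same gesture can mean different things in different rooms, since each wall-mounted sensor knows where it is.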
Kinect-like devices will soon be embedded in all confined spaces, be it homes or offices...the question is whether they can work in the open as well...if they can, then there go your traffic lights. Instead there will be these devices looking in all four directions and synchronizing...as they see two conflicting vehicles coming, they hit the red light for one of them. Or maybe when they see someone crossing the road...if only they can work in the open.
So where will the three modes be used...well, it's easy to see where voice recognition will be used: where you need to give long instructions, or where you have to type a lot. Like when writing blogs or mails, and also where the instructions are specific and not general.
Touch is where you want privacy, so on devices: on your phones and your tablets and on console monitors at malls. Personal devices.
And gestures will be at more open, public places, like malls...imagine walking into a mall, and you want to know where the loo is...well, hold out that pinky and some device somewhere sees it and passes the instructions on...but how?
What is the output mode...maybe there is a giant screen which tells you where to go...but nah, I think what's gonna happen is that the instruction might be passed to your personal device, your mobile, so you can hear it on that tiny dot of an earphone that is now implanted in your inner ear canal...
But again, what can the output mode be?? Screens and voice are of course the two options...but more than how you see it, the change will be in where you see it. If the output is somehow transferred to your mobile device or your earphones, that becomes a personal output which moves along with you, which is mobile.
So here's to the future...
That brings me to the language...the only reason gesture tech will take time to be adopted is people learning the sign language. First, the vocab is not there yet, so "going to the loo" could be holding up your pinky or your index finger. And even once the vocab does get developed, it will be very limited, because we don't talk in the language of gestures; there are very few. Same goes for touch: right now the best of touch devices (which is the Apple touchpad, I think) use 4 fingers and just three gesture types (swipe, pinch, and tap, single or double), so that is in all just 12 unique touch signals. But these are good enough for now, because they apply to everything; you can swipe at so many places, webpages, pics et al. Also, touch can be combined with location on the screen, because you get visual clues from the screen you are touching. But not so with gestures: you will have to remember every gesture. Here also there could be a giant screen guiding you, but that defeats the entire idea of having ubiquitous devices seeing you and waiting for your gestures all the time.
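The touch-vocabulary arithmetic above (finger counts times gesture types) can be spelled out in a couple of lines; the numbers are this post's rough count, not Apple's actual spec:

```python
# Rough size of the touch "vocabulary": each signal is a combination of
# how many fingers you use and which gesture type you perform.
finger_counts = [1, 2, 3, 4]
gesture_types = ["swipe", "pinch", "tap"]

vocabulary = [(f, g) for f in finger_counts for g in gesture_types]
print(len(vocabulary))  # 12
```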
Whatever be the case...be ready to learn a new language...the sign language.
For some reason voice has not taken off like touch, and I think that is because of the lack of a convenient language. Today most voice-enabled devices expect you to speak out the commands, but that means you need to remember those commands, or the device needs to show them to you. Both of which defeat the purpose. I think what will happen is that there will be an intelligent engine which will be able to understand the meaning of what you say in the context of where you are and what you are doing. Not sure what that means right now...let's see.
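One way to picture that context engine: the same spoken phrase resolves to different commands depending on what the system knows you are doing. A minimal, purely hypothetical sketch (the phrases, context keys, and commands are all invented):

```python
# A toy "context-aware" interpreter: the utterance alone is ambiguous,
# and the context dictionary resolves what "it" refers to.
def interpret(utterance, context):
    phrase = utterance.lower()
    if "turn it off" in phrase:
        if context.get("watching_tv"):
            return "tv.off"
        if context.get("cooking"):
            return "microwave.off"
    return "unknown"

print(interpret("Turn it off", {"watching_tv": True}))  # tv.off
```

The same "Turn it off" yields "microwave.off" when the context says you are cooking, which is the whole appeal: no command list to memorize.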
Btw, I am surprised that MS came up with this tech before anyone else, and from the reviews I have heard, they did an awesome job of not just using the basic tech, but also adding the bells and whistles and building an experience out of it. Now if only they would embed it into Windows 8, and change the way machines are seen (not as desktops, but as hidden process centers), they would rule again. But will they take that risk, or will Apple beat them to it?!
I think after Apple coming up with touch, this is the next big fundamental thing in technology, which will change everything. But unlike touch, which only affected devices, I think gesture has the potential to change far more, building technology for sure. With people figuring out voice, be ready for a paradigm shift in the way you interact with everything but humans, but then that is already changing thanks to FB...


take care.

