First, recall that when a new technology emerges, we rarely foresee all of its applications and ramifications (negative consequences), so we strive to assess technology from the Business – IT – Society triangle perspective.
Voice Recognition (VR)
Our mobile devices’ voice recognition is getting better and better, but on the computer I use Dragon. I used to bring in my microphone and demonstrate, but this link achieves the same thing. First, as discussed in the lecture, make sure you have a good USB microphone for the best Analog => Digital (AD) conversion. VR has many other variables, and in mobile computing Android has the best approach since its VR is built into Android’s Linux kernel (research the word “kernel” if you are still unfamiliar with it).
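To make the AD conversion step concrete, here is a minimal sketch of what happens before any recognition software ever sees your voice: the microphone’s continuous waveform is sampled at a fixed rate and each sample is quantized to a fixed number of bits. The 440 Hz test tone, 16 kHz rate, and 16-bit depth below are illustrative assumptions, not a description of any particular microphone.

```python
import math

def sample_and_quantize(signal_fn, duration_s, rate_hz, bits):
    """Toy Analog => Digital conversion: sample signal_fn at rate_hz
    and quantize each amplitude (assumed in [-1, 1]) to 2**bits levels."""
    levels = 2 ** bits
    samples = []
    for i in range(int(duration_s * rate_hz)):
        t = i / rate_hz
        x = signal_fn(t)                       # "analog" amplitude in [-1, 1]
        q = round((x + 1) / 2 * (levels - 1))  # map to integer 0..levels-1
        samples.append(q)
    return samples

# A 440 Hz tone sampled at 16 kHz (a common speech rate) at 16-bit depth
tone = lambda t: math.sin(2 * math.pi * 440 * t)
pcm = sample_and_quantize(tone, duration_s=0.01, rate_hz=16000, bits=16)
```

A better microphone mainly improves the fidelity of the analog amplitude before this step; the sampling rate and bit depth bound how much of that fidelity survives.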
Now I rely on Dragon to get through my hundreds of daily emails, and while the video covers Windows PC Dragon, Mac Dragon Dictate is quickly catching up. Note that I turn off Dragon’s navigation commands (dictation-only mode) because we Computer Scientists use words like “file” and “menu” in our dictation, and I find it annoying when I’m dictating and my computer starts erroneously opening files and surfing the Web.
Lytro has developed a new kind of 3-D camera that facilitates post-shot refocus, and it has recently been claimed that all future cameras will have this functionality. The camera captures the light vectors reflected off surfaces, in contrast to raster images, which are created by overlaying a grid on the scene and sampling the light in each cell: https://www.lytro.com/camera
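Post-shot refocus is commonly described as a “shift-and-add” over the many slightly offset views a light-field camera records. Below is a toy 1-D sketch of that idea, not Lytro’s actual pipeline: each view sees a bright point shifted by its aperture offset (parallax), and averaging the views after shifting them by a chosen refocus parameter `alpha` brings objects at the matching depth back into focus.

```python
def refocus(views, alpha):
    """Shift-and-add refocus. views maps aperture offset u -> 1-D pixel row;
    each row is shifted by round(alpha * u) before averaging, so features
    whose parallax matches alpha re-converge while others smear out."""
    width = len(next(iter(views.values())))
    out = [0.0] * width
    for u, row in views.items():
        shift = round(alpha * u)
        for x in range(width):
            src = x + shift
            if 0 <= src < width:
                out[x] += row[src]
    return [v / len(views) for v in out]

# Toy scene: a bright point at x=10, drifting 1 pixel per unit of aperture
scene = lambda u: [1.0 if x == 10 + u else 0.0 for x in range(21)]
views = {u: scene(u) for u in (-2, -1, 0, 1, 2)}
sharp = refocus(views, alpha=1.0)   # parallax matches: point re-converges
blurry = refocus(views, alpha=0.0)  # no correction: point smears across views
```

The key point for the privacy discussion that follows: because all the views are stored, the choice of `alpha` (i.e., what to bring into focus) can be deferred until long after the shutter clicks.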
Now consider if this camera had been available during the Boston Marathon Bombing incident: an APB with the criminal’s face could have been published in 5 minutes rather than sending all the photos off to the FBI for extensive evaluation and photograph reconstruction. The application in criminal forensics is straightforward, but what if such cameras are placed on drones or around a city? You could just shoot pictures and later determine what is in someone’s hand, what their facial expression conveys, etc. => major privacy issues.
Now consider this functionality on everyone’s phone/tablet: http://www.cnet.com/news/intels-realsense-3d-tech-offers-glimpse-at-future-of-mobile-cameras/

Recall and consider the lag between the emergence of a technology and the subsequent development of policy. I gave the example of the convergence of cell phones and cameras. This was great, as we then needed only a single device and always had a camera at hand to capture events, but people started to abuse this functionality by taking inappropriate pictures in locker rooms and on children’s playgrounds; hence legislation was developed to prevent this. Now consider the Lytro camera, where individuals can inconspicuously shoot a picture without focusing and later choose what they want to see. Similar to the surveillance issues presented above, post-production zooming and analysis could answer the question: what were the name and number on that credit card they used at checkout?
Leap Motion – Motion Based Computing
We’re seeing this in many applications (e.g., Kinect), but keep in mind that we are now able to determine a person’s body position within a room. The Leap uses two cameras and three infrared LEDs, tracking infrared light with a wavelength of 850 nanometers, which is outside the visible light spectrum. Now consider this in virtual reality or training.
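The reason two cameras matter is classic stereo triangulation: a feature (say, a fingertip) appears shifted between the two images, and that shift (disparity) yields depth via Z = f · B / d. The numbers below are illustrative assumptions, not the Leap’s actual calibration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Two-camera triangulation: depth Z = f * B / d, where f is the focal
    length in pixels, B the distance between the cameras in meters, and d
    the pixel shift of the same feature between the two images."""
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the two views")
    return focal_px * baseline_m / disparity_px

# A fingertip seen 40 px apart by two cameras 4 cm apart, f = 400 px
z = depth_from_disparity(focal_px=400, baseline_m=0.04, disparity_px=40)
# z = 0.4 meters: the fingertip is about 40 cm from the cameras
```

Notice that disparity shrinks as depth grows, which is why short-baseline devices like this work best at close range (hands over a desk, not bodies across a room).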
Motion Command Touch Screen
This is a screen that overlays a Win/Mac display, adding motion commands to any existing system – https://youtu.be/BFKGEVJhOF0
Myo (muscle) Gesture Controller
Motion Based Security
James Bond proof (provided you can secure the entire system, run it from a generator/battery, etc.). MIT is also working on this and has developed systems that can see through cement walls and detect movement.
Motion Input & Output
MIT’s Tangible Media Group has developed motion input & output.
Disney Touche Touch Surfaces
What is particularly interesting is that it is done with a single connection. Consider this from the standpoint of an instrument or device that can lead you through its operation, maintenance, or repair. Imagine a car engine that can guide you through a radiator flush or repair as it detects and corrects your hand placement and process steps.
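As publicly described, Touché achieves its single-connection trick with swept-frequency capacitive sensing: it excites one electrode at many frequencies, records the response at each, and matches the resulting curve against stored profiles for known grips. Here is a minimal nearest-neighbor sketch of that matching step; the template values and gesture names are made up for illustration.

```python
def classify_touch(profile, templates):
    """Match a measured frequency-sweep response profile to the closest
    stored template by Euclidean distance (nearest-neighbor classification)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda name: dist(profile, templates[name]))

# Hypothetical response profiles: one value per swept frequency
templates = {
    "no touch":   [0.9, 0.9, 0.8, 0.9],
    "one finger": [0.7, 0.5, 0.6, 0.8],
    "full grasp": [0.3, 0.2, 0.4, 0.5],
}
reading = [0.68, 0.52, 0.58, 0.79]
gesture = classify_touch(reading, templates)  # -> "one finger"
```

For the car-engine scenario above, each repair step would correspond to its own stored profile, so the system could tell “hand on radiator cap” from “hand on hose clamp” and coach you accordingly.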
Disney Fingertip Microphone (although this is also output)
When the device converts someone’s voice into a signal, it creates a “modulated electrostatic field” around the user’s skin. When the user touches someone else, the electrostatic field causes a very small vibration of that person’s earlobe. Together, the ear and finger form a speaker. Voila! Disney magic at its finest.
Not only can one person pass a message to another; multiple people can relay a message simply by each putting part of their body to the next person’s ear, and so on. http://www.isciencetimes.com/articles/6047/20130912/disney-listening-device-turns-fingertip-human-microphone.htm
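“Modulated” here is the same idea as in radio: the audio waveform shapes a carrier signal that the skin conducts. The sketch below shows plain amplitude modulation as a rough analogy only; the real Disney system converts the recorded waveform into a high-voltage, low-current signal, and the carrier/rate values here are arbitrary illustrative choices.

```python
import math

def amplitude_modulate(signal, carrier_hz, rate_hz):
    """AM sketch: each audio sample s (assumed in [-1, 1]) scales the
    carrier's amplitude, giving (1 + s) * cos(2*pi*fc*t)."""
    return [
        (1 + s) * math.cos(2 * math.pi * carrier_hz * i / rate_hz)
        for i, s in enumerate(signal)
    ]

# A 200 Hz "voice" tone riding on a 2 kHz carrier, both at an 8 kHz rate
voice = [math.sin(2 * math.pi * 200 * i / 8000) for i in range(80)]
field = amplitude_modulate(voice, carrier_hz=2000, rate_hz=8000)
```

The touched earlobe then acts as the demodulator/speaker: its tiny vibrations track the envelope of the field, turning the signal back into audible sound.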
Thought Controlled Computing
Thought control is the next evolution beyond voice and gesture; this first example is just concept and possibilities.
Researchers at the University of Washington recently demonstrated a brain-to-brain interface in a six-person study. Here is the full story: http://gizmodo.com/telepathy-over-the-internet-is-getting-real-1655918017
Thought based computing – Theory & Technology