Introduction to Computers/Future Peripherals


Course Navigation

<< Previous - Output Devices Next - System software >>


This page is part of the Introduction to Computers project.

Computers are continually evolving as technology becomes more advanced, cheaper, and more efficient. In this section we explore what could become the technology of tomorrow.


Gesture Recognition

[Image: a photo by Greg Roberts of his son, processed by a beta version of a computer vision algorithm trained to detect hands.]

Gesture recognition is a way for computers to understand human body language using mathematical algorithms. Gestures can originate from any bodily motion or state, but usually come from the face or hands. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches use cameras and computer vision algorithms to interpret sign language, but the identification and recognition of posture, gait, proxemics, and other human behaviors are also subjects of gesture recognition techniques.

Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.

Gesture recognition enables humans to interact with a machine naturally, through a human-machine interface (HMI) that requires no mechanical devices. Using gesture recognition, it is possible to point a finger at the computer screen and have the cursor move accordingly, which could potentially make conventional input devices such as mice, keyboards, and even touch screens redundant. Gesture recognition can be conducted with techniques from computer vision and image processing.
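As a rough illustration of finger-pointing cursor control, here is a minimal sketch in Python. It assumes the open-source OpenCV and MediaPipe libraries (neither is named in this course); the screen resolution is hypothetical, and the print call stands in for a real operating-system cursor API.

  import cv2              # webcam capture (assumed available)
  import mediapipe as mp  # hand-landmark detection (assumed available)

  SCREEN_W, SCREEN_H = 1920, 1080  # hypothetical display resolution

  hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
  cap = cv2.VideoCapture(0)        # open the default webcam

  while cap.isOpened():
      ok, frame = cap.read()
      if not ok:
          break
      frame = cv2.flip(frame, 1)   # mirror the image so pointing feels natural
      results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
      if results.multi_hand_landmarks:
          tip = results.multi_hand_landmarks[0].landmark[8]  # index fingertip
          # Landmark coordinates are normalized to [0, 1]; scale to the screen.
          x, y = int(tip.x * SCREEN_W), int(tip.y * SCREEN_H)
          print(f"move cursor to ({x}, {y})")  # stand-in for an OS cursor call
      if cv2.waitKey(1) & 0xFF == 27:          # press Esc to quit
          break

  cap.release()

In practice the fingertip position would need smoothing over several frames (for example a moving average), since raw landmark estimates jitter.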

The literature includes ongoing work in the computer vision field on capturing gestures or more general human pose and movements by cameras connected to a computer.

Gesture recognition and pen computing: In some literature, the term gesture recognition has been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition. This is computer interaction through the drawing of symbols with a pointing device cursor (see discussion at Pen computing).
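A mouse-gesture recognizer of this kind can be surprisingly small. The plain-Python sketch below illustrates the general direction-quantization idea, not any particular product's algorithm; the gesture table and the 5-pixel jitter threshold are invented for the example.

  import math

  # Hypothetical gesture table mapping direction strings to commands.
  GESTURES = {"L": "back", "R": "forward", "DR": "close-tab"}

  def quantize(points):
      """Turn a stroke (a list of (x, y) cursor samples) into a direction string."""
      dirs = []
      for (x0, y0), (x1, y1) in zip(points, points[1:]):
          dx, dy = x1 - x0, y1 - y0
          if math.hypot(dx, dy) < 5:      # ignore jitter below 5 pixels
              continue
          if abs(dx) > abs(dy):
              d = "R" if dx > 0 else "L"
          else:
              d = "D" if dy > 0 else "U"  # screen y grows downward
          if not dirs or dirs[-1] != d:   # collapse repeated directions
              dirs.append(d)
      return "".join(dirs)

  stroke = [(100, 100), (60, 102), (20, 99)]             # a leftward drag
  print(GESTURES.get(quantize(stroke), "unrecognized"))  # prints "back"

Real recognizers add diagonal directions, timing, and tolerance for sloppy strokes, but the direction-string idea is the same.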

Using two stereo video cameras together with a positioning reference allows people to interact naturally without the use of mechanical devices; the Xbox Kinect system is one example.

Uses

Gesture recognition is useful for processing information from humans that is not conveyed through speech or typing. There are various types of gestures that computers can identify:

  • Sign language recognition. Just as speech recognition can transcribe speech to text, certain types of gesture recognition software can transcribe the symbols represented through sign language into text.
  • For socially assistive robotics. By using proper sensors (accelerometers and gyroscopes) worn on the body of a patient and reading the values from those sensors, robots can assist in patient rehabilitation; stroke rehabilitation is the best-known example. A minimal sketch of this idea appears after this list.
  • Directional indication through pointing. The use of gesture recognition to determine where a person is pointing is useful for identifying the context of statements or instructions. This application is of particular interest in the field of robotics.
  • Control through facial gestures. Controlling a computer through facial gestures is a useful application of gesture recognition for users who may not physically be able to use a mouse or keyboard. Eye tracking in particular may be of use for controlling cursor motion or focusing on elements of a display.
  • Alternative computer interfaces. Foregoing the traditional keyboard and mouse setup to interact with a computer, strong gesture recognition could allow users to accomplish frequent or common tasks using hand or face gestures to a camera.
  • Immersive game technology. Gestures can be used to control interactions within video games to make the game player's experience more interactive and immersive.
  • Virtual controllers. For systems where the act of finding or acquiring a physical controller could require too much time, gestures can be used as an alternative control mechanism. Controlling secondary devices in a car, or controlling a television set are examples of such usage.
  • Affective computing. In affective computing, gesture recognition is used in the process of identifying emotional expression through computer systems.
  • Remote control. Through the use of gesture recognition, "remote control with the wave of a hand" of various devices becomes possible. The signal must indicate not only the desired response but also which device is to be controlled.
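As promised in the socially assistive robotics item above, here is a heavily simplified sketch of reading body-worn accelerometer values. Everything in it is hypothetical, including the samples, the threshold, and the idea of counting exercise repetitions as bursts of motion above resting gravity; a real rehabilitation system would filter the signal and use trained models.

  import math

  GRAVITY = 9.81     # m/s^2, what a resting accelerometer reads
  THRESHOLD = 3.0    # hypothetical deviation that counts as deliberate motion

  def count_repetitions(samples):
      """Count bursts of motion in (ax, ay, az) accelerometer samples.

      One burst above the threshold, e.g. one arm raise, is one repetition.
      """
      reps, in_motion = 0, False
      for ax, ay, az in samples:
          deviation = abs(math.sqrt(ax*ax + ay*ay + az*az) - GRAVITY)
          if deviation > THRESHOLD and not in_motion:
              in_motion = True   # a burst of motion has started
          elif deviation <= THRESHOLD and in_motion:
              in_motion = False  # burst ended: one repetition completed
              reps += 1
      return reps

  # At rest the sensor reads roughly gravity; two bursts = two repetitions.
  readings = [(0, 0, 9.8), (2, 8, 12.0), (0, 0, 9.8), (9, 1, 14.0), (0, 0, 9.8)]
  print(count_repetitions(readings))  # prints 2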

Input devices

The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. Although a large amount of research has been done on image- and video-based gesture recognition, the tools and environments used vary between implementations:

  • Depth-aware cameras. Using specialized cameras such as time-of-flight cameras, one can generate a depth map of what is being seen through the camera at short range and use this data to approximate a 3D representation of the scene. These can be effective for detecting hand gestures due to their short-range capabilities.
  • Stereo cameras. Using two cameras whose relation to one another is known, a 3D representation can be approximated from the output of the cameras. To determine the cameras' relation, one can use a positioning reference such as a lexian-stripe or infrared emitters. A minimal sketch of disparity-based depth appears after this list.
  • Controller-based gestures. These controllers act as an extension of the body so that when gestures are performed, some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand; another is the Wii Remote, which infers gestures from changes in acceleration over time.
  • Single camera. A normal camera can be used for gesture recognition where the resources or environment would not be convenient for other forms of image-based recognition. Although not necessarily as effective as stereo or depth-aware cameras, a single camera makes the technology accessible to a wider audience.
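The disparity-based depth sketch promised in the stereo cameras item, using OpenCV's block-matching stereo algorithm. The file names, focal length, and baseline are hypothetical, and the image pair must already be rectified (rows aligned) for block matching to work.

  import cv2
  import numpy as np

  # Load a rectified left/right image pair (hypothetical file names).
  left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
  right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

  # For each left-image patch, block matching finds the horizontal shift
  # (disparity) of the best-matching right-image patch; nearby objects
  # shift more between the two views, so disparity encodes depth.
  stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
  disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point, x16

  # depth = focal length * baseline / disparity (hypothetical calibration values).
  focal_px, baseline_m = 700.0, 0.06
  valid = disp > 0                    # zero/negative disparity means "no match"
  depth_m = np.zeros_like(disp)
  depth_m[valid] = focal_px * baseline_m / disp[valid]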

Challenges

There are many challenges associated with the accuracy and usefulness of gesture recognition software. Image-based gesture recognition faces limitations from the equipment used and from image noise: images or video may not be captured under consistent lighting or from the same viewpoint.

Items in the background or distinct features of the users may make recognition more difficult. The variety of implementations for image-based gesture recognition may also cause issues for the viability of the technology for general usage. For example, an algorithm calibrated for one camera may not work for a different camera. The amount of background noise also causes tracking and recognition difficulties, especially when partial or full occlusions occur. Furthermore, the distance from the camera, and the camera's resolution and quality, also cause variations in recognition accuracy.

In order to capture human gestures by visual sensors, robust computer vision methods are also required, for example for hand tracking and hand posture recognition or for capturing movements of the head, facial expressions or gaze direction.
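To make the fragility concrete, here is a naive hand-tracking sketch based on skin-color thresholds, assuming OpenCV and a hypothetical input frame. Its fixed HSV range is exactly the kind of hard-coded assumption the paragraphs above warn about: it breaks under different lighting, skin tones, or camera white balance.

  import cv2
  import numpy as np

  frame = cv2.imread("frame.png")  # one camera frame (hypothetical file name)
  hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

  # Hypothetical skin-tone range in HSV, hard-coded and therefore brittle.
  lower, upper = np.array([0, 40, 60]), np.array([20, 150, 255])
  mask = cv2.inRange(hsv, lower, upper)

  # Treat the largest skin-colored blob as the hand candidate.
  contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
  if contours:
      hand = max(contours, key=cv2.contourArea)
      x, y, w, h = cv2.boundingRect(hand)
      print(f"hand candidate at ({x}, {y}), size {w}x{h}")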

Gorilla arm

"Gorilla arm" was a side-effect that destroyed vertically-oriented touch-screens as a mainstream input technology despite a promising start in the early 1980s.

Designers of touch-menu systems failed to notice that humans are not designed to hold their arms in front of their faces making small motions. After more than a very few selections, the arm begins to feel sore, cramped, and oversized—the operator looks like a gorilla while using the touch screen and feels like one afterwards. This is now considered a classic cautionary tale to human-factors designers; "Remember the gorilla arm!" is shorthand for "How is this going to fly in real use?".

Gorilla arm is not a problem for short-term specialist uses, since their brief interactions do not last long enough to cause fatigue.

Emotion Recognition

Technology is constantly evolving; every time something new emerges, something better is not far behind. Computers are our main source of technology, and humans consistently try to improve them. A glimpse into the future shows the possibility of computers that can interact with human beings. Research shows that in order to create a computer with 'artificial intelligence', that is, one that can interact naturally with humans, the machine would need to be able to read human emotion. Many theories have been proposed as to how this would be possible, but the most popular involves a camera attached to the computer that reads the facial expressions of its user. The computer would turn each facial expression into a formula that the device could read and interpret. This idea is still in its research stages and requires time to mature.
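One way such a "formula" could work, sketched under heavy assumptions: reduce each face to a few numeric measurements derived from facial landmarks and classify them with a nearest-neighbor model. The feature values, labels, and choice of scikit-learn's k-nearest-neighbors classifier are all illustrative, not a description of any real emotion-recognition system.

  from sklearn.neighbors import KNeighborsClassifier  # assumed available

  # Hypothetical training rows: [mouth width/height ratio, brow-to-eye distance],
  # both normalized by face size, labeled with the expression they came from.
  features = [
      [0.45, 0.10],  # wide, flat mouth; low brows
      [0.30, 0.25],  # open mouth; raised brows
      [0.48, 0.08],
      [0.28, 0.27],
  ]
  labels = ["happy", "surprised", "happy", "surprised"]

  model = KNeighborsClassifier(n_neighbors=3)
  model.fit(features, labels)

  # A new face, reduced to the same two measurements.
  print(model.predict([[0.44, 0.11]]))  # prints ['happy']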

DirecTV

DirecTV (trademarked as "DIRECTV") is a direct broadcast satellite (DBS) service. DirecTV transmits digital satellite television and audio to households across North and South America. Satellite services such as DirecTV are becoming more and more popular, as they make it possible for users to enjoy the most recent video releases without ever leaving the comfort of their home.

3D printing

3D printing is a method of converting a virtual 3D model into a physical object, and is categorized as a type of rapid prototyping technology. These printers work by printing successive layers, each on top of the previous one, to build up a three-dimensional object. 3D printing is generally built for speed, low cost, and ease of use, making it suitable for visualizing a design when the dimensional accuracy and mechanical strength of prototypes are less important.
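To show what "printing successive layers" means at the machine level, here is a toy sketch that emits G-code (the command language most 3D printers accept) tracing one square perimeter per layer. Every number, including the extrusion rate, is made up; a real slicer derives them from the model geometry and material.

  # All values are hypothetical; a real slicer computes them from the 3D model.
  LAYER_HEIGHT = 0.2  # mm per layer
  LAYERS = 5          # a 1 mm tall part
  SIDE = 20.0         # 20 mm square
  E_PER_MM = 0.05     # filament extruded per mm of travel (invented)

  corners = [(50, 50), (70, 50), (70, 70), (50, 70), (50, 50)]
  e = 0.0             # running extrusion total

  print("G28            ; home all axes")
  for layer in range(1, LAYERS + 1):
      print(f"G1 Z{layer * LAYER_HEIGHT:.2f} F300  ; rise to the next layer")
      print(f"G0 X{corners[0][0]} Y{corners[0][1]}  ; travel to the start corner")
      for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
          e += E_PER_MM * (abs(x1 - x0) + abs(y1 - y0))  # edges are axis-aligned
          print(f"G1 X{x1} Y{y1} E{e:.3f} F1200    ; extrude along one edge")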

3D printing technology is currently being considered and studied by firms for use in tissue engineering applications, in which organs and body parts are built up using inkjet techniques. Source: http://en.wikipedia.org/wiki/3D_printing


Course Navigation

<< Previous - Output Devices Next - System software >>