Anthropomorphic Robotics Project/Brainstorming/Visual systems


Please brainstorm on visual systems for AI.

Input devices are needed.

This may overlap with other systems, partly because all systems can be seen as affecting each other.

It seems that one way a robot could navigate the world is by creating a map of itself in space, and interacting with that map in computational, mathematical space, and having those interactions correspond to motor control of the mechanical systems.

At first glance it may seem that this would require great amounts of computing power.
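As a minimal sketch of what such a self-map might look like, consider a hypothetical two-joint planar arm. The names and link lengths below are invented for illustration, but they suggest that keeping the map up to date can be as cheap as a little trigonometry:

```python
import math

def arm_self_map(shoulder_angle, elbow_angle, upper_len=0.3, fore_len=0.25):
    """Compute a 2-link planar arm's self-map: the (x, y) positions of
    its elbow and hand, given joint angles in radians.

    Link lengths (metres) are hypothetical; a real robot would read
    them from its own model.
    """
    # Elbow position: rotate the upper arm about the shoulder (origin).
    elbow = (upper_len * math.cos(shoulder_angle),
             upper_len * math.sin(shoulder_angle))
    # Hand position: rotate the forearm about the elbow; angles add.
    total = shoulder_angle + elbow_angle
    hand = (elbow[0] + fore_len * math.cos(total),
            elbow[1] + fore_len * math.sin(total))
    return elbow, hand

# The robot "knows where its hand is" by querying the map, not a sensor.
elbow, hand = arm_self_map(math.radians(45), math.radians(30))
print("elbow at", elbow, "hand at", hand)
```

Going the other way, choosing joint angles so that the mapped hand lands on a target point, is what would tie the map back to motor control.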

The idea comes from a book entitled "Why God Won't Go Away", which is about religion and neuroscience. One of the theories in the book postulates something like: we feel a certain sense of loss of self in meditation because we shut off the part of our brain that orients us in space...

The point is, the book talks about a part of the brain that orients you in space. Your brain says: you are sitting in a chair. Your hand is out in front of your body on your mouse. Your head is above your chest, and your feet are touching the ground. Your back is pressed against a chair. Your brain orients you. It takes in all the sensory information from your many nerves and other sensory organs (like your eyes) and orients you so that you feel reality.

So maybe a robot can do the same. It could orient itself from its vision system, and maybe its auditory system (sonar perhaps?).

I saw a show on PBS about DARPA's Grand Challenge. Basically, the winning team (Stanford) kept their car on the road and avoided obstacles by using laser scanners to build a 2D picture representing the 3D terrain with color. They showed how the car's brain (its computer) would laser-scan the terrain and then, using image processing on each frame, color areas like rocks on the road, ditches at the roadside, and other obstacles. This way the car would have one section of the road painted a certain color to indicate that it was safe to drive on.
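I don't know the details of Stanford's actual software, but the general idea can be sketched. Below, a toy grid of laser height samples is "painted" by labeling each cell safe, rough, or obstacle according to the height step to its neighbors; the grid values and thresholds are invented for illustration:

```python
# Toy drivability map: label each cell of a height grid as safe ("."),
# rough ("~"), or obstacle ("#") from the height difference to its
# neighbours. Grid and thresholds are made up; the real Grand Challenge
# pipeline was far more sophisticated.

heights = [  # metres, as a laser scanner might report them
    [0.00, 0.01, 0.02, 0.30, 0.31],
    [0.01, 0.01, 0.02, 0.32, 0.30],
    [0.00, 0.02, 0.01, 0.02, 0.01],
    [0.01, 0.15, 0.02, 0.01, 0.02],
]

ROUGH, OBSTACLE = 0.05, 0.20  # hypothetical thresholds in metres

def classify(grid):
    rows, cols = len(grid), len(grid[0])
    labels = []
    for r in range(rows):
        row = ""
        for c in range(cols):
            # Largest height step between this cell and its 4-neighbours.
            step = max(
                abs(grid[r][c] - grid[nr][nc])
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= nr < rows and 0 <= nc < cols
            )
            row += "#" if step > OBSTACLE else "~" if step > ROUGH else "."
        labels.append(row)
    return labels

for line in classify(heights):
    print(line)  # the "painted" road: '.' cells are safe to drive on
```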

So maybe a robot could do the same. If algorithms could be written efficiently enough to allow a robot to discern the shapes of the world around it, then maybe it could interact with those shapes in meaningful ways. Perhaps geometry processing could be used, as in the sketch below.
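One concrete form that geometry processing could take is segmenting a point cloud into ground and obstacles. Here is a toy RANSAC plane fit in that spirit; the point cloud, tolerance, and iteration count are all made up for illustration:

```python
import random

def fit_ground_plane(points, iters=200, tol=0.05):
    """Toy RANSAC: find the plane z = a*x + b*y + c that explains the
    most points to within `tol` metres. Parameters are illustrative."""
    best, best_inliers = None, []
    for _ in range(iters):
        # Pick 3 random points and solve for the plane through them.
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = random.sample(points, 3)
        det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
        if abs(det) < 1e-9:
            continue  # degenerate (collinear) sample
        # Cramer's rule on the 2x2 system for the slopes a and b.
        a = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / det
        b = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / det
        c = z1 - a * x1 - b * y1
        inliers = [p for p in points
                   if abs(p[2] - (a * p[0] + b * p[1] + c)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b, c), inliers
    return best, best_inliers

random.seed(0)
cloud = [(random.uniform(0, 2), random.uniform(0, 2),
          random.gauss(0.0, 0.01)) for _ in range(80)]          # flat ground
cloud += [(1.0 + dx * 0.1, 1.0 + dy * 0.1, 0.4)
          for dx in range(3) for dy in range(3)]                # a box on it
plane, ground = fit_ground_plane(cloud)
print("plane coeffs:", plane, "| ground points:", len(ground),
      "| obstacle points:", len(cloud) - len(ground))
```

Everything the plane fit does not explain is, to a first approximation, "a shape the robot should care about".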

I am by no means an expert, but it seems that people sometimes forget that a computer doesn't have to think like a person to produce artificial intelligence; it just has to give that appearance.


Mapping indoor terrain, and possibly outdoor terrain as well.

The gist of these tentative thoughts is as follows.

Stereoscopic cameras... and accelerometers.

As long as the direction of the view, the location of the cameras doing the viewing, and the distance of the object viewed are all known, could you not simply plot points in a digital three-dimensional field, and have this as a map?
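That plotting step can be sketched directly under the stated assumptions (camera position, viewing direction, and distance all known; every name here is hypothetical):

```python
import math

def plot_point(cam_pos, yaw, pitch, distance):
    """Convert one sighting (camera position, viewing direction as
    yaw/pitch in radians, measured distance) into a 3D map point."""
    # Unit vector along the viewing direction.
    dx = math.cos(pitch) * math.cos(yaw)
    dy = math.cos(pitch) * math.sin(yaw)
    dz = math.sin(pitch)
    # Walk `distance` metres from the camera along that ray.
    return (cam_pos[0] + distance * dx,
            cam_pos[1] + distance * dy,
            cam_pos[2] + distance * dz)

# Each sighting becomes one dot in the digital three-dimensional field.
world_map = [
    plot_point((0.0, 0.0, 1.2), math.radians(10), math.radians(-5), 3.4),
    plot_point((0.0, 0.0, 1.2), math.radians(25), math.radians(0), 2.1),
]
print(world_map)
```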

Perhaps this has been done... You would need to take points in space, probably a great many points, and then all this data could be used to calculate the space. This probably needs more refining before any sort of implementation. Where are/what are stereoscopic algorithms to determine distance?
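A partial answer to that last question: the classic stereoscopic approach finds the same feature in the left and right images (the hard "correspondence problem", tackled by algorithms such as block matching) and then converts the feature's horizontal shift, the disparity, into depth by similar triangles: depth = focal length × baseline / disparity. A minimal sketch with made-up camera numbers:

```python
def stereo_depth(x_left_px, x_right_px, focal_px=700.0, baseline_m=0.12):
    """Depth from stereo disparity: Z = f * B / d (similar triangles).
    focal_px and baseline_m are hypothetical camera parameters."""
    disparity = x_left_px - x_right_px  # pixels; larger means closer
    if disparity <= 0:
        raise ValueError("feature must shift left between the two views")
    return focal_px * baseline_m / disparity

# A feature seen at x=420 px in the left image and x=390 px in the right:
print(stereo_depth(420, 390))  # 700 * 0.12 / 30 = 2.8 metres
```

Feeding such depths into the point-plotting sketch above would be one way to populate the three-dimensional map.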

