Markerless Tracking/Realization

One of the main goals of our research project Markerless Tracking is to implement the stated idea for use in robotics and augmented reality. Because the aim is clear but the best way to realize it is still unsettled, we are using a prototype-centered software development approach.

Prototype A1

A physical model of the Stanford dragon, created by rapid prototyping, which was recognized and tracked by Prototype A.

The first prototype was developed by Dipl.-Inform. Rodja Trappe, Dipl.-Inform. Matthias Dennhardt and Dipl.-Inform. Tobias Feldmann at the University of Koblenz.

Setting

While a physical version of the Stanford dragon (created by rapid prototyping) is captured with a single 640x480 FireWire camera under controlled lighting, an OpenGL rendering engine generates a large number of images showing possible poses of the dragon.

Workflow

After an image is captured, an adaptive random search optimization is started to obtain the relative pose between camera and dragon. To do so, the optimization algorithm must be able to evaluate a six-dimensional objective function (three translation and three rotation parameters). This evaluation is implemented as a similarity measure between the captured image and the computer graphics image generated on the fly. The measure is based on the well-known normalized cross-correlation.
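
The parameters of the original optimizer are not documented here, but the following is a minimal sketch of such an adaptive random search over the 6D pose space; the Pose type, the step-size factors and the similarity callback are illustrative assumptions, the callback standing for rendering the dragon at a hypothesized pose and comparing the rendering with the camera image.

#include <algorithm>
#include <array>
#include <functional>
#include <random>

// A 6D pose hypothesis: translation (x, y, z) and rotation (rx, ry, rz).
using Pose = std::array<double, 6>;

// Adaptive random search: perturb the best pose found so far and adapt
// the step size depending on whether the perturbation improved the score.
Pose adaptiveRandomSearch(Pose best,
                          const std::function<double(const Pose&)>& similarity,
                          int iterations)
{
    std::mt19937 rng{std::random_device{}()};
    double stepSize = 1.0;
    double bestScore = similarity(best);

    for (int i = 0; i < iterations; ++i) {
        std::normal_distribution<double> noise(0.0, stepSize);
        Pose candidate = best;
        for (double& component : candidate)
            component += noise(rng);

        double score = similarity(candidate);
        if (score > bestScore) {
            best = candidate;                           // keep the better hypothesis
            bestScore = score;
            stepSize *= 0.9;                            // narrow the search
        } else {
            stepSize = std::min(stepSize * 1.05, 2.0);  // widen it again, bounded
        }
    }
    return best;
}

The slow convergence of exactly this kind of scheme is what motivates the particle filter planned for Prototype B below.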

Technical Details

Prototype A is written in C++ and compiles only on Linux systems. Ogre was used as the rendering engine and unicap for image acquisition. To load, manipulate and handle images we used the DevIL image library.
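
As an illustration of that stack, a minimal DevIL snippet for loading an image and accessing its raw pixels might look as follows; the file name is only an example and error handling is omitted.

#include <IL/il.h>

int main()
{
    ilInit();                              // initialize the DevIL library

    ILuint image;
    ilGenImages(1, &image);                // create an image name
    ilBindImage(image);                    // make it the current image

    if (ilLoadImage("dragon.png")) {       // load from disk (example file name)
        int width  = ilGetInteger(IL_IMAGE_WIDTH);
        int height = ilGetInteger(IL_IMAGE_HEIGHT);
        ILubyte* pixels = ilGetData();     // raw pixel data for processing
        // ... hand `pixels` (width x height) to the similarity measure ...
    }

    ilDeleteImages(1, &image);             // free the image again
    return 0;
}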

Implementation Details

The rendering was done at a resolution of 512x512, and the result was transferred in full to CPU memory in order to compute the similarity between the generated and the captured image.
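
Prototype A rendered through Ogre, whose readback API is not reproduced here; the following raw OpenGL sketch only illustrates the kind of framebuffer download this step involves.

#include <GL/gl.h>
#include <vector>

// Download the current 512x512 RGBA framebuffer into CPU memory so the
// similarity measure can be evaluated on the host.
std::vector<unsigned char> downloadRendering()
{
    const int width = 512, height = 512;
    std::vector<unsigned char> pixels(width * height * 4);  // 4 bytes per RGBA pixel
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}

Such a readback stalls the graphics pipeline, which is one reason Prototype A3 later moves the comparison onto the GPU.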

Results

We were able to find the pose of the dragon in about 70 seconds. After the first pose had been found, we switched into another optimization mode to keep track of small movements. It was then possible to turn and move the dragon (~1 cm) every ten seconds without losing track of the object.

Prototype A2

The second prototype is heavily based on the first one and targets the same setting with the same workflow. It was written by Dipl.-Inform. Rodja Trappe to fix some bugs that had occurred, to gain more speed, and to achieve greater robustness in pose estimation.

Comparing Images

The similarity measure between rendering and captured image was improved by no longer normalizing the colors with the mean of the image. This also increased performance.

The C++ implementation is as follows:

float compareImages(RGBAImage* imageA, RGBAImage* imageB){

	float distance = 0.0f;
	int pixelCount = 0;

	CGImage::byte* dataA = imageA->getRaw();
	CGImage::byte* dataB = imageB->getRaw();

	// number of pixels of the (equally sized) images; assuming RGBAImage
	// exposes its dimensions
	const int size = imageA->getWidth() * imageA->getHeight();

	for(int i = 0; i < size; i++){

		// read the color channels; doing this via *data++ inside the
		// constructor call would be unsequenced and thus undefined behavior
		Vec3 cA(dataA[0], dataA[1], dataA[2]);
		Vec3 cB(dataB[0], dataB[1], dataB[2]);
		dataA += 3;
		dataB += 3;

		// only consider pixels where the rendering is not transparent
		if (*dataA != 0){
			pixelCount++;
			distance += (cA - cB).length();
		}

		// skip the alpha channel
		dataA++; dataB++;
	}

	// nothing rendered: report no similarity at all
	if (pixelCount == 0)
		return 0.0f;

	float meanDistance = distance / pixelCount;
	float difference = meanDistance / MAX_DISTANCE_IN_COLORSPACE;
	float similarity = 1.0f - difference;
	return similarity;
}

With MAX_DISTANCE_IN_COLORSPACE being the constant 255·√3 ≈ 441.67, the maximal Euclidean distance between two 8-bit RGB colors (black and white).
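
A hypothetical use of the function, where grabCameraFrame and renderDragonAt are illustrative stand-ins for the capture and rendering steps:

// grabCameraFrame() and renderDragonAt() are illustrative names,
// not part of the actual prototype code.
RGBAImage* captured = grabCameraFrame();
RGBAImage* rendered = renderDragonAt(candidatePose);
float similarity = compareImages(captured, rendered);  // result in [0, 1]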

Prototype A3

Short video showing prototype A3 in action.

The first prototype was enhanced by implementing the image comparison on the GPU. This reduces the time per hypothesis to about 10 ms; the CPU implementation took about 12 ms to download the image from the graphics card plus about 10 ms for the image comparison itself. With the new implementation, nearly interactive frame rates are possible after the initial detection of the pose.

Prototype B

Note: This prototype is currently in the planning phase.

The adaptive random search optimization as implemented in Prototype A seems to converge very slowly. The implementation used also adapts badly when the function values deteriorate over time. Therefore we would like to implement a particle filter to replace the old optimization algorithm.
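
As a rough illustration of the planned direction, here is a minimal sketch of one step of a bootstrap particle filter over the 6D pose; the Pose type, the noise level and the similarity callback are illustrative assumptions, as above.

#include <array>
#include <functional>
#include <random>
#include <vector>

using Pose = std::array<double, 6>;  // x, y, z, rx, ry, rz

struct Particle {
    Pose   pose;
    double weight;
};

// One bootstrap particle filter step: diffuse every particle with process
// noise, weight it by the image similarity, then resample with replacement
// proportionally to the weights.
void particleFilterStep(std::vector<Particle>& particles,
                        const std::function<double(const Pose&)>& similarity,
                        std::mt19937& rng)
{
    std::normal_distribution<double> processNoise(0.0, 0.05);

    // predict and weight; the epsilon guards against all-zero weights
    std::vector<double> weights;
    weights.reserve(particles.size());
    for (Particle& p : particles) {
        for (double& component : p.pose)
            component += processNoise(rng);
        p.weight = similarity(p.pose) + 1e-12;
        weights.push_back(p.weight);
    }

    // resample; discrete_distribution normalizes the weights itself
    std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
    std::vector<Particle> resampled;
    resampled.reserve(particles.size());
    for (std::size_t i = 0; i < particles.size(); ++i)
        resampled.push_back(particles[pick(rng)]);
    particles = std::move(resampled);
}

Unlike the random search, the filter maintains a whole population of pose hypotheses, so it can recover when the similarity values temporarily deteriorate.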