
Friday 31 May 2013

Getting started with OpenMP

In the previous posts about creating logging software for the boat trial, I mentioned that we were logging GPS data and stereo images. This was done using OpenMP to parallelise the processing, so the program wasn't hanging around waiting for a GPS update and causing delays between image samples.

This is a quick guide on how to get OpenMP going.

My first mistake was thinking OpenMP was something that needed to be installed and configured. It's just a switch in Visual Studio 2010: go to the project properties page, then C/C++ > Language > OpenMP Support > Yes.
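
To check the switch has taken effect, a minimal sketch like the one below (not from the logger) should print one line per hardware thread. The _OPENMP macro is only defined when the option is enabled:

#include <iostream>
#include <omp.h>

int main()
{
#ifdef _OPENMP
  std::cout << "OpenMP enabled, _OPENMP = " << _OPENMP << std::endl;
#endif

  //each thread in the team prints its own ID; the critical section
  //just stops the output lines from interleaving
  #pragma omp parallel
  {
    #pragma omp critical
    std::cout << "Hello from thread " << omp_get_thread_num()
              << " of " << omp_get_num_threads() << std::endl;
  }
  return 0;
}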



Now you'll be amazed how easy it is to get a simple program working that uses two cores... Here's a snippet from the logger that captures GPS data using one core and images from a camera using the other. The important lines of code are picked out and briefly explained below the listing.

#include <iostream>
#include <stdio.h>
#include <string>
#include <Windows.h>
#include "SerialGPS.h"
#include <omp.h>

#include <opencv/cv.h>
#include <opencv/highgui.h>

using namespace std;

int main( int argc, char** argv )
{  
  //set up the com port 
  SerialGPS GPS("COM6",4800);
  SerialGPS::GPGGA GPSData={};
  
  if (!GPS.isReady()) { return 0;  }
  cout << "GPS Connected Status = " << GPS.isReady() << endl;
  
  int gpsReadCounter=0;
  
  //create the video capture object
  cv::VideoCapture cap1;
  cap1.open(0);
  if (!cap1.isOpened()) { return 0; } //bail out if the camera didn't open

  cv::Mat Frame;
  
  //define the shared variables between threads
  #pragma omp parallel sections shared(GPSData)
  {

    //////// FIRST CPU THREAD ////////
    #pragma omp section
    {
      for (int i = 0; i<100; i++) {
        cap1 >> Frame;
        cv::imshow("Cam",Frame);
        cv::waitKey(33);
        cout << "   GPS Data Time = " << GPSData.GPStime << endl;
      }
    }

    //////// SECOND CPU THREAD ////////
    #pragma omp section
    {

      for (int i = 0; i<100; i++) {
        if (GPS.update(GPSData, FORCE_UPDATE)) {
          cout << " T=" << GPSData.GPStime << " La=" << GPSData.Lat << " Lo=" << GPSData.Lon << endl;
        } else {
          cout << "Error Count = " << GPS.getErrorCount() << "  ";
          GPS.displayRAW();
          cout << endl; 
        }
      }
    }
  }
}


- Make sure to #include <omp.h>
- The line #pragma omp parallel sections shared(GPSData) makes the data structure GPSData accessible from either thread. Note that the brace opening the parallel block has to go on the line after the pragma, not on the same line.
- The lines #pragma omp section mark the blocks of code that run on individual threads.
- GPSData gets populated from the call GPS.update(GPSData, FORCE_UPDATE)
- Since we initialised all the values to 0 with SerialGPS::GPGGA GPSData={}; it doesn't matter if the camera thread reads the data before the GPS thread has populated the structure with any meaningful values.
- As soon as the GPS thread receives a valid signal and updates GPSData accordingly, the data becomes accessible from the camera thread.
- Note that while both threads loop while i<100, this i value is local to each thread. It's very likely that one thread will finish before the other, as no synchronization is taking place here; we're just offloading the GPS updates so that they don't interfere with the image sampling, which allows us to simply tag each image with the latest good GPS values (see the sketch below for how you'd add a lock).
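
Strictly speaking, reading GPSData in one thread while the other writes it is a data race, even if it proved harmless here. If you do want the camera thread to always see a consistent copy, a minimal sketch using a named critical section looks like this (the GPGGA stand-in struct and the dummy values are illustrative, not the real SerialGPS types):

#include <omp.h>
#include <cstdio>

//illustrative stand-in for SerialGPS::GPGGA - just a plain struct
struct GPGGA { double GPStime, Lat, Lon; };

int main()
{
  GPGGA GPSData = {};

  #pragma omp parallel sections shared(GPSData)
  {
    //////// "CAMERA" THREAD : reads the shared struct ////////
    #pragma omp section
    {
      for (int i = 0; i < 100; i++) {
        GPGGA snapshot;
        //take a consistent copy under the lock
        #pragma omp critical(gpsdata)
        { snapshot = GPSData; }
        printf("frame %d tagged with T=%.1f\n", i, snapshot.GPStime);
      }
    }

    //////// "GPS" THREAD : writes the shared struct ////////
    #pragma omp section
    {
      for (int i = 0; i < 100; i++) {
        //update under the same named lock
        #pragma omp critical(gpsdata)
        {
          GPSData.GPStime = 120000.0 + i;
          GPSData.Lat = 52.07;
          GPSData.Lon = -0.63;
        }
      }
    }
  }
  return 0;
}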

This example should give you a broad idea of how to use two threads and share a variable between them.

There's a handy OpenMP cheat sheet here: http://openmp.org/mp-documents/OpenMP3.1-CCard.pdf

Wednesday 29 May 2013

A "quick" Side Project

Since the last post I've had my 9-month PhD review. It went pretty smoothly; passed with no problems :)
Shortly afterwards my project supervisor asked if I could quickly help out on another project. I'm always keen to work on a variety of projects.

The task was simply to pull together some existing code to create a data logging program that logs stereo images from the +Point Grey Research Bumblebee camera along with GPS positions, all of this running on a remotely operated boat for the +Environment Agency.

I wrote a little C++ class to connect to a COM port and read off GPS data. This worked fine. The Bumblebee, however, proved to be more of a challenge. I consulted a Masters student who had used it more than I had and obtained some code from him to read the images. The logging laptops are +Dell 6400 ATGs; these are rugged laptops, perfect for use on the boat. Fresh installs of Win7 x64 were on these machines, as they'd been hacked to pieces (not in the literal, more in the coding sense) in the past for various projects. So, with the code working on a 32-bit WinXP machine, we ported the source over to +Microsoft Visual Studio in Win7, installed all the relevant +Point Grey Research drivers and... nothin'. Two days went by trying every sodding combination of drivers and libraries, but none of the laptops were having any of it under x64. So we binned it and went back to trusty old XP.

We had hoped to get out onto the river yesterday (28/05/13), but we were scuppered by a morning re-coding session to log grayscale stereo-rectified images instead of colour un-rectified ones, delays from +Leica Geosystems AG in obtaining a theodolite, problems with a Bluetooth 5 Hz GPS system that needs a power cycle if something tries to connect at the incorrect baud rate, and, not forgetting, the British weather pissing it down all afternoon.

Student with his boat... finally.
As it was getting on for 6pm by the time everything was up and running, we postponed the data logging day until tomorrow (30/05/13). Stay tuned for more...






Tuesday 7 May 2013

Getting back to science...


So now the 9-month review report has been handed in, I can get back to doing science. Before tackling the review report, this is the stage I had reached: a 3D map of the DIP (Digital Image Processing) Lab at Cranfield University (albeit a noisy one). It is a simple reconstruction created using a stereo vision algorithm and a visual odometry algorithm.

The process is...
- Create disparity depth map from stereo camera
- Calculate camera trajectory from the visual odometry
- Add point clouds to a global map with an appropriate offset calculated from the trajectory (a rough sketch of this loop is below).
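
For flavour, here's a very rough, compilable sketch of that loop in the OpenCV 2.x style API used elsewhere on this blog. computeVisualOdometry() and appendToGlobalMap() are hypothetical stubs standing in for the real algorithms, and Q is the 4x4 reprojection matrix you would get from stereo calibration:

#include <opencv/cv.h>

//hypothetical stubs standing in for the real algorithms
cv::Mat computeVisualOdometry(const cv::Mat& prev, const cv::Mat& curr)
{
  return cv::Mat::eye(4, 4, CV_64F); //stub: returns "no motion"
}
void appendToGlobalMap(const cv::Mat& points3D, const cv::Mat& pose)
{
  //stub: transform points3D by pose and append to the global cloud
}

int main()
{
  cv::Mat pose = cv::Mat::eye(4, 4, CV_64F); //camera pose in the global frame
  cv::Mat Q = cv::Mat::eye(4, 4, CV_64F);    //reprojection matrix from calibration
  cv::StereoBM matcher;                      //block-matching stereo correspondence
  cv::Mat prevLeft;

  for (int frame = 0; frame < 100; frame++) {
    cv::Mat left, right; //grab the rectified stereo pair here
    if (left.empty() || right.empty()) { break; }

    //1. create disparity depth map from the stereo camera
    cv::Mat disparity, points3D;
    matcher(left, right, disparity);
    cv::reprojectImageTo3D(disparity, points3D, Q);

    //2. calculate camera trajectory from the visual odometry
    pose = pose * computeVisualOdometry(prevLeft, left);

    //3. add the point cloud to the global map, offset by the trajectory
    appendToGlobalMap(points3D, pose);
    prevLeft = left;
  }
  return 0;
}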

It's good... but there's lots of work to do yet...


The equation for converting disparity map values into real-world depth is quoted in a ton of papers/books, but I could not actually see where it was derived from, so I worked it out for the review report for completeness. I'm posting it up here in case it helps someone else.



Above is a diagram of the triangulation theory. We need to know a few parameters about the setup before we can triangulate from a disparity map image to real-world depth.
We need:

- The focal length of the cameras (f) (ideally they would be the same)
- The baseline of the stereo rig (B)

So use the triangle ratios to get...
$\frac{P_{L}}{f}=\frac{X_{L}}{Z}\qquad\frac{P_{R}}{f}=\frac{X_{R}}{Z}$
...rearrange to get...
$X_{R}=\frac{ZP_{R}}{f}\qquad X_{L}=\frac{ZP_{L}}{f}$
...add them together to get the baseline...
$B=\frac{ZP_{R}}{f}+\frac{ZP_{L}}{f}=\frac{Z\left(P_{R}+P_{L}\right)}{f}$
...and substitute the disparity $d=P_{L}-P_{R}$ (defining $P_{R}$ as negative) to get...
$Z=\frac{fB}{d}$

Disparity is calculated as the difference in pixel locations for matched regions between the left and right images (or the other way round, depending on your reference camera). $P_{R}$ lies in the negative direction from the centre of the image plane, which is why we define it as negative.
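
As a quick sanity check with made-up numbers (illustrative values, not our rig's calibration): for a focal length of $f=800$ pixels, a baseline of $B=0.24$ m and a measured disparity of $d=64$ pixels, the depth is $Z=\frac{fB}{d}=\frac{800\times0.24}{64}=3$ m. Since $f$ and $d$ are both in pixels they cancel, and $Z$ comes out in the units of $B$.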

Edit
10:35 22/5/13
I have no idea why, but the LaTeX equations have stopped rendering...
Adding the JavaScript used to kill the Google Plus comments box at the expense of the LaTeX equations. Now it's just killing everything... arse.

10:38 22/5/13
Oh no, wait, the comments box is back... wtf.

14:09 1/7/13
It looks like using the dynamic page template in Blogger kills the LaTeX script.