Dispersive Flies Optimisation in Symmetry Analysis

In this post I’ll explore and evaluate DFO for finding points and lines of n-fold symmetry to aid computer vision. See my previous blog post explaining DFO here.

To define our problem of finding points of n-fold symmetry: we are looking for specific areas (in this case, a single pixel) where the fitness function of symmetry is at its optimum (in this case, a minimum). Let’s define our fitness function. We need two blocks of comparison, each with the same radius (the distance from the centre to any of the block’s four vertices). The sums of pixel values in each block are then evaluated either side of our proposed axis of symmetry, giving symmetry metric = absolute(sum1 – sum2). In the case of a vertical axis, for example:
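A minimal sketch of this fitness function in plain Java, assuming a grayscale image stored as a 2D array of brightness values; the class, method names, and the offset parameter are my own illustrative choices, not the post’s actual code:

```java
public class SymmetryFitness {
    // Sum brightness over a square block of the given radius centred at (cx, cy).
    static int blockSum(int[][] img, int cx, int cy, int radius) {
        int sum = 0;
        for (int y = cy - radius; y <= cy + radius; y++)
            for (int x = cx - radius; x <= cx + radius; x++)
                sum += img[y][x];
        return sum;
    }

    // Fitness for a vertical axis at column axisX: compare a block a fixed
    // offset to the left of the axis with its mirror on the right.
    // Lower is better; 0 means the two blocks sum identically.
    static int fitness(int[][] img, int axisX, int cy, int offset, int radius) {
        int sum1 = blockSum(img, axisX - offset, cy, radius);
        int sum2 = blockSum(img, axisX + offset, cy, radius);
        return Math.abs(sum1 - sum2);
    }
}
```

On a perfectly mirror-symmetric image the metric hits 0 exactly on the true axis and grows as the candidate axis drifts off it, which is what the swarm minimises.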


We can see this in action here on black-and-white images:

Then, to improve, we change our communication scheme so that the flies navigate around the best fly in the swarm (as opposed to our previously democratic approach):

for (int d = 0; d < Global.dim; d++) {
  // Elitist update: step toward the swarm's best fly
  // (the commented-out line is the previous local-neighbour version).
  temp[d] = Global.fly[chosen].getPos(d) +
    //random(1) * (Global.fly[chosen].getPos(d) - Global.fly[i].getPos(d)); // local neighbour
    random(1) * (Global.fly[Global.bestIndex].getPos(d) - Global.fly[i].getPos(d));
}
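The same update can be sketched in plain Java (the original is Processing). The per-dimension disturbance restart is part of standard DFO; the dt value, bounds, and all names here are illustrative assumptions:

```java
import java.util.Random;

public class ElitistUpdate {
    // Propose a new position for fly i: start from its chosen neighbour and
    // step a random fraction toward the swarm's best fly. With probability
    // dt per dimension, restart that dimension uniformly within the bounds
    // (DFO's disturbance mechanism, which maintains swarm diversity).
    static double[] update(double[][] flies, int i, int chosen, int bestIndex,
                           double dt, double lower, double upper, Random rng) {
        int dim = flies[i].length;
        double[] temp = new double[dim];
        for (int d = 0; d < dim; d++) {
            if (rng.nextDouble() < dt) {
                temp[d] = lower + rng.nextDouble() * (upper - lower); // disturbance
            } else {
                temp[d] = flies[chosen][d]
                        + rng.nextDouble() * (flies[bestIndex][d] - flies[i][d]);
            }
        }
        return temp;
    }
}
```

Note that when the whole swarm has converged on one point, the update (with no disturbance) leaves a fly exactly where it is, which is why the best fly's position stabilises on strongly symmetric images.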

and re-initialise our agents when loading a new image:
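A minimal sketch of that re-initialisation, assuming the flies are stored as (x, y) positions; the representation and names are assumptions, not the post’s actual code:

```java
import java.util.Random;

public class SwarmReset {
    // Scatter every fly uniformly over the new image's dimensions so that
    // no positional bias carries over from the previous image.
    static double[][] reinitialise(int numFlies, int imgWidth, int imgHeight, long seed) {
        Random rng = new Random(seed);
        double[][] flies = new double[numFlies][2];
        for (int i = 0; i < numFlies; i++) {
            flies[i][0] = rng.nextDouble() * imgWidth;  // x in [0, width)
            flies[i][1] = rng.nextDouble() * imgHeight; // y in [0, height)
        }
        return flies;
    }
}
```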

Here we see a more stable point of symmetry and, most importantly, a solid observation: the more symmetrical the image is to human perception, the more stable the position of the best fly in the swarm.

The next problem, which would lead to evolving our algorithm and making it more useful, is finding symmetry in nature, e.g. for detecting animals. Let’s see how our most recent algorithm behaves on an image of a zebra (with an updated radius proportional to our new image size):

Here we see some interesting behaviour straight away: around 0:13 and 0:52 there is a clear indication of the central, most obvious line of symmetry, albeit a little left of centre! This may be because the right side of our image is a little brighter than the left, confusing our fitness function evaluation. This might be a good place to start evolving our algorithm, then. It is worth noting that because our image is black and white, we don’t have to worry too much about our colour space, and using brightness values is sufficient for the next step in developing our algorithm. However, when we start to analyse more colourful animals, we’ll need to consider the best colour space to use, e.g. L*a*b* (where Euclidean distance corresponds to perceptual difference in colour), but more on this later.
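To make the L*a*b* point concrete: the simplest perceptual colour difference there, CIE76 ΔE, is just the Euclidean distance between two (L*, a*, b*) triples. A sketch with my own names:

```java
public class LabDistance {
    // Delta E (CIE76): straight-line distance in L*a*b* space, where roughly
    // equal distances correspond to roughly equal perceived colour differences.
    static double deltaE(double[] lab1, double[] lab2) {
        double dL = lab1[0] - lab2[0];
        double da = lab1[1] - lab2[1];
        double db = lab1[2] - lab2[2];
        return Math.sqrt(dL * dL + da * da + db * db);
    }
}
```

Swapping raw brightness for a distance like this in the block comparison is one route to a colour-aware fitness function later on.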

For now, let’s investigate how to deal with the varying-brightness issue. Whereas before we simply had all-black and all-white pixels, we now need to account for pixels that lie on the greyscale. One way to solve this could be to use a more balanced approach, i.e. four summation boxes to compare instead of two, with a reversed comparison for our second set. More to come!
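One possible reading of that four-box idea, sketched under assumptions (all names and parameters are illustrative, and this is my interpretation of the “reversed comparison”, not a confirmed design): use two pairs of blocks at different offsets from the candidate axis, and compare the second pair in the reverse direction, so a uniform brightness shift on one side of the image cancels out:

```java
public class BalancedFitness {
    // Sum brightness over a square block of the given radius centred at (cx, cy).
    static int blockSum(int[][] img, int cx, int cy, int radius) {
        int sum = 0;
        for (int y = cy - radius; y <= cy + radius; y++)
            for (int x = cx - radius; x <= cx + radius; x++)
                sum += img[y][x];
        return sum;
    }

    // (s1 - s2) measures left-minus-right for the inner pair of blocks;
    // (s4 - s3) measures right-minus-left for the outer pair. A constant
    // brightness offset on one side contributes equally, with opposite
    // signs, to the two terms, so it cancels in the sum.
    static int fitness(int[][] img, int axisX, int cy,
                       int offset1, int offset2, int radius) {
        int s1 = blockSum(img, axisX - offset1, cy, radius);
        int s2 = blockSum(img, axisX + offset1, cy, radius);
        int s3 = blockSum(img, axisX - offset2, cy, radius);
        int s4 = blockSum(img, axisX + offset2, cy, radius);
        return Math.abs((s1 - s2) + (s4 - s3));
    }
}
```

On a symmetric image whose right half is uniformly brighter, the two-box metric reports a spurious asymmetry while this version still returns 0 on the true axis.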

See the code on Github.

Thanks for reading! This is the fourth part in a series of blogs for the Natural Computing module at Goldsmiths, University of London.

P.S. A fitting soundtrack for the task:

