The knapsack problem is a classical problem in combinatorial optimisation. It has many applications, including finding the least wasteful way to cut raw materials, as well as other economic problems.

An example of the knapsack problem could be the following (substituting a knapsack for a car boot). Suppose we are planning a trip to a car boot sale and are therefore interested in filling the car with items that would be valuable to sell. There are N different items; these could include a bunch of old toys, a games console, an antique chair, a CD player, and so forth. Each item has two attributes: a weight (or volume) and an estimated value. Since the boot has a limited weight (or volume) capacity, the problem of interest is to figure out how to load the boot with a combination of items that yields the greatest total value.

Here I will be reporting my progress on using DFO (Dispersive Flies Optimisation) to solve the n-knapsack problem: that is, the variant with multiple bags to fill, where each bag has unique constraints but the items keep constant values.

So far, I have solved the problem for 2 knapsacks and 28 items with the same capacity but different weight constraints; see the 'example' problem within the dataset I will be using here. This example problem therefore has 28 dimensions, but a more complex solution surface than a single knapsack. It was solved in 500 iterations, using 100 flies and a disturbance threshold of 0.165.

I am now trying to solve the problem for 30 knapsacks and am having some trouble getting close to the global optimum (the search keeps getting stuck in local optima), even when using 8,000 iterations! This now seems like a good opportunity to optimise my approach. I will now try to improve the efficiency of the algorithm by reducing the dimensionality of the update phase in DFO.
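For context, the update phase in question can be sketched as follows. This is a minimal, generic version of the standard DFO update, not my exact implementation: the ring neighbourhood, maximisation objective, function name, and parameter defaults are all assumptions for illustration.

```python
import numpy as np

def dfo_update(flies, fitnesses, delta=0.165, lower=0.0, upper=1.0, rng=None):
    """One DFO update phase: each fly moves towards its best ring
    neighbour, attracted by the swarm best, with per-dimension random
    restarts controlled by the disturbance threshold delta."""
    rng = rng or np.random.default_rng()
    n_flies, n_dims = flies.shape
    swarm_best = flies[np.argmax(fitnesses)]  # assuming we maximise fitness
    new_flies = np.empty_like(flies)
    for i in range(n_flies):
        # ring topology: compare the left and right neighbours
        left, right = (i - 1) % n_flies, (i + 1) % n_flies
        nb = left if fitnesses[left] > fitnesses[right] else right
        for d in range(n_dims):
            if rng.random() < delta:
                # disturbance: restart this component anywhere in range
                new_flies[i, d] = rng.uniform(lower, upper)
            else:
                u = rng.random()
                new_flies[i, d] = flies[nb, d] + u * (swarm_best[d] - flies[i, d])
        np.clip(new_flies[i], lower, upper, out=new_flies[i])
    return new_flies
```

Note that the inner loop runs over every dimension for every fly, which is exactly the cost the dimensionality reduction below tries to cut.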

Dimensionality Reduction

As the solution vector is a binary string, we could convert it to a decimal value and update the flies in one dimension rather than nItems dimensions, then convert it back to binary within the fitness function in order to evaluate whether it meets the constraints and how much the solution is worth. Upon implementing this, I discovered that although it speeds up convergence initially, it gets stuck in a local optimum.
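The encode/decode step itself is straightforward; here is one way to sketch it (the function names are mine, not from my implementation):

```python
def bits_to_int(bits):
    """Collapse a binary solution vector into a single integer,
    so the flies can be updated in one dimension."""
    return int("".join(str(b) for b in bits), 2)

def int_to_bits(value, n_items):
    """Expand the integer back into a fixed-length binary vector
    for constraint checking and value evaluation."""
    return [int(b) for b in format(value, f"0{n_items}b")]
```

For example, `bits_to_int([1, 0, 1])` gives 5, and `int_to_bits(5, 4)` gives `[0, 1, 0, 1]`.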

Fitness Penalisation Oscillation

In my most recent experiments, I have found that penalising the fitness, by multiplying it by a value between 0 and 1 whenever a solution exceeds a constraint threshold, is much more successful. If we then slowly decrease or increase this penalty value over time, we see greatly improved results in both function evaluations and the best combination/value obtained. My next step is to see whether I can model this as an oscillator, adapting its frequency dynamically according to the dimensionality and the variance in the constraints of the particular problem set.
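A minimal sketch of the idea, for a single knapsack: the raw value is kept when the constraint holds, and multiplied by a penalty in (0, 1) when it is violated. The sinusoidal schedule and the `period`, `p_min`, and `p_max` parameters are illustrative assumptions, not the schedule I actually settled on.

```python
import math

def penalised_fitness(value, weight, capacity, t, period=200,
                      p_min=0.1, p_max=0.9):
    """Return the raw value if the weight constraint is satisfied;
    otherwise scale it by a penalty factor that oscillates between
    p_min and p_max as the iteration counter t advances."""
    if weight <= capacity:
        return value
    # sinusoidal oscillation of the penalty factor over iterations
    penalty = p_min + (p_max - p_min) * 0.5 * (1 + math.sin(2 * math.pi * t / period))
    return value * penalty
```

Because the penalty oscillates rather than staying fixed, infeasible solutions are sometimes punished lightly (letting the swarm cross infeasible regions) and sometimes heavily (pulling it back towards feasibility).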

**Update:**

I’m now working on this problem in a group with a couple of my super smart peers, Francesco Perticarari and Bashar Saade. See our code on GitHub here. We’ll hopefully be presenting a research paper on our work in the new year at the ANTS and/or AISB conferences!

*Thanks for reading! This is the seventh part in a series of blogs for the Natural Computing module at Goldsmiths, University of London.*