Stochastic Diffusion Search (and where we can use it)

Stochastic Diffusion Search is one of the first Swarm Intelligence algorithms, characterised as a ‘population based best-fit pattern matching algorithm’ [1] and first proposed by John Bishop in 1989 [2]. It reminds me most of ants foraging for food, communicating at a distance via pheromones to reveal promising locations; the result is a truly incredible ‘collective intelligence’ that allows the ants to efficiently locate and exploit food sources.

As a heuristic, the basic idea is that a swarm of agents is randomly initialised in a search space. Each agent then applies some kind of fitness function, determined by its position in the search space, to decide whether it is satisfied or not satisfied: this is the Test phase. Then follows the Diffusion phase, which is essentially a phase of communication between the agents. In active recruitment, the successful, active agents let a randomly chosen unsuccessful, inactive agent know where in the search space they were successful, and the notified agent then moves there to look for neighbouring success locations. The Test and Diffusion phases are repeated until the system converges on an optimum location or set of locations (depending on the problem).
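The two phases can be sketched in code for the string-search formulation (finding a pattern in a text). This is a minimal illustrative sketch, not a reference implementation: the function name, parameters and the partial test (checking a single randomly chosen character of the pattern) are my own choices.

```python
import random
from collections import Counter

def sds_search(text, pattern, n_agents=100, n_iterations=200, seed=0):
    """Minimal SDS sketch with active recruitment for substring search."""
    rng = random.Random(seed)
    positions = list(range(len(text) - len(pattern) + 1))
    hyps = [rng.choice(positions) for _ in range(n_agents)]  # hypotheses
    active = [False] * n_agents

    for _ in range(n_iterations):
        # Test phase: each agent checks one randomly chosen character of
        # the pattern at its hypothesised position (a partial evaluation
        # of the fitness function).
        for i in range(n_agents):
            j = rng.randrange(len(pattern))
            active[i] = text[hyps[i] + j] == pattern[j]

        # Diffusion phase (active recruitment): each active agent contacts
        # a randomly chosen agent; if that agent is inactive, it adopts
        # the active agent's hypothesis.
        recruited = set()
        for i in range(n_agents):
            if active[i]:
                other = rng.randrange(n_agents)
                if not active[other]:
                    hyps[other] = hyps[i]
                    recruited.add(other)

        # Inactive agents that were not recruited restart at random.
        for i in range(n_agents):
            if not active[i] and i not in recruited:
                hyps[i] = rng.choice(positions)

    return Counter(hyps)  # cluster size per hypothesised position
```

With a pattern that occurs once, most of the swarm ends up clustered on the true start index, and the cluster size gives a simple convergence measure.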

When we start to solve problems based on this heuristic, an important characteristic of the algorithm arises: the ‘Recruitment Mode’ contained within the Diffusion, or communication, phase, which governs the direction and conditions of the communication. Namely, there is active recruitment (explained above), passive recruitment and dual recruitment. There are also context-sensitive/context-free mechanisms that can be used within the recruitment modes.

In the dual recruitment strategy, both the successful, active agents and the unsuccessful, inactive agents randomly select other agents. When an agent is successful and the randomly selected agent is not, the hypothesis of the successful agent is shared with the unsuccessful one, which is then flagged as active. Likewise, if an inactive agent that is not yet engaged in communication selects an active agent, information flows from the successful agent to the unsuccessful one, and the unsuccessful agent is flagged as active. Finally, any agent that is neither active nor engaged is given a new random hypothesis. [3]
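One Diffusion step under dual recruitment might look like the sketch below. The function name, the `engaged` flag and the in-place update are my own illustrative choices; the snapshot of the activity flags simply keeps one step's communication from feeding back into itself.

```python
import random

def dual_recruitment_step(hyps, active, positions, rng=random):
    """One Diffusion step under dual recruitment (illustrative sketch):
    both active and inactive agents select random partners."""
    n = len(hyps)
    was_active = list(active)   # snapshot before any communication
    engaged = [False] * n       # agents recruited during this step
    for i in range(n):
        other = rng.randrange(n)
        if was_active[i] and not was_active[other]:
            # Hypothesis flows from the successful agent to the
            # unsuccessful one, which is flagged as active.
            hyps[other] = hyps[i]
            engaged[other] = True
            active[other] = True
        elif not was_active[i] and not engaged[i] and was_active[other]:
            # A disengaged inactive agent selected an active one:
            # information flows the other way; the selector becomes active.
            hyps[i] = hyps[other]
            engaged[i] = True
            active[i] = True
    # Any agent still neither active nor engaged restarts at random.
    for i in range(n):
        if not active[i] and not engaged[i]:
            hyps[i] = rng.choice(positions)
    return hyps, active
```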

I have explored two separate implementations of SDS. One is a mining simulation (image in header) that closely follows the heuristic I have described above (just replace agents with miners and success with gold!). The other finds a word in a text document; here is an example of finding all the ‘Denmarks’ in a short passage, making use of the context-sensitive recruitment mechanism, whereby if an active agent randomly chooses another active agent that maintains the same hypothesis, the selecting agent is set inactive and adopts a new random hypothesis (thereby achieving a wider exploration):

[Screenshot: SDS agents locating the occurrences of ‘Denmark’ in a short passage]

The context-free mechanism performs similarly: each active agent randomly chooses another agent. However, if the selected agent is active (irrespective of whether it holds the same hypothesis or not), the selecting agent becomes inactive and picks a new random hypothesis. ‘By the same token, this mechanism ensures that even if one or more good solutions exist, about half of the agents explore the problem space and investigate other possible solutions.’ [3]
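The two mechanisms differ only in the extra check an active agent performs during the Diffusion phase. A small sketch (the function name and the `mode` strings are mine):

```python
import random

def should_restart(i, hyps, active, mode, rng=random):
    """Extra Diffusion-phase check for an *active* agent i (sketch).

    Returns True when agent i should be set inactive and adopt a new
    random hypothesis, under the mechanism named by `mode`."""
    other = rng.randrange(len(hyps))
    if not active[other]:
        return False
    if mode == "context-sensitive":
        # Deactivate only when the selected active agent holds the same
        # hypothesis, freeing the selector to explore elsewhere.
        return hyps[other] == hyps[i]
    if mode == "context-free":
        # Deactivate whenever the selected agent is active, regardless of
        # its hypothesis, keeping roughly half the swarm exploring.
        return True
    raise ValueError(f"unknown mode: {mode}")
```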

On the subject of improving education, SDS has been used to predict whether students enrolled on MOOC courses will pass or fail [4]. One problem for which I thought SDS would be useful is the Cold Start Problem in Recommender Systems/Collaborative Filtering: when we want to find suitable recommendations for a user about whom we have little information, we could populate a feature space with popular items and then, based on the user’s feature characteristics, engineer the swarm so that it chooses the best possible items (using a fitness function such as rating). Evidence shows that Swarm Intelligence is ‘likely to be particularly useful when applied to problems with huge dimensionality’ [4], which is usually the case for recommendation systems!

It could also help achieve the main goal of CF. To use the mining analogy, we represent each user as a ‘hill’ that may have ‘gold’ (i.e. good items based on rating, or a ‘goodness’ rating based on the user’s feature weights) within their collections/purchase history. Employing SDS could be a good way of finding similar users, as an alternative to metrics like Euclidean distance.

There is much more to say on SDS and its applications. One of the things that fascinates me is the algorithm’s conception: it was proposed as an alternative to more traditional Artificial Neural Networks for recognising patterns [1]. Neural networks are, by design, non-stochastic, deterministic models, for which an alternative (i.e. swarm intelligence) is often more accurate and efficient. It is worth noting that ANNs are also nature-inspired models: emulations of a simplified model, or heuristic, of the brain. Let’s keep looking to nature, then, for more amazing problem-solving heuristics!

Thanks for reading! This is the third part in a series of blogs for the Natural Computing module at Goldsmiths, University of London.
