In this post I will be exploring Ant Colony Optimisation (ACO) in relation to a particular application of the Travelling Salesman Problem: music exploration between items in an existing collection and items that have been recommended by a system.
A user is presented with the items in their collection and can immediately see recommendations at relevant distances from those items. Upon browsing to a new item, a new graph/map is visualised: the features most relevant to the user's preferences (i.e. the most heavily weighted ones) are graphed, and a network of music releases is displayed as nodes, with edges labelled with the name, and possibly the value, of the features connecting the items that lie between a collection item and a recommendation.
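The release network described above could be sketched roughly as follows. This is only an illustration, not an implementation: the release names, feature vectors and similarity thresholds are all invented stand-ins, and a real system would derive them from the recommender's weighted feature space.

```python
# Hypothetical feature vectors for a handful of releases.
releases = {
    "release_a": {"tempo": 120, "energy": 0.8},
    "release_b": {"tempo": 122, "energy": 0.3},
    "release_c": {"tempo": 90,  "energy": 0.82},
}

def build_graph(releases, thresholds):
    """Connect two releases with a labelled edge whenever one of their
    feature values differs by less than that feature's threshold."""
    edges = []
    names = list(releases)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for feat, tol in thresholds.items():
                if abs(releases[a][feat] - releases[b][feat]) <= tol:
                    # Edge carries the feature name and value that connect the pair.
                    edges.append((a, b, feat, releases[a][feat]))
    return edges

edges = build_graph(releases, {"tempo": 5, "energy": 0.05})
# e.g. release_a and release_b are joined by a "tempo" edge,
# release_a and release_c by an "energy" edge.
```

In a full application each edge's label would be rendered on the visualised graph, so the user can read off *why* two releases are considered neighbours.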
The starting node would be an existing item in a user's collection, and the final destination a recommended item of the user's choice.
Thus, a visual data space is presented in which a user can explore and learn about new music, 'joining the dots' between releases. In essence, this takes part of the machine learning algorithm out of the 'black box' and presents it to the user alongside the results. There could also be an opportunity to implement an evaluation system for the RecSys, where users can say whether they like or dislike a recommendation, or, further, whether they like or dislike how the recommendation was made, allowing the machine learning algorithm to improve and adapt accordingly. A large part of the work for this task will be finding the best way to visualise the multidimensional feature space in two or three dimensions; dimensionality reduction techniques such as t-SNE, MDS and UMAP will be very useful here. I would also like to incorporate each item's release artwork in the visual space, which I believe will add to the immersiveness and attractiveness of the application.
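As a concrete taste of the dimensionality reduction step, here is a minimal sketch of classical MDS (one of the techniques mentioned above), projecting a feature matrix down to 2-D coordinates for plotting. The feature matrix is random stand-in data, not real audio features.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((10, 8))   # 10 releases, 8 audio features (stand-in data)

# Squared Euclidean distance matrix between all pairs of releases.
sq = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)

# Double-centre the squared distances to recover an inner-product matrix,
# then take the top-2 eigenvectors as 2-D coordinates (classical MDS).
n = sq.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ sq @ J
vals, vecs = np.linalg.eigh(B)          # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))

# `coords` is an (n, 2) array: one 2-D position per release,
# at which the release artwork could be drawn.
```

t-SNE or UMAP would typically give more visually pleasing clusters for this kind of map, but they are iterative and stochastic; MDS is shown here only because it fits in a few lines.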
Thus, we can see the paths the ants have taken, via the deposition of their pheromones, and parameterise their exploration based on a user's input!
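The ant-based exploration might look something like this toy sketch. Everything here is hypothetical: the graph, its edge weights (standing in for feature distances), and the evaporation/deposit constants are invented for illustration, and a user-supplied weighting could scale the edge distances to bias the ants' exploration.

```python
import random

# Toy graph: edge weights stand in for feature distance between releases.
graph = {
    "start": {"a": 1.0, "b": 4.0},
    "a": {"goal": 1.0, "b": 1.0},
    "b": {"goal": 1.0},
    "goal": {},
}

# Every edge starts with the same amount of pheromone.
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def walk(rng):
    """One ant walks from start toward goal, choosing each edge with
    probability proportional to pheromone divided by distance."""
    node, path = "start", ["start"]
    while node != "goal":
        nbrs = [v for v in graph[node] if v not in path]
        if not nbrs:
            return None                      # dead end: ant gives up
        weights = [pheromone[(node, v)] / graph[node][v] for v in nbrs]
        node = rng.choices(nbrs, weights=weights)[0]
        path.append(node)
    return path

rng = random.Random(42)
for _ in range(50):
    path = walk(rng)
    if path:
        # Deposit pheromone inversely proportional to path length,
        # reinforcing shorter routes between collection item and recommendation.
        for u, v in zip(path, path[1:]):
            pheromone[(u, v)] += 1.0 / len(path)
    # Evaporation keeps old trails from dominating forever.
    for edge in pheromone:
        pheromone[edge] *= 0.95
```

After the loop, the pheromone levels themselves are the data to visualise: the heavily marked edges trace the routes the colony favoured between the start item and the recommended goal.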
The Final Blog Post
As this is my final blog post for the Natural Computing module, I will be reflecting on what I've learned and on what I would still like to learn that was not covered in class. The best things I've learned would likely be the wonders of dimensionality, optimisation and simulated collective intelligence. I am a scientist at the core, and the content has been very intellectually stimulating to me.
I would like to learn more about Natural Computing techniques as applied in the field of Machine Learning, how they interact with Data Science, and how they compare with widely popular models like Neural Networks. It has also come to my attention that Neural Networks are an attempt at simulating a natural model of cognition, and therefore could fit under the umbrella of Natural Computing. As humanity delves further into the depths of Artificial Intelligence, we will no doubt draw from nature as a bountiful, seemingly endless and wonderful source of inspiration.
And now on to some Natural Computing jokes:
- A fly walks into a bar. It checks if its best neighbour also walked into that bar. If so, all the flies drink at that bar. All the humans leave.
- What kind of ants are very learned? Pedants!
- What do you call an ant who can’t speak? A mutant (mute ant).
- How can you better understand genetics in cold weather? Put your codon! (source: http://web.pdx.edu/~newmanl/GeneticsJokes.html)
- Why did the chicken cross the road? Darwin: ‘It was the logical next step after coming down from the trees. He was the fittest chicken.’
Thanks for reading! This is the ninth and final part in a series of blogs for the Natural Computing module at Goldsmiths, University of London.