BSc Major Project in Music Computing Chapter 1 – Plan of Action

Welcome! This is the first blog post for my BSc Major Project. I’ll be outlining my aims and plan of action for the project, entitled Data & AI in Music Recommendation, Discovery and Exploration.

I have been researching popular applications such as Spotify, YouTube and Discogs to survey state-of-the-art approaches to AI/machine-learning-based Music Recommendation Systems (MRS) and assisted music exploration. I have also been researching newer and – in my opinion – underused methods in this field, such as Swarm Intelligence, to consider how I can offer a competitive alternative to the current state-of-the-art systems and add to the existing research [1, 2, 3].

There has been a great deal of prior research in MRS. Schedl et al., in their paper “Current Challenges and Visions in Music Recommender Systems Research”, state that “such systems are still far from being perfect and frequently produce unsatisfactory recommendations. This is partly because of the fact that users’ tastes and musical needs are highly dependent on a multitude of factors” [1]. I plan to address the issue of personalisation by using algorithms and methods that have a sound social and psychological basis. I am well aware that the variance in users’ tastes is of large statistical significance, so I will further evaluate my approach through iterative user testing after each implementation and alteration of the core recommendation system. One of my core questions is: what defines a ‘good’ recommendation? For my purposes, I will define and evaluate the success of the algorithm by the ratings users give the generated recommendations, using a 5-point ‘star’ rating system.
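The evaluation step above can be sketched very simply: each algorithm variant accumulates 1–5 star ratings from testers, and the mean rating is the success metric. The ratings below are invented, purely for illustration.

```python
# Hypothetical evaluation sketch: average 1-5 star ratings per algorithm variant.
def mean_rating(ratings):
    """Mean of a list of 1-5 star ratings; the success metric for one variant."""
    if not ratings:
        raise ValueError("no ratings collected")
    return sum(ratings) / len(ratings)

baseline_ratings = [3, 4, 2, 5, 3]   # illustrative ratings only
variant_ratings  = [4, 4, 5, 3, 5]

print(mean_rating(baseline_ratings))  # 3.4
print(mean_rating(variant_ratings))   # 4.2
```

Comparing means across iterations then shows whether an alteration to the core system actually improved recommendations for real users.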

I propose to create an MRS and, furthermore, an explorative web application that can be used by music enthusiasts, DJs and producers to find new, personalised recommendations. There is also an opportunity to create a UI in which the user can further explore the resultant data space from the MRS.

“What do you mean ‘resultant data space’?”

Dimensionality reduction needs to be applied in order to create a meaningful distance measure, whereby the closest items to the current item are the best recommendations. This is also very useful for the latter part of my project (if I have time), in which I will find the shortest paths between two tracks and visualise the resultant path to the user.
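The shortest-path idea can be sketched as a graph search over tracks, with edge weights given by distances in the reduced space. The track names, graph and weights below are invented placeholders; the real graph would come from the MRS data.

```python
import heapq

# Hypothetical track graph: edges weighted by distance in the reduced data
# space. Names and weights are illustrative only.
graph = {
    "Track A": [("Track B", 1.0), ("Track C", 4.0)],
    "Track B": [("Track A", 1.0), ("Track C", 1.5), ("Track D", 3.0)],
    "Track C": [("Track A", 4.0), ("Track B", 1.5), ("Track D", 1.0)],
    "Track D": [("Track B", 3.0), ("Track C", 1.0)],
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: returns (cost, path) between two tracks."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph[node]:
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

cost, path = shortest_path(graph, "Track A", "Track D")
print(path)  # ['Track A', 'Track B', 'Track C', 'Track D']
```

The returned path is exactly what would be drawn for the user: a chain of intermediate tracks bridging the start and destination.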

In the multi-dimensional hybrid data space (weighted compound features based on metadata, user data and possibly acoustic data), MDS, t-SNE and UMAP will be tested to see which performs best, judged by the quality of the recommendations. See Leon’s blog post (comparing UMAP, t-SNE and PCA) for more details.
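As a concrete sketch of one of these candidates, classical (Torgerson) MDS can be implemented in a few lines of NumPy: it double-centres the squared-distance matrix and takes the top eigenvectors. The feature matrix below is random, standing in for the real compound features.

```python
import numpy as np

# Minimal classical MDS sketch (one candidate alongside t-SNE/UMAP),
# reducing a toy feature matrix to 2-D. The data is random, illustrative only.
def classical_mds(X, n_components=2):
    """Project rows of X to n_components dims, preserving pairwise
    Euclidean distances as well as possible (classical/Torgerson MDS)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    n = sq.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                  # centring matrix
    B = -0.5 * J @ sq @ J                                # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)                       # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]          # pick the largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 8))   # 10 tracks, 8 compound features
Y = classical_mds(X)
print(Y.shape)  # (10, 2)
```

In practice t-SNE and UMAP would come from their respective libraries; the point here is that each method maps the same hybrid feature matrix to 2-D coordinates that the recommendation distances (and later the UI) operate on.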

One of the most important reasons for reducing a large number of dimensions is the so-called ‘Curse of Dimensionality’: as the dimensionality increases, the volume of the space grows so fast that the available data become sparse, and the Euclidean distances I will be relying on become less meaningful. A large portion of the work here is experimenting with feature weighting (which is largely dependent on user preferences) so that distances between tracks are most meaningful to each user.
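The feature-weighting idea can be sketched as a weighted Euclidean distance, where per-user weights stress some features over others. The feature names, values and weights below are purely illustrative.

```python
import math

# Hypothetical weighted distance: per-user weights emphasise the features
# that matter most to that user. All values here are illustrative.
def weighted_distance(a, b, weights):
    """Euclidean distance with a per-feature weight on each squared term."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

track_a = [0.8, 0.2, 0.5]       # e.g. [tempo, valence, energy], normalised
track_b = [0.6, 0.9, 0.4]
flat        = [1.0, 1.0, 1.0]   # no particular preference
tempo_heavy = [3.0, 0.5, 0.5]   # a user who cares most about tempo

print(weighted_distance(track_a, track_b, flat))
print(weighted_distance(track_a, track_b, tempo_heavy))
```

Tuning such weights per user (or learning them from ratings) is exactly the experiment described above: the same pair of tracks can be near or far depending on what that user cares about.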

Semantic Analysis using Doc2Vec/Word2Vec

I plan to create a small set of features based on semantics; for this I will use doc2vec/word2vec to extract a similarity measure between two tracks’ qualitative, semantic information. I will draw on lyric websites like Rapgenius and review/metadata sites such as AllMusic or MusicBrainz, scraping the text with Beautiful Soup or obtaining it through their APIs. This should be an effective way of describing where songs sit in a low-dimensional, semantic data space – most probably a more efficient feature-engineering method than using a multi-dimensional acoustic data space. With enough time, however, I will experiment with high-level acoustic features (e.g. danceability, tonality) to include in the overarching compound feature space.
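The similarity step of that pipeline can be sketched as follows. In the real system the document vectors would come from a trained Doc2Vec model (e.g. gensim); here simple token counts stand in for the learned vectors so the cosine-similarity step is runnable, and the lyric snippets are invented examples.

```python
from collections import Counter
import math

# Stand-in for Doc2Vec vectors: bag-of-words counts per lyric document.
# Cosine similarity is the same measure that would be applied to the
# learned Doc2Vec embeddings.
def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts of counts)."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

lyrics_a = Counter("love lost in the night city lights".split())
lyrics_b = Counter("city night lights and lost love".split())
lyrics_c = Counter("engine oil gearbox repair manual".split())

print(cosine(lyrics_a, lyrics_b))  # high: shared themes
print(cosine(lyrics_a, lyrics_c))  # zero: no shared vocabulary
```

Swapping the `Counter` vectors for trained Doc2Vec embeddings upgrades this from surface word overlap to genuine semantic similarity, which is the point of using doc2vec/word2vec in the first place.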

Running the System on a Server

To execute these processes I will run Python scripts on a server and serve the results as HTML to be analysed and visualised. For the duration of this project I will be working on my university’s server, Igor, which saves time and money over setting up AWS or similar services.
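A minimal sketch of the serving side, using only the standard library: a handler renders a recommendation list as HTML. The recommendation list and port are placeholders; the real results would come from the MRS scripts.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder results; the real list would come from the recommendation scripts.
RECOMMENDATIONS = ["Track B", "Track C", "Track D"]

def render_page(tracks):
    """Render an ordered list of recommended tracks as a simple HTML page."""
    items = "".join(f"<li>{t}</li>" for t in tracks)
    return f"<html><body><ol>{items}</ol></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_page(RECOMMENDATIONS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# To run on the server (port is illustrative):
#   HTTPServer(("", 8000), Handler).serve_forever()
```

A fuller web framework (e.g. Flask) may replace this later, but the shape stays the same: Python computes the recommendations, HTML carries them to the browser.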


[1] M. Schedl, H. Zamani, C. Chen, Y. Deldjoo, M. Elahi, “Current Challenges and Visions in Music Recommender Systems Research”, 2017.

[2] P. Covington, J. Adams, E. Sargin, “Deep Neural Networks for YouTube Recommendations”, in Proceedings of the 10th ACM Conference on Recommender Systems – RecSys ’16 (ACM Press, New York, NY, USA, 2016), pp. 191–198.


