Working with models in LightSide

Most of the exercises for this week were concerned with building models in LightSide and comparing their performance.

The first exercise dealt with using different feature spaces within the model and seeing how this affected its performance. The initial model, using unigrams, resulted in an accuracy of 75.9% and a kappa value of 0.518. This is OK, but would including bigrams and trigrams as features improve these results? They might, by providing further context for each word, thus reducing the number of incorrect predictions. Including these extra features did produce a slight improvement in the model – an accuracy of 76.5% and a kappa value of 0.530. However, increasing the number of features risks creating a model which overfits the data and can’t be applied to other data sets. To overcome this there is a Feature Selection tool, which only uses the most predictive features in the model (3,500 in this case). The result of using this select group of features was a statistically significant improvement in the quality of the model.
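LightSide does all of this through its interface, but the underlying idea – expanding unigram features into bigrams and trigrams, then keeping only a top-scoring subset – can be sketched in a few lines of plain Python. This is a toy illustration, not LightSide's actual algorithm: LightSide ranks features by how predictive they are of the label, whereas this sketch simply ranks by raw frequency.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_feature_counts(texts, max_n=3):
    """Count unigram, bigram and trigram features across all texts."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        for n in range(1, max_n + 1):
            counts.update(ngrams(tokens, n))
    return counts

def select_top_features(counts, k):
    """Keep only the k highest-scoring features. Here the score is raw
    frequency; LightSide's Feature Selection ranks by predictiveness."""
    return [feature for feature, _ in counts.most_common(k)]

texts = ["the cat sat", "the dog sat down"]
counts = build_feature_counts(texts)
top = select_top_features(counts, 5)
```

The same trade-off applies here as in the exercise: every extra n-gram order multiplies the size of the feature space, which is exactly why a selection step is needed before the model starts memorising noise.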

 

Getting on the right side of LightSide

As I was watching one of the text mining lecture videos this morning, I experienced a “lightbulb moment” with regards to using LightSide. Up until now I didn’t think that LightSide would be useful for my Twitter research project, as I wasn’t interested in building models; I just wanted to analyse the content of the tweets. However, I now realise that I don’t need to use the model-building features of LightSide for my Twitter data – I can just use it to extract features and get a count of the number of times each word (or group of words) appears in all the tweets. This is the type of analysis that I’m interested in. I was really pleased to have found a tool to help me with this part of the data analysis.
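For the word-level version of that count, `collections.Counter` from the Python standard library does the same tally directly. A minimal sketch, using two made-up tweets in place of the collected data set:

```python
from collections import Counter

# Two toy tweets standing in for the collected data set.
tweets = [
    "learning analytics is fun",
    "analytics tools for learning",
]

# Tally how often each word appears across all the tweets.
word_counts = Counter(
    word for tweet in tweets for word in tweet.lower().split()
)

print(word_counts["analytics"])  # → 2
```

LightSide adds value on top of this by handling groups of words (n-grams) and other feature types without any extra code.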

I couldn’t wait to get home and try using LightSide on some of the tweets that I’ve already collected. I had to do a bit of a clean-up of the Excel file to make it ready to import into LightSide, but once that was done everything worked fine. The image below shows the LightSide workspace once I’d extracted the features.

[Image: Tweets in LightSide]

Once I had the Feature Table prepared, I exported it as a .csv file and was able to use the SUM function in Excel to quickly tally the occurrences of each term. I’m going to play around with LightSide a bit more to explore the other features that can be extracted, but I’m pretty sure that it can do exactly what I need it to do. Time to crunch some data!
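The Excel SUM step could equally be scripted. This sketch assumes the exported feature table has a header row of feature names and one row of counts per tweet – the real export may also include a text or label column, which the code simply skips over:

```python
import csv
import io

def column_totals(csv_text):
    """Sum each column of a feature table exported as CSV.
    Assumed layout: a header row of feature names, then one row of
    counts per tweet; non-numeric cells (e.g. a text column) are skipped."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    totals = [0] * len(header)
    for row in reader:
        for i, cell in enumerate(row):
            try:
                totals[i] += int(cell)
            except ValueError:
                pass  # not a count – leave this column's total alone
    return dict(zip(header, totals))

# A tiny stand-in for the exported feature table.
sample = "great,course,mooc\n1,0,2\n0,1,1\n"
print(column_totals(sample))  # → {'great': 1, 'course': 1, 'mooc': 3}
```

For a real file, `open("features.csv")` can be passed to `csv.reader` in place of the `StringIO` wrapper.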

Conducting text mining with LightSide

The next topic in the Data, Analytics and Learning MOOC is text mining, which I’ll explain further in my next post. We were introduced to the last software tool that we’ll be using in the course – LightSide (the Star Wars fan in me is wondering if there’s a competing program called DarkSide which does the opposite of LightSide). It seems fairly simple to use, and I managed to get the correct answer for the exercise we were given:

[Image: LightSide exercise]

I’m still not 100% sure if text mining is going to be useful for the Twitter analysis project I’m working on. I think I need a tool which will categorise the data, rather than try to build predictive models of it. Anyway, it’s always good to learn about a new tool – you never know when it might come in handy.