And that’s a wrap!

As learning analytics has emerged as a discipline over the last few years, several organisations have been founded with the aim of conducting research in the field, as well as bringing together professionals to discuss the latest developments.

Overall I found the DALMOOC interesting, and I was certainly introduced to tools and ideas that I can use during my research project. Here is my first ever attempt at a concept map of what was covered in the course:

[Concept map: DALMOOC]

I didn’t really engage with the social learning aspects of the course – I preferred to work through the edX platform in the traditional way. I’ve always been a bit wary of putting my work out there for my peers to assess, which is why I stuck with edX. As far as the structure is concerned, I did find it a little disorienting in the first week, but I soon got the hang of it. I didn’t get much out of the Hangouts – I was expecting them to be a bit more interactive and allow some participation from students, rather than only involving the instructors.

As a complete newbie to learning analytics I found the content manageable and fairly easy to understand. The exception was the unit on prediction modelling and behaviour detection in weeks 5 and 6. I found it all quite technical and confusing, and I didn’t complete any of the assessments during those weeks. It was good to be exposed to it, but I don’t think it’s an area that I’ll be using in my small research project. The tools we were introduced to in the DALMOOC were pretty easy to use, and I can see that I’ll find Tableau, Gephi and LightSide useful in my Twitter research, at least at a basic level. On a side note, it was nice to see the work of researchers at other Australian universities, such as Shane Dawson and Lori Lockyer, mentioned during the course.

The DALMOOC has given me a taste of what’s involved in working with learning analytics, and the tools and techniques that are available. There are certainly opportunities for libraries to get involved and make use of the data that our systems produce.

Now we’re on to text mining

Text mining is the next type of data analysis that we’re looking at in the Data, Analytics and Learning MOOC. I’m looking forward to the next couple of weeks, as I think that some of these tools and techniques might be useful for my research project, which is based on analysing tweets. Text mining is all about finding patterns in large collections of text, and using those patterns as a basis for identifying data that is worth investigating further. It’s this ability to find patterns in textual data that interests me, as that’s the vision I’ve got for my Twitter research project.

One of the subareas of text mining is analysing the collaborative learning process that occurs in online courses via the discussion forums. This analysis involves modelling conversational interactions between students, and using those models to find out what it is about conversations that makes them valuable for online learning. Based on this understanding it’s then possible to design interventions to support learning in online settings. Analysing conversations in online courses draws on knowledge from a number of fields, such as education, psychology, and sociolinguistics. This knowledge is used to determine the cognitive processes associated with collaborative learning, investigate what conversational interactions look like, and build models of how psychological signals are revealed through language. All this ultimately allows the development of models showing where these cognitive processes are happening during interactions.

An example of how these models can be used in learning analytics research is investigating reasons for attrition in MOOCs. The models are based on analysis of the posts in discussion forums, both from the point of view of individual students and from the overall tone of individual threads. The positivity or negativity of the posts and threads is calculated, and survival modelling is then carried out to estimate the probability that a student will have dropped out of the course by the following week.
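The lectures described this at a conceptual level only, so to get it straight in my own head, here’s a minimal sketch of how I imagine the survival modelling step might look in Python, using the lifelines library. All of the data and column names are made up for illustration:

```python
# A rough sketch of the attrition-modelling idea: score each student's forum
# posts for sentiment, then fit a survival model to estimate the hazard of
# dropping out. All data and column names here are invented.
import pandas as pd
from lifelines import CoxPHFitter

# One row per student: how many weeks they lasted, whether they dropped out
# (survivors are censored at week 8), and the average sentiment of their
# posts on a -1 (negative) to +1 (positive) scale.
df = pd.DataFrame({
    "weeks_active":   [2, 8, 3, 8, 5, 1, 8, 4],
    "dropped_out":    [1, 0, 1, 0, 1, 1, 0, 1],
    "mean_sentiment": [-0.6, 0.4, 0.2, 0.7, -0.4, -0.8, -0.2, -0.1],
})

# Fit a Cox proportional-hazards model: does post sentiment predict the
# chance of dropping out in a given week?
cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_active", event_col="dropped_out")
cph.print_summary()  # a negative coefficient on mean_sentiment would mean
                     # more positive posts go with a lower dropout hazard
```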

This sort of detailed modeling is out of scope for my research project, but some of the aspects of conversation analysis could be useful, as many of the interactions between Twitter users could be characterised as conversations. At this stage I think I’ll be learning some useful stuff over the next couple of weeks.

Working with models in LightSide

Most of the exercises for this week were concerned with building models in LightSide and comparing their performance.

The first exercise dealt with using different feature spaces within the model and seeing how this affected its performance. The initial model, using unigrams, resulted in an accuracy of 75.9% and a kappa value of 0.518. This is OK, but would including bigrams and trigrams as features improve these results? They might, by providing further context for each word, thus reducing the number of incorrect predictions. Including these extra features gave a slight improvement in the model – an accuracy of 76.5% and a kappa value of 0.530. However, increasing the number of features risks creating a model which overfits the data and can’t be applied to other data sets. To overcome this, LightSide has a feature selection tool, which restricts the model to the most predictive features (3,500 in this case). Using this select group of features produced a statistically significant improvement in the quality of the model.
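LightSide does all of this through its interface, but to convince myself I understood what was happening, here’s a rough approximation of the same comparison in Python with scikit-learn. The tiny data set is invented, and this is only my guess at the mechanics, not what LightSide actually runs internally:

```python
# A rough re-creation of the LightSide exercise: compare a unigram feature
# space with unigrams+bigrams+trigrams, with and without feature selection,
# scoring each model by accuracy and kappa. The data set is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

texts = [
    "great course really enjoyed the examples",
    "loved the clear and useful videos",
    "fantastic week very helpful content",
    "good pace and helpful exercises",
    "terrible week totally confusing content",
    "awful videos not at all useful",
    "confusing exercises and poor pace",
    "boring week hard to follow",
]
labels = ["pos", "pos", "pos", "pos", "neg", "neg", "neg", "neg"]

def evaluate(ngram_range, k_best=None):
    steps = [CountVectorizer(ngram_range=ngram_range)]
    if k_best is not None:
        # keep only the k most predictive features, to limit overfitting
        steps.append(SelectKBest(chi2, k=k_best))
    steps.append(LogisticRegression(max_iter=1000))
    preds = cross_val_predict(make_pipeline(*steps), texts, labels, cv=4)
    return accuracy_score(labels, preds), cohen_kappa_score(labels, preds)

print(evaluate((1, 1)))             # unigrams only
print(evaluate((1, 3)))             # unigrams + bigrams + trigrams
print(evaluate((1, 3), k_best=20))  # top features only (3,500 in the exercise)
```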


Getting on the right side of LightSide

As I was watching one of the text mining lecture videos this morning, I experienced a “lightbulb moment” with regards to using LightSide. Up until now I didn’t think that LightSide would be useful for my Twitter research project, as I wasn’t interested in building models, I just wanted to analyse the content of the tweets. However, I now realise that I don’t need to use the model-building features of LightSide for my Twitter data; I can just use it to extract features and get a count of the number of times each word (or group of words) appears across all the tweets. This is exactly the type of analysis that I’m interested in, and I’m really pleased to have found a tool to help me with this part of the data analysis.

I couldn’t wait to get home and try using LightSide on some of the tweets that I’ve already collected. I had to do a bit of a clean-up of the Excel file to make it ready to import into LightSide, but once that was done everything worked fine. The image below shows the LightSide workspace once I’d extracted the features.

[Screenshot: tweets in LightSide]

Once I had the Feature Table prepared, I exported it as a .csv file and used the SUM function in Excel to quickly tally the occurrences of each term. I’m going to play around with LightSide a bit more to explore the other features that can be extracted, but I’m pretty sure that it can do exactly what I need it to do. Time to crunch some data!
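For anyone who’d rather skip the Excel step, here’s a little Python sketch of the same tally. The example tweets are invented; the real input would be the collected tweets rather than these placeholders:

```python
# A quick stand-in for the LightSide-plus-Excel workflow: extract unigram
# counts from the tweets and total up how often each term appears overall.
# The example tweets are invented.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "great keynote on learning analytics this morning",
    "learning analytics session starting now",
    "so many ideas about data in libraries today",
]

vectorizer = CountVectorizer()             # basic unigram feature extraction
counts = vectorizer.fit_transform(tweets)  # rows = tweets, columns = terms

# Sum each column to get the total occurrences of every term, then rank them.
totals = pd.Series(
    counts.sum(axis=0).A1,                 # .A1 flattens the row-sum to 1-D
    index=vectorizer.get_feature_names_out(),
).sort_values(ascending=False)
print(totals.head(10))
```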

Getting my head around text mining

In response to a tweet from one of the instructors of the Data, Analytics and Learning MOOC, I wanted to try and unpack how I think I can use text mining for my Twitter research project.

As a complete novice at this whole text mining caper, I’m still coming to terms with all the concepts behind it. To answer Carolyn’s question, I guess I see classification models working like this:

1. Take a subset of the data, and classify each item in the subset (e.g. individual tweets) by hand. The classification scheme I’m thinking of would have categories such as “administrative”, “presentation summary”, and “marketing”.

2. Build a model which will take this subset and learn the characteristics of the tweets which are in each category.

3. Apply the model to the remaining unclassified data so that each tweet is assigned the correct classification. (There’s a rough sketch of this workflow in code below.)
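To make those three steps concrete for myself, here’s a minimal sketch in Python with scikit-learn. The tweets, labels and choice of model are all placeholders, not the actual data or method:

```python
# A minimal sketch of the three steps: hand-label a subset, train a model on
# it, then classify the remaining tweets. All data here is invented, and the
# model is just one plausible choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: a hand-classified subset of the tweets.
labelled_tweets = [
    "Registration desk opens at 8am",
    "Speaker argues libraries must embrace data",
    "Visit our stand to win a prize",
    "Wifi details are on the back of your badge",
]
labels = ["administrative", "presentation summary",
          "marketing", "administrative"]

# Step 2: learn the characteristics of the tweets in each category.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(labelled_tweets, labels)

# Step 3: apply the model to the remaining unclassified tweets.
unlabelled_tweets = ["Doors open at 9am for the first session"]
print(model.predict(unlabelled_tweets))
```

Of course, whether the predictions in step 3 are accurate enough to rely on is another question entirely, which is exactly my worry below.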

My take on predictive models is similar, I suppose, but I see them as more theoretical than practical. I guess by their nature models are theoretical, but the application of the models is still something I’m not sure about. The basic premise of training the model and then applying it to the data is the same, but I haven’t yet seen what happens at the end of the process, i.e. the predictive side of things. This may be covered in next week’s content, so hopefully I’ll have a better understanding of the process then.

From the perspective of the Twitter analysis project that I’m working on, I don’t think the text mining tools will do what I need them to do. My aim is to categorise all the tweets that I’ve collected, based on their content, and the categorisation needs to be 100% accurate so that I get a true picture of what was tweeted about. Perhaps I might do a bit of playing around with LightSide as part of the data analysis, but I won’t be relying on it to categorise all the tweets.

This whole course has been a great introduction to data analysis and mining, and the tools that are available. I’ll be keeping an eye out for future projects where I can put them to use.

Conducting text mining with LightSide

The next topic in the Data, Analytics and Learning MOOC is text mining, which I’ll explain further in my next post. We were introduced to the last software tool that we’ll be using in the course – LightSide (the Star Wars fan in me is wondering if there’s a competing program called DarkSide which does the opposite of LightSide). It seems fairly simple to use, and I managed to get the correct answer for the exercise we were given:

[Screenshot: LightSide exercise]

I’m still not 100% sure if text mining is going to be useful for the Twitter analysis project I’m working on. I think I need a tool which will categorise the data, rather than try to build predictive models of it. Anyway, it’s always good to learn about a new tool – you never know when it might come in handy.