And that’s a wrap!

As learning analytics has emerged as a discipline over the last few years, several organisations have been founded with the aim of conducting research in the field as well as bringing together professionals to discuss the latest developments.

Overall I found the DALMOOC interesting, and I was certainly introduced to tools and ideas that I can use during my research project. Here is my first ever attempt at a concept map for what was covered in the course:

[Image: DALMOOC concept map]

I didn’t really engage with the social learning aspects of the course – I preferred to work through the edX platform in the traditional way. I’ve always been a bit wary and nervous of putting my work out there for my peers to assess, so that’s why I stuck with edX. As far as the structure is concerned, I did find it a little disorienting in the first week, but soon got the hang of it. I didn’t really get much out of the Hangouts – I was expecting that they were going to be a bit more interactive and allow some participation from students, rather than only having the instructors involved.

As a complete newbie to learning analytics I found the content manageable and fairly easy to understand. The exception to this was the unit on prediction modeling and behaviour detection in weeks 5 and 6. I found it all quite technical and confusing, and I didn’t complete any of the assessments during those weeks. It was nice to be exposed to it, but I don’t think it’s an area that I’ll be using in my small research project. The tools that we were introduced to in the DALMOOC were pretty easy to use, and I can see that I’ll find Tableau, Gephi and LightSide useful in my Twitter research, at least at a basic level. On a side note, it was nice to see that the work of researchers at other Australian universities was mentioned during the course, e.g. Shane Dawson and Lori Lockyer.

The DALMOOC has given me a taste of what’s involved in working with learning analytics, and the tools and techniques that are available. There are certainly opportunities for libraries to get involved and make use of the data that our systems produce.

Now we’re on to text mining

Text mining is the next type of data analysis that we’re looking at in the Data, Analytics and Learning MOOC. I’m looking forward to the next couple of weeks, as I think that some of these tools and techniques might be useful for my research project, which is based on analysing tweets. Text mining is all about trying to find patterns in large collections of text, and using these patterns as a basis for identifying data that is worth investigating further. It’s this finding patterns in textual data which interests me, as that’s the vision that I’ve got for my Twitter research project.
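To make "finding patterns in large collections of text" a bit more concrete, here is a minimal sketch of the idea: tally the word pairs (bigrams) that recur across a collection of tweets, so the most frequent ones surface as candidates for closer investigation. The sample tweets here are invented for illustration; a real dataset would be far larger.

```python
from collections import Counter
import re

# A few hypothetical tweets standing in for a collected dataset
tweets = [
    "Learning analytics is changing higher education",
    "Great session on learning analytics at the conference",
    "Text mining helps find patterns in learning analytics data",
]

def ngrams(text, n):
    """Split text into lowercase word n-grams."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Tally bigrams across the whole collection to surface recurring phrases
bigram_counts = Counter(bg for t in tweets for bg in ngrams(t, 2))
print(bigram_counts.most_common(3))
```

Here "learning analytics" appears in all three tweets, so it tops the tally — exactly the kind of pattern that would flag a phrase as worth a closer look.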

One of the subareas of text mining is analysing the collaborative learning process that occurs in online courses via the discussion forums. This analysis involves modelling conversational interactions between students, and using those models to find out what it is about conversations that make them valuable for online learning. Based on this understanding it’s then possible to design interventions to support learning in online settings. Analysing conversations in online courses draws on knowledge from a number of fields, such as education, psychology, and sociolinguistics. This knowledge is used to determine the cognitive processes associated with collaborative learning, investigate what conversational interactions look like, and build models of how psychological signals are revealed through language. All this ultimately allows the development of models showing where processes are happening during interactions.

An example of how these models can be used in learning analytics research is investigating reasons for attrition in MOOCs. The models are based on the analysis of the posts in discussion forums, both from the point of view of individual students and from the overall tone of individual threads. The negativity and positivity of the posts and threads is calculated, and then survival modeling is carried out to determine the probability that a student will have dropped out of the course by the following week.
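The first step of that pipeline, scoring the positivity and negativity of posts, can be sketched very simply. The word lists and posts below are made up for illustration; real research would use a validated sentiment lexicon, and the per-student scores would then feed into a survival model rather than being the end result.

```python
# Tiny hand-made sentiment word lists (a real study would use a lexicon)
POSITIVE = {"great", "helpful", "thanks", "clear"}
NEGATIVE = {"confusing", "frustrated", "lost", "unclear"}

# Hypothetical forum posts tagged with student and thread identifiers
posts = [
    {"student": "s1", "thread": "t1", "text": "great explanation thanks"},
    {"student": "s2", "thread": "t1", "text": "totally confusing and frustrated"},
    {"student": "s2", "thread": "t2", "text": "still lost this week"},
]

def score(text):
    """Positive words minus negative words in a post."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Aggregate post scores per student
by_student = {}
for p in posts:
    by_student.setdefault(p["student"], []).append(score(p["text"]))

# Mean sentiment per student: a persistently negative average is one
# signal a survival model could use when estimating dropout risk
mean_sentiment = {s: sum(v) / len(v) for s, v in by_student.items()}
print(mean_sentiment)
```

The same aggregation could be done per thread instead of per student, matching the two viewpoints described above.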

This sort of detailed modeling is out of scope for my research project, but some of the aspects of conversation analysis could be useful, as many of the interactions between Twitter users could be characterised as conversations. At this stage I think I’ll be learning some useful stuff over the next couple of weeks.

Working with models in LightSide

Most of the exercises for this week were concerned with building models in LightSide and comparing their performance.

The first exercise dealt with using different feature spaces within the model and seeing how this affected their performance. The initial model, using unigrams, resulted in an accuracy of 75.9% and a kappa value of 0.518. This is OK, but would including bigrams and trigrams as features improve these results? They might, by providing further context for each word, thus reducing the number of incorrect predictions. By including these extra features, there was a slight improvement in the model – an accuracy of 76.5% and a kappa value of 0.530. However, by increasing the number of features there is a risk of creating a model which overfits the data, and can’t be applied to other data sets. To overcome this there is a Feature Selection tool, which only uses the 3,500 (in this case) most predictive features in the model. The result of using this select group of features was a statistically significant improvement in the quality of the model.
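The trade-off in that exercise — richer feature spaces give the model more context but also more chances to overfit — can be illustrated outside LightSide. This sketch uses invented toy documents; it extracts unigram-only versus unigram-through-trigram feature spaces, then keeps only the top-k features, standing in for LightSide's Feature Selection tool (which ranks by predictiveness, whereas plain frequency is used here for simplicity).

```python
from collections import Counter
import re

# Toy documents standing in for the training data used in LightSide
docs = [
    "the lecture was really clear",
    "the lecture was really confusing",
    "clear notes but confusing slides",
]

def extract(text, max_n):
    """Extract all n-grams of length 1..max_n as features."""
    tokens = re.findall(r"[a-z]+", text.lower())
    feats = []
    for n in range(1, max_n + 1):
        feats += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return feats

unigrams = Counter(f for d in docs for f in extract(d, 1))
trigrams = Counter(f for d in docs for f in extract(d, 3))

# Adding bigrams and trigrams enlarges the feature space considerably...
print(len(unigrams), len(trigrams))

# ...so, like LightSide's Feature Selection tool, keep only the k
# highest-ranked features to reduce the risk of overfitting
k = 5
selected = [f for f, _ in trigrams.most_common(k)]
print(selected)
```

Even on three five-word documents the feature space nearly triples once bigrams and trigrams are included, which is why selecting the 3,500 most predictive features helped the real model generalise.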


Getting on the right side of LightSide

As I was watching one of the text mining lecture videos this morning, I experienced a “lightbulb moment” with regards to using LightSide. Up until now I didn’t think that LightSide would be useful for my Twitter research project, as I wasn’t interested in building models, I just wanted to analyse the content of the tweets. However, I now realise that I don’t need to use the model-building features of LightSide for my Twitter data, I can just use it to extract features to get a count of the number of times each word (or group of words) appears in all the tweets. This is the type of analysis that I’m interested in. I was really pleased that I’ve managed to find a tool to help me with this part of the data analysis.

I couldn’t wait to get home and try using LightSide on some of the tweets that I’ve already collected. I had to do a bit of a clean-up of the Excel file to make it ready to import into LightSide, but once that was done everything worked fine. The image below shows the LightSide workspace once I’d extracted the features.

[Image: Tweets in LightSide]

Once I had the Feature Table prepared, I exported it as a .csv file, and was able to use the Sum feature in Excel to quickly tally the occurrence of each term. I’m going to play around with LightSide a bit more to explore the other features that can be extracted, but I’m pretty sure that it can do exactly what I need it to do. Time to crunch some data!
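For anyone who would rather skip the Excel step, the same tally can be done directly on the exported .csv. The feature table below is a made-up miniature of the kind of file LightSide exports (one row per tweet, one column per term, cells holding occurrence counts); summing each column is the equivalent of Excel's SUM.

```python
import csv
import io

# A tiny stand-in for an exported LightSide feature table
feature_csv = """tweet_id,library,analytics,data
t1,1,0,2
t2,0,1,1
t3,2,1,0
"""

# Sum each term column across all tweets (the id column is skipped)
totals = {}
for row in csv.DictReader(io.StringIO(feature_csv)):
    for term, count in row.items():
        if term != "tweet_id":
            totals[term] = totals.get(term, 0) + int(count)

print(totals)
```

Reading a real export would just mean swapping the `io.StringIO(...)` for `open("features.csv")`.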

The Learning Analytics data cycle

There are several steps to the learning analytics (LA) data cycle. These include:

  • Collection and Acquisition: data is collected and acquired from one, or several, sources.
  • Storage: data is stored so it can be worked on. This storage may be located within the system which is used to produce the data, or the data may need to be exported and stored elsewhere.
  • Cleaning: there will usually be a need for some cleaning of the data so that it is in a format which can be used by the analysis software. This will be especially true if the data has been collected from a range of different sources, as each source will have its own data format.
  • Integration: if data is collected from multiple sources, it needs to be integrated into a single file so that it can be analysed.
  • Analysis: a software package is used to analyse the data to produce statistics about it.
  • Representation and Visualisation: in order to make the results of the data analysis easier to understand, they need to be represented and visualised in some way e.g. as a graph or chart, or a network diagram.
  • Action: finally, some action should be taken on the basis of the results of the data analysis. There is no point in initiating this LA data cycle if there is not going to be an action at the end of it.
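The steps above can be sketched as a pipeline of small functions, one per stage. Everything here is invented for illustration (a hard-coded "catalogue" and "proxy log" stand in for real sources), and the analysis stage is just a placeholder statistic, but the shape of the cycle is the point.

```python
def collect():
    # Collection/Acquisition: pull records from the sources
    # (hard-coded here; really these would be system exports)
    catalogue = [{"student": "s1", "loans": "4"}, {"student": "s2", "loans": "x"}]
    proxy = [{"student": "s1", "db_hits": 10}, {"student": "s2", "db_hits": 3}]
    return catalogue, proxy

def clean(catalogue):
    # Cleaning: coerce loan counts to integers, dropping unusable rows
    return [dict(r, loans=int(r["loans"])) for r in catalogue if r["loans"].isdigit()]

def integrate(catalogue, proxy):
    # Integration: join the two sources on the student identifier
    hits = {r["student"]: r["db_hits"] for r in proxy}
    return [dict(r, db_hits=hits.get(r["student"], 0)) for r in catalogue]

def analyse(rows):
    # Analysis: a single summary statistic as a placeholder
    return sum(r["loans"] + r["db_hits"] for r in rows) / len(rows)

catalogue, proxy = collect()
merged = integrate(clean(catalogue), proxy)
print(analyse(merged))
# Representation/Visualisation and Action would follow from here
```

Storage sits between collection and cleaning in practice; it is omitted here only because the toy data lives in memory.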

Although LA have traditionally been used by departments other than the library, there are library systems which could produce data which could be analysed using this cycle. We can collect data about loans (from our catalogue), database access (from proxy server logs), and website usage. Librarians are very good at collecting data and statistics about our patrons and collections, but often there is no particular reason for collecting them. LA ties nicely into the philosophy of Evidence Based Library and Information Practice (EBLIP), which is defined as:

Evidence based librarianship (EBL) is an approach to information science that promotes the collection, interpretation, and integration of valid, important and applicable user reported, librarian observed, and research derived evidence. The best available evidence moderated by user needs and preferences is applied to improve the quality of professional judgments.

Booth, A. (2002). From EBM to EBL: Two steps forward or one step back? Medical Reference Services Quarterly, 21(3), 51-64. doi: 10.1300/J115v21n03_04

By using an approach similar to the LA data cycle, it’s possible for librarians to collect the evidence that they can use to improve existing services or develop new ones.

Before LA are used at an institution, there needs to be consideration of policies and planning around it. There should be policies dealing with the ethical collection and use of the data, as well as a clear outline of how the results of the data analysis will be used to improve the learning experience. LA is nicely suited to be part of the quality and evaluation system within an institution, and the LA cycle could be incorporated into a continual improvement process.

As LA can potentially use data from a range of units from across the university, there needs to be some strategic planning around how it will be implemented and used. The results of LA data analysis could be used to inform changes to teaching practice, and these changes need to have a sound planning framework associated with them. Strategic planning could also help mitigate the “bright and shiny syndrome”, where institutions rush to embrace the latest new technology without a plan for how it will be used. LA is a powerful tool for providing insight into the learner experience, but it should not be relied on as the sole driver for change.

What are learning analytics, and what can we learn from them?

Learning analytics (LA) are certainly becoming a hot topic within the education sector. There are conferences, societies and journals where new developments in the LA field are discussed and developed. But what are LA, and how can academic libraries use them to learn more about our users?

The goal of LA is to use the data generated by the various systems on campus to improve the teaching and learning experience for students. It’s about bringing together the data from these disparate systems, e.g. the Learning Management System (LMS) and the student administration system, to look for patterns and trends. Once these patterns and trends have been identified, they can be used to inform changes to teaching practices to assist students. Traditionally LA have been used by departments other than the library, as their systems can provide more information about a student’s progress and background. The LMS, for example, is a rich source of data on student behaviour during a semester. However, data from library systems can be combined with data from other systems on campus to make use of LA. For example, library staff at Curtin University combined the data from library systems and the campus student administration system to “explore if an association between library use and student retention is evident”. As they describe in their paper, they found that “[a]lthough student retention was associated with high levels of library use generally, it was the finding that use of electronic Library resources early in the semester appears to lead to an improved likelihood of remaining enrolled that is most useful.”
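The first, descriptive step of a study like the Curtin one can be sketched as a simple group comparison: pair each student's early-semester e-resource use (from library systems) with their enrolment status (from the student administration system) and compare retention rates. The records below are entirely hypothetical, and a real study would go on to test the association statistically rather than stop at raw rates.

```python
# Hypothetical linked records: library e-resource use vs retention
students = [
    {"id": 1, "early_eresource_use": True,  "retained": True},
    {"id": 2, "early_eresource_use": True,  "retained": True},
    {"id": 3, "early_eresource_use": True,  "retained": False},
    {"id": 4, "early_eresource_use": False, "retained": True},
    {"id": 5, "early_eresource_use": False, "retained": False},
    {"id": 6, "early_eresource_use": False, "retained": False},
]

def retention_rate(rows):
    """Fraction of students in the group who remained enrolled."""
    return sum(r["retained"] for r in rows) / len(rows)

users = [s for s in students if s["early_eresource_use"]]
non_users = [s for s in students if not s["early_eresource_use"]]

# A higher rate among early e-resource users would be the descriptive
# pattern that motivates a proper statistical analysis
print(retention_rate(users), retention_rate(non_users))
```

The hard part in practice is the joining itself: linking library logs to administration records requires a shared student identifier and careful attention to privacy.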

Another potential use of LA by academic libraries is to investigate whether embedding library content in the LMS can be linked to student performance. Increasingly librarians are collaborating with teaching staff to include library content directly in the LMS for individual units or subjects. It should be possible to examine the data produced by the LMS which shows how many times a link to library content is clicked on, and see if students who access library resources regularly achieve better results than those students who don’t use these resources.

I think there is great potential for libraries to use the data that our systems produce to try and learn more about our students, and try and improve their learning experience. It will not be an easy process, as there are institutional barriers which need to be overcome. I’ll discuss these in a future post.