Help Me Understand The Buzz Around Learning Analytics

I'm trying to understand the buzz behind Learning Analytics.

The syllabus for the Learning Analytics Online Course contains some useful links and resources that provide background on the subject. According to the syllabus, the course lays "a foundation for the upcoming 1st International Learning Analytics and Knowledge Conference held in February, 2011 in Banff, Canada."

From the syllabus

The growth of data surpasses the ability of organizations or individuals to make sense of it.

This isn't really new. It also raises the question: which data points (i.e., which individual human behaviors) are actually valuable to measure? And are the same behaviors predictably and/or uniformly valuable to all individuals across all contexts? I suspect that the answer is "no."

Also from the syllabus

In an age where educational institutions are under growing pressure to reduce costs and increase efficiency, analytics promises to be an important lens through which to view and plan for change at course and institutions levels.

[Image: "Crow"]

Money is certainly tight everywhere. However, the claim that crunching data will improve the process of learning to the point where it saves a significant amount of money and time feels analogous to a personal jet pack: it could happen eventually, but claims of a straight line from where we are now to that promised land seem hyperbolic. What specific data points are needed to actually make this valuable to a school or company? More importantly, why should learning require a significant loss of privacy, as embodied by a third party collecting, storing, and analyzing a person's behavior over weeks, months, or even years?

As a side note, it's interesting that Google is probably in the best position to do this type of analysis, as school districts, universities, governments, and companies are all tripping over each other in their rush to throw their user data into the data abyss that is Google Apps.

From Mining Social Networks: Untangling the Social Web

Of course, companies have long mined their data to improve sales and productivity. But broadening data mining to include analysis of social networks makes new things possible. Modelling social relationships is akin to creating an “index of power”, says Stephen Borgatti, a network-analysis expert at the University of Kentucky in Lexington. In some companies, e-mails are analysed automatically to help bosses manage their workers. Employees who are often asked for advice may be good candidates for promotion, for example.

In this use of data mining, the analysis helps highlight individuals who are, in some way, effective. I'm assuming that the learning/productivity patterns of these individuals can then be mined and compared to discover common patterns. However, the level of detail required for meaningful analysis could quickly run up against privacy concerns. From the perspective of a company, knowing what their top performers are doing every minute of every day would be valuable information, but I suspect that the level of scrutiny needed to make this data valuable would be pretty intolerable to the people being scrutinized.
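As a rough illustration only (the names, edges, and choice of library here are mine, not anything from the article), the "index of power" idea can be approximated with simple in-degree centrality over an advice network, where each directed edge records that one person asked another for advice:

    # A minimal sketch of the "index of power" idea: score people in an
    # advice network by how often they are asked for advice (in-degree).
    # Names and edges are invented; a real analysis would build this graph
    # from email or messaging metadata.
    import networkx as nx

    # Each edge (a, b) means "a asked b for advice".
    advice_requests = [
        ("ana", "raj"), ("bill", "raj"), ("carla", "raj"),
        ("raj", "dee"), ("bill", "dee"), ("ana", "bill"),
    ]

    G = nx.DiGraph()
    G.add_edges_from(advice_requests)

    # In-degree centrality: the fraction of other people who come to you for advice.
    scores = nx.in_degree_centrality(G)

    for person, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{person:>6}: {score:.2f}")

Even in this toy form, the raw material is fine-grained interpersonal communication, which is exactly where the privacy concerns above start to bite.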

It's also worth noting that the information returned by this initial analysis is a person, which has some interesting implications for teacher professional development:

"Well, Ms. Jones, your students all did well on their tests, parents report positive interactions, your classroom observations were flawless, but your social graph was quiet between November and January."

Learner Control

Leigh Blackall encapsulates some of what I'm thinking about learner control in this comment:

When the method for analysis is usable, it will complement existing processes inside formal education, such as Recognition of Prior Learning (RPL) or assessment of prior learning (APL), which theoretically help people find accelerated pathways through curriculum. This really only works where assessment is standardised, such as the national unit standards used in Australia or New Zealand. HE resists this standardisation, making RPL and APL impossibly inefficient.

The ultimate would be a method that assists people to do their own LA and develop evidence for arguing for more precise RPL or APL.

I imagine this tool as a browser add-on, something that could collect, track, and save communicative and informative activity.

Learning Analytics, at least from the definitions I have been able to find, makes the most sense for and is most useful to organizations. Learner control, and ensuring that learners reap the benefit of being hyper-observed, deserves greater consideration.
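To make the learner-control point slightly more concrete, here is a purely hypothetical sketch (nothing like this is specified by Blackall or by Learning Analytics generally) of the simplest possible learner-owned log: activity records appended to a local file that the learner controls and can choose to share when arguing for RPL/APL credit.

    # A hypothetical sketch of learner-owned activity logging: records stay
    # in a local file under the learner's control, to be shared (or not)
    # when making a case for credit. Fields and example events are invented.
    import json
    import time
    from pathlib import Path

    LOG_PATH = Path("my_learning_log.jsonl")  # one JSON record per line

    def log_activity(kind: str, resource: str, note: str = "") -> None:
        """Append a single activity record to the learner's local log."""
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "kind": kind,          # e.g. "read", "posted", "commented"
            "resource": resource,  # URL or title of whatever was touched
            "note": note,
        }
        with LOG_PATH.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Example usage: the learner (or a browser add-on acting on their behalf)
    # records what they read and wrote today.
    log_activity("read", "https://example.org/intro-to-analytics")
    log_activity("posted", "course forum: week 2 discussion", note="replied to two peers")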

It's also worth noting that putting time into analyzing how people learn does nothing to address a more fundamental problem, one whose solution would sharpen the questions we ask of any data analysis: how can informal/non-traditional learning be assessed and credited within formal education/professional development? Weaknesses in how competency and knowledge are assessed need to be part of any comprehensive system that examines how learning can be made more effective. It's difficult to argue that we need to make the processes by which we learn more effective when we haven't yet mastered assessing how people learn now. If you have examples of meaningful assessment of informal learning within traditional environments, please share them in the comments.

Existing Practice, New Branding

Learning Analytics seems like a blend of:

  1. Traditional forms of data and behavioral analysis;
  2. The increased processing power of the cloud;
  3. An increased understanding of what the semantic web is and how it works;
  4. Better search tools that let less tech-savvy users run more sophisticated queries across larger data sets stored in the cloud; and
  5. Better visualization tools that make interpretations of those data sets more intuitive (a rough sketch of how a few of these pieces fit together follows this list).
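As a purely illustrative example (the data, column names, and tools below are my own choices, not anything prescribed by Learning Analytics itself), here is a minimal sketch of items 4 and 5: a one-line query over a small learner-activity data set, followed by a quick visualization.

    # A hypothetical sketch of the "blend": query a small learner-activity
    # data set (item 4) and visualize the result (item 5). The data is
    # invented; a real system would pull it from an LMS or activity stream.
    import pandas as pd
    import matplotlib.pyplot as plt

    activity = pd.DataFrame({
        "learner": ["ana", "ana", "bill", "bill", "carla", "carla"],
        "week":    [1, 2, 1, 2, 1, 2],
        "posts":   [4, 6, 1, 0, 3, 5],
        "quiz":    [72, 80, 55, 48, 90, 93],
    })

    # The "sophisticated query" reduced to one line: average posts and quiz
    # scores per learner across the term.
    summary = activity.groupby("learner")[["posts", "quiz"]].mean()

    # Visualization: who is participating, and how are they scoring?
    summary.plot(kind="bar", subplots=True, layout=(1, 2), figsize=(8, 3), legend=False)
    plt.tight_layout()
    plt.show()

Nothing in that sketch is new on its own; pandas and matplotlib simply stand in for whatever query and visualization layer an institution actually uses.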

It's a powerful set of tools, and as a construct it's interesting, but I'm definitely missing what makes it novel. However, there are a lot of smart people who are interested in it. Please feel free to tell me what I'm missing in the comments.

Image Credit: "Crow" taken by Lucina M, published under an Attribution-NonCommercial license.

Comments

Thanks for your reflections on this, Bill. I'll try to address some of your concerns:

1. Saving costs - you're absolutely right that it's not a straight line from "here to there". If learning analytics are deployed systemically, we could conceivably eliminate the waste of the one-course-for-everyone model. This means some learners would progress more rapidly and others more slowly (as the content and information become more personal). It's an attempt to reduce the current splatter approach to education.

2. Privacy and 3rd-party data - one of the reasons I'm most interested in analytics for learning is the near inevitability of their inclusion in education. We can sit on the side and lament their corrupting evilness... or we can recognize that digital data, increased computational power, large-scale data analysis (i.e. "big data"), etc. are going to impact education, and we can become participants in the conversation so that we drive it according to our interests/concerns/agenda. Perhaps that's utopian. But it still seems like a better approach than "letting analytics happen to us".

3. Analytics have limits, obviously. At best, they can give us the lay of the land - reveal patterns that exist. From there, as with any research, we need to decide what to do with the patterns we encounter. Having detailed analytics of learner activities in a class/course still requires a teacher/educator to intervene or respond. Yes, some machine learning and automated intervention systems exist, but they are hardly advanced enough to replace an educator.

4. In terms of what makes it novel - nothing, actually, as you noted in your breakdown of individual elements. The "newness" is in the particular blend of tools and approaches, and in considering systemic impact. Analytics give us an opportunity to peer into the black box of education and make decisions about how/what to improve (another important element that is often overlooked relates to learners - analytics should first and foremost serve the learner's needs. A learner should have access to all of the analytics the institution collects. In this instance, analytics could serve to advance a learner's self-awareness of her own learning activity). By this measure, blogs were nothing new, and neither was/is elearning - they are simply a unique constellation of existing technologies and some evolving techniques for their implementation in a particular context, meeting a particular opportunity (or solving a particular problem).

Yo.

I'm not particularly interested in whether it's novel. Some people are always going to say that things are new; others claim that they are part of a long-standing history of knowledge. Both true, both false. There are always haters. There are some very smart people who are very concerned... and there are people who always distrust money. I have no reply to those people. I'm looking into something that might help me with some things I'm working on.

My needs are specific. I've seen the edges of the kind of data analysis that guys like Tony Hirst have done. I've been poking around in the last year or so in some of the other stuff that you are talking about. I think it might be a solution for the open courses I want to get funded. I want to convince foundations to spend their money on something that works like I wish the MOOCs would and how they sometimes do. Something like what Edtechtalk is... but with enough structure that people can join without being network geeks. Something open and rhizomy.

I know from conversations I've had in the last few months that people will need something, even the broad scope provided by 10,000-foot analytics, to estimate what kind of impact their foundation's funding is having. I know of several really, really awesome projects that are being held up for lack of it. I have also seen many projects over the years shoved into restrictive content management systems or handed over to proprietary systems in order to lock the network down so it can be counted.

My entirely utopian hope is that I can learn enough to find a way to measure the impact of a project like my upcoming 'MOOC for basic skills' so that we could offer it free of charge to whoever wants it. This would then allow them to PLAR back and get credit if they want to come to university, use it to help them do something else, or just complain about it. Hopefully all three :)

Is that a good idea? I don't know yet. But it might be. And I can't see any harm in giving it a run.

That's why I think it's interesting.

Thanks for raising points for the LAK11 discussion, Bill, George, and Dave.

A few of my non-issues. Novelty, firsts, or pioneer status looks great on conference letterheads. Re-named, re-mashed, re-purposed is, to me, an academic funding furphy, and on the rise. Academic analytics is not new; this latest re-jig may be, or not. Let's just deal with it.

Moore's law is allowing ready access to, and eventual ownership of, the means of analysis, and with next-generation privacy definitions and digital use, data inundation is assured. Raw materials are therefore cheap and abundant, so monetisation is a question of when, not if. By whom does worry me. Considerably.

I suspect big-data-informed systemic policy is also inevitable. I want to be informed about this emerging interdisciplinary field, as real public education funding from governments is on a long ebb tide with no low in sight. The agendas behind any new money in public education need careful scrutiny.

One utopia would be OER; self-crafted, negotiated credentialing assessment; and learner-owned dashboards deeply informed by their own data exhaust in real time. All in a blended PLE with human checks and balances maintaining ultimate say. Not too much to dream for, in learning.

Concerns include the intent or agendas of those who own/fund the means of analysis or the authority to credential. Loss of privacy during cross-silo data linking and profiling should also concern heritage institutions, but I suspect less so for the half-billion-plus FB generation. Hope I'm wrong there.

Lastly, I'm both excited and apprehensive. That means this data analytics business is on the JSB et al. Pull Edge, which is where we need to be.
