VTech Data Breach - Some Steps To Take

3 min read

Edit, 11/27/2015: Troy Hunt - who was consulted on the Vice story - has a comprehensive review of the breach that is a great read. End edit.

Earlier today, Vice published a story on a data breach at VTech, a popular toy and game manufacturer. The bad news is that the breach was very extensive, exposing nearly 5 million unique email addresses and personal information on approximately 200,000 children. The good news - if there is ever good news when reporting a data breach - is that the person who compromised the data isn't planning on releasing it.

However, from reading the description of the breach, it is possible that other people have already accessed the data stored by VTech. Data that could be accessed included email addresses, children's first names, genders, and dates of birth, passwords, and password recovery questions and answers. According to the hacker (as quoted in the article linked above):

The hacker said that while he doesn’t intend to publish the data publicly, it’s possible others exfiltrated it before him.
"It was pretty easy to dump, so someone with darker motives could easily get it".

What To Do Now

First, don't panic. If you have bought a VTech product in the past, these steps can help minimize any risk. And, even if you haven't bought a VTech product, these steps are good practice. Nothing listed here is particularly novel or earth shattering, but these steps can help protect you over time.

  • On sites that use password recovery questions, go back and update your password and change the recovery questions and answers. This can be tedious, but it's a lot less work than recovering an account after it has been compromised.
  • Use a password manager (something like LastPass or 1Password) to store your passwords. This will allow you to store and use more complex passwords.
  • Establish a fraud alert. This guide from the Privacy Rights Clearinghouse contains clear instructions on how to do that, as well as other useful information on steps to take if and when your personal data gets compromised.
  • Visit Have I Been Pwned to see if your data was exposed in any high profile breaches. While the site is not 100% comprehensive, it's a useful resource for checking whether you have been affected by recent data breaches. The data from the VTech breach was initially not included because it had not been released, but it has since been added to Have I Been Pwned; either way, the site is worth using to monitor whether your information has been breached anywhere. For the technically inclined, a small sketch of checking an address via the site's API appears after this list.
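For readers comfortable with a little scripting, Have I Been Pwned also exposes an API. Below is a minimal sketch of checking an address programmatically; it assumes the service's v3 API, which requires an API key, so treat the endpoint, header, and placeholder values as assumptions rather than a recipe.

```python
# A minimal sketch of checking an email address against the Have I Been Pwned
# API. Assumes the v3 API, which requires an API key from the service; the
# email and key below are placeholders.
import requests

EMAIL = "you@example.com"        # address to check (placeholder)
API_KEY = "your-hibp-api-key"    # placeholder, not a real key

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={"hibp-api-key": API_KEY, "user-agent": "breach-check-sketch"},
    timeout=10,
)
if resp.status_code == 404:
    print("No breaches found for this address.")
elif resp.ok:
    print("Found in:", [b["Name"] for b in resp.json()])
else:
    print("Unexpected response:", resp.status_code)
```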

And, to state the obvious: now is not a good time to buy any new VTech games.

Tracking We Can't Hear

4 min read

The Center for Democracy and Technology recently filed comments with the FTC on cross device tracking. Their report is a good summary of current practices, and worth a read. In the report, they highlight the use of high frequency tones embedded in television and online video ads that allow marketers to connect a single individual to multiple devices. Because each device holds information about how a person interacts online, the ability to combine these separate views into a single profile allows marketers to develop more comprehensive (and more invasive) profiles of people. Ars Technica has a solid writeup that summarizes these points. The FTC is holding an event dedicated to cross device tracking tomorrow (November 16, 2015).

As noted in the Ars writeup and the CDT document, multiple companies use high frequency pitches to track users. In September 2015, one of these companies announced additional VC funding. While the quotations below are about that one company specifically, they describe this tracking practice in general terms as well.

With such data related to TV commercials, companies can come up with targeted mobile ads. The technology essentially consists of an audio beacon signal embedded into tv commercials which are picked up silently by an app installed on a user phone (unknown to a user).

A rough profile of user (sic) is then created, containing information about where the ad was watched, for how long did the user watch that commercial before changing the channel, which kind of mobile device is user using and so on.

Just to highlight: the app that picks up this audio signal - a signal that cannot be detected by human hearing - needs to be installed on a person's phone. That would seem to be a pretty significant barrier, as very few people would willingly install software on their phone for the express purpose of tracking them.

However, affiliate deals sidestep this barrier:

The company reportedly has agreements with about 6-7 apps to incorporate this technology in their app to catch signals from TV and claims to have data of 18 million smartphones already. It has already created mobile ads for over 50 brands in six countries including Google, Dominos, Samsung, Candy Crush, Airtel, P&G, Kabam and Myntra.

Based on this report, it sounds like the tracking technology is embedded within other apps. So, when you download an app from the Play Store or the Apple App Store, it could have this tracking software silently embedded in it, with no notice to end users. Both Google and Apple could play a positive role here by requiring apps that embed this tracking software to display a prominent notice to end users on their app pages.

It's also worth noting that, while the stated use is for advertisers to connect multiple devices to a single user, this technology could also be used to track multiple people to a single location. For example, high frequency pitches could be sent out in a mall (tracking people through a store), at a concert, or via any televised display. This would allow a specific device (and the person carrying it) to be tracked to a precise location, even if that person has their location services fully disabled on their phone. This technology would also allow marketers or observers to identify people who were in the same place at the same time.
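To make the mechanism a bit more concrete, here is a minimal sketch - not any vendor's actual implementation - of how an app could check a window of microphone audio for a near-ultrasonic beacon. The beacon frequency, window size, and detection threshold are illustrative assumptions.

```python
# A minimal sketch (not any vendor's actual implementation) of detecting a
# near-ultrasonic beacon in a window of audio samples using an FFT.
# The beacon frequency and threshold are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 44100   # typical phone microphone sample rate (Hz)
BEACON_FREQ = 18500   # hypothetical beacon frequency, above most adults' hearing
WINDOW = 4096         # samples per analysis window

def beacon_present(samples, threshold=10.0):
    """Return True if the window contains a strong tone near BEACON_FREQ."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    band = (freqs > BEACON_FREQ - 200) & (freqs < BEACON_FREQ + 200)
    # Compare peak energy in the beacon band to the average energy elsewhere.
    return spectrum[band].max() > threshold * spectrum[~band].mean()

# Example: a synthetic window containing an inaudible 18.5 kHz tone plus noise.
t = np.arange(WINDOW) / SAMPLE_RATE
audio = 0.05 * np.sin(2 * np.pi * BEACON_FREQ * t) + 0.01 * np.random.randn(WINDOW)
print(beacon_present(audio))  # True for this synthetic example
```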

Intrusive practices like this move marketing solidly into the realm of profiling and surveillance. Technologies like this also make the case for requiring a hardware switch on mobile devices to disable microphones and cameras. At the very least, these intrusive practices by marketers show us that browsing the web with the volume turned off, and muting the television when any commercials play, are best practice. It also highlights how privacy weaknesses in the Internet of Things (most recently seen in Vizio's sloppy business and privacy practice) can be compounded into greater intrusions into our privacy. These intrusions - committed by marketers in their self-described mission to deliver more relevant content - cross the line from marketing research into tracking and surveillance. Many of these practices are invisible, and offer no option to opt out, let alone to review or correct the full profiles amassed on us.

Now, it turns out that in addition to being invisible, the tracking is inaudible as well.

Privacy Policies and Machine Learning

3 min read

Today, Google announced the release of the second version of their machine learning system under an open source license. This is a big deal for a few reasons. First, to understate things, Google understands machine learning. The opportunity to see how Google approaches machine learning will save a huge number of people a huge amount of time. Second, this lets us take a look inside what is generally a black box. We don't often get to see how ratings, reviews, recommendations, etc. are made at scale. This release peels back one piece of one curtain and lets us look inside.

Before we go any further, it's worth highlighting that machine learning - even with a solid codebase - is incredibly complex. Doing it well involves a range of work on infrastructure, data structures, and training the algorithm, plus ongoing, constant monitoring for accuracy - and even then, there is still a lot of confusion and misconception about what machine learning does, and what it should do. At the very least, doing machine learning well requires clearly defined goals, a reliable dataset, and months of dedicated, focused work training the algorithm with representative data. The codebase can jumpstart the process, but it is only the beginning.

As part of the work we're doing at Common Sense Media, Jeff Graham and I are working with a large number of school districts on a system that streamlines the process of evaluating the legal policies and terms of a range of education technology applications.

The first part of this work involves tracking policies and terms so we can (among other things) track changes to policies and be alerted when we need to update an evaluation. This will also allow a range of other observations - and we have started talking about some of them already.

The second part of this work involves mapping specific paragraphs in privacy policies to specific privacy concerns. When it comes to evaluating policies, this analysis is the most time consuming. Doing it well requires reading and re-reading the policies to pull relevant sections together. While there are ways to simplify this, these methods are more useful for a general triage than a comprehensive review.

However, Jeff has been looking at machine learning as a way to simplify that initial triage for a while. To understate things, it's complicated. Doing it right - training and adjusting the algorithm - is no small feat, and implementing machine learning as part of the privacy work is a distant speck on a very crowded roadmap. We have a lot of work to do before it makes sense to begin doing the initial categorization this way. But announcements like the one from Google today get us closer.
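As a rough illustration of what that initial categorization could eventually look like, here is a toy sketch using scikit-learn rather than the system Google released, purely for brevity. It is not the system we are building, and the category labels and training snippets are invented for illustration.

```python
# A toy sketch (not our production system) of the kind of initial triage a
# text classifier could do: tagging privacy policy paragraphs with a concern
# category. The labels and training snippets below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_paragraphs = [
    "We may share your personal information with third party marketers.",
    "We retain your data for as long as your account remains active.",
    "You may request deletion of your account at any time.",
    "Cookies and web beacons are used to serve targeted advertisements.",
]
train_labels = ["data sharing", "data retention", "user control", "tracking"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_paragraphs, train_labels)

new_paragraph = "Information collected may be disclosed to our business partners."
print(model.predict([new_paragraph])[0])  # e.g. "data sharing"
```

A real system would need far more (and far more representative) training data, human review of every prediction, and constant monitoring - which is exactly why this remains a distant speck on the roadmap.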

Is This Part Of Your Social Media Training For Kids?

2 min read

While the fact that a variety of data brokers engage in digital redlining is old news, and Facebook's patent on assessing credit via friends could theoretically be dismissed as a future plan, we now have reports that credit rating agencies are using social media to assess credit ratings:

FICO is working with credit card companies to use several different methods for deciding what size loans people can handle, and using non-traditional sources like social media allows them to collect information on people who don't have an in-depth credit history

As educators, if you steer kids towards social media use, how do you prepare them for the reality that their posts are being archived by companies that will use their interactions to judge and sort them for the indeterminate future? Does your social media training and digital citizenship for kids cover how to create an online persona that is as creditworthy as possible? And how do we reconcile the needs for "authentic conversation" against a backdrop where for-profit companies invisibly mine these "authentic" interactions looking for predictors of future behavior? How many of us could withstand the actions of our youth weighted alongside our adult choices? How many parents consider these realities when they share information about their kids online?

Privacy Protection and Human Error

5 min read

As part of my work, I spend a fair amount of time reading through the websites of educational technology offerings. The other day, while on the site of a well known, established product, I came across a comment from one person asking for information about another person. Both people - the commenter and the person who was the subject of the question - were identified by first and last name. The nature of the question on this site struck me as strange, so I did a search on the name of the person who left the comment.

The search on the commenter's name returned several hits - including all of the top five results - that clearly showed the commenter is a principal at a school in the United States. The school's webpage, in turn, shows that the school serves young children. With that information, I returned to the comment. Knowing that the original questioner is the principal of a school, it became clear that the subject of the question - who, remember, is identified by first and last name - is almost certainly a student at that school.

I had stumbled across a comment on an edtech site where a principal identified a student at their school by name and asked a question that implied an issue with the student - on the open web. The question had been posted over a month earlier.

To make matters worse, the principal's question about a student on the open web had been responded to by the vendor. Staff for the company answered the question, and left the thread intact.

In this post, we're going to break down the ways that this exchange is problematic, what is indicated by these problems, and what to do when you encounter something similar in the future.

The Problems

Problem 1: The principal who asked the original question has access to large amounts of data on kids, but doesn't understand privacy law or the implications of sharing student information - including information with implications for behavioral issues - on the open web. This problem is particularly relevant now, when some people are complaining that teachers haven't been adequately trained on new privacy laws coming onto the books. The lack of awareness around privacy requirements is as old as data collection, and it's disingenuous and ahistorical to pretend otherwise.

Problem 2: The vendor responded to the question, and allowed a student to be identified by name, by that student's principal, on their product's web site. The product in question here is in a position to collect, manage, and store large amounts of student data, and much of that data contains potentially sensitive student information. Every member of their staff should be trained on handling sensitive data, and on how to respond when someone discloses sensitive information in a non-secure way. When a staff member stares a potential FERPA violation in the face and blissfully responds, we have a problem.

This problem is exacerbated by rhetoric used by a small but vocal set of vendors, who insist that they "get" privacy, and that people with valid privacy concerns are an impediment to progress. Their stance is that people should get out of their way and let them innovate. However, when a vendor fails to adequately respond to an obvious privacy issue, it erodes confidence in the potential for sound judgment around complicated technical, pedagogical, and ethical issues. If a vendor can't master the comment field in blogging software, they have no business going anywhere near any kind of tracking or predictive analytics.

How To Respond

If you ever see an issue that is a privacy concern, reach out to the company, school, and/or organization directly. In this case, I reached out via several private channels (email, the vendor's online support, and a phone call to their support). The comment with sensitive data and the vendor's response were removed within a couple hours. A private response is an essential part of responsible disclosure. We make privacy issues worse when we identify the existence of an issue before it has time to be addressed.

For principals and educators, and anyone in a school setting who is managing student data: spend some time reading through the resources at the federal Privacy Technical Assistance Center. While some of the documents are technical, and not every piece of information will be applicable in every situation, the resources collected there provide a sound foundation for understanding the basics. At the very least, schools and districts should create a student data privacy protection plan.

For vendors, train your staff. If you're a founder, train yourself: start with the PTAC and FERPA resources linked in this document. Cross reference the data your application collects with the data covered under FERPA. If there is any chance that anyone under the age of 13 will use your site, familiarize yourself with COPPA. Before you have any student data in your application, write down specific questions about your application and your legal concerns, and talk with a lawyer who knows privacy law.

For staff: make sure you have a Data Access Policy and some training on how to respond if a customer discloses private information. If you are part of an accelerator, ask for help and guidance. Talk to other companies as well. This is well worked ground, and there is some great work that has already been done and shared.


Privacy is complicated. We will all make mistakes; by working together, over time, we will hopefully make fewer of them, and the ones we do make will be smaller in magnitude. This is why we need increased awareness of privacy and sound protections for student data. By taking concrete steps, we can improve the way we handle data, and move toward an informed conversation about both the risks and rewards of sound data use.

What Peeple Tells Us About Privacy

2 min read

The latest Internet furor du jour is over an app called Peeple. This post is not going to get into the details or problems with the app, as other people have already done a great job with that.

In brief, the app allows anyone with a Facebook account to rate anyone else. No consent is needed, or asked for. All a person needs to rate another person is their phone number.

As seen in the links above (and in a growing angry mob on Twitter), people are pointing out many of the obvious weaknesses in this concept.

The reason many people are justifiably furious about Peeple is that it allows strangers to rate us, and makes that rating visible as a judgment we potentially need to account for in our lives. However, what Peeple aims to do - in a visible and public way - is a small subset of the ways we are rated and categorized every day by data brokers, marketers, human resources software, credit rating agencies, and other "data driven" processes. These judgments - anonymous, silent, and invisible - affect us unpredictably, and when they do, we often don't know about it until much later, if at all.

While Peeple is likely just a really bad idea brought to life by people with more money and time than sense, I'm still holding out hope that Peeple is a large scale trolling experiment designed to highlight the need for increased personal privacy protections.

Some Tips For Vendors When Looking At Your Privacy Policies

4 min read

This post is the result of many conversations over the last several years with Jeff Graham. It highlights some things that we have seen in our work on privacy and open educational resources. This post focuses on privacy, but the general lesson - that bad markup gets in the way of good content - holds true in both the OER and the privacy space.

When looking at privacy policies and terms of service, the most important element is the content of the policy. However, these policies are generally delivered over the web, so it's also important to look at how the pages containing them perform. Vendors should ensure that their policies are as accessible as possible, to as many people as possible, with as few barriers as possible.

Toward that end, here are four things that vendors should be doing to test the technical performance of their policies.

  • View the source. In a web browser, use the "view source" option. Does the text of your policy appear in the "main content" area of your page, or some semantic equivalent? Are you using h1-h6 tags appropriately? These are simple things to fix or do right; a minimal sketch that automates this check appears after this list.
  • Google your privacy policy and terms of service. First, search for the string "privacy policy OR terms of service [your_product_name]" and see what comes up. Then, use the more focused "privacy policy OR terms of service site:yoursite.com" - in this search, be sure to omit the initial "www" so that the results pick up any subdomains.
  • Use an automated tool (like PhantomJS) to capture screenshots of your policies. If PhantomJS has issues grabbing a screenshot of your page, it's a sign that you have issues with the markup on your page.
  • Use a screenreader to read your page. Listen to whether, and how, it works. Where we have observed a page failing to behave in a screenreader, it's frequently due to faulty markup, or to the page being loaded dynamically via JavaScript.
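As a small illustration of the first check, here is a minimal sketch that fetches a policy page without executing JavaScript and reports whether the policy text and headings are present in the raw HTML. The URL and sample phrase are placeholders; if the text is visible in a browser but missing here, the page is probably being loaded dynamically.

```python
# A minimal sketch of the "view source" check: fetch the raw HTML (no
# JavaScript execution) and confirm the policy text and headings are in it.
# The URL and sample phrase are placeholders for your own policy.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/privacy"   # replace with your policy URL
sample_phrase = "we collect"          # a phrase you know appears in the policy

html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

print("Policy text in raw HTML:", sample_phrase in soup.get_text().lower())
print("Headings found:", [h.name for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])])
```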

To people working on the web or in software development, these checks probably sound rudimentary - and they are. They are the technical equivalent of being able to tie your shoes, or walking and chewing gum at the same time.

In our research and analysis of privacy policies, we have seen the following issues repeated in many places; some of these issues are present on the sites of large companies. Also worth noting: this is a short list, highlighting only the most basic issues.

  • Pages where the policies are all wrapped in a form tag. For readers unfamiliar with html, the form tag is used to create forms to collect data.
  • Pages where, according to the markup, the policies are part of the footer.
  • Pages where, according to character count, the actual policies account for only 3% of the content on the page, with the other 97% being markup and scripts (a sketch of how to check this appears after the list).
  • Sites where Google couldn't pick up the text of the policy and was only able to index the script that is supposed to load it.
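For vendors who want to check their own pages for these issues, here is a rough sketch that estimates how much of a page's source is actual visible text, and whether most of that text sits inside a form or footer tag. The URL is a placeholder, and the thresholds are judgment calls, not standards.

```python
# A rough sketch for spotting the issues above: what share of the page source
# is visible text, and whether the policy text sits inside a <form> or <footer>.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/privacy"   # replace with the policy URL
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

visible_text = soup.get_text(separator=" ", strip=True)
ratio = len(visible_text) / max(len(html), 1)
print(f"Visible text is {ratio:.0%} of the page source")  # we have seen pages near 3%

for tag in ("form", "footer"):
    wrapper = soup.find(tag)
    if wrapper and len(wrapper.get_text(strip=True)) > 0.5 * len(visible_text):
        print(f"Most of the page's text is inside a <{tag}> tag")
```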

We are not going to be naming names or pointing fingers, at least not yet, and hopefully never. These issues are easy to fix, and require skills that can be found in a technically savvy middle schooler. Vendors can and should be doing these reviews on their own. The fix for these issues is simple: use standard html for your policies.

We hear a lot of talk in the privacy world about how privacy concerns could stifle innovation - that's a separate conversation that will almost certainly be the topic of a different post, but it's also relevant here. When the people claiming to be the innovators have basic, demonstrable problems mastering html, it doesn't speak well to their ability to solve more complex issues. Let's walk before we run.

MySchoolBucks, or Getting Lunch with a Side of Targeted Advertising

6 min read

MySchoolBucks.com is an application that is part of the services offered by Heartland Payment Systems, Inc., a company based in New Jersey. MySchoolBucks processes payments from parents for school lunches.

Before we proceed any further, we must highlight one thing here: this post IS NOT about the federal, state, or local school lunch programs. This post addresses a vendor that has inserted itself between students, schools, and lunches.

The premise of MySchoolBucks is pretty simple. Parents put money into an account on the site. Accounts are tied to a card used by the student to pay for lunch, and the system keeps track of how much money families have in their accounts.

To make this system work, MySchoolBucks collects a parent's name, the names of any children enrolled in school, and the schools they attend. Parents add money to their MySchoolBucks account via credit card, so MySchoolBucks also processes credit card payments.

However, reading the Privacy Policy of MySchoolBucks shows some oddities that have nothing to do with supporting parents, students, or schools with lunch. It's also worth noting that MySchoolBucks has a "feature" I have never seen on any other policy page: after six or seven minutes, the privacy policy page automatically redirects you to the home page. It's almost like the company doesn't want you to read their privacy policy at all.

But those of us who persevere will discover some oddness in this policy.

In the opening "Glossary" section, MySchoolBucks defines a Business Partner as follows:

"Business Partners" means, collectively, third parties with whom we conduct business, such as merchants, marketers or other companies.

Then, in Section 4, MySchoolBucks states:

We (or our Vendors on our behalf) may share your Personal Information ... with relevant Business Partners to facilitate a direct relationship with you.

So, business partners include marketers, and marketers can be given personal information. As noted above, the personal information collected in this application includes parent name, child's name, and the child's school.

Taking a look back at the glossary, we get this definition of non-identifying information:

"Non-Identifying Information" means information that alone cannot identify you, including data from Cookies, Pixel Tags and Web Beacons, and Device Data. Non-Identifying Information may be derived from Personal Information.

This definition glosses over the fact that many of these elements can be used to identify you. Thousands of web sites collect this information, which means there are large datasets of what this vendor inaccurately calls "non-identifying information."

Further down in the policy, MySchoolBucks states that they share "non-identifying information" pretty freely.

We may disclose Non-Identifiable Information which does not include Protected Data:

  • with Business Partners for their own analysis and research; or
  • to facilitate targeted content and advertisements.

Because Heartland Payment Systems shares what it misleadingly calls "non-identifying information" with marketers and third party ad servers, with no prohibitions on how it can be used, this "non-identifying" data can be combined with other datasets and tied to your precise identity.

Accordingly, the claim of "non-identifying" data is probably accurate from a very narrow legal perspective, but it does not represent the reality of what is possible when data from multiple datasets are combined and mined.
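A small, entirely hypothetical example makes the point: once a marketer holds another dataset keyed on the same device identifier, "non-identifying" records resolve to a named person with a one-line join. All names, identifiers, and values below are invented for illustration.

```python
# An illustrative sketch (hypothetical data) of why "non-identifying" device
# data stops being non-identifying once it is joined with another dataset
# that carries the same device ID alongside a real identity.
vendor_records = [
    {"device_id": "a1b2c3", "school": "Lincoln Elementary", "lunch_balance": 12.50},
]
data_broker_records = [
    {"device_id": "a1b2c3", "name": "Jane Parent", "home_zip": "97201"},
]

broker_by_device = {r["device_id"]: r for r in data_broker_records}
for record in vendor_records:
    match = broker_by_device.get(record["device_id"])
    if match:
        # The "non-identifying" record is now tied to a named person.
        print(match["name"], "pays for lunch at", record["school"])
```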

MySchoolBucks also supports login via Facebook, which creates additional problems:

You may register to use our Services using your existing Facebook account. If you opt to use your Facebook account to register to use our Services, you authorize Heartland to collect, store, and use, in accordance with this Privacy Policy, any and all information that you agreed that Facebook, Inc. ("Facebook") could provide to Heartland or Heartland's third party authentication agent through Facebook's Application Programming Interface ("API"). Such information may include, without limitation, your first and last name, Facebook username, unique Facebook identifier and access token, and e-mail address.

The inclusion of the unique Facebook identifier, combined with a device ID (which is likely collected as part of the "non-identifying information") would be sufficient to tie a precise identity to many occasions where a person clicked a "like" link, or shared a link on Facebook. If someone could explain why this information is needed to pay for a 2nd grader's lunch, I'm all ears.

There are other issues with the privacy policy and terms of service of MySchoolBucks, but getting into the deep weeds of every single issue with the policies obscures the larger point: paying for a kid's lunch at school shouldn't expose the student or parent to targeted advertising.

MySchoolBucks and Portland Public Schools

The MySchoolBucks.com site came to my attention a couple of weeks ago when I was reviewing back to school emails for my child. My local school district uses this service. I attempted to find any information about the site on the district web site - in particular, any contract that would spell out how student and parent data use is limited - but found nothing.

To be clear: the lack of information and disclosure from Portland Public Schools is unnecessary, and fosters mistrust.

Portland Public Schools could take three immediate steps to address these issues:

  • List out the district and school level vendors that have been designated school officials. Link to the privacy policies and terms of service of these companies, and upload the text of any additional contracts in place between these vendors and Portland Public Schools.
  • List out vendors used within schools where the vendor has not been designated a school official. Link to the privacy policy and terms of service of these companies. This list would require input and feedback from schools, as they would need to collect information about the software used within each school to support teaching and learning.
  • Document the process and criteria used to select technology vendors for district wide services. Right now, the decision making process is completely opaque to the point where it's impossible to know if there even is a process.

The distinction between vendors who have been declared school officials and vendors that require parental consent is key, as the rules around data use and sharing differ based on the status of the vendor. The lack of any documentation around contracts is also problematic. Contracts are public documents, and these purchases are made with public dollars.

It's worth noting that this is information that should be on the Portland Public Schools web site already. At the very least, parents shouldn't need to wonder who is processing their children's information. I understand that there are numerous details competing for attention within the district, but at some point, excuses need to stop and be replaced with results. The current level of awareness and attention to student privacy issues within Portland Public Schools is problematic, at best. The communications about these issues have been factually inaccurate, which raises the question: how can we trust Portland Public Schools to get the complicated issues right when they appear to be missing the basics?


Facebook, Privacy, Summit Public Charters, Adaptive Learning, Getting Into Education, and Doing Things Well

7 min read

One of the things that I look for within schools is how solid a job they do telling their students and families about their rights under FERPA. One crude indicator is whether or not a school, district, or charter chain contains any information about FERPA on their web site. So, when I read that Facebook was partnering with Summit Public Charter Schools, I headed over to the Summit web site to check out how they notified students and parents of their rights under FERPA. Summit is a signatory of the Student Privacy Pledge and a key part of what they do involves tracking student progress via technology, so they would certainly have some solid documentation on student and parent rights.

Well, not so much.

It must be noted that there are other ways besides a web site to inform students and parents of their FERPA rights, but given the emphasis on technology and how easy it is to put FERPA information on the web, its absence is an odd oversight. I'm also assuming that, because Summit clearly defines itself as a public charter school, it is required to comply with FERPA. If I'm missing anything in these assumptions, please let me know.

But, returning to the Facebook/Summit partnership, the news coverage has been pretty bland. In fairness, it's hard to do detailed coverage of a press release. Two examples do a pretty good job illustrating the range of coverage: The Verge really committed to a longform expanded version of Facebook's press release, and the NY Times ran a shorter summary.

The coverage of the partnership consistently included two elements, and never mentioned a third. The two elements that received attention included speculation that Facebook was "just getting in" to the education market, and privacy concerns with Facebook having student data. The element that received no notice at all is the open question of whether the app would be any good. We'll discuss all of these elements in the rest of the post.

The first oversight we need to dispense with is that Facebook is "just getting in" to education. Facebook's origins are rooted in elite universities. The earliest versions of the application only allowed membership from people enrolled in selected universities - Ivy League schools, and a small number of other universities.

Also, let's tell the students interacting on these course pages on Facebook - or these schools hosting school pages on Facebook - or these PTAs on Facebook - that Facebook is "just getting in" to education. To be clear, Facebook has no need to build a learning platform to get data on students or teachers. Between Instagram and Facebook, and Facebook logins on other services, they have plenty. It's also worth noting that, in the past, Facebook founder Mark Zuckerberg has seemed to misunderstand COPPA while wanting to work around it.

Facebook - the platform - is arguably the largest adaptive platform in existence. However, the adaptiveness of Facebook isn't rooted in matching people with what they want to see. The adaptiveness of Facebook makes sure that content favored by advertisers, marketers, self promoters, and other Facebook customers gets placed before users while maintaining the illusion that Facebook is actually responding directly to people's needs and desires. The brilliance of the adaptiveness currently on display within Facebook is that, while your feed is riddled with content that people have paid to put there, it still feels "personalized". Facebook would say that they are anticipating and responding to your interests, but that's a difficult case to make with a straight face when people pay for the visibility of their content on Facebook. The adaptiveness of Facebook rests on the illusion that it allows users to select the content of their feeds, when the reality, as manifested in those feeds, is more akin to a dating service that matches ads to eyeballs.

Looking specifically at how this adaptiveness has fared in the past raises additional questions.

Facebook's algorithms and policies fail Native communities.

Facebook's algorithms and policies fail transgender people.

Facebook's algorithms and policies selectively censor political speech.

Facebook's algorithms and policies allow racism to flourish.

Facebook's algorithms and policies ruined Christmas (for real - maybe a slight overstatement, but I'm not making this up).

Facebook allowed advertisers to take a woman's picture and present it to her husband as part of a dating ad.

Facebook's algorithms and policies can't distinguish art.

Facebook's algorithms and policies experiment with human emotions, without consent.

I could continue - we haven't even talked about how Facebook simplified government surveillance, but you get the point: the algorithms and policies used by Facebook tilt heavily toward the status quo, and really miss some of the nuance and details that make the world a richer place. In an educational system, it's not difficult to see how similar algorithmic bias would fail to consider the full range of strengths and abilities of all the students within their systems. Facebook, like education, has a bad track record at meeting the needs of those who are defined as outside the mainstream.

In educational technology, we have heard many promises about technologies that will "disrupt" the status quo - the reality is that many of these technologies don't deliver more than a new UI on top of old systems.

There Is An Easy Solution Here

Fortunately, none of these problems are insurmountable. If Facebook released the algorithms to its learning platform under an open source license, no one would need to guess how they worked - interested parties could see for themselves. Facebook has done this with many projects in the past. Open sourcing their algorithms could potentially be an actual disruption in the adaptive learning marketplace. This would eliminate questions about how the adaptive recommendations work, and would allow a larger adoption of the work that Facebook and Summit are doing together. This wouldn't preclude Facebook or Summit from building a product on top of this work; it would just provide more choices and more options based on work that is already funded and getting done.

It's also worth highlighting that, while there will be many people who will say that Facebook has bad intentions in doing this work, that's not what I'm saying here. While I don't know any of the people doing work on the Facebook project, I know a lot of people doing similar work, and we all wake up wanting to build systems that help kids. In this post, I hope that I have made it very clear that I'd love to see a system that returned control of learning to the learner. Done right, adaptive learning could get us there - but "doing adaptive right" requires that we give control to the learner to define their goals, and to critique the systems that are put in place to help learners achieve. Sometimes, the systems around us provide needed support, and sometimes they provide mindless constraints. Adaptive learning needs to work both ways.

Open sourcing the algorithms would provide all of us - learners, teachers, developers, parents, and other people in the decision making process - more insight into and control over choosing what matters. Done right, that could be a very powerful thing.

How Spotify Creates Needless Barriers To Deleting an Account

2 min read

Spotify recently updated their terms of service. While their terms were never especially good (and the use of Facebook login exacerbated the situation), their updated terms appear to allow Spotify to collect contact lists and geographic information.

This is not necessary to play music. I've used Spotify since the early days (back from the dark ages when you could actually create an account without Facebook), but these updated terms are too much. I headed over to Spotify to cancel, and found a great example of how a company shows that it has no respect for its users. The account cancellation process at Spotify is foolishly, unnecessarily complicated. To demonstrate this, I made a video of the process, and how it stalls out.

As shown in the video, you need to submit a form explaining that you want to cancel your account. This triggers two emails: first, a confirmation saying that Spotify is working on your request; then, several hours later, a second email outlining the process that needs to be followed to actually delete your account. This second email ended up in my spam folder; the first one came through with no problem. If I were cynical, I might almost think that Spotify was messing with the headers of their emails to trigger spam filters. But no company hates their users that much (I hope).

It also appears that Spotify has made the account cancellation process more complicated over time. Earlier versions of the process - while still not good - at least involved fewer steps.