Students and Social Media

10 min read

Introductory note: In this post, I reference hashtags and tweets I have seen that compromise student privacy. Ordinarily, I would link to the hashtags or tweets, and/or post obscured screenshots. In this post, I am doing neither because I do not want to further compromise the privacy of the people in the examples I have seen.

When teachers post pictures of students on social media, it raises the question of whose story is being told, in whose voice, and for what audience. Multiple answers exist for each of these questions: the "story" being told can range from the story of a kid's experience in the class, to a teacher documenting class activities, to a teacher documenting activities that are prioritized within a district. In most cases, even when the story is told from the student's perspective, the voice telling the story is an adult voice. The audience for these pieces can also vary widely, from parents, to other teachers, to the district front office, to the broader education technology world.

While students often figure prominently in classroom images posted on social media, student voice is rarely highlighted, and students are rarely the audience. The recent example of the IWishMyTeacherKnew hashtag - where a teacher took the thoughts and words of 8, 9, and 10 year olds, posted them on Twitter, and parlayed that experience into a book deal - provides a clear example of student words appropriated to tell an adult story. As a side note, it's also worth highlighting that student handwriting is a biometric identifier under FERPA, so sharing samples of student handwriting online without prior parental consent is, at best, a legal gray area. To emphasize: asking your students what matters and what they care about is great. Publishing these personal details to the world via social media - especially when their words can be traced back to them within their local community - prioritizes an adult need over learners' needs.

When posting student pictures on social media, the adults in the room need to be careful about the details they include with their images. I have seen examples of teachers doing a great job documenting their classroom: they show pictures of kids working on a project, and the pictures focus on the work, do not include student names, and do not include student faces (or only include them as part of a group shot, not as a close up portrait). Conversely, I have also seen teachers post pictures of kids that include a close up of the student's face and a student name tag, where the teacher's bio identifies their school and grade. Teachers should also ensure that location services for photos are turned off; otherwise, their posts can share precise geographic locations.
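Beyond the camera settings, photos themselves can carry location data in their EXIF metadata. As a rough illustration, here is a minimal sketch, assuming the Pillow library and a hypothetical file name, that checks whether an image contains GPS tags before it gets posted:

```python
# Minimal sketch: check a photo for embedded GPS data before sharing it.
# Assumes the Pillow library (pip install Pillow); the file name is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

def has_gps_data(path):
    """Return True if the image carries GPS EXIF tags."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            return True
    return False

if has_gps_data("classroom_photo.jpg"):
    print("Warning: this photo contains location data - strip it before posting.")
```

Most photo tools can strip this metadata; the point is to check before sharing, not after.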

In some more extreme examples, I have seen teachers post portraits of students that include a name tag, grade, and a reference to a specific EdTech vendor. While it is great to see teachers highlighting student effort and growth, including a specific tech vendor in the callout with a picture of an elementary school student looks a lot like a kid being used as an unpaid spokesperson to market a tech product. To make matters worse, I have also seen examples where these pictures included usernames and passwords of students. To be crystal clear, writing usernames and passwords down in a publicly accessible place shouldn't happen. Posting these passwords to the open web is a surefire way to make this bad practice worse.

When a teacher posts a picture of a student that includes the above details, they are potentially sharing directory information or parts of an educational record, as defined under FERPA. Beyond what is covered under FERPA, we must ask whose needs are served by sharing this information on commercial social media - the student, the teacher, or the school? Taking pictures is fine. Recognizing students for work and progress is obviously fine. Tying that work and progress to a specific app or vendor is less fine. Posting this collection of information on social media has the potential to cross some serious legal and ethical lines.

Given that some of this information is covered under FERPA, parents have some rights to control how and where information is shared. Schools and districts can play a key role in ensuring that learners have full and unfettered access to their basic rights to privacy. Unfortunately, many districts do not approach this issue with adequate flexibility, or with an understanding of how their policies can protect or impair a parent's ability to exercise those rights. For a pretty typical example, we will take a look at the opt-out and disclosure form from Baltimore County Public Schools.

The form has three sections: FERPA Directory Information Opt-Out, Intellectual Property Opt-Out, and Student Photographs, Videos and/or Sound Recordings Opt-Out. These are the right categories to include in an opt-out form, but the way the opt-outs are structured is hostile to student privacy.

Taking a closer look, starting with the FERPA Directory Information Opt-Out, the section closes with this explanatory note, followed by three options.

BCPS opt-out excerpt - FERPA

Note: If you “opt-out” of the release of directory information, BCPS will not release your child’s directory information to anyone, including, but not limited to: Boys and Girls Clubs, YMCA, scouts, PTA, booster clubs, yearbook/memory book companies that take photographs at schools and/or other agencies and organizations.

The reference to Boys and Girls Clubs and the YMCA is telling here: these outside vendors are used to run childcare programs for parents who need them. Because the district takes a blanket approach where parents are required to choose all or nothing, the current district opt-out policy appears to place a barrier in the way of parents who want to protect their child's privacy and need childcare. The likely scenario here is that parents who opt out of data sharing at the district level need to make additional arrangements with the childcare providers at the school level. While this is not an insurmountable obstacle, it creates unneeded friction for parents, which can be read as a disincentive for parents and children to exercise their rights under FERPA.

Districts can address this issue very easily by adding a single check box to their form that authorizes the release of directory information to school-approved childcare providers.

Moving on to the Intellectual Property Opt-Out section, Baltimore County Public Schools takes a similarly broad approach with students' IP rights. The terms of the opt-out form combine multiple different activities, with multiple different means of publishing and distribution, into an all-or-nothing option.

BCPS opt-out form - IP Rights

Having a student's intellectual property uploaded to a web site with weak privacy protections is a very different situation than having a kid covered in the news, or having a kid participating in a school-sponsored video. The fact that a district conflates these very different activities undercuts the protections available to learners. This also creates the impression that the district values district-created processes more than student privacy and learner agency.

Moving on to Student Photographs, Videos and/or Sound Recordings Opt-Out, Baltimore County Public Schools again takes an all-or-nothing approach.

BCPS opt-out - Photos, Videos, Recordings

If the parent denies such permission, the student’s picture will not be used in any BCPS publication or communication vehicle, including, but not limited to, printed materials, web sites, social media sites or the cable television channel operated, produced or maintained by BCPS’ schools or offices, nor will my child’s picture be part of a school yearbook, memory book, memory video, sports team, club or any other medium.

Social media, yearbooks, childcare, and sports are all very different contexts. When schools structure permissions in a way that removes agency from parents and kids, they burn goodwill. Also, given that teachers and districts are still publishing pictures of kids online in ways that share personal information, including (on some rare occasions) passwords, parents should have some granular ability to differentiate between sharing in a yearbook and sharing on Instagram, Facebook, or Twitter. Until schools and districts consistently get this right, they have an obligation to err on the side of restraint. To state the obvious, kids don't walk through the school doors so adults can use their likeness and work on social media. Similarly, yearbooks and social media are very different things: yearbook companies and social media companies have very different business models and - in most cases - very different approaches to handling data.

The solution here is pretty straightforward: provide parents a granular set of options. A parent or kid should be able to say that they want to be in the yearbook; a high school athlete should be able to say they want to be in the program or in the paper; a musician should be able to be acknowledged in a newsletter - and these options do not need to be tethered to sharing directory information, streamlined access to childcare, or indiscriminate sharing on social media. That is a reasonable request, and if a teacher, school, or district lacks the data handling and media literacy skills required to make that happen, then we have an opportunity for teachers and district staff to develop and grow professionally.

The argument we generally hear against allowing parents and students real choices over their privacy rights is that the burden would be too much for schools to handle. However, we only need to look at how parental rights are managed with regard to health curriculum to see the hollowness of that argument. In Baltimore County Public Schools - as in many districts nationwide - parents and students can opt out of individual units in the health curriculum. Districts have been managing this granular level of opt-out for years, and somehow - miraculously - the educational system has not tumbled into ruin as a result.

The main difference, of course, is that in many states parental opt-out rights are required and defined by law.

For parents: use the opt-out form provided by the World Privacy Forum to assert your rights. In an email accompanying the form, explain that you would like to see your district develop more flexible policies on opt-out and data sharing.

For teachers: if you are going to share student images and work on social media, make intentional choices about what you share, how you share, and why you share. Additionally, ask your district about more granular policies for parents and learners. While the initial change might be hard, over time the more flexible rules will make your work easier, and increase trust between you, your students, and their guardians.

For districts: get ahead of the curve and start offering more flexible options. As we have seen with health curriculum and with privacy in the last few years, state legislatures are not shy about introducing and passing legislation. Districts have an opportunity to address these concerns proactively. It would be great to see them take advantage of this opportunity.

Can 2017 Be the Year Of the Feature Freeze?

6 min read

Yesterday, the Intercept published an article on a project led by Peiter and Sarah Zatko, the founders of the Cyber Independent Testing Lab. The lab has developed a testing protocol to evaluate the potential for security issues within software. A big part of the excitement (or fear) about this project is due to the founders: Peiter Zatko (aka Mudge) and Sarah have a track record of great work dating back to the 90s. The entire piece is worth a read, and it highlights some common issues that affect software development and our understanding of security.

In very general terms, the first phase of their analysis examines the potential for vulnerabilities in the code.

During this sort of examination, known as “static analysis” because it involves looking at code without executing it, the lab is not looking for specific vulnerabilities, but rather for signs that developers employed defensive coding methods

In other words, the analysis looks for indications of the habits and practices of developers who understand secure development practice. This is roughly comparable to code smell - while it's not necessarily a problem, it's often an indicator of where issues might exist. 

Modern compilers of Linux and OS X not only add protective features, they automatically swap out bad functions in code with safer equivalent ones when available. Yet some companies still use old compilers that lack security features.

We'll return to this point later in this post, but this cannot be emphasized enough: organizations creating software need to be using a current toolkit. It takes time to update this infrastructure - and to the suits in an organization, this often feels like lost time, but organizations shortchange this time at their peril.
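To make the idea concrete: this kind of static analysis looks for signals, not exploits. The sketch below is a toy illustration of the approach - not the lab's actual methodology - that scans C source for calls to libc functions widely regarded as risky, the same category of "bad functions" modern compilers try to swap out:

```python
# Toy static analysis: flag calls to libc functions that are widely considered
# risky. A real tool does far more; this just shows the "look for indicators
# of defensive coding" idea.
import re
import sys

RISKY_FUNCTIONS = {
    "gets": "no bounds checking at all; use fgets",
    "strcpy": "no bounds checking; prefer strncpy or strlcpy",
    "strcat": "no bounds checking; prefer strncat or strlcat",
    "sprintf": "can overflow the destination buffer; prefer snprintf",
}

def scan(path):
    """Print each line that calls a risky function, with a brief rationale."""
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            for func, reason in RISKY_FUNCTIONS.items():
                # Match the function name followed by an open paren.
                if re.search(rf"\b{func}\s*\(", line):
                    print(f"{path}:{lineno}: {func} - {reason}")

if __name__ == "__main__":
    scan(sys.argv[1])
```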

The lab is also looking at the number of external software libraries a program calls on and the processes it uses to call them. Such libraries make life more convenient for programmers, because they allow them to repurpose useful functions written by other coders, but they also increase the amount of potentially vulnerable code, increasing what security experts refer to as the "attack surface."

As the article highlights, third party libraries are not necessarily an issue, but any issue in a third party library can potentially be an issue within apps that use the library. To use a metaphor that is poetically but not technically accurate: let's say you're going out with your friends, and one of your friends says, "Hey - can I bring my new boyfriend Johnny?" And you say, sure, why not. But then, later that night, Johnny turns out to be a real jackass - drinking too much, not tipping, talking all the time, laughing at his own jokes.

Potentially, third party libraries are like Johnny - not necessarily a problem, but when they are, they can be very unpleasant to deal with.
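The size of that attack surface is measurable. As a rough sketch, assuming a Python environment, the snippet below counts the dependencies each installed package declares - a crude proxy for how much third party code an application pulls in:

```python
# Minimal sketch: enumerate the packages installed in a Python environment and
# the dependencies each one declares - a crude proxy for third party attack surface.
from importlib.metadata import distributions

total_deps = 0
for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
    requires = dist.requires or []  # declared dependencies, if any
    total_deps += len(requires)
    print(f"{dist.metadata['Name']}: {len(requires)} declared dependencies")

print(f"\n{total_deps} declared dependency edges in this environment.")
```

Every one of those edges is a Johnny you invited along for the evening.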

The people running the evaluation lab are also clear on what their tests show, and what they don't show. 

Software vendors will no doubt object to the methods they’re using to score their code, arguing that the use of risky libraries and old compilers doesn’t mean the vendors’ programs have actual vulnerabilities. But Sarah disagrees.

"If they get a really good score, we’re not saying there are no vulnerabilities," says Sarah. But if they get a really low score, "we can guarantee that ... they’re doing so many things wrong that there are vulnerabilities [in their code]."

The potential for risk articulated here runs counter to what people want, and it's one of the reasons that many people balk at reading security analyses. People want an absolute; they want a guarantee - but vulnerabilities can exist anywhere. Secure coding practices are not new, and they are not arcane knowledge - but up until this point, many vendors have not made securing their work a priority.

However, the lede is thoroughly buried in this piece. We get this gem near the end. 

They’ve examined about 12,000 programs so far and plan to release their first reports in early 2017. They also plan to release information about their methodology and are willing to share the algorithms they use for their predictive fuzzing analysis if someone wants them.

We should have no illusions about the contents of this data set. We would likely see a small number of companies doing very well, a large number of companies in a very crowded middle, and a number of companies (looking at you, legacy enterprise software vendors who insist on maintaining backwards compatibility) with pretty abysmal numbers. This is reality. Bad decisions get made in software development all the time, often for reasons that feel defensible - even logical - at the time. But over time, if this technical debt never gets paid down, these problems fester and grow.

To all the marketing people who ignored developer input in order to meet a PR-driven deadline: this is on you.

To all the salespeople who promised features and fabricated a timeline without consulting your development team: this is on you.

To all the CxOs who supported marketing and sales over the best advice of your dev team in order to "hit numbers": this is on you.

To all the developers who never said no for the right reasons, and just put your head down and delivered: this is on you as well.

We all have a level of responsibility here. But now, we need to fix it.

The piece closes with a quotation from Mudge that is arguably the subtext for many of the ongoing conversations about security:

"We’ve been begging people to give a shit about security for a decade ...[But] there’s very little incentive if they’ve already got a product to change a product."

I'm saying this partially tongue in cheek, but I'd love to see 2017 be the year of the feature freeze, where we all agree to get our acts together. Companies could give their development teams the time to pay down technical debt. People could get their privacy policies in order and up to date. Organizations could take some time to figure out business plans that aren't predicated on selling data. Consumers could get weaned off the artificial expectation that online services built through the time and talent of countless people should be free.

We can have a tech infrastructure that isn't broken. We can make better choices. If this project is part of the work that pushes us there, then full speed ahead.

 

Advertising and Rape Threats

2 min read

On the same day that Donald Trump gets a safe space carved out for him on Reddit, Jessica Valenti's five-year-old child receives a death and rape threat.

We have people claiming they can predict crime. We have companies marketing their prowess at using predictive analytics to support policing. We have law enforcement using social media monitoring to target city kids. We have schools using social media monitoring against kids. We have law enforcement arresting people who make online threats against police officers (and to be clear: online threats of violence are not okay - they *should* be investigated). Our ISPs and our mobile providers can partner to target specific ads to specific devices in a specific home.

Yet we can't do anything about online rape threats.

We have data brokers creating profiles on all of us, for just about any reason, and selling these profiles to companies that see this data as a competitive advantage. These same data brokers are pretty adept at slicing the population into specific demographics, and targeting ads to them.

Yet we can't do anything about online death threats.

Political campaigns microtarget individuals.  Learning analytics companies tout a "robot tutor" that can "read your mind."

So, what is it? Is the marketing real? Do we have the tools to target ads to individuals, to know what people are thinking, to have true, penetrating insight into what people like and dislike? Because if that's true, we have a solid toolkit to use against online threats.

Or: the marketing is all a lie, and we are actually powerless to know more about the people who are threatening to rape kids. 

One thing we should know for sure: we either have solid analytics, or we don't.

Concrete Steps to Take to Minimize Risk While Playing Pokemon GO

5 min read

The launch of Pokemon GO highlights various privacy, security, safety, and privilege concerns with how we use and access tech. While these concerns existed prior to Pokemon GO, and will continue to exist long afterwards, the game provides an opportunity to highlight concrete steps we can take to use technology more safely and take control over data collected about us. While none of the steps outlined in this post are a panacea, they all allow for incrementally more control over data collected about us. Also, this post focuses on the privacy and security concerns. The safety and privilege concerns are equally real and worthy of attention - as with all tech, we need to take a hard look at who can access the tech, who is pushing adoption of the tech, who benefits most from its use, and who profits most from its use. Time permitting, I will dig into these concerns in a different post.

Art LaFlamme also put out a post that covers additional details - it's definitely worth a read.

Without further ado, here are some concrete steps you can take to reduce data collected about you.

1. Turn off services that can be used to collect location information.

Apps with location based services all collect precise location information. A short list of apps that collect location information includes Uber, Disney Experience, Snapchat, Facebook, Pokemon GO, insurance tracking devices, FitBit, Google, Twitter, virtually all of the apps marketed to parents that track their kids in the name of "safety," Voxer, reward program apps (like Marriott and Starbucks), and banking apps - so Pokemon GO is not unique in its aggressive collection of location information. The primary concern with aggressive collection of location data is that it will be used for targeted marketing. A secondary concern is that it will be stored and used indefinitely by data brokers, and incorporated into data profiles about us that we will never be able to access.

The concerns listed above are very valid, but it's also worth noting that this steady flow of location data can also be accessed by law enforcement. The privacy policies for most applications contain a clause that explicitly permits personal information - including location - to be turned over to law enforcement. 

For example, Fitbit can release data "If we believe that disclosure is reasonably necessary to comply with a law, regulation, valid legal process(.)" 

Progressive Insurance will release data "when we're legally required to provide the data(.)" 

We are not used to thinking about generating a data stream while playing a video game, but we need to adjust. With apps like Pokemon GO, our location can become a target for law enforcement. If one kid accuses another of an assault, or of taking part in a robbery, location data collected by the app is now evidence.

To minimize the risk of location based data being collected, toggle location based services off until you absolutely need them. When you leave your house, turn off location services, bluetooth, and wireless. This means that the only company tracking your location on an ongoing basis is your mobile phone carrier.

2. Create a separate email for games

Create a gaming email (via GMail, Yahoo, Outlook, etc), and use it exclusively for games. Ideally, tie this gaming email to usernames and even demographic information that does not identify you personally (and note - doing this can potentially violate the terms of service of some apps that insist upon "accurate information"). For games that broadcast your username and location, this additional layer of separation between your username, your "regular" email, and your actual name can provide a small layer of insulation from other players.

Note that this does not prevent companies from identifying you. Companies will still have (at minimum) your device ID, and the IP addresses from which you connect. Additionally, many apps will access your phone number, your call log, your text log, your contact list, and the other apps you have installed on your phone, among other things. The Google Play store lists out the permissions required, so it's easier to spot these types of intrusions into our privacy on Android based phones than on iOS devices. On apps that support both platforms, you can do a rough cross reference from Android to iOS. As an aside, I do not understand why Apple doesn't list app permissions in the same way as the Play store. 

3. Login with your gaming email whenever possible

Avoid social login with your Google, Twitter, Facebook, etc, account. This is arguably less convenient, but it creates an incremental barrier to inappropriate access of your personal information, and to these companies getting more detailed information about your online behavior.

4. Review your authorized apps

Every month, review what apps are authorized to access your accounts.

By aggressively removing apps that no longer need access, you minimize the risk that one of these apps could be used to compromise a different account.

5. Reset your advertising ID.

This should be done monthly. However, it should be noted that vendors are not required to use the advertising ID, and that many companies collect device-specific IDs that are more difficult to alter.

These changes won't prevent more aggressive companies from collecting your device ID, but it provides some incremental improvements.

Summary

As noted in the introduction, none of these steps are panaceas, and none of these steps will eliminate data collection. However, these steps will minimize exposure, and will bring back a degree of control to those of us looking to both use tech and maintain a modicum of privacy.

Civil Rights Complaint Filed Against Portland Public Schools

5 min read

UPDATE - July 6: According to Willamette Week, Paul Anthony met with Carole Smith on January 26, 2016, and documented the core findings that drove his civil rights complaint. He filed his complaint on May 26, 2016. Between January and May, the Portland Public School Board - of which Anthony is a member - reviewed the District-wide Boundary Review Advisory Committee (DBRAC) recommendations. On April 12th, Anthony is quoted in this DBRAC-related story talking about implementation plans moving forward for East side schools. As noted in this document about the goals of DBRAC, ensuring equitable access to resources is a key goal.

So, if I'm understanding this correctly, a school board member had evidence that, in his estimation, rose to the level of a civil rights complaint. He had this evidence in January, and was part of a review process that was designed, in part, to address these very needs. Yet, during the entire review and discussion process, the documentation was never shared. Why not? Why sit on data that documents the exact issue the district is trying to solve when you are part of a group of people providing guidance and input to this process?

If I am mistaken here, and the documentation used to drive the civil rights complaint was introduced during a public school board meeting, please share the link to the meeting video, and I'll watch the video and update this post accordingly. But, if this information was never shared publicly as part of the DBRAC review process, or as part of any budget review process, I'd love to know the rationale for keeping it private, rather than for sharing it publicly in the middle of a process where the information could have been put to good use. END UPDATE.

ORIGINAL POST

Paul Anthony, a Portland School Board member, appears to have filed a federal complaint alleging racial discrimination. According to the Oregonian, the complaint was filed in late May.

The Oregonian story linked above has some quotes from the complaint (note - I have not read the complaint, save for the excerpts covered in the Oregonian story), including:

"Superintendent Smith permits her staff to discriminate on the basis of race, color and national origin in access to educational course offerings and programs," the complaint says. "PPS data proves that students of color cannot access courses tied to long term academic achievement. For example, they disproportionately are not offered access to foreign language, academic supports, and electives that white students access."

I'm glad to see this issue getting increased attention. It mirrors and amplifies what people with kids have been saying for years. It also mirrors trends we see with how parent fundraising amplifies and enshrines the inequitable distribution of resources. But no one has the appetite or will to take on inequities based on parent fundraising. We allow the Equity Fund to apply band-aids, when what we need is surgery.

The Civil Rights complaint also raises the question of why the school board didn't use its oversight of the redistricting process to address inequitable access to resources. True leadership would have used that opportunity.

And while this might be covered in the text of the complaint, it would be great to see the ongoing problems with racial bias in discipline within Portland Public Schools addressed. We should have school- and classroom-level data about the source of referrals and suspensions. Looking at this data will make a lot of people very uncomfortable, but in an ideal world, this work would have the full support of the teachers' and principals' unions within the context of ongoing professional development for their members. Suspensions, expulsions, and disciplinary referrals don't happen out of thin air - they represent choices by teachers, principals, counselors, and other school staff. But, for all the obvious reasons, taking the needed hard look at this issue would almost certainly face strong and determined resistance.

Quoting again from the Oregonian article:

(Paul Anthony) said after painstakingly seeking the data and arranging it in spreadsheets, he couldn't get the traction he wanted on the issues.

In the immediate aftermath of the ongoing issues around lead in our schools' drinking water, many people defended the school board by saying that the board doesn't have the ability to request detailed information outside of a general supervisory role. However, what we are reading about here is real, actual inquiry - and that's a very good thing. That's exactly what we want and should demand from members of the school board. The fact that this didn't happen around issues of lead - despite a well documented history of lead in school water in Portland, going back to 2001 - calls into question the role of the board in Portland's ongoing lead issues.

It's easy to beat up on the district. It's much harder - and more politically costly - to tackle the ossified issue of school funding. It's even more difficult to engage the teachers' and/or principals' unions over issues of racially biased discipline. If we're serious about equity, we can't approach this piecemeal, and we can't just take on the easy fights. An engaged school board willing to listen to the community, and take on the hard issues, is an essential piece of what school improvement will look like.

Amazon Inspire, Open Educational Resources, and Copywrong

3 min read

On Monday, June 27th, Amazon announced Inspire, another free lesson sharing site. What made this effort interesting is, of course, the context: Amazon knows marketplaces, and Amazon knows text distribution. This effort is also part of the U.S. Department of Education's "Go Open" work, where Amazon was an early partner.

On June 29th, Amazon removed some materials due to copyright issues. The fact that this happened so early in the process (literally, screenshots distributed to media contained content encumbered by copyright) points to real gaps in how content was vetted and reviewed before launch.

All of these potential issues are directly related to implementation, and have nothing to do with the merits of using Open Educational Resources. However, these obvious issues point to the possibility of other - more subtle, less obvious - issues with the underlying content. For example, if any EngageNY content is reused within Amazon Inspire, that would almost certainly run afoul of the Non-Commercial license used on EngageNY content.

Just so it's clear, the mistakes made within the Inspire platform are all completely avoidable. People using openly licensed content have been successfully navigating these issues for years.

The more troubling development implied by these avoidable errors is that the platform focuses on the least interesting element of open educational resources: distribution. It would have been great to see a high-profile effort that simplified and supported authorship and remixing. The current conversations about OER remain mired in the very narrow vision of textbook replacement. The transformational potential of OER will come when we embrace the potential of both teacher and learner as creator. Open licensing makes this potential easier to realize, as it removes many of the barriers enshrined within traditional publishing and licensing schemes.

However, when one of the most visible platforms within the latest high-profile foray remains focused on distribution, and can't even address copyright issues within a press launch, it's clear we have a ways to go. These mistakes have nothing to do with open educational resources, and everything to do with the specifics of creating a marketplace. When we build tools that focus on redistribution, we create a natural opportunity to address issues of licensing. Ironically, the approach that has the potential to transform the way we view authorship and learning also has the potential to eliminate licensing issues.

Hopefully, someday, our platforms will catch up with the work.

Some Observations on Kahoot!

3 min read

NOTE, from July 1, 2016: Kahoot! updated their app, and their privacy policies. The issues flagged in this post have all been addressed. Also worth noting: their turnaround time in addressing these issues was incredibly fast. For what it's worth, I'm impressed by both the speed and the quality of the response. END NOTE.

In the screencast below, I highlight some issues with Kahoot!, a quiz platform that, according to the company, was used by 20 million US students in March 2016.

In the screencast, I use two demo accounts to show how an 11 year old student can create an account with no parental consent, and subsequently share content with a random adult within the application. I also highlight a less serious issue with how PINs can be shared to allow for open access over the internet to anyone who has the PIN. 

(note: the screencast has no sound - so don't think your audio settings are on the fritz :) )

Recommendations for Kahoot!

Some of these recommendations look at Kahoot's terms of service and privacy policy. A full evaluation of their terms is outside the scope of this post, but currently the terms lack meaningful detail about important points, such as how data can be used for advertising, or shared with third parties. In addition to a full review of their current privacy policy, a short list of improvements for Kahoot! includes:

  • Implement verifiable parental consent for accounts for people under 13; this should be accompanied by corresponding language in the privacy policy.
  • Inside the service, implement friend lists, and limit sharing to and from student accounts to approved friend lists.
  • Update their infrastructure to improve encryption on their login and account creation pages. Currently, these pages get an F from the Qualys SSL verification service (a check that can be scripted; see the sketch at the end of this post).
  • Update their terms of service to clarify what ownership they are claiming over student and teacher work. Their current terms claim full ownership over all content created using "any open communication tools on our website" - this effectively means that Kahoot! owns all student and teacher work created in their platform, and that they can use that work without limits, in any way they want. While I don't think this is what they intend, they should clarify the details. The precise language from the terms of service is included below.

However, any content posted by you using any open communication tools on our website, provided that it doesn't violate or infringe on any 3rd party copyrights or trademarks, becomes the property of Kahoot! AS, and as such, gives us a perpetual, irrevocable, worldwide, royalty-free, exclusive license to reproduce, modify, adapt, translate, publish, publicly display and/or distribute as we see fit. This only refers and applies to content posted via open communication tools as described, and does not refer to information that is provided as part of the registration process, necessary in order to use our Resources.

There are other suggestions that would improve the service, but this short list highlights some of the more pressing issues documented in the screencast.
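As promised above, the Qualys grade can be checked programmatically. This is a minimal sketch assuming the public SSL Labs v3 analyze endpoint and the requests library; the hostname is a placeholder, and assessments can take several minutes:

```python
# Minimal sketch: query the Qualys SSL Labs API for a host's TLS grade.
# Assumes the public v3 "analyze" endpoint and the requests library;
# the hostname is a placeholder.
import time
import requests

API = "https://api.ssllabs.com/api/v3/analyze"

def tls_grades(host):
    """Poll SSL Labs until the assessment finishes, then return endpoint grades."""
    params = {"host": host, "all": "done"}
    while True:
        report = requests.get(API, params=params).json()
        if report.get("status") == "READY":
            return [(ep["ipAddress"], ep.get("grade")) for ep in report["endpoints"]]
        if report.get("status") == "ERROR":
            raise RuntimeError(report.get("statusMessage"))
        time.sleep(30)  # be polite; assessments are slow and rate limited

print(tls_grades("example.com"))
```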

Rostering, Provisioning, Owning Your Stack, and Transparency: a Look at Lewis Palmer

6 min read

Through the continuing wonderful work over at Databreaches.net, I read about an odd situation in Lewis Palmer School District 38. The details are still unfolding, but based on the article on databreaches.net and the original report in Complete Colorado, there are a few layers at play here. To start, the Complete Colorado piece isn't clear on the technical details - and that's a good thing, because they were printing a story about an unfixed security issue (note - as of today, the affected systems have been taken offline). The ethics of printing information about an unpatched issue are questionable, at best - but we'll return to that later.

Two pieces stand out in the story. First, the data that was potentially exposed seems very sensitive. The exposed data included:

names, addresses, and phone numbers for students, parents, siblings, and emergency contacts; schedules; attendance records; grades; locker numbers and combinations; transportation details, including where and when bus pickups take place; and health records.

Second, the Complete Colorado piece includes reporting from a school board meeting held on May 19th. In the exchange below, pulled from the Complete Colorado piece, Sarah Sampayo (a school board member) is speaking with Liz Walhof, the district technology director.

Sampayo questioned the district’s technology director, Liz Walhof, about whether the district planned to make changes to the Gmail accounts. “How easily accessible is that uniquely identifying [student identification] number to the vast community,” Sampayo asked. “And is our kids’ information then protected because you can then log in … with just the kid’s ID number.” Walhof said they continue to look into better formats, but added that right now it is not possible to issue an email without using the student’s ID number.

At the 5/19 school board meeting, a parent shared her experience speaking with the district IT staff. In her public comments, she described conversations with school officials in the fall of 2015 about some of her concerns. The testimony begins at the 53:40 mark of the video. Based on her testimony, it appears that a student's login ID for Google Apps is the same as their student ID. Therefore, given how Google Apps works, student emails would also contain student IDs, thus ensuring that kids in a class know everyone's login ID.

I'm concerned that children are having to log into GAFE with their student ID numbers. And I was told that is just the way it is.

At this point, it's worth noting that just knowing someone's login ID is not sufficient to gain access. If, however, passwords were known, then that is a serious privacy issue.

And, it appears that the Lewis Palmer School District used birthdays as passwords, and announced this online from at least September 24, 2013 to March 14, 2016.

The two screenshots below were taken with the Wayback Machine. The first was crawled on September 24, 2013.

Wayback machine screenshot

The second screenshot, below, was taken on March 14, 2016.

Wayback screenshot

Both of the screenshots (and the ones taken between these two dates) contain this text:

Due to a security enhancement within Infinite Campus, your network and IC passwords have been changed! You must now enter the prefix, Lp@ before your regular birthday password (i.e. Lp@032794). Additionally, you may change this password by entering Ctrl+Alt+Delete and then picking Change a Password. Changing your password this way ONLY works if you are logged into the school network, NOT from home.

This information suggests a couple of things. Starting with the most obvious, passwords appear to be created using a commonly known structure based on a person's birthday.

Second, the instructions about being connected to the school network to change your password suggest (although I'm not certain on this) that usernames and passwords are centrally managed, meaning that a student has a single login ID and password.

It also should be highlighted that username and password issues do not appear directly related to security issues in either Infinite Campus or GAFE. This sounds a lot like an issue with how accounts were provisioned.

Based on the information available here, it appears that the way the district provisioned emails ensured that every student's login ID was easily available. Because the district both used an insecure default password structure and published that password structure on the open web for over three years, the district created a structure that allowed many people within the community to easily know the usernames and passwords of their peers.
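To put a number on how weak this scheme is, here is a minimal sketch - assuming, for illustration, students born between 1998 and 2011 - that enumerates every possible password of the published form:

```python
# Minimal sketch: enumerate every possible "Lp@" + MMDDYY password for a
# plausible range of student birth years. The point is the size of the space:
# a few thousand guesses covers every student in the district.
from datetime import date, timedelta

def birthday_passwords(first_year, last_year):
    """Yield every Lp@MMDDYY password for birthdays in the given year range."""
    day = date(first_year, 1, 1)
    end = date(last_year, 12, 31)
    while day <= end:
        yield "Lp@" + day.strftime("%m%d%y")
        day += timedelta(days=1)

# Assumption: K-12 students in 2016 were born roughly between 1998 and 2011.
candidates = list(birthday_passwords(1998, 2011))
print(f"{len(candidates)} possible passwords, e.g. {candidates[0]}")
# ~5,100 candidates - trivial to try against a known username, and far fewer
# if you know a student's grade level.
```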

It also appears - based on the parent testimony at the board meeting - that these concerns were brought to the district's attention in the fall of 2015, and were dismissed. Based on some of the other descriptions regarding access to health records, it also sounds like there might be some issues related to Infinite Campus and how it was set up, but that's unclear.

What is clear, however, is that the district is not being as forthright as they need to be. The board meeting with parent testimony was May 19th; the Complete Colorado article ran on May 24th. The data privacy page on the Lewis Palmer web site was updated on May 25th, with the following statement:

Yesterday, we discovered a possible security breach through normal monitoring of IP addresses accessing our systems.

Given that the security issue was covered in the local press the day prior, and that the district was publishing their password structure for over three years, I'd recommend they look at their logs going back a while. I'd also recommend that the district own their role in exacerbating this issue.

For districts, parents, teachers, and students: if there is a commonly known structure to how you provision both usernames and passwords, that is potentially a serious red flag. The process of provisioning accounts is time consuming and not fun (which is part of the reason why we see people starting to rush into the rostering space), but if you can't do it securely, you should put your tech programs on hold until you get it sorted out.

Tracking the Trackers

2 min read

Third party trackers are tools that companies use to track us as we navigate the web. While most of us don't pay much attention to trackers, they are present on many of the sites we visit. They collect information about our online activities - ranging from the pages we visit and the terms we search for to how long we stay on a page - and they organize this information into a profile that can then be used for many different purposes. Because tracking takes place behind the scenes, most of us never get a glimpse of how tracking is set up, and how it follows us.

If you are ever curious about how third party trackers work, Lightbeam is a freely available add-on from Mozilla that displays information on third party trackers. This recent web census covers both the current state of tracking and the different technologies used to track.

When evaluating EdTech apps, Lightbeam can be used to get a clear sense of what trackers are placed on a site. To get accurate results, set up Firefox for testing (ideally with a clean profile and Lightbeam installed), then log in to the application you want to evaluate and let Lightbeam do the rest.

The video below shows how trackers get placed. In the video, I visit three sites: WebMD, the Huffington Post, and the Weather Channel. In the process of visiting just these three sites, 139 third party trackers were placed on my browser.
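For a rough, scriptable approximation of what Lightbeam visualizes, the sketch below (assuming the requests library; the URL is simply the first site from the video) fetches a page and lists the third party hosts referenced in its HTML:

```python
# Minimal sketch: fetch a page and list the third party hosts referenced in its
# HTML. This only catches statically referenced resources - trackers injected by
# JavaScript at runtime (most of them) require a tool like Lightbeam to observe.
import re
from urllib.parse import urlparse
import requests

def third_party_hosts(url):
    """Return the set of external hosts referenced by src/href attributes."""
    first_party = urlparse(url).hostname
    html = requests.get(url, timeout=10).text
    hosts = set()
    for match in re.finditer(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(match.group(1)).hostname
        # Crude heuristic: anything that isn't the page's own hostname counts,
        # so first party CDNs and subdomains will show up here too.
        if host and host != first_party:
            hosts.add(host)
    return hosts

for host in sorted(third_party_hosts("https://www.webmd.com")):
    print(host)
```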

Targeted Ads Compromising Privacy in Healthcare

2 min read

For a current example of how and why privacy matters, we need look no further than the practices of a company that uses "mobile geo fencing and IP targeting services" to target people with ads.

In this specific case, the company is targeting ads to women inside Planned Parenthood clinics with anti-choice materials. The anti-choice messaging - euphemistically referred to as "pregnancy help" - gets delivered to women who enter selected health clinics.

"'We can set up a mobile geo fence around an area—Planned Parenthood clinic, hospitals, doctor's offices that perform abortions,' Flynn said. 'When a smartphone user enters the geo fence, we tag their smartphone's ID. When the user opens an app [the ad] appears.'"

Let's stop pretending that a phone ID isn't personal information. This is how data can be used to compromise people's privacy. We should also note that the anti-choice groups are now clearly in the business of harvesting personal information about women who visit health clinics, and who knows what they are doing with that information. With the device ID in hand, they can easily combine that dataset with data from any of the big data brokers and get detailed profiles of the people they are targeting.
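None of this requires exotic technology. The core geofence check is a few lines of math - here is a minimal sketch, with placeholder coordinates:

```python
# Minimal sketch of the core geofence check: is a device's reported position
# within a given radius of a target location? Coordinates here are placeholders.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def in_geofence(device_lat, device_lon, fence_lat, fence_lon, radius_m=100):
    """True if the device is inside the fence - the trigger for tagging its ad ID."""
    return haversine_m(device_lat, device_lon, fence_lat, fence_lon) <= radius_m

# A device reporting a position roughly 45 meters from the fence center:
print(in_geofence(45.5231, -122.6765, 45.5235, -122.6765))
```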

This is how private institutions target and exploit individuals - and they are doing it with the same techniques used by advertisers and political campaigns.

Tech isn't neutral. Whenever you hear talk of "place based advertising", this is what we are talking about.