Surveillance, Worst Case Scenarios, and the Winceable Moment

4 min read

In discussing issues related to privacy, people often devolve into trying to identify and define immediate harm and/or a worst case scenario. Both of these lenses are reductive and incomplete. Because data analysis often occurs invisibly to us, via proprietary algorithms that we don't even know are in play, assigning harm can be a matter of informed guesswork and inference. As one example, try explaining how and why your credit score is calculated - this algorithmically defined number determines many of the opportunities we do or don't receive, yet few of us can say with any certainty how it is derived. Algorithms aren't neutral - they are a series of human judgments automated in a formula. There isn't any single worst case scenario, and discussions of worst case scenarios risk creating a false vision of a single spectrum with "privacy" at one end and some vague "worst case scenario" at the other - and this is not how it works.
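
To make that last point concrete, here is a deliberately toy sketch - in Python, with weights and penalties I made up purely for illustration - of what "a series of human judgments automated in a formula" looks like. It is not how any real credit scoring model works.

```python
# Purely illustrative: a made-up "score" whose weights and penalties are
# arbitrary human judgments, not objective facts. No real credit bureau
# publishes or uses this formula.

def toy_score(on_time_ratio, utilization, years_of_history, recent_inquiries):
    """Return a number between 300 and 850 from a handful of inputs."""
    score = 300
    score += 350 * on_time_ratio              # someone decided payment history matters most
    score += 150 * (1 - utilization)          # someone decided low balances are "responsible"
    score += min(years_of_history, 10) * 10   # someone decided to cap the benefit of a long history
    score -= 20 * recent_inquiries            # someone decided shopping for credit is suspect
    return round(min(max(score, 300), 850))

# Two people with identical payment behavior get different scores purely
# because of the choices baked into the weights above.
print(toy_score(on_time_ratio=0.98, utilization=0.30, years_of_history=3, recent_inquiries=2))
print(toy_score(on_time_ratio=0.98, utilization=0.30, years_of_history=15, recent_inquiries=0))
```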

The reason privacy matters - and the reason that profiling matters - is that we are seeing increasingly experimental and untested uses of data, especially in the realm of predictive analytics. Products using new statistical methods are used in hiring, lending, mortgage decisions, finance, search, and personalization. The hype is that these new - or "innovative" or "disruptive" - uses of data will help us get more efficient and push past the biases of the past. However, this fails in at least two ways: first, algorithms contain the biases of their creators. Second, the performance of these products fails to live up to the hype, which in turn doesn't justify the risk.

Data collected in an educational setting - by definition - is data collected on people in the midst of enormous development, questioning, and growth. If people are doing adolescence right, they will make mistakes, ask questions, break things - all in the name of growth and learning. In the context of, for example, an eighth grade classroom, it all makes sense. But outside that context, it's very different. One of the promises of Big Data and Learning Analytics is that the data sets will be large enough to allow researchers to distill signal from noise, but as noted earlier, the reality fails to live up to the hype.

How many of us have memories of our behavior from high school, middle school, and elementary school that make us wince? Those winceable moments are our data trail. I mentioned earlier that talking about worst case scenarios is an inaccurate frame, and this is why: there is no single data point that, if undone, can "fix" our past. However, data collected from our adolescence is bound to contain things that are inaccurate, temporary, flawed, or confusing - for us, and for people attempting to find patterns.

When people are aware of surveillance, it shifts the way they act. When students are habituated to surveillance from an early age, it has the potential to shift the way they develop. If this data is shared outside of an educational context, it creates the potential that every person attending a public school is fully profiled before they graduate. A commonly overlooked element of this conversation is that profiles never come from a single source - they are assembled and combined from multiple sources. When data collected within an educational context gets combined with data sets collected from social media or our personal browsing history, different stories emerge.
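
A minimal, entirely hypothetical sketch of what that assembly can look like: three weak signals from different sources, merged on a shared identifier, become one profile. Every source, field, and value below is invented for illustration.

```python
# Hypothetical profile assembly: no single source below is very revealing on
# its own, but merged on a shared identifier they tell a much fuller story.

school_records   = {"student_42": {"discipline_referrals": 2, "reading_level": "below grade"}}
social_media     = {"student_42": {"posts_per_day": 14, "late_night_activity": True}}
browsing_history = {"student_42": {"recent_searches": ["anxiety symptoms", "how to get a job at 16"]}}

def assemble_profile(student_id, *sources):
    """Combine whatever each source knows about one student into one record."""
    profile = {}
    for source in sources:
        profile.update(source.get(student_id, {}))
    return profile

print(assemble_profile("student_42", school_records, social_media, browsing_history))
```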

For most people over 30 reading this post, our detailed records begin early in the 21st century, when we were 15 or older. For some kids in school now, their data profile begins when their parents posted their ultrasound on Facebook. While targeted advertising to kids is an immediate concern, at least targeted advertising is visible. Profiling by algorithm is invisible, and it is forever. Requiring students to pay for their public education with the data that will then be used to judge them sells out our kids. We can use data intelligently, but we need to have a candid conversation about what that means.

An in-depth read from @richelord: Surveillance Society: Students easy targets for data miners

Story on edtech and privacy - I'm a source on this one, along with Khaliah Barnes and Joel Reidenberg.

FERPA, Video Surveillance, and Law Enforcement Units

4 min read

In this post, we will take a look at what is potentially a large loophole in FERPA that has some obvious implications for school-to-prison pipeline issues.

However, I need to open with an enormous caveat: the FERPA brochure referenced in this post is from 2007, and it is possible that these regulations have been updated over the last eight years. I searched for updated versions and asked other people if they knew of any more recent clarifications, and the closest thing I found from the Department of Education was this doc written after the Virginia Tech shooting. The fact that I didn't find anything more recent doesn't mean that additional clarification doesn't exist. If anyone reading this post knows of more recent information on the use of surveillance cameras in schools, and how they are viewed under FERPA, please let me know either via email (bill at funnymonkey dot com) or on Twitter.

As the title of the 2007 brochure from the Department of Education indicates, the Department of Education is offering guidance on how to balance privacy of students with the security of schools, while complying with FERPA. The brochure highlights the role of "law enforcement units" - people or offices within the school who have been designated as having official responsibilities for enforcing laws, or communicating with law enforcement. FERPA specifically exempts records created or maintained by law enforcement units from protection under FERPA.

Under FERPA, investigative reports and other records created and maintained by these "law enforcement units" are not considered "education records" subject to FERPA. Accordingly, schools may disclose information from law enforcement unit records to anyone, including outside law enforcement authorities, without parental consent. See 34 CFR § 99.8.

As stated in FERPA, and highlighted in this brochure, data collected or maintained by law enforcement units is not considered an education record. Therefore, both parental and student rights over these records are limited.

The Department continues to offer the following advice (emphasis added):

Schools are increasingly using security cameras as a tool to monitor and improve student safety. Images of students captured on security videotapes that are maintained by the school's law enforcement unit are not considered education records under FERPA. Accordingly, these videotapes may be shared with parents of students whose images are on the video and with outside law enforcement authorities, as appropriate. Schools that do not have a designated law enforcement unit might consider designating an employee to serve as the "law enforcement unit" in order to maintain the security camera and determine the appropriate circumstances in which the school would disclose recorded images.

According to how FERPA is written, and based on the Department's own advice, schools appear to be encouraged to classify specific employees as "law enforcement units" to collect and manage data inside the school that is not protected by the specific law designed to protect data collected inside schools. This detail is odd on its own, but given that the stated purpose of this exemption is to stovepipe data sharing with law enforcement, this recommendation is highly problematic. Given that this FERPA brochure specifically addresses surveillance camera data, it remains an open question how this would affect the use of body cameras in schools.

In this Iowa school district, where it appears that principals and assistant principals will be wearing body cams to record interactions with students, it's unclear whether the data from the cameras is considered an educational record or not. However, in Houston, where all school resource officers will wear body cameras, it seems pretty clear that the officers - and all data collected via their body cams - are part of law enforcement units, and that the data collected by police within these schools will not be protected under FERPA.

We want kids to be treated as learners, not as the objects of surveillance. Creating a special class of employee and a special class of data that is collected inside yet handled outside the educational system seems destructive, and against the interests of learners. Mistakes are viewed differently by educators and by law enforcement. The broad exemptions granted under the auspices of a law enforcement unit provide ample opportunity for even well-intentioned adults to make decisions that have long-lasting negative repercussions for kids. The school-to-prison pipeline is real, and the loopholes created by law enforcement units are part of the problem.

Filtering and Surveillance Should Not Be Considered Protection

11 min read

Yesterday, two applications that use student and parent data were written up on EdSurge. Both of the applications put student social media use under surveillance, and attempt to tie this surveillance and data collection to the students' best interest.

The two apps are Securly and Mevoked - Securly describes itself as a "filtering 2.0 for schools and families"; Mevoked describes itself as "bridging the gap between mental health and technology".

Securly

In the EdSurge piece, a director of instructional innovation describes what Securly offers:

“From the Securly dashboard, the administrators can see what students have and haven’t been able to access,” she explains. “If I want to see what kids are posting on Twitter or Facebook, I can--everything on our Chromebooks gets logged by Securly.”

So, let's parse this out. Securly logs and tracks what kids are doing on Facebook and Twitter. Because these activities are on Chromebooks, we can assume some level of email, search, and docs logging, at the very least. But it's necessary to remember that technology is not neutral, and that the direction a technology takes can be shaped by the context within which it's used.

The social media filtering makes an especially significant difference at schools like [name redacted], a Catholic all-girls high school where most students bring their school-supplied Chromebooks home. “Most of our students are economically disadvantaged, and use our device as their only device,” [name redacted] explains. “Students take Chromebooks home, and the Securly filters continue there.”

So, at a Catholic girls' school serving poor kids, the school issues a device to all students that tracks their online behavior, fully aware that for many of their kids this is their main conduit to the internet. I can only imagine what the response would look like if/when a student at the school looks for resources on coming out, or looks for help or protection from abuse.

And in case anyone was unclear on the vision and direction of the company, Awais Ahsan, the company founder, lays it out:

Ahsan envisions Securly as eventually connecting educators and parents in monitoring all social use of technology by students. “A tool for parents to log in and see a view for their particular child, and have alerts through SMS for these activities, would complete the picture in our eyes,” he says.

Nothing supports student autonomy and growth like text messages to parents when their kid is searching online. As to any argument that online learning can support open ended inquiry, or that personal interests can inform and drive a student's academic growth? Nope.

Ahsan reminds that so far, Securly only functions on school-issued devices, which students ostensibly should not be using for personal social media use in the first place.

Those pesky student interests should never get mingled in with the important stuff: teacher directed activities, which need to be constantly logged to protect kids.

It also bears highlighting that describing an internet filter and activity tracker as a bullying prevention tool is some incredible marketing spin.

Mevoked

Mevoked has some similarities to Securly, with a different focus. The following quotation is from the EdSurge piece and Mevoked founder Arun Ravi:

Like Securly, Mevoked analyzes social mobile and online data, but focuses on mental health and connecting individuals with online and in-person resources. “We want to fill in the gap of identifying negative behavior and be the conduit to managing your condition,” Ravi says.

While the precise nature of the "condition" requiring management remains vague, we clearly need data to do what we need to do. With regards to data collection and analysis, Mevoked falls back on the "Google is already doing more of this" explanation:

Ravi explains that Mevoked accumulates data about how students use technology that is already largely accessible. “There’s no barrier in collecting this data,” he says. “We’re doing exactly what Google does when they advertise to you, using the same algorithms to assess mental health.”

While this comparison appears to create an odd parallel between advertising and mental illness, the statement doesn't hold up. Granted, Google sucks up data like a caffeinated Dyson salesperson, but Google is not selling a mental health big data app. The use of the data matters. Mevoked aims to use the data it collects to put pressure on teachers:

By offering Mevoked to schools, Ravi hopes “to put the onus on educators to take a more active interest” in student mental health.

To be very clear: increasing emotional support for kids in schools is a very good thing. But contextualizing that support within a mental health framework is problematic, and placing non-mental health professionals on the front line with mental health issues has the real potential to do more harm than good. It's hard to tell what's worse: equipping teachers who may or may not have any expertise in mental health with the name of a student in need, or outsourcing these judgments to an algorithm outside any form of informed local professional review.

The EdSurge piece also cites an ongoing study of the app with Lewis & Clark College students. The article states that a "senior psychology major at Lewis & Clark ... is conducting the study with Mevoked." It's very unclear what the study includes and what the level of supervision is, but the way the study is described, it sounds like an undergrad psych major is running a study on classmates with the support of a tech company. I suspect and hope I am missing some key details here, because this work sounds like it tramples all over the gray area between a tech pilot and a research experiment on human subjects. I hope and trust that any work that asks students to share mental health data includes supervision by mental health and/or medical professionals. While this is likely/hopefully in place, no such supervision is mentioned in the article.

While I was writing this piece, the people running the Mevoked Twitter account reached out to dispute some of my descriptions of their privacy policy, and to highlight that they are likely pivoting to work more with adults. Moving away from direct outreach to schools would be a good thing, but even in the case of a full pivot to only working with adults, the privacy issues highlighted here still need clarification. I took a screenshot of the conversation, as well as of the Mevoked privacy policy in place when this post was written.

Online Filtering, Mental Health, Surveillance, and Privacy

After reading the EdSurge piece, I took a quick jump over to the privacy policies of Securly and Mevoked. I didn't do a full review of their policies, but a quick read showed some of the common issues where data could potentially leak out.

Securly - the monitoring and logging application used in a school serving "disadvantaged" kids - reserves the right to sell any data collected "in connection with a sale of all or substantially all of the assets of Securly or the merger of Securly into another entity". Additionally, Securly reserves "the right to fully use and disclose any information that is not Personal Information (such as statistics, most frequented domains, etc)." Based on the amount of data collected by a logging service (for example, imagine what you have done on your computer and on the internet in the last 45 minutes), Securly would appear to have a sizeable trove of data on student usage patterns. Securly also reserves the right to change their terms of service and privacy policies at any time, with no notice.

A reasonable person might expect that an app like Mevoked, aiming to "bridg[e] the gap between mental health and technology" would have a more solid privacy policy. However, Mevoked quickly dashes that expectation. Mevoked uses information collected from parents and about children to advertise to parents:

We may use information collected from parents,to send such users you (sic) news and newsletters, special offers, and promotions, or to otherwise contact such users about products or information we think may be of interest. We will not send marketing or promotional materials to children.

So, an app created to support mental health will use data collected within the app to market to parents. The opportunities for exploitative marketing here are mind-boggling - and one can only imagine what Big Pharma would do with this dataset. Of course, this trove of curated mental health data is also a business asset, and can be transferred with no conditions:

If we are acquired by or merged with another company, if substantially all of our assets are transferred to another company, or as part of a bankruptcy proceeding, we may transfer the information we have collected from you to the other company.

And, of course, data collected within Mevoked can be shared in aggregate or de-identified form:

We may share aggregate or de-identified information about users with third parties for marketing, advertising, research, or similar purposes.

This assumes, of course, that the dataset can be adequately de-identified, and won't be combined with any other external datasets. And we need to re-emphasize here: the dataset in question contains data points that create a partial picture of the mental health of individuals, tied to their identity, and nothing in Mevoked's privacy policy prohibits recombining the Mevoked dataset with other datasets.
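
As a rough illustration of why that assumption is fragile, here is a minimal, hypothetical sketch of a linkage attack: join a "de-identified" dataset against any outside dataset that shares a few quasi-identifiers (ZIP code, birth year, gender), and names come back. All records and field names below are invented; nothing here describes Mevoked's actual data.

```python
# Hypothetical sketch of re-identification by linkage. A released dataset
# that keeps quasi-identifiers can be joined against any other dataset that
# contains the same fields plus names.

deidentified_records = [
    {"zip": "97201", "birth_year": 1998, "gender": "F", "risk_flag": "elevated"},
    {"zip": "97212", "birth_year": 1999, "gender": "M", "risk_flag": "none"},
]

# e.g. a voter file, a marketing list, or a scraped social media profile dump
identified_records = [
    {"name": "Jane Doe", "zip": "97201", "birth_year": 1998, "gender": "F"},
    {"name": "John Roe", "zip": "97212", "birth_year": 1999, "gender": "M"},
]

def reidentify(deidentified, identified, keys=("zip", "birth_year", "gender")):
    """Link records that share the same quasi-identifier values."""
    matches = []
    for record in deidentified:
        for person in identified:
            if all(record[k] == person[k] for k in keys):
                matches.append({"name": person["name"], **record})
    return matches

for match in reidentify(deidentified_records, identified_records):
    print(match)  # the "de-identified" risk flag is now attached to a name
```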

Both the Securly and Mevoked terms could be dramatically improved by stating that user data will never be included as part of any sale or transfer. If Securly is serious about being a filtering tool, it should not attempt to make user logs an asset. With regards to Mevoked, given that the company collects and analyzes data around an individual's mental health, treating that information as a financial asset seems poorly thought out, at best. Mevoked should also restrict any advertising or marketing related uses of its data. People are going to Mevoked for mental health reasons, not to help marketers get better at selling, and using mental health data in any form - with PII, in aggregate, or de-identified - to fuel marketing seems uncaring.

Conclusions

If you are a parent, concern for your children is natural. It's something we live with. But addressing that concern doesn't require constant surveillance, or throwing the data trail generated by that surveillance into the hands of a tech company. If you work in a school and you absolutely must filter, don't sell out your kids' online habits under a misplaced sense of "safety." If you are a kid, look for these intrusive devices, and ask pointed questions about them. If you encounter a filter, ask what is logged, and why. This is your education, and you deserve the freedom to pursue it on your terms, without having every keystroke logged. If you are a teacher who really thinks you need to intrude this deeply into your students' lives to make a difference, check yourself.

If you are running a tech company that is selling to schools and will get information on children as a direct result of your app, set up privacy policies that actually protect privacy and respect student autonomy and experimentation. Until you do that, you don't deserve our trust. Funders - the more you fund companies that trample on user privacy, the more you strengthen the impression that you care more about profits than people. And journalists and tech writers - please, push back on the techno-utopian narratives that get pushed your way. Read privacy policies, and ask companies questions about them. Think about the implications of the software you describe. Don't be afraid to call bullshit when it's needed.

And, as always, it circles back to the role students play in the learning environments we create. Securly and Mevoked - like most every EdTech app out there - treat students like observed objects, rather than creative people with agency. The EdTech space is filled with people trumpeting the potential of student-directed learning while building technology that reinforces the traditional paradigm of a teacher leading obedient students. We can't transform the process of learning while celebrating tools that remain rooted in the power structures we seek to change.

Please, Correct Any Misconceptions Here

I'm always open to the possibility/certainty that I have gotten something wrong. Please, if you see something here that is inaccurate, let me know.


Privacy, Surveillance, and Learners' Rights

3 min read

Between the Snowden revelations and the swirling misinformation about data collection and Common Core, there has been an increasing number of conversations addressing student privacy. While the conclusions reached within these conversations are unchanged, it's nice to see the topic even being addressed. The calculus generally follows this arc: "free" and "convenient" provide an acceptable reason to downplay or ignore privacy concerns.

There are wonderful things that can be done with free or low-cost tools online. The point of this conversation has nothing to do with the usefulness of working online.

However, if you are not paying (much) for the product, YOU ARE THE PRODUCT.

For the sake of argument, let's say that the US Government created a web portal. In exchange for your first name, last name, address, and email, you would be given free access to a suite of online productivity tools. How would parents and kids react if you told them that they needed to hand over this information, and store all of their work on servers provided by the government?

We would never think about doing this for the US Government, yet we do it for corporations with little or no thought.

Or what if a school's admission policy stated that, as a precondition to attending, all kids needed to agree to be part of unpaid marketing research for companies providing services to the school?

If the privacy terms of a service you use allow the company providing the service to use data to improve the service or to share data with third party affiliates to improve the service, then data is being used for, among other things, marketing.

Additionally, one way that VC funded companies attempt to demonstrate value is through developing as large a rolodex of user info as possible. That data - and the usage patterns attached to it - is incredibly valuable, and most terms of service do not guarantee the rights of people to remove their data in case of an acquisition, or in case of a change in the privacy policies.

And to emphasize: none of this is wrong. It's completely within the rights of companies to write whatever privacy policies they want, and these services provide useful options for teaching and learning.

Reports of abuse (aside from the initial data storage and collection) have been infrequent, but they exist - such as the Google tech who stalked and harassed minors.

However, in addition to the examples of obvious, pernicious abuse, what happens to the learning process when it occurs within a context of constant surveillance, where the learner is the product? This paradigm is equally present in the usage of sites like Turnitin.com, services like Coursera, ecosystems like the app store, and the various productivity suites given to schools at no or reduced cost. We need to look at the model of "student as product" and address how that paradigm affects our view of learning - and, more importantly, how it affects learners' views of their rights to privacy, and of their sense of agency in advocating for and protecting those rights.


How Are Schools That Use Apple, Google, Microsoft, and Facebook Explaining Surveillance?

2 min read

At the risk of stating the obvious, I've been following the news of widespread data collection by the NSA with some interest.

After watching things continue to unfold today - including President Obama's underwhelming defense of the program - these are some random thoughts and questions I have:

  • I'd like banks to be subject to the same level of surveillance as civilians.
  • I'd like to see the discussion broadened to include corporate responsibility for just acquiescing to these data requests.
  • Schools that went all-in with iPads - how are you explaining to parents that your 21st Century Learning enrolled their children in 21st Century Surveillance?
  • Schools that went all-in with Google Apps or Microsoft EDU - how are you explaining that the benefits of cost savings appear to be offset by passive monitoring of the work within the school?
  • Schools that put a lot of time into building a Facebook presence - how will you explain that anyone who joins the school community on Facebook is also throwing their data into NSA servers?
  • For those of you who spent time analyzing and teaching others the "privacy" settings of Facebook, does this feel like time well spent, considering that - to at least the government and Facebook - there is no such thing as a privacy setting that works as advertised?
  • It sounds like, with Prism, the government outsourced TIA (Total Information Awareness).
  • Given this level of cooperation between government and tech companies, how about we put that spirit of collaboration to work and solve the real problem of veterans waiting years for their benefits? If ever there was a problem that could benefit from good data management, the VA benefit system is it.

And yes, it is unclear how much - if any - student data is getting dropped into the net of data that continues to be given by American companies to the American government. To assume none is an act of willful naiveté that strains credibility.

The one thing I will say for Prism - according to a slide shown in the original piece, the program only costs $20 million a year to run. $20 million a year, to maintain and update a data store used to spy on 300,000,000 people? It is, ironically, an example of efficient government spending. To put that in relative terms, that's only $3 million more than the cost of a single drone.
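
For scale, the back-of-the-envelope arithmetic behind that comparison - using only the figures cited above, with the drone cost implied by the "$3 million more" line rather than independently sourced - works out roughly like this:

```python
# Back-of-the-envelope arithmetic from the figures cited above.
prism_annual_cost = 20_000_000
people_surveilled = 300_000_000

print(f"Cost per person per year: ${prism_annual_cost / people_surveilled:.2f}")  # ≈ $0.07
print(f"Implied single-drone cost: ${prism_annual_cost - 3_000_000:,}")           # $17,000,000
```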


Social Media and Cooperative Surveillance

5 min read

So the Bruins won the Super Bowl. Or something like that.

And in the aftermath, people rioted in Vancouver. And in those riots, pictures and videos were taken.

And some people took it upon themselves to identify the rioters.

[Image: Stanley Cup]

And after the aftermath - with nearly 170 people treated in hospitals and volunteers cleaning up the city - people began to ask questions about surveillance and the role of social media.

In the comments of her post linked above, Alexandra Samuel extends her original thoughts to include the "slippery slope" argument:

I don't see how we can claim to be uncomfortable with mass surveillance -- to fear Big Brother -- but then make exceptions when it's convenient, or feels important. This is a slippery slope and we can't draw too many simple lines -- even a line based on exposing illegal behaviour (as opposed to legal but controversial). Remember that there are places where it's illegal to smoke dope, or criticize the government, or hold hands with someone who is the same gender as you. Do we accept social media surveillance in those contexts?

To start, it's worth pointing out that most slippery slope arguments aren't worth the air required to set them loose. A "slippery slope" argument assumes that we live in a world with moral absolutes, and that making a "wrong" choice plunges us into the abyss of uncertainty and ambiguity.

But with that said, to all those who argue that people using social media to identify rioters to the state are engaging in community surveillance/crowdsourcing Big Brother/furthering the expansion of the omnipresent nanny state: you are late to the game. That ship has sailed. People are reporting on one another, and have been for years, well before the advent of the social web. Perversely enough, people using Facebook are complicit in building their own Panopticon. And, in using sites like Facebook - where people throw their contact information, their interests, the places they like to go, the people they like and dislike, the things they buy, the games they play (and how they play them), what they look like, what their friends look like, etc, etc - people leave a broad data trail. Even rough data shows a lot about individuals; more sophisticated datasets allow for more sophisticated predictions.

It would be interesting to look at what could be discerned from a person's datastream on Facebook, combined with the data accessible via the phones and laptops we use, and how close that would come to supporting the data needed to make the Information Awareness Office a reality.

But to return to the argument of what constitutes an appropriate use for social media, and what level of privacy is reasonable to expect: we need to ground these conversations within the historical reality that people have been disagreeing, behaving badly, attempting to avoid responsibility - and then talking about it - for centuries (as an aside, Augustine would have had an AWESOME Twitter feed). Social media just lets us get the word out faster.

And, if you are now concerned about privacy, and the relationship between surveillance, privacy, and the state, there is one thing you can do right now to make it better: stop using Facebook, Foursquare, Twitter, etc, as outreach and communication tools. To use social media is to participate in a continuous act of cooperative surveillance: sometimes we're watching ourselves, sometimes we're watching others, sometimes we're being watched, but the difference between sharing and observing is largely a matter of the side of the window you're on.

For the many self-proclaimed "social media consultants": stop advocating an expanded use of Facebook, Twitter, etc, to the detriment of an organization's primary web site. If you have engaged in such unseemly behavior in the past, it's never too late to admit your mistakes. Just stop repeating them. And if you have been working in social media for more than 15 minutes and are actually surprised by privacy implications, you can always go back to selling cars.

Seriously, though, if you are giving advice to an organization that does social justice work, be very careful of the relationships you encourage them to foster on external social sites. Given Facebook's unclear direction in China, the ease with which apps can access and store user data, the way bugs leak private data, and Facebook's own hamfisted "privacy" efforts (from Beacon to facial recognition and everything in between), encouraging social justice-oriented groups to work on Facebook could be putting people at unnecessary risk.

As we talk about privacy and surveillance, we need to remember that a key difference between a surveillance tool and a tool for individual or collective empowerment is who controls the data, and how that data is used.

Image Credit: "Patrice Bergeron" taken by slidingsideways, published under an Attribution Non-Commercial No Derivatives license.
