Bill has worked in education as an English and history teacher, an administrator, and a technology director. Bill initially discovered the Internet in the mid-1990s at the insistence of a student who wouldn't stop talking about it.

twitter.com/funnymonkey

github.com/billfitzgerald

drupal.org/user/19631

Google, Lawsuits, and the Importance of Good Documentation

8 min read

This week, the Mississippi Attorney General sued Google, claiming that Google is mining student data. In this post, I'll share some general, personal thoughts, and some recommendations for Google.

To start, it's worth watching a statement from the press conference where the suit was announced - this video clip was shared by Anna Wolfe, a journalist who covered the event.

At 1:46 in the video, the AG describes the "tests" that were run. To be blunt, these tests don't sound like actual tests - they sound more like browsing and looking at the screen. Unless the student account they were using was relatively new, had never searched on the topic being "tested," had never browsed while logged in to any non-Google site with ad tracking, and all testing browsers had their cache, cookies, and browsing history cleared, there is a range of benign explanations for behavior that looks like targeted ads. And that doesn't even take into account the difference between targeted ads based on past behavior and content-based ads delivered because a page describes a specific subject.

Without additional detail from the Mississippi AG on how they tested for tracking, the current claims of tracking are less than persuasive.
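For what it's worth, here is a minimal sketch of the kind of controlled check that would make claims like this easier to evaluate: a throwaway browser profile with no history, cookies, or signed-in account, loading a single page and recording which ad-network frames appear. This assumes Selenium and chromedriver are installed; the URL and the ad-domain list are placeholders, not a definitive methodology.

```python
# Rough sketch only: launch a clean, logged-out browser profile, load one page,
# and record which third-party ad domains serve frames on it. Repeating this
# with a fresh profile vs. a logged-in student account is the kind of
# comparison a real test would need. The URL and domain list are illustrative.
import tempfile
from urllib.parse import urlparse

from selenium import webdriver
from selenium.webdriver.common.by import By

AD_DOMAINS = ("doubleclick.net", "googlesyndication.com")  # placeholders

options = webdriver.ChromeOptions()
options.add_argument(f"--user-data-dir={tempfile.mkdtemp()}")  # brand-new profile
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/article-about-canoeing")  # placeholder URL
    frame_hosts = {
        urlparse(frame.get_attribute("src") or "").netloc
        for frame in driver.find_elements(By.TAG_NAME, "iframe")
    }
    hits = {host for host in frame_hosts if host.endswith(AD_DOMAINS)}
    print("Ad-network frames loaded from:", hits or "none")
finally:
    driver.quit()
```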

G Suite Terms, and (a Lack of) Clarity

An area where Google can improve is highlighted in the suit: Google's terms, and the way Google describes how educational data are handled, are not easily accessible or comprehensible (all the necessary disclaimers apply: I am not a lawyer, this is not legal advice, etc, etc). This commentary is limited to transparency and clarity. With that said, Google could blunt a lot of the claims and criticisms they receive with better documentation. The people who are doing this work at Google are smart and talented - they should be allowed to describe the details of their work more effectively.

Google has built a "Trust" page for G Suite, formerly known as Google Apps for Education. The opening paragraphs of text on this page highlight the confusing complexity of Google's terms.

Opening text from Trust page

In this opening text, Google links to five different policies that govern use of Google products in education:

However, this list of five different legal documents leaves out five additional documents that potentially govern use of G Suite in Education:

Of these five additional documents, two (the Data Processing Amendment and the Model Contract Clauses) are optional. However, these ten documents are not listed together in a single, coherent list anywhere on the Google site that I have found. The trust page also links to this list of Google services that are not included in G Suite/Google Apps for Education, but that can be enabled within G Suite. The list includes over 40 individual services, which are all covered by different sets of terms.

Moving down the "Trust" page, we see several different words or phrases used to refer to the Education Terms: "contracts," "G Suite Agreement," and "agreements." These all link to the same document, but the different names for the same document make it more difficult to follow than it needs to be.

Some simple things Google could do on the "Trust" page:

  • list out all applicable terms and policies, with a simple description of what is covered;
  • list out the order of precedence among the different documents that govern G Suite use. If there is a contradiction between any of these documents, identify which document is authoritative. As just one example, the Data Processing Agreement and the G Suite Agreement define key terms like "affiliate" in slightly different ways;
  • highlight what documents are optional;
  • create a simple template for districts (or state departments of ed, or universities) to document the agreements governing a particular G Suite/Google Apps implementation;
  • standardize language used when referring to different policies;
  • define the differences between the Education-specific contracts and the Consumer contracts;
  • in each of their legal terms, create IDs that allow for linking directly to a section of a document.

While the above steps would be an improvement, creating standalone, education-specific terms that were fully independent of the consumer terms would add even more clarity. From a product development standpoint, drafting those terms would force an internal review to ensure that legal terms and technical implementation were in sync. To be clear, this is an enormous undertaking, but if Google did this, it would add some much-needed clarity. Practically speaking, Google could use this step to generate some solid PR as well. The PR messaging on this practically writes itself: "Google has always prided itself on being a leader in security, data privacy, and transparency. As our products evolve and improve, we are always making sure that our agreements evolve and improve as well."

G Suite and Advertising

Google has stated on multiple occasions that "There are no ads in the suite of G Suite core services." Here, it's worth noting that the "core services" for education include only Gmail, Google Calendar, Google Talk, Google Hangouts, Google Drive, Google Docs, Google Sheets, Google Slides, Google Forms, Google Sites, Google Contacts, and Google Vault. Other services - like Maps, Blogger, YouTube, History, and Custom Search - are not part of the core services, and are not covered under educational terms.

Ads text from Trust page

There are differences, however, between showing ads, targeting ads, and collecting data for use in profiles. Ads can be shown on the basis of the content of the page (ie, read an article about canoeing, see an ad for canoes), and this requires no information about the person reading the page.

Targeted ads use information collected from or about a user to target them, or their general demographic, with specific ads. However, while targeted ads are annoying and intrusive, they provide visual evidence that personal data is being collected and organized into a profile.

On their "Trust" page, as pictured above, Google states that "Google does not use any user personal information (or any information associated with a Google Account) to target ads."

In Google's Educational Terms, they state that they collect the following information from users of their educational services:

  • device information, such as the hardware model, operating system version, unique device identifiers, and mobile network information including phone number of the user;
  • log information, including details of how a user used our service, device event information, and the user's Internet protocol (IP) address;
  • location information, as determined by various technologies including IP address, GPS, and other sensors;

While it is great that Google states that they don't use information collected from educational users for advertising, Google also needs to provide a technical explanation that demonstrates how they ensure that IP addresses collected from students, unique IDs tied to student devices, and student phone numbers are explicitly excluded from advertising activity. Also, Google should clearly define what they mean by "advertising purposes", as this phrase is vague enough to take on many different meanings, often revealing more about the assumptions of the reader than the practice of Google.

This technical explanation should also cover how the prohibitions against advertising based on data collected in Google Apps square with this definition of advertising, pulled from the optional Data Processing Agreement:

"'Advertising' means online advertisements displayed by Google to End Users, excluding any advertisements Customer expressly chooses to have Google or any Google Affiliate display in connection with the Services under a separate agreement (for example, Google AdSense advertisements implemented by Customer on a website created by Customer using the "Google Sites" functionality within the Services)."

There are many ways that all of these statements can be true simultaneously, but without a technically sound explanation of how this is accomplished, Google is essentially asking people to trust them with no demonstration of how this is possible.
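To give a sense of what I mean by a technically sound explanation, here is a purely hypothetical sketch. The field names, the is_education_account flag, and the logic are all invented for illustration - this says nothing about how Google's systems actually work - but it shows the kind of exclusion rule such an explanation could document and that an auditor could then verify:

```python
# Hypothetical illustration only - not Google's actual implementation.
# The idea: identifiers tied to education accounts never reach ad systems,
# and ad-sensitive fields are stripped from whatever does.
from typing import Optional

AD_SENSITIVE_FIELDS = {"ip_address", "device_id", "phone_number", "location"}

def prepare_for_ads(record: dict) -> Optional[dict]:
    """Return a copy of `record` that is safe to hand to an ad system, or None."""
    if record.get("is_education_account"):
        return None  # education traffic is dropped entirely, never targeted
    return {key: value for key, value in record.items()
            if key not in AD_SENSITIVE_FIELDS}

# Example: a log entry from a student account never reaches advertising code.
student_event = {"is_education_account": True, "ip_address": "192.0.2.10"}
assert prepare_for_ads(student_event) is None
```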

Conclusion

Google has been working in the educational space for years, and they have put a lot of thought into their products. However, real questions still exist about how these products work, and about how data collected from kids in these products is handled. Google has created copious documentation, but - ironically - that is part of the problem, as the sheer volume of what they have created contains contradictions, along with near-repetitions that vary just enough to impede understanding. Having watched Google's terms evolve over the years, and having read the terms of multiple other products, these issues actually feel pretty normal. This doesn't mean that they don't need to be addressed, but I don't see malice in any of these shortcomings.

However, the concern is real, for Google and other EdTech companies: if your product supports learning today, it shouldn't support redlining and profiling tomorrow.

Why I Signed neveragain.tech

4 min read

Yesterday, December 14th, was an interesting day in technology. Evernote announced an update to their terms of service that appears to allow selected employees to read notes stored in their system, with no opt-out, in the interest of improving machine learning. People using Evernote are - rightly - talking about abandoning the service en masse, which seems like a pretty reasonable response to such a horrible privacy practice. Of course, I have heard nary a peep from Evernote's education ambassadors about this. Who knows - maybe if they actually said something they might have to give back their t-shirts and stickers.

But Evernote's issues were a footnote compared to the spectacle of major tech leaders shuffling into Trump Tower to meet with the president-elect, the incoming vice-president, and the children of the president-elect. If we are searching for a situation that illustrates how ethics get bent for reasons of politics and profit, we don't need to look much further than this event.

Trump Tower tech meeting

An additional backdrop here is that Trump ascended to the presidency with the help of the company that he didn't invite because they refused his emoji. And, during a campaign that was marked by promises of creating a registry for Muslims, the Trump campaign was steadily creating a version of that registry, and more, with data pulled from Facebook, assembled and augmented by Cambridge Analytica, and further extended by data purchased from the major data brokers here in the US that combines in-person and online habits, with up to 5,000 individual data points on 220 million Americans. This data set is privately held, so potentially, people like Richard Spencer and incoming White House advisors like Steve Bannon could be using it to inform their work. But let's be clear - this data set exists because of the work of the tech industry, and the data it collects.

Third party tracking is pervasive on the web. This technology creates marked and growing information asymmetry, where the odds are increasingly stacked against people, and stacked for corporations. Technology fuels this power imbalance, and technologists build the tools that make it possible.

The day before the leading technologists in our country shuffled into Trump Tower, news broke of 200 million records for sale on the dark web containing information that appears to come from a data broker. The records identify individuals, and include details like spending habits, political contributions, political leaning, credit rating, charitable contributions, travel habits, and information on gambling habits/tendencies. These records were certainly assembled and stored via different tracking technologies.

With this as a backdrop, when I see something like neveragain.tech I will admit a degree of skepticism. The profiling tools are built, and the data sets are assembled, multiple times over. I also want to make explicitly clear that my signature, or lack of signature, on the list is pretty unimportant in the larger scheme of things. But with all that said - and with all the technology that has been built, and is right now humming along, collecting data, serving bad search results, and tracking us - we can still make things better. Hell, we might even be able to make things right.

With regard to privacy, people often use two metaphors to describe why the efforts to increase privacy protections are meaningless: "the genie is out of the bottle" and "the train has left the station." What people using these metaphors fail to recognize is that the stories end with the genie returning to the bottle, and the train pulling into another station. "Too late" is the language of the lazy or the overwhelmed. Change starts with awareness, and change grows with organized voices. That's something I can get behind, and is the reason I signed neveragain.tech.

Facebook, Voter Suppression, and AdTech

3 min read

This piece over on Medium ties together several news stories about the Trump campaign's use of Dark Posts on Facebook to suppress the vote among Clinton voters. There are some great details in the post, and you should read it in full. A few details stood out that bear highlighting.

The Trump campaign used pre-built tools within Facebook, and data on users exposed by Facebook. In other words, Facebook already had the tools to support vote suppression built into their system. I don't think that this was done intentionally by Facebook, but it really hammers home the point: all tech has unintended consequences. When we look at tech, we need to evaluate the fringes, and ask hard questions about what the tech can break, because we humans are great at breaking things. But in this case, the mechanisms for manipulating behavior via ads worked very well for suppressing turnout in the electorate. Predictive analytics lost, but mood manipulation via big data worked well.

The Trump campaign used data from within Facebook to suppress turnout among Clinton supporters. This means that every progressive organization on any issue that has been organizing on Facebook helped provide the Trump campaign with a list of potential voters to receive Dark Posts designed to suppress their vote. (In brief, Dark Posts are private ads microtargeted to specific demographics; on some days, the Trump campaign delivered 100,000 different ads, tailored by demographic data.) But the message to progressive orgs should be clear: when you organize on Facebook, you expose your organization and your stakeholders to profiling and targeted political ads by your opponents. Use better tools.

Finally, according to the piece, the Trump campaign created a privately owned database that contains between 4,000 and 5,000 data points on the online and offline behavior (ie, where we go, our credit card purchases, etc) of approximately 220 million Americans. This database was compiled from multiple sources, including Cambridge Analytica, Experian PLC, Datalogix, Epsilon, and Acxiom Corporation. It's unclear what restrictions, if any, exist around who can access this database. Unlike data collected on us by the NSA, where there are levels of bureaucracy tracking access, the dataset compiled during the campaign is a much more openly accessible resource to people within the Trump campaign.

Also worth noting: Facebook explicitly offers advertising services that tie online and offline behaviors. If you look at the list of partners, you will see some of the same players that determine our credit scores.

Data Clean Up - No Time Like the Present

1 min read

I can't think of a better time than the present for schools to clean up some of their existing demographic data collected on students. Ideally, demographic data can be used to ensure that students and schools get the resources they need, but in some cases, the same demographic data used to help deliver services could also be used to help identify parents or families that have stayed past the time permitted on their visa.

For example, information on languages spoken in the home, the presence or absence of a social security number, or questions that look directly at immigration status can all be used in multiple ways. Given that collecting accurate data on sensitive topics is never easy, deleting this data as a means of ensuring that it isn't misused or misconstrued is a recommended path.

If you don't have data, it can't be compromised, leaked, or misused. For schools that have sensitive demographic data on the students entrusted to their care, now is an ideal time to clean up.

How Do We Support Each Other As We Do The Work?

3 min read

Donald Trump and Mike Pence won the election last night. This raises a whole slew of questions, but I'll start here with some questions grounded in an educational context:

  • What does it mean to create a safe space for learning for black and brown kids when the leader of the country considers people that look like them to be terrorists, rapists, or drug dealers who should be kicked out of the country?
  • What does it mean to stand up against bullying when we have a leader who incorporated abusive behavior into his campaign strategy?
  • What does it mean to encourage honesty when we have a leader who actively ignores the truth?
  • What does it mean to educate women when we have a leader who consistently demeans women based on their physical appearance, and who brags of sexual assault?

I don't have answers to any of these questions - and really, the answers to these questions reside in our day to day actions. We - all of us - will have a constant series of small interactions where we will have the opportunity to do well, or to do something else. Hopefully, we will get it right more than we get it wrong, and hopefully, when we get it wrong we will have the humility to admit it, make amends, and move forward.

The conditions that led to the election of Donald Trump existed well before Donald Trump announced his candidacy. The racism, misogyny, and xenophobia that he voiced while campaigning have been well documented. However, racism, misogyny, and xenophobia are well-worn threads in the history of the United States. I don't say this to be inflammatory, but rather to acknowledge a basic reality. I mean, I'm writing this within the borders of a state that was founded as a bastion of white supremacy.

So here we are. To state the obvious: the need to do the work would have existed regardless of who won, but the Trump/Pence victory amplifies the need to center intersectional social justice in our work. And yes, I am being intentionally vague when I say, "the work." We all need to define it in the way that makes sense to us - for some of us, it will be intensely local; for others, it will be organizing at the national level. For most of us, it will be something in between. For people who look like me, let's consider talking less, and listening more.

Food is my thing. When I woke up this morning, I peeled and diced some shallots, onions, parsnips, garlic, and a turnip and threw them in a pot with a chicken, salt, pepper, brown sugar, soy sauce, and some spices. It's simmering now as I write, and soon, the smell will fill the house.

It's not a solution, but it'll feed those around me. Today starts now. How do we support each other as we do the work? 

Ransomware Focused on K12 and Government

1 min read

While ransomware attacks have been on the rise, education has seen (fortunately) few attacks. However, as reported in Softpedia, that could be changing.

The focus on educational and government users attempts to take advantage of (among other things) weak or nonexistent disaster recovery strategies.

By going after government institutions, they might get lucky and infect a target that has failed to implement a proper backup procedure, effectively shutting down its system until a ransom has been paid. The chances of squeezing a ransom payment out of these targets are higher than with regular home users.

The attack has been delivered using bogus ticket confirmations, which in turn contain a link to the ransomware. Now is the time to do two things:

  • Test your backup and disaster recovery strategy (a minimal restore-test sketch follows this list); and
  • Review good email and download habits with your colleagues. This will protect against phishing, social engineering, and ransomware attacks.
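On the first point, the test that matters is an actual restore. Here is a minimal sketch, assuming backups land as .tar.gz archives in a known location - all of the paths below are placeholders - that restores one file from the most recent backup and compares it to the live copy:

```python
# Minimal restore test: pull one file out of the latest backup archive and
# confirm it matches the live copy. All paths here are placeholders.
import hashlib
import tarfile
import tempfile
from pathlib import Path

BACKUP_ARCHIVE = Path("/backups/latest/shared-files.tar.gz")  # placeholder
SAMPLE_MEMBER = "staff/handbook.pdf"                          # placeholder
LIVE_COPY = Path("/srv/files/staff/handbook.pdf")             # placeholder

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as scratch:
    with tarfile.open(BACKUP_ARCHIVE) as archive:
        archive.extract(SAMPLE_MEMBER, path=scratch)
    restored = Path(scratch) / SAMPLE_MEMBER
    if sha256(restored) == sha256(LIVE_COPY):
        print("Restore test passed: the backup copy matches the live file.")
    else:
        print("Restore test FAILED: fix this before you need the backup.")
```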

Students, Directory Information, and Social Media - Part 2

5 min read

Last week, I put out a post on social media and kids. Apparently, it was read by more than a couple of people. I don't keep track of pageviews or reach here - I have no analytics running on this blog, and while I will talk with people on Twitter about things, I have disabled comments on the blog - thanks, spammers and trolls! So, I have no sense of which posts on my personal blog resonate with people. My approach to writing online is to treat it as my outboard brain - the process of writing helps me figure out what I'm thinking, what I'm getting wrong, and where I need to look and learn more. Based on the feedback and responses I received on the last post, I wanted to clarify and expand on a couple of things.

The premise of the post is that parents should be able to opt their kids out of directory information sharing and out of having their kid's information (photo and name) shared on social media, without this becoming a barrier to other school activities like yearbook, local news, athletic and music publications, class pictures, and streamlined access to childcare.

The conversation would be different if directory information was limited to basic information, and if teacher sharing on social media showed higher levels of restraint, but, unfortunately, that is not where we are.

Some school districts - not all, but some - consider a student's name, address, email address, phone number, picture, date of birth, place of birth, and enrollment status to be directory information under FERPA. Under FERPA, local education agencies have the right to define what constitutes directory information. FERPA allows directory information to be shared without consent. It's worth noting that if a company had an incident where this same data was accessed, this would be considered a data breach. Yet, for a kid in kindergarten, schools and districts have the right to designate this as information that can be shared freely. To get a sense of how districts are defining directory information and managing opt-out, take some time and read through district forms. These forms are pretty short, and most of them can be read in under five minutes.

Moving on to social media, some teachers who make regular use of social media overshare. This is not the case for all teachers - there are many teachers who don't show kids' faces, only share images of larger activities, don't share student names, and don't share other personal information collected from students. But the small subset of teachers who overshare complicate the space for their peers, and for school districts attempting to balance proactive outreach with real concerns about learner privacy. When teachers share their school and grade in their bio, that information can be combined with what is shared in social media posts. It's also worth noting that, in many cases, a search on a username across social media sites reveals additional information about people.

While opinions vary on this point, I do not consider a school web site to be part of social media. Most school web sites have nowhere near the traffic or visibility (to people or search engines) of social media sites, and I want that distinction to be clear in this post.

Three recommendations for districts that would address these basic issues include:

  • Limit what is covered under directory information. Ideally, information that allows a kid to be contacted directly would be excluded from directory information;
  • Create a social media policy for teachers that limits the amount of information that teachers can share about a student via an individual teacher's social media account (Twitter, Facebook, Instagram, Snapchat, etc). Ideally, names should not accompany portraits, and callouts to edtech vendors should not accompany a kid's image;
  • Avoid grouping parental and learner rights around data sharing into all-or-nothing buckets. These two forms are good examples of how districts are proactively addressing these needs.

If a district takes steps to minimize what is considered directory information and has a sound social media policy in place, the number of people who feel the need to opt out will likely decrease. This is an opinion based on multiple conversations over the years, and like all opinions it requires time to see if it holds water. However, my sense (from talking with people who care about learner privacy within parent and school communities) is that having the option to opt out would reduce concerns about the need to opt out - in other words, when schools recognize the need for the option, people have more trust that the schools understand the issues and are addressing them effectively.

And, a closing thought: parents also have a role to play here in their sharing on their own social media feeds. Periodically, take a step back and review your social media presence with an eye toward seeing what information you have shared about your family and friends. If we want to emphasize the need for privacy with our kids, we have an obligation to model that with our own behavior as well.

Students and Social Media

10 min read

Update: I put out a second part to this post based on some conversations. End Update.

Introductory note: In this post, I reference hashtags and tweets I have seen that compromise student privacy. Ordinarily, I would link to the hashtags or tweets, and/or post obscured screenshots. In this post, I am doing neither because I do not want to further compromise the privacy of the people in the examples I have seen.

When teachers post pictures of students on social media, it raises the question of whose story is being told, and in whose voice, and for what audience. Multiple different answers exist for each of these questions: the "story" being told can range from the story of a kid's experience in the class, to a teacher's documenting of class activities, to a teacher documenting activities that are prioritized within a district. In most cases, even when the story is told from the student's perspective, the voice telling the story is an adult voice. The audience for these pieces can also vary widely, from parents, to other teachers, to the district front office, to the broader education technology world.

While students often figure prominently in classroom images posted on social media, student voice is rarely highlighted, and students are rarely the audience. The recent IWishMyTeacherKnew hashtag - where a teacher took the thoughts and words of 8, 9, and 10 year olds, posted them on Twitter, and parlayed that experience into a book deal - provides a clear example of student words appropriated to tell an adult story. As a side note, it's also worth highlighting that student handwriting is a biometric identifier under FERPA, so sharing samples of student handwriting online without prior parental consent is, at best, a legal gray area. To emphasize: asking your students what matters and what they care about is great. Publishing these personal details to the world via social media - especially when their words can be traced back to them within their local community - prioritizes an adult need over learners' needs.

When posting student pictures on social media, the adults in the room need to be careful about the details they include with their images. I have seen examples of teachers doing a great job documenting their classroom when they show pictures of kids working on a project, and the pictures focus on work, do not include student names, and do not include student faces (or only include them as part of a group shot, not as a close-up portrait). Conversely, I have also seen teachers post pictures of kids that include a close-up of the student's face and a student name tag, where the teacher's bio identifies their school and grade. Teachers should also ensure that location services for photos are turned off; otherwise, the photos themselves can embed and share precise geographic locations.
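On that last point, checking a photo for embedded location data before posting is easy to script. Here is a minimal sketch using the Pillow library; the file name below is a placeholder:

```python
# Flag photos that carry embedded GPS coordinates before they get shared.
# Requires the Pillow library; the file name below is a placeholder.
from PIL import Image

GPS_INFO_TAG = 34853  # standard EXIF tag number for GPSInfo

def has_gps_data(path: str) -> bool:
    """Return True if the image's EXIF metadata includes GPS coordinates."""
    with Image.open(path) as img:
        return GPS_INFO_TAG in img.getexif()

if has_gps_data("classroom_photo.jpg"):
    print("Warning: this photo contains location data - strip it before posting.")
```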

In some more extreme examples, I have seen teachers post portraits of students that include a name tag, grade, and a reference to a specific EdTech vendor. While it is great to see teachers highlighting student effort and growth, including a specific tech vendor in the callout with a picture of an elementary school student looks a lot like a kid being used as an unpaid spokesperson to market a tech product. To make matters worse, I have also seen examples where these pictures included usernames and passwords of students. To be crystal clear, writing usernames and passwords down in a publicly accessible place shouldn't happen. Posting these passwords to the open web is a surefire way to make this bad practice worse.

When a teacher posts a picture of a student that includes the above details, they are potentially sharing directory information or parts of an educational record, as defined under FERPA. Beyond what is covered under FERPA, we must ask whose needs are served by sharing this information on commercial social media - the student, the teacher, or the school? Taking pictures is fine. Recognizing students for work and progress is obviously fine. Tying that work and progress to a specific app or vendor is less fine. Posting this collection of information on social media has the potential to cross some serious legal and ethical lines.

Given that some of this information is covered under FERPA, parents have some rights to control how and where information is shared. Schools and districts can play a key role in ensuring that learners have full and unfettered access to their basic rights to privacy. Unfortunately, many districts do not approach this issue with adequate flexibility or understanding of how their policies can protect or impair a parent's ability to access their rights. For a pretty typical example, we will take a look at the opt-out and disclosure form from Baltimore County Public Schools.

The form has three sections: FERPA Directory Information Opt-Out, Intellectual Property Opt-Out, and Student Photographs, Videos and/or Sound Recordings Opt-Out. These are the right categories to include in an opt-out form, but the way the opt-outs are structured is hostile to student privacy.

Taking a closer look, starting with the FERPA Directory Information Opt-Out, the section closes with this explanatory note, followed by three options.

BCPS opt-out excerpt - FERPA

Note: If you “opt-out” of the release of directory information, BCPS will not release your child’s directory information to anyone, including, but not limited to: Boys and Girls Clubs, YMCA, scouts, PTA, booster clubs, yearbook/memory book companies that take photographs at schools and/or other agencies and organizations.

The reference to Boys and Girls Clubs and the YMCA is telling here: these outside vendors are used to run childcare programs for parents who need them. Because the district takes a blanket approach where parents are required to choose all or nothing, the current district opt-out policy appears to place a barrier in the way of parents who want to protect their child's privacy and need childcare. The likely scenario here is that parents who opt out of data sharing at the district level need to make additional arrangements with the childcare providers at the school level. While this is not an insurmountable obstacle, it creates unneeded friction for parents, which can be read as a disincentive for parents and children to exercise their rights under FERPA.

Districts can address this issue very easily by adding a single check box to their form that authorizes the release of directory information to school-approved childcare providers.

Moving on to the Intellectual Property Opt-Out section, Baltimore County Public Schools takes a similarly broad approach with student's IP rights. The terms of the opt-out form combine multiple different activities, with multiple different means of both publishing and distribution, into an all-or-nothing option.

BCPS opt-out form - IP Rights

Having a student's intellectual property uploaded to a web site with weak privacy protections is a very different situation than having a kid covered in the news, or having a kid participating in a school-sponsored video. The fact that a district conflates these very different activities undercuts the protections available to learners. This also creates the impression that the district values district-created processes more than student privacy and learner agency.

Moving on to Student Photographs, Videos and/or Sound Recordings Opt-Out, Baltimore County Public Schools again takes an all-or-nothing approach.

BCPS opt-out - Photos, Videos, Recordings

If the parent denies such permission, the student’s picture will not be used in any BCPS publication or communication vehicle, including, but not limited to, printed materials, web sites, social media sites or the cable television channel operated, produced or maintained by BCPS’ schools or offices, nor will my child’s picture be part of a school yearbook, memory book, memory video, sports team, club or any other medium.

Social media, yearbook, childcare, and sports activities are all very different events. When schools structure permissions in a way that removes agency from parents and kids, they burn goodwill. Also, given that teachers and districts are still publishing pictures of kids online in ways that share personal information, including (on some rare occasions) passwords, parents should have some granular ability to differentiate between sharing in a yearbook and sharing on Instagram, Facebook, or Twitter. Until schools and districts consistently get this right, they have an obligation to err on the side of restraint. To state the obvious, kids don't walk through the school doors so adults can use their likeness and work on social media. Similarly, yearbooks and social media are very different things, and yearbook companies and social media companies have very different business models, and - in most cases - very different approaches to handling data.

The solution here is pretty straightforward: provide parents with a granular set of options. A parent or kid should be able to say that they want to be in the yearbook; a high school athlete should be able to say they want to be in the program or in the paper; a musician should be able to be acknowledged in a newsletter - and these options do not need to be tethered to sharing directory information, streamlined access to childcare, or indiscriminate sharing on social media. That is a reasonable request, and if a teacher, school, or district lacks the data handling and media literacy skills required to make that happen, then we have an opportunity for teachers and district staff to develop and grow professionally.

The argument we generally hear against allowing parents and students real choices over their privacy rights is that the burden would be too much for schools to handle. However, we only need to look at how parental rights are managed with regard to health curriculum to see the hollowness of that argument. In Baltimore County Public Schools - as with many schools in many districts nationwide - parents and students can opt out of individual units in the health curriculum. Districts have been managing this granular level of opt-out for years, and somehow - miraculously - the educational system has not tumbled into ruin as a result.

The main difference, of course, is that in many states parental opt-out rights are required and defined by law.

For parents: use the opt-out form provided by the World Privacy Forum to assert your rights. In an email accompanying the form, explain that you would like to see your district develop more flexible policies on opt-out and data sharing.

For teachers: if you are going to share student images and work on social media, make intentional choices about what you share, how you share, and why you share. Additionally, ask your district about more granular policies for parents and learners. While the initial change might be hard, over time the more flexible rules will make your work easier, and increase trust between you, your students, and their guardians.

For districts: get ahead of the curve and start offering more flexible options. As we have seen with health curriculum and with privacy in the last few years, state legislatures are not shy about introducing and passing legislation. Districts have an opportunity to address these concerns proactively. It would be great to see them take advantage of this opportunity.

Can 2017 Be the Year of the Feature Freeze?

6 min read

Yesterday, the Intercept published an article on a project led by Peiter and Sarah Zatko, the founders of the Cyber Independent Testing Lab. The lab has developed a testing protocol to evaluate the potential for security issues within software. A big part of the excitement (or fear) about this project is due to the founders: Peiter Zatko (aka Mudge) and Sarah have a track record of great work dating back to the 90s. The entire piece is worth a read, and it highlights some common issues that affect software development and our understanding of security.

In very general terms, the first phase of their analysis examines the potential for vulnerabilities in the code.

During this sort of examination, known as “static analysis” because it involves looking at code without executing it, the lab is not looking for specific vulnerabilities, but rather for signs that developers employed defensive coding methods

In other words, the analysis looks for indications of the habits and practices of developers who understand secure development practice. This is roughly comparable to code smell - while it's not necessarily a problem, it's often an indicator of where issues might exist. 
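As a loose illustration of the idea - and nothing like the lab's actual methodology - even a toy scan for C functions that defensive coders avoid gives this kind of "smell" signal. The function list and approach here are purely illustrative:

```python
# Toy "code smell" scan: flag calls to C functions that defensive coding
# guidelines warn against. This illustrates the concept, not a real static
# analyzer. Pass one or more C source files as arguments.
import re
import sys

RISKY_CALLS = ("strcpy", "strcat", "sprintf", "gets")  # classic unsafe functions
pattern = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

for path in sys.argv[1:]:
    with open(path, encoding="utf-8", errors="replace") as source:
        for lineno, line in enumerate(source, start=1):
            if pattern.search(line):
                print(f"{path}:{lineno}: {line.rstrip()}")
```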

Modern compilers of Linux and OS X not only add protective features, they automatically swap out bad functions in code with safer equivalent ones when available. Yet some companies still use old compilers that lack security features.

We'll return to this point later in this post, but this cannot be emphasized enough: organizations creating software need to be using a current toolkit. It takes time to update this infrastructure - and to the suits in an organization, this often feels like lost time, but organizations shortchange this time at their peril.

The lab is also looking at the number of external software libraries a program calls on and the processes it uses to call them. Such libraries make life more convenient for programmers, because they allow them to repurpose useful functions written by other coders, but they also increase the amount of potentially vulnerable code, increasing what security experts refer to as the "attack surface."

As the article highlights, third party libraries are not necessarily an issue, but any issue in a third party library can potentially be an issue within apps that use the library. To use a metaphor that is poetically but not technically accurate: let's say you're going out with your friends, and one of your friends says, "Hey - can I bring my new boyfriend Johnny?" And you say, sure, why not. But then, later that night, Johnny turns out to be a real jackass - drinking too much, not tipping, talking all the time, laughing at his own jokes.

Potentially, third party libraries are like Johnny - not necessarily a problem, but when they are, they can be very unpleasant to deal with.

The people running the evaluation lab are also clear on what their tests show, and what they don't show. 

Software vendors will no doubt object to the methods they’re using to score their code, arguing that the use of risky libraries and old compilers doesn’t mean the vendors’ programs have actual vulnerabilities. But Sarah disagrees.

"If they get a really good score, we’re not saying there are no vulnerabilities," says Sarah. But if they get a really low score, "we can guarantee that ... they’re doing so many things wrong that there are vulnerabilities [in their code]."

The potential for risk articulated here runs counter to what people want, and it's one of the reasons that many people balk at reading security analyses. People want an absolute; they want a guarantee - but vulnerabilities can exist anywhere. Secure coding practices are not new, and they are not arcane knowledge - but up until this point, many vendors have not made securing their work a priority.

However, the lede is thoroughly buried in this piece. We get this gem near the end. 

They’ve examined about 12,000 programs so far and plan to release their first reports in early 2017. They also plan to release information about their methodology and are willing to share the algorithms they use for their predictive fuzzing analysis if someone wants them.

We should have no illusions about the contents of this data set. We would likely see a small number of companies doing very well, a large number of companies in a very crowded middle, and a number of companies (looking at you, legacy enterprise vendors who insist on maintaining backwards compatibility) with pretty abysmal numbers. This is reality. Bad decisions get made in software development all the time, often for reasons that feel defensible - even logical - at the time. But over time, if this technical debt never gets paid down, these problems fester and grow.

To all the marketing people who ignored developer input in order to meet a PR-driven deadline: this is on you.

To all the salespeople who promised features and fabricated a timeline without consulting your development team: this is on you.

To all the CxOs who supported marketing and sales over the best advice of your dev team in order to "hit numbers": this is on you.

To all the developers who never said no for the right reasons, and just put your head down and delivered: this is on you as well.

We all have a level of responsibility here. But now, we need to fix it.

The piece closes with a quotation from Mudge that is arguably the subtext for many of the ongoing conversations about security:

"We’ve been begging people to give a shit about security for a decade ...[But] there’s very little incentive if they’ve already got a product to change a product."

I'm saying this partially tongue in cheek, but I'd love to see 2017 be the year of the feature freeze, where we all agree to get our acts together. Companies could give their development teams the time to pay down technical debt. People could get their privacy policies in order and up to date. Organizations could take some time to figure out business plans that aren't predicated on selling data. Consumers could get weaned off the artificial expectation that online services built through the time and talent of countless people should be free.

We can have a tech infrastructure that isn't broken. We can make better choices. If this project is part of the work that pushes us there, then full speed ahead.

 

Advertising and Rape Threats

2 min read

On the same day that Donald Trump gets a safe space carved out for him on Reddit, Jessica Valenti's five-year-old child receives a death and rape threat.

We have people claiming they can predict crime. We have companies marketing their prowess at using predictive analytics to support policing. We have law enforcement using social media monitoring to target city kids. We have schools using social media monitoring against kids. We have law enforcement arresting people who make online threats against police officers (and to be clear: online threats of violence are not okay - they *should* be investigated). Our ISPs and our mobile providers can partner to target specific ads to specific devices in a specific home.

Yet we can't do anything about online rape threats.

We have data brokers creating profiles on all of us, for just about any reason, and selling these profiles to companies that see this data as a competitive advantage. These same data brokers are pretty adept at slicing the population into specific demographics, and targeting ads to them.

Yet we can't do anything about online death threats.

Political campaigns microtarget individuals.  Learning analytics companies tout a "robot tutor" that can "read your mind."

So, what is it? Is the marketing real? Do we have the tools to target ads to individuals, to know what people are thinking, to have true, penetrating insight into what people like and dislike? Because if that's true, we have a solid toolkit to use against online threats.

Or: the marketing is all a lie, and we are actually powerless to know more about the people who are threatening to rape kids. 

One thing we should know for sure: we either have solid analytics, or we don't.