Bill has worked in education as an English and history teacher, an administrator, and a technology director. Bill initially discovered the Internet in the mid-1990's at the insistence of a student who wouldn't stop talking about it.

twitter.com/funnymonkey

github.com/billfitzgerald

drupal.org/user/19631

Daily Post - October 18, 2017

4 min read

Some of the articles and news that crossed my desk on October 18, 2017. Enjoy!

Facebook and Google Worked with Racist Campaigns, at Home and Abroad

Both Facebook and Google worked closely with an ad agency running blatantly racist ads during the 2016 campaign. Both companies helped make the targeting more precise, and provided a range of technical support.

Facebook advertising salespeople, creative advisers and technical experts competed with sales staff from Alphabet Inc.’s Google for millions in ad dollars from Secure America Now, the conservative, nonprofit advocacy group whose campaign included a mix of anti-Hillary Clinton and anti-Islam messages, the people said.

Facebook also worked with at least one campaign putting racist ads in Germany to target German voters. This is what the "neutrality" of tech looks like: racism with money behind it is always welcome. The data collection and subsequent profiling of people is a central element of how racism is spread, and how data brokers and advertising companies work together to profit.

Russia Recruited Activists to Stage Protests

The people who were recruited didn't know they were working with Russians. This is an odd corner of Russian attempts to create noise and conflict around issues related to race.

Russia’s most infamous troll farm recruited US activists to help stage protests and organize self-defense classes in black communities as part of an effort to sow divisions in US society ahead of the 2016 election and well into 2017.

As always, research your funders and contacts.

US Government Wants the Right to Access Any Data Stored Anywhere

The US Supreme Court will hear a case that looks at whether a legal court order can compel a company to hand over information, even if that information is stored outside the US.

In its appeal to the high court, meanwhile, the US government said that the US tech sector should turn over any information requested with a valid court warrant. It doesn't matter where the data is hosted, the government argues. What matters, the authorities maintain, is whether the data can be accessed from within the United States.

This has the potential to open the floodgates for personal data to be accessed regardless of where it is stored. It would also gut privacy laws outside the US (or create a legal mess that will take years to untangle, and make lawyers very rich). It would also kill the tech economy and isolate the US, because who outside the US would want to connect to a mess like that?

For $1000 US, You Can Use AdTech to Track and Identify an Individual

A research team spent $1000 with an ad network, and used that to track an individual's location via targeted ads.

An advertising-savvy spy, they've shown, can spend just a grand to track a target's location with disturbing precision, learn details about them like their demographics and what apps they have installed on their phone, or correlate that information to make even more sensitive discoveries—say, that a certain twentysomething man has a gay dating app installed on his phone and lives at a certain address, that someone sitting next to the spy at a Starbucks took a certain route after leaving the coffee shop, or that a spy's spouse has visited a particular friend's home or business.

The researchers didn't exploit any bugs in mobile ad networks. They used them as designed. So, aspiring stalkers, abusers, blackmailers, home invaders, and nosy creeps: rest easy. If you have $1000 US, AdTech has your back.
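To make the mechanics concrete, here is a hedged Python sketch of the idea behind the research, with entirely invented data and no real ad network API: an attacker buys ads targeted at a single advertising ID inside small geographic cells, and the ad network's own serve reports then reveal where, and when, the target's phone showed up.

```python
# Hedged simulation of the technique described above. All identifiers,
# cells, and timestamps are invented; no real ad network API is used.
from dataclasses import dataclass

@dataclass
class AdImpression:
    ad_id: str       # the advertising ID the campaign was targeted at
    cell: str        # the small geofenced area the ad was served in
    timestamp: int   # unix time of the impression

# What an ad network's reporting might hand back to the "advertiser":
serve_log = [
    AdImpression("target-device", "cell-home", 1508310000),
    AdImpression("target-device", "cell-coffee-shop", 1508313600),
    AdImpression("target-device", "cell-office", 1508317200),
]

# Reconstruct the target's morning from nothing but ad reporting.
for imp in sorted(serve_log, key=lambda i: i.timestamp):
    print(imp.timestamp, "->", imp.cell)
```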

Watches Designed for Helicopter Parents Have Multiple Security and Privacy Issues. Cue Surprise

In what should surprise absolutely no one, it looks like spyware designed for the hypervigilant and short-sighted parent has multiple security flaws that expose kids to focused risk.

Together with the security firm Mnemonic, the Norwegian Consumer Council tested several smartwatches for children. Our findings are alarming. We discovered significant security flaws, unreliable safety features and a lack of consumer protection.

Surveillance isn't caring. I completely understand that raising a kid can be petrifying, but when we substitute technology for communication, we create both unintended consequences and multiple other points of potential failure.

Daily Post - October 17, 2017

6 min read

I've been thinking and rethinking how I use Twitter. I've been on the service for a while, but I am increasingly uncomfortable with the service and the company. Between Twitter's blatant failures at curbing abuse, curbing the spread of misinformation, and the general privacy issues that plague corporate social media, I will be leaving Twitter at some point in the future.

However, I still have interesting conversations on Twitter. I still learn things. I still meet people I wouldn't meet otherwise. So, while I am staying on the site for now, I am also looking at things I can change to make leaving Twitter easier - which brings us to this post.

I use Twitter as a way of storing links I will read later. I'm going to change that, and store information in a space I control, in a format that works for me. I'm hoping that this will also make me a better reader and sharer - rather than skimming and being superficial, I will spend a little more time selecting what I want to retain. For now, I'm thinking I'll keep a running list of information I encounter during the day, and rather than spin it out on Twitter over the course of the day, I'll collect it into a list, with short commentary.

This isn't revolutionary - really, it's what a whole bunch of people did before Twitter, back in Ye Olde Days of the Blogge. I see myself putting out posts like this every few days. Over time, we'll see what develops.

Collection of data in the UK

In the UK, there appears to be widespread collection of data from social media accounts:

It remains unclear exactly what aspects of our communications they hold and what other types of information the government agencies are collecting, beyond the broad unspecific categories previously identified such as “biographical details”, “commercial and financial activities”, “communications”, “travel data”, and “legally privileged communications”.

It's unclear if this information is collected via publicly available information, or via some type of access granted by the company.

Old, but always timely: How to Write a Tom Friedman Article

From 2004, but, unfortunately, timeless. How to Write a Tom Friedman Article.

What’s important, however, is that we focus on what these events mean [on the ground/in the street/to the citizens themselves]. The [media/current administration] seems too caught up in [worrying about/dissecting/spinning] the macro-level situation to pay attention to the important effects on daily life. Just call it missing the [desert for the sand/fields for the wheat/battle for the bullets].

You too can write intellectually lazy hot takes. Because we need more of those.

InfoSec Pros Among Worst Offenders of Employer Snooping

Who knew? Information Security professionals often access information they shouldn't.

And it turns out that IT security executives were the worst offenders of this snooping behavior, compared to the rest of their team, according to the Dimensional Research survey commissioned by One Identity.

Executives are more likely to engage in unethical behavior than lower level employees. Shocking.

More on Harvey Weinstein

We will be hearing about Harvey Weinstein for a good long time, I suspect. The latest is that he fired a director and recast the lead in a movie because the director's choice "wasn't 'fuckable'".

“I was furious after being kicked off my film and I told them all about what happened, I told them about the harassment claims and I said here is your quote: ‘I don’t cast films according to Harvey Weinstein’s erection,’ and they just laughed,” Caton-Jones said.

And, of course, the press knew, and other people knew, and no one did anything. We shouldn't kid ourselves that the attention on Harvey Weinstein is fixing the root of the problem. Weinstein deserves everything he gets, but if you think Weinstein is unique, or that Hollywood is unique, think again. Harassment is pervasive. When women speak, we need to believe them.

More on Insecure IoT Devices

Many IoT devices use Bluetooth Low Energy to connect. Sex toys are no exception, including the occasional butt plug.

This is the final result. I paired to the BLE butt plug device without authentication or PIN from my laptop and sent the vibrate command.

I hope that we can look past the butt plug (figuratively) to see how many standard IoT implementations are hopelessly insecure.
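For anyone curious how little the attack involves, here is a hedged Python sketch using the bleak BLE library. The characteristic UUID and command byte are hypothetical stand-ins, since each device defines its own; the thing to notice is what's missing: no PIN, no pairing, no authentication of any kind.

```python
# A hedged sketch of the attack shape, using the bleak BLE library.
# The characteristic UUID and the command byte are hypothetical; the
# point is that the device never asks for a PIN or pairing.
import asyncio
from bleak import BleakClient, BleakScanner

VIBRATE_CHAR = "0000fff1-0000-1000-8000-00805f9b34fb"  # hypothetical

async def main():
    # Any advertising BLE device nearby can be discovered...
    devices = await BleakScanner.discover()
    target = devices[0].address  # assume the toy is the first result

    # ...and connected to, with no authentication required by the device.
    async with BleakClient(target) as client:
        await client.write_gatt_char(VIBRATE_CHAR, bytes([0x01]))

asyncio.run(main())
```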

No One Reads Terms and Conditions

From 2016, but still relevant.

What we did is we went to the extreme, and we included this - a firstborn clause suggesting that if you agreed to these policies that as a form of payment, you'd be giving up a first-born child. And 98 percent of the participants that took the study didn't even notice this particular clause.

I know parenting is hard, people, but seriously -- pay attention.

OpEd by a Student on Navigating White Educators

The author is a black student who has been taught by predominantly white teachers.

(s)tudents of color make up 85 percent of the population... Our teaching staff is proportionally opposite: more than 85 percent white. That racial disparity between students and staff is a problem. There are subliminal and subconscious micro aggressions, uncomfortable questions about black hair, attempts to invalidate students' experiences of racism and constant assumptions about their backgrounds.

We need to listen to students, even if it makes us uncomfortable -- or especially when it makes us feel uncomfortable.

Privacy and Tracking on State Department of Education Web Sites

Doug Levin has started what looks to be a great series on State Departments of Education and how they respect (or don't) the privacy of people who visit them.

(t)he web is not—nor will ever be—static. New technologies, tools, and services routinely offer up innovative new capabilities and personalized experiences. And, with every new digital experience that may amaze and delight website visitors, potential new threats can be introduced. While not frequently on the cutting edge of technology, school websites and information technology systems are not immune to these larger trends

This work will be coming out over the next few days/weeks - I look forward to seeing where it leads.

Google Serves Fake News Ads on Fact Checking Sites

You can't make this stuff up. Google AdWords was used to spread misinformation on sites dedicated to debunking misinformation. As usual, Google provided no information about how their system was exploited, or how much money they made from ads placed by these fraudulent sites.

Google declined to explain the specifics of how the fake news ads appeared on the fact-checking sites.

As I and others have written about, Google is complicit in this, and Google and other adtech vendors profit from misinformation.

Filter Bubbles and Privacy, and the Myth of the Privacy Setting

6 min read

When discussing information literacy, we often ignore the role of pervasive online tracking. In this post, we will lay out the connections between accessing accurate information, tracking, and privacy. We will use Twitter as an explicit example. However, while Twitter provides a convenient example, the general principles we lay out here are applicable across the web.

Major online platforms "personalize" the content we see on them. Everything from Amazon's shopping recommendations to Facebook's News Feed to our timelines on Twitter are controlled by algorithms. This "personalization" uses information that these companies have collected about us to present us with an experience that is designed to have us behave in a way that aligns with the company's interests. And we need to be clear on this: personalization is often sold as "showing people more relevant information" but that definition is incomplete. Personalization isn't done for the people using a product; it's done to further the needs of the company offering the product. To the extent that personalization shows people "more relevant information," this information furthers the goals of the company first, and the needs of users second.

Personalization requires that companies collect, store, and analyze information about us. Personalization also requires that we are compared against other people. This process begins with data collection about us -- what we read, what we click on, what we hover over, what we share, what we "like", sites we visit, our location, who we connect with, who we converse with, what we buy, what we search for, the devices we use, etc. This information is collected in many ways, but some of the more visible methods are cookies set by ad networks and social share icons. Of course, every social network (Facebook, Instagram, Twitter, Pinterest, Musical.ly, etc) collects this information from you directly when you spend time on their sites.
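To make the mechanism concrete, here is a hedged Python simulation (all names invented) of what a third-party tracker sees: one cookie ID, reported from every site that embeds the tracker's script or pixel, quietly accumulating into a browsing history.

```python
# Hedged simulation of cross-site tracking via a shared third party.
# Three "unrelated" sites embed the same ad network, so the network
# sees one cookie ID next to every page the visitor reads.
from collections import defaultdict

class AdNetwork:
    def __init__(self):
        self.profiles = defaultdict(list)  # cookie_id -> pages seen

    def tracking_pixel(self, cookie_id: str, referring_page: str):
        # Roughly what a third-party cookie plus a Referer header hand over.
        self.profiles[cookie_id].append(referring_page)

network = AdNetwork()

# One person browsing three different sites, same tracker on each:
for page in ["news.example/politics",
             "shop.example/baby-clothes",
             "health.example/symptoms"]:
    network.tracking_pixel(cookie_id="visitor-123", referring_page=page)

# The ad network's view: a single browsing history, ready for profiling.
print(network.profiles["visitor-123"])
```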

The web, flipping us the bird

When you see social sharing icons, a site is flipping you the bird: your browsing information is being widely shared with these companies and other ad brokers.
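You can see a slice of this for yourself. The sketch below, using only the Python standard library, fetches a page and lists every host it loads scripts from; any host that isn't the site you asked for is a third party that learns about your visit. It only catches scripts present in the initial HTML, so treat the output as a lower bound.

```python
# List the third-party script hosts on a page (stdlib only).
# Every host printed here can see that your browser fetched this page.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ScriptFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            host = urlparse(dict(attrs).get("src") or "").netloc
            if host:
                self.hosts.add(host)

url = "https://example.com"  # any page you want to inspect
page_host = urlparse(url).netloc

finder = ScriptFinder()
finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))

for host in sorted(finder.hosts):
    if host != page_host:
        print("third-party script host:", host)
```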

This core information collected by sites can be combined with information from other sources. Many companies explicitly claim this right in their terms of service. For example, Voxer's terms claim this right using this language:

Information We May Receive From Third Parties. We may collect information about you from other Product users, such as when a friend provides friend details or contact information, or indicates a relationship with you. If you authorize the activity, Facebook may share with us certain approved data, which may include your profile information, your image and your list of friends, their profile information and their images.

By combining information from other sources, companies can have information about us that includes our educational background, employment history, where we live, voting records, any criminal justice information from parking tickets to arrests to felonies, in addition to our browsing histories. With these datasets, companies can sort us into multiple demographics, which they can then use to compare us against other people pulled from other demographics.

In very general terms, this is how targeted advertising, content recommendation, shopping recommendation, and other forms of personalization all work. Collect a data set, then mine it for patterns and the probability that these patterns are significant and meaningful. Computers make math cheap, so this process can be repeated and refined as needed.
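As a toy illustration of that loop, here is a minimal Python sketch with invented data: collect interactions, mine co-occurrence patterns, then rank "relevant" items for a given person. Real systems are vastly more elaborate, but the shape is the same.

```python
# A toy version of the collect-and-mine loop described above.
# All of the data is invented.
from collections import Counter
from itertools import combinations

# Who clicked what (the collected data set):
clicks = {
    "alice": {"article-a", "article-b", "article-c"},
    "bob":   {"article-b", "article-c", "article-d"},
    "carol": {"article-a", "article-c"},
}

# Mine pairwise co-occurrence: items read together are "related".
co_occurrence = Counter()
for items in clicks.values():
    for a, b in combinations(sorted(items), 2):
        co_occurrence[(a, b)] += 1

def recommend(seen):
    """Rank unseen items by how often they co-occur with what was seen."""
    scores = Counter()
    for (a, b), n in co_occurrence.items():
        if a in seen and b not in seen:
            scores[b] += n
        if b in seen and a not in seen:
            scores[a] += n
    return [item for item, _ in scores.most_common()]

print(recommend(clicks["carol"]))  # ['article-b', 'article-d']
```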

However, while the algorithms can churn nearly indefinitely, they need data and interaction to continue to have relevance. In this way, algorithms can be compared to the annoying office mate with pointless gossip and an incessant need to publicly overshare: they derive value from their audience.

And we are the audience.

Twitter's "Personalization and Data" settings provice a great example of how this works. As we state earlier, while Twitter provides this example, they are not unique. The settings shown in the screenshot below highlight some of the data that is collected, and how this information is used. The screenshot also highlights how, on social media, there is no such thing as a privacy setting. What they give us is a visibility setting -- while we have minimal control over what we might see, nothing is private from the company that offers the service.

Twitter's personalization settings

From looking at this page, we can see that Twitter can collect a broad range of information that has nothing to do with the core functionality of Twitter, and everything to do with creating profiles about us. For example, why would Twitter need to know the other apps on our devices to allow us to share 140-character text snippets?

Twitter is also clear that regardless of what we see here, they will personalize information for us. If we use Twitter, we only have the option to play by their rules (to the extent that they enforce them, of course):

Twitter always uses some information, like where you signed up and your current location, to help show you more relevant content.

What this explanation leaves out, of course, is for whom the content is most relevant: the person reading it, or Twitter. Remember: their platform, their business, their needs.

But when we look at the options on this page, we also need to realize that the data they collect in the name of personalization is where our filter bubbles begin. A best-case definition of "relevant content" is "information they think we are most interested in." However, a key goal of many corporate social sites is to make it more difficult to leave. In design, dark patterns are used to get people to act against their best interest. Creating feeds of "relevant content" -- or more accurately, suppressing information according to the dictates of an algorithm -- can be understood as a dark information pattern. "Relevant content" might be what is most likely to keep us on a site, but it probably won't have much overlap with information that challenges our bias, breaks our assumptions, or broadens our world.

The fact that our personal information is used to narrow the information we encounter only adds insult to injury.

We can counter this, but it takes work. Some easier steps include:

  • Use ad blockers and javascript blockers (uBlock Origin and Privacy Badger are highly recommended as ad blockers; for javascript blockers, try ScriptSafe for Chrome and NoScript for Firefox).
  • Clear your browser cookies regularly.
  • When searching or doing other research, use Tor and/or a VPN.

These steps will help minimize the amount of data that companies can collect and use, but they don't eliminate the problem. The root of the problem lies in information asymmetry: companies know more about us than we know about them, and this gap increases over time. However, privacy and information literacy are directly related issues. The more we safeguard our personal information, the more freedom we have from filter bubbles.


Bearistotle

4 min read

In January 2017, Mattel and Microsoft announced the launch of Aristotle, a digital assistant explicitly focused on very young children. The device was marketed by Mattel, and used Microsoft's AI technology. The device was literally intended to work with children from the first weeks of their lives. 

Crying, for example, can trigger Aristotle to play a lullaby or a recorded message from the parent. Conversely, a child’s crying can also trigger nothing at all, to let the kid settle down on his own. Parents will be able to configure these behaviors via the app.

The developmental risks to a newborn from getting a recorded message in lieu of parental attention are not clear, but to state the obvious, I don't think we are at a place where we want to "disrupt" parenting.

Concerns about Aristotle mounted after the initial announcement. Many of these concerns were privacy-related, but many had nothing to do with privacy and focused on the blatant irresponsibility and lack of humanity involved in outsourcing care for a child to a plastic gadget that collected data and shuffled it off to remote storage. As recently as six days ago, Mattel talked about the product as if it was going to be released.

The following quotation cites Alex Clark, a Mattel spokesperson, in an article from September 29th.

Aristotle wasn’t designed to store or record audio or video, Clark said. No third parties will have access to any personally-identifiable information, and any data shared is entirely anonymous and fully encrypted, he said.

A few key points jump out from this fantastic piece of doublespeak.

  • First, as of six days ago, the company was defending Aristotle. This suggests that they were still considering releasing this device.
  • Second, the definition of "store" needs to be clarified. Are they saying that the device has no local storage, and it just transmits everything it collects? This statement is empty. A statement with actual use would define what this device transmits, what it stores, and who can access it. But, of course, he is just a spokesperson. Truth costs extra.
  • Third, the last sentence makes two astounding claims: third parties can't access personally identifiable information, and any data shared is "entirely anonymous and fully encrypted." To start, it's refreshing to hear explicit confirmation that Mattel was planning on sharing data with third parties. However, their claims about not sharing personal information are a red herring. Without clarity on how they are anonymizing information, what the prohibitions are on attempts to re-identify the data set, why they are sharing data, and with whom they are sharing data, they aren't offering anything reassuring here. Finally, claiming that data are "fully encrypted" is meaningless: encrypted in transit? At rest? Is encryption in place between storage devices inside their network? While strong encryption is a necessary starting point, encryption isn't a blanket. There are multiple layers to using encryption to protect information, and a robust security program focuses on human and technical steps. Encryption is a piece of this, but only a piece. The short sketch after this list shows why "encrypted," by itself, guarantees very little.
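Here is the sketch promised above: a few lines of Python using the cryptography library, with invented data. Data "fully encrypted" at rest with a key the vendor holds is one function call away from plaintext for the vendor, and for anyone who obtains the key.

```python
# Why "fully encrypted" is not, by itself, a privacy guarantee.
# The data below is invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # held by the vendor, not the parent
vault = Fernet(key)

stored = vault.encrypt(b"audio transcript: baby crying, 3:12 AM")
# "Encrypted at rest" -- true. But whoever controls the key reads it
# back with a single call:
print(vault.decrypt(stored))
```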

Yesterday, Mattel announced that they were cancelling Aristotle. This is the right decision, but we shouldn't confuse this with good news. It was only two years ago that Mattel brought Spy Barbie -- complete with multiple security issues -- into the world.

People of all ages are currently exposed via devices that have sub-par privacy and security practices, and privacy policies that do not respect the people buying and using the products. Everything from Amazon's Echo and Alexa products, to Google Home and Family products, to Siri, to Cortana, to Google's voice search on phones, to "Smart" TVs, to connected toys, to online baby monitors -- all of these devices have potential security issues, and opaque privacy terms. In most cases, people using these products have no idea about what information is collected, when it is collected, how long it is stored, who can access it, and/or how it can be used over time. When adults use these devices around kids, we send the clear message that this invisible and constant surveillance should not be questioned because it provides a convenience.

The mistake Mattel made this time was introducing a utilitarian object. If they had wrapped Aristotle in a toy, they'd be home free.

My prediction: in 2018, Bearistotle will be the must-have toy of the season -- the friendliest, most helpful bear any child will ever need. It will retail for the bargain price of $499.99, and if you enable geotagging it will create a digital portfolio of childhood highlights to use in preschool applications.

Twitter and Facebook No Longer Understand Twitter and Facebook

2 min read

Twitter thinks that ads are a problem on Twitter. 

Twitter - your ads might not be good, but I'm going to lay this out for you: the key problems with your platform are misinformation and abuse. You are equally bad at dealing with both, and your most recent response is deficient in multiple ways. Facebook is the platform with an advertising problem, alongside misinformation and abuse.

As I have noted before in this piece co-authored with Kris Shaffer, Twitter is either misrepresenting the effectiveness of their ad network, or they are misrepresenting their ability to detect bots.

Facebook's ineptitude is summed up most succinctly in this quotation from Mark Zuckerberg. Zeynep Tufekci has a great thread about it, but Zuckerberg's own words provide insight into how top leadership within tech misunderstands the situation it has created.

Zuckerberg and both sides

To his credit, Zuckerberg managed to pack a large amount of misunderstanding into a short message, so he deserves kudos for concision. But Zuckerberg misses the point entirely: this is not about ideas and content. This is about power and manipulation. Zuckerberg was manipulated by Trump into responding to a baseless charge, and Zuckerberg fell back onto the "both sides" fallacy cited by, among other people, Trump himself when Trump was justifying white supremacists and neo-Nazis.

Our tech industry has created platforms that are easy to game. For all the talk of disruptive innovation, how tech entrepreneurs are the smartest people in the room, etc, etc, we are now in a situation where billions of dollars have been spent creating platforms that the creators neither control nor understand. Given the outsize role these platforms play in delivering information and shaping public discourse, that should make us all very nervous.

PS: Twitter: want to identify some bots? Look at the networks pushing the "Zuckerberg/Podesta" and "Zuckerberg/Russia" stories, right now. Seriously, step up your game.

Privacy and Security Exercise

2 min read

Do this exercise with your phone, tablet, and/or any computer you use regularly.

Imagine that someone has accessed your device and can log in and access all information on the device.

  • If they were a thief, what information could they access about you?
  • If they were a blackmailer, what information could they access about you?
  • What information could they access about your friends, family, or professional contacts?
  • If you work as a teacher, counselor, consultant, or other type of advisor: what information could someone glean about the people you work with?

As you do this exercise, be sure to look at all apps (on a phone or tablet), online accounts accessible via a web browser, address books, and ways that any of this information could be cross referenced or combined. For example, what information could be accessed about people you "know" via social media accounts?

  • What steps can you take to protect this information?
  • Assuming that someone you know has comparable information about you, what steps would you want them to take?

Are there differences between the steps you could take, and the steps you would want someone else to take? What accounts for those differences?

When it comes to protecting information, we are connected. At some level, we are as private and secure as our least private and secure friend.

Protecting Ourselves From the Equifax Data Breach, and Data Brokers in General

7 min read

On September 7, news broke that Equifax's security failed and that 143 million people had their data accessed in a breach. While the breach was discovered in July, people affected by the breach were not notified until September. The information that was accessed included contact information, birth dates, Social Security numbers, and, in some cases, driver's license numbers, credit card numbers, and credit dispute information. As this piece is being written, it's not clear if we have been told the full range of personal information that was accessed.

Equifax is one of three large data brokers in the US that, in addition to making money by collecting and selling information about all of us, also issue credit reports that are considered authoritative. The other two companies are Transunion and Experian. While Equifax is getting the lion's share of attention at present, we need to remember that none of the credit verification companies have stellar records, and that any of them could have comparable sensitive information breached.

A short overview: Transunion, Equifax, and Experian provide a range of resources around credit verification and risk analysis for industries ranging from rental markets to insurance to finance. This article from the New York Times gives an overview of the various services offered by data brokers, and Frank Pasquale's Black Box Society remains one of the most informative books on this topic.

Recently, these data brokers were part of the larger story of how the Trump campaign used data - and Facebook ads - to suppress the vote in selected districts and spread misinformation.

Trump’s Project Alamo database was also fed vast quantities of external data, including voter registration records, gun ownership records, credit card purchase histories, and internet account identities. The Trump campaign purchased this data from certified Facebook marketing partners Experian PLC, Datalogix, Epsilon, and Acxiom Corporation. (Read here for instructions on how to remove your information from the databases of these consumer data brokers.)

In June 2017, the Republican National Committee was informed that it had leaked voting data on 200 million Americans. Given that their data strategy incorporated data from Experian, it's possible that this earlier breach leaked a subset of the same data as the Equifax breach. https://www.upguard.com/breaches/the-rnc-files

Of course, as a side note, we can't let Republicans have all the fun. In 2015, NationBuilder leaked voting details on 191 million Americans. https://www.databreaches.net/191-million-voters-personal-info-exposed-by-misconfigured-database/

But What Can I Do About The Equifax Breach?

In response to the Equifax breach, there are some immediate things we can do, and a range of secondary things. None of these suggestions are revolutionary, and all of them are a smaller part of good personal data hygiene.

  1. Get credit monitoring in place. While Equifax, Transunion, and Experian all offer credit monitoring services, I do not recommend giving any of these companies money to perform this service. For example, LastPass - the password manager - offers credit monitoring as an add-on service.
  2. Consider freezing your credit. If you are planning a major purchase where you will need credit (buying a car, getting a mortgage, etc), you will need to un-freeze your credit to allow the transaction to happen, but freezing your credit will stop most attempts at credit fraud.
  3. Get a copy of your credit report, and review it for accuracy. The Consumer Finance Protection Bureau has good resources for this.
  4. File an Identity Theft Affidavit (pdf download) with the IRS. This can help prevent someone filing a false tax return in your name.
  5. Opt out of data brokers. Stop Data Mining has a good list. There are also services that do this for a fee, but before giving any information or money to a service, research its privacy and business practices.

Secondary responses include standard practices to protect our personal privacy and security.

  • In the aftermath of a large breach, be wary of emails coming in "alerting" you to details regarding fraud. The days and weeks after a breach are fertile opportunities for phishing, so don't click on links or download files. Check links using the options outlined in this post.
  • Change old passwords, and use a password manager to protect your passwords. This is good practice in general, but especially useful if you have any passwords that incorporate personal information as part of the password.
  • Turn on two-factor authentication. If you want to go full on, use something like a Yubikey. If you are just getting started, use other methods, with the most popular being a text message to your phone. The short sketch after this list shows how app-based one-time codes are generated.
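As promised above, here is a minimal sketch of how app-based one-time codes (TOTP) work, in Python with the pyotp library. The secret is generated on the spot for illustration; a real service issues you one (usually via a QR code) when you enable two-factor authentication. Your phone and the server each derive the same short-lived code from that shared secret and the current time.

```python
# A minimal TOTP sketch using pyotp. The secret here is generated
# locally for illustration; a real service would issue it to you.
import pyotp

secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()               # the 6-digit code an authenticator app shows
print(code, totp.verify(code))  # the server runs the same math to check it
```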

As part of a longer term strategy, define what you want to protect, and the steps you are willing to take to protect it. The technical term for this is threat modeling. This process will help you set realistic and achievable goals for protecting your privacy in a way that works for you. For an overview of steps you can take to assess and mitigate risk, review the information in these posts.

Unfortunately, there is no silver bullet to protect us from overcollection of our information by organizations, and the sloppy stewardship of that data. However, taking steps to minimize when we share information, and what we share, can reduce our exposure to risk.

Think about it like handwashing. We all know that, regardless of how often we wash our hands, we will catch a cold at some point. That doesn't mean we stop washing our hands (besides being unhealthy, that's just gross). Sound data practices should be understood in the same way - we take reasonable steps to mitigate risks, with adequate precautions to protect ourselves when bad things happen.

Breaches Are Only Part of the Risk

We tend to get concerned about how our data is used when we learn that it has been breached, but these concerns only address part of the problem. The reason Equifax could compromise information about 143 million of us is that it has information about more than 143 million of us. Equifax, Transunion, Experian, and others have been profiting from our information for years. Their business is selling the details of our lives to companies and people who want to exploit those details. We are not asked if these transactions are okay, and we are not told when they happen.

Equifax - data breaches are a risk. Really. You don't say.

Image Source: Equifax web site

Moreover, because data brokers are in the business of selling our data to third parties, these data brokers increase our risk of being exposed to fraud and identity theft. It's worth remembering here that, as linked above, at least one data broker sold consumer information directly to an identity thief. When data brokers both sell our information, and sell services that claim to monitor our credit, the data brokers are actually monitoring for misuse of the data that they profit from selling. In this way, data brokers resemble a hedge fund, with the capacity to profit no matter what happens.

The Equifax breach illustrates this perfectly. After Equifax learned of the breach, several company executives sold their Equifax stock. The stock sales occurred over a month before people affected by the breach were notified.

Breaches draw our attention to the risks from unauthorized uses of our data. However, we need to stop kidding ourselves: authorized uses of our data expose us to varying degrees of risk every day. We are almost never informed when our data is used or sold, and data brokers operate with few obligations towards the people whose data they control. Breaches are terrible, but the mechanics of breach disclosure are one of the few times that data brokers are required to be honest with us about the information they have about us.

Less Empowering, More Silence

1 min read

In the quest for "authentic" learning, we often wade into and through conversations about student voice, and how to empower it.

A couple notes and observations on student voice: 

  • Adults don't need to empower student voice. Students have it; whether or not they choose to share it in your presence or your class is a different question.
  • If you're serious about student voice, you need to be comfortable hearing things that are inconvenient, and are difficult to hear.
  • Student voice also incorporates the notion of student presence. What are you doing in your interpersonal communication and in your classroom setup to ensure that the presence of every student is implicitly and explicitly valued?
  • Sometimes respecting student voice means respecting the rights of students not to speak.
  • Student voice requires that the adults respect where students come from, and who they are.
  • If a key element of "student voice" requires sharing student work, ask yourself who the sharing benefits most. Appropriating student words is not the same as student voice.

Getting comfortable with student voice means recognizing the need for adult silence.

The Google Anti-Diversity Screed

4 min read

Last night, a screed written by a Google employee that questions the value and legitimacy of diversity work was made public. It had already been shared widely throughout Google. The Google anti-diversity screed is not remarkable for its originality or its style. It rehashes misinformation that would feel right at home in an MRA discussion board with the stylistic flourish of a 10th grader with a good vocabulary.

However, this piece didn't come from a high school sophomore or an MRA discussion board: it came from within Google. Given Google's role in how we find information, which in turn shapes reputation and, in some cases, business competition, opinions held within Google can scale. Google also collects and stores huge amounts of information about most of us on the internet through their advertising and tracking business. Given the amount of information they collect, and the opacity with which they use it, the opinions of people within Google matter.

Google has had issues with clear bias in their algorithms. What does it mean that when I go to Google and search for a baby (and I searched as an anonymous user, connected via a VPN, both with and without Tor) I am shown results that are almost exclusively of white children?

Google image search results for "baby": VPN only

Google image search results for "baby": VPN with Tor

When people within Google speak about diversity, what they say matters. Google is an enormous company, and we have no idea where the author works within this larger structure, nor how widely these ideas are shared within the organization. It's also worth remembering the effect of the heckler's veto, where a small minority can squelch progress.

Ideas don't spring fully formed from a vacuum. When ideas make it into the light of day -- especially in the form of a multi-page screed -- it's a sign that the author has been thinking them over for a while, sharing them with peers, and/or creating drafts. All of these things take time. Now is also a good time to note that if these ideas were shared among peers before making it into written form, they were likely given a warm initial reception.

It's also worth noting that the piece does not represent Google's corporate policy. However, the piece does provide some interesting context for Google's ongoing failures to improve the diversity of its workforce. The most enlightened corporate policy in the world will fail without the support of the workforce. Given that the perspectives described in the Google anti-diversity screed also read like a laundry list of the bias that women in tech continue to face, it raises the question of how deeply Google's corporate policy has been embraced throughout the organization.

I'd also be curious about how educators who rely on Google's services are reacting to this news. Up to this point, I haven't heard anything, but given Google's increasingly large role in shaping what happens in the classroom, it would be great to hear educator perspectives on this. This also brings to mind the challenge faced by educators when their colleagues voice opinions about kids and families that demonstrate bias. 

Silence isn't an option, and the answers aren't easy, but we can start to have a better conversation when we call out that disagreeing with people who espouse gender bias or racial bias is necessary. We aren't "silencing" people when we disagree with hateful and misinformed opinions. We're talking; ironically, many of our free speech advocates have a hard time with that.

Update, August 7: Based on reporting at Motherboard, there is at least some support within Google for the author of the anti-diversity piece.

This piece, written by ex-Google employee Yonatan Zunger, provides some excellent insight from an insider's perspective. 

Thirty Seconds

3 min read

In my years working in and around education, I have heard a lot of arguments about how to "reach" teachers in order to provide them information. A lot of these arguments have the stench of SEO, and quickly devolve into keyword placement, catchy titles, finding the right post length, using pictures, using video, and making sure to embed current jargon. At some point in this pitch, the question of time gets raised. Teachers are busy, they will say. They need to make a decision in [X seconds] or [Y minutes]. Any longer than that and we've lost our chance.

And when I hear these arguments, I'm always at a loss on how to proceed. Teachers are busy, but teachers are also caring, informed professionals. Far too frequently, when I hear people talk about "reaching" people, or how to make pages "sticky," I hear the language of trickery. It's the language used when -- consciously or unconsciously -- people view attention as something to be gamed, not earned -- as something to be taken, not offered. It's the language of people who lack a thorough confidence in what they offer, and feel their first and best recourse is to resort to gimmickry to keep people engaged.

And when I ask questions about how they are working to improve their information, talking with the people they want to reach, making room to elevate voices within their readership, or what their unique perspective on a specific issue might be, it often feels like I'm addressing a native English speaker in Greek. When I suggest spending less time and money on the frills that adorn a piece and more time figuring out how a specific piece offers something new or unique, the conversations generally grind to a halt.

And that's too bad, because if you write well, and write with a purpose, and have an actual vision that makes sense, people will read. If you want to make sure that you have an edge in search, encrypt your site, and make sure it uses standards-compliant markup. But assuming that your best ideas need to be accessible in under [X seconds/Y minutes] patronizes the people who might have a deep interest in your posts. It also encourages unexamined oversimplifications, which leads to sloppy thought. There are some decisions that shouldn't be made in under 30 seconds, or under 2 minutes. And while there is a balance that needs to be struck between accessibility and depth, the content should drive where that line is drawn. I'd argue we create more useful educational content when we err on the side of assuming an intelligent reader.

Thinking is okay. Acknowledging that aspects of the world are complex, and don't fit into easily consumed chunks, is a part of how we "reach" people. We need to keep the simple things simple, and we need to explain the complex things well. Attempting to take shortcuts through intellectual complexity is another facet of technology as solutionism. The only people who win are the folks selling shortcuts -- and they have generally cashed their checks by the time the rest of us are cleaning up their messes.