Bearistotle

4 min read

In January 2017, Mattel and Microsoft announced the launch of Aristotle, a digital assistant explicitly focused on very young children. Mattel marketed the device, which was built on Microsoft's AI technology and was literally intended to work with children from the first weeks of their lives.

Crying, for example, can trigger Aristotle to play a lullaby or a recorded message from the parent. Conversely, a child's crying can also trigger nothing at all, to let the kid settle down on their own. Parents would be able to configure these behaviors via the app.

The developmental risks to a newborn child from receiving a recorded message in lieu of parental attention are not well understood, but I don't think we are at a place where we want to "disrupt" parenting.

Concerns about Aristotle mounted after the initial announcement. Many of these concerns were privacy-related, but many had nothing to do with privacy and focused on the blatant irresponsibility and lack of humanity involved in outsourcing care for a child to a plastic gadget that collected data and shuffled it off to remote storage. As recently as six days ago, Mattel talked about the product as if it were going to be released.

The following quotation, from an article published September 29th, summarizes statements by Alex Clark, a Mattel spokesperson.

Aristotle wasn’t designed to store or record audio or video, Clark said. No third parties will have access to any personally-identifiable information, and any data shared is entirely anonymous and fully encrypted, he said.

A few key points jump out from this fantastic piece of doublespeak.

  • First, as of six days ago, the company was defending Aristotle. This suggests that they were still considering releasing this device.
  • Second, the definition of "store" needs to be clarified. Are they saying that the device has no local storage, and that it just transmits everything it collects? This statement is empty. A statement with actual use would define what this device transmits, what it stores, and who can access it. But, of course, he is just a spokesperson. Truth costs extra.
  • Third, the last sentence makes two astounding claims: third parties can't access personally identifiable information, and any data shared is "entirely anonymous and fully encrypted." To start, it's refreshing to hear explicit confirmation that Mattel was planning on sharing data with third parties. However, their claims about not sharing personal information are a red herring. Without clarity on how they are anonymizing information, what the prohibitions are on attempts to re-identify the data set, why they are sharing data, and with whom they are sharing data, they aren't offering anything reassuring here. Finally, claiming that data are "fully encrypted" is meaningless: encrypted in transit? At rest? Is encryption in place between storage devices inside their network? While strong encryption is a necessary starting point, encryption isn't a blanket. There are multiple layers to using encryption to protect information, and a robust security program focuses on human and technical steps. Encryption is a piece of this, but only a piece.

Yesterday, Mattel announced that they were cancelling Aristotle. This is the right decision, but we shouldn't confuse this with good news. It was only two years ago that Mattel brought Spy Barbie -- complete with multiple security issues -- into the world.

People of all ages are currently exposed via devices that have sub-par privacy and security practices, and privacy policies that do not respect the people buying and using the products. Everything from Amazon's Echo and Alexa products, to Google Home and Family products, to Siri, to Cortana, to Google's voice search on phones, to "Smart" TVs, to connected toys, to online baby monitors -- all of these devices have potential security issues, and opaque privacy terms. In most cases, people using these products have no idea about what information is collected, when it is collected, how long it is stored, who can access it, and/or how it can be used over time. When adults use these devices around kids, we send the clear message that this invisible and constant surveillance should not be questioned because it provides a convenience.

The mistake Mattel made this time was introducing a utilitarian object. If they had wrapped Aristotle in a toy, they'd be home free.

My prediction: in 2018, Bearistotle will be the must-have toy of the season -- the friendliest, most helpful bear any child will ever need. It will retail for the bargain price of $499.99, and if you enable geotagging it will create a digital portfolio of childhood highlights to use in preschool applications.

Twitter and Facebook No Longer Understand Twitter and Facebook

2 min read

Twitter thinks that ads are a problem on Twitter. 

Twitter: your ads might not be good, but let me lay this out for you: the key problems with your platform are misinformation and abuse. You are equally bad at dealing with both, and your most recent response is deficient in multiple ways. Facebook is the platform with problems with advertising, as well as misinformation and abuse.

As I have noted before in this piece co-authored with Kris Shaffer, Twitter is either misrepresenting the effectiveness of their ad network, or they are misrepresenting their ability to detect bots.

Facebook's ineptitude is summed up most succinctly in this quotation from Mark Zuckerberg. Zeynep Tufekci has a great thread about it, but Zuckerberg's own words provide insight into how top leadership within tech misunderstands the situation they have created.

Zuckerberg and both sides

To his credit, Zuckerberg managed to pack a large amount of misunderstanding into a short message, so he deserves kudos for concision. But Zuckerberg misses the point entirely: this is not about ideas and content. This is about power and manipulation. Zuckerberg was manipulated by Trump into responding to a baseless charge, and Zuckerberg fell back onto the "both sides" fallacy cited by, among other people, Trump himself when Trump was justifying white supremacists and neo-Nazis.

Our tech industry has created platforms that are easy to game. For all the talk of disruptive innovation, how tech entrepreneurs are the smartest people in the room, etc, etc, we are now in a situation where billions of dollars have been spent creating platforms that the creators neither control nor understand. Given the outsize role these platforms play in delivering information and shaping public discourse, that should make us all very nervous.

PS: Twitter: want to identify some bots? Look at the networks pushing the "Zuckerberg/Podesta" and "Zuckerberg/Russia" stories, right now. Seriously, step up your game.

Privacy and Security Exercise

2 min read

Do this exercise with your phone, tablet, and/or any computer you use regularly.

Imagine that someone has accessed your device and can log in and access all information on the device.

  • If they were a thief, what information could they access about you?
  • If they were a blackmailer, what information could they access about you?
  • What information could they access about your friends, family, or professional contacts?
  • If you work as a teacher, counselor, consultant, or other type of advisor: what information could someone glean about the people you work with?

As you do this exercise, be sure to look at all apps (on a phone or tablet), online accounts accessible via a web browser, address books, and ways that any of this information could be cross referenced or combined. For example, what information could be accessed about people you "know" via social media accounts?

  • What steps can you take to protect this information?
  • Assuming that someone you know has comparable information about you, what steps would you want them to take?

Are there differences between the steps you could take, and the steps you would want someone else to take? What accounts for those differences?

When it comes to protecting information, we are connected. At some level, we are as private and secure as our least private and secure friend.

Protecting Ourselves From the Equifax Data Breach, and Data Brokers in General

7 min read

On September 7, news broke that Equifax's security failed and that 143 million people had their data accessed in a breach. While the breach was discovered in July, people affected by the breach were not notified until September. The information that was accessed included contact information, birth dates, Social Security numbers, and, in some cases, driver's license numbers, credit card numbers, and credit dispute information. As this piece is being written, it's not clear if we have been told the full range of personal information that was accessed.

Equifax is one of three large data brokers in the US that, in addition to making money by collecting and selling information about all of us, also issue credit reports that are considered authoritative. The other two companies are TransUnion and Experian. While Equifax is getting the lion's share of attention at present, we need to remember that none of the credit verification companies have stellar records, and that any of them could have comparably sensitive information breached.

A short overview: TransUnion, Equifax, and Experian provide a range of credit verification and risk analysis services for industries including rental markets, insurance, and finance. This article from the New York Times gives an overview of the various services offered by data brokers, and Frank Pasquale's Black Box Society remains one of the most informative books on this topic.

Recently, these data brokers were part of the larger story of how the Trump campaign used data - and Facebook ads - to suppress the vote in selected districts and spread misinformation.

Trump’s Project Alamo database was also fed vast quantities of external data, including voter registration records, gun ownership records, credit card purchase histories, and internet account identities. The Trump campaign purchased this data from certified Facebook marketing partners Experian PLC, Datalogix, Epsilon, and Acxiom Corporation. (Read here for instructions on how to remove your information from the databases of these consumer data brokers.)

In June 2017, the Republican National Committee was informed that voter data on nearly 200 million Americans had been exposed. Given that their data strategy incorporated data from Experian, it's possible that this earlier breach leaked a subset of the same data as the Equifax breach. https://www.upguard.com/breaches/the-rnc-files

Of course, as a side note, we can't let Republicans have all the fun. In 2015, NationBuilder leaked voting details on 191 million Americans. https://www.databreaches.net/191-million-voters-personal-info-exposed-by-misconfigured-database/

But What Can I Do About The Equifax Breach?

In response to the Equifax breach, there are some immediate things we can do, and a range of secondary things. None of these suggestions are revolutionary, and all of them are a smaller part of good personal data hygiene.

  1. Get credit monitoring in place. While Equifax, TransUnion, and Experian all offer credit monitoring services, I do not recommend giving any of these companies money to perform this service. There are alternatives: for example, LastPass - the password manager - offers credit monitoring as an add-on service.
  2. Consider freezing your credit. If you are planning a major purchase where you will need credit (buying a car, getting a mortgage, etc), you will need to un-freeze your credit to allow the transaction to happen, but freezing your credit will stop most attempts at credit fraud.
  3. Get a copy of your credit report, and review it for accuracy. The Consumer Finance Protection Bureau has good resources for this.
  4. File an Identity Theft Affidavit (pdf download) with the IRS. This can help prevent someone filing a false tax return in your name.
  5. Opt out of data brokers. Stop Data Mining has a good list. There are also services that do this for a fee, but before giving any information or money to a service, research its privacy and business practices.

Secondary responses include standard practices to protect our personal privacy and security.

  • In the aftermath of a large breach, be wary of emails "alerting" you to details regarding fraud. The days and weeks after a breach are fertile ground for phishing, so don't click on links or download files. Check links using the options outlined in this post.
  • Change old passwords, and use a password manager to protect your passwords. This is good practice in general, but especially useful if you have any passwords that incorporate personal information as part of the password.
  • Turn on two-factor authentication. If you want to go full on, use something like a YubiKey. If you are just getting started, use other methods, the most popular being a text message to your phone.
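On that last point, it helps to see how little magic is involved: the six-digit codes an authenticator app produces are defined by RFC 6238 (TOTP), which is just an HMAC over a time-step counter, building on RFC 4226 (HOTP). A minimal sketch in Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 publishes test vectors for this secret; counter 0 yields "755224".
print(hotp(b"12345678901234567890", 0))  # → 755224
```

The `755224` value is the published RFC 4226 test vector for counter 0, which makes implementations like this easy to verify against the spec.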

As part of a longer term strategy, define what you want to protect, and the steps you are willing to take to protect it. The technical term for this is threat modeling. This process will help you set realistic and achievable goals for protecting your privacy in a way that works for you. For an overview of steps you can take to assess and mitigate risk, review the information in these posts.
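A threat model doesn't require special tooling; even a plain list of assets, threats, and mitigations is a useful start. A minimal sketch in Python (the entries are illustrative assumptions, not recommendations specific to this post):

```python
# A personal threat model as plain data: what you protect, from what, and how.
# The entries below are illustrative examples only.
threat_model = [
    {"asset": "email account", "threat": "phishing / password reuse",
     "mitigation": "unique password plus two-factor authentication"},
    {"asset": "credit history", "threat": "identity theft after a broker breach",
     "mitigation": "credit freeze plus monitoring"},
    {"asset": "reading and browsing habits", "threat": "ad-network tracking",
     "mitigation": "tracker blocking; limit logged-in browsing"},
]

def summarize(model):
    """Render each entry as 'asset: threat -> mitigation' for quick review."""
    return [f"{e['asset']}: {e['threat']} -> {e['mitigation']}" for e in model]

for line in summarize(threat_model):
    print(line)
```

Writing the model down, in any format, forces the prioritization described above: you protect what you actually listed, to the degree you decided was realistic.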

Unfortunately, there is no silver bullet to protect us from overcollection of our information by organizations, and the sloppy stewardship of that data. However, taking steps to minimize when we share information, and what we share, can reduce our exposure to risk.

Think about it like handwashing. We all know that, regardless of how often we wash our hands, we will catch a cold at some point. That doesn't mean we stop washing our hands (besides being unhealthy, that's just gross). Sound data practices should be understood in the same way - we take reasonable steps to mitigate risks, with adequate precautions to protect ourselves when bad things happen.

Breaches Are Only Part of the Risk

We tend to get concerned about how our data is used when we learn that it has been breached, but these concerns only address part of the problem. The reason Equifax could compromise information about 143 million of us is because it has information about more than 143 million of us. Equifax, TransUnion, Experian, and others have been profiting from our information for years. Their business is selling the details of our lives to companies and people who want to exploit those details. We are not asked if these transactions are okay, and we are not told when they happen.

Equifax - data breaches are a risk. Really. You don't say.

Image Source: Equifax web site

Moreover, because data brokers are in the business of selling our data to third parties, these data brokers increase our risk of being exposed to fraud and identity theft. It's worth remembering here that, as linked above, at least one data broker sold consumer information directly to an identity thief. When data brokers both sell our information, and sell services that claim to monitor our credit, the data brokers are actually monitoring for misuse of the data that they profit from selling. In this way, data brokers resemble a hedge fund, with the capacity to profit no matter what happens.

The Equifax breach illustrates this perfectly. After Equifax learned of the breach, several company executives sold their Equifax stock. The stock sales occurred over a month before people affected by the breach were notified.

Breaches draw our attention to the risks from unauthorized uses of our data. However, we need to stop kidding ourselves: authorized uses of our data expose us to varying degrees of risk every day. We are almost never informed when our data is used or sold, and data brokers operate with few obligations towards the people whose data they control. Breaches are terrible, but the mechanics of breach disclosure are one of the few times that data brokers are required to be honest with us about the information they have about us.

Less Empowering, More Silence

1 min read

In the quest for "authentic" learning, we often wade into and through conversations about student voice, and how to empower it.

A couple notes and observations on student voice: 

  • Adults don't need to empower student voice. Students have it; whether or not they choose to share it in your presence or your class is a different question.
  • If you're serious about student voice, you need to be comfortable hearing things that are inconvenient, and are difficult to hear.
  • Student voice also incorporates the notion of student presence. What are you doing in your interpersonal communication and in your classroom setup to ensure that the presence of every student is implicitly and explicitly valued?
  • Sometimes respecting student voice means respecting the rights of students not to speak.
  • Student voice requires that the adults respect where students come from, and who they are.
  • If a key element of "student voice" requires sharing student work, ask yourself who the sharing benefits most. Appropriating student words is not the same as student voice.

Getting comfortable with student voice means recognizing the need for adult silence.

The Google Anti-Diversity Screed

4 min read

Last night, a screed written by a Google employee that questions the value and legitimacy of diversity work was made public. It had already been shared widely throughout Google. The Google anti-diversity screed is not remarkable for its originality or its style. It rehashes misinformation that would feel right at home in an MRA discussion board with the stylistic flourish of a 10th grader with a good vocabulary.

However, this piece didn't come from a high school sophomore or an MRA discussion: it came from within Google. Given Google's role in how we find information, which in turn shapes reputation and, in some cases, business competition, opinions held within Google can scale. Google also collects and stores huge amounts of information about most of us on the internet through their advertising and tracking business. Given the amount of information they collect, and the opacity with which they use it, the opinions of people within Google matter.

Google has had issues with clear bias in their algorithms. What does it mean that when I go to Google and search for a baby (and I searched as an anonymous user, logged in via a VPN, and both with and without Tor) I am shown results that are almost exclusively of white children?

vpn search of babies

vpn only

vpn with Tor

When people within Google speak about diversity, what they say matters. Google is an enormous company, and we have no idea where the author works within this larger structure; we also don't know how widely these ideas are shared within the organization. It's also worth remembering the effect of the heckler's veto, where a small minority can squelch progress.

Ideas don't spring fully formed from a vacuum. When ideas make it into the light of day -- especially in the form of a multi-page screed -- it's a sign that the author has been thinking them over for a while, sharing them with peers, and/or creating drafts. All of these things take time. Now is also a good time to note that if these ideas were shared among peers before making it into written form, they were likely given a warm initial reception.

It's also worth noting that the piece does not represent Google's corporate policy. However, the piece does provide some interesting context for Google's ongoing failures to improve the diversity of its workforce. The most enlightened corporate policy in the world will fail without the support of the workforce. Given that the perspectives described in the Google anti-diversity screed also read like a laundry list of the bias that women in tech continue to face, it raises the question of how deeply Google's corporate policy has been embraced throughout the organization.

I'd also be curious about how educators who rely on Google's services are reacting to this news. Up to this point, I haven't heard anything, but given Google's increasingly large role in shaping what happens in the classroom, it would be great to hear educator perspectives on this. This also brings to mind the challenge faced by educators when their colleagues voice opinions about kids and families that demonstrate bias. 

Silence isn't an option, and the answers aren't easy, but we can start to have a better conversation when we call out that disagreeing with people who espouse gender bias or racial bias is necessary. We aren't "silencing" people when we disagree with hateful and misinformed opinions. We're talking; ironically, many of our free speech advocates have a hard time with that.

Update, August 7: Based on reporting at Motherboard, there is at least some support within Google for the author of the anti-diversity piece.

This piece, written by ex-Google employee Yonatan Zunger, provides some excellent insight from an insider's perspective. 

Thirty Seconds

3 min read

In my years working in and around education, I have heard a lot of arguments about how to "reach" teachers in order to provide them information. A lot of these arguments have the stench of SEO optimization, and quickly devolve into keyword placement, catchy titles, finding the right post length, using pictures, using video, and making sure to embed current jargon. At some point in this screed, the question of time gets raised. Teachers are busy, they will say. They need to make a decision in [X seconds] or [Y minutes]. Any longer than that and we've lost our chance.

And when I hear these arguments, I'm always at a loss on how to proceed. Teachers are busy, but teachers are also caring, informed professionals. Far too frequently, when I hear people talk about "reaching" people, or how to make pages "sticky," I hear the language of trickery. It's the language used when -- consciously or unconsciously -- people view attention as something to be gamed, not earned -- as something to be taken, not offered. It's the language of people who lack a thorough confidence in what they offer, and feel their first and best recourse is to resort to gimmickry to keep people engaged.

And when I ask questions about how they are working to improve their information, talking with the people they want to reach, or making room to elevate voices within their readership -- or what their unique perspective on a specific issue might be -- it often feels like I'm addressing a native English speaker in Greek. When I suggest spending less time and money on the frills that adorn a piece and more time figuring out how a specific piece offers something new or unique, the conversations generally grind to a halt.

And that's too bad, because if you write well, and write with a purpose, and have an actual vision that makes sense, people will read. If you want to make sure that you have an edge in search, encrypt your site, and make sure it uses standards-compliant markup. But assuming that your best ideas need to be accessible in under [X seconds/Y minutes] patronizes the people who might have a deep interest in your posts. It also encourages unexamined oversimplifications, which leads to sloppy thought. There are some decisions that shouldn't be made in under 30 seconds, or under 2 minutes. And while there is a balance that needs to be struck between accessibility and depth, the content should drive where that line is drawn. I'd argue we create more useful educational content when we err on the side of an intelligent reader.

Thinking is okay. Acknowledging that aspects of the world are complex, and don't fit into easily consumed chunks, is a part of how we "reach" people. We need to keep the simple things simple, and we need to explain the complex things well. Attempting to take shortcuts through intellectual complexity is another facet of technology as solutionism. The only people who win are the folks selling shortcuts -- and they have generally cashed their checks by the time the rest of us are cleaning up their messes.

Who Knows What You Read?

2 min read

I spend a lot of time thinking about ways to help people understand data collection and privacy. I've done workshops on tracking before, but over the next year I'd like to try this with a group of teachers, or possibly at an EdCamp. This activity could also work at the high school level, and possibly even with middle school students.

The goal of the activity is to provide participants the skills and tools to begin analyzing how online trackers work, and how to spot and identify them.

If anyone runs this activity, or has suggestions on how to improve or modify it, please let me know.

Select an individual news article, and document how you found it. 

Then, read the article.

Then, document:

  • a. how you chose this individual article;
  • b. what device you read it on;
  • c. how long you spent reading it;
  • d. the web page you visited after you read it;
  • e. your physical location when you read it;
  • f. when during the day you read it.

Then, describe who else would know the answers to questions a-f, listed above. Include any companies that might be tracking any of the pages you visited, including the company that owns/controls the site that published the article. How difficult or easy would it be for them to share that information with other companies? How would you know if/when any of this information was shared, or how it was used?

Then, compare this process to reading an article in a newspaper or magazine. When we read something in print, who else knows about it? How do they know?

Using the information from just one article, what statements, judgments, or assumptions could someone make about you?

How would this change if they had information about 10 articles you read?

How would this change if they had access to your reading habits for the last week? The last month? The last year? 

To get a sense of the trackers on a page, use a tool like Ghostery or Lightbeam (Firefox only). While neither is as accurate as an intercepting proxy, both are very accessible, and help illustrate the point with much less work.
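The mechanics are easy to demonstrate. The sketch below, using only Python's standard library and entirely hypothetical page markup and domains, lists the third-party hosts a page asks your browser to contact -- each of those hosts learns you loaded the article:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ResourceCollector(HTMLParser):
    """Collects hosts that a page loads scripts, images, and iframes from."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            if host:
                self.hosts.add(host)

def third_party_hosts(html: str, first_party: str) -> set:
    """Return external hosts a page requests resources from."""
    collector = ResourceCollector()
    collector.feed(html)
    return {h for h in collector.hosts if not h.endswith(first_party)}

# Hypothetical article markup with one tracker script and one ad iframe.
page = """<html><body>
  <script src="https://cdn.tracker-example.com/t.js"></script>
  <img src="https://news-site.example/logo.png">
  <iframe src="https://ads.adnetwork-example.net/frame"></iframe>
</body></html>"""

print(sorted(third_party_hosts(page, "news-site.example")))
# → ['ads.adnetwork-example.net', 'cdn.tracker-example.com']
```

A real tracker census would also need to follow stylesheets, fonts, and scripts that load further scripts, which is why the proxy-based tools above see more than static parsing can.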

Amazon and Whole Foods: Can I Have Some Data with that Kale?

4 min read

It looks like Amazon is buying Whole Foods.

Let's take a step back and look at the data involved here. We will start by looking at a person who only uses Amazon to shop online, buys food from Whole Foods, and reads using the Kindle app.

For anyone who has ever bought something, Amazon has our home address, and possibly related shipping addresses (i.e., if you have ever bought something as a gift and had it shipped directly to the recipient). Amazon potentially has one or more credit cards stored for us. Amazon has our purchasing history, and our browsing history. If we ever responded to an ad online for an Amazon product, Amazon has that referrer history, and can infer and expand their profile on us based on the sites that refer us to Amazon.

And, of course, Amazon collects information about all the different devices you use to access Amazon services - so Amazon has a precise record of all the hardware and software you use when you shop, potentially going back to when you first started shopping online. If you can't remember the phone you used in 2007, Amazon could probably tell you.

Moving on to Whole Foods, every time someone uses a credit card in the store, Whole Foods gets the person's name, their credit card number, their geographic location (the store), the time they were there, and the list of items they have purchased. Cross referencing this information with data collected by Amazon, the credit card number or name and zip code could be sufficient to connect these data sets with close to 100% certainty.
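To make the cross-referencing concrete, here is a toy sketch with entirely invented records showing why a shared (name, zip) pair is often enough to join two data sets:

```python
# Toy illustration with hypothetical data: a shared (name, zip) key
# is frequently sufficient to re-identify and link records.
amazon_profiles = [
    {"name": "J. Doe", "zip": "97201", "purchases": ["kindle", "headphones"]},
    {"name": "A. Smith", "zip": "10001", "purchases": ["blender"]},
]
wholefoods_receipts = [
    {"name": "J. Doe", "zip": "97201", "items": ["kale", "salmon"]},
]

def link_records(a, b):
    """Match records that share (name, zip) -- a crude but effective join key."""
    index = {(r["name"], r["zip"]): r for r in a}
    return [
        {**index[(r["name"], r["zip"])], **r}
        for r in b
        if (r["name"], r["zip"]) in index
    ]

merged = link_records(amazon_profiles, wholefoods_receipts)
print(merged[0]["purchases"], merged[0]["items"])
# → ['kindle', 'headphones'] ['kale', 'salmon']
```

With a full credit card number as the join key instead of name and zip, the match is essentially exact, which is the "close to 100% certainty" scenario described above.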

For people who use the Whole Foods app, the list of data collected by Whole Foods expands dramatically. The application collects geographic location, device information (i.e., the brand of phone or tablet, some form of device ID, the IP addresses it uses, etc), presumably an email address, and the ability to read and access wireless and Bluetooth connections. I'm not sure if Whole Foods does tracking via Bluetooth beacons, but the app permissions for the Android app leave that open as a possibility. If the Whole Foods app does ship with Bluetooth tracking enabled, anyone with the app installed and running can be tracked via Bluetooth beacons from just about anywhere. Potentially, if tracking was set up between any of Amazon's home devices (the Echo, etc) and the Whole Foods app that Amazon can now access, that would be a very effective way to map in-person social connections and online/offline activity.

If a person shops online at Amazon, buys (expensive) food at Whole Foods, and reads using the Kindle app, then they are also sharing their reading history, patterns, reading speed, and book buying history with Amazon. This data can also be used to infer interests (a person reads one type of book over another, and reads this type of book faster than another), habits (a person generally reads in the morning, and for a certain amount of time), and other personal patterns. When reading habits are cross-referenced against other personal habits (like the food we buy or the items we shop for) it creates a more complete profile of an individual.

It doesn't take much of a leap to see how a list of the food we buy, the items we shop for, the information we read, and where and when we do each of these actions would be of interest in things like health care. 

And, of course, Amazon has been moving into health care. And, given that we are seeing more experiments using things like sentiment analysis and wearable tech as a means to adjust insurance rates, scenarios that include shopping lists in insurance calculations aren't a stretch.

It's also worth noting that the depth of the Whole Foods data set will be a boon for companies like Amazon that look at differential pricing. Amazon will now be in a great position to identify people willing to pay more for everyday items.
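As a toy sketch of the idea (the heuristic below is invented purely for illustration and reflects nothing about Amazon's actual systems):

```python
# Hypothetical sketch: how a purchase history could feed differential pricing.
def price_multiplier(history):
    """Toy heuristic: shoppers with more premium purchases see higher prices."""
    premium = sum(1 for item in history if item["premium"])
    return 1.0 + 0.05 * min(premium, 4)  # cap the markup at 20%

# Invented shopping history for one person.
history = [
    {"item": "organic kale", "premium": True},
    {"item": "store-brand rice", "premium": False},
    {"item": "artisanal cheese", "premium": True},
]
print(price_multiplier(history))  # → 1.1
```

The point is not the specific formula; it's that once purchase histories are joined to identities, willingness to pay becomes just another computed column.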

So, have fun shopping at Whole Foods. That organic, free range, hormone free chicken you will be eating tonight will be pecking in your data trail for a while. 

Twitter's Misleading User Experience When Reporting Abuse

2 min read

Twitter's history of combating trolls and abuse has been problematic, at best.

Recently, I discovered a corner in their toolkit that highlights why Twitter's current efforts remain ineffective.

When reporting a person for abuse (or, more likely, a bot), Twitter leads you through a multi-step process. 

In the first step, we select an account or a tweet to report.

Step 1 

In the second step, we define the reason for the report.

Step 2

In the third step, we provide additional details.

Step 3

In the fourth step, we indicate who is being harassed.

Step 4

In the fifth step, we select up to five tweets that demonstrate the harassment.

Step 5

In the sixth step, we decide whether we want to block the account, mute the account, or do neither. When we click "Done", the offending tweets we reported are no longer visible. Voila. The process has worked.

Step 6

Except, it hasn't. Despite appearances, Twitter has done nothing to address the abuse. When you are logged in, you can't see the Tweets you reported. To the rest of the world - including, literally, everyone who isn't you - the content is still visible. This almost certainly includes search engines.

From your perspective, it looks like Twitter has done something, but from a practical perspective, Twitter has engaged in a game of smoke and mirrors. This happens regardless of whether we select "Block" or "Mute"; Twitter still hides the tweets you reported from you, and you alone.

This is dangerous. If a person has been doxxed on Twitter and they report the tweet, Twitter's UX creates the misleading impression that the offending content has been removed. The solution to this problem is simple: Twitter should let the "Block" or "Mute" options work as intended. While this wouldn't fix Twitter's abysmal record of responding to abuse, it would at least provide a more honest user experience.

When Twitter automatically hides offensive content from the people who have reported it, they create the impression that they have done something when they have done nothing. Design choices like this demonstrate Twitter's apathy towards effectively addressing hate and abuse on their platform.