Dark Patterns when Deleting an Account on Facebook

3 min read

Facebook makes deleting an account more complicated than it needs to be. By default, an account is deactivated, not deleted.

However, both deactivation and deletion can be undone if a person logs back into Facebook.

To make matters worse, to fully delete an account, a person needs to make a separate request to Facebook to start the account deletion process. Facebook splits the important information across two separate pages, which further complicates the process of actually deleting an account. The main page for deleting an account has some pretty straightforward language.

However, this language is undercut by the information on the page that describes the difference between deactivating and deleting an account.

Some key details from the second page that are omitted from the main page on deleting an account include this gem:

We delay deletion a few days after it's requested. A deletion request is cancelled if you log back into your Facebook account during this time.

This delay is critical, and the fact that the deletion request can be undone during that window deserves additional attention.

Facebook further clarifies what they consider "logging in" in a third, separate page, where they describe deactivating an account.

If you’d like to come back to Facebook after you’ve deactivated your account, you can reactivate your account at anytime by logging in with your email and password. Keep in mind, if you use your Facebook account to log into Facebook or somewhere else, your account will be reactivated.

While Facebook's instructions aren't remotely as clear as they should be, the language they use here implies that an account deletion request can be undone if a person logs in (or possibly just uses a service with an active Facebook login) at any point during the "few days" after requesting deletion. It's also unclear what this means if someone logs into Messenger. And, of course, the average person will never know that their Facebook account hasn't been deleted, because they won't be going back to Facebook to check.

My recommendations here for people looking to leave Facebook:

  • First, identify any third party services where you use Facebook login. If possible, migrate those accounts to a separate login not connected to Facebook.
  • Second, delete the Facebook app from all mobile devices.
  • Third, using the web UI on a computer, request account deletion from within Facebook.
  • Fourth, install an ad blocker so Facebook has a harder time tracking you via social share icons.

Facebook, Cambridge Analytica, Privacy, and Informed Consent

4 min read

There has been a significant amount of coverage and commentary on the new revelations about Cambridge Analytica and Facebook, and how Facebook's default settings were exploited to allow personal information about 50 million people to be exfiltrated from Facebook.

There are a lot of details to this story - if I ever have the time (unlikely), I'd love to write about many of them in more detail. I discussed a few of them in this thread over on Twitter. But as we digest this story, we need to move past the focus on the Trump campaign and Brexit. This story has implications for privacy and our political systems moving forward, and we need to understand them in this broader context.

But for this post, I want to focus on two things that are easy to overlook in this story: informed consent, and how small design decisions that don't respect user privacy allow large numbers of people -- and the systems we rely on -- to be exploited en masse.

The following quote is from a NY Times article - the added emphasis is mine:

Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.

All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”

He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.

The first highlighted quotation gets at what passes for informed consent. However, in this case, for people to give informed consent, they had to understand two things, neither of which is obvious or accessible: first, they had to read the terms of service for the app and understand how their information could be used and shared. But second -- and more importantly -- the people who took the quiz needed to understand that by taking the quiz, they were also sharing personal information about all their "friends" on Facebook, as permitted and described in Facebook's terms. This was a clearly documented feature available to app developers that wasn't modified until 2015. I wrote about this privacy flaw in 2009 (as did many other people over the years). But this was definitely insider knowledge, and the expectation that a person getting paid three dollars to take an online quiz (for the Cambridge Analytica research) would read two sets of dense legalese as part of informed consent is unrealistic.
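To make that mechanism concrete, here is a rough, hypothetical sketch of how a pre-2015, Graph API v1.0-era app could pull data about a quiz-taker's friends using only the quiz-taker's consent. The endpoint paths, permission behavior, and field names are simplified for illustration; this is not the actual code or the exact API surface the Kogan app used.

```python
# Illustrative sketch only, not the actual app: approximates how a Graph API
# v1.0-era Facebook app could read data about a user's friends using only
# that one user's access token. Paths and field names are simplified.
import requests

ACCESS_TOKEN = "QUIZ_TAKER_ACCESS_TOKEN"  # granted when one person authorized the quiz app
BASE = "https://graph.facebook.com/v1.0"

# The quiz-taker sees a consent screen; their friends never do.
friends = requests.get(
    f"{BASE}/me/friends",
    params={"access_token": ACCESS_TOKEN},
).json()

profiles = []
for friend in friends.get("data", []):
    # With v1.0-era friends_* permissions, the app could request fields such
    # as location and likes for each friend of the single consenting user.
    profile = requests.get(
        f"{BASE}/{friend['id']}",
        params={"fields": "name,location,likes", "access_token": ACCESS_TOKEN},
    ).json()
    profiles.append(profile)

# One consenting user can yield hundreds of third-party profiles.
print(len(profiles))
```

The structural point is that one consent screen, shown to one person, unlocked data about hundreds of people who never saw it.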

As reported in the NYT and quoted above, only 270,000 people took the quiz for Cambridge Analytica - yet these 270,000 people exposed 50,000,000 people via their "friends" settings, roughly 185 exposed profiles per quiz-taker on average. This is what happens when we fail to design for privacy protections. To state this another way, this is what happens when we design systems to support harvesting information for companies, as opposed to protecting information for users.

Facebook worked as designed here, and this design allowed the uninformed decisions of 270,000 people to create a dataset that potentially undermined our democracy.

Twitter and Facebook No Longer Understand Twitter and Facebook

2 min read

Twitter thinks that ads are a problem on Twitter. 

Twitter - your ads might not be good, but I'm going to lay this out for you: the key problems with your platform are misinformation and abuse. You are equally bad at dealing with both, and your most recent response is deficient in multiple ways. Facebook is the platform with problems with advertising, as well as with misinformation and abuse.

As I have noted before in this piece co-authored with Kris Shaffer, Twitter is either misrepresenting the effectiveness of their ad network, or they are misrepresenting their ability to detect bots.

Facebook's ineptitude is summed up most succinctly in this quotation from Mark Zuckerberg. Zeynep Tufekci has a great thread about it, but Zuckerberg's own words provide insight into how top leadership within tech misunderstands the situation they have created.

[Image: Zuckerberg and "both sides"]

To his credit, Zuckerberg managed to pack a large amount of misunderstanding into a short message, so he deserves kudos for concision. But Zuckerberg misses the point entirely: this is not about ideas and content. This is about power and manipulation. Zuckerberg was manipulated by Trump into responding to a baseless charge, and Zuckerberg fell back onto the "both sides" fallacy cited by, among other people, Trump himself when Trump was justifying white supremacists and neo-Nazis.

Our tech industry has created platforms that are easy to game. For all the talk of disruptive innovation, how tech entrepreneurs are the smartest people in the room, etc, etc, we are now in a situation where billions of dollars have been spent creating platforms that the creators neither control nor understand. Given the outsize role these platforms play in delivering information and shaping public discourse, that should make us all very nervous.

PS: Twitter: want to identify some bots? Look at the networks pushing the "Zuckerberg/Podesta" and "Zuckerberg/Russia" stories, right now. Seriously, step up your game.

Facebook, Voter Suppression, and AdTech

3 min read

This piece over on Medium ties together several news stories that have been written about the Trump campaign's use of Dark Posts on Facebook to suppress the vote among Clinton voters. There are some great details in the post, and you should read it in full, but a few points bear highlighting.

The Trump campaign used pre-built tools within Facebook, and data on users exposed by Facebook. In other words, Facebook already had the tools to support vote suppression built into their system. I don't think that this was done intentionally by Facebook, but it really hammers home the point: all tech has unintended consequences. When we look at tech, we need to evaluate the fringes, and ask hard questions about what the tech can break, because we humans are great at breaking things. But in this case, the mechanisms for manipulating behavior via ads worked very well for suppressing turnout in the electorate. Predictive analytics lost, but mood manipulation via big data worked well.

The Trump campaign used data from within Facebook to suppress turnout among Clinton supporters. This means that every progressive organization that has been organizing on Facebook, on any issue, helped provide the Trump campaign with a list of potential voters to target with Dark Posts to suppress their vote. (In brief, Dark Posts are private ads microtargeted to specific demographics; on some days, the Trump campaign delivered 100,000 different ads, tailored by demographic data.) The message to progressive orgs should be clear: when you organize on Facebook, you expose your organization and your stakeholders to profiling and targeted political ads by your opponents. Use better tools.

Finally, according to the piece, the Trump campaign created a privately owned database that contains between 4,000 and 5,000 data points on the online and offline behavior (i.e., where we go, our credit card purchases, etc.) of approximately 220 million Americans. This database was compiled from multiple sources, including Cambridge Analytica, Experian PLC, Datalogix, Epsilon, and Acxiom Corporation. It's privately held, and it's unclear what restrictions, if any, exist around who can access it. Unlike data collected on us by the NSA, where there are levels of bureaucracy tracking access, the dataset compiled during the campaign is much more openly accessible to people within the Trump campaign.

Also worth noting: Facebook explicitly offers advertising services that tie online and offline behaviors. If you look at the list of partners, you will see some of the same players that determine our credit scores.

Facebook, Privacy, Summit Public Charters, Adaptive Learning, Getting Into Education, and Doing Things Well

7 min read

One of the things that I look for within schools is how solid a job they do telling their students and families about their rights under FERPA. One crude indicator is whether or not a school, district, or charter chain contains any information about FERPA on their web site. So, when I read that Facebook was partnering with Summit Public Charter Schools, I headed over to the Summit web site to check out how they notified students and parents of their rights under FERPA. Summit is a signatory of the Student Privacy Pledge and a key part of what they do involves tracking student progress via technology, so they would certainly have some solid documentation on student and parent rights.

Well, not so much.

It must be noted that there are other ways besides a web site to inform students and parents of their FERPA rights, but given the emphasis on technology and how easy it is to put FERPA information on the web, its absence is an odd oversight. I'm also assuming that, because Summit clearly defines itself as a public charter school, it is required to comply with FERPA. If I'm missing anything in these assumptions, please let me know.

But, returning to the Facebook/Summit partnership, the news coverage has been pretty bland. In fairness, it's hard to do detailed coverage of a press release. Two examples do a pretty good job illustrating the range of coverage: The Verge really committed to a longform expanded version of Facebook's press release, and the NY Times ran a shorter summary.

The coverage of the partnership consistently included two elements, and never mentioned a third. The two elements that received attention were speculation that Facebook was "just getting in" to the education market, and privacy concerns about Facebook having student data. The element that received no notice at all was the open question of whether the app would be any good. We'll discuss all three in the rest of the post.

The first misconception we need to dispense with is that Facebook is "just getting in" to education. Facebook's origins are rooted in elite universities: the earliest versions of the application only allowed membership from people enrolled in selected universities - Ivy League schools, and a small number of other universities.

Also, let's tell the students interacting on these course pages on Facebook - or these schools hosting school pages on Facebook - or these PTAs on Facebook - that Facebook is "just getting in" to education. To be clear, Facebook has no need to build a learning platform to get data on students or teachers. Between Instagram and Facebook, and Facebook logins on other services, they have plenty. It's also worth noting that, in the past, Facebook founder Mark Zuckerberg has seemed to misunderstand COPPA while wanting to work around it.

Facebook - the platform - is arguably the largest adaptive platform in existence. However, the adaptiveness of Facebook isn't rooted in matching people with what they want to see. The adaptiveness of Facebook makes sure that content favored by advertisers, marketers, self promoters, and other Facebook customers gets placed before users, while maintaining the illusion that Facebook is responding directly to people's needs and desires. The brilliance of the adaptiveness currently on display within Facebook is that, while your feed is riddled with content that people have paid to put there, it still feels "personalized". Facebook would say that they are anticipating and responding to your interests, but that's a difficult case to make with a straight face when people pay for the visibility of their content on Facebook. The adaptiveness of Facebook rests on the illusion that users select the content of their feeds, when the reality is more akin to a dating service that matches ads to eyeballs.
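To illustrate the argument (and only the argument - this is a toy model, not Facebook's actual ranking system), a feed that blends predicted interest with a paid boost will surface paid content while still feeling personalized:

```python
# Toy illustration only (not Facebook's actual ranking code): a feed that
# scores items by predicted interest plus a boost for paid placement will
# still "feel" personalized even though payment shapes what surfaces.
from dataclasses import dataclass

@dataclass
class FeedItem:
    source: str
    predicted_interest: float  # 0.0-1.0, how relevant the item seems to the user
    paid_boost: float          # 0.0 if organic, > 0.0 if someone paid for reach

def rank_feed(items: list[FeedItem]) -> list[FeedItem]:
    # Paid reach and "relevance" are blended into one score, so the reader
    # can't tell which factor put an item at the top of their feed.
    return sorted(items, key=lambda i: i.predicted_interest + i.paid_boost, reverse=True)

feed = [
    FeedItem("friend's photo", predicted_interest=0.8, paid_boost=0.0),
    FeedItem("promoted post", predicted_interest=0.5, paid_boost=0.6),
    FeedItem("news article", predicted_interest=0.6, paid_boost=0.0),
]
for item in rank_feed(feed):
    print(item.source)  # promoted post, friend's photo, news article
```

Because the two factors are folded into a single score, the person scrolling the feed has no way to tell which items are there because they were relevant and which are there because someone paid.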

Looking specifically at how this adaptiveness has fared in the past raises additional questions.

Facebook's algorithms and policies fail Native communities.

Facebook's algorithms and policies fail transgender people.

Facebook's algorithms and policies selectively censor political speech.

Facebook's algorithms and policies allow racism to flourish.

Facebook's algorithms and policies ruined Christmas (for real - maybe a slight overstatement, but I'm not making this up).

Facebook allowed advertisers to take a woman's picture and present it to her husband as part of a dating ad.

Facebook's algorithms and policies can't distinguish art.

Facebook's algorithms and policies experiment with human emotions, without consent.

I could continue - we haven't even talked about how Facebook simplified government surveillance, but you get the point: the algorithms and policies used by Facebook tilt heavily toward the status quo, and really miss some of the nuance and details that make the world a richer place. In an educational system, it's not difficult to see how similar algorithmic bias would fail to consider the full range of strengths and abilities of all the students within their systems. Facebook, like education, has a bad track record at meeting the needs of those who are defined as outside the mainstream.

In educational technology, we have heard many promises about technologies that will "disrupt" the status quo - the reality is that many of these technologies don't deliver more than a new UI on top of old systems.

There Is An Easy Solution Here

Fortunately, none of these problems are insurmountable. If Facebook released the algorithms to its learning platform under an open source license, no one would need to guess how they worked - interested parties could see for themselves. Facebook has done this with many projects in the past. Open sourcing their algorithms could potentially be an actual disruption in the adaptive learning marketplace. This would eliminate questions about how the adaptive recommendations work, and would allow a larger adoption of the work that Facebook and Summit are doing together. This wouldn't preclude Facebook or Summit from building a product on top of this work; it would just provide more choices and more options based on work that is already funded and getting done.

It's also worth highlighting that, while there will be many people who will say that Facebook has bad intentions in doing this work, that's not what I'm saying here. While I don't know any of the people doing work on the Facebook project, I know a lot of people doing similar work, and we all wake up wanting to build systems that help kids. In this post, I hope that I have made it very clear that I'd love to see a system that returned control of learning to the learner. Done right, adaptive learning could get us there - but "doing adaptive right" requires that we give control to the learner to define their goals, and to critique the systems that are put in place to help learners achieve. Sometimes, the systems around us provide needed support, and sometimes they provide mindless constraints. Adaptive learning needs to work both ways.

Open sourcing the algorithms would provide all of us - learners, teachers, developers, parents, and other people in the decision making process - more insight into and control over choosing what matters. Done right, that could be a very powerful thing.

Related reading:

  • Facebook Wants You to Vote on Tuesday. Here's How It Messed With Your Feed in 2012. - Facebook tinkering with participation in our democracy.
  • Facebook Tinkers With Users' Emotions in News Feed Experiment, Stirring Outcry - Things to consider as Facebook begins building educational software.