May 01, 2018 by nwilliams
I have multiple friends who are conscientious objectors to social media. Their primary arguments for why they choose to abstain mostly center around one thing: privacy. “Don’t you know that Facebook and Instagram steal your information?” they’ll ask me, as I nonchalantly plan how I will Instagram the dinner I’m eating with them. In the past, my posture has been to respectfully listen to their concerns, but take them with a grain of salt. After all—how bad could it possibly be? How much proof do we really have that Facebook cares about our personal details, beyond accurately predicting—often frighteningly so—what kind of sidebar ads will appeal to us or who we might want to “friend”? Up until a couple of weeks ago, the answer might have been “not much.” Discussion and conjecture about social media is, after all, almost as difficult to pin down as social media itself. We live in a world that abhors definition.
But then, in an incident that will probably forever shape the way we think about those tempting, time-sucking little apps on our phones, Facebook CEO Mark Zuckerberg got called out after user information was harvested on behalf of a UK-based political consulting firm called Cambridge Analytica. According to the New York Times, the firm mined information about what individuals had “liked” and what their friend networks looked like in order to build personality profiles, allegedly for the purpose of influencing voter decisions in the 2016 presidential election. The harvesting began back in 2014, but was only publicly reported a few weeks ago. Since then, Zuckerberg has been caught up in a veritable firestorm—he has been called to testify before both Congress and British Parliament, although he apparently refused Parliament’s request (maybe not the best move).
But the thing that makes this particular scandal so interesting and complicated from a cyber-security standpoint is the precise way in which the data was actually collected. Several years ago, a Cambridge University researcher working with Cambridge Analytica released an app called “thisisyourdigitallife.” It was essentially a personality-quiz app, and because it used Facebook Login (the service that lets users sign in to third-party apps with their Facebook credentials), it requested various pieces of Facebook data from users as they entered the app. Every app that uses Facebook Login does this—it asks for permission to view your profile, your friends list, your email address. But the kicker is that, when this app asked for permission to access users’ friend lists, it also mined data from those friends. So whereas only about 270,000 people actually downloaded the app and gave consent, the app ended up with information from somewhere in the neighborhood of 87 million users. A single hop through the friend graph multiplied the app’s reach more than 300-fold.
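The arithmetic behind that amplification is worth making explicit. Here is a back-of-the-envelope sketch (the function and its `overlap_factor` parameter are illustrative assumptions for this post, not anything from Facebook’s actual API) of how one hop of friend-list access turns a small pool of consenting users into tens of millions of harvested profiles:

```python
# Hypothetical sketch: estimate how friend-list access amplifies an app's reach.
# Nothing here reflects Facebook's real API; the figures come from news reports.

def estimated_reach(consenting_users, avg_friends, overlap_factor=1.0):
    """Profiles reachable when each consenting user also exposes their friends.

    overlap_factor discounts friends shared between consenting users
    (1.0 = no overlap; values below 1.0 shrink the effective friend pool).
    """
    return int(consenting_users * (1 + avg_friends * overlap_factor))

# Reported figures: ~270,000 app installs, ~87 million affected profiles.
# Working backwards, each consenter exposed roughly this many friends:
implied_friends = (87_000_000 - 270_000) / 270_000
print(round(implied_friends))  # ≈ 321 friends per consenting user
```

This is linear, one-hop amplification rather than true exponential growth, but with a multiplier in the hundreds, the distinction is cold comfort.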
The tricky part of this centers around consent: was proper consent obtained for the information or not? Although Zuckerberg referred to the incident in his official statement as a “breach of trust” and not as an actual data breach, the policy that once allowed Cambridge Analytica to mine the data of friends who had not personally consented to such activity has since been reversed. This would seem to indicate that, even though some form of consent was given to access friends lists, it was not enough consent to keep the situation above board. Furthermore, the app was pretty unclear about precisely what the data gleaned from Facebook would be used for—while users were told they were participating in a personality test, they were likely not aware that their results would be used to help Cambridge Analytica profile American voters.
And this brings us back to just how difficult it is to pin down social media—especially when it comes down to questions of morality. For one thing, the rules are constantly changing: one moment, Facebook allows a certain API like the one that Cambridge Analytica took advantage of. The next, it doesn’t—and vice versa. And frequently, those changes are so gradual and subtly communicated that we don’t even notice them. For another thing, the quick, flashy facade of social media naturally (and, likely, intentionally) lends itself to users accidentally getting pulled into situations they might “technically” agree to but don’t fully understand. All of this makes it nearly impossible to render a clear-cut ethical judgment: hence, Zuckerberg’s ability to—so far—successfully tiptoe around the real issues and skate by on mere apologies.
But while it might be difficult to actually nail Facebook for its recent actions, we can (and should) take some concrete steps as security-minded consumers to protect ourselves and our data moving forward. First of all, we need to be smart about what we’re really getting ourselves into when we join any social media platform. If we gloss over the fine print (which, let’s be honest, most of us probably do), then, in a way, we have only ourselves to blame. Facebook and its minions have shown and continue to show themselves to be tricky when it comes to what they’re doing with our data, so it’s on us to pay attention—close attention. Second of all, we need to demand that more rigorous regulations be put in place to make these “gray areas,” well, a little less gray. Most organizations are held in check by serious privacy laws that prevent them from using data for anything other than the express purpose for which it was obtained. According to DMN, the upcoming General Data Protection Regulation would almost certainly have prevented this incident, by requiring in no uncertain terms that organizations demonstrate a lawful basis—such as contractual necessity, consent, or legitimate interest—for each and every piece of data they control. Hopefully, the increased transparency required by GDPR will provide more clarity and clear lines when it comes to how companies—even the untouchables like Facebook—deal with personal data.
Either way, it might be time to eat some crow with my Facebook-hating friends. While instances of outright data breach might be few and far between, situations like this make it abundantly clear that we give social media way too much grace when it comes to our information. And while the answer may or may not be to abstain from these platforms altogether, we certainly need to start asking the hard questions about exactly how our information is being used. And the more we do it—the less we look the other way—the harder it will be for Facebook to pull stunts like this one.
To learn more about privacy laws, including GDPR, and how you can help stay compliant, please visit us at www.globallearningsystems.com.