May 01, 2018 by nwilliams
I have multiple friends who are conscientious objectors to social media. Their primary arguments for why they choose to abstain mostly center around one thing: privacy. “Don’t you know that Facebook and Instagram steal your information?” they’ll ask me, as I nonchalantly plan how I will Instagram the dinner I’m eating with them. In the past, my posture has been to respectfully listen to their concerns, but take them with a grain of salt. After all—how bad could it possibly be? How much proof do we really have that Facebook cares about our personal details, beyond accurately predicting—often frighteningly so—what kind of sidebar ads will appeal to us or who we might want to “friend?” Up until a couple of weeks ago, the answer might have been “not much.” Discussion and conjecture about social media are, after all, almost as difficult to pin down as social media itself. We live in a world that abhors definition.
But then, in an incident that will probably forever characterize the way we think about those tempting, time-sucking little apps on our phones, Facebook CEO Mark Zuckerberg got called out after the information of millions of users was harvested and passed to a UK-based political consulting firm called Cambridge Analytica. According to the New York Times, the firm mined information about what individuals had “liked” and what their friend networks looked like in order to create personality schematics, allegedly for the purpose of influencing voter decisions in the 2016 presidential election. This maneuver began back in 2014, but was only discovered a few weeks ago. Since then, Zuckerberg has been caught up in a veritable firestorm—he has been asked to appear before both Congress and British Parliament, although he apparently refused Parliament’s request (maybe not the best move).
But the thing that makes this particular scandal so interesting and complicated from a cyber-security standpoint is the precise way in which the data was actually collected. Several years ago, a Cambridge University professor hired by Cambridge Analytica to collect data released an app called “thisisyourdigitallife.” It was essentially a personality quiz, and because it used Facebook Login (the service apps use to let users log in with their Facebook accounts), it requested various Facebook data from users as they entered the app. Every app that utilizes Facebook Login does this—it asks for permission to view your profile, your friends list, your email address. But the kicker is that, when this app asked for permission to access users’ friends lists, it also mined data from those friends. So whereas only about 270,000 people actually downloaded the app and gave consent, the app ended up with information from somewhere in the neighborhood of 87 million users. Network effects at work.
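To make that permission model concrete, here is a minimal Python sketch of how an app using Facebook Login asks for data. The app ID and redirect URI are purely illustrative, and the scope names follow the general shape of Facebook’s OAuth dialog at the time, not the specific app in question:

```python
from urllib.parse import urlencode

def build_login_url(app_id: str, redirect_uri: str, scopes: list) -> str:
    """Build a Facebook Login (OAuth) dialog URL requesting the given scopes.

    Each scope is a separate permission the user is asked to grant.
    The friends-related permissions of that era are what let an app
    reach data about a consenting user's friends as well.
    """
    params = {
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": ",".join(scopes),
    }
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

# Hypothetical app requesting the kinds of permissions described above.
url = build_login_url(
    app_id="1234567890",                      # illustrative app ID
    redirect_uri="https://example.com/auth",  # illustrative redirect
    scopes=["public_profile", "email", "user_friends"],
)
print(url)
```

The point is that consent is granted scope by scope on a single dialog screen, which is exactly why one tap could hand over so much.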
The tricky part of this centers around consent: was proper consent obtained for the information or not? Although Zuckerberg referred to the incident in his official statement as a “breach of trust” and not as an actual data breach, the policy that once allowed Cambridge Analytica to mine the data of friends that had not personally consented to such activity has since been reversed. This would seem to indicate that, even though some form of consent was given to access friends lists, it was not enough consent to keep the situation above board. Furthermore, the app was pretty unclear about precisely what the data gleaned from Facebook would be used for—while users were told they were participating in a personality test, they were likely not aware that their results would be used to help Cambridge Analytica profile American voters.
And this brings us back to just how difficult it is to pin down social media—especially when it comes down to questions of morality. For one thing, the rules are constantly changing: one moment, Facebook allows a certain API like the one that Cambridge Analytica took advantage of. The next, it doesn’t—and vice versa. And frequently, those changes are so gradual and subtly communicated that we don’t even notice them. For another thing, the quick, flashy facade of social media naturally (and, likely, intentionally) lends itself to users accidentally getting pulled into situations they might “technically” agree to but don’t fully understand. All of this makes it nearly impossible to make a clear-cut ethical judgment: hence, Zuckerberg’s ability to—so far—successfully tiptoe around the real issues and skate by on mere apologies.
But while it might be difficult to actually nail Facebook for its recent actions, we can (and should) take some concrete steps as security-minded consumers to protect ourselves and our data moving forward. First of all, we need to be smart about what we’re really getting ourselves into when we join any social media platform. If we gloss over the fine print (which, let’s be honest, most of us probably do), then, in a way, we have only ourselves to blame. Facebook and its minions have shown and continue to show themselves to be tricky when it comes to what they’re doing with our data, so it’s on us to pay attention—close attention. Second of all, we need to demand that more rigorous regulations be put in place to make these “gray areas,” well, a little less gray. Most organizations are held in check by serious privacy laws that prevent them from using data for anything other than the express purpose for which it was obtained. According to DMN, the upcoming General Data Protection Regulation would almost certainly have prevented this incident, by demanding in no uncertain terms that organizations prove a lawful basis (such as contractual necessity, consent, or legitimate interest) for each and every piece of data they control. Hopefully, the increased transparency required by GDPR will provide more clarity and clear lines when it comes to how companies—even the untouchables like Facebook—deal with personal data.
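The purpose-limitation idea behind that kind of regulation can be sketched in a few lines of Python. This is a simplified illustration of the principle, not a compliance implementation, and all the names and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PersonalData:
    subject: str       # whose data this is
    fields: dict       # the data itself
    lawful_basis: str  # e.g. "consent", "contract", "legitimate interest"
    purpose: str       # the express purpose the data was collected for

def may_use(record: PersonalData, proposed_purpose: str) -> bool:
    """Purpose limitation, simplified: data may only be used for the
    purpose for which it was obtained."""
    return record.purpose == proposed_purpose

record = PersonalData(
    subject="user-42",
    fields={"email": "user@example.com"},
    lawful_basis="consent",
    purpose="personality quiz",
)

assert may_use(record, "personality quiz")
assert not may_use(record, "voter profiling")  # the use that caused the scandal
```

The second check is the one that failed in practice: the data left the stated purpose behind, with no gate in the way.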
Either way, it might be time to eat some crow with my Facebook-hating friends. While the instances of obvious data breach might be few and far between, situations like this make it abundantly clear that we give social media way too much grace when it comes to our information. And while the answer may or may not be to abstain from these platforms altogether, we certainly need to start asking the hard questions about exactly how our information is being used. And the more we do it—the less we start looking the other way—the harder it will be for Facebook to pull stunts like this one.
To learn more about privacy laws, including GDPR, and how you can help stay compliant, please visit us at www.globallearningsystems.com.
March 30, 2018 by nwilliams
In case you thought it had been a suspiciously long time since a massive data breach was announced, well, here you go. Just a couple of days ago, Orbitz (part of the massive travel conglomerate Expedia) revealed that during the second part of last year, the personal data of many of their users was breached. And by “many,” I mean somewhere in the neighborhood of 880,000. And while Orbitz promises that no Social Security Numbers were compromised, a lot of other data was: names, dates of birth, even email and street addresses. And, of course, credit card information. Let’s not forget that.
Importantly, this was not a phishing attack. It was a system hack, and although the exact method is unknown, the hackers did target an older Orbitz platform (not Orbitz.com), as well as a partner site (on a separate occasion), and were able to access records still stored on them. And unlike with Equifax, this also doesn’t appear to be a situation in which administrators followed blatantly terrible password security practices. These data loss situations are always somewhat harder to assess, since they can’t be directly traced back to a clear and specific bad decision. They’re also harder to pass judgment on or attempt to provide solutions for, for the same reason. And yet, anytime this much data is exposed, there’s a serious issue. Something wasn’t adequately protected—someone wasn’t doing what they were supposed to do. It might not be a cut-and-dried situation of a user imprudently clicking a bad link or failing to change a major server password from the system default, but there’s something fishy at play. Let’s unpack it a little bit.
First, this breach was not discovered until years after it occurred. The hacks both took place back in 2016, which means that compromised data was floating around, likely being used for nefarious purposes by hackers, for nearly two years before anyone had any idea. This should raise major red flags. The fact that it took so long for the hack to be discovered likely means that the servers the information was stolen from were not being properly monitored. Typically, IT professionals who are on their game discover those hacks while they’re still in progress—not two years too late.
So why weren’t these systems being properly monitored? Well, probably because they were what’s known as “legacy” systems—older servers that still store data but have been replaced by newer systems (in this case, probably Orbitz.com). In most cases, these systems are older and not very well-protected—and they’re certainly not going to be closely monitored for unusual activity the way current systems would be. At best, they become an afterthought: while all of IT’s attention is focused on the current, busy server, what happens to the old one gathering dust? An idle computer is the hacker’s playground.
I think the problem here is rapidly coming into focus. If a system is old and weak enough that it’s being replaced by a new one, then either all data from it needs to be transferred off, or at the very least it needs to be carefully monitored to ensure that everything is safe. There is absolutely no excuse for leaving important data vulnerable. So while this may not have been a hack in which an individual was directly and immediately responsible, some very poor decisions led to this breach.
And as for prevention? Obviously, if you’re the organization responsible for protecting this data, you need to implement proper firewalls and other system security measures, as well as ensure that IT professionals are consistently monitoring each and every data-holding system to guarantee its security. You should also be well-versed in privacy standards related to the data you’re storing. Many privacy regulations—the far-reaching General Data Protection Regulation among them—have strict stipulations as to how long and for what reasons older data is supposed to be stored. And if you’re a data-holder, hold the organizations that might possess your personal information to a high standard for protecting it.
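One concrete, low-tech way to keep legacy servers from slipping off the radar is to inventory every data-holding system and routinely flag the ones that haven’t been reviewed recently. A minimal Python sketch, using a purely hypothetical inventory and review threshold:

```python
from datetime import date

# Hypothetical inventory: system name -> (holds_personal_data, last_review)
inventory = {
    "legacy-booking-platform": (True, date(2015, 6, 1)),
    "partner-site-db":         (True, date(2016, 1, 15)),
    "current-platform":        (True, date(2018, 3, 1)),
}

def stale_systems(inventory, today, max_age_days=90):
    """Return data-holding systems not reviewed within max_age_days."""
    return sorted(
        name
        for name, (holds_data, last_review) in inventory.items()
        if holds_data and (today - last_review).days > max_age_days
    )

# Both legacy systems are flagged; the recently reviewed one is not.
print(stale_systems(inventory, today=date(2018, 3, 30)))
```

Trivial as it is, a scheduled report like this is precisely the kind of control that would have surfaced a forgotten, data-bearing server long before a two-year-old breach came to light.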
In a way, a non-phishing-related attack like this one makes a helpful point about cyber security: hacks are not always the result of a blatantly obvious, easily pinpointed attack, like an email virus that spreads like wildfire and infects an entire system. Sometimes they fly so far under the radar that they’re not even discovered for a year, or two, or three afterward. This ought to spur us on to even greater awareness, even more caution, even sharper and better-enforced training programs. After all—these things can happen when we least expect them, and without us even realizing...until it’s too late. And, evidently, the cost can be steep.
March 21, 2018 by nwilliams
Almost every new season arrives fraught with its own unique phishing scams. Around Thanksgiving and Christmas, hackers take advantage of rushed shoppers and an increase of traffic to online marketplaces to trick users into clicking on fake links or visiting unsecured sites. During the summer, it’s vacation packages and killer plane tickets that con us into giving up our credit card data. Whatever the season, hackers have an acute ability to determine what we’re occupied with, and then to tailor intricate scams to hook us. Well, now it’s tax season, and you bet that scammers are coming up with some brilliant ways to reel us in.
Scam Phone Calls
Ah, one of the oldest tricks in the book. Many individuals have reported receiving phone calls supposedly from the IRS, stating that taxes have been filed improperly, and threatening the taxpayer if they don’t take immediate action. This phish utilizes the classic trick of urgency—telling the recipient that something bad will happen to them if they don’t follow through immediately. This trick works especially well with phone calls, as recipients have less time to consider what they’re being told before they act on it.
Thankfully, there’s something obvious here to clue you into the fact that it’s a scam—the IRS doesn’t call taxpayers on the phone. Their business is done exclusively via snail mail, never by phone or even email. So, if you receive a mysterious phone call from someone claiming to be from the IRS, rest assured that you can safely hang up. And remember—regardless of what company or agency might claim to be calling you, never give up personal data over the phone. This goes for banks, wireless providers, or even IT help desks. Always verify the information you’re being given through an external source before you act.
Unsurprisingly, it doesn’t stop at phone calls. According to recent reporting by CBS, taxpayers have also been receiving emails from senders claiming to be IRS-affiliated debt collection agencies. These emails warn recipients that the tax refunds they received were incorrect, and must be returned to a “local refund account” immediately. Unusually, this form of phishing email directly steals the recipient’s money, rather than more circuitously gathering bank account data or Social Security Numbers. But like most phishing emails—and like the phone scam—it demands immediate action and even threatens legal repercussions otherwise. The scam also flashes a wealth of personal information about the recipient, making it look more legitimate.
Unfortunately, this is not the only email hack making the rounds. Another common one requests W2 information, and then uses that information to steal the identity of the victim. According to Forbes, this scam typically targets HR or payroll departments, and spoofs or hacks into the email account of a high-level executive “requesting” the information. Any phishing email that is sent from (or even appears to be sent from) a company email address is automatically much more effective and difficult to spot. The trusted email address makes the recipient much more receptive to the information or action items presented in the email than they likely would be otherwise. Your boss emailing you asking for an employee’s W2 information would raise a lot fewer red flags than some unknown sender from the IRS. And that’s part of what makes phishing emails in general increasingly scary: as hackers get more sophisticated, their emails begin to lose those classic phishing email “tells.” Which means that we just have to be on even greater alert.
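One cheap, automatable check against this kind of executive spoofing is comparing an email’s display name against the domain the address actually uses. A simplified Python sketch; real mail filtering would also rely on SPF, DKIM, and DMARC results, and all the names and domains below are made up:

```python
from email.utils import parseaddr

# Display names of executives worth impersonating, and the domain
# their mail should actually come from (both illustrative).
EXECUTIVES = {"Jane Smith", "John Doe"}
COMPANY_DOMAIN = "example-corp.com"

def looks_spoofed(from_header: str) -> bool:
    """Flag mail whose display name claims to be an executive but whose
    address is not on the company domain (simplified heuristic)."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    return name in EXECUTIVES and domain != COMPANY_DOMAIN

assert not looks_spoofed("Jane Smith <jane@example-corp.com>")
assert looks_spoofed("Jane Smith <ceo.office@freemail.example>")
```

A heuristic like this catches the lazy spoofs; a compromised real account, as in the downstream phishing story later in this archive, sails right past it, which is why verbal verification still matters.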
Whether it’s tax-related phishing emails or any other scam, one principle stands firm over and above all the rest: better safe than sorry. Releasing personal information on the phone or via email is like playing a loaded game of Russian roulette. Once or twice, you could get lucky. Maybe that email requesting a W2 form really is from your boss. But the odds are absolutely not in your favor. And if you’re wrong? Bang. But lucky for you, there’s a simple solution, which is just not to spin the chambers at all. While the other tip-offs might begin to fail as phishing scams improve, playing it safe never will. In fact, it’s the only surefire way not to get scammed, and it works every time. If you get a phone call, say that you’d like to independently verify the information you’re being given, and hang up the phone. If you get an email, delete it—regardless of how legitimate it looks. Instead, pick up the phone and call your supervisor, your tax accountant, or even the IRS, to confirm.
If you do receive an IRS-related phishing email, you can notify the IRS at firstname.lastname@example.org (include the phishing email header in the body, with subject line “W2 Scam”). Be sure to maintain a strong annual training plan to keep yourself—and your workforce—up-to-date with current phishing scams and solutions. Contact us to find out how you can integrate seasonal and industry-specific threats into a cohesive, effective program. And don’t forget: better safe than sorry.
February 21, 2018 by The GLS Team
2017 was a huge year for email phishing. We saw a 4x increase in spam emails, a consistent increase in phishing rates in every quarter of 2017, and an increase of almost 900% in W-2 phishing emails. None of these statistics, however, represents the most nerve-racking development of 2017: the rise of phishing in the cloud. Our most trusted applications are now being used to hijack our information.
Out of all the cloud phishing scams that have come up this year, the ones abusing OAuth are some of the most prominent. OAuth is a simple protocol designed to let you sign in to many services with a single account—just click “Sign in with Facebook” or “Sign in with Google” and instantly gain access to any new app. This new method of gaining access seems to have benefited everyone involved: users can now effortlessly get into apps, and applications have gained greater credibility and security by partnering themselves with battle-tested identity providers like Google.
But there’s a problem: in an attempt to spread their “Sign in with [our service]” buttons across the web, OAuth providers stopped scrupulously vetting the third-party applications they were helping authenticate. What resulted is a brand-new breed of phishing scam: difficult to detect, highly effective, and capable of tricking even very technical employees.
Consider the consent page from one of these scams. The URL is correct, the browser is using extended-validation SSL, and the application requesting access calls itself “Google Docs.” But make no mistake: that page is part of a phishing scam. And the app behind it is capable of accessing not just email history and contacts, but also Google Drive, Google Photos, or even location and search history...simply by requesting access as a “trusted” provider.
This type of phishing is not going away. In fact, it’s only going to expand: every major application we use on a daily basis—Facebook, Slack, Salesforce, Zendesk, and even Amazon—is now vulnerable to these kinds of attacks. Get a user to click an Oauth link, and you can steal all the information your heart desires. Let’s face it: phishing in 2018 is going to get a lot trickier. Major scams like the aforementioned Google Docs will continue and get more sophisticated.
Additionally, this new breed of attack poses a great risk to traditional intrusion-prevention systems by rendering standard methods of detection useless. For instance, most intrusion-detection systems today use DNS “greylisting” to find scams. Endpoint protection tools and network-level systems then prevent users from connecting to untrusted DNS names. But with an OAuth attack, all data is being accessed through Google.com. As soon as you grant permission, the cloud platform—not your network—is sending out your data. To combat this type of phishing attack—aside from the essential step of training employees—organizations can implement Cloud Access Security Brokers (CASBs) such as Netskope, Bitglass, and Saviynt. CASBs monitor and defend against cloud application data access by refining permissions across the cloud. While not foolproof, these types of systems will play an essential role in the phishing landscape in 2018.
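To see concretely why host-based filtering misses these attacks, consider a toy Python sketch: the consent page lives on Google’s own domain, so a domain blocklist waves it through, and the only attacker-controlled detail is the app’s client_id. Judging the app rather than the host is the kind of check a CASB-style control can add. All the URLs and IDs below are illustrative:

```python
from urllib.parse import urlparse, parse_qs

BLOCKED_DOMAINS = {"evil-phish.example", "free-prizes.example"}

def dns_filter_allows(url: str) -> bool:
    """Network-level check: block only known-bad hostnames."""
    return urlparse(url).hostname not in BLOCKED_DOMAINS

# A hypothetical malicious OAuth consent URL: the host is Google's,
# and only the client_id belongs to the attacker's fake "Google Docs" app.
consent_url = (
    "https://accounts.google.com/o/oauth2/auth"
    "?client_id=attacker-app-id&scope=email%20contacts"
)

assert dns_filter_allows(consent_url)  # the DNS filter waves it through

# An app-aware control instead judges the requesting application itself.
APPROVED_CLIENT_IDS = {"corp-mail-client-id", "corp-drive-client-id"}

def app_is_approved(url: str) -> bool:
    qs = parse_qs(urlparse(url).query)
    return qs.get("client_id", [""])[0] in APPROVED_CLIENT_IDS

assert not app_is_approved(consent_url)
```

The asymmetry between the two checks is the whole story: the network sees only a trusted hostname, while the consent screen quietly grants a stranger’s application broad access.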
February 13, 2018 by The GLS Team
by Paul Lewis
We’ve all heard the stories. We’ve all read the stats. We all know email phishing is a problem. But who cares, really? I consider myself very well-protected and would never fall for a phishing scam. I am not that foolish. I have nothing to worry about.
Unfortunately, phishing affects each of us no matter how vigilant we think we are. “Downstream Phishing” has become a favorite attack vector for cyber hackers and makes phishing a much greater issue. Consider the following real-life example:
A couple purchased a new home and was getting ready for the closing. Their real-estate attorney had been emailing documents for review, statements to be signed, and updates on the closing schedule. The day before the closing, the attorney emailed the couple, asking them to transfer $575,000 to the escrow account. All seemed normal. The couple went online, staged the wire, and clicked “send.” A few minutes later, an agent from the bank called them on the telephone to confirm the wire and requested their verbal passcode. The wire was sent.
The next day, the couple arrived at the attorney’s office for the closing. The attorney greeted them, smiled, and said, “Did you bring the certified check for $575,000 as per my email?” Extreme panic struck as the couple explained the email requested that they wire the money, which they did. They immediately called the bank. The bank informed them that the money was wired as per the couple’s request, including the verbal confirmation by phone. The money was gone.
After an investigation, it was discovered that the real-estate attorney’s email account had been phished, and that he had inadvertently given his login credentials to a hacker. The hacker then remotely logged into the attorney’s email account and shadowed his activity. The day before the closing, the hacker intercepted the email that requested the couple bring a certified check to the closing, and instead sent an email asking them to wire the funds to a fictitious escrow account. The bank refused to get involved, stating that they followed proper protocol and, for added security, received a verbal verification from the couple prior to sending the wire transfer. The couple was out $575,000, all because of a phishing email that they weren’t even the original recipients of. This kind of attack is called “Downstream Phishing” because it uses the victim of one phishing scam to lure more individuals down the line.
This story can teach us a couple of different lessons in preventative and defensive security. First of all, had the attorney been better educated in how to spot and respond to phishing emails, he might have identified the suspect email. And even if he had clicked the tainted link, he should never have entered his email login credentials. Best practice says that if you are ever prompted to log in to an account through a link in an email, don’t. Instead, save and close all of your work and reboot the computer. But the attorney wasn’t the only one who failed to follow anti-phishing best practice: the couple also should have taken a more suspicious posture. If you ever receive a request to wire funds, even if it is expected and comes from a trusted source, it is imperative that you verbally communicate with the requestor to confirm that the request is legitimate. And when staging a wire transfer with a bank, it is equally important to verify the name of the account the funds are being sent to. Had the couple exercised either of these safety options, they likely would not have lost their life savings to a hacker.
Cyber-crime is at an all-time high and continues to evolve in complexity. We all must remain vigilant, verify email requests, and be suspicious of any call to action that involves large sums of money.
About Paul Lewis
Paul Lewis is a cyber detective who has assisted with cyber investigations around the globe. He is a certified expert witness and a frequent presenter at cyber security conferences. Paul can be reached at @PaulLewisUS on Twitter.