July 19, 2018 by nwilliams
The problem with passwords is a perennial one. Just a few days ago, SplashData released a list of the most popular passwords of 2017. What are they? Well, “123456,” “Password,” “12345678,” and “Qwerty” were the top 4, which is fairly indicative of the rest of the list. Among the others was “starwars,” which is in turn pretty reflective of where we stand on cyber-security. Need a password? Here, have a pop culture reference. It’ll be effortless for you (and the rest of the world) to guess. Apparently, ease—rather than safety—is the name of the game.
Of course, it’s possible to practice poor password security without ever suffering noticeable consequences. Many of us do. But this is primarily because we’re relatively unlikely targets. Sure, it’s possible for personal bank accounts to be hacked using weak credentials. It can and does happen, and for the individuals it happens to, such a security misstep can have terrible consequences. But there’s a far more likely scenario we should be concerned with.
In this scenario, an administrator in charge of his company’s cloud database fails—just as you and I have, countless times—to change a default password. Or, perhaps, he uses one he’s used elsewhere, or one that’s just a bit too easy to crack. And, just like you or I, he probably thinks to himself, Will this really matter? It’s so much easier this way. Unbeknownst to him, he’s about to cause a massive, headliner data breach. A hacker is going to use his weak password to break into the cloud, and access—even steal—user data. In fact, this hacker has been covertly breaking into the database for weeks without the administrator even knowing it. Sound familiar? This is exactly what happened on July 4 to Timehop, an app that connects to users’ social media accounts and compiles memories from previous years. This particular hack led to the compromise of 21 million users’ data, and is now being compared to an eerily similar data breach we’ve already analyzed: that of credit reporting agency Equifax last summer, which tallied up at about 146 million accounts breached. That breach was also caused by a password, and one which has since become notorious. And although the Equifax hack was a lot bigger, this one is, in a certain way, more telling.
When Equifax got hacked, I think it marked a shift in the way we view passwords: for perhaps the first time, the world saw what could happen if we fail to reset system credentials. And the results were apocalyptic. After Equifax, all plausible deniability we might have had before was gone—which means that, as soon as another company (cough, Timehop) makes the same rookie mistake, it can only be infinitely more embarrassing and less understandable. This again?
There are a couple of different things at play here. First of all, the undeniable cosmic forces named habit and existing protocol are extremely difficult to redirect. It seems, at least based on the past few hacks, that passwords are playing second fiddle to system firewalls and protocols for combating phishing emails. To some extent, this makes sense. We focus on the doomsday scenarios: hackers finding the one crucial flaw in a server and breaking in at just the right second; a scam email spreading a virus across an entire company in minutes. Those are the sorts of breaches we expect. But a hacker guessing at a weak password and using it to waltz right in? It’s the gaping hole hiding in plain sight—so obvious and so simple that we don’t even think to give it a second thought. And by now the oversight has become so ingrained that it’s taking longer than it should to turn the tide.

Even though we all know that real hackers are using this weakness to make their move, the breaches keep happening because almost no one is actually taking steps to change their behavior. A few individuals or companies might be—sure. But, clearly, organizations in general are not giving their employees proper training or instruction on what counts as a good password. They’re not communicating the importance of creating strong passwords to begin with, let alone changing them regularly. And you know why? Because it’s hard. Making and remembering strong passwords is significantly harder than using your birthday or leaving the system default in place. And making each and every employee do it? Even harder. So here we are.
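To put some numbers on “weak,” here’s a quick back-of-the-envelope sketch (mine, not SplashData’s) of how much harder a password gets to brute-force as its character set and length grow:

```python
import math

def entropy_bits(charset_size, length):
    """Bits of entropy in a truly random password: length * log2(charset)."""
    return length * math.log2(charset_size)

# "123456": six characters drawn from the ten digits
print(round(entropy_bits(10, 6), 1))   # 19.9 bits -- crackable almost instantly

# Twelve random characters from ~94 printable ASCII characters
print(round(entropy_bits(94, 12), 1))  # 78.7 bits -- far beyond casual brute force
```

The catch, of course, is that “123456” and “starwars” have far less entropy even than the first line suggests, because they’re near the top of every cracker’s guess list.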
Second of all, I think we’re starting to see the effects of security measures that simply aren’t keeping up with rapidly innovating technology. This happens across all avenues of cyber security, mostly because hackers, who frequently earn their livelihood and spend most of their time becoming intimately familiar with how our current technology works, learn very quickly how to surpass it. Even if we’re taking adequate security precautions, they’re almost always a step ahead of us. Given that, our best bet is to employ the latest possible technology. For passwords, that means Two-Factor Authentication (2FA). If you’ve ever had a website or an app ask you to confirm your login via a code texted to your phone or a link sent to your email, then you’ve seen 2FA in action. Unfortunately, this extra measure has not been implemented by as many organizations as one might hope. Why? Well, probably for the same reasons we don’t prioritize creating secure 1FA passwords. Thankfully, especially as the number and frequency of these password-related hacks increase, more companies are starting—bit by bit—to employ 2FA. Banks have been doing it for ages (for good reason), and most of the major social media sites have it as an option (Facebook, Twitter, Instagram, LinkedIn, Snapchat). However, sites having it enabled as an option is a far cry from establishing it as the standard, let alone requiring it for back-end systems. And until that happens, we’ll be caught in the same old dichotomy between what we know is safest and least risky, and what we feel like implementing or requiring.
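For the curious: the rotating six-digit codes generated by authenticator apps follow a published standard, TOTP (RFC 6238), and the whole algorithm fits in a few lines of Python standard library. This is an illustrative sketch of the standard, not any particular vendor’s implementation:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, timestamp, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(timestamp) // step                 # which 30-second window we're in
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # "dynamic truncation" step
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59))  # prints "287082"
```

The point of the design: even if a hacker has your password, the code is useless to them thirty seconds later, and guessing it requires the shared secret, not just persistence.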
As soon as one of these hacks hits the news, everyone says the same thing: this could have been prevented. Could this one? Sure, probably. According to multiple reputable sources, the thing that could have prevented it might just have been 2FA. But I think there’s a more important point at stake here. Timehop’s response to the incident was to “beef up” their security: this is an incredibly reactive, rather than proactive, response. It admits to an incident and seeks to do whatever needs to be done in order to fix that incident—whether to save face or actually to protect something that had previously been unprotected. But the reason we can watch these hacks happen over and over with very little change is that we don’t care about actually fixing the underlying problem. Security has to start from the bottom up: and that means addressing our attitude first. Do we understand the need for strong security and the real possibility that we could be the next victims, and do we prioritize security accordingly? If we don’t, then 2FA and firewalls won’t help us. We’ll always be a step behind.
Strong passwords start with education, encouragement, and enforcement. Contact us to find out how password security training can help keep your organization safe.
June 25, 2018 by nwilliams
If you’ve been paying attention over the past few weeks, you’ve likely heard tell of the FIFA World Cup. From social media to television to print advertising, news of the matches, winners, and surrounding events has been circulating internationally over the past week, and will continue over the next several. It’s Soccer Fan Heaven. Unfortunately, like any worldwide obsession, this phenomenon can also spawn some rather serious scams. Between increased web activity in general and an influx of wild fans eager for deals, hackers have plenty of opportunities to run their cons, and we have plenty of opportunities to find ourselves seriously compromised. Here’s what to be on the lookout for.
One of the most popular scams circulating is a variation on the common phishing technique known as “baiting,” which lures people (pun intended) into clicking on a link by offering them a deal or a reward too good to pass up. In order to access whatever amazing offer the baiter is offering, the victim is required to click a link and give away personal information—typically, credit card data. Baiting scams are versatile and can be optimized based on the location or time of year of the hack. In this case, hackers are taking advantage of the World Cup to create specialized phishing emails that claim to offer discounted tickets or giveaways. Soccer fans are understandably intrigued, perhaps to the point that their common sense for matters of information security flies out the window.
Another threat involves infected websites. Along the same lines as clicking on bad links that promise great deals, fans who aren’t being 100% careful might also end up visiting a less-than-savory website in order to stream otherwise unavailable content. According to Kaspersky, these sites can contain web miners that infect visitors’ computers and leave their owners compromised. Like baiting attacks, compromised websites feed off of our desire for immediate or inexpensive gratification.
But even if you’re not a die-hard fan looking to scalp tickets or find a back-alley channel to watch the games, you might still be at risk. Increased web traffic in general (something similar happens around the holidays) always increases the likelihood that hackers will be out looking for targets. Malicious pop-ups or emails are likely to be exponentially more common over the next few weeks, leaving even the most innocent user open to attack.
So—how should we avoid these potential attacks? First of all, the usual rules of internet safety still apply. As always, when it comes to emails, clicking on links is a massive no-no—regardless of how “safe” or legitimate the correspondence might seem. The same goes for visiting shady websites—no matter how appealing the payoff might be, or how distant or improbable the risks, the consequences could be catastrophic. Second of all, and excuse the truism, remember as a general rule that any deal or opportunity that seems too good to be true probably is. Human nature is prone to let excitement or desire override our more rational instincts, and, unfortunately, that tendency is usually a hacker’s best friend. But nothing—not even the possibility of being able to conveniently watch the World Cup or even to see an actual game live—ought to cloud our reason when it comes to matters of security. The same rules need to apply even when the possible gains are at their highest.
Third of all, and this is perhaps the most important point, the internet is becoming more and more a wild west of possible threats. Any one of us has only to run a brief internet search on current threats—related to the World Cup or otherwise—to turn up a good number of frightening results. An international soccer match is, in this case, the perfect example of how even a seemingly innocuous cultural event can create a firestorm of potential threats—and it’s exactly the sort of thing that we must be aware of. While it is undoubtedly useful and perhaps even indispensable, the internet is really not our friend. Hackers will take every opportunity they can get to manipulate us into doing something foolish. That being said, the more we understand what’s going on and how we can protect ourselves, the better chance we have of flipping the script and appreciating the good that the internet has to offer while still avoiding the bad. The current threat landscape requires that we keep our wits about us—but if we do that, we ought to be able to navigate it with only minimal and occasional hiccups. So keep your eyes peeled, and—safely—enjoy the game.
To learn more about how to avoid phishing threats, please visit our website at globallearningsystems.com.
May 01, 2018 by nwilliams
I have multiple friends who are conscientious objectors to social media. Their primary arguments for why they choose to abstain mostly center around one thing: privacy. “Don’t you know that Facebook and Instagram steal your information?” they’ll ask me, as I nonchalantly plan how I will Instagram the dinner I’m eating with them. In the past, my posture has been to respectfully listen to their concerns, but take them with a grain of salt. After all—how bad could it possibly be? How much proof do we really have that Facebook cares about our personal details, beyond accurately predicting—often frighteningly so—what kind of sidebar ads will appeal to us or who we might want to “friend?” Up until a couple of weeks ago, the answer might have been “not much.” Discussion and conjecture about social media is, after all, almost as difficult to pin down as social media itself. We live in a world that abhors definition.
But then, in an incident that will probably forever characterize the way we think about those tempting, time-sucking little apps on our phones, Facebook CEO Mark Zuckerberg got called out after user information was harvested by a UK-based political consulting firm called Cambridge Analytica. According to the New York Times, the firm mined information about what individuals had “liked” and what their friend networks looked like in order to create personality schematics, allegedly for the purpose of influencing voter decisions in the 2016 presidential election. This maneuver began back in 2014, but was only discovered a few weeks ago. Since then, Zuckerberg has been caught up in a veritable firestorm—he has been asked to appear before both Congress and British Parliament, although he apparently refused Parliament’s request (maybe not the best move).
But the thing that makes this particular scandal so interesting and complicated from a cyber-security standpoint is the precise way in which the data was actually collected. Several years ago, a Cambridge University professor hired by Cambridge Analytica to collect data released an app called “thisisyourdigitallife.” It was essentially a personality quiz, and because it used Facebook Login (the service that lets users log in to third-party apps with their Facebook accounts), it requested various Facebook data from users as they entered the app. Every app that utilizes Facebook Login does this—it asks for permission to view your profile, your friends list, your email address. But the kicker is that, when this app asked for permission to access users’ friend lists, it also mined data from those friends. So whereas only about 270,000 people actually downloaded the app and gave consent, the app ended up with information from somewhere in the neighborhood of 87 million users. Exponential growth at work.
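For a rough sense of that math (the average-friends figure below is my assumption for illustration, not a number from the reporting):

```python
consenting_users = 270_000   # people who actually installed the app
avg_friends = 322            # hypothetical average friend count (assumed, not reported)

# Naive upper bound: every friend of every consenting user, ignoring overlap
# between friend networks
reach = consenting_users * avg_friends
print(f"{reach:,}")          # 86,940,000 -- in the neighborhood of 87 million
```

In reality, friend networks overlap heavily, so the reported 87 million implies the app reached deep into Facebook’s graph; the arithmetic just shows how a modest number of consenting users can expose an enormous one.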
The tricky part of this centers around consent: was proper consent obtained for the information or not? Although Zuckerberg referred to the incident in his official statement as a “breach of trust” and not as an actual data breach, the policy that once allowed Cambridge Analytica to mine the data of friends who had not personally consented to such activity has since been reversed. This would seem to indicate that, even though some form of consent was given to access friends lists, it was not enough to keep the situation above board. Furthermore, the app was pretty unclear about precisely what the data gleaned from Facebook would be used for—while users were told they were participating in a personality test, they were likely not aware that their results would be used to help Cambridge Analytica profile American voters.
And this brings us back to just how difficult it is to pin down social media—especially when it comes down to questions of morality. For one thing, the rules are constantly changing: one moment, Facebook allows a certain API like the one that Cambridge Analytica took advantage of. The next, it doesn’t—and vice versa. And frequently, those changes are so gradual and subtly communicated that we don’t even notice them. For another thing, the quick, flashy facade of social media naturally (and, likely, intentionally) lends itself to users accidentally getting pulled into situations they might “technically” agree to but don’t fully understand. All of this makes it nearly impossible to make a unilateral ethical judgement: hence, Zuckerberg’s ability to—so far—successfully tiptoe around the real issues and skate by on mere apologies.
But while it might be difficult to actually nail Facebook for its recent actions, we can (and should) take some concrete steps as security-minded consumers to protect ourselves and our data moving forward. First of all, we need to be smart about what we’re really getting ourselves into when we join any social media platform. If we gloss over the fine print (which, let’s be honest, most of us probably do), then, in a way, we have only ourselves to blame. Facebook and its minions have shown and continue to show themselves to be tricky when it comes to what they’re doing with our data, so it’s on us to pay attention—close attention. Second of all, we need to demand that more rigorous regulations be put in place to make these “gray areas,” well, a little less gray. Most organizations are held in check by serious privacy laws that prevent them from using data for anything other than the express purpose for which it was obtained. According to DMN, the upcoming General Data Protection Regulation would almost certainly have prevented this incident, by demanding in no uncertain terms that organizations prove contractual necessity, consent, and legitimate interest for each and every piece of data they control. Hopefully, the increased transparency required by GDPR will provide more clarity and clear lines when it comes to how companies—even the untouchables like Facebook—deal with personal data.
Either way, it might be time to eat some crow with my Facebook-hating friends. While the instances of obvious data breach might be few and far between, situations like this make it abundantly clear that we give social media way too much grace when it comes to our information. And while the answer may or may not be to abstain from these platforms altogether, we certainly need to start asking the hard questions about exactly how our information is being used. And the more we do it—the less we start looking the other way—the harder it will be for Facebook to pull stunts like this one.
To learn more about privacy laws, including GDPR, and how you can help stay compliant, please visit us at www.globallearningsystems.com.
March 30, 2018 by nwilliams
In case you thought it had been a suspiciously long time since a massive data breach was announced, well, here you go. Just a couple of days ago, Orbitz (part of the massive travel conglomerate Expedia) revealed that during the second part of last year, the personal data of many of their users was breached. And by “many,” I mean somewhere in the neighborhood of 880,000. And while Orbitz promises that no Social Security Numbers were compromised, a lot of other data was: names, dates-of-birth, even email and street addresses. And, of course, credit card information. Let’s not forget that.
Importantly, this was not a phishing attack. It was a system hack, and although the exact method is unknown, the hackers did target an older Orbitz platform (not Orbitz.com), as well as a partner site (on a separate occasion), and were able to access records still embedded in it. And unlike with Equifax, this also doesn’t appear to be a situation in which administrators followed blatantly terrible password security practices. These data loss situations are always somewhat harder to assess, since they can’t be directly traced back to a clear and specific bad decision. They’re also harder to pass judgement on or attempt to provide solutions for, for the same reason. And yet, anytime this much data is exposed, there’s a serious issue. Something wasn’t adequately protected—someone wasn’t doing what they were supposed to do. It might not be a cut-and-dried situation of a user imprudently clicking a bad link or failing to change a major server password from the system default, but there’s something fishy at play. Let’s unpack it a little bit.
First, this breach was not discovered until long after it occurred. The hacks both took place back in 2016, which means that compromised data was floating around, likely being used for nefarious purposes by hackers, for nearly two years before anyone had any idea. This should raise major red flags. The fact that it took so long for the hack to be discovered likely means that the servers the information was stolen from were not being properly monitored. Typically, IT professionals who are on their game discover those hacks while they’re still in progress—not two years too late.
So why were the systems not being properly monitored? Well, probably because they were what’s known as “legacy” systems—older servers that still store data but have been replaced by newer systems (in this case, probably Orbitz.com). In most cases, these systems are older and not very well-protected—and they’re certainly not going to be closely monitored for unusual activity the way current systems would be. Inevitably, they become an afterthought: while all of IT’s attention is focused on the current, busy server, what happens to the old one gathering dust? An idle computer is the hacker’s playground.
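If legacy boxes have to stay online at all, even a crude watchlist beats nothing. Here’s a minimal sketch of that idea—the log format and hostnames are hypothetical, not Orbitz’s actual systems, and real monitoring would live in a SIEM rather than a script:

```python
# Hypothetical hostnames for servers that have been replaced but still hold data
LEGACY_HOSTS = {"legacy-db-01", "partner-api-old"}

def flag_legacy_access(log_lines):
    """Yield any log line recording activity on a host slated for retirement."""
    for line in log_lines:
        parts = line.split()   # assumed format: "<timestamp> <host> <user> <action...>"
        if len(parts) >= 2 and parts[1] in LEGACY_HOSTS:
            yield line

logs = [
    "2016-10-01T03:12:44 legacy-db-01 svc_export SELECT customers",
    "2016-10-01T03:12:45 prod-db-07 app_user SELECT sessions",
]
for hit in flag_legacy_access(logs):
    print("ALERT:", hit)   # only the legacy-db-01 line is flagged
```

The logic is deliberately blunt: on a system that should see no traffic at all, *any* access is worth an alert.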
I think the problem here is rapidly coming into focus. If a system is old and weak enough that it’s being replaced by a new one, then either all data from it needs to be transferred off, or at the very least it needs to be carefully monitored to ensure that everything is safe. There is absolutely no excuse for leaving important data vulnerable. So while this may not have been a hack in which an individual was directly and immediately responsible, some very poor decisions led to this breach.
And as for prevention? Obviously, if you’re the organization responsible for protecting this data, you need to implement proper firewalls and other system security measures, as well as ensure that IT professionals are consistently monitoring each and every data-holding system to guarantee its security. You should also be well-versed in privacy standards related to the data you’re storing. Many privacy regulations—the far-reaching General Data Protection Regulation among them—have strict stipulations as to how long and for what reasons older data is supposed to be stored. And if you’re a data-holder, hold the organizations that might possess your personal information to a high standard for protecting it.
In a way, a non-phishing-related attack like this one makes a helpful point about cyber security: hacks are not always the result of a blatantly obvious, easily pinpointed attack—an email virus that spreads like wildfire and infects an entire system. Sometimes they fly so under the radar that they’re not even discovered for a year, or two, or three afterward. This ought to spur us on to even greater awareness, even more caution, even sharper and better enforced training programs. After all—these things can happen when we least expect them, and without us even realizing...until it’s too late. And, evidently, the cost can be deadly.
March 21, 2018 by nwilliams
Almost every new season arrives fraught with its own unique phishing scams. Around Thanksgiving and Christmas, hackers take advantage of rushed shoppers and an increase of traffic to online marketplaces to trick users into clicking on fake links or visiting unsecured sites. During the summer, it’s vacation packages and killer plane tickets that con us into giving up our credit card data. Whatever the season, hackers have an acute ability to determine what we’re occupied with, and then to tailor intricate scams to hook us. Well, now it’s tax season, and you bet that scammers are coming up with some brilliant ways to reel us in.
Scam Phone Calls
Ah—one of the oldest tricks in the book. Many individuals have reported receiving phone calls supposedly from the IRS, stating that taxes have been filed improperly, and threatening the taxpayer if they don’t take immediate action. This phish utilizes the classic trick of urgency—telling the recipient that something bad will happen to them if they don’t follow through immediately. This trick works especially well with phone calls, as recipients have less time to consider what they’re being told before they act on it.
Thankfully, there’s something obvious here to clue you into the fact that it’s a scam—the IRS doesn’t call taxpayers on the phone. Their business is done exclusively via snail mail, never by phone or even email. So, if you receive a mysterious phone call from someone claiming to be from the IRS, rest assured that you can safely hang up. And remember—regardless of what company or agency might claim to be calling you, never give up personal data over the phone. This goes for banks, wireless providers, or even IT help desks. Always verify the information you’re being given through an external source before you act.
Unsurprisingly, it doesn’t stop at phone calls. According to recent reporting by CBS, taxpayers have also been receiving emails from senders claiming to be IRS-affiliated debt collection agencies. These emails warn recipients that the tax refunds they received were incorrect, and must be returned to a “local refund account” immediately. Unusually, this form of phishing email directly steals the recipient’s money, rather than more circuitously gathering bank account data or Social Security Numbers. But like most phishing emails—and like the phone scam—it demands immediate action and even threatens legal repercussions otherwise. The scam also flashes a wealth of personal information about the recipient, making it look more legitimate.
Unfortunately, this is not the only email hack making the rounds. Another common one requests W2 information, and then uses that information to steal the identity of the victim. According to Forbes, this scam typically targets HR or payroll departments, and spoofs or hacks into the email account of a high-level executive “requesting” the information. Any phishing email that is sent from (or even appears to be sent from) a company email address is automatically much more effective and difficult to spot. The trusted email address makes the recipient much more receptive to the information or action items presented in the email than they likely would be otherwise. Your boss emailing you asking for an employee’s W2 information would raise a lot fewer red flags than some unknown sender from the IRS. And that’s part of what makes phishing emails in general increasingly scary—as hackers get more sophisticated, their emails begin to lose those classic phishing email “tells.” Which means that we just have to be on even greater alert.
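One cheap “tell” that still works: an email whose From address claims your company’s domain but whose Reply-To points somewhere else entirely. A minimal heuristic sketch (the addresses are hypothetical, and real mail filtering relies on much more, such as SPF and DKIM checks):

```python
from email.utils import parseaddr

def looks_spoofed(from_header, reply_to_header, trusted_domain):
    """Crude heuristic: the From address claims our trusted domain,
    but any reply would actually go to a different domain."""
    _, from_addr = parseaddr(from_header)
    _, reply_addr = parseaddr(reply_to_header or from_header)
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain == trusted_domain and reply_domain != trusted_domain

print(looks_spoofed('"The Boss" <boss@example.com>',
                    'boss.payroll@mailbox.example.net',
                    'example.com'))   # True -- replies would leave the company
```

A filter like this catches only the lazy spoofs, which is exactly the point of the paragraph above: as the lazy tells disappear, human judgment has to pick up the slack.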
Whether it’s tax-related phishing emails or any other scam, one principle stands firm over and above all the rest: better safe than sorry. Releasing personal information over the phone or via email is like playing a loaded game of Russian Roulette. Once or twice, you could get lucky. Maybe that email requesting a W2 form really is from your boss. But the odds are absolutely not in your favor—and if you’re wrong? Bang. Luckily for you, there’s a simple solution: don’t spin the cylinder at all. While the other tip-offs might begin to fail as phishing scams improve, playing it safe never will. In fact, it’s the only surefire way not to get scammed, and it works every time. If you get a phone call, say that you’d like to independently verify the information you’re being given, and hang up the phone. If you get an email, delete it—regardless of how legitimate it looks. Instead, pick up the phone and call your supervisor, your tax accountant, or even the IRS, to confirm.
If you do receive an IRS-related phishing email, you can notify the IRS at email@example.com (include the phishing email header in the body, with subject line “W2 Scam”). And don’t forget to maintain a strong annual training plan to keep yourself—and your workforce—up-to-date with current phishing scams and solutions. Contact us to find out how you can integrate seasonal and industry-specific threats into a cohesive, effective program. And don’t forget: better safe than sorry.