
Archive for July 2017

Majority of Global Orgs Lack Security Best Practices

More than half of global businesses in a recent analysis scored an F or D grade when evaluating their efforts to measure their cybersecurity investments and performance against best practices.

The inaugural 2017 State of Cybersecurity Metrics Report from Thycotic analyzed the results of a Security Measurement Index (SMI) benchmark survey of more than 400 security executives around the world, based on internationally accepted standards for security embodied in ISO 27001. It found that 58% are falling down on the job.

“It’s really astonishing to have the results come in and see just how many people are failing at measuring the effectiveness of their cybersecurity and performance against best practices,” said Joseph Carson, chief security scientist at Thycotic. “This report needed to be conducted to bring to light the reality of what is truly taking place so that companies can remedy their errors and protect their businesses.”

With global companies and governments spending more than $100 billion a year on cybersecurity defenses, a substantial number, 32%, of companies are making business decisions and purchasing cybersecurity technology blindly, the report found—without any way to measure their value or effectiveness. Even more disturbing, more than 80% of respondents fail to communicate effectively with business stakeholders and include them in cybersecurity investment decisions, nor have they established a steering committee to evaluate the business impact and risks associated with cybersecurity investments.

The report uncovered an array of other poor practices as well. For instance, four out of five companies don’t know where their sensitive data is located, or how to secure it. Two out of three companies don’t fully measure whether their disaster recovery will work as planned. Four out of five never measure the success of security training investments. And, while 80% of breaches involve stolen or weak credentials, 60% of companies still do not adequately protect privileged accounts—their keys to the kingdom.

“We put out this report not only to show the errors that are being made, but also to educate those who need it on how to improve in each of the areas that are lacking,” added Carson. “Our report provides recommendations associated with better ways to educate, protect, monitor and measure so that improvements can be implemented.”

Source: Information Security Magazine

Iranian Espionage Campaign Hinges on Beautiful (But Fake) Woman

An APT actor believed to be backed by the Iranian state is using an elaborate fake persona—a beautiful young woman—to lure victims on social media.

The fictional persona, named Mia Ash, is a linchpin in espionage campaigns from a group known as Cobalt Gypsy, which targets entities across the Middle East and North Africa (MENA), with a focus on Saudi Arabian organizations in telecommunications, government, defense, oil and financial services. Cobalt Gypsy identifies individual victims through social media sites, according to Dell SecureWorks.

At the core of this is a well-established collection of fake social media profiles for Mia Ash that are intended to build trust and rapport with potential victims, while performing reconnaissance on employees of targeted organizations.

In one example of the gambit, Mia Ash (a purported London-based photographer) used LinkedIn to contact an employee at one of the targeted organizations, stating that the inquiry was part of an exercise to reach out to people around the world. Over the next several days, the individuals exchanged messages about their professions, photography and travels. Mia then encouraged the employee to add her as a friend on Facebook and continue their conversation there, noting that it was her preferred communication method. The correspondence continued via email, WhatsApp and Facebook for weeks, until Mia sent a Microsoft Excel document, Copy of Photography Survey.xlsm, to the employee's personal email account. Mia encouraged the victim to open the email at work using their corporate email account so the survey would function properly. The survey contained macros that, once enabled, downloaded PupyRAT, an open-source cross-platform remote access trojan (RAT).
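A minimal, illustrative triage rule for the attachment type used in this lure (macro-enabled Office files such as .xlsm) might look like the sketch below. The function names and extension list are ours, and a real mail gateway would inspect file content rather than trusting filenames:

```python
# Minimal triage sketch: flag macro-enabled Office attachments by
# extension, the lure type used in the Mia Ash campaign (.xlsm).
# Extension matching is illustrative only; production gateways also
# inspect file content rather than trusting filenames.
MACRO_ENABLED_EXTS = {".xlsm", ".docm", ".pptm", ".dotm", ".xlam"}

def is_macro_enabled(filename: str) -> bool:
    """Return True if the attachment name has a macro-enabled extension."""
    name = filename.lower().strip()
    return any(name.endswith(ext) for ext in MACRO_ENABLED_EXTS)

def triage(attachments):
    """Split attachment names into (suspicious, other) lists."""
    suspicious = [a for a in attachments if is_macro_enabled(a)]
    other = [a for a in attachments if not is_macro_enabled(a)]
    return suspicious, other
```

A rule this simple would have quarantined the survey document for review before it ever reached the victim's inbox.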

Further analysis revealed that the connections associated with these profiles indicate the threat actor began using the persona to target organizations in April 2016.

Most victims are mid-level employees in technical (mechanical and computer) or project management roles with job titles such as technical support engineer, software developer and system support, Dell said.

“These job titles imply elevated access within the corporate network,” the firm said in an analysis. “By compromising a user account that has administrative or elevated access, threat actors can quickly access a targeted environment to achieve their objectives. The individuals' locations and industries align with previous Cobalt Gypsy targeting and Iranian ideological, political and military intelligence objectives.”

The fake persona attack vector is not a new one—as far back as 2009 a researcher named Thomas Ryan created Robin Sage as an experiment, a fictional 25-year-old cyber threat analyst at the Naval Network Warfare Command in Norfolk, Va. Using several accounts on popular social networks like Facebook, LinkedIn, Twitter and others, Ryan as Sage contacted nearly 300 people, most of them security specialists, military personnel, staff at intelligence agencies and defense contractors. People bought it—she was offered consulting work with notable companies such as Google and Lockheed Martin and had several invitations to speak at conferences.

Meanwhile, Ryan gained access to email addresses and bank accounts as well as learning the location of secret military units based on soldiers' Facebook photos and connections between different people and organizations. Sage was also given private documents for review.

Earlier this year, Israel Defense Force (IDF) soldiers were targeted in an espionage campaign by Hamas using Facebook friend requests purporting to be from attractive women. To sweeten the pot, these “ladies” sent multiple messages expressing their interest, along with photos—though the photos were cribbed from other, legitimate Facebook profiles. After chatting enough to convince the soldier that she’s real, the person on the other end asks the soldier to video chat.

“But all the [existing video] apps he has won’t work for her—she needs him to download another one,” the IDF described in a blog. “She sends him a link to an app [in a third-party app] store called ‘apkpk.’ He downloads the app she requested. The app isn’t working, not for the soldier, at least. He tries to tell the pretty girl on the other end, but she won’t respond.”

Validating a user's authenticity prior to accepting social media connection requests can mitigate threats posed by threat actors leveraging fake personas.

Source: Information Security Magazine

Microsoft Turns Up $250,000 Bug Bounty for Windows

Hard on the heels of Facebook announcing a $1 million investment in security research, Microsoft has ponied up as well, with a $250,000 top payout for a newly launched Windows Bounty Program.

The program includes all features of the existing Windows Insider Preview, and adds focus areas in mitigation bypass, Windows Defender Application Guard and Microsoft Edge. As part of the initiative, Microsoft is also bumping up the pay-out range for the Hyper-V Bounty Program.

“Since 2012, we have launched multiple bounties for various Windows features,” the software giant said in a blog post. “Security is always changing and we prioritize different types of vulnerabilities at different points in time. Microsoft strongly believes in the value of the bug bounties, and we trust that it serves to enhance our security capabilities.”

The program will pay out anywhere from $500 to the aforementioned quarter-million for critical- or important-class remote code execution, elevation of privilege, or design flaws that compromise a customer’s privacy and security. If a researcher reports a qualifying vulnerability already found internally by Microsoft, a payment will be made to the first finder at a maximum of 10% of the highest amount they could’ve received (example: $1,500 for an RCE in Edge, $25,000 for RCE in Hyper-V).
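The duplicate-report rule is simple arithmetic. The sketch below (function name is ours) reproduces the two examples Microsoft gives, which imply maximum awards of $15,000 for an Edge RCE and $250,000 for a Hyper-V RCE:

```python
def duplicate_payout(max_bounty: int, fraction: float = 0.10) -> float:
    """Payout to the first external reporter of a bug Microsoft had
    already found internally: at most 10% of the top award."""
    return max_bounty * fraction

# The announcement's examples imply these maximums:
edge_dup = duplicate_payout(15_000)     # Edge RCE duplicate -> $1,500
hyperv_dup = duplicate_payout(250_000)  # Hyper-V RCE duplicate -> $25,000
```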

Facebook earlier this week said that it is increasing the size of its Internet Defense Prize to $1 million, which will be given out in slices in a series of prizes next year. Speaking at Black Hat, Facebook chief security officer Alex Stamos said that battling password re-use and social engineering would be particular focus points. That’s a 10-times increase from what the social network offered before; last year, it awarded just $100,000 in prizes.

Source: Information Security Magazine

#BHUSA: Panel – Fad or Future? Getting Past the Bug Bounty Hype

At Black Hat 2017 in Las Vegas today, a panel of experts gathered to discuss the concept of bug bounty programs and share their experiences with running these within their respective companies.

Kymberlee Price (KP), Microsoft (moderator)
Angelo Prado (AP), director of product security, Salesforce
Charles Valentine (CV), VP of technology services, Indeed
Lori Rangel (LR), director of product, Silent Circle

Getting proceedings underway, panel moderator Price asked each speaker to share a few words on the worth of bug bounty programs to their organizations.

AP: One of the interesting things about the bug bounty concept is that you pay by performance, so I absolutely think it is worthwhile.

CV: We think the bug bounty is worth it. We pay to use an external vendor to manage our bug bounty program. We were pretty small when we started and so didn’t have the staff to run our own bug bounty. It helps take the workload off of us and we still see very high value in the program.

LR: When you’re starting a startup you may not have the resources to have large internal security teams, so we value our bounty programs as an augmented security team. 

KP: What are the best practices for running a bug bounty program?

LR: I’m going to harp all day about research community outreach, it’s super important to go out and talk to the community and find out what motivates them. Find out what their concerns are and that really builds your reputation as well.

CV: I think what we noticed immediately is that the researchers are not a business; it’s an individual you’re talking with. Have somebody in place who has really good customer service skills to be able to communicate with that external person and bring it to a human level. Also, recognize when it’s time to increase your bounties.

AP: When you’re convincing your organization to open a bug bounty program, one of the key things you need is to have a very mature set of individuals who are able to understand risk and ready to address any vulnerabilities that come their way. 

KP: How do you compete for talent when bug bounty programs are becoming more and more popular?

CV: The payouts need to be rational for the work that’s happening. Establish a reputation and be very clear in your program about what you will pay and why.

LR: As far as competition goes, you’ve got to get creative. If the monetary value isn’t there you can still give recognition in other ways, so you’ve got to get a little creative.

KP: Do you have a disclosure policy? Does your program allow for disclosure?

CV: We do allow disclosure, after we’ve fixed of course. In practice very few folks have asked to disclose details, and in the cases where they have they were very reasonable and worked with us. Our experience has been very positive when a researcher has asked to disclose.

LR: Officially, our scope does not allow for public disclosure, but we have made some exceptions in the past for that, simply because it affected our mission critical applications or it had a wider impact. So we felt it was important to get that message out into the public. It’s always been a really positive interaction with the researchers so I would continue to do that on a case-by-case basis.

KP: At what point in the process do you reward the researcher for discovering the vulnerability?

CV: We reward when we have identified that it is a problem and that it is correct—when we feel like the researcher’s work is done.

LR: We definitely have a two-step process. We have our frontline communications and then an internal discussion about the value and the impact of the issue, and at that point is when we go back and pay the researchers, sometimes before we’ve resolved the problem.

Source: Information Security Magazine

#RSAC: Australia Calls to Fight Back Against Attempts to Control Internet

The Australian Ambassador for Cyber Affairs insists that in order to reap the enormous economic growth opportunity that cyberspace offers, the internet must remain free, open and uncensored.

In his keynote at RSA APJ in Singapore on July 27 2017, Dr Tobias Feakin, the Australian Ambassador for Cyber Affairs, shared his concern about some nations’ desire to control and restrict the flow of data.

“Cyberspace offers us an enormous opportunity for economic growth in APAC. It’s government’s job to enable businesses to take advantage of these wonderful online opportunities. We also recognize this is only possible if underpinned by a free, open and secure internet. That is at the heart of the cyber diplomacy bill”, said Dr Feakin. “Cybersecurity and internet governance are central to our diplomacy abroad.”

Governments, he said, “should resist the compulsion to control and restrict the flow of data. The attempts we see from some countries to exert excessive control needs to be countered.” This is the responsibility of other nation states, governments, and the private sector, he added. “I’m hoping that all of us can work to this goal – to fight back against attempts to lock down and fragment the internet.”

The private sector, in particular, is a hugely powerful voice, he added. “The private sector owns the majority of the backbone of the internet’s infrastructure, it is hugely powerful and a key partner in achieving that goal.”

Cyber affairs, said Feakin, have fundamentally shifted from being a technical issue to a strategic issue for countries. “Nation states have fought, traded, stolen and aided each other since the dawn of human history, but we’re starting to see that behavior transforming to the online space. There is scarcely an international issue now without some kind of cyber component attached to it.”

The Australian Government, he added, is trying to protect IP as much as it possibly can. “We’re also investing large amounts in R&D to strengthen Australia’s digital capabilities in the long-term.” After all, he concluded, “our economic prosperity is at stake.” 

Source: Information Security Magazine

#BHUS2017: The Psychology of Phishing and Why Training Fails

Speaking at Black Hat 2017 in Las Vegas today Karla Burnett, security engineer at Stripe, explored ‘phishing as a science’, shining a light on the psychology of phishing and why attacks continue to be successful.

Burnett explained that the standard methods used for phishing attacks, in which we see an email from [for example] a “Nigerian prince who wants to offer you a large sum of money if you just hand over your bank details”, actually make attackers and defenders really lazy.

“Attackers get told you can’t use phishing in your red teams because it’s too easy, and defenders get told that provided you’ve trained your users they won’t do this [fall for phishing emails] and it’ll be fine—but that’s dangerous and unrealistic. Just because you say phishing is inevitable, that doesn’t actually make the problem go away.”

So why do people fall for phishing campaigns in the first place? Burnett pointed to a psychological school of thought which suggests that there are two modes of thinking: system 1 (fast) and system 2 (slow).

System 1 is very fast and very instinctive (like swerving suddenly to avoid a car on the motorway), she explained. “It’s below what we consider to be the level of conscious thought, and is pretty emotional and pretty gullible.”

System 2 is a much slower and methodical way of thinking (such as writing up a list of pros and cons for a business idea). “It’s rational and it’s sceptical,” Burnett added.

However, the problem is that we often don’t have enough time in our days to use system 2 everywhere, and when you stop and think about how many emails we receive on a daily basis, it’s easy to see how attackers take advantage of this.

“The problem we have with phishing training at the moment is that it’s focused on getting people to look at URLs or hover over links, which require system 2 methods of thinking, not system 1. Phishing training is only useful once somebody is already suspicious of an email, not beforehand. You can’t train somebody’s system 1 to think an email is suspicious when it looks exactly like every other email they’ve received.”

It also doesn’t matter how technical you are, she argued, as everyone is vulnerable to this.

This has in turn created a sense of hopelessness about phishing among the information security community, Burnett continued, with a belief that people will be phished no matter what.

Despite this, she concluded that we need to find technical solutions to the problem instead of just relying on training that proves ineffective, and look at how the three authentication factors of ‘Have’, ‘Know’ and ‘Are’ can be tied to the domain that’s requesting the information, instead of giving them out as shared secrets.
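Binding authentication factors to the requesting domain, rather than handing them out as shared secrets, is the idea behind origin-bound authenticators such as U2F and WebAuthn. The toy sketch below (our illustration, not the actual FIDO protocol) shows why this defeats phishing even when the user's "system 1" is fooled: the response depends on the origin the browser reports, so a look-alike domain receives an answer the real site would reject.

```python
import hashlib
import hmac

def respond(device_secret: bytes, origin: str, challenge: bytes) -> bytes:
    """Toy origin-bound challenge response: the answer depends on the
    origin the browser reports, so a look-alike phishing domain gets a
    response the real site would reject. Not the actual FIDO protocol."""
    per_origin_key = hmac.new(device_secret, origin.encode(), hashlib.sha256).digest()
    return hmac.new(per_origin_key, challenge, hashlib.sha256).digest()

secret = b"per-device secret"     # hypothetical hardware-token secret
challenge = b"nonce-from-server"
real = respond(secret, "https://bank.example", challenge)
phished = respond(secret, "https://bank-example.evil", challenge)
assert real != phished  # the phishing site cannot obtain a valid response
```

Because the user never sees or types the secret, there is nothing for a convincing email to extract.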

Source: Information Security Magazine

#BHUS2017: Infosec Community Not Yet Reached Full Potential, Says Facebook CSO

“The industry we fought to draw respect for is now on every front page of every newspaper pretty much every week. Whilst times have changed, the community and the industry have not really changed with them. The truth is, we don't fight the man anymore, in some ways we are the man, but we haven’t really changed our attitudes towards what sort of responsibility that puts on us.”

These were the words of Alex Stamos, chief security officer at Facebook and opening keynote speaker at Black Hat 2017 in Las Vegas today.

In a session titled ‘Stepping Up Our Game: Re-Focusing the Security Community on Defense and Making Security Work for Everyone’ Stamos argued that the information security community is not yet living up to its full potential.

“We have perfected the art of finding problems over and over again without addressing the root issues,” he added. “We need to think a little more carefully about what we do downstream after that moment of discovery.”

Therefore, Stamos pointed to three cultural aspects that the infosec community and industry needs to improve to make the lives of internet users safer.

Firstly, Stamos pointed to a tendency to focus on the complexity of flaws as opposed to the human element. “We are still really focused on the really sexy, difficult problems”, but in reality the adversaries will do the simplest thing to affect the cause they want, which can often go unrecognized.

Secondly, the infosec community has a real problem with empathy, and a tendency to punish people who implement imperfect solutions in an imperfect world.

“I don’t just mean empathy to each other,” Stamos explained, “we have a real inability to put ourselves in the shoes of the people we are trying to protect.

“It’s a common thought that everything would be better if users were perfect, but it’s a really dangerous one, because it makes it easy to shift responsibility for building trustworthy, dependable systems off ourselves onto other people.”

Third, Stamos argued that the industry has not been very effective in engaging the world.

“We are no smarter than the people whose systems we break. Security people are not brilliant, we’re not that much smarter than anyone else. We bring a very important way of looking at the world and an important set of skills and tools, but that does not mean we should denigrate others when we point out their mistakes.”

To conclude on a more positive tone, Stamos said that he is optimistic about the industry’s ability to improve, and doing so will come down to addressing two key factors: defense and diversity.

“I believe that good, educated defense is the child of offensive research, but I do believe the balance is a little bit off.”

When it comes to diversity, Stamos said that building more diverse teams, with a diversity of backgrounds and ways of thinking, is vital for good security, as “you never know what kind of problem you are going to get into, so it’s much better to have a tool box with lots of different types of tools rather than only having the best screwdrivers in the world.”

Source: Information Security Magazine

A Quarter of Dangerous Mails Skate Past Common Email Filters

Gmail and other common mail providers are failing to filter out a quarter or more of dangerous emails from users’ inboxes.

According to Mimecast’s third quarterly Email Security Risk Assessment (ESRA), these mails can contain malicious attachments and dangerous file types, be used for business email compromise and impersonation attacks, or be spam and phishing attempts. The risks remain whether email is delivered to a cloud-based, on-premises or hybrid email environment.

“Email remains the top attack vector for delivering security threats such as ransomware, impersonation, and malicious files or URLs,” the firm said, noting that 91% of cyberattacks start with an email. “Attackers’ motives include credential theft, extracting a ransom, defrauding victims of corporate data and funds, and, in several recent cases, sabotage with data being permanently destroyed.”

When the data was sliced by incumbent email security vendor (including Google G Suite, Microsoft 365 and Symantec), the report found that even these top players were missing commonly found advanced security threats, highlighting the need for a multi-layered approach to email security. Notably, the cloud vendors are leaving organizations vulnerable by missing millions of spam emails and thousands of threats, allowing them to be delivered to users’ email inboxes: more than 10.8 million pieces of spam made it through incumbent email security systems, as well as 8,682 dangerous file types, 1,778 known and 503 unknown malware attachments, and 9,677 impersonation emails.

“Many organizations have a false sense of security believing that a single cloud email vendor can provide the appropriate security measures to ensure protection from email threats,” the report noted.  

From a historical perspective, Mimecast has examined the inbound email received by 62,323 email users over a cumulative 428 days: more than 45 million emails were inspected, all of which had passed through the incumbent email security system in use by each organization. Of these, 31% were deemed “unsafe” by the firm.
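The report's counts can be cross-checked with quick arithmetic (variable names are ours; figures are those quoted in the report):

```python
# Counts quoted in the ESRA report; variable names are ours.
total_inspected = 45_000_000       # emails that passed incumbent filters
spam_missed = 10_800_000
dangerous_file_types = 8_682
known_malware = 1_778
unknown_malware = 503
impersonation = 9_677

missed = (spam_missed + dangerous_file_types + known_malware
          + unknown_malware + impersonation)
share = missed / total_inspected   # ~0.24: roughly a quarter, as headlined
```

Spam dominates the total, which is why the missed share works out to roughly a quarter of delivered mail.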

Late last year, Mimecast commissioned Forrester Consulting to evaluate drivers of cloud-based email adoption and to evaluate their related business concerns and expectations. That report, titled Closing the Cloud Email Security Gap, revealed that only 5% of respondents are very confident in the overall security capabilities of their chosen email cloud provider. In fact, 44% of respondents said they would review the security implications of their cloud provider more thoroughly if they were to deploy a cloud-based email platform again.

“To achieve a comprehensive cyber-resilience strategy, organizations need to first assess the actual capabilities of their current email security solution. Then, they should ensure there’s a plan in place that covers advanced security, data management and business continuity, as well as awareness training to the end user, which combined help prevent attacks and mitigate business impact,” said Ed Jennings, chief operating officer at Mimecast. “These quarterly Mimecast ESRA reports highlight the need for the entire industry to work toward a higher standard of email security.”

Source: Information Security Magazine

The Threat-Hunter's Secret to Success: Human-Machine Teaming

A report examining the role of humans in the investigation of cyberthreats and the planned use of automation in the security operations center has found that experienced threat hunters verify, on average, 90% of the root causes of attacks. They also embrace automation.

By contrast, McAfee’s Disrupting the Disruptor, Art or Science? report also found that new and inexperienced threat-hunting organizations determine the root cause in only one-fifth (20%) of attacks—likely due to their lagging use of machine learning and automation.

A threat hunter is a professional member of the security team tasked with examining cyberthreats using clues, hypotheses and experience from years of researching cyber-criminals. McAfee found that 71% of SOCs with a level 4 maturity closed incident investigations in less than a week because of the context provided by skilled threat hunters—and 37% closed threat investigations in less than 24 hours. These organizations also were twice as likely to automate parts of the attack investigation process (75% compared to 31% of organizations at the minimal level), and as a consequence, they are able to devote 50% more time to actual hunting.

Less experienced outfits are taking notice of these outcomes, and 68% of those surveyed said better automation and threat hunting procedures are how they will reach leading capabilities.

“Organizations must design a plan knowing they will be attacked by cyber-criminals,” said Raja Patel, vice president and general manager, Corporate Security Products, McAfee. “Threat hunters are enormously valuable as part of that plan to regain the advantage from those trying to disrupt business, but only when they are efficient can they be successful. It takes both the threat hunter and innovative technology to build a strong human-machine teaming strategy that keeps cyber threats at bay.”

Aside from manual efforts in the threat investigation process, the threat hunter is key in deploying automation in security infrastructure, the report found: threat hunters in mature SOCs spend 70% more time on the customization of tools and techniques than novice organizations.

“The successful threat hunter selects, curates and often builds the security tools needed to thwart threats, and then turns the knowledge gained through manual investigation into automated scripts and rules by customizing the technology,” the report noted. “This combination of threat hunting with automated tasks is human-machine teaming, a critical strategy for disrupting cybercriminals of today and tomorrow.”
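That hand-off from manual hunt to automated rule can be sketched in a few lines: indicators confirmed during an investigation become a reusable matcher applied to new events. All indicator values below are hypothetical placeholders of our own.

```python
# Toy hand-off from manual hunt to automated rule: indicators confirmed
# during an investigation become a reusable matcher for new events.
# All indicator values here are hypothetical placeholders.
CONFIRMED_IOCS = {
    "sha256": {"0123456789abcdef" * 4},
    "domain": {"update-check.example.net"},
}

def matches_ioc(event: dict) -> bool:
    """Return True if any field of the event matches a confirmed indicator."""
    return any(event.get(field) in values
               for field, values in CONFIRMED_IOCS.items())
```

In a mature SOC this kind of rule would be expressed in the native language of the SIEM or EDR platform; the point is that the hunter's one-off finding keeps working after the hunt ends.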

The report found that the sandbox is the No. 1 tool for first- and second-line SOC analysts, while higher-level roles rely first on advanced malware analytics. Mature SOCs use a sandbox in 50% more investigations than entry-level SOCs, going beyond conviction to investigate and validate threats in files that enter the network. More advanced SOCs also gain as much as 45% more value than minimal SOCs from their use of sandboxing, by improving workflows, saving costs and time, and collecting information not available from other solutions.

Other standard tools include SIEM, endpoint detection and response, and user behavior analytics—and these are all targets for automation.

“Automation and analytics are necessary and available, and for mature organizations, every level of the identification and investigation processes has automation options, particularly for sandboxing, endpoint detection and response, and user behavior analysis,” the report said. “Level 4 organizations show three times more willingness to automate parts of the threat investigation process versus SOCs at levels 0 through 3.”

Bottom line: threat hunters are using a wide range of tools and techniques to find, contain and remediate cyberattacks. As they mature in the role, and as the security organization’s tools and processes mature along with them, threat hunters see a significant increase in effectiveness. Their success is heavily based on human-machine teaming, combining human judgment and intuition with machine speed and pattern recognition.

“This research highlights an important point: mature organizations think in terms of building capabilities to achieve an outcome and then think of the right technologies and processes to get there,” said Mo Cashman, principal engineer at McAfee. “Less mature operations think about acquiring technologies and then the outcome. It’s a classic top-down versus bottom-up approach. In this case, top down wins!”

Source: Information Security Magazine

DDoS Attacks Could Disrupt Brexit Negotiations

IT security professionals are bracing for DDoS attacks of unprecedented frequency in the year ahead, and are already preparing for attacks that could disrupt the UK’s Brexit negotiations and cause outages worldwide.

That’s according to a survey from Corero Network Security, which found that more than half (57%) of respondents believe that the Brexit negotiations will be affected by DDoS attacks, with hackers using DDoS to disrupt the negotiations themselves, or using the attacks merely as camouflage while they seek to steal confidential documents or data.

The latter “hidden attack” scenario is on the radar of many, and it generally involves the use of smaller, low-volume DDoS attacks of less than 30 minutes in duration. As Corero found in its research, these Trojan-horse campaigns typically go unmitigated by most legacy solutions, and are frequently used by hackers as a distraction mechanism for additional efforts, like data exfiltration.

About 63% of respondents are worried about the hidden effects of these attacks on their networks, particularly with the GDPR deadline fast approaching: under the regulation, organizations could be fined up to 4% of global turnover in the event of a data breach.

At the same time, worryingly, less than a third (30%) of IT security teams have enough visibility into their networks to mitigate attacks of less than 30 minutes. 
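The visibility gap comes down to sampling granularity: monitoring that averages traffic over hours will smooth a 30-minute burst into the baseline. A toy sliding-window rate check illustrates the kind of per-second visibility the survey says most teams lack; the window size and threshold below are illustrative, not recommendations.

```python
from collections import deque

class RateMonitor:
    """Toy sliding-window packet-rate monitor. Flags short traffic
    bursts that coarser, hourly monitoring would average away.
    Window size and threshold are illustrative, not recommendations."""

    def __init__(self, window_seconds: int = 60, threshold_pps: float = 10_000):
        self.window = window_seconds
        self.threshold = threshold_pps
        self.samples = deque()  # (timestamp, packet_count), one per second

    def observe(self, ts: int, packets: int) -> bool:
        """Record one per-second sample; return True if the windowed
        average packet rate exceeds the threshold."""
        self.samples.append((ts, packets))
        # Drop samples that have aged out of the window.
        while self.samples and self.samples[0][0] <= ts - self.window:
            self.samples.popleft()
        rate = sum(p for _, p in self.samples) / self.window
        return rate > self.threshold
```

With a 60-second window, even a few minutes of elevated traffic trips the alert, where an hourly average might never move.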

Meanwhile, many in the industry expect to see a significant escalation of DDoS attacks during the year ahead, with some (38%) predicting that there could even be worldwide Internet outages during 2017.

As for who’s behind the growing wave of attacks, the perpetrators are generally financially motivated, IT pros said—despite continued discussions about nation-state attackers or political activism. Security teams believe that criminal extortionists are the most likely group to inflict a DDoS attack against their organizations, with 38% expecting attacks to be financially motivated. By contrast, just 11% believe that hostile nations would be behind a DDoS attack against their organization.

This financial motivation explains why almost half of those surveyed (46%) expect to be targeted by a DDoS-related ransom demand over the next 12 months. Worryingly, 62% believe it is likely or possible that their leadership team would pay.

“Despite continued advice that victims should not pay a ransom, a worrying number of security professionals seem to believe that their leadership teams would still consider making a payment in the event of an attack,” said Ashley Stephenson, CEO of Corero. “Corporations need to be proactive and invest in their cybersecurity defenses against DDoS and ransomware to protect themselves against such extortion.”

The good news is that the vast majority of security teams (70%) are already taking steps to stay ahead of the threats, such as putting business continuity measures in place to allow their organizations to continue operating in the event of worldwide attacks. However, they also agree that some responsibility for DDoS protection lies with the ISPs; and about a quarter of those surveyed (25%) believe their ISP is primarily to blame for not mitigating DDoS attacks.

At the end of 2016, the head of Britain’s new National Cyber Security Centre suggested that the UK’s ISPs could restrict the volume of DDoS attacks across their networks by rewriting internet standards around spoofing. Continued discussions on this topic have led nearly three-quarters of respondents (73%) to expect regulatory pressure to be applied against ISPs who are perceived to be not protecting their customers against DDoS threats.

“While most in the IT security industry wouldn’t expect their ISP to automatically protect them against DDoS attacks, there is a growing trend to blame upstream providers for not being more proactive when it comes to DDoS defense,” said Stephenson. “To help their cause, ISPs could do more to position themselves as leading the charge against DDoS attacks, both in terms of protecting their own networks, and by offering more comprehensive solutions to their customers as a paid-for, managed service.”

Source: Information Security Magazine