What is causing a lack of focus in putting the right defenses in the right places in the right amounts against the right threats?

In my daily reading, the opening line and the entire post entitled “6 reasons you’re failing to focus on your biggest IT security threats” by Roger Grimes got my attention. The entire post is worth a read. Below are the highlights:

Most companies are not focused on the real security threats they face, leaving them ever more vulnerable. That can change if they trust their data rather than the hype.

 Humans are funny creatures who don’t always react in their own best interests, even when faced with good, contrarian data they agree with. For example, most people are far more afraid of flying than of the car ride to the airport, even though the car ride is tens of thousands of times riskier. More people are afraid of getting bitten by a shark at the beach than by their own dog at home, even though being bitten by their dog is hundreds of thousands of times more likely. We just aren’t all that good at reacting appropriately to risks even when we know and believe in the relative likelihood of one versus the other happening.

The same applies to IT security.

Computer defenders often spend time, money, and other resources on computer defenses that don’t stop the biggest threats to their environment. For example, when faced with the fact that a single unpatched program needed to be updated to stop most successful threats, most companies do everything other than patch that program. Or if faced with the fact that many successful threats occurred because of social engineering that better end-user training could have stopped, the companies instead spent millions on everything but better training.

I could give you dozens of other examples, but the fact that most companies can easily be hacked into at will is testament enough to the crisis. Companies simply aren’t doing the simple things they should be doing, even when confronted with the data.

The problem bothered me enough that I wrote a whitepaper, slide deck, and book on the subject. Without having to read all of that, the answer for why so many defenders don’t let the data dictate their defenses is mostly about a lack of focus. A lot of priorities compete for computer defenders’ attention, so much so that the things they could be doing to significantly improve their defense aren’t being done, even when cheaper, faster, and easier to do.

What is causing this lack of focus in putting the right defenses in the right places in the right amounts against the right threats? A bunch of things, including these:

1. The sheer number of security threats is overwhelming
2. Threat hype can distract from more serious threats
3. Bad threat intelligence skews focus
4. Compliance concerns don’t always align with security best practices
5. Too many projects spread resources thin
6. Pet projects usually aren’t the most important ones

… it starts with an avalanche of daily threats and is worsened by many other factors along the project chain. The first step in fixing a problem is admitting you have a problem. If you see your company’s ineffective computer defenses represented above, now is the time to help everyone on your team understand the problem and help them to get better focus.

The Takeaways

  1. Prioritize your projects. Focus on projects that have the highest return on investment for improving the overall security posture and risk alignment.
  2. Validate that your teams are working on tasks related to the prioritized projects. Prioritized projects should have a smaller focus, but have aspects completed end to end. For example, instead of deploying a database monitoring solution to all of your critical databases, deploy the solution to one or two databases. The deployment should be in blocking mode and have all the operational support documents, procedures, etc. completed.
  3. Leverage DevOps and Agile principles to obtain faster, incremental results as well as alignment with the business.
  4. Ensure the vulnerability management program is adapted and customized to your company so you can identify threats and vulnerabilities that are truly a priority for your team and not just hype.

Is there such a thing as Agile Security? What about DevSecOps?

There is an interesting post from Brian Forster of Fortinet on InfoTech Spotlight.

To ensure in-depth defense during a faster deployment cycle, financial services firms have to adopt multiple security controls. This ensures that if vulnerable code delivers a great new feature but with an unknown flaw to consumers there need to be additional security measures in place that will keep it from being exploited. Combining a strong network security infrastructure with constant application and service monitoring ensures end-to-end protection as new software is deployed.

Because the DevOps approach is primarily adopted for the purpose of web application development, it’s necessary that a part of this infrastructure include a web application firewall (WAF). A next-generation WAF provides comprehensive application protection that scans for and patches vulnerabilities, and keeps applications from being exploited by the risks identified in the OWASP Top 10. Additionally, threat intelligence can be fed to the WAF to keep applications safe from even the latest sophisticated attacks. Which means that if an application is running a common exploit or is being probed by malware, the WAF will recognize it and know to deny network access to the application.
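The signature-based blocking Forster describes can be illustrated with a toy filter. This is only a sketch: the `allow_request` function and the patterns are invented for illustration, and a real WAF uses far richer detection than a few regular expressions.

```python
import re

# Toy "WAF rules": a few signatures for common web probes.
# Real WAFs use much more sophisticated, context-aware detection.
SIGNATURES = [
    re.compile(r"union\s+select", re.IGNORECASE),  # SQL injection probe
    re.compile(r"<script\b", re.IGNORECASE),       # reflected XSS probe
    re.compile(r"\.\./"),                          # path traversal probe
]

def allow_request(path, query):
    """Return True if the request matches no known attack signature."""
    payload = f"{path}?{query}"
    return not any(sig.search(payload) for sig in SIGNATURES)

print(allow_request("/search", "q=shoes"))                    # True
print(allow_request("/search", "q=1 UNION SELECT password"))  # False
```

Feeding threat intelligence to a WAF, as the excerpt notes, amounts to keeping a list like `SIGNATURES` continuously updated from external feeds.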

A successful DevOps program will have automation as another primary component. As code is committed to a central system by developers, an automated process looks at the submissions in the repository and builds a new version of the software.

The security protocol of DevOps initiatives will also need to be automated in order to keep up with increased volumes of both internal development and cyberattacks. Security automation capabilities are becoming more sophisticated through the use of artificial intelligence and machine learning. Eventually, this will allow for a fully automated, secure DevOps process, with the ultimate goal of enabling intent-based security.
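As a rough illustration of an automated security gate in a CI/CD pipeline, the sketch below fails a build when a scan report contains findings at or above a chosen severity. The report format, the `gate` function, and the severity ranking are all assumptions for illustration, not any particular scanner's output.

```python
# Severity ranking used by this sketch (an assumption, not a standard)
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return True if the build should be blocked by the security gate."""
    threshold = SEVERITY_RANK[fail_at]
    blocked = [f for f in findings
               if SEVERITY_RANK.get(f["severity"], 0) >= threshold]
    for f in blocked:
        print(f"BLOCKING: {f['id']} ({f['severity']})")
    return bool(blocked)

# Example report, in the shape a scanner might emit
report = [
    {"id": "CVE-2017-5638", "severity": "critical"},  # Apache Struts
    {"id": "STYLE-001", "severity": "low"},
]
should_block = gate(report, fail_at="high")  # True: a critical finding exists
```

In a real pipeline the same check would run automatically on every commit, turning the "security protocol" into just another automated build step.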

Security and Agility

Financial services firms have a great deal to gain by adopting the DevOps approach, including remaining competitive and defending against cybercrime. When software has such a short development cycle, complete security cannot be guaranteed. For this reason, financial services firms must integrate additional network-level security controls. These controls extend security from mobile devices and IoT through the network core and out to the cloud. As financial services firms move forward with their DevOps process, the above recommendations will help construct an intelligent, integrated security system that allows agility at the same time.

The blog post missed another important ingredient: Information Security teams can learn and adopt the tenets of Agile and DevOps (see, for example, the work by Fitzer). As Forster writes,

The irony here is that DevOps has also gained ground among malicious actors. New malware releases often move faster than security does. Therefore, the continuous integration and continuous deployment (CI-CD) that DevOps creates is necessary in order to keep pace with malicious actors.

The Takeaways

  1. InfoSec GRC and security architecture teams need to review, and update as needed, the latest development procedures.
  2. InfoSec needs to become agile too.

 

What lessons should we have learned from 2017?

Sara Peters wrote an interesting post, “17 Things We Should Have Learned in 2017, But Probably Didn’t.”

Below is the summary:

1. You need to know what data you have, and where it is. I agree it is the right thing to do, but it is no small undertaking to complete and maintain in a large and dynamic environment.

2. How we respond to incidents is just as important as how we prevent them.

3. Social Security Numbers should not be used for anything but Social Security. Yes, but legacy applications and processes may still leverage the SSN as a unique identifier.

4. Radio frequency communications need to be secured.

5. ICS/SCADA needs special security treatment.

6. You need to deploy patches faster … no, really.

Equifax was compromised first in May, via the critical Apache Struts vulnerability disclosed in March. When news broke, attackers were already attempting to exploit the vuln and researchers urged anyone using Struts2 to upgrade their Web apps to a secure version. Clearly Equifax did not move fast enough.

In fairness, patching is hard, and March to May isn’t that much time for an enterprise Equifax’s size to complete the process. Organizations nevertheless must inject some jet fuel into their patch management processes because the vendors sometimes take their sweet time issuing fixes. Microsoft, for example, didn’t patch a Windows SMB bug until a month after an exploit for it, EternalBlue, was publicly disclosed. The EternalBlue exploit, which enables malware to quickly spread through a network from just one infected host, was soon used in both the WannaCry attacks in May and the NotPetya attacks in June. Despite the terrifying (and highly publicized) nature of WannaCry and NotPetya, a scanner created by Imperva researchers found in July that one of every nine hosts (amounting to about 50,000 computers from what they’d scanned) was still vulnerable to this exploit

7. The NSA might not be the best place to put your secret stuff.

8. Cybersecurity failures are beginning to have significant market impacts … sort of. I like this comment too:

Security researchers are investigating other ways to use market pressures to improve cybersecurity themselves. Meanwhile, organizations are getting smacked by regulatory fines and legal settlements, like Anthem Healthcare’s record-setting $115 million to settle its 2015 data breach.

9. Integrity of data (and the democratic process) can be disrupted by more than “hacking.” I agree. In healthcare, we have been focusing a lot on the confidentiality and availability of systems and data. As more medical and personal/wearable devices become interconnected and an integral part of providing healthcare, the integrity of the data and devices will be critical.

10. You really should refresh your DDoS defense and preparation plan. To be effective, companies also need to refresh their business impact analysis data. How badly will your operations, legal obligations or regulatory requirements be affected if an externally facing patient portal is not available for 15 minutes? What about 30 minutes? 2 hours? 1 day?
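One way to act on those questions is to attach rough numbers from the business impact analysis to each outage duration. The per-minute cost and reporting threshold below are invented placeholders; the point is the shape of the calculation, not the figures.

```python
# Hypothetical figures from a business impact analysis (BIA)
COST_PER_MINUTE = 250           # assumed operational cost of portal downtime
REGULATORY_THRESHOLD_MIN = 60   # assumed threshold for a reportable outage

def outage_impact(minutes):
    """Estimate the cost and regulatory exposure of an outage of given length."""
    return {
        "minutes": minutes,
        "cost": minutes * COST_PER_MINUTE,
        "reportable": minutes >= REGULATORY_THRESHOLD_MIN,
    }

for m in (15, 30, 120, 1440):   # 15 min, 30 min, 2 hours, 1 day
    print(outage_impact(m))
```

Even a crude table like this makes it easier to decide how much DDoS mitigation capacity is actually worth buying.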

11. You can’t escape the effects of political and civil unrest.

12. Infosec workforce diversity is something you should actually care about.

13. Bitcoin is awesome, once you take away the part about currency. I absolutely agree, and I am excited about the comment that follows too. I want to explore this topic in future posts.

 

…But the best thing about it is the platform upon which it’s built: Blockchain. The distributed ledger technology essentially allows for the creation of a list of records, each record cryptographically linked and secured, thereby enabling greater data integrity for all manner of applications. JP Morgan’s CEO Jamie Dimon called Bitcoin “stupid,” but his company got behind Blockchain in a big way this year, announcing a Blockchain-based cross-border payment network; IBM released a similar offering.
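The “cryptographically linked and secured” records the excerpt describes can be sketched in a few lines: each record’s hash covers both its payload and the previous record’s hash, so altering any earlier record invalidates every later link. This is a minimal illustration only, not a production blockchain (no consensus, no distribution).

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers the payload and the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every link; any tampering breaks the chain."""
    for i, rec in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"payload": rec["payload"], "prev": prev},
                          sort_keys=True)
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

ledger = []
add_record(ledger, "payment A->B 100")
add_record(ledger, "payment B->C 40")
print(verify(ledger))                      # True
ledger[0]["payload"] = "payment A->B 999"  # tamper with an early record
print(verify(ledger))                      # False: the chain no longer verifies
```

That tamper-evidence is the data-integrity property that makes the technology interesting for applications such as the cross-border payment networks mentioned above.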

14. Encryption is great … except when it isn’t.

15. Firmware is your problem too.

16. No, malware does not mean no problem.

17. I want to include the last item in full. This item deserves a separate blog post too.

Getting stabbed in the side is a bigger problem than getting stabbed in the back. We’ve known for years that attackers can break in through one poorly secured endpoint and laterally move through your network until they access the crown jewels from the inside. While attackers continue to get better at lateral movement, most organizations haven’t done anything to get better at preventing it. With better-managed access controls and microsegmentation, and the use of an automated lateral movement tool to help good guys (and others) quickly find the most vulnerable pathways, organizations might begin to help defend themselves against a variety of attacks, including nightmares like an Active Directory botnet.
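The “most vulnerable pathways” idea can be illustrated as a shortest-path search over a host reachability graph. The network below is hypothetical; a real tool would derive reachability from firewall rules and access controls rather than a hand-written dictionary.

```python
from collections import deque

def shortest_path(graph, start, target):
    """Breadth-first search over an adjacency dict; returns the shortest
    hop path from start to target, or None if no path exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical network: which hosts can reach which, per current rules
reachable = {
    "workstation": ["file-server", "print-server"],
    "file-server": ["db-server"],
    "print-server": [],
    "db-server": ["crown-jewels"],
}
print(shortest_path(reachable, "workstation", "crown-jewels"))
# ['workstation', 'file-server', 'db-server', 'crown-jewels']

# Microsegmentation: cutting the file-server -> db-server hop removes the path
reachable["file-server"] = []
print(shortest_path(reachable, "workstation", "crown-jewels"))  # None
```

Finding the attack path and then showing that a single segmentation rule eliminates it is exactly the kind of evidence that helps prioritize lateral-movement defenses.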

The Takeaways

  1. Review the blog post and update your plans accordingly.

What is the Trusted Exchange Framework?

On January 5, 2018, the Office of the National Coordinator for Health Information Technology (ONC) released the “Draft Trusted Exchange Framework.” Per ONC’s website, the framework:

outlines a common set of principles for trusted exchange and minimum terms and conditions for trusted exchange. This is designed to bridge the gap between providers’ and patients’ information systems and enable interoperability across disparate health information networks (HINs).

The framework was a response to Congress.

In the 21st Century Cures Act (Cures Act), Congress identified the importance of interoperability and set out a path for the interoperable exchange of Electronic Health Information. Specifically, Congress directed ONC to “develop or support a trusted exchange framework, including a common agreement among health information networks nationally.”

Marianne Kolbasuk McGee at www.healthcareinfosecurity.com provides a good analysis here, including the security components that go beyond HIPAA requirements. I will review the draft proposal and provide comments.

The Takeaways

  1. Review the draft Trusted Exchange Framework, provide feedback to the ONC, and alert your partners in compliance, legal, privacy and InfoSec GRC about this new framework.

Is the American Hospital Association suggesting manufacturer liability for vulnerabilities in products?

I found a letter from the American Hospital Association (AHA) to the FDA while reading a blog entry at NH-ISAC. The NH-ISAC blog concludes:

Is the American Hospital Association suggesting manufacturer liability for vulnerabilities in products?

Here is an excerpt from the AHA letter:

….. recent ransomware attack highlighted the extent to which medical devices are vulnerable and can create high-risk areas for the security of hospitals’ overall information systems. The FDA must provide greater oversight of medical device manufacturers with respect to the security of their products. Manufacturers must be held accountable to proactively minimize risk and continue updating and patching devices as new intelligence and threats emerge. They share responsibility for safeguarding confidentiality of patient data, maintaining data integrity and assuring the continued availability of the device itself. While the FDA has released both pre- and post-market guidance to device manufacturers on how to secure systems, the device manufacturers have yet to resolve concerns, particularly for the large number of legacy devices still in use.

…Moreover, AHA members report that many manufacturers were slow to provide needed information about their products during the WannaCry attack. This includes information on the software components embedded in devices, the existence of vulnerabilities and the availability of patches. Furthermore, the mitigating steps recommended by manufacturers – such as taking a device off-line, putting it behind a firewall or further segmenting the network – had significant, and sometimes expensive, operational or patient care impacts. We recommend that the FDA proactively set clear measurable expectations for manufacturers before incidents and play a more active role during cybersecurity attacks. This active role could include, for example, issuing guidance to manufacturers outlining the expectations for supporting their customers to secure their products.

The Takeaways

  1. If you are in the healthcare sector, share the letter from the AHA to the FDA with your key medical device manufacturers for a response, and set up a lessons-learned session on WannaCry.

Why does traditional vulnerability management fall short?

Read this interesting blog post on threat-centric vulnerability management by Ravid Circus at SC Magazine. Please note the “proper” English!

Why traditional vulnerability management falls short

Most vulnerability management programmes are based on the Common Vulnerability Scoring System (CVSS). This system was developed more than a decade ago and was designed to help organisations prioritise patching. CVSS had intentions of providing “temporal” scores incorporating up–to–date threat intelligence and vendor input, including on available fixes, but this was never fully implemented. CVSS also could not accurately determine “environmental” scores of the potential impacts within an organisation.

I agree.  It is very difficult to operationalize and connect the actual applicable vulnerabilities with exploit data.

So, unfortunately, traditional vulnerability management relies on CVSS base scores of intrinsic properties of the vulnerability. The problem with this score is that vulnerabilities don’t exist in a vacuum. Changes within the threat landscape and within the organisation in which they exist impact the threat a vulnerability poses. Without this larger context, remediation priorities can be skewed, focusing precious resources on relatively low–risk vulnerabilities while leaving those more likely to be used in an attack within reach of threat actors.

A new approach: threat–centric vulnerability management

To stay protected in the era of distributed cyber-crime, organisations need to take their vulnerability management programme to the next level. Threat–centric vulnerability management (TCVM) is a new approach that collects data from a wide range of sources, including threat intelligence; uses modelling and simulation to analyse vulnerabilities within their unique environment and prioritise them accurately; and provides remediation guidance based on available resources.

I am not sure it is new; it seems more like a progression in maturity.

Internally, TCVM collects data on known vulnerabilities within the organisations, asset information, patch levels and the state of network topology and security controls in place. It builds this data into a model to understand vulnerability exposure, attack paths (including of multi–step attacks), potential business impacts, and remediation options beyond patching, such as rule changes or IPS signatures.

Externally, TCVM correlates this information with CVSS scores and, more importantly, security–analyst verified threat intelligence from dozens of security data feeds and investigations in the dark web. This highlights vulnerabilities with available exploits, such as those with a POC, and those observed to be actively exploited in the wild. It also shows which vulnerabilities are being packaged in distributed crimeware, such as ransomware, exploit kits, etc.

With this complete context, remediation actions can be aligned with the threat level a vulnerability poses — not just a generic CVSS score. Those that are being actively exploited or exposed within the network pose an imminent threat and need to be dealt with immediately. Other vulnerabilities pose a potential threat and can be dealt with over time, but need to be monitored for changes in the threat landscape or network exposure.
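A simplified version of that prioritisation logic is sketched below: weight a CVSS base score by threat context and asset criticality so that an actively exploited medium-severity flaw can outrank an unexploited critical one. The weights and field names here are invented for illustration; TCVM products use far richer models.

```python
# Hypothetical TCVM-style scoring: CVSS weighted by threat context.
# The multipliers are illustrative assumptions, not vendor values.
def tcvm_priority(vuln):
    score = vuln["cvss"]
    if vuln.get("exploited_in_wild"):
        score *= 2.0   # imminent threat: active exploitation observed
    elif vuln.get("public_exploit"):
        score *= 1.5   # potential threat: PoC or packaged exploit available
    if vuln.get("asset_critical"):
        score *= 1.5   # exposure on a business-critical asset
    return score

vulns = [
    {"id": "A", "cvss": 9.8, "public_exploit": False, "asset_critical": False},
    {"id": "B", "cvss": 6.5, "exploited_in_wild": True, "asset_critical": True},
]
ranked = sorted(vulns, key=tcvm_priority, reverse=True)
print([v["id"] for v in ranked])  # ['B', 'A']: context outranks raw CVSS
```

Vulnerability B (CVSS 6.5, exploited in the wild, on a critical asset) scores 19.5 and jumps ahead of A (CVSS 9.8, no known exploitation), which is precisely the reordering the article argues for.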

Automation and centralisation for intelligent defence

Because of the scale and complexity of data the TCVM approach requires, tasks have to be automated. From data collection to contextual analysis, these processes are essentially impossible to perform manually, especially in an enterprise network. While tools are available for automating each step within the TCVM workflow, there are advantages to efficiency — and ROI — of centralising management on a single platform.

With automation and centralisation, vulnerability management and incident response teams can dedicate even more resources to acting on intelligence rather than gathering and analysing it. The systematic approach of TCVM ensures that actions are informed with the full context surrounding a vulnerability, so organisations can take on attackers proactively and keep their networks secure from the distributed cyber-crime threat.

Yes to automation and centralisation. Intelligent defence is better than “dumb” defence, right?

The Takeaways

  1. Think about how to design and implement a foundation of technology and processes that fosters automation and centralization.

What are some emerging trends in cybersecurity?

I read an interesting blog post on ten cyber security trends for organizations to consider. Please read the entire blog post here. Below (copied directly from the blog post) are the potential trends that I find the most interesting and accurate.

A new model of cyber security will emerge
As firms invest more in cloud computing, a new model for cyber security is emerging. Increasingly, firms can look to cloud providers to embed good IT security, but firms still own the problem of setting their requirements and determining just who can access what. The shift towards DevOps and agile development build on these more flexible infrastructures, but also demand new ways of embedding security into the development lifecycle and an equally agile test regime. Security can no longer engage at the end of development cycles and, if it does, it risks being seen as a blocker rather than an enabler.

Automation of controls and compliance will be the order of the day
Firms are coming under pressure to contain their burgeoning cyber security budgets. Manpower-intensive compliance processes are beginning to give way to continuous testing and controls monitoring, helping firms build a more accurate picture of their IT estate – helping the CIO as well as the CISO. The growing demand for supply chain security and third party assurance will also lead to a burgeoning industry of testing firms offering risk scoring and testing services for those third parties.

Digital channels will demand customer centric security
Digital channels are becoming more and more sophisticated, demanding new consumer identity and access management approaches, dynamic transaction risk scoring and fraud controls, and an emphasis on usable non-intrusive security measures which don’t impact the consumer’s experience. Open Banking and the arrival of Payment Services Directive 2 will drive richer interactions between a new ecosystem of payment service providers and the banks who handle our money. A new world of open API is on the horizon, but concerns over criminal exploitation of these rich interfaces abound.

Resilience and speed matters
Regulators are focusing on resilience – the ability of an organization to anticipate, absorb and adapt to disruptive events – whether cyber-attack, technology failure, physical events or collapse of a key supplier. Exercises and playbooks are in fashion as firms try to build the muscle memory they need to respond to a cyber-attack quickly and confidently, while cyber insurance is finding its place not just as a means of cost reimbursement but as a channel for access to specialist support in a crisis.

The Takeaways

  1. Review and discuss the above trends and adjust your strategy as appropriate for your organization.

 

What are your top 10 vulnerabilities?

Of course, there is nothing new per se below, but it is a good refresher from the National Law Review website. I find #6 and #9 really interesting.

  1. No, or inadequate, security program in place.
  2. No recently conducted vulnerability and risk assessments.
  3. No evaluation of weaknesses or gaps in your controls in light of statutory requirements and potential common law claims.
  4. No formalized patching process or inadequate enforcement of the current process to ensure its systematic implementation.
  5. No insider threat program.
  6. Lack of connection to the cybersecurity community.
  7. Lack of stringent configuration management.
  8. Lack of stringent remote access management.
  9. Failing to consider available cybersecurity data.
  10. No incident response plan in place.

 

The Takeaways

  1. Compare the plans in your security program against the above items at a high level.
  2. If #1 does not exist, fight for and win the budget to complete an IT security function maturity assessment by Deloitte, PwC, EY or KPMG.

How can we improve IoT security?

I read an interesting article on securityweek.com by Lance Cottrell. I think that the following comment is spot on:

It is easy to vilify the IoT makers, but they are simply responding to the constraints and market realities in front of them. Moral persuasion will not meaningfully change their behavior. To get better IoT security, that needs to actually be a priority for the business, and that means changing the regulatory and liability landscape to make it so.

 

This applies not only to IoT makers. What about biomedical device makers? What about manufacturers of computer software in general?

The Takeaways

  • In the absence of regulation, you need to collaborate with your Legal, Risk Management and IT teams to encode your standards into the terms of legal contracts. These terms can be negotiated, and exceptions granted (and monitored).