
What is the “new” value proposition for Enterprise Agreements?

Does your company have legacy systems or systems running unsupported operating systems? Of course, you understand the compliance and cyber risk of having systems with unsupported operating systems in your environment. For example, you are probably aware of the 2014 $150,000 HHS settlement with Anchorage Community Mental Health Services. The resolution agreement states:

From January 1, 2008, until March 29, 2012, ACMHS failed to implement technical security measures to guard against unauthorized access to e-PHI that is transmitted over an electronic communications network (See 45 C.F.R. § 164.312(e)) by failing to ensure that firewalls were in place with threat identification monitoring of inbound and outbound traffic and that information technology resources were both supported and regularly updated with available patches.

The WannaCry malware is a good example of a real threat that exploits systems running unsupported operating systems.

So, why not replace these archaic, “thorn-in-your-side” systems? Here are some of the reasons: 1) Cost. In some instances, the vendor will not allow you to update just the operating system; you have to buy an entirely new solution, which can carry a price tag into the six figures. 2) Lack of expertise on the system. Technical expertise on the unsupported OS, or on the application that runs on top of it, may be absent or sparse.

So, what are some of your options, or takeaways, in the short term?

  1. Define your risk tolerance.
  2. Ensure your asset inventory includes systems with unsupported operating systems. The inventory needs to identify the IT and business owners of each asset, and it should be dynamic, proactively discovering new systems with unsupported operating systems or software (a minimal discovery sketch follows this list).
  3. Understand and document what compensating controls can be applied to these systems.
  4. Involve the IT and business owners of these systems in the exemption process. Ensure that the risks are explained in their language.
  5. Ensure there is a policy and legal/contract language that addresses systems with unsupported software, and ensure that this policy and legal language is known and understood throughout the organization.
  6. Potentially revise your thinking about leveraging Enterprise Agreements.
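
To make the inventory item above concrete, here is a minimal sketch of a report that flags assets running end-of-life operating systems. The file name, column names, and the EOL list are assumptions for illustration only; in practice you would feed this from whatever CMDB or discovery tooling you already have.

```python
# unsupported_os_report.py -- illustrative sketch; the file name, column names,
# and the end-of-life list below are assumptions, not part of the original post.
import csv

# Hypothetical set of operating systems no longer supported by the vendor.
END_OF_LIFE_OS = {
    "Windows XP",
    "Windows Server 2003",
    "Windows Server 2008",
}

def flag_unsupported(inventory_path: str) -> list[dict]:
    """Return inventory rows whose OS appears on the end-of-life list."""
    flagged = []
    with open(inventory_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row.get("os", "").strip() in END_OF_LIFE_OS:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    # Expected columns (assumed): hostname, os, it_owner, business_owner
    for asset in flag_unsupported("asset_inventory.csv"):
        print(f'{asset["hostname"]}: {asset["os"]} '
              f'(IT owner: {asset["it_owner"]}, business owner: {asset["business_owner"]})')
```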


Ben Boswel, at SC Magazine, wrote the following on EAs:

One way to tackle this problem is to change the relationship between organisations and the software and hardware providers they buy from. Many rely on Enterprise Agreements (EAs) whereby vendors agree to sell a specified amount of software and hardware over a certain timeframe. But EAs have been evolving in recent years to offer more support to customers. Many EAs have expanded to include security and software updates.

Large and complex organisations need EAs with a Software as a Service Offering, a contract between customer and supplier whereby hardware and software are fully supported on a rolling basis.

Instead of companies simply buying IT infrastructure from a provider and then having to update, maintain and replace it themselves, under an evolved EA this is largely the vendor’s responsibility. To ensure the best user experience and encourage users to renew, it is always in a vendor’s interests to ensure that their customers are making use of the most up-to-date versions of their software. Vendors can then manage the continued maintenance of these systems. This takes away the burden of domestically maintaining systems over a vast and sprawling business network of different systems.

Although used in many areas, in recent years EAs have evolved to better accommodate the changing needs of businesses, who are looking for increasing flexibility. Many EAs now include security, network and other hardware support in the same package as well as being available on a pay-by-usage policy. This means firms can accelerate innovation into their IT systems through just one agreement.

I shared this view of EAs with a VP over IT Infrastructure and Operations. The VP's response:

Didn’t that used to be called a “Managed Service”? Lots of pro(s) and con(s)…

Very true.

The Takeaways:

  1. Review and act on the short-term takeaways above.
  2. Contemplate how to leverage and update the use of EAs.

Do you know your data breach notification requirements?

This is a difficult question. Snell & Wilmer have launched an interactive data breach notification site to help organizations answer it. No doubt the site is a marketing tool, but this law firm is contributing to the community.

Here is an excerpt from S&W:

By clicking on a state, you will see a summary of the key features of its notification statute; highlights include PII and breach definitions, respectively, along with notification requirements, including the circumstances in which the state Attorney General’s Office or a similar consumer protection agency is required to be notified as well as timing requirements for the notifications to individuals. We’ve also included links to both the data breach statutes themselves and relevant state agency websites.  Additionally, the second tab on the Data Breach Map provides a visual summary for those states that require notification when PII has merely been accessed as compared to those states that only require notification when PII has been acquired.

The Takeaways

  • Ensure and invest in your relationships with peers in the compliance and privacy departments.
  • Ensure that your cyber incident response and management team is aware of data breach notification requirements and has incorporated these timelines into its playbooks (see the sketch after this list).
  • Ensure that you socialize data breach notification requirements and timelines with your IT peers.
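
As a companion to the playbook takeaway above, here is a minimal sketch of how notification deadlines could be encoded so responders can compute a due date from the discovery time. The GDPR value reflects the 72-hour rule; the state entry is a placeholder only. Verify every jurisdiction against the actual statute (the Snell & Wilmer map is a good starting point).

```python
# notification_deadlines.py -- illustrative only; the hour values below are
# placeholders, NOT legal guidance. Verify each jurisdiction's statute.
from datetime import datetime, timedelta

NOTIFICATION_WINDOW_HOURS = {
    "GDPR": 72,           # supervisory authority notification window
    "STATE_EXAMPLE": 720,  # placeholder; many state statutes say "without unreasonable delay"
}

def notification_due(discovered_at: datetime, jurisdiction: str) -> datetime:
    """Return the notification deadline for a breach discovered at `discovered_at`."""
    hours = NOTIFICATION_WINDOW_HOURS[jurisdiction]
    return discovered_at + timedelta(hours=hours)

if __name__ == "__main__":
    discovered = datetime(2018, 5, 30, 9, 0)
    print("GDPR notification due by:", notification_due(discovered, "GDPR"))
```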

Why complete network segmentation?

I have network segmentation on the brain. Apparently, I am not the only one. Jack Koons posted a good summary. Please see excerpts below.

The Why!

…  Organizations can achieve network resiliency and survivability through a strategy embracing network segmentation in general, and micro-segmentation in particular.

Network segmentation removes the gooey inside, simultaneously reducing mean time to detection and mean time to remediation – the two most important metrics for security incidents. These steps make it very hard for any adversary to gain, maintain and further develop access and move freely across a network. In fact, this will significantly reduce attacker ROI, often making them look elsewhere for an easier target.

Segmentation is the solution to this problem with a particular focus on the emerging world of micro-segmentation. In this model, security profiles are adopted closer to the endpoint, thus replacing the traditional concept of a hardened single perimeter, and providing a dynamic and scalable perimeter wrapped around every workload.

Deployed correctly – particularly when combined with software defined networking and encryption – microsegmentation allows for the presentation of true “zero trust models” across the enterprise. This protects critical workload and business processes while reducing reliance on overly complex hardware-based infrastructure and rulesets (which bring their own vulnerabilities to the mix).

The key is to limit the extent by which the attacker retains any advantage inside the network, regain control and initiative, and reduce the impact of any attack across the enterprise. It’s a fact of life today that organizations will eventually be hit with a cyberattack.   But with the appropriate segmentation, they will survive if they are prepared and resilient
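
To make the micro-segmentation idea tangible, here is a minimal sketch of a default-deny, per-workload flow policy. The workload names, ports, and allowed flows are hypothetical; a real deployment would express this in your SDN or host-firewall tooling rather than in application code.

```python
# microseg_policy.py -- a minimal sketch of default-deny, per-workload policy.
# The workload names and flows below are hypothetical examples.
ALLOWED_FLOWS = {
    ("web-frontend", "app-tier", 8443),
    ("app-tier", "payments-db", 5432),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: a flow is permitted only if explicitly allowlisted."""
    return (src, dst, port) in ALLOWED_FLOWS

# Example: lateral movement from the web tier straight to the database is blocked.
assert is_allowed("web-frontend", "app-tier", 8443)
assert not is_allowed("web-frontend", "payments-db", 5432)
```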

The Takeaways

  • Have you outlined your network segmentation strategy?
  • Is your company ready to support SDN technology from an operational support perspective? Complete a gap analysis to find out.

Is Compliance Synonymous With Security?

In honor of GDPR, Josh Lefkowitz posted a solid reminder about compliance not equating to security.

In fact, I would argue that compliance should be an outcome of a robust InfoSec program.

But compliance with regulations is one business driver for security. If there is an incident at your company and you cannot demonstrate due diligence or compliance with the most applicable standard or regulation, your company’s position and negotiating posture are weakened.
Here is Lefkowitz’s list:

  • Compliance does not guarantee security
  • Compliance standards are not comprehensive
  • Threats evolve faster than compliance standards do

The takeaways

  • Confirm whether GDPR applies to your organization. If you don’t know, you had better find out, since it went into effect on May 25, 2018.
  • Use the GDPR enforcement date to manage your leadership’s expectations by discussing the differences and overlaps between InfoSec compliance and a comprehensive InfoSec program (the pure-InfoSec, cynical side of my brain is whispering “real or meaningful security”).

Are you aligning your company’s controls with the business to ensure that your company is not paying $3 million to protect a $51K asset?

Of course, I think the majority of the people reading this blog post wouldn’t find the situation alluded to in its title a dilemma. This is InfoSec 101 material. However, I read an interesting blog post by Kevin Townsend at securityweek.com that makes you think about this building-block InfoSec principle:

Over the course of the last week, it has become apparent that the City of Atlanta, Georgia, has paid out nearly $3 million dollars in contracts to help its recovery from a ransomware attack on March 22, 2018 — which (at the time of writing) is still without resolution.

Precise details on the Atlanta contracts are confused and confusing — but two consistent elements are that SecureWorks is being paid $650,000 for emergency incident response services, and Ernst & Young is being paid $600,000 for advisory services for cyber incident response. The total for all the contracts appears to total roughly $2.7 million. The eventual cost will likely be more, since it doesn’t include lost staff productivity nor the billings of a law firm reportedly charging Atlanta $485 per hour for partners, and $300 per hour for associates. The ransom demand was for around $51,000.

The ransomware used in the attack was SamSam. In February this year, SecureWorks published a report on SamSam and attributes it to a group it knows as Gold Lowell. Gold Lowell is unusual in its ransomware attacks since it typically compromises its victim networks in advance of encrypting any files. 

….

However, the few facts that are known raises a very complex ethical issue. Atlanta seems to have chosen to pay nearly $3 million of taxpayer money rather than just $51,000, possibly on a point of principle. That principle is supported by law enforcement agencies around the world who advise that ransoms should not be paid. In this case, the sheer disparity between the cost of the ransom and the ransomware restitution (more than 50-to-1 and growing), all of which must be paid with someone else’s money, makes it reasonable to question the decision.


Actions

  1. Have you decided what you will do during a ransomware attack? Do you have supporting procedures?
  2. Are you aligning your company’s controls with the business to ensure that your company is not paying $3 million to protect a $51K asset (unless your company supports this type of wasteful behavior)? A simple annualized-loss-expectancy check is sketched below.
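
As referenced in item 2, here is a minimal sketch of the classic annualized-loss-expectancy (ALE) check: don’t spend more on a control than the risk it removes. The dollar figures echo the Atlanta example and are illustrative only.

```python
# control_spend_check.py -- annualized loss expectancy (ALE) sketch.
# The asset/control figures below are illustrative only.
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO (the classic quantitative risk formula)."""
    return single_loss_expectancy * annual_rate_of_occurrence

def control_is_justified(annual_control_cost: float, ale_before: float, ale_after: float) -> bool:
    """A control pays for itself only if the risk it removes exceeds its cost."""
    return (ale_before - ale_after) > annual_control_cost

# Hypothetical example: spending $3,000,000 a year to protect against a
# $51,000 loss expected once a year is not justified.
print(control_is_justified(3_000_000, ale(51_000, 1.0), 0.0))  # False
```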

What are some lessons from the five biggest breaches of 2017?

I came across this blog posting from Continuity Central.  It is a good post because it is succinct.

  1. NHS – Based on knowledge in the public domain, we believe the root cause of the vulnerability relates to an ‘enhanced data sharing’ option. If enabled, that data can be accessed by hundreds of thousands of other users of the same system. This is a common oversight, as organizations tend to focus on their web application testing and security but fail to extend this security to their desktop applications. We regularly find vulnerabilities like this when we’re auditing desktop applications and the communication mechanisms that support them. By extending the same care to both web and desktop applications, these vulnerabilities can be minimised.
  2. Equifax – This breach highlights how critically important it is for all organizations to be on top of their vulnerability management processes, ensuring that critical patches for software and systems are applied as soon as possible. Regular penetration testing and vulnerability scanning feed into a central vulnerability management system within the wider governance, risk and compliance (GRC) processes. They’re fundamental to help mitigate the risk of these kinds of breaches occurring. After all, if you’re not aware of your vulnerabilities and risks, you can’t treat them.
  3. Yahoo – …these types of breaches usually originate from an exploited website vulnerability. Preventing such a hack starts with using controls that identify vulnerabilities. However, it’s also critical that incident response processes are in place to identify attacks in progress.
  4. Uber – …beyond securing vulnerable information, communication is key. Uber tried to brush the breach under the carpet but making your customers aware of a breach as soon as possible is the best response. This will be critical when the General Data Protection Regulation becomes enforceable. Under the regulation, organizations must notify of the breach to the relevant supervisory authorities and affected parties within 72 hours of its discovery, as failure to do so could result in fines up to €20m or 4 percent of world-wide revenue, whichever is greater.
  5. Alteryx – A cyber risk researcher revealed that the data analytics software company had left a 36-gigabyte database exposed in an Amazon Web Services storage bucket. Alteryx’s unsecured database was discovered during a routine search of Amazon Web Services storage buckets, with the breach affecting 123 million households in the USA. Configuration-related vulnerabilities like this are common, and AWS storage buckets that have not been protected correctly with the right controls are frequently discovered. According to The Register, information from Accenture, Verizon, Viacom, and the US military had been inadvertently left online due to incorrect configuration. When storing sensitive information in the public cloud, it’s vital to implement best-practice security measures. All storage buckets must be configured correctly, with procedures, checks and balances in place to make sure that systems can’t go live without being properly audited. Each configuration must be checked against potential vulnerabilities, and it is best practice to ensure that the configuration is peer reviewed before the system goes live. (A hedged bucket-audit sketch follows this list.)
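
Following up on the Alteryx item, here is a hedged sketch of a bucket audit using boto3 that flags S3 buckets without a public access block. It assumes AWS credentials are configured and is only a starting point; it does not check bucket policies or object ACLs.

```python
# s3_public_check.py -- a hedged sketch using boto3 to flag buckets without
# a public access block. Requires AWS credentials; adapt to your environment.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                flagged.append(name)  # some public-access settings are still open
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)  # no public access block configured at all
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print("Review bucket configuration:", name)
```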


How Smart, Connected Products Are Transforming Companies – A New Architecture

I randomly (can you say “squirrel”) came across an article entitled “How Smart, Connected Products Are Transforming Companies.” The article describes an interesting architecture, a new technology stack, for handling smart, connected products. It requires companies to build and support an entirely new technology infrastructure. The entire article is a really good read.

[Figure from the article: the new technology stack for smart, connected products]

The authors write about the need for security:

Until recently, IT departments in manufacturing companies have been largely responsible for safeguarding firms’ data centers, business systems, computers, and networks. With the advent of smart, connected devices, the game changes dramatically. The job of ensuring IT security now cuts across all functions.

Every smart, connected device may be a point of network access, a target of hackers, or a launchpad for cyberattacks. Smart, connected products are widely distributed, exposed, and hard to protect with physical measures. Because the products themselves often have limited processing power, they cannot support modern security hardware and software.

Smart, connected products share some familiar vulnerabilities with IT in general. For example, they are susceptible to the same type of denial-of-service attack that overwhelms servers and networks with a flood of access requests. However, these products have major new points of vulnerability, and the impact of intrusions can be more severe. Hackers can take control of a product or tap into the sensitive data that moves between it, the manufacturer, and the customer. On the TV program 60 Minutes, DARPA demonstrated how a hacker could gain complete control of a car’s acceleration and braking, for example. The risk posed by hackers penetrating aircraft, automobiles, medical equipment, generators, and other connected products could be far greater than the risks from a breach of a business e-mail server.

Customers expect products and their data to be safe. So a firm’s ability to provide security is becoming a key source of value—and a potential differentiator. Customers with extraordinary security needs, such as the military and defense organizations, may demand special services.

Security will affect multiple functions. Clearly the IT function will continue to play a central role in identifying and implementing best practices for data and network security. And the need to embed security in product design is crucial. Risk models must consider threats across all potential points of access: the device, the network to which it is connected, and the product cloud. New risk-mitigation techniques are emerging: The U.S. Food and Drug Administration, for example, has mandated that layered authentication levels and timed usage sessions be built into all medical devices to minimize the risk to patients. Security can also be enhanced by giving customers or users the ability to control when data is transmitted to the cloud and what type of data the manufacturer can collect. Overall, knowledge and best practices for security in a smart, connected world are rapidly evolving.

Data privacy and the fair exchange of value for data are also increasingly important to customers. Creating data policies and communicating them to customers is becoming a central concern of legal, marketing, sales and service, and other departments. In addition to addressing customers’ privacy concerns, data policies must reflect ever-stricter government regulations and transparently define the type of data collected and how it will be used internally and by third parties.

Shared Responsibility for Security.

In most companies, executive oversight of security is in flux. Security may report to the chief information officer, the chief technology officer, the chief data officer, or the chief compliance officer. Whatever the leadership structure, security cuts across product development, dev-ops, IT, the field service group, and other units. Especially strong collaboration among R&D, IT, and the data organization is essential. The data organization, along with IT, will normally be responsible for securing product data, defining user access and rights protocols, and identifying and complying with regulations. The R&D and dev-ops teams will take the lead on reducing vulnerabilities in the physical product. IT and R&D will often be jointly responsible for maintaining and protecting the product cloud and its connections to the product. However, the organizational model for managing security is still being written.
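
The FDA example above (layered authentication and timed usage sessions), along with the idea of letting users control when data is transmitted, can be illustrated with a small device-side sketch. The timeout value and class names are assumptions for illustration, not anything mandated by the article or the FDA.

```python
# device_session.py -- illustrative sketch of a timed usage session plus a
# user consent gate before telemetry leaves the device. Names are hypothetical.
import time

SESSION_TIMEOUT_SECONDS = 15 * 60  # assumed policy value

class DeviceSession:
    def __init__(self, user_consents_to_upload: bool):
        self.started_at = time.monotonic()
        self.user_consents_to_upload = user_consents_to_upload

    def is_active(self) -> bool:
        """Timed usage session: access expires after the timeout."""
        return (time.monotonic() - self.started_at) < SESSION_TIMEOUT_SECONDS

    def may_transmit(self) -> bool:
        """Only transmit telemetry if the session is live and the user has opted in."""
        return self.is_active() and self.user_consents_to_upload

session = DeviceSession(user_consents_to_upload=False)
print(session.may_transmit())  # False: no consent, so nothing leaves the device
```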

The authors continue with implications for organizational structure (i.e., The Takeaways)

[Figure from the article: a new organizational structure]


How do I get started in Infosec?

I often get asked this question: How do I get started in InfoSec?  Well, Brian Krebs posted 5 interviews on this topic several years ago.  The posts are a bit dated, but still contain good advice.

The Takeaways

  1. Encourage people to read Krebs’ blog posts and develop 1-, 3-, and 5-year plans. Ensure the plan includes skills to develop and a mechanism to document the skills or successes achieved (e.g., a blog).
  2. Get as much practical experience as possible. For example, the AZ Cyber Warfare Range lets people practice with real systems and step through real hacking scenarios.
  3. Have fun!

What is causing a lack of focus in putting the right defenses in the right places in the right amounts against the right threats?

In my daily reading, the opening line (and the entire post) of “6 reasons you’re failing to focus on your biggest IT security threats” by Roger Grimes got my attention. The whole posting is worth a read. Below are the highlights:

Most companies are not focused on the real security threats they face, leaving them ever more vulnerable. That can change if they trust their data rather than the hype.

 Humans are funny creatures who don’t always react in their own best interests, even when faced with good, contrarian data they agree with. For example, most people are far more afraid of flying than of the car ride to the airport, even though the car ride is tens of thousands of times riskier. More people are afraid of getting bitten by a shark at the beach than by their own dog at home, even though being bitten by their dog is hundreds of thousands of times more likely. We just aren’t all that good at reacting appropriately to risks even when we know and believe in the relative likelihood of one versus the other happening.

The same applies to IT security.

Computer defenders often spend time, money, and other resources on computer defenses that don’t stop the biggest threats to their environment. For example, when faced with the fact that a single unpatched program needed to be updated to stop most successful threats, most companies do everything other than patch that program. Or if faced with the fact that many successful threats occurred because of social engineering that better end-user training could have stopped, the companies instead spent millions on everything but better training.

I could give you dozens of other examples, but the fact that most companies can easily be hacked into at will is testament enough to the crisis. Companies simply aren’t doing the simple things they should be doing, even when confronted with the data.

The problem bothered me enough that I wrote a whitepaper, slide deck, and book on the subject. Without having to read all of that, the answer for why so many defenders don’t let the data dictate their defenses is mostly about a lack of focus. A lot of priorities compete for computer defenders’ attention, so much so that the things they could be doing to significantly improve their defense aren’t being done, even when cheaper, faster, and easier to do.

What is causing this lack of focus in putting the right defenses in the right places in the right amounts against the right threats? A bunch of things, including these:

1. The sheer number of security threats is overwhelming
2. Threat hype can distract from more serious threats
3. Bad threat intelligence skews focus
4. Compliance concerns don’t always align with security best practices
5. Too many projects spread resources thin
6. Pet projects usually aren’t the most important ones

… it starts with an avalanche of daily threats and is worsened by many other factors along the project chain. The first step in fixing a problem is admitting you have a problem. If you see your company’s ineffective computer defenses represented above, now is the time to help everyone on your team understand the problem and help them to get better focus.

The Takeaways

  1. Prioritize your projects. Focus on projects that have the highest return on investment for improving the overall security posture and risk alignment (a minimal scoring sketch follows this list).
  2. Validate that your teams are working on tasks related to the prioritized projects. Prioritized projects should have a smaller focus but see aspects completed end to end. For example, instead of deploying a database monitoring solution to all of your critical databases, deploy the solution to one or two databases, in blocking mode, with all of the operational support documents, procedures, etc. completed.
  3. Leverage DevOps and Agile principles to obtain faster, incremental results as well as alignment with the business.
  4. Ensure the vulnerability management program is adapted and customized to your company so you can identify the threats and vulnerabilities that are truly a priority for your team and not just hype.
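
As referenced in takeaway 1, here is a minimal scoring sketch: rank candidate projects by estimated risk reduction per dollar, so the data (rather than the hype) dictates the order of work. The project names and figures are made up for illustration.

```python
# project_prioritization.py -- a minimal sketch: rank security projects by
# estimated risk reduction per unit cost. Scores below are made-up examples.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    risk_reduction: float  # estimated reduction in annualized loss ($)
    cost: float            # estimated annual cost ($)

    @property
    def score(self) -> float:
        return self.risk_reduction / self.cost if self.cost else 0.0

projects = [
    Project("Patch the one routinely exploited application", 900_000, 50_000),
    Project("End-user phishing training refresh", 400_000, 80_000),
    Project("Pet project: shiny new dashboard", 20_000, 120_000),
]

for p in sorted(projects, key=lambda p: p.score, reverse=True):
    print(f"{p.name}: {p.score:.1f}x estimated return per dollar")
```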

Is there such a thing as Agile Security? What about DevSecOps?

There is an interesting post from Brian Forster, Fortinet, on InfoTech Spotlight.

To ensure in-depth defense during a faster deployment cycle, financial services firms have to adopt multiple security controls. This ensures that if vulnerable code delivers a great new feature but with an unknown flaw to consumers there need to be additional security measures in place that will keep it from being exploited. Combining a strong network security infrastructure with constant application and service monitoring ensures end-to-end protection as new software is deployed.

Because the DevOps approach is primarily adopted for the purpose of web application development, it’s necessary that a part of this infrastructure include a web application firewall (WAF). A next-generation WAF provides comprehensive application protection that scans for and patches vulnerabilities, and keeps applications from being exploited by the risks identified in the OWASP Top 10. Additionally, threat intelligence can be fed to the WAF to keep applications safe from even the latest sophisticated attacks. Which means that if an application is running a common exploit or is being probed by malware, the WAF will recognize it and know to deny network access to the application.

A successful DevOps program will have automation as another primary component. As code is committed to a central system by developers, an automated process looks at the submissions in the repository and builds a new version of the software.

The security protocol of DevOps initiatives will also need to be automated in order to keep up with increased volumes of both internal development and cyberattacks. Security automation capabilities are becoming more sophisticated through the use of artificial intelligence and machine learning. Eventually, this will allow for a fully automated, secure DevOps process, with the ultimate goal of enabling intent-based security.
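
To illustrate the automation point, here is a sketch of a security gate that a CI pipeline could run on every commit, failing the build when a scanner reports findings. The scanner choices (bandit, pip-audit) are examples only; substitute whatever tools your pipeline already uses.

```python
# ci_security_gate.py -- a sketch of a CI step that fails the build when
# security scanners report findings. Tool choices are examples; swap in your own.
import subprocess
import sys

SCANS = [
    ["bandit", "-r", "src"],   # static analysis of Python source
    ["pip-audit"],             # known-vulnerable dependency check
]

def run_gate() -> int:
    failures = 0
    for command in SCANS:
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Security gate failed: {' '.join(command)}")
            failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if run_gate() else 0)
```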

Security and Agility

Financial services firms have a great deal to gain by adopting the DevOps approach, including remaining competitive and defending against cybercrime. When software has such a short development cycle, complete security cannot be guaranteed. For this reason, financial services firms must integrate additional network-level security controls. These controls extend security from mobile devices and IoT through the network core and out to the cloud. As financial services firms move forward with their DevOps process, the above recommendations will help construct an intelligent, integrated security system that allows agility at the same time.

The blog post missed another important ingredient: Information Security teams can learn and adopt the tenets of agile and DevOps (see, for example, the work by Fitzer). As Forster writes,

The irony here is that DevOps has also gained ground among malicious actors. New malware releases often move faster than security does. Therefore, the continuous integration and continuous deployment (CI-CD) that DevOps creates is necessary in order to keep pace with malicious actors.

The Takeaways

  1. InfoSec GRC and security architecture teams need to review, and update as needed, the latest development procedures.
  2. InfoSec needs to become agile too.