There is quite a large disconnect between how security breaches are evolving and how security solutions and resources are keeping up to address them, much like the disconnect John Gray’s book describes between the relationships and different motivations of men and women. Unlike the book, though, we’re not trying to come to a happy medium – we’re trying to keep the warlike Mars at bay. As a security strategist, I’m constantly evaluating what is possible to help identify gaps and opportunities. The one thing I have learned over the course of my career is:
The only thing constant in cyber security is that attackers’ methods will continue to evolve. They get smarter, more resourceful and are impressively ever patient.
The HPE Integrity NonStop server is not only a foundation of the HPE Server business, but it is also central to countless mission critical environments globally. For the longest time, security of these powerful systems and the “Mission Critical” applications they run remained mostly static and under the radar while high-profile attacks on other platforms have taken the spotlight. That hasn’t lessened the risk and exposure of the NonStop server. It’s actually created a gap. With globalization and the introduction of new technologies for the NonStop server, this security gap will only increase if not addressed.
Interestingly enough, the NonStop server isn’t the only mission critical enterprise solution in this situation. There are some colorful parallels that can be drawn between applications running on the NonStop server and those running in SAP environments. Both live in highly mission critical environments vital to the revenue generation of an organization, and both frequently run payments applications like ACI’s BASE24 and other homegrown applications. This creates some interesting security challenges. In a recent article in The Connection magazine, Jason Kazarian, Senior Architect at HPE, described legacy systems as “complex information systems initially developed well in the past that remain critical to the business in spite of being more difficult or expensive to maintain than modern systems”. His article went on to point out the security challenges of legacy applications. In summary, these applications tend to be unsupported; security patches aren’t readily available, and when they are, they aren’t applied in a timely fashion for fear of disruption; and they lack many of the security features modern applications have. This makes detecting and addressing security risks and anomalies an even greater challenge than it already is.
MIND THE GAP
How can this problem be addressed? Protect what you can. As a first step – be it system, application or data – push the risk down the stack to an area more controllable by typical security measures. For example, tokenizing data used by a legacy application forces an attacker to search for that data through another method, preferably one better suited for detection.
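To make the idea concrete, here is a minimal, purely illustrative sketch of vault-based tokenization in Python. The function names and the in-memory “vault” are assumptions for the example only; real deployments use hardened, purpose-built tokenization products with secured token vaults:

```python
import secrets
import string

# Hypothetical sketch: replace a card number (PAN) with a random token of
# the same shape, keeping the real value in a vault. NOT production code.
_vault = {}    # token -> real value (in practice, a secured data store)
_reverse = {}  # real value -> token, so repeat values tokenize consistently

def tokenize(pan: str) -> str:
    if pan in _reverse:
        return _reverse[pan]
    # Preserve length and keep the last four digits for display/lookup use
    token = "".join(secrets.choice(string.digits) for _ in pan[:-4]) + pan[-4:]
    _vault[token] = pan
    _reverse[pan] = token
    return token

def detokenize(token: str) -> str:
    return _vault[token]

token = tokenize("4111111111111111")
assert detokenize(token) == "4111111111111111"
assert token.endswith("1111")  # format preserved for the application
```

The legacy application keeps working against the token, while an attacker who steals the application’s data gets values that are useless without the vault – and any attempt to reach the vault itself is a much better detection opportunity.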
Have a risk-based, layered approach. This will swing the odds in your favor. OK, maybe not completely in your favor, but this approach will provide you with an arsenal you previously did not have: it will create those choke points, provide the visibility needed and help reduce mean time to detection and response.
With the way threats are evolving, those of us responsible for security need to constantly evaluate and assess our capabilities. Let’s take a dive into each layer to explore the benefits they provide in an overall security strategy.
Protection/prevention is the first and most critical layer of any security framework. Without a proper protection layer in place, none of the other layers can be relied upon. Think of the protection layer as the traditional defensive strategy – “the wall built around assets”. This includes defining and implementing a security policy as well as hardening the network, the system and applications. The protection layer is also where users, roles, access control and audits are set up. Key fundamental concepts to consider as part of the protection layer include:
- Authentication – Allows a system to verify that someone is who they claim to be. In a HPE NonStop server environment, this can be done using Safeguard, XYGATE User Authentication, or through application authentication.
- Authorization – Determines what a user can and cannot do on a system. Authorization defines roles and access to resources.
- Access Control – Enforces the required security for a resource or object.
- Logging and Auditing – Ensures that all security events are captured for analysis, reporting and forensics.
- Encryption and Tokenization – Secures communication and data both in flight and at rest. Examples of products which protect data include VLE, TLS, SSH, Tokenization and more.
- Vulnerability and Patch Management – Ensures timely installation of all RVUs, SPRs and application updates, and prioritizes and acts on HPE Hotstuff notices.
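Tying the first few concepts together, here is a hedged, illustrative sketch of authentication, authorization and auditing in Python. None of this reflects Safeguard or XYGATE internals – on a NonStop system these controls come from the platform and its security products, not hand-rolled code – but it shows how the layers relate:

```python
import hashlib
import hmac
import os

# Illustrative only: roles, users and the audit log are in-memory stand-ins.
USERS = {}                                     # name -> (salt, password hash)
ROLES = {"operator": {"read"}, "admin": {"read", "write"}}
USER_ROLES = {"alice": "admin", "bob": "operator"}
AUDIT_LOG = []                                 # every decision is recorded

def register(name: str, password: str) -> None:
    salt = os.urandom(16)
    USERS[name] = (salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000))

def authenticate(name: str, password: str) -> bool:
    """Authentication: verify the user is who they claim to be."""
    salt, stored = USERS[name]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(stored, candidate)

def authorize(name: str, action: str) -> bool:
    """Authorization + access control: check the role, and audit the attempt."""
    allowed = action in ROLES.get(USER_ROLES.get(name, ""), set())
    AUDIT_LOG.append((name, action, "ALLOW" if allowed else "DENY"))
    return allowed

register("bob", "s3cret")
assert authenticate("bob", "s3cret")
assert not authenticate("bob", "wrong")
assert authorize("bob", "read") and not authorize("bob", "write")
```

Note that even the denied request lands in the audit log – the logging layer captures everything, not just successes, which is what later detection work depends on.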
These types of preventative controls are necessary and intended to prevent unauthorized access to resources and data, but they cannot be relied on alone as a long-term, sustainable security strategy. Attackers’ motivations and sophistication are changing; therefore, when prevention fails, detection should kick in while there is still time to respond and prevent damage.
In testimony given before the Senate Subcommittee on Science, Technology and Space, famed cryptographer and cyber security specialist Bruce Schneier said:
“Prevention systems are never perfect. No bank ever says: “Our safe is so good, we don’t need an alarm system.” No museum ever says: “Our door and window locks are so good, we don’t need night watchmen. Detection and response are how we get security in the real world… “
Schneier gave this testimony back in July of 2001, yet in 2016, when organizations are being hit by incidents they can’t detect, this premise is still valid and critical. In the previous section we discussed hardening systems and building a wall around assets as the first layer of a security strategy. I’m surprised by the number of conversations I have with IT and security folks who still carry the mindset that this degree of protection and compliance is good enough. No matter what level of protection a system has, given enough time, an attacker will find a way through. The faster you can detect, the faster you can respond, preventing or limiting the damage a security breach can cause.
Detection is not a simple task. The traditional method of detection is setting up distinct rules or thresholds. For example, if a user fails 3 logons in a span of 5 minutes, detect it and send an alert. In most cases that rule is explicit: if the failed logon events spanned 20 minutes, or worse yet, 10 days, they would not be detected. The limitation of relying on rules for detection is that they will not alert on what they don’t know about. Those low and slow incidents and unknown unknowns – activity that is not normal on a given system – will fly under the radar, and no one will be the wiser until you get a call from the FBI.
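The 5-minute threshold rule can be sketched in a few lines; the window size, threshold and function names here are illustrative, not taken from any particular product. The sketch also demonstrates the blind spot just described:

```python
from collections import defaultdict, deque

# Minimal sliding-window rule: alert when a user fails 3 logons
# within a 5-minute (300-second) window. Values are illustrative.
WINDOW_SECONDS = 300
THRESHOLD = 3

_failures = defaultdict(deque)  # user -> timestamps of recent failed logons

def record_failed_logon(user: str, timestamp: float) -> bool:
    """Return True if this event trips the rule (an alert should fire)."""
    q = _failures[user]
    q.append(timestamp)
    # Drop events that have fallen out of the window
    while q and timestamp - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) >= THRESHOLD

# Three failures in two minutes trip the rule...
assert not record_failed_logon("alice", 0)
assert not record_failed_logon("alice", 60)
assert record_failed_logon("alice", 120)

# ...but the same three failures spread over 20 minutes slip through:
# exactly the "low and slow" activity the rule cannot see.
assert not record_failed_logon("bob", 0)
assert not record_failed_logon("bob", 600)
assert not record_failed_logon("bob", 1200)
```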
The other challenge is correlating events from multiple data sources. Let’s look at the incident pattern below.
In this incident pattern, we have events from EMS, Safeguard and XYGATE. The NonStop server could send each individual data source to an enterprise SIEM, but the SIEM would not have any context to detect the incident pattern as suspicious behavior. A security analyst could create rules to detect the incident pattern, but that’s just one use case. The traditional method is to scour through event audit records, try to put the pieces together and then create a rule to detect that pattern in the future. The weakness in that thinking is that the incident has already occurred; you’re putting a rule together on the off chance it will happen again. It’s neither reasonable nor possible to anticipate and define every possible incident pattern before it happens.
A third area of concern is profiling a system and its behavior to understand what is normal for users, applications and the system, in order to recognize when activity is not normal. This can be accomplished by evaluating the system and its configuration, profiling the system over a period of time, profiling user behavior, highlighting risk and a variety of other intelligence methods. This is where machine learning has a significant advantage: no human could possibly evaluate the volume of data needed to make these types of determinations at the speed required by today’s standards. Machine learning is a type of artificial intelligence that enables the system to teach itself. Explicit rules are no longer the lone method of detection. Machine learning can profile a system or network over a given amount of time to determine what is normal and thereby isolate what is not. Inserting machine learning into the solution process significantly increases the ability to stay on top of what is going on with a given system, user, network or enterprise.
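As a toy illustration of behavioral profiling – far simpler than any real machine-learning detector, and with invented numbers – a per-user baseline of daily logon counts can flag days that deviate sharply from the learned normal, with no explicit rule written for that scenario:

```python
import statistics

def is_anomalous(history: list[float], today: float, threshold: float = 3.0) -> bool:
    """Flag `today` if it lies more than `threshold` standard deviations
    from the mean of the observed history (a simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean  # perfectly regular history: any change is odd
    return abs(today - mean) / stdev > threshold

# Hypothetical learned baseline: typical daily logon counts for one user
baseline = [10, 12, 9, 11, 10, 13, 10, 12, 11, 10]

assert not is_anomalous(baseline, 14)  # busy but plausible day
assert is_anomalous(baseline, 45)      # far outside the learned profile
```

No analyst wrote a “45 logons” rule here; the profile itself defines normal, which is the core idea behind replacing explicit thresholds with learned behavior.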
The third layer relies on alerting. The challenge most environments face as they grow and their infrastructure becomes more chaotic – more tools, more users, more data and more events – is that they alert too much or too little. How does one know what to act on and what is just noise? There are solutions that position themselves as doing data analytics but end up simply generating more data from existing data. Now someone must determine whether each newly formed alert is actionable or just noise.
Going back to our previous failed logon example, if we were to receive 15 different alerts for the same rule, how can one know which alert to pay attention to and which to safely ignore? If you’ve ever been responsible for responding to security alerts, you know this creates alert fatigue. Back in my early days, mass deleting emails of similar types of alerts was one of my favorite things to do.
Contextualization allows the system itself to determine what is actionable and what is just noise. Once an account changes hands, it will behave slightly differently. A solution like XYGATE SecurityOne can evaluate each potential alert and, based on the previous activity of that user, IP, system, etc., determine whether the reported activity is business as usual or a serious issue that needs attention. Creating new data and new alerts from existing data doesn’t solve the problem. Applying context to the new incidents generated helps focus efforts on those incidents that truly need attention.
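The principle can be sketched as scoring an alert against what is already known about the actor. To be clear, XYGATE SecurityOne’s actual scoring is its own; the fields, profile structure and two-out-of-three rule below are purely hypothetical:

```python
def score_alert(alert: dict, profile: dict) -> str:
    """Classify an alert as 'noise' or 'actionable' using prior context."""
    suspicious = 0
    if alert["source_ip"] not in profile["known_ips"]:
        suspicious += 1                      # never seen this address before
    if alert["hour"] not in profile["active_hours"]:
        suspicious += 1                      # outside the user's normal hours
    if alert["resource"] not in profile["usual_resources"]:
        suspicious += 1                      # touching something new
    return "actionable" if suspicious >= 2 else "noise"

# Hypothetical learned profile for one user
profile = {
    "known_ips": {"10.0.0.5", "10.0.0.9"},
    "active_hours": set(range(8, 18)),       # 08:00-17:59 local time
    "usual_resources": {"$DATA.PAYROLL", "$DATA.REPORTS"},
}

# A failed logon from a known IP during business hours is likely noise...
assert score_alert({"source_ip": "10.0.0.5", "hour": 10,
                    "resource": "$DATA.PAYROLL"}, profile) == "noise"
# ...the same event from a new IP at 3 a.m. against a new resource is not.
assert score_alert({"source_ip": "203.0.113.7", "hour": 3,
                    "resource": "$SYSTEM.SAFEGUARD"}, profile) == "actionable"
```

The same raw event produces different outcomes depending on context – which is precisely what keeps analysts from drowning in identical-looking alerts.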
Contextualization is key.
For any of the first three layers to produce value, there needs to be a proper incident response plan.
Responding allows you to deploy countermeasures, cut off access, send the attacker to a mousetrap or take other actions that minimize the impact of a breach and speed recovery.
Containing the breach and quickly recovering from it are the most important steps of this layer. Response and containment comprise a number of simultaneous activities that help minimize the impact of a breach. These may include, but are not limited to:
- Disabling accounts
- Blocking IPs and Ports
- Stopping applications or services
- Changing administrator credentials
- Additional firewalling or null routing
- Isolating systems
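The containment steps above could be driven from a simple, ordered checklist that timestamps each action for the evidence trail. This is a hypothetical sketch – the incident fields and action names are invented, and the actions themselves are stubs rather than real commands:

```python
import datetime

def contain(incident: dict) -> list[str]:
    """Run the containment steps relevant to this incident, in a fixed
    order, recording each action with a UTC timestamp for forensics."""
    actions = [
        ("disable account", incident.get("account")),
        ("block IP", incident.get("source_ip")),
        ("stop service", incident.get("service")),
        ("isolate system", incident.get("system")),
    ]
    log = []
    for action, target in actions:
        if target:  # only run steps relevant to this incident
            stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
            # A real runbook would invoke the actual control here
            log.append(f"{stamp} {action}: {target}")
    return log

trail = contain({"account": "OPS.JDOE", "source_ip": "203.0.113.7"})
assert len(trail) == 2
assert "disable account: OPS.JDOE" in trail[0]
```

The point of the fixed order and the timestamped log is the same as in the prose: act fast, but leave an evidence trail that survives for investigation and prosecution.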
These steps are necessary to slow down or stop an attack as well as to preserve evidence. Evidence of the attack is generally gathered from audit logs, but coupled with detection and analytics tools, that information can be accessed much more quickly and at a more granular level. Preserving evidence is key in forensic investigations of the breach and important for prosecution.
Once all the pieces fall into place and there is an incident alert that requires response, how will your organization deal with the issue? Breach incidents are hardly ever the same. There needs to be a level of categorization and prioritization on how to deal with specific incidents. In some cases, you may want to slowly stalk your attacker, whereas in others, the sledgehammer approach may be the only thing that can preserve data. Does everyone understand their assigned roles and responsibilities? Is there someone in charge? Is there a documented plan? All of these are considerations that need to be accounted for as part of the response. This can be summarized in two words – BE PREPARED.
On the HPE NonStop server, the protection layer can be addressed by properly configuring Safeguard, implementing protection of data in flight and at rest, and deploying third-party security tools available for the system. For alerting and detection, XYGATE Merged Audit with HPE ArcSight can provide the tripwires and alarms necessary for proper detection. For further detail on how to properly protect a NonStop server, HPE has published the HPE NonStop Security Hardening Guide. XYPRO has also published a 10-part blog series on how to properly protect a NonStop server (http://bit.ly/21nmQiY).
For the next generation of detection and alerting, XYPRO’s newest offering, XYGATE SecurityOne (XS1), brings risk management and visibility into real time. XS1 correlates data from multiple HPE Integrity NonStop server sources and detects anomalies using intelligence and analytics algorithms that recognize event patterns deemed out of the ordinary and suspicious for users, the system and the environment. Coupled with HPE ArcSight, the solution provides a constant, real-time and intelligent view of actionable data in a way that has not been possible before.
Strong technology and process are important, but people are paramount to any successful security strategy. Constant security training and development on industry best practices, security trends and attack evolution should be factored into any security program. Without ongoing training and reinforcement, the gap only widens. An organization’s most valuable resource is the people hired to provide security and close the gap. Use them wisely and ensure they have the tools and training to provide the layers of defense required.
Cybercriminals don’t sit around waiting for solutions to catch up. Security complacency ends up being the Achilles’ heel of most organizations. Because of its unique attributes, security on the NonStop server needs to be addressed with a layered approach, and risk management is a big part of the process. Putting layers in place that highlight risk as early as possible, so it can be addressed, is key to dealing with upcoming challenges. This will hopefully help bridge the gap between attacks and security.
We need to recognize the paradigm shift and change our mindset in how we approach security; attackers’ ability to stay one step ahead of most defenses is central to their strategy. As the NonStop platform evolves and becomes more interconnected, what was put in place previously to address security will not be sustainable going forward. No matter how vendors position their solutions, security is hard and doing the right thing is hard, but that doesn’t mean security professionals need to work harder.
From a security professional’s perspective, cybercriminals will always be viewed as Mars – warlike, relentlessly driving to break into systems, get to data, wreak havoc and cause disruption to fulfill their malicious objectives. Meanwhile, cyber security staff need to act more like Venus – clouded in mystery, deliberately avoiding being seen while following the enemy. If Mars knows our tactics, Mars can avoid them. Mars is at war. Mars is patient. Mars will continue to attack, low and slow. With the proper security layers in place, Mars will be thwarted by deliberate masking, redirection and detection that hides where the data really is and alerts when the enemy is near. We continue to get smarter, blocking, hiding and redirecting things away in response to attacks. But unlike men and women, Venus in the security world has one goal: to keep Mars at bay forever…or longer…
Steve Tcherchian, CISSP
Chief Information Security Officer
Steve Tcherchian, CISSP, PCI-ISA, PCI-P is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO’s new security product line as well as overseeing XYPRO’s risk, compliance, infrastructure and product security to ensure the best security experience to customers in the mission critical computing marketplace.
Steve is an engaging and dynamic speaker who regularly presents on cybersecurity topics at conferences around the world.