There is a large disconnect between how breaches are evolving and how well security solutions are keeping up with them. Virtualization adds an entirely new layer of complexity to the puzzle. As a security strategist, I’m constantly evaluating what is possible in order to identify gaps and opportunities. If there is one thing I have learned over the course of my career, it is this:
The only constant in cyber security is that attackers’ methods will continue to evolve. They get smarter, more resourceful and are impressively patient.
The HPE Integrity NonStop server is not only a foundation of the HPE Server business, it is also central to countless mission-critical environments globally. For the longest time, the security of “Mission Critical” systems, such as IBM Mainframes, HPE NonStop servers, SAP HANA and others, has remained mostly static and under the radar, while high-profile attacks on other platforms and applications have taken the spotlight. That hasn’t lessened the risk to these foundational systems and the data they possess; it has actually created a gap – a security gap that will only widen if not properly addressed, given the advancements in globalization, virtualization and newly introduced technologies like IoT, distributed systems and in-memory computing.
Interestingly enough, the NonStop server isn’t the only mission-critical enterprise solution in this situation. There are some colorful parallels that can be drawn between applications running on the NonStop server and those running in SAP environments. Both run in highly mission-critical environments, are vital to an organization’s revenue generation, and frequently run payments applications like ACI’s BASE24 as well as other vendor-supplied and homegrown applications. This creates some interesting security challenges. In a recent article from The Connection Magazine, Jason Kazarian, Senior Architect at HPE, described legacy systems as “complex information systems initially developed well in the past that remain critical to the business in spite of being more difficult or expensive to maintain than modern systems”. His article went on to point out the security challenges of legacy applications. Some of these applications tend to be unsupported and lack the security features that modern applications would have. Security patches aren’t readily available, and even when they are, these critical patches aren’t applied in a timely fashion for fear that they will disrupt the business. This makes detecting and addressing security risks an even greater challenge than it already is.
Mind The Gap
How can this problem be addressed? Protect what you can. Be it system, application or data – push the risk down the stack to an area that is more addressable by security controls. For example, tokenizing or encrypting sensitive data used by a legacy application forces an attacker to go looking for that data through alternate channels, preferably ones better suited to detection.
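To make that concrete, here is a minimal sketch of vault-style tokenization, where the sensitive value is replaced by a random surrogate and the real value lives only in a tightly controlled lookup store. This is illustrative Python, not a NonStop-specific or product implementation; the vault’s storage, encryption and access controls are assumed.

```python
import secrets

class TokenVault:
    """Illustrative vault-style tokenizer: real values never leave the vault."""

    def __init__(self):
        # token -> sensitive value (in practice an encrypted, access-controlled store)
        self._vault = {}

    def tokenize(self, sensitive_value: str) -> str:
        token = secrets.token_hex(8)   # random surrogate, no mathematical link to the original
        self._vault[token] = sensitive_value
        return token                   # safe to hand to the legacy application

    def detokenize(self, token: str) -> str:
        # Access to this call should be tightly restricted, logged and monitored.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print("Legacy app stores only:", token)
```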
Have a risk-based, layered approach. This will dramatically swing the odds in your favor. A layered approach based on security risk gives you an arsenal to fight back with that you never had before. It creates choke points, provides needed visibility and helps reduce mean time to detection, allowing you to respond quickly and decisively.
With the way threats are evolving, those of us responsible for security need to constantly evaluate our capabilities. Let’s dive into each layer to explore the benefits it can provide in an overall security strategy.
Protect
Protection/prevention is the first and most critical layer of any security framework. Without a proper protection layer in place, none of the other layers can be relied upon. Think of the protection layer as the traditional defensive strategy – “the wall built around assets”. This includes defining and implementing a security policy as well as hardening the network, the system and applications. The protection layer is also where users, roles, access control and audits are set up. Key fundamentals to consider as part of the protection layer:
- Authentication – Allows a system to verify that someone is who they claim to be. In an HPE NonStop server environment, this can be done using Safeguard, XYGATE User Authentication, or through application authentication.
- Authorization – Determines what a user can and cannot do on a system. Authorization defines roles and access to resources.
- Access Control – Enforces the required security for a resource or object.
- Logging and Auditing – Ensures that all security events are captured for analysis, reporting and forensics.
- Encryption and Tokenization – Secures communication and data both in flight and at rest. Examples of technologies that protect data include VLE, TLS, SSH, tokenization and more.
- Vulnerability and Patch Management – Ensures timely installation of all RVUs, SPRs and application updates, and prioritizes and takes recommended action on HPE Hotstuff notices.
These types of preventative controls are necessary and intended to prevent unauthorized access to resources and data, but they cannot be relied on alone as a sustainable security strategy. Attackers’ motivations and sophistication are evolving; therefore, when prevention fails, detection should kick in while there is still time to respond and prevent damage.
Detect
In testimony given before the Senate Subcommittee on Science, Technology and Space, famed cryptographer and cyber security specialist Bruce Schneier said:
“Prevention systems are never perfect. No bank ever says: “Our safe is so good, we don’t need an alarm system.” No museum ever says: “Our door and window locks are so good, we don’t need night watchmen.” Detection and response are how we get security in the real world…”
Schneier gave this testimony back in July of 2001, yet nearly 20 years later, organizations are getting hit by incidents they can’t detect, proving this premise is still valid and even more critical than ever before. In the previous section, we discussed hardening systems and building a wall around assets as the first layer of security strategy. I’m surprised by the number of conversations I have with IT and Security professionals who still carry the mindset that this degree of protection and compliance is good enough. No matter what type of protection a system has, given enough time, an attacker will find a way through. The faster you can detect, the faster you can respond, limiting the amount of damage a security breach can cause.
Detection is not a simple task. The traditional method of detection is to set up distinct rules or thresholds. For example, if a user fails 3 logons in a span of 5 minutes, detect it and send an alert. In most cases that rule is explicit. If the failed logon events spanned 20 minutes, or worse yet, 10 days, they likely would not be detected. The limitation of relying on these types of rules for detection is that they can’t alert on what they don’t know. Low-and-slow incidents and unknown unknowns – activity that is not normal for a given system – will fly under the radar, and no one will be the wiser until it’s too late. The damage is done, the data is taken, the system is compromised, and your customers’ confidence in you is lost.
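As a minimal sketch of why such explicit rules are brittle, the fragment below implements the classic “3 failed logons within 5 minutes” threshold; the timestamps and window size are purely illustrative. The same failures spread over hours or days never trip the rule.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # explicit, hard-coded threshold window
THRESHOLD = 3                   # failed logons allowed inside the window

def detect_failed_logons(failed_logon_times):
    """Return True if THRESHOLD failures fall inside any WINDOW-sized span."""
    times = sorted(failed_logon_times)
    for i in range(len(times) - THRESHOLD + 1):
        if times[i + THRESHOLD - 1] - times[i] <= WINDOW:
            return True
    return False

base = datetime(2024, 1, 1, 9, 0)
burst = [base, base + timedelta(minutes=1), base + timedelta(minutes=2)]
slow  = [base, base + timedelta(hours=4), base + timedelta(days=2)]

print(detect_failed_logons(burst))  # True  - caught by the rule
print(detect_failed_logons(slow))   # False - same activity spread out, flies under the radar
```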
Correlating events from multiple data sources proves to be another challenge of detection. Let’s look at the incident pattern below:
In this incident pattern, we have events from EMS, Safeguard and XYGATE. The NonStop server could send each individual data source to a Security Information and Event Management (SIEM) solution, but the SIEM would have no context to recognize the incident pattern as suspicious behavior. A security analyst could create rules to detect the incident pattern, but that covers just one use case. The traditional method is to scour through event audit records, try to put the pieces together and then create a rule to detect that pattern in the future. The weakness in that thinking is that it can only happen after an incident has already occurred; the rule is then put together on the off chance the same combination of events will happen again. It is simply not reasonable to anticipate and define every possible incident pattern before it happens.
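To illustrate what such a correlation rule looks like, the sketch below watches a merged stream of events and raises an alert when one specific, hand-crafted sequence is observed for the same user. The event shapes, source labels and sequence are assumptions for illustration only, and the example demonstrates exactly the weakness described above: the pattern has to be known and encoded in advance.

```python
from collections import defaultdict

# Hand-crafted pattern: must be fully defined before it can ever be detected.
SUSPICIOUS_SEQUENCE = [
    ("Safeguard", "AUTH_FAIL"),
    ("Safeguard", "AUTH_PASS"),
    ("XYGATE",    "PRIV_ESCALATION"),
    ("EMS",       "PROCESS_ABEND"),
]

def correlate(event_stream):
    """Track each user's progress through the known sequence; alert on completion."""
    progress = defaultdict(int)                 # user -> index into SUSPICIOUS_SEQUENCE
    for event in event_stream:                  # event: {"source", "type", "user"}
        user = event["user"]
        expected = SUSPICIOUS_SEQUENCE[progress[user]]
        if (event["source"], event["type"]) == expected:
            progress[user] += 1
            if progress[user] == len(SUSPICIOUS_SEQUENCE):
                yield f"ALERT: suspicious sequence completed by {user}"
                progress[user] = 0

stream = [
    {"source": "Safeguard", "type": "AUTH_FAIL",       "user": "ops1"},
    {"source": "Safeguard", "type": "AUTH_PASS",       "user": "ops1"},
    {"source": "XYGATE",    "type": "PRIV_ESCALATION", "user": "ops1"},
    {"source": "EMS",       "type": "PROCESS_ABEND",   "user": "ops1"},
]
print(list(correlate(stream)))
```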
The third area of concern is profiling a system and its behavior to establish what normal looks like for its users, its applications and the system itself. Without a baseline to compare against, you will be unable to recognize when activity is not normal. Establishing one can be accomplished by evaluating the system and its configuration, profiling the system over a period of time, profiling user behavior, weighing risk and applying a variety of other intelligence methods. This is where machine learning has a significant advantage. No human could possibly evaluate the volume of data needed to make these types of determinations at the speed required. Machine learning is a type of artificial intelligence that enables a system to teach itself. It can profile a system or network over a given period of time to determine what is normal and, more importantly, isolate what is not. Inserting machine learning into the solution significantly increases the ability to stay on top of what is taking place on a given system, for a given user, across a network or throughout the enterprise.
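As a minimal, dependency-free sketch of the baselining idea, the snippet below learns a per-user average and spread of hourly logon counts during a profiling period, then flags new observations that deviate sharply from that baseline. Real behavioral analytics use far richer features and models; the three-sigma rule and sample data here are purely illustrative assumptions.

```python
from statistics import mean, stdev

def build_baseline(history):
    """history: per-user lists of hourly logon counts observed during profiling."""
    return {user: (mean(counts), stdev(counts)) for user, counts in history.items()}

def is_anomalous(baseline, user, observed, sigmas=3.0):
    """Flag activity that deviates from the learned norm by more than `sigmas`."""
    mu, sd = baseline[user]
    return abs(observed - mu) > sigmas * max(sd, 1e-6)

history = {
    "batch_ops": [2, 3, 2, 4, 3, 2, 3, 2],   # profiling period: what "normal" looks like
    "dba_admin": [0, 1, 0, 0, 1, 0, 0, 1],
}
baseline = build_baseline(history)

print(is_anomalous(baseline, "batch_ops", 3))    # False - business as usual
print(is_anomalous(baseline, "dba_admin", 25))   # True  - far outside the learned baseline
```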
Alert
The third layer encompasses alerting. The challenge most growing environments face, as their infrastructure becomes chaotic with more tools, more users, more data and more events, is that they alert on too much or too little. How does one know with certainty which alerts are legitimate and need action, and which are just noise to ignore? There are solutions that position themselves as data and security analytics, but they end up generating even more data from the existing data. Someone still needs to determine whether the newly formed alert is actionable or not.
Going back to our previous failed logon example, if 15 different alerts were received for the same rule, how can one know which alert to pay attention to and which to safely ignore? If you’ve ever been responsible for responding to security alerts, you know this creates alert fatigue. Back in my early days, mass-deleting emails of similar types of alerts was one of my favorite things to do.
Contextualization allows the system itself to determine what is actionable and what is just noise. A solution like XYGATE SecurityOne can evaluate each potential alert and, based on activity that happened previously for that user, IP address, system and so on, determine whether the reported activity is business as usual or a serious issue that needs attention. Creating new data and new alerts from existing data doesn’t solve the problem; applying context to the newly generated incidents helps focus efforts on those that truly need attention. Once an account changes hands, it will behave slightly differently.
Context is key.
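A minimal sketch of that idea: score each raw alert against what is already known about the user and the source address, and surface only the ones whose context looks unusual. The risk factors, weights and threshold below are invented for illustration and are not how any particular product implements contextualization.

```python
def contextual_score(alert, profile):
    """Weigh a raw alert against prior behavior; higher score = more actionable."""
    score = 0
    if alert["src_ip"] not in profile["usual_ips"]:
        score += 40                                  # never seen this address before
    if alert["hour"] not in profile["usual_hours"]:
        score += 30                                  # activity outside normal working hours
    if alert["rule"] in profile["frequent_benign_rules"]:
        score -= 25                                  # this rule fires routinely for this user
    return score

profile = {
    "usual_ips": {"10.1.2.15", "10.1.2.16"},
    "usual_hours": set(range(8, 18)),
    "frequent_benign_rules": {"FAILED_LOGON"},
}

noise  = {"rule": "FAILED_LOGON", "src_ip": "10.1.2.15",  "hour": 10}
signal = {"rule": "FAILED_LOGON", "src_ip": "203.0.113.9", "hour": 3}

ACTIONABLE = 40   # illustrative threshold
for alert in (noise, signal):
    verdict = "actionable" if contextual_score(alert, profile) >= ACTIONABLE else "noise"
    print(alert["src_ip"], verdict)
```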
Respond: Deploy your army
For any of the first three layers to produce value, there needs to be a proper incident response plan.
Responding correctly allows you to deploy your countermeasures, cut off access, send the attacker to a mousetrap, or take any number of other actions that help minimize the impact of a breach and the time it takes to recover from it.
Containing the breach and quickly recovering from it are the most important steps of this layer. Response and containment comprise a number of simultaneous activities, including but not limited to those below (a rough sketch of how such actions might be orchestrated follows the list):
- Disabling accounts
- Blocking IPs and ports
- Stopping applications or services
- Changing administrator credentials
- Additional firewalling or null routing
- Isolating systems
- and sometimes just pulling the plug!
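As a rough sketch of how such actions can be organized into a repeatable playbook, the fragment below maps incident categories to ordered containment steps and keeps a timestamped record of every action taken. The categories, step names and logging are assumptions; the real commands behind each step depend entirely on the environment.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")

# Illustrative playbooks: each incident category maps to an ordered list of containment steps.
PLAYBOOKS = {
    "compromised_account": ["disable_account", "change_admin_credentials", "isolate_system"],
    "malicious_traffic":   ["block_ip", "add_firewall_rule", "stop_affected_service"],
}

def run_playbook(category, target):
    """Log each containment step and keep a timestamped record of what was done to what."""
    record = []
    for step in PLAYBOOKS.get(category, []):
        timestamp = datetime.now(timezone.utc).isoformat()
        logging.info("%s  %s -> %s", timestamp, step, target)   # real automation would call out here
        record.append((timestamp, step, target))
    return record

actions_taken = run_playbook("compromised_account", "ops1")
```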
It is obviously necessary to slow down and stop an attack, but the preservation of evidence is essential as well. Evidence of the attack is generally gathered from audit logs, and coupling those logs with detection and analytics tools provides access to the information much more quickly and at a more granular level. Being able to preserve evidence is key to forensic investigation of the breach and important for prosecution as well.
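One common preservation step can be sketched as follows: take a cryptographic fingerprint of each collected audit log at capture time so that any later tampering is detectable. The file names and manifest format are illustrative assumptions.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def fingerprint_logs(log_paths):
    """Record a SHA-256 hash and capture time for each collected audit log."""
    manifest = []
    for path in log_paths:
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        manifest.append({"file": str(path),
                         "sha256": digest,
                         "captured_at": datetime.now(timezone.utc).isoformat()})
    return manifest

# Demo with a throwaway file standing in for a collected audit log.
demo = Path("collected_audit.log")
demo.write_text("sample audit records\n")
for entry in fingerprint_logs([demo]):
    print(entry)
```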
Breach incidents are hardly ever the same. There needs to be a level of categorization and prioritization in how specific incidents are dealt with. In some cases you may want to slowly stalk your attacker, while in others the sledgehammer approach may be the only thing that can preserve data. Does everyone understand their assigned roles and responsibilities? Is there someone in charge? Is there a documented plan? All of these considerations need to be accounted for as part of a well-thought-out response. This can be summarized in two words – BE PREPARED.
Resources
On the HPE NonStop server, the protection layer can be addressed by properly configuring Safeguard, implementing protection of data in flight and data at rest, and deploying the third-party security tools available for the system. For alerting and detection, XYGATE Merged Audit with HPE ArcSight can provide the tripwires and alarms necessary for proper detection. For further detail on how to properly protect a NonStop server, HPE has published the HPE NonStop Security Hardening Guide. XYPRO has also published a 10-part blog series on how to properly protect a NonStop server.
For the next generation of detection and alerting, XYPRO’s newest offering, XYGATE SecurityOne (XS1), brings risk management and visibility into real time. XS1’s patented technology correlates data from multiple HPE Integrity NonStop server sources and detects anomalies, using intelligence and analytics algorithms to recognize event patterns that are out of the ordinary and suspicious for users, the system and the environment. Coupled with SIEM solutions, XS1 can provide a constant, real-time and intelligent view of actionable data in a way that has never been seen before.
Strong technology and processes are important, but people are paramount to any successful security strategy. Regular security training and development on industry best practices, security trends and attack evolution should be factored into any security program. Without ongoing training and reinforcement, the gap only has an opportunity to widen. An organization’s most valuable resource is the people hired to provide security and close the gap. Use them wisely and ensure they have the tools and training to provide the layers of defense required.
Cyber criminals don’t sit around waiting for solutions to catch up, and security complacency ends up being the Achilles’ heel of most organizations. Because of its unique attributes, security on the NonStop server needs to be addressed with a layered approach, and risk management is a big part of that process. Putting layers in place that highlight risk as early as possible, so it can be addressed, is key to dealing with the challenges ahead. This will hopefully help bridge the gap between attacks and security.
We need to recognize the paradigm shift in how we approach security, especially in a virtual world, and understand that attackers’ ability to stay one step ahead of most defenses is central to their strategy. As the NonStop platform evolves and becomes more interconnected, what was put in place previously to address security will not be sustainable going forward. No matter how vendors position their solutions, security is hard and doing the right thing is hard, but that doesn’t mean security professionals need to work harder.
From a security professional’s perspective, cyber criminals will always be viewed as war-like: relentlessly driving to break into systems, get to data, wreak havoc and cause disruption in pursuit of their malicious objectives. Meanwhile, cyber security staff need to act more cautiously and deliberately, staying unseen while they follow the enemy. With the proper security layers in place, the enemy will be thwarted by deliberate masking, redirection and detection that hide where the data really is and alert when the enemy is near. We continue to get smarter, blocking, hiding and redirecting attacks away. We just have to keep it up and evolve with the technology around us.
Steve Tcherchian, CISSP
Chief Information Security Officer
XYPRO Technology
@SteveTcherchian
@XYPROTechnology

Steve Tcherchian, CISSP, PCI-ISA, PCIP is the Chief Product Officer and Chief Information Security Officer for XYPRO Technology. Steve is on Forbes Technology Council, the NonStop Under 40 executive board, and part of the ANSI X9 Security Standards Committee.
With over 20 years in the cybersecurity field, Steve is responsible for the strategy and innovation of XYPRO’s security product line as well as overseeing XYPRO’s risk, compliance, and security to ensure the best experience for customers in the Mission-Critical computing marketplace.
Steve is an engaging and dynamic speaker who regularly presents on cybersecurity topics at conferences around the world.