Four (4) Questions All CFOs Should Be Asking Their Organisations About Ransomware Preparedness and Data Resiliency

Why do organisations continue to fall victim to ransomware? Often, it is because the wrong risk-based questions are being asked within the organisation.

The answer to the question “Are we protected and secure?” isn’t necessarily black and white. The truth is that cyber-attacks continue to evolve, and therefore, so must your security. Being “protected and secure” is now fluid.

In order to understand your risk posture to new cyberthreats, you will need to look at the issue from the bottom up while contemplating the worst possible outcome of a cyber incident as well as how fast your organisation can recover from one.

With typical cyber incidents, the worst possible outcome is a catastrophic data breach. While this is very bad, the organisation carries on. Ransomware, however, has changed the game. The worst possible outcome is now having your organisation’s operations temporarily crippled by ransomware, for days, weeks or months.

If COVID-19 taught the world anything, it is that Black Swan events will happen.

Exploring the option of holding cryptocurrency on the books in the event you need to pay off the attacker(s) might not be the best way to address the Black Swan. Based on recent volatility, that could be an expensive backup plan.

By using the roadmap below to uncover your hidden risks and weaknesses, you could avoid the kind of disaster that has impacted many recognisable names over the last two years.

Assess the BIG Risk: Your business is crippled and offline. How long can you survive?

In the past, a cyber-attack typically meant a data breach that the organisation could carry on operating through. Ransomware has changed all that.

Ransomware encrypts your data, thereby restricting and removing your access to it.
Operations may continue to limp along afterwards, or could come to a complete standstill. How long could your organisation survive if forced into a standstill? One day? One week? One month?

This will be the very first question your CEO or leadership team asks after an attack. It’s better to have the answer before an attack even happens, when things are calm, rather than in a panicked state afterwards, because you might not like, or be able to swallow, the answer.

Assess: If we are breached, where are the risks and threats?

Start with ensuring your organisation has identified the most critical processes that depend on technology.

Digital transformation initiatives continue to push organisations away from manual processes. Such initiatives often result in improved organisational effectiveness, efficiencies and improved customer experiences, to name a few. COVID-19 accelerated these initiatives.

This is generally good and progressive. It also increases your risk exposure.

A cyber-attack may render processes that are 100% digital completely inoperable, either because a manual workaround is impossible to execute or because the organisational memory needed to run the process manually no longer exists.

Identify the processes that depend on technology and for which a manual (i.e. paper-based) workaround is not sufficient. Determine which of these are mission critical.

For your critical processes, perform a comprehensive mapping of dependencies across technology platforms, suppliers, people and data. Assign an executive the title of Risk Owner. In doing so, you will start to understand your risk and which areas must not fail.
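To make this mapping concrete, a dependency register can start as a simple structured record per critical process. Below is a minimal sketch in Python; the fields and example values (process name, owner, systems) are illustrative assumptions, not prescriptions from this article.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalProcess:
    name: str
    risk_owner: str                                   # executive accountable for the process
    platforms: list = field(default_factory=list)     # technology dependencies
    suppliers: list = field(default_factory=list)     # third parties in the chain
    data_stores: list = field(default_factory=list)   # where the data lives
    manual_workaround: bool = False                   # can it run on paper if systems are down?

payroll = CriticalProcess(
    name="Payroll",
    risk_owner="CFO",
    platforms=["ERP"],
    suppliers=["Payments bureau"],
    data_stores=["HR database"],
)

# Flag processes with no fallback: these are the areas that must not fail.
if not payroll.manual_workaround:
    print(f"{payroll.name}: no manual workaround; Risk Owner is {payroll.risk_owner}")
```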

Review the report and assumptions. Test them. Test them regularly.

Assess: Can our controls prevent, contain, or minimise a breach?

Dive deeper into the organisation. Ensure your key people understand their cyber-security risk management responsibilities, and that the controls in place with key suppliers and external stakeholders, as well as within your internal technology architecture, can stop a fast-moving threat.

Areas to Explore:

  • Do business process owners understand the cyber risks?

The answer to this question is often “no.” Closing your risk exposure means your process owners need to both manage and mitigate the cyber risk.

Start with assessing if process owners understand the risks, and how to deal with systemic vulnerabilities. For example, do they have line-of-sight over the risk health of the key controls on the systems they own? Have they set up the right controls? If not, why not? Are there any identified vulnerabilities and risks that exist on the system which are unresolved?

If vulnerabilities exist, why have they not been closed? Is it a lack of funding, an accepted risk, or disagreement on severity? Push for full-sentence answers to flesh out each of these possibilities.

If you are not getting satisfactory responses, ask “Do we have a common framework for cyber risk decision making?” If there is no such framework, put one in place and re-run the assessment with the business process owners.

  • Is our internal infrastructure designed to prevent the spread of an attack?

Currently, the simplest best practice is network segmentation: creating compartments between networks so that when one area is attacked, the rest of the organisation does not fall, too. By doing so, you limit the potential damage across the organisation.

Assess: Do our suppliers align with and match our risk and security posture?

It is common to find key suppliers who have an operational role within a critical process, or a support role (such as an equipment vendor who can remotely connect to diagnose issues). The supplier may even manage the whole process.

The threat exists because your suppliers sometimes hold all the keys to the kingdom. This provides an opportunity for ransomware to be introduced into your environment, sometimes by accident. If this happens to your organisation, you wouldn’t be the first to fall to an attacker via one of your partners (just ask Kaseya and SolarWinds).

Additional questions to explore:

  • Are key suppliers clearly identified? What are our baseline expectations? Are these contracted and monitored?
  • Do we need to request our key suppliers’ cyber risk plans and, if so, have we tested them?

Assess: Can we recover on our terms?

It should go without saying that you never want to put your faith and dependency into the hands of a cybercriminal. Did you know that only about 8% of organisations get access to all their data after a ransomware attack?

To ensure a ransomware attack does not cripple your organisation, you need to ensure your data is resilient. Ensure the individual cyber risk assessments (including restoration testing) have been performed on your organisation’s critical processes. Ask for the report from the owners.

While this comes across as tedious, painstaking work that often gets pushed aside as seemingly more urgent items come up, nothing matters more once an attack happens. Test the restoration process on your terms, not the cyber criminals’ (when you may be left crossing your fingers for good luck). Testing will expose your weaknesses.

In a perfect world, your organisation may be able to recover within hours after an attack because your recovery plan executes as expected. Many do not test the efficacy of such a plan, and therefore, often learn much too late that their backup plans failed.

  • How can you ensure your data is recoverable within a few hours?

One of the most common strategies for protecting critical operational technology / industrial systems is to use an air gap, which is a system completely isolated from and without a connection to other systems. For data backup, this is critical for ransomware recovery, and needs to be tested to ensure ransomware cannot ‘jump the (air) gap’.
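Restoration testing can be partly automated. Here is a minimal sketch, assuming you keep a manifest of SHA-256 checksums captured at backup time; the paths and manifest format are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(restore_dir: str, manifest_file: str) -> bool:
    """Compare every restored file against the checksum recorded at backup time."""
    manifest = json.loads(Path(manifest_file).read_text())  # {"relative/path": "hexdigest"}
    ok = True
    for rel_path, expected in manifest.items():
        restored = Path(restore_dir) / rel_path
        if not restored.exists() or sha256(restored) != expected:
            print(f"FAIL: {rel_path}")
            ok = False
    return ok

# Run after every test restore, on your terms rather than the attacker's.
print("Restore verified" if verify_restore("/mnt/restore", "manifest.json") else "Restore incomplete")
```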

Ransomware is a scary proposition. However, it can be mitigated with the right precautions and risk management practices in place. It requires the proper process controls, technologies and recovery plans. Spend the time today to assess preparedness in order to avoid the potential panic and scramble tomorrow.

Are Ransomware Attacks Getting Worse? Yes They Are. And the Reason May Surprise You.

Just like every business and sector, cybercrime is transforming.

Gone are the days when a single, lone hacker posed the most significant threat to your organisation. Cybercrime is evolving, shifting from the lone hacker to coordinated crime syndicates that operate much like startups. Unlike law-abiding startups, these cybercriminal organisations are funded for the sole purpose of extorting from you. Several have produced returns in the hundreds of millions of dollars.

The toolset to extort is constantly evolving too, moving away from individually built software to ‘Ransomware-as-a-Service’ offerings obtained off the dark web for a percentage of the bounty. Sadly for society, it has never been easier to get into the cybercrime game, and never more rewarding. There is still a high volume of easy ‘prey’: targets that will pay. Organisations keep paying the ransoms, so the criminals keep attacking, all the while improving their skills of extraction. It’s supply and demand, basic economics working in the criminals’ favour.

The criminals’ greatest recent insight, however, may be how they secure paying customers.

Hackers used to extract data from an organisation and then hope for a big payday by selling that customer data on the open market to the highest bidder. They learned that this approach was fraught with limitations. Organisations became better at shutting down extraction attempts, so hackers could not get much data to sell. Negotiating with buyers was challenging, risky, and lengthy, leading to outcomes that didn’t produce the expected payoff.

With ransomware, the security game changed.

Rather than spend time extracting data and hoping for a big payday on the open market, cyber criminals found they could remove both the “no data extracted” risk and the risk of a small or non-existent payoff with one small tweak: ransomware.

Now, cyber criminals had an instant ‘highest bidder’ willing to pay immediately: the compromised organisation. Criminals figured out that ransomware creates a built-in buyer with deep pockets, namely you and your cyber insurance policy. 96.88% of ransomware infections take less than four hours to infiltrate their target, with the fastest completed in under 45 minutes. The urgency of recovering the data quickly so that the organisation can continue operating makes the victim the most willing buyer.

According to Wired Magazine (Jan 2021), the business of ransomware has set its sights on wealthy organisations willing to pay to save their reputations. If this is you, then you should be aware of the scope of the costs during the pandemic. Be aware, too, that only 8% of organisations that pay a ransom get back all of their data, and that 80% of those who paid experienced another attack.

The 2021 State of Security found that for a UK-based organisation, the average cost of a breach in 2021 is $4.67M, up 19.7% from 2020. The costs include detection and escalation (29%), lost business (38%), post breach response (27%), and notification (6%).
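As a quick sanity check on those proportions, the component costs of the $4.67M average can be broken out directly. A small worked example, using only the figures from the study cited above:

```python
average_breach_cost = 4_670_000  # USD, 2021 average for a UK-based organisation

components = {
    "Lost business": 0.38,
    "Detection and escalation": 0.29,
    "Post-breach response": 0.27,
    "Notification": 0.06,
}

for name, share in components.items():
    print(f"{name}: ${average_breach_cost * share:,.0f}")
# Lost business alone accounts for roughly $1.77M, the largest single component.
```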

With the plethora of headlines, why are organisations (still) falling victim to cybercrime (and still being forced to pay)? Do you ever wonder how they get it so wrong?

Perhaps a better question is: ‘Why does ransomware succeed when the outcomes are known to be so bad?’

Unfortunately, when it comes to a ransomware attack, organisations are often forced to pay the high ransom demanded by the crime syndicate because they learn only after the attack that they were in fact highly exposed, not protected, and no longer in control of their data or their destiny.

Some organisations think they can take the easy route, being all too willing to pay the ransom and then make a claim against their cyber insurance policy. In some cases, leaders are simply too trusting of the criminals and believe they will cooperate by releasing and unlocking the data. This over-trusting approach has cost many their jobs: 32% of leaders have been removed, by dismissal or resignation, after a breach. Ouch!

Unfortunately, negotiating with cyber criminals is often a lost cause. It is believed only 8% of businesses that pay a ransom get back all of their data. These are not good outcomes!

Worse, criminals don’t release your data immediately. The average time to regain control of the data is 21 days. That’s a long wait while the organisation is disrupted and compromised in its ability to function properly, and it’s certainly not the instant response many executives expect. All the while, the downtime compounds the damage through lost revenues and reputational risk.

What are the Warning Signs that You Could Be Next?

Based on our own informal discussions and research, we’ve seen some definite patterns that, if spotted in your organisation, could serve as an indicator that you will be more prone to a successful cyber-attack:

False belief: It won’t happen here. Think again. Ask the Scottish Environment Protection Agency. Or Serco. Or Northern Rail. Or Accenture. All were ransomware attack victims in 2021. The new best practice is to assume you are already compromised. Doing so will drive new thinking and new priorities, and help prevent you from being the next victim of ransomware.

“My data is already backed up”: Many organisations are under the false assumption that they have a ‘copy’ of their data and can quickly restore it after a disruption. And many of those same organisations find out the hard way that their ‘backed up’ data cannot or will not restore (and get forced into paying a high ransom). Only 57 percent of businesses are successful in recovering their data using a backup. Not all backup data is the same. At TES, we know the difference.

Security near the bottom of the action list. The pandemic changed views on cyber security. Unfortunately, the impact created opportunities for cyber criminals, as more than 75% of IT teams said cybersecurity took a “backseat to business continuity during the pandemic”.

Relying too much on cyber insurance. Cyber insurance should be the last line of defence, not the first. Having strong security practices and compliance should mean that you never need to activate your policy. And, just like other forms of insurance, sometimes not all your costs are covered: in 42% of cyber insurance claims the payout did not cover all the losses, meaning organisations had to pay in the end.

Ignoring the human factor. The same study that found cyber security took a backseat to supporting work-from-home also found that more than 30% of workers under the age of 24 admitted to outright bypassing certain corporate security policies to get work done.

There is Time. You Can Avoid a Successful Attack

Security is a multi-layer approach, like a medieval castle: high, hard-to-scale walls, surrounded by moats, with roaming guards to spot and kill an intruder. Multiple systems working together toward one goal.

As with all security issues, there is rarely a “silver bullet” or singular step that will fully mitigate the problem. Multiple steps are needed to reasonably defend against ransomware. Some steps are designed to prevent ransomware from taking hold, while others reduce the impact of a breach.

Given the ubiquity and sophistication of cyber-attacks, the best position to take is to assume you are compromised. This will change the mindset inside the organisation and reorder priorities to ensure the damage and risk are minimised. This mindset could help you avoid the high cost of a successful attack by deploying the right protections, training, detection systems and response plans within the organisation.

Next Masterclass: Just Say “No” to Hackers > How to Minimise the Risk of High Cost and High Operational Impact of Ransomware

Cybersecurity is complex, but it does not have to be overwhelming with the right strategy. This #TEStalk masterclass outlines a framework for deploying the right mindsets, protections, detections and other technical infrastructure to minimise the risk and impact of ransomware. Register for free.

4 Options CIOs Are Considering for Centera End of Life

7 minute read

 

If you’re facing a lengthy, undesirable Centera migration,
then this guide is for you.

 

As of 31 March 2018, the Centera product line was discontinued and became an end-of-life (EoL) product, with no further product development. Leaders now have a fixed-content headache with applications like FileNet, CMOD, Enterprise Vault, NICE, and EMR/EHR/PACS.

Why Centera Customers are Considering a Change

Many Centera customers purchased the storage product for a specific use case several years ago. Since then, several other storage platforms and cloud-based storage offerings have been introduced and have evolved in the market. Enterprises are now deploying hybrid cloud storage solutions to balance cost containment, data performance and data security.

For Centera customers, there is a perceived risk in migrating to modernise your storage environment. You could be facing a lengthy and costly migration that non-TES customers report can take many months to years to complete. Slow migrations increase the risk of downtime and data loss, plus the additional cost of running two systems for a lengthy period whilst your migration takes place.

This large (in some cases petabyte-scale) repository of data is not deriving any business value, because the proprietary Centera interface precludes any meaningful data analysis. Wouldn’t it be great to use the trends in this data as part of an AI training model, for a competitive edge or a better quality of service?

4 Options to Address EMC Centera End of Life for Enterprise Storage

When dealing with end-of-life announcements, some organisations act quickly and move/migrate to another solution. Others seek to get maximum value from their current platform and manage the risk until such time as it becomes too expensive or risky to maintain.

‘Do nothing’ is still an option, just not a sustainable one. The status quo comes with additional risk, since the proprietary hardware parts are no longer available and support is more difficult to procure.

However, ‘do nothing’ is the exception in 2021. Over the last few years, concerned Centera customers have opted not to stick with the status quo and are considering a change due to a) a short-term need to expand enterprise storage capacity, b) the need to reduce current support costs and overall TCO, and/or c) a longer-term enterprise storage solution that is more flexible, scalable and secure.

Since ‘Do Nothing’ is the least favourable, the following are the three (3) appropriate migration paths for Centera:

Migrate to another On-prem platform(s)

This might be the most straightforward path: moving from one on-prem platform to another, like IBM Cloud Object Storage.

Migrate to Cloud-only

This option is common with enterprises striving for a pure cloud-only architecture. Often years in the making, the migration to the cloud is typically time-consuming, lengthy and costly upfront in order to receive the expected long-term cost savings (hint: early adopters are learning, especially during COVID, that the opposite may occur).

Hybrid Cloud Storage

The third option, gaining momentum in 2020, is Hybrid Enterprise Cloud Storage: blending the best of cloud and on-prem storage. A hybrid cloud storage infrastructure allows you to match each workload to the appropriate approach, public cloud or on-prem hardware, so that your data and content are stored in the most suitable location based on expected cost, accessibility, frequency of access, security policies, and use case.

How Much Time Should You Dedicate to Your Centera Migration Project?

Estimating this is challenging because of the manner in which Centera stores data: repositories can reach hundreds of terabytes or even petabyte scale, with a lack of application integration and no multi-threading.

We have heard horror stories of migration efforts taking years, often involving third-party consultants. This lengthy process can be unnecessarily expensive and risky: as each day passes, the probability of the EoL Centera system experiencing an unrecoverable hardware failure increases.

TES has a modern approach that migrates up to 14TB/day and 50 million objects/day from Centera. This speed reduces the migration project from years to weeks and costs significantly less. Schedule a 15-minute consultation to determine the expected timeframe of your migration project using our modern, unique process. If you need a speedy and less costly migration solution for Centera, contact an Enterprise Storage specialist at TES.
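Using the throughput figures above, a rough duration estimate takes the slower of the two constraints, capacity and object count. A back-of-envelope sketch; the repository size and object count below are hypothetical examples, not customer figures.

```python
import math

def migration_days(total_tb: float, total_objects: float,
                   tb_per_day: float = 14, objects_per_day: float = 50e6) -> int:
    """The project duration is bounded by whichever constraint is slower."""
    return math.ceil(max(total_tb / tb_per_day, total_objects / objects_per_day))

# Example: a 500 TB Centera repository holding 2 billion small objects.
print(migration_days(500, 2e9))  # object count dominates here: 40 days
```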

What Are the Typical Risks Associated with a Centera Migration Project?

As with any migration project, risk mitigation is best achieved with the proper planning, project management and allocation of resources. In the case of re-platforming from an EoL product like Centera, the speed of the migration effort (or lack of it) is a risk factor considering hardware failure may be unrecoverable.

In addition, migrating data from Centera tends to be trickier than with other storage platforms because of the proprietary API through which applications write data to and extract data from the repository. Extraction and migration tend to be slower, dragging out the data migration process.

Finally, for many enterprises, any migration needs to include a process that ensures meeting the relevant regulations. As such, a cross-platform migration requires a process that ensures chain of custody.

How Can I Access Free Support to Build My Business Case?

With the proliferation of use cases, the simple storage decision is not straightforward anymore.

The Enterprise Storage Technical Specialists at TES will guide you to the ideal decision using a unique blend of analysis tools and assessment processes. We can help you evaluate appropriate storage options within your own IT environment – often at no charge to you.

See if you qualify for the free business case development assessment by one of our Enterprise Storage Technical Specialists. Contact us today here.

Why Enterprises Running SAP HANA Seek IBM Power


Software platforms like SAP S/4 HANA have the power to digitally transform your Enterprise, streamlining business processes and generating real-time insights from your data. Taking full advantage of the in-memory database capability of SAP HANA requires a review of your IT infrastructure. For Enterprises that require scalability and availability at a lower TCO, there is a choice that makes the shortlist almost all the time: IBM Power Systems.

IBM Power Delivers for SAP S/4 HANA

Since 2015, thousands of organisations across various sectors have chosen IBM Power Systems to run their SAP S/4 HANA platform. And for good reason, namely a £1.4M NPV positive impact over 3 years, a 7 month payback period, and no downtime over an 18-month period — all verified by a study completed by Forrester Consulting.

Forrester interviewed SAP HANA customers using IBM Power System about their experiences, then quantified the results a typical organisation could realise. Download your free copy of the report today.

The Key Criteria Enterprises Use to Select IBM Power Systems For HANA

In the digital-first world, Enterprises whose downtime costs exceed £20,000/hour are starting to prioritise performance, scalability and security over cost. When unplanned infrastructure downtime is near zero over an 18-month period, revenues are maximised. Reliability, however, is not the only reason Enterprises select IBM Power for their SAP HANA environment. Beyond this incredibly high reliability, IBM Power also delivers several other benefits for enterprises:

  • Reduction in system/server administration costs: Enterprises report 43% less time to update the solution stack and a 30% cut in time spent on server administration with IBM Power
  • Infrastructure consolidation: Up to an 86% reduction in the number of SAP HANA servers
  • Reduction in license costs: Consolidate up to 16 x86 servers onto one IBM Power system, leaving fewer licenses to maintain and lower server administration costs
  • Certified scale-up virtualisation: Virtualise up to 24TB in scale-up configurations, an environment SAP has certified for IBM Power
  • Precise cost control: Allocate capacity in increments as small as 0.01 cores and 1GB, enabling you to avoid overpaying for capacity
  • Shared processor capacity: Efficiently utilise processing capacity across SAP HANA instances to further reduce Total Cost of Ownership (TCO)
  • Predictive failure alerts: IBM Power Systems uses heuristics, running in the background of ongoing SAP HANA workloads, to pre-emptively warn DBAs when a failure is likely to occur, enabling you to prevent a costly outage

Download the Forrester TEI Report

If you are considering deploying SAP S/4 HANA either as an update or a new deployment, then download the Forrester report here. Read how one of the most important, yet underappreciated decisions could make or break your deployment: The IT infrastructure.

4 Overlooked Considerations That Can Cause Your AI Strategy to Fail

7 minute read

 

If you’re feeling pain in scaling your AI strategy,
then this guide is for you.

 

AI is considered a driving force powering the next age of human progress and computing platforms. Yet early experience suggests that achieving success with AI/Machine Learning/Deep Learning is harder than expected. Realising the transformative effects of AI is not as simple as flipping a light switch.

AI is the second-most important initiative to enterprise leaders today, second only to using data-driven insights to improve products and services, according to Forrester Consulting.

The No. 1 goal for AI-based projects is increasing revenue growth (43%), followed closely by improving employee productivity, improving CX, and increasing profitability. Not surprisingly, top use cases mirror these key goals with over 70% of firms currently using or expanding their use of AI technology to support customer service interactions, operational efficiency, and business intelligence application scenarios.

No organisation advancing its AI strategy and capabilities is doing so in isolation; each must deal with the direct dependencies and demands that AI places on the organisation’s people, data, processes and technology.

AI Success is Driving the Next Generation of Market Leaders

With the rapidly evolving and transformative effects of the fourth platform, failure to participate is no longer a viable business option. Companies that wish to digitally transform must understand that embracing the status quo will leave them struggling to keep up with competitors that recognised the opportunity before them.

AI has the ability to create incredible value by decreasing costs, increasing productivity, and improving customer experiences.

Up until 2019/2020, enterprises focused on experimenting with AI within specific areas or functions. According to Forrester, enterprises that have achieved success with AI are seven times more likely than firms that have not scaled AI to be the fastest-growing organisations in their industries. Conversely, those that have not scaled AI are 1.4x more likely to be merely average in revenue growth rate compared to competitors.

Why Organisations are Failing at AI

Data Quality: 90% of firms are severely challenged at scaling AI across their enterprise, with data the driving force behind this difficulty.

Lack of AI Understanding: One of the most perplexing findings of the same Forrester study is that 52% of respondents simply don’t know what their AI data needs are. If enterprises don’t know what they need, they may blindly jump into AI initiatives that have little chance of success or worse, may never try in the first place.

AI Skills Shortage: Without the right skills in place, teams will struggle with solutions and fail to successfully carry out use cases. The skills shortage is real, and many enterprises are underestimating the time needed to ramp up to proficiency (hint: it’s more than 12 months).

Not Thinking Beyond Compute: Simply put, there is no AI without information architecture (IA). Many organisations start with a focus on the compute side of AI, investing in GPUs. While GPUs are critical to AI success, this singular focus can, and sometimes does, lead to the disruption or complete failure of AI projects.

The IA that handles an AI pilot project may not function well when scaled across the enterprise. Organisations must review their entire information architecture for potential breakpoints (performance, cost, security) across computing processing, data storage and interconnectivity when they start to scale AI across the enterprise.

Data Quality is the Top Success Factor for AI. What are the others?

Without properly prepared and curated data, AI initiatives fail. While data quality and data standardisation are the top AI success factors, they are not the only ones:

Data Integration: The ability to connect AI platforms with analytics/business intelligence platforms, along with connecting multiple data sources.

People: Access to data science and AI/ML engineering skills is critical. As noted above, these skills are in short supply in 2020, as demand surges across many industries.

Tech Infrastructure: GPUs are a must, no arguments there. But just as with GPUs for computing power, not all storage is created equal for AI and data workloads. Many general-purpose platforms were not designed with AI in mind. Purpose-built platforms like IBM Spectrum Scale and IBM Cloud Object Storage have been designed specifically to handle Data and AI workloads.

In addition, the next generation of Information Architecture (“IA”) is being designed to scale up and out with minimal to no disruption to your production operations. The current thinking behind multi-cloud and hybrid cloud architectures is ensuring this next-generation IA scales not only from a performance standpoint but also with cost and security in mind.

Data Management Processes: Managing data manually can be a challenge, especially when training AI. Organisations that succeed at scaling AI think ahead in this regard and use automation to manage data efficiently.

Key IA Considerations within Each Stage of the AI Journey

Each AI journey or initiative contains four stages: a) collect the data, b) organise the data, c) analyse the data, and d) infuse insights into the organisation. AI is driven by data, and how your data is stored can significantly determine success. The specialists at IBM outline the impact of Storage across the four stages:

Data Collection. The raw data for AI workloads can come from a variety of structured and unstructured data sources, and you need a very reliable place to store it. The storage medium could be a high-capacity data lake or a fast tier, like flash storage, especially for real-time analytics.

Data Organisation. Once stored, the raw data must be prepared: processed and formatted for consumption by the remaining phases. File I/O performance is a very important consideration at this stage, since you now have a mix of random reads and writes. Take the time to work out the performance needs of your AI pipeline. Once the data is formatted, it is fed into the neural networks for training.

Data Analysis and Infusion. These stages are very compute intensive and generally require streaming data into the training models. Training and analysis is an iterative process, requiring setting and resetting, which is used to create the models. Inferencing can be thought of as the sum of the data and the training. The GPUs in the servers and your storage infrastructure become very important here because of the need for low latency, high throughput and quick response times. Your storage networks need to be designed to handle these requirements, as well as data ingestion and preparation. At scale, this stresses many storage systems, especially ones not prepared for AI workloads, so consider specifically whether your storage platform can handle the workload in line with your business objectives.

Moving Forward at Scale

To compete beyond 2020, organisations will need to keep developing and scaling their AI capabilities in order to remain, or become, the leader in their space. The next paradigm is well under way. The next step is up to you.

TES offers a free IA and Storage Assessment, providing a free assessment report outlining where your IA and Storage is viable (and deficient) for scaling AI. Many of our clients use this report as a fresh set of eyes to validate their strategy and/or find the breakpoints that could emerge once the scale effort starts. Request the free assessment here.

The 4 Steps to Prioritise Your Cost Containment Initiatives

The 4 step process to identify, prioritise and execute your IT Cost Containment plan.


7 minute read

 

You may not know where to start, just that you must act swiftly.
If so, then this guide is for you.

 

It is famously said that “you can’t shrink your way to greatness”, but in a financial and health crisis you do need to make cuts to survive.

In the early days of COVID, you raced to complete transformative initiatives to keep the organisation operating as expected. Many digital transformation initiatives became your top priority, helping transform the organisation in a matter of weeks rather than months or years.

However, a new reality has emerged for many: IT spend is too high and unsustainable within our unpredictable lockdown economy. Like many financial and IT leaders, you may be under pressure to get the operating environment ‘back to normal’ and must find a way to continue to deliver without the same level of resources.

Unfortunately, many now-exhausted IT leaders don’t want to dismantle the progress made over 2020, but realise they must deal with the overspending issue even when one initiative depends on another.

The Cost Reduction and Containment Initiatives for Enterprises in 2021

IT cost containment and reduction initiatives range from small, contained efforts to programmes involving the entire organisation. Easy, near-pain-free reductions include cancelling unused or underused monthly SaaS subscriptions, while more complex, time-consuming initiatives involve code evaluation and shifting workloads to more cost-efficient platforms.

The cost containment and reduction initiatives many leaders are undertaking in 2021 include:

  • Evaluating existing projects for Pause, Stop and Continue
  • Terminating unused SaaS or cloud services
  • Ensuring application code minimises computing resources
  • Shifting workloads or data storage to the most cost-efficient platforms
  • Negotiating down contracts
  • Consolidating databases and/or enterprise software onto less costly platforms, leading to a significant reduction in software licensing costs and operating costs
  • Increasing productivity of assets through increased use or consolidation
  • Driving automation across the IT department
  • Evaluating the impact of CapEx and OpEx on the IT budget. Leveraging SaaS-like consumption models for capital projects
  • Reducing internal service levels

Step 1: Begin with a View to Strategic Cost Containment

Research shows that organisations that invest strategically during tough times are more likely to emerge as market leaders in the future. Tough times require difficult actions.

All organisations have some easy, tactical opportunities to save budget, but this is often not enough. Cost-cutting without context of the organisational impact is a recipe for disaster (and career limiting!). Frozen or suspended costs may provide immediate expense relief, but may resurface at the wrong time. Getting it right means making strategic cost decisions.

Cost containment and optimisation initiatives should be sustainable over the short term and long term. Therefore, leaders should ensure decisions are made with a full understanding of the business impact and avoid cuts that simply shift spend — spend that is likely to return in another place or time without any overall gain or benefit to the organisation.

 

Step 2: Evaluate Cost Decisions with a Cost Optimisation Framework

To assess which cost containment and optimisation initiatives and programmes to undertake in 2021, TES recommends the use of a cost optimisation framework. The framework balances the cost impact and potential benefits against the impact to the business, time-to-value, risk (business and technical) and any required investment.

Not all cost containment initiatives will result in the same benefits. By assessing cost containment initiatives within a framework, a prioritised and optimised list of cost containment initiatives will emerge that will:

a) Meet cost-cutting targets, and
b) Ensure the organisation is well-positioned for better days ahead.

The recommended framework consists of six (6) key areas to analyse and determine your prioritised cost-cutting initiatives:

Potential Financial Benefit
Estimate the financial impact each cost initiative can have on the bottom line.

Ask: How much can be cut from my budget if the initiative is implemented? Is there an effect on cash flow in the short and long term?

Business Impact
The optimal cost reductions are those that occur within the same fiscal period. While long term cost savings can drive the organisation forward, these may not produce much-needed immediate cost savings. Determine what impact an initiative will have on the operations of a specific business unit or function and on your people.

Ask: Will there be an adverse impact on day-to-day activities and operations, such as decreased productivity or product time to market? If the organisation fails to grasp these effects, initiatives may fail.

Time to Value
Whether cost containment and optimisation initiatives are approached via Waterfall or Agile thinking, the time it will take to realise the cost savings and improve business value needs to be considered. If the cost savings will not be realised until the next fiscal period, then the initiative may not be as valuable as one whose value is delivered instantly, no matter the ‘size of the prize’.

Ask: Can the cost savings be captured and realised within the desired time frame (weeks/months/fiscal year)? What is the best method to measure soft savings with an initiative?

Degree of Organisational Risk
The effectiveness of the cost containment initiative may depend on whether your organisation and people can change and adapt to new processes or structures.

Ask: Can our people ensure the changes are made? Does the organisation possess the capability to adapt and learn through change?

Degree of Technical Risk
This risk resides within the domain of IT leaders. IT leaders must work across the organisation to ensure IT changes can be integrated within the current operations. Delays caused by or attributed to the initiative could result in a loss of service delivery or productivity.

Ask: Can the change undermine the ability of our systems to deliver services?

Investment
Cost optimisation sometimes isn’t about cost reduction; in some cases, it is about sustained improvements in business processes, productivity and time to market. Some initiatives will require an initial investment that leadership (and/or the executive board) must agree to fund. Present a business case showing the potential business benefits versus the status quo and the level of investment required.

Ask: Does the initiative require a large, upfront investment before savings can be realised? Can our organisation make an investment at all?

 

Step 3: Determine Your Optimised Cost Containment List

Each organisation is different in terms of risk appetite, policies and investment considerations in challenging times, just to name a few. Your decision framework should account for these factors.

Start by weighting each of the six (6) assessment areas above, with Potential Financial Benefit and Business Impact weighted as one group and the other four together as a second. Score each proposed initiative across the six areas. Once you determine the scores, calculate a weighted assessment score for the initiative and map it to a 3×3 grid. Repeat this for all the initiatives under consideration. When all the initiatives have been mapped, prioritise your list, actioning the high-impact, low-risk, fast time-to-value initiatives first.
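Here is a minimal sketch of the weighted scoring in Python. The weights, the 1-to-9 scoring scale, and the example initiative are hypothetical assumptions; your framework’s actual weighting will differ.

```python
# Hypothetical weights: Potential Financial Benefit and Business Impact form
# one group (60%), the remaining four areas form the second group (40%).
WEIGHTS = {
    "financial_benefit": 0.35, "business_impact": 0.25,
    "time_to_value": 0.10, "org_risk": 0.10,
    "tech_risk": 0.10, "investment": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Scores run from 1 (poor) to 9 (strong) in each assessment area."""
    return sum(WEIGHTS[area] * score for area, score in scores.items())

# Example initiative: cancelling unused SaaS subscriptions.
saas_cleanup = {"financial_benefit": 6, "business_impact": 8, "time_to_value": 9,
                "org_risk": 8, "tech_risk": 9, "investment": 9}
print(weighted_score(saas_cleanup))  # ~7.6: high impact, low risk, action first
```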

Need Help with the Assessment? Download our free IT cost containment framework tool to complete your assessment

 

Step 4: Action the Cost Containment Initiatives and Reduce your IT Spend

With your assessment complete and the cost containment initiatives prioritised, the work to contain, reduce, and change begins.

The strategic assessment outlines those initiatives appropriate for your organisation, balancing potential cost reductions against the net benefits and potential risks.

Putting these initiatives into action to realise the cost savings is when the real work begins. The Enterprise Specialists at TES are well experienced in database consolidations, data centre migrations, staff augmentation, hybrid cloud AI designs, contract renegotiations and code audits for performance and productivity: many of the cost containment projects enterprises are executing in 2021.

Our specialists ensure you capture the anticipated cost reductions. Book a free assessment session today with an Enterprise Specialist to help you develop your executional roadmap to cost containment and reductions.

How to Design your Enterprise Hybrid Multi-Cloud Storage Strategy

A step by step guide to formulating your ideal infrastructure strategy.


7 minute read

 

Hybrid cloud storage is an approach to managing cloud storage that uses both local and off-site resources. A hybrid cloud storage infrastructure is often used to supplement internal data storage with public cloud storage, and it is a critical component of an overall hybrid cloud strategy, as one drives the other. 94% of enterprises are pursuing a hybrid cloud strategy in 2020. Cloud technologies, and to some extent on-premises technologies, have matured to make the Cloud value proposition less an ‘either/or’ and more an ‘AND’ proposition.

Hybrid Cloud isn’t “to cloud or not to cloud”. As Deepa Krishnan, IBM offering management director, wrote in her blog, the question is rather “What is the best way to optimise my IT environment to drive my business forward?” The maturity of these technologies makes almost any IT vision possible, all within your specific cost, regulatory compliance, and security configuration framework.

What Is Hybrid Cloud Storage?

When implemented successfully, no one in your organisation should notice, as the hybrid environment acts as a single storage system. Before we dive deeper into hybrid storage for enterprises, let’s first define the terms associated with hybrid cloud storage:

On-premise: This is the IT infrastructure you own, located inside a data centre or colocation facility. You bought the enterprise servers, storage environment, switches, etc., and you are responsible for the management and administration of the overall IT environment.

Public Cloud: This is the IT infrastructure you don’t own and pay for access through a cloud services provider such as IBM Cloud. The public cloud vendor provides access to a set of standardized resources and services and is available on a pay-per-use consumption model.

Private Cloud: Provides a cloud-like solution within a defined hardware footprint. Also known as a corporate/internal cloud.

Hybrid Cloud: Combines resources from private, public, and on-premises environments to take advantage of the cost-effectiveness each platform can deliver.

Benefits of Hybrid Cloud Storage

Enterprises adopting a hybrid cloud strategy view this as the optimal approach to address the constant explosive growth of data and content and to derive value from such data. Enterprises deploying a Hybrid Cloud Storage strategy are realising several benefits that may only be possible through a hybrid approach. These benefits include:

  • Extending the life of on-premise storage and maximising such investments
  • More predictable storage usage and scalability based on changing storage needs
  • Better control over data costs
  • Reclaiming on-site storage capacity
  • Optimising the balance between storage costs and data value
  • Improving disaster recovery and business continuity (DR/BC) strategies
  • Simplifying operations and saving time for IT personnel

Hybrid Storage Use Cases

You can use hybrid storage for a variety of purposes. The most common use cases include:

Sharing application data: Frequently you need to be able to access application data both on-premise and in the Cloud. Many applications share data and you may have applications in both environments. This requires applications to be able to access data no matter where the application is hosted. Hybrid storage enables you to share this data smoothly.

Cloud backup and archive: You can use hybrid storage to optimise backups and archives across multiple sites. For example, simple solutions will help you quickly and securely move backups to Cloud locations. Advanced solutions can help you combine back-ups from multiple sites into a centralised location for faster RTO and RPO.

Multi-site data: Hybrid storage can help you share data across sites while keeping data consistent. You can use hybrid storage solutions to synchronise data, ensuring that all storage resources contain reliable copies.

Extending on-premise data to the cloud: Hybrid storage systems are used to supplement local data storage with cloud storage resources. These systems use policy engines to maintain active data on-site and move infrequently used data to cloud storage (a minimal policy sketch follows this list).

Big data applications: Hybrid storage can help you process and analyse big data more efficiently. Using hybrid storage, you can easily transfer datasets from the cloud for in-house computations or vice versa. You can also more easily isolate sensitive or regulated data.
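The policy engine mentioned above can be approximated in a few lines. A minimal sketch with hypothetical thresholds and paths: files untouched for 90 days are flagged for the cloud tier; a real engine would copy, verify, then stub or delete the local copy.

```python
import time
from pathlib import Path

SECONDS_PER_DAY = 86_400

def select_for_cloud_tier(local_dir: str, idle_days: int = 90):
    """Yield files whose last access time is older than the idle threshold."""
    cutoff = time.time() - idle_days * SECONDS_PER_DAY
    for f in Path(local_dir).rglob("*"):
        if f.is_file() and f.stat().st_atime < cutoff:
            yield f

# Hypothetical local path; in practice this runs on a schedule.
for candidate in select_for_cloud_tier("/data/active"):
    print(f"tier to cloud: {candidate}")
```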

The 4 Areas to Assess when Designing a Hybrid Cloud Storage Strategy

A successful hybrid cloud strategy depends on a successful hybrid cloud storage environment. A hybrid cloud storage strategy starts not with the technology but rather with understanding the complete picture of your data:

The Relationship between Your Data and Applications (Data Gravity)

  • What is the scope and size of your datasets?
  • Where does most of your data live? Is this ideal?
  • What applications need to access this data?
  • Can the data be easily moved? If the data needs to move, are there additional changes that need to occur to facilitate the change?

 

Security

  • Access: Who should have access to your data? Perhaps more importantly, who should NOT have access to your data?
  • Monitoring: How are you keeping an eye on the above two groups of people?
  • Lifecycle: How long is your data valid/relevant/useful?
  • Retention: What backups and disaster recovery options do you need for your data?
  • Compliance: What, if any, governance law dictates where and how long your data can live?

 

Performance

  • Latency: Do any of your applications have latency requirements? What is the impact if they are not met?
  • Frequency of Access: How often does your data need to be accessed? This is important, as it can impact operating costs
  • Growth: How will data growth impact overall performance?
  • Data Types: How easy is it to sift through your data (structured vs. unstructured)?

 

Other factors

  • Cost: Considerations include price, performance, tiers, and data transfer rates
  • High availability requirements & network connectivity: While related to latency, this consideration goes further when moving data to the cloud; internal networking equipment may need to be updated to ensure reliable connections, without which you cannot consistently access data and services
  • Alignment of your data strategy to your overarching business strategy
  • Data integration: Data needs to be synced across your infrastructures. Managing this synchronisation can be challenging without an automated process. Products like IBM Cloud Paks enable this data integration
  • Unified management: Smooth operations require unified, centralised visibility and management. Platforms like Red Hat OpenShift can drive this optimisation and automation

Technical Considerations

Many storage vendors have hybrid storage solutions that are proprietary and another form of lock-in.

Many enterprises prefer to consolidate their hybrid cloud storage without being locked into a single on-premise storage manufacturer. We can show you how to avoid a closed path whilst receiving consistent management of your data across storage vendors and across public cloud providers.

How TES can Help You Pursue the Right Strategy

The Enterprise Storage Technical Specialists at TES can guide you to the ideal computing environment, whether on-premise or hybrid Cloud. Using a unique blend of assessment processes and analysis tools, see how you can select the ideal strategy for your operating environment – often at no charge to you.

See if you qualify for the free personalised storage assessment with one of our Enterprise Cloud Storage Technical Specialists. Request the storage assessment here

5+1 Considerations for Selecting the Ideal Platform for Your SAP HANA Environment



7 minute read

SAP HANA is one of the first data management platforms to handle both transactions and analytics in memory on a single data copy. It converges a database with advanced analytical processing, application development capabilities, data integration and data quality.

The journey to SAP HANA can be a major transition. How you choose to implement S/4 HANA, and the underlying platform you select, may impact your organisation for many years to come. Our view is that a mission-critical application and database like SAP HANA demands equal care in the choice of the underlying infrastructure platform. The considerations below will guide that choice.

Database Size (and projected growth)

According to SAP (https://www.sap.com/documents/2016/10/26db7b76-8f7c-0010-82c7-eda71af511fa.html), memory is the leading driver for SAP HANA sizing. This is unsurprising, since SAP HANA is an in-memory database.

Sizing your memory therefore becomes critical to success. Size incorrectly and HANA may underperform. With structured data naturally growing at around 30% per year, accurately estimating your memory needs means success now while avoiding the unnecessary cost of replacing an undersized platform later. It is therefore essential to plan for the future while optimising total costs. The larger the database, the better suited an on-premise platform becomes.

System sizing is an exercise now supported by various calculator tools from SAP and IBM, which help build the ideal configuration according to the size of your database, planned use cases, and projected data growth.
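As a rough illustration of the growth arithmetic (not a substitute for the official sizing tools), the sketch below projects a database forward at the 30% annual growth rate noted above. The 2x headroom factor for working memory is an assumption for this example, not an SAP figure.

```python
def projected_memory_tb(current_db_tb: float, annual_growth: float = 0.30,
                        years: int = 3, headroom: float = 2.0) -> float:
    """Compound the database size forward, then apply a working-memory headroom factor."""
    future_db_tb = current_db_tb * (1 + annual_growth) ** years
    return future_db_tb * headroom

# A 4 TB database today implies roughly 17.6 TB of RAM in three years.
print(round(projected_memory_tb(4.0), 1))
```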

Cost (and Risk) of Downtime

SAP Applications typically run the mission-critical processes inside an organisation. In this scenario, if SAP goes down, portions of the company may stop too. As systems (and data) continue to become integrated and connected inside the organisation, the importance of uptime increases dramatically.

The Enterprise Specialists at TES hear from customers that unplanned downtime costs are increasing. We attribute this to higher lost revenues from digital operations. For many enterprises, the cost of unplanned downtime runs into millions per hour in lost revenues and fixes.

Poorly configured or poorly chosen infrastructure can cause unplanned downtime several times a year (click here for a Forrester report on the potential cost of downtime), meaning a ‘cheap’ upfront investment could lead to costly 10x challenges down the road.

Sensitivity of Data

With pervasive encryption now common for data at rest and in transit, cybercriminals have shifted their attacks to target data in use (in memory). This is because data in use has been slow to adopt pervasive encryption and thus remains accessible.

If your SAP instance contains significant volumes of personally identifiable information (“PII”), financial or health data, then platforms with integrated confidential computing capabilities are best suited for you.

Infrastructure Consolidation

One often overlooked consideration when selecting the ideal platform for your SAP HANA environment is the state of your current infrastructure. In some cases, it can be economically feasible to invest in a modern platform that is not only the ideal choice for your SAP HANA environment, but can also enable you to consolidate existing workloads onto the same platform.

When this occurs, the total cost of ownership (TCO) receives a multiplying effect. This type of consolidation can lead to significant software license savings (in some instances, 33%), reduced administration time and resources, and reduced operating costs.

Analytics and AI Workloads

AI is a very resource-intensive workload, and your SAP HANA environment should provide significant data for AI to ‘consume’. Selecting a platform purpose-built for AI (instead of general purpose) strengthens your chance of AI strategy and execution success, which in turn increases the value you can derive from your SAP HANA environment.

No Longer a Consideration: OPEX Spend

Shifting IT spend from CAPEX to OPEX was a major consideration five years ago, as many organisations leveraged Cloud to change their IT consumption model. The perceived core benefit of this shift is an efficient IT environment in which you pay only for what you use, whilst retaining the capacity to expand quickly.

With advances in on-prem pricing models, organisations can now enjoy the same pay-as-you-go pricing popularised by cloud vendors within their on-prem environment.

Many Enterprises Depend on IBM Power for Enterprise SAP HANA environments

IBM Power is designed for the demanding, high volume, mission-critical transactional and analytical workloads that are produced by a SAP S/4 HANA environment.

IBM Power is purpose-built for Big Data and AI workloads, and is industry-leading in avoiding disruptions and disasters. (Read how one customer recorded 18 straight months of uptime here).

IBM Power Systems is certified by SAP to run both non-SAP and SAP on the same server (i.e. Oracle database and SAP production applications on the same physical server).

IBM Power Systems is certified by SAP to run both HANA production and HANA non-production on the same physical server.

Power Systems can run the largest SAP HANA virtual machines with almost zero overhead. SAP has certified 24TB in scale-up configuration for both OLTP (S/4H) and OLAP (BWH) environments. This scalability also allows customers to run large SAP-certified, scale-out SAP HANA configurations. Power Systems also delivers 2x faster core performance versus x86 platforms. The higher throughput helps reduce the number of cores needed, further reducing the cost.

Additionally, IBM Power Systems offers a predictive failure alert capability. Using heuristics running in the background, IBM Power can pre-emptively warn DBAs when a failure is likely to occur.

An Easier Method to Assess the Ideal Sizing for You

With so much choice, much can go wrong with the wrong decision. The Enterprise Specialists at TES can provide a no-charge assessment of the platform ideal for you. Contact a specialist today and start your SAP HANA deployment with the right step forward.

1 Unexpected Benefit of an Enterprise Storage Health Assessment


4 minute read

 

Regular health checks have been used for decades in personal healthcare as a preventive way to catch health concerns early to avoid premature aging, deterioration and in some cases, death.

Such practices have been applied by IT leaders as well, to avoid hardware failure. This is the expected benefit of an IT health assessment. But is there more to an enterprise storage health assessment beyond avoiding failure? In short: yes.

Why your Enterprise Storage Needs Regular Health Checks

Just like your own health, the health and performance of your enterprise storage environment may change over time. While ageing equipment is an obvious point of failure, other factors, like changes in workloads, use cases and requirements, can lead to seemingly ‘healthy’ equipment performing sub-optimally.

Raising the stakes is AI/Machine Learning. An optimal performing Information Architecture that includes a healthy enterprise storage environment can be the difference between success and failure with AI. In many sectors, being unsuccessful with AI-based initiatives can put your organisation in ‘follower’ status for years to come.

Whether your data storage is on-premise or in the cloud, it’s critical that you keep your storage infrastructure in good health; after all, data may just be your organisation’s most valuable resource. When undertaking an enterprise storage health check, you should be able to get answers to these questions:

  • “How healthy is my storage?”
  • “Can it scale to handle an influx of data?”
  • “Does my enterprise storage environment strengthen my cybersecurity resilience?”
  • “Is the cost structure still optimal? Should updates be made to reduce cost and maintain performance?”
  • “Does my enterprise storage environment protect my data from breaches and cyber attacks?”

 

 

Sub-Optimal Performance Being Uncovered in 2020

Many Enterprises moved to a cloud-first strategy over the last decade.

This initial transformation period has brought many benefits and advantages to organisations. However, many organisations are discovering that Cloud, like many past IT paradigm shifts, is not a perfect ‘silver bullet’ solution, and 12% are moving a portion of their workloads back to on-premise.

For these early adopters, Cloud 2.0 has begun with a Hybrid Cloud approach that takes a best-of-breed approach to IT infrastructure, computing, storage and data to create the optimal IT strategy and environment.

 

7 Causes of Sub-Optimal Performance Uncovered by Health Checks

Some of the areas that can result in sub-optimal performance of your enterprise storage environment:

Use cases: Data needs are changing. For example, some organisations are discovering that storing data locally improves AI performance.

Workload changes: How prepared is your storage infrastructure to handle a flood of data? If you don’t know the answer, you’re putting your organisation at risk.

Data Security: Has your enterprise storage environment prioritised data security? This is becoming more important as data breaches increase and ransomware attacks become more expensive.

Ageing hardware: Eventually, all assets need to be retired. Infrastructure is no different.

Inefficient computing resources: Too few cores and memory constraints could be signs of future failure.

Misaligned storage media: Each type of storage media performs differently and has different failure rates. Health checks can ensure you have the right balance.

Networking configuration issues: An optimal performing environment is highly dependent on a healthy, well-functioning network.

 

Let the Enterprise Storage Specialists Guide You Forward

Over the last six months, many of our clients have taken a renewed look at their IA due to the rising IT costs caused by COVID. Many are retaining the strengths of the cloud while reducing the costs of their IT operations.

That’s where we can help guide you forward.

One such tool is the Client Storage Assessment. This free-to-you engagement helps you understand the operating performance of your enterprise storage environment, now and into the short-term future. The output provides you with a roadmap to optimal performance, whether you operate in an OPEX or CAPEX environment. Request your free engagement here.