Testing Your Data Recovery Plan

Whether it is human error, a weather event or a technical issue, a disaster is likely at some point in the lifetime of your systems and data. Being prepared for disaster recovery ensures your data and operations are protected and that downtime is minimal. Backup and recovery is not just about having a plan on paper, a phone tree of contacts or a script for customers. Preparedness comes from frequent testing of your backup and recovery solutions.

How often should tests be conducted? The answer depends on your industry, regulations, compliance standards and the nature of your data and processes. Think through these factors and involve business leaders to determine the appropriate testing plan.

Quality is important. Treat each backup and recovery test as the real thing: follow formal recovery procedures and make proper documentation part of every test. Review your testing plan against industry best practices, or work with a trusted business partner to develop a solid test plan. After each test, review the results and note what worked well and what did not.

Understand the environment. As part of your backup and recovery plan, think through which business or IT situations put company data at risk in a disaster. This allows your team to run more realistic testing scenarios. The exercise can also help you identify the periods of the year when risk is greatest and the business functions, IT processes and customer interactions that expose data to risk.

Practice makes perfect. One of the biggest benefits of frequent testing is that your team gains experience and confidence with your backup and recovery plan. By practicing, they can react more quickly in an actual disaster.

Backup and recovery planning is an ever-evolving function of the business. As your business changes, so do your data backup and recovery needs. Make this critical function a key part of your team’s agenda.

Current Cloud Trends

In Monday’s blog post, we discussed why small and medium-sized businesses should consider the cloud, especially when it comes to data protection and disaster recovery. Today, let’s return to a recent cloud trends webinar featuring Joe Lyden, Evolving Solutions Cloud Sales Specialist, and James Keating III, Evolving Solutions Business Technology Architect, and focus on the key trends in the marketplace.

Some current cloud trends of note are:

  • A growing focus on management of Amazon Web Services (AWS) workloads by traditional co-location providers. Companies that were once seen as AWS competitors are now offering AWS management
  • Public cloud vendors: “a two-horse race.” Network World reported Gartner research showing that Amazon and Microsoft Azure are by far the dominant players in the public IaaS market
  • People want to talk cloud. The Evolving Solutions team, as James Keating points out in the webinar, is seeing this firsthand. More companies have someone who is responsible for cloud, and Gartner reports that 6 out of 10 Fortune 500 companies have hired or are looking to hire a director-level person to oversee cloud strategy
  • Cloud as part of the technology tool belt. No longer is cloud considered “new” or “something for that project in the future”; instead, the cloud is seen as a standard tool that the technology team can tap to improve current operations, meet new demands and drive innovation

Here are some specific use cases our team is seeing. First, more businesses are turning to the cloud for disaster recovery, because disaster recovery as a service (DRaaS) provides more protection and resiliency at an affordable, pay-as-you-go price. More businesses are also looking for an integrated monitoring service that covers cloud and on-premises systems together. Finally, businesses are developing what we call a “private hybrid.” In other words, companies are asking which applications work best on-premises and which work best in the cloud, without any intention of bursting into the cloud. This is neither a true hybrid nor a pure public cloud. It is what James calls “a private cloud with a side of public cloud storage or backups.”

Want to talk more about how current cloud trends are shaping up and what they might mean for your company? Contact Evolving Solutions to discuss. You can also listen to Joe and James’ full cloud trends webinar here.

Why SMBs Should Consider the Cloud

Last month Joe Lyden, Evolving Solutions Cloud Sales Specialist, and James Keating III, Evolving Solutions Business Technology Architect, discussed current cloud trends during a lunch webinar. Over the next two blog posts, we will look at several parts of that discussion. Today, let’s focus on cloud solutions for data protection and important factors for cloud success.

How important is your data?

  • 81% of companies that experienced an outage in the past two years were down for more than two days
  • 93% of companies that lost their data center for 10 days or more due to a disaster filed for bankruptcy within one year of the disaster
  • 51% of small and medium businesses (SMBs) have an IT business continuity plan in place. Flip that number and it could mean that 49% are not fully prepared
  • $10,000 is the estimated average cost of a single data loss incident

In today’s marketplace where you are expected to be “on” 24/7, data protection and business continuity are key to staying competitive.  Through the cloud, SMBs can now more than ever access cutting-edge, reliable data protection, disaster recovery and business continuity solutions at an affordable price point. Disaster recovery as a service is becoming an important and affordable entry point for many businesses when it comes to cloud adoption.

Small and medium-sized businesses especially are stretched when it comes to time and resources, but in today’s marketplace you are still expected to provide reliable, always-on service. Cloud solutions allow SMBs to do just that, giving businesses of any size access to enterprise-class technology.

Take a step back. The cloud is “not magic pixie dust,” as Joe Lyden points out in the webinar. Just like with other technology projects, you must have a clear cloud strategy, specific objectives and a clear understanding of what will integrate well. Testing is also extremely important. The right cloud partner can help SMBs navigate the cloud solutions available and weigh in on what would work best for your situation.

Look for our next blog post on Wednesday, April 27th where we will cover current cloud trends. You can also listen to Joe and James’ full Cloud Trends webinar here.

Back up with Brains

Joe Garber walks through four types of analysis that can be applied to backup and disaster recovery (DR) in a recent article for NetworkWorld.  He writes, “Data is the DNA of the modern organization and found in the cloud, behind four walls and at the network’s edge.  Data is also growing at a greater speed than ever before.  This unique combination of growing data complexity, sprawl and volume is forcing IT to rethink traditional approaches to backup and recovery.”

Today let’s take a look at the four types of analysis that he describes as giving backup and disaster recovery “brains.” The analysis types include:

  • Environmental
  • Retrospective
  • Predictive
  • Prescriptive

Each provides a different look into your network, and when combined, Mr. Garber writes, “they allow enterprises to be proactive in prioritizing data, predicting resource utilization, mitigating risk and optimizing infrastructure in order to reduce the burden on resources and manage the costs.” Here are the definitions of each:

Environmental – with data spread inside and outside of the company, environmental analysis allows IT to determine how to manage backup and delivery of information.

Retrospective – this analysis takes into account historical backup and recovery successes and failures. It can also be used to determine how resources are best utilized and to prioritize data so backup can be optimized to meet service levels.

Predictive – as the name implies, this type of analysis can help IT plan for future capacity needs. Using historical patterns, it can also help identify potential backup and DR crunch points that need to be resolved.
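
As a hedged illustration of what predictive analysis can look like in practice, here is a minimal sketch that fits a linear trend to hypothetical monthly backup volumes and projects when a fixed capacity ceiling would be reached. The figures and the 100 TB limit are made up for the example and are not from Mr. Garber's article.

```python
import numpy as np

# Hypothetical monthly full-backup sizes in TB for the past 12 months
months = np.arange(1, 13)
backup_tb = np.array([52, 54, 55, 58, 60, 61, 64, 66, 69, 71, 74, 77])

capacity_tb = 100  # assumed capacity ceiling of the backup target

# Fit a simple linear trend: size ~= slope * month + intercept
slope, intercept = np.polyfit(months, backup_tb, 1)

# Project forward to estimate when the trend crosses the capacity ceiling
months_to_full = (capacity_tb - intercept) / slope
print(f"Growth rate: {slope:.1f} TB/month")
print(f"Projected to reach {capacity_tb} TB around month {months_to_full:.0f}")
```

The same approach extends to backup window length or restore times; the point is simply that historical patterns can be turned into forward-looking capacity planning.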

Prescriptive – this analysis looks at what is happening now and provides steps to solve problems when they occur.

In closing Mr. Garber adds, “As organizations adjust to the reality of a changing IT world — with increasing volume, variety, and velocity of information sources, which have expanded beyond the four corporate walls — they must also expand their information management practices to keep pace with the increasing demands.  In short, they need to move from defense to offense.”

Share how your DR is moving from defense to offense this year.

Disaster Recovery in the Age of Cloud Infrastructure

What does disaster recovery look like in the age of cloud infrastructure delivery? This year James Keating III, Business Technology Architect for Evolving Solutions, took time to answer this question and walk through what he calls “disaster recovery without boundaries.” James breaks down how to set up a cloud infrastructure for backup and disaster recovery processes and operations, and also dials in on how to evaluate your hardware and software needs.

The attributes of cloud that can be utilized for disaster recovery are as follows:

  • On demand computing
  • Containerized workload
  • Location agnostic
  • Speed of implementation

Each one of these attributes lends itself nicely to improving disaster recovery in terms of cost (both capital and labor), speed to recovery and the ability to automate. We invite you to check out his posts and learn more about what cloud can do for the world of disaster recovery.

In Part 1, “offsite backups with a side of disaster recovery,” James walks through the hardware and software needed to set up a cloud cluster for offsite backups. The goal is to reduce complexity and manual labor and utilize the data in the event of a disaster or for testing of failover.  The cloud cluster makes all of this possible without having to invest in a second data center location or additional management.

In Part 2, James focuses on building a scenario to tackle disaster recovery where return to operation (RTO) and recovery point objective (RPO) are critical. Specifically, his solution works to meet an RTO of 8 hours and an RPO of 60 minutes with 100% of the compute and performance of production. In this scenario, a secondary cloud site is built to look as similar as possible to the primary site.

Finally, in Part 3, James addresses the reasons why specific hardware and software were chosen in his backup and RTO/RPO examples. He says that, above all, it is important to get insight into the facts around individual situations. With the rate of change within IT and the fact that technical design is not a one-size-fits-all proposition, the only way to really know what will work well – versus what is just not a good fit – is to properly evaluate each company’s needs against the available options.

Cloud clustering, or disaster recovery as a service, containerizes your workload and makes it possible to treat disaster recovery and testing the same way data centers treat generators and electrical load. What might have seemed unattainable in the past without significant investment in both cost and labor is now within reach: DRaaS offers businesses of any size an effective and affordable disaster recovery solution.

If you would like to learn more or discuss your company’s unique needs, please contact Evolving Solutions.

The Importance of Disaster Recovery Planning

Did you know that three out of four companies are not ready to face a disaster? This finding is from the Disaster Recovery Preparedness Council and reported by Pragati Verma for Forbes.

Ms. Verma writes, “This is surprising on two counts. First, C-level executives have seen the havoc created by a series of high-profile data breaches and natural disasters in the last several years. Second, data protection, security and disaster recovery (DR) are expected to be among the top five areas driving spending on storage-related services during the next 12 to 14 months.” The problem comes from many companies getting the disaster recovery technology in place but not developing an action plan for what to do in a disaster. Or, perhaps, there is a disaster recovery plan, but it is shelved and collecting dust.

Disasters come in many forms, not just natural ones such as floods and storms. What would happen to your business if you lost connectivity for a website or a key system? What if your business lost customer data or had that data breached by outsiders? Disaster recovery planning is an important foundational element for any-sized business. Having a comprehensive plan helps mitigate the risks when a disaster does take place.

Ms. Verma compiled these tips to help you get started in her article:

  • Evaluate your business assets and systems. What is most crucial to your customers and bottom line? Understand where these systems live and how they interact.
  • Determine the potential cost of downtime (a rough sketch of this calculation follows the list)
  • Determine your risk tolerance
  • Classify your data. A one-size-fits-all plan is not a good idea, from both a budget and an operations standpoint
  • Practice and test your plan
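
To put a rough number on the downtime-cost point above, here is a minimal back-of-the-envelope sketch. Every figure in it (revenue per hour, staff count, hourly rate, impact percentages) is a placeholder to be replaced with your own numbers, and intangibles such as reputation or regulatory exposure would sit on top of the result.

```python
# Rough, illustrative downtime cost estimate -- all inputs are placeholders.
revenue_per_hour = 5_000        # revenue normally generated per hour
revenue_impact = 0.60           # fraction of that revenue lost while systems are down
affected_staff = 25             # employees idled or degraded by the outage
loaded_hourly_rate = 45         # fully loaded cost per employee-hour
productivity_loss = 0.50        # fraction of their time lost during the outage

def downtime_cost_per_hour():
    lost_revenue = revenue_per_hour * revenue_impact
    lost_productivity = affected_staff * loaded_hourly_rate * productivity_loss
    return lost_revenue + lost_productivity

outage_hours = 8
print(f"Estimated cost of an {outage_hours}-hour outage: "
      f"${downtime_cost_per_hour() * outage_hours:,.0f}")
```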

Disaster recovery planning should be an ongoing, living process at your company. It is important to have the right technology in place but also important to have thorough planning and forethought.

Prepare Your Disaster Recovery Plan

With most consumers and clients expecting 24/7 availability in today’s business world, it is no wonder business continuity and disaster recovery are top of mind for us all. Don’t have a plan yet? Don’t delay. Business disruptions can happen at any time and can have a direct impact on your bottom line. Today let’s look at tips for disaster recovery planning and advice on how to find the solution that is best for you.

Ryan Francis outlines critical “things” to cover in your disaster recovery plan in a recent NetworkWorld article. Here are a few highlights:

  • Establish a disaster recovery functional team, including a spokesperson responsible for communication
  • Identify your risks. Determine which systems and infrastructure are most vital for operations: information systems, communication infrastructure, access and authorization, physical work requirements and key contacts/communication flow
  • Test your plan regularly and ensure it is accessible from both inside the company and remotely
  • Protect your on-premises systems: ensure alarms and sensors are installed and working properly, and keep in mind natural disasters such as flooding or storms

Most companies also need to seek outside help for their disaster recovery solutions. Christian McBeth writes for CIO, “Your Board of Directors is demanding a complete BC/DR program, your CEO wants it done yesterday, service providers are promoting their services and calling on you all hours of the day, your different lines of business only care about their IT needs, the media is full of news about the latest and greatest cloud recovery technologies … and the list goes on. So how do you even begin to move forward when a hundred voices are clamoring for your attention, your money, and your resources?” Can you relate to this statement?

Mr. McBeth recommends getting the following right to help guide your decision:

  • Know your business requirements and what is happening now in your systems
  • Know what is most vital and what is not – prioritize systems and data
  • Take future projects into consideration
  • Map your critical applications to IT infrastructure, including interdependencies (a minimal sketch of such a map follows this list)
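
As a minimal sketch of the last point above, an application-to-infrastructure map can start as nothing more than a small data structure. The application and component names below are hypothetical.

```python
# Hypothetical map of critical applications to the infrastructure they depend on.
app_dependencies = {
    "order-entry":     ["web-cluster", "app-db", "auth-service"],
    "customer-portal": ["web-cluster", "app-db"],
    "reporting":       ["app-db", "data-warehouse"],
}

# Invert the map to see which applications are hit when a component fails.
impact = {}
for app, components in app_dependencies.items():
    for component in components:
        impact.setdefault(component, []).append(app)

print("If 'app-db' is lost, these applications are affected:", impact["app-db"])
```

Even an inventory this simple makes interdependencies visible and feeds directly into the prioritization steps above.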

In today’s fast-moving business world, disaster recovery planning is more than a once-a-year test-and-update event. It is an important business function that works best when it is reviewed, updated and tested regularly.

Continuous Backup Part 1

by James Keating III, Business Technology Architect, Evolving Solutions

Over the past year, as I have been working primarily with cloud technology, I have noticed that cloud backup is a topic brought up even by companies that are usually cloud avoidant, and the pattern seems widespread. A company with a formal or informal anti-cloud strategy is, for some reason, comfortable with or at least willing to discuss the idea of cloud backups. Now, as we all know, the term cloud is one of the most vague terms in all of IT today. What is the cloud? Ask any person in IT and you will get a slightly different definition. That doesn’t seem to hold true when talking about cloud backups, though. Almost everyone I have talked to is looking at cloud backups the same way. Since I like to define things in order to understand them, here is what I believe the majority of people are looking for with cloud backups:

  • A technology that allows for backup images to be stored off premises (not in the same location as the primary data)
  • A technology that allows for inexpensive storage of seldom- or never-used long-term retention backup data sets
  • A technology that allows for simplified administration and accounting of backup data sets (usually the underlying desire is to get away from tape management)
  • A technology that allows for fast restores of data
  • A technology that doesn’t require hours of administration to maintain
  • A technology that allows for secure storage of data

I know this list looks like a normal backup wish list, regardless of cloud. That is my point: everything people ask for in cloud backups is what one would think would already be a requirement for traditional backup processes without cloud technology involved. This is why I noticed the willingness of those who are not as cloud friendly as others to still discuss cloud backups. Why? The conclusion I came to is that backup is an area that those who work in it day in and day out know is more complex and cumbersome than other parts of IT that get more money and focus. Backups are an area where, if a new way of consuming technology offers any relief, it is worth looking at regardless of the politics and emotions. Deep down, backups are what keep many administrators up at night, knowing they are one issue away from a true disaster and that dreaded restore-the-data moment.
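
To make the off-premises, inexpensive and secure items on the wish list a little more concrete, here is a minimal sketch of pushing an existing backup archive to object storage, using AWS S3 as one example target. The bucket, file name and storage class are assumptions for illustration only, not a recommendation of any particular offering.

```python
import boto3

# Hypothetical backup archive, bucket and key names -- replace with your own.
backup_file = "/backups/nightly/db-2016-04-25.tar.gz"
bucket = "example-offsite-backups"
key = "nightly/db-2016-04-25.tar.gz"

# Credentials are picked up from the usual AWS configuration (environment, profile, etc.)
s3 = boto3.client("s3")

# Server-side encryption covers the "secure storage" item; an infrequent-access
# storage class covers "inexpensive storage of seldom-used retention data".
s3.upload_file(
    backup_file,
    bucket,
    key,
    ExtraArgs={
        "ServerSideEncryption": "AES256",
        "StorageClass": "STANDARD_IA",
    },
)
print(f"Uploaded {backup_file} to s3://{bucket}/{key}")
```

A script like this covers only a few of the wish-list items; administration, restores and long-term retention management are exactly the areas where the packaged offerings discussed below differentiate themselves.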

With all of this in mind, the Evolving Solutions cloud team and I set out and tested many of the cloud backup offerings across the spectrum of SMB and enterprise packages. Full administration, backup, restore and disaster recovery testing has been done in various ways. What came out of this testing is that the technologies that allow for what I am calling continuous backup seem to be the most effective and meet all of the above wish-list items. The issues with most of these offerings, however, are version control and cloud lock-in.

In my next blog post, Continuous Backup Part 2, I will go into what version control, cloud lock-in and continuous backup mean and why these are the three main factors to investigate when looking at cloud backup technology.

Disaster Recovery Without Boundaries Part 3

by James Keating III, Business Technology Architect, Evolving Solutions

Over the past two blog posts I have put forth two scenarios for using cloud infrastructure inside backup and disaster recovery processes and operations. In both of those posts I included a list of software and hardware that could be used to achieve the goals of each scenario. It is that list of software that seems to have resonated with readers the most: I have received numerous emails explaining what other options could have been used and/or why certain parts of the bill of materials I presented should be changed. First, let me say I am glad to see so much discussion generated by the posts; that was the intent, to get people thinking about options in the changing world of IT. Second, I will use the remainder of this post to explain why many of the comments and emails I received about the pros and cons of the proposed architectures are illustrative of the overall point of the Disaster Recovery Without Boundaries series.

The two most common comments I received, by far, related to two software tools I listed by name in the posts: VEEAM and Zerto. It would appear both of these software choices have very loyal and knowledgeable fans. I have received at least 20 comments and emails from various people stating essentially one of the two things below:

  • Why would one choose Zerto when, with some scripting, VEEAM can accomplish the same thing? Further, Zerto support is not offered by VMware directly.
  • Why would one choose VEEAM when Zerto can be used to migrate data anywhere? Zerto has numerous awards and has automation and recovery features that are not available in a single package from any other vendor at this time.

Those two comments are examples of why IT, and specifically IT architecture choices, are more of a journey into the unknown than the specific roadmap most IT folks like to believe. IT is changing, and businesses make choices based upon the information known at a given time. Outside factors, inside factors and other items can influence what is a good or correct choice of design. The VEEAM and Zerto example illustrates the point: if one gathers specifics in terms of goals, strengths, risks and abilities as a factor in IT design, architectures may change for different situations even with a similar use case.

For example, take a company that has deep scripting skills, understands VMware, upgrades VMware versions often and wants to back up and replicate VMware using a single third-party tool. Contrast that company with one that tends to run infrastructure for three years without many software upgrades, has little to no scripting skills and has no budget or interest in growing that skill set, but still wants replication and failover of VMware data. Each of these companies would be correct in choosing different options to achieve roughly the same goals. The first company would be fine choosing VEEAM, while the second might not be able to fully achieve its goals due to a lack of scripting expertise. The second company would be well suited to choose Zerto, as it would not be worried about compatibility limits with cutting-edge releases of VMware and would have better success with a package that requires no scripting. Both options are valid, and which one is best is individual to the circumstances of the company making the choice.

To that end, given the rate of change within IT and the fact that technical design is not a one-size-fits-all proposition, getting insight into the facts of each individual situation is key to knowing what will work, what will work well and what is just not a good fit. To get this type of data, I recommend going through either a Cloud Potential Study or a Business Impact Analysis to learn the how, what and why of the available options and make the best decisions for changing IT needs within the specifics of your situation. Both are engagements that help identify the risks, strengths, goals and unique business factors that enable IT to make informed decisions.

The claim pushed by many vendors (specifically in the cloud space) that one size fits all, or that some IT infrastructure magic happens when one moves to an “as a service” model, is simply not real world in my experience. Due diligence, facts and informed choices are required whether you are putting in a physical storage array or looking at an application delivered only via the cloud. Contrary to what many may think or say, IT folks, and in many ways the infrastructure-focused members of IT, are needed more than ever in a world that allows for so many choices and such a fast rate of IT innovation. A change in how infrastructure is delivered doesn’t diminish the need for skilled people who understand all of the technical details and requirements.

If you would like to know more about what a Cloud Potential Study or Business Impact Analysis is, how it works and how it may benefit IT at your company, contact Evolving Solutions.

Did you miss a post in the Disaster Recovery Without Boundaries series? Check them all out by visiting the links below:

______________

James Keating III is a Business Technology Architect for Evolving Solutions. James is a technology leader in the IT community and brings a combination of excellent technical skills and business acumen which allows him to guide customers in developing IT solutions to meet business requirements.

Disaster Recovery Without Boundaries Part 2

by James Keating III, Business Technology Architect, Evolving Solutions

In my last blog post, I covered a scenario that allows for offsite backups with a simple disaster recovery mechanism in the cloud. In this post I will go into a more complicated scenario where both return to operation (RTO) and recovery point objective (RPO) are critical. Before I get into this scenario, I will set the stage again on which aspects of cloud computing we will use to help disaster recovery, and also more clearly define RPO and RTO.

The attributes of cloud that can be utilized for disaster recovery are as follows:

  • On demand computing
  • Containerized workload
  • Location agnostic
  • Speed of implementation

Each one of these attributes lends itself nicely to improving disaster recovery in terms of cost (both capital and labor), speed to recovery and the ability to automate.

Definitions:

RTO = Return to Operation, or the total time from when a disaster is declared to when the system(s) are operational again. Key to this concept is that it is not the entire time a system is down in a disaster, but the time from when a disaster is declared (which could be hours after the incident started) until the system is back up and running.

RPO = Recovery Point Objective, or the time difference between when the system went offline and the point in time represented by the restored data. Boiled down, this is the amount of data one is willing to lose. An RPO of 30 minutes says that when we get going again, the data will be consistent and intact up to a point no more than 30 minutes before the system went offline – in other words, a loss of at most 30 minutes of data.
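
As a worked example of both definitions, here is a minimal sketch that computes an achieved RTO and RPO from a handful of hypothetical timestamps and compares them with the targets used later in this post:

```python
from datetime import datetime, timedelta

# Hypothetical timeline for a single disaster event.
outage_start      = datetime(2016, 5, 2, 14, 0)   # system goes offline
disaster_declared = datetime(2016, 5, 2, 15, 30)  # decision to fail over
system_restored   = datetime(2016, 5, 2, 21, 0)   # system operational again
last_good_copy    = datetime(2016, 5, 2, 13, 20)  # newest replicated/backed-up data

# The RTO clock starts at declaration, not at the moment of the outage.
achieved_rto = system_restored - disaster_declared
# RPO is the data lost between the last good copy and the outage.
achieved_rpo = outage_start - last_good_copy

rto_target = timedelta(hours=8)
rpo_target = timedelta(minutes=60)

print(f"Achieved RTO: {achieved_rto} (target {rto_target}) -> met: {achieved_rto <= rto_target}")
print(f"Achieved RPO: {achieved_rpo} (target {rpo_target}) -> met: {achieved_rpo <= rpo_target}")
```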

Now, on to our scenario for this blog post.

Goals:

  • Meet an RTO of 8 hours and an RPO of 60 minutes
  • Meet the above RTO and RPO with 100% of the compute and performance of production
  • All applications in this scenario reside on a VMware virtual environment in production
  • Achieve all of the above without investing in a second data center location, infrastructure and management

Items we will need for this scenario:

  • Containerization of workload – As stated in the goals, we will be using our VMware environment as the base for the systems and applications that are part of this scenario.
  • Backup software – For this scenario we will use the backup software already in place in our environment. This can be a mix of backup software, as this portion of the setup is for protection from corruption and a way to still meet the RTO if we have a corruption event (noting that if we do have a corruption event, our RPO of 60 minutes is likely not going to happen; more on this later). For this example we will say we are using both IBM TSM and Symantec NetBackup.
  • Replication Agent – For this particular scenario we could also use several methods, from VMware replication to other dedicated software. To meet our RTO and RPO we will choose software that can be both a replication agent and a failover automation mechanism: Zerto.
  • Cloud Storage Gateway – For this we will choose a NetApp system that will be used as a backup storage target for both IBM TSM and Symantec NetBackup, and will then mirror our backup data to our cloud provider.
  • Cloud Provider – Since the goal is 100% of production’s horsepower in terms of performance and we need to meet our RTO and RPO, we will need a cloud provider that allows a much more customizable setup than we would get from the common cloud names such as AWS or Azure. For this, since we are also assuming a heterogeneous environment in terms of both Linux and Windows inside our VMware, we will choose PEAK.

Since this is a much more detailed scenario than the previous blog post, I will go into some details and notes below. This is by no means a complete listing of considerations and requirements, as the nature of the applications and backup software will play a huge role in how this would get rolled out in a real-world environment. If you want to know more, you can contact Evolving Solutions to go over your particular scenario.

Notes and items about our scenario and choices:

  • The ability to use software to replicate data from our on-premises data center to our cloud data center depends on network considerations and also ease of use. Zerto was chosen precisely because it is easy to configure and maintain, and it has automation features that reduce exposure to human-interaction risks. Further, it is optimized for asynchronous replication of VMware systems. This will help us meet both the RTO and RPO goals.
  • We chose PEAK because we are using VMware. We need a VMware configuration on the cloud side that can be treated just like our on-premises VMware. PEAK can provide that in the form of a dedicated private vCenter that we can administer the same way we do our local VMware; this is not something you can get from all cloud providers. Second, PEAK is native NetApp, which helps us get our backup sets to the cloud provider efficiently using a common set of commands. Third, PEAK offers a 100% uptime SLA at the hypervisor level, so we can be sure our VMware in the cloud will be available at the time of a disaster.
  • Backup software and backup data, while not the first source of data for us, are still required. This is because, while we are replicating data from our primary to our cloud environment, we run the risk of corruption. If our data gets corrupted on the primary side, it will be corrupted on the secondary side within seconds to minutes as the replication happens. The backup copies are an insurance policy against this possibility. Since we are using backup software, we will need that software available on both sides and a method of sharing backup sets between the sites. This is our NetApp storage, which both IBM and Symantec will write to as disk-based backup locations. Then, if needed, we can bring up the backup applications on the cloud side, read from the cloud storage and begin restores at disk speed.
  • NetApp as the backup target. This was chosen because it fits the PEAK cloud model nicely and allows for ease of administration: the backup software writes to a CIFS or NFS share, and those shares are both snapshotted and replicated to a NetApp system sitting at our PEAK cloud location. This makes administration of the backup storage identical on both sides of the equation.

So, with this setup we essentially have built a secondary cloud site that looks as similar as possible to our primary site. We have an equal amount of compute power on both sides. We have backup software and application licensing on both sides to allow us to run at either site. We have replication in two flavors running to protect our data from corruption and/or loss. We will likely have more storage investment in terms of capacity at the cloud location, depending on how long we wish to retain offsite backups, as the cloud side will also be used as offsite backup storage. So you can see this scenario is very much a redundant configuration and will have costs in line with that. The cost savings we do have will be related to facilities, in terms of not having to maintain cooling, power and building facilities for our cloud location.
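
As one way to keep the 60-minute RPO goal honest between disaster recovery tests, a simple freshness check can run on a schedule. The sketch below is deliberately generic: it assumes only that you can obtain the timestamp of the newest replicated recovery point, and it is not tied to Zerto's or NetApp's own tooling, which typically provide their own reporting.

```python
from datetime import datetime, timezone

RPO_TARGET_MINUTES = 60  # the goal defined for this scenario

def check_rpo(latest_recovery_point, now=None):
    """Return True if the newest recovery point is within the RPO target."""
    now = now or datetime.now(timezone.utc)
    lag_minutes = (now - latest_recovery_point).total_seconds() / 60
    if lag_minutes > RPO_TARGET_MINUTES:
        # In a real deployment this would page someone or open a ticket.
        print(f"ALERT: replication lag is {lag_minutes:.0f} minutes, "
              f"exceeding the {RPO_TARGET_MINUTES}-minute RPO target")
        return False
    print(f"OK: replication lag is {lag_minutes:.0f} minutes")
    return True

# Example with a hypothetical recovery-point timestamp.
check_rpo(datetime(2016, 5, 2, 20, 15, tzinfo=timezone.utc),
          now=datetime(2016, 5, 2, 21, 0, tzinfo=timezone.utc))
```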

Again, this scenario is much more complex than the one described in my first blog post. I also realize the above details may be too much to absorb while reading a blog post, so if you find yourself with questions, do not hesitate to contact Evolving Solutions to go over how your situation would fit into the above model and what real-world obstacles and limitations might exist.

________________

James Keating III is a Business Technology Architect for Evolving Solutions. James is a technology leader in the IT community and brings a combination of excellent technical skills and business acumen which allows him to guide customers in developing IT solutions to meet business requirements.