Creating a Common Experience Across Clouds With Nutanix Clusters on AWS

Posted on

Hybrid cloud deployments offer many advantages, but they also require spending time and money managing multiple locations. Often, organizations have a different platform for each location, which typically means completely different processes for each platform. In addition to purchasing extra tools, employees must be trained on various systems, and juggling multiple tools usually lowers productivity. Businesses that use multiple cloud providers face even more challenges because of the differences between cloud deployments. For example, API calls are handled differently by Amazon Web Services (AWS), Google Cloud and Microsoft Azure.

To overcome these issues, many organizations are now using Nutanix Clusters on AWS (NCA) to create a common way of performing tasks across different clouds and environments. Nutanix’s hyperconverged platform eliminates the need for disparate management between on-premise and cloud environments, as well as between cloud service providers. By layering the Nutanix software on top of the cloud ecosystem, Nutanix handles the heavy lifting, and users do not have to understand the subtle differences between clouds.

To further explain, I asked my colleague Derek Raebel, a senior systems engineer at Nutanix, to elaborate. “The point is to have a common management platform across the private cloud vendors you work with and the public cloud vendors you work with. Today that’s AWS; in the next six months Azure will be included in that, with more to come. The Nutanix experience stays the same regardless of what hardware you’re running on or what cloud you’re running in,” he says.

With its self-healing architecture for clusters running on AWS, the system automatically resolves most issues without user intervention. To provide data resiliency, NCA deploys cluster hosts across as many partitions as AWS exposes for its bare-metal instances. Each AWS partition corresponds to a rack in Nutanix, ensuring that replicas are placed across AWS failure domains.
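To make the idea concrete, here is a minimal Python sketch of partition-aware replica placement. It is a toy illustration of the general rack-awareness pattern, not Nutanix’s actual placement logic; the host and partition names are invented.

```python
def place_replicas(hosts_by_partition, replication_factor):
    """Pick one host from each of `replication_factor` distinct partitions.

    Toy illustration of rack/partition-aware placement: because no two
    replicas share a partition (rack), losing an entire AWS partition
    still leaves replication_factor - 1 copies intact.
    """
    partitions = sorted(hosts_by_partition)
    if len(partitions) < replication_factor:
        raise ValueError("need at least as many partitions as replicas")
    return [hosts_by_partition[p][0] for p in partitions[:replication_factor]]

# Hypothetical cluster: five hosts spread across three AWS partitions.
hosts = {
    "partition-1": ["host-a", "host-b"],
    "partition-2": ["host-c"],
    "partition-3": ["host-d", "host-e"],
}
print(place_replicas(hosts, 2))  # ['host-a', 'host-c']
```

The key property is the one the article describes: every replica lands in a different AWS failure domain, so a rack-level failure never takes out all copies at once.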

Determining the Best Cloud Configuration

By adding in the orchestration automation software Nutanix Calm (Cloud Application Lifecycle Manager), organizations create even more unified processes for their multiple clouds. “Calm acts as an API translator with a construct called Blueprint to draw out what you want your end state to look like, whether that be a LAMP stack or a single server deployment,” Raebel says.

Calm then translates the API calls needed to implement the Blueprint in multiple environments, such as Azure, Google Cloud or VMware. Regardless of where your cloud runs, Nutanix provides the same experience.
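The translator pattern Raebel describes can be sketched in a few lines of Python. The payload field names below are hypothetical stand-ins, not the real AWS or Azure APIs and not Calm’s actual Blueprint schema; the sketch only shows the idea of one abstract definition rendered per provider.

```python
def render(blueprint, provider):
    """Render one abstract blueprint into a provider-shaped payload.

    Field names are hypothetical; real Calm Blueprints and cloud APIs
    are far richer. The point is one definition, many targets.
    """
    if provider == "aws":
        return {"Name": blueprint["name"],
                "CpuCount": blueprint["vcpus"],
                "MemoryMiB": blueprint["memory_gb"] * 1024}
    if provider == "azure":
        return {"vmName": blueprint["name"],
                "numberOfCores": blueprint["vcpus"],
                "memoryInGB": blueprint["memory_gb"]}
    raise ValueError(f"unknown provider: {provider}")

blueprint = {"name": "web-tier", "vcpus": 2, "memory_gb": 4}
print(render(blueprint, "aws"))
print(render(blueprint, "azure"))
```

The user authors the blueprint once; the translation layer absorbs each provider’s conventions, which is what keeps the experience consistent across clouds.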

By merging on-premise design practices with cloud strategies, you can apply the same fault-tolerance principles to AWS that you use for your on-premise infrastructure. You have likely already developed fault domains for your on-premise infrastructure, such as data center colocation. As you develop a service in AWS, you can use availability zones (i.e., discrete data centers) to make connectivity highly available across different failure domains within an AWS region. “It’s really the merging of on-prem design practices you can mirror to AWS and its fault-tolerant principles,” Raebel says.

You can also create a cluster in a separate availability zone and mirror between clusters. If the availability zones are in the same region, the low latency allows you to replicate synchronously across the clusters in real time.

Providing More Cost-Effective Disaster Recovery

Business continuity is a core element of a comprehensive disaster recovery (DR) strategy. Because NCA allows organizations to pay only for the resources they use, many Nutanix clients appreciate the total cost of ownership of deploying Nutanix Clusters on AWS for DR. Traditional DR strategies require a significant capital outlay for duplicate production gear that serves as an insurance policy. “Every company should have a DR strategy. If you don’t have a DR strategy, please put one together—and Nutanix Clusters on AWS can be a part of that,” Raebel says.

Nutanix hyperconverged infrastructure typically replicates your on-premise configuration, meaning that if you run 15 on-premise servers, your cloud infrastructure would likely include approximately 15 servers. Cloud configurations give you the flexibility to reduce that footprint, because you do not have to keep every server running while the DR site is cold. Instead, you can run a minimal three-node cluster and, when a disaster is declared, turn on the required capacity. “When the disaster is over, you can spin it down. The organization’s only cost incurred is from what was running in the cloud during the period you were running there,” he explains.

With an on-premise cold DR solution, the ordering, delivery and installation of new servers can take weeks in a best-case scenario, and often much longer when you attempt to bring the cold site fully online. With NCA, you simply spin up additional nodes in the Nutanix Clusters Portal and replicate your data to them, getting your business back up and running in a matter of hours. IDC estimates that an hour of downtime can cost a large organization upwards of $100,000, so a timely DR solution can result in huge savings.
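A back-of-the-envelope calculation shows why the pay-for-use model matters. All node counts and hourly rates below are hypothetical placeholders; only the roughly $100,000-per-hour downtime figure comes from the IDC estimate above.

```python
node_hour_cost = 5.00        # hypothetical cost per bare-metal node-hour
pilot_light_nodes = 3        # minimal always-on cluster
full_nodes = 15              # capacity needed during a declared disaster
hours_per_month = 730
disaster_hours = 48          # hypothetical incident duration

# Steady-state cost of keeping only the three-node pilot light running.
steady_state = pilot_light_nodes * node_hour_cost * hours_per_month

# Incremental cost of bursting to full capacity for the incident only.
burst = (full_nodes - pilot_light_nodes) * node_hour_cost * disaster_hours

print(f"monthly pilot light: ${steady_state:,.0f}")  # $10,950
print(f"48h burst to full:   ${burst:,.0f}")         # $2,880

# For scale: ten hours of avoided downtime at the IDC figure.
print(f"downtime avoided:    ${10 * 100_000:,}")     # $1,000,000
```

Even with made-up rates, the shape of the result holds: you pay full-fleet prices only for the hours of an actual disaster, while the avoided downtime dwarfs both numbers.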

Creating Additional Layers of Protection

By using Nutanix Flow, a software-defined virtual network security product, you can secure your VMs with microsegmentation both on-premise and in NCA.

Instead of using two different tools and strategies for on-premise and cloud, you can use a common platform and process regardless of your cloud provider and infrastructure.

If your servers are attacked by ransomware, Flow quickly quarantines the affected VM to prevent malicious code from spreading across your network to other trusted VMs. You can then use forensic tools to investigate the VM and repair the damage before returning it to the network.
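Conceptually, quarantine works like a deny-all rule keyed on a VM category. The sketch below illustrates that microsegmentation idea in plain Python; it is not Flow’s actual policy engine or API, and all VM names and tags are invented.

```python
def allowed(src_vm, dst_vm, tags):
    """Return True if traffic between two VMs is permitted.

    Toy policy: any VM carrying the "quarantine" tag is cut off from
    all other VMs, overriding whatever rules would otherwise apply.
    """
    if "quarantine" in tags.get(src_vm, set()) or "quarantine" in tags.get(dst_vm, set()):
        return False
    return True  # in a real engine, ordinary allow/deny rules follow

tags = {
    "web-01": {"web"},
    "db-01": {"db"},
    "web-03": {"web", "quarantine"},  # flagged after a ransomware alert
}
print(allowed("web-01", "db-01", tags))  # True
print(allowed("web-03", "db-01", tags))  # False
```

Because the check is evaluated per connection rather than per network segment, the compromised VM stays reachable for forensic tooling while being invisible to its former peers.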


Nutanix Continuity and Flexibility

The ability to adapt quickly is invaluable to any organization, and an IT infrastructure that can flex accordingly is critical. NCA allows organizations to create the continuity and flexibility needed in today’s environment. And because you only pay for services used, organizations save money while providing a superior experience for users.

Evolving Solutions Author:
Jim Pross
Systems & Storage Consultant

HPE Global Partner Summit and Discover Conference

Posted on

I recently attended the HPE Global Partner Summit and Discover Conference in Las Vegas. I really appreciated the conference’s overlapping partner and client/education schedules. The opportunity to combine HPE executive meetings with client meetings during the same trip is of significant value. I do realize that, with so much going on there, a full week in Las Vegas can feel like a month to some.

I give credit to HPE CEO Antonio Neri and the HPE leadership team for the ongoing consistency in their messaging. Their commitment to the Super 6 (Gen 10 Transitions, Blades, Flash Storage, Everything as a Service, Software Defined Infrastructure and the Intelligent Edge) is beneficial in two ways. First, it is refreshing for clients on their hybrid cloud journey to receive a consistent and definitive message on how a manufacturer’s business direction supports that journey. Second, it is helpful for partners who are co-investing in the HPE strategy to be confident that they can invest in their business without worrying about a directional change that wastes the investment.

CEO Neri made a couple of strong statements at the conference that support the HPE commitment to their mission. The one that stood out most to me was, “I am committing today to you that in the next three years HPE will be a consumption driven company and everything we deliver you will be available as a service.” This puts an exclamation point on the GreenLake strategy and HPE’s belief that clients want the capability to consume capacity in a cloud-like manner, even if it is located within their data center.

On a related conference note, I would like to offer enthusiastic congratulations to our distributor, Tech Data, for being honored as HPE Global Distributor of the Year. Congratulations, Rich, Joe and the extended Tech Data team, on a great accomplishment!

While I do not find myself saying this too often when it comes to Las Vegas, it was a productive week! The time at the conference was well spent, with positive takeaways and actionable to-dos. Now the responsibility falls on the Evolving Solutions team to execute with our partner HPE on our joint strategic initiatives. I look forward to our continued partnership and to driving growth for both businesses.

Jaime Gmach, President and CEO

Jaime Gmach co-founded Evolving Solutions in 1996 and continues to lead the company today as its President and CEO. Together with the extended Evolving Solutions team, Jaime has built the company into a business focused on creating enduring, open and trusted client relationships as a leading technology solution provider to businesses throughout North America.

Jaime has spent the past 30 years serving in various leadership roles within the technology industry. Jaime’s career began as a Systems Engineer with a Minneapolis-based professional services firm where he traveled throughout the world focusing on the implementation and support of mid-range compute and storage solutions. Daily face-to-face interaction with clients early in his professional career served as the inspiration for Jaime’s entrepreneurial passion and for his continued desire to work closely with clients.

Like what you read?  Follow Jaime on LinkedIn.

Minnesota HashiCorp User Group

Posted on


Join us for a special conversation around cloud enablement in conjunction with Optum on Thursday, April 25 at 4:30pm.

Organizations have a variety of options when it comes to choosing the infrastructure that runs their applications: cloud, private infrastructure, and third-party services. Embracing cloud and multi-cloud requires organizations to rethink their approach to provisioning, securing, and connecting their infrastructure. Static infrastructure consists of a standardized fleet, provisioned for long periods of time and dedicated to specific users. Dynamic infrastructure, by contrast, is heterogeneous, frequently provisioned, short-lived, and provisioned on demand through automation. Learn how Consul enables one workflow to connect any application across any infrastructure.

We look forward to seeing you.

6 Ways to Enhance Your IT Infrastructure

Posted on

It’s no secret that AI and cloud technologies have a disruptive impact on IT. This affects everything from compute and storage requirements to developer tools. The hardware you choose today needs to meet your current requirements and be flexible enough to work for you down the road. So what are some ways you can enhance your IT infrastructure and bring your environment closer to meeting new technology expectations?

1. Get cloud enabled  

Have you been considering public or private cloud migration? As you transition to a cloud infrastructure, you need the ability to handle flexible consumption models, changing customer needs and cloud-based application development. Your developers need a platform that lets them use the operating systems and languages they are already familiar with and access the latest open source tools to maximize productivity. 

Look for cloud-enabled servers designed to meet these needs that offer you operating system and language flexibility.  It’s a good idea to spec something that gives you an open, secure hybrid cloud to facilitate collaboration and agile developments. 

Cloud enablement can also mean extending the value of your enterprise applications: connecting these systems to cloud services with secure container technologies and microservices lets you deliver new services rapidly.

2. Become AI ready 

Deep learning, machine learning and AI are now more accessible than ever, and an increasing number of enterprises are capitalizing on these technologies for added business value. 

Running a proof of concept is one thing, but when an organization is ready to deploy an AI solution into production, the right processor architecture and server hardware are essential.  

3. Gain insights into your data 

Your business is driven by data, and harnessing the power of that data is critical to your competitive advantage. Look for a broad portfolio of storage solutions that help you distill insights from your data.

You can even gain a new level of visibility across your storage environment to help manage complex storage infrastructure and make cost-effective decisions. Seek options that deploy quickly and save storage administration time while optimizing the storage environment.

4. Consider your storage options 

Storage isn’t just about big external hard drives anymore. Storage today is encapsulated in a wide array of technology options that optimize the analysis and processing of a range of data types from myriad sources. Multiple options need to be considered, and different workloads call for different solutions: 

Flash storage is engineered to meet modern high-performance application requirements for data with a high frequency of use. Next-generation IBM Flash storage includes innovations such as inline data reduction with no performance impact. 

Software-defined storage (SDS) is the foundation for digital transformation. It helps you to manage data growth and enables multi-cloud flexibility by providing an agile and operations-friendly IT infrastructure. 

Hybrid storage arrays enable you to control costs with an optimized mix of storage media. With nearly unlimited configuration options, you can tailor a system to your needs. 

Tape storage for infrequently accessed data helps improve your data economics. Tape is available in drives, libraries, virtual tape systems and archive software that makes it as easy to use as disk storage.

5. Protect your valuable data 

As businesses adapt to capitalize on digital transformation, trust will be the currency that drives this new economy. How can you earn that digital trust? Look to pervasive encryption for all data, at rest and in motion, without changing application code. The result is data security at a lower price point than competing solutions. 

6. Unleash the mainframe 

Transform your data into insight with no data movement: with some mainframe applications, processing isn’t slowed and data isn’t exposed to risk. Capitalizing on the power of your mainframe can mean big things for your organization, and the possibilities are almost limitless.

With so many options for enhancing your IT infrastructure, keeping these six key areas in mind as you assess and update your environment will make the transition a smooth one. Feel free to reach out to a trusted technology partner who can help you assess, prioritize and implement these enhancements.