Evolving Solutions Named to 30 Most Innovative Companies List

CIO Bulletin includes Minneapolis-based technology solutions provider in its 2017 honor roll.

November 14, 2017 – Evolving Solutions has been recognized by CIO Bulletin in the publication’s 30 Most Innovative Companies 2017 list.

Each year, CIO Bulletin compiles a top 30 list of US companies that are forward thinkers and leaders in innovation.  This year, Evolving Solutions was named to the list for its commitment to providing leading technologies and expert talent.

“Evolving Solutions was founded with the purpose of creating enduring relationships with clients,” said Jaime Gmach, President and CEO. “Key to this purpose is helping clients simplify technology while staying true to our core values: do the right thing, be a team player and be humbly confident.”

It is this mantra that has led Evolving Solutions through 22 successful years of business. Serving as a different type of technology partner to its clients, Evolving Solutions focuses on creating best of breed industry solutions designed to help clients exceed their business objectives.

“At the heart of our success is our service-centered mentality,” said Gmach. “Having local expertise in the markets we serve is vitally important. We have the right talent in the right place at the right time, and it is a key differentiator for us.”

With technology at the center of every business, Evolving Solutions continues to grow and embrace the evolution of its industry.

Read CIO Bulletin’s full article on “Evolving Solutions: Leading Technologies, from Expert Talent”.

Evolving Solutions Earns Certification for IBM z Systems Mainframes

IBM z Systems mainframes create enterprise infrastructure for cognitive businesses.

Minneapolis, MN, October 25, 2016 – Evolving Solutions is now an IBM z Systems Mainframe business partner.  Evolving Solutions enters a select group of IBM business partners that offer the powerful z Systems infrastructure. This new certification will allow Evolving Solutions to better serve its clients in today’s mobile, cloud-based world.

IBM z Systems offers the most robust, secure and scalable infrastructure for the enterprise. It enables enterprises to create outstanding customer experiences through mobile and analytics, deliver agility and efficiency through cloud, and ensure always-on service and data protection, allowing businesses to take cognitive computing further. The IBM z Systems mainframe has been ranked the most reliable server for the past eight years.

According to IBM, organizations utilize z Systems to build greater business value, reduce cost and create competitive advantage by providing fast, reliable, and continuous service. In fact, the world’s largest retailer uses IBM z Systems to serve 250 million people a week and 92 of the largest 100 banks run on z Systems.  IBM z is the world’s leading cloud platform for enterprise transactions, systems of record and application workloads and provides the necessary power to crunch data to drive real-time insights.

“The z Systems partnership allows us to engage clients with a powerful IBM infrastructure product and service,” said Jaime Gmach, President, Evolving Solutions. “This is especially significant because many businesses utilize a mainframe to run their most mission critical business applications.”

Leading client z Systems solution development for Evolving Solutions will be Scott Rudin, z Systems Sales Executive. Scott has over 25 years of experience with z Systems and will guide clients through the next stage in their cognitive business strategy.  Contact Evolving Solutions today to learn more.

A Case For Storage Virtualization

By Michael Keeler, Storage Architect, Evolving Solutions

The concept of storage virtualization has been around for a number of years but, until recently, has not gained widespread acceptance. As with most revolutionary ideas, it takes time for companies to become comfortable with placing another device into the heart of the data path between their storage and their servers. The promises of virtualization are many, but so are the risks: implementing an incorrect or inadequate solution would negatively impact the entire data infrastructure. So the acceptance of virtualization has followed the same path as Storage Area Networks (SANs), which were discussed for years before the industry finally accepted them. Now it is hard to imagine a medium or large data infrastructure without a SAN. The same is becoming true of storage virtualization. It is logically the next step in storage management.

The benefits of storage virtualization are many: a single point of management for all storage, increased storage utilization, standardized copy services, easier data migration between storage devices, and a common set of multipath drivers and tools. It is also a key enabler for Information Lifecycle Management (ILM) strategies by assisting with data movement between storage tiers.

Storage virtualization works by adding a management layer between the servers and the storage. From the servers' perspective, the virtualization engines are their storage device, while from the storage's perspective, the engines are its server. Once this layer is in place, it becomes the primary management interface for communicating with both servers and storage. It is easy to group storage devices into tiers or by common usage, even storage devices from different vendors. Typically, the entire capacity of the array is mapped to the engines in large increments, which are placed into a storage pool. Virtual disks are then allocated out of these pools for assignment to a host server.
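
To make that mapping concrete, here is a minimal sketch of how such a layer might pool capacity from back-end arrays and carve virtual disks out of the pool. The extent size, class names and array names are illustrative assumptions, not any vendor's implementation.

```python
# Minimal illustrative sketch (not any vendor's API): back-end arrays are carved
# into fixed-size extents, pooled by tier, and virtual disks are allocated from
# a pool for presentation to a host.

EXTENT_GB = 16  # illustrative extent size


class StoragePool:
    def __init__(self, name):
        self.name = name
        self.free_extents = []  # extents contributed by back-end arrays

    def add_array(self, array_name, capacity_gb):
        """Map an array's capacity into the pool in EXTENT_GB increments."""
        count = capacity_gb // EXTENT_GB
        self.free_extents += [(array_name, i) for i in range(count)]

    def allocate_vdisk(self, size_gb):
        """Allocate a virtual disk as a list of extents drawn from the pool."""
        needed = -(-size_gb // EXTENT_GB)  # round up to whole extents
        if needed > len(self.free_extents):
            raise RuntimeError(f"pool {self.name} has insufficient free capacity")
        extents = self.free_extents[:needed]
        self.free_extents = self.free_extents[needed:]
        return extents


# Arrays from different vendors land in the same pool and are managed alike.
tier1 = StoragePool("tier1")
tier1.add_array("vendorA_array1", 4096)
tier1.add_array("vendorB_array2", 8192)
vdisk_for_host01 = tier1.allocate_vdisk(500)  # presented to host01 by the engine
print(f"host01 vdisk uses {len(vdisk_for_host01)} extents of {EXTENT_GB} GB")
```

In a real engine the extent map would also record which host the virtual disk is presented to, which is what lets a back-end array be swapped out without the host noticing.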

The presence of this layer shields servers and applications from changes to the storage environment. A storage device can easily be replaced with another unit and data copied in the background from one unit to the other without application downtime. The ability to move data at will means that lightly used or outdated data can be easily moved to less expensive storage devices.

Since the virtualization engines appear as the storage device to the servers, only the multipath driver associated with the engine manufacturer needs to be used. This reduces the management complexity and interoperability issues associated with running numerous multipath drivers.

Copy services are also managed at the virtualization layer, which means that Point-in-Time (PIT) copy and disaster recovery replication need only be purchased once, at the virtualization layer, and share a common management interface. It becomes easier to replicate data or to script PIT copies from one tier to another for backup and data recovery purposes.
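
As a hypothetical illustration of that common interface (no vendor API is implied), the same two calls could drive both kinds of copy for any back-end array, and a simple script can run them on a schedule.

```python
# Hypothetical copy-services interface; no vendor API is implied. Because copy
# services live at the virtualization layer, the same calls work regardless of
# which back-end array actually holds the data.

class CopyServices:
    def point_in_time_copy(self, source_vdisk, target_vdisk):
        """Create a PIT copy of a virtual disk (e.g. onto a cheaper tier)."""
        print(f"PIT copy {source_vdisk} -> {target_vdisk}")

    def replicate(self, source_vdisk, remote_vdisk):
        """Start disaster recovery replication to a remote virtual disk."""
        print(f"replicate {source_vdisk} -> {remote_vdisk}")


def nightly_backup(svc, vdisk_pairs):
    """A simple PIT 'script': copy each production vdisk to its backup tier."""
    for prod, backup in vdisk_pairs:
        svc.point_in_time_copy(prod, backup)


svc = CopyServices()
nightly_backup(svc, [("erp_prod", "erp_backup_tier2"), ("mail_prod", "mail_backup_tier2")])
svc.replicate("erp_prod", "erp_dr_site")
```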

There are two basic approaches to virtualization: a standalone appliance, or a blade within a SAN director. Both approaches have merit. The appliance has the advantage of scalability: to grow the virtualization environment, you simply add more appliances, which are managed as a single entity. Blades are potentially more cost effective since they already reside within the fabric and there is no need to connect external cabling to attach them to it. Blades are also protected by highly available, director-class hardware.

Like Storage Area Networks, storage virtualization has taken time to gain widespread acceptance, but its time has arrived. The cost benefits of having a single administrator controlling hundreds of terabytes or petabytes of storage capacity are simply too substantial to ignore.

Lower Total Cost Of Ownership With NAS Consolidation

By Chris Taylor, Director of Professional Services & Solution Sales, Evolving Solutions

One of the most significant factors in lowering IT total cost of ownership today is to simplify and enhance the utilization of both servers and storage in complex IT infrastructures.

Many organizations face common business challenges that complicate their storage management and keep costs and redundancy high. By implementing a NAS Consolidation solution, these organizations can solve common business problems and, in turn, significantly reduce their total cost of ownership.

Business Challenges
Many businesses are unaware of the current state of their IT infrastructure. Before an organization can go about simplifying its network, it must first analyze the current situation and gain an overall picture of what the network looks like today.

Many organizations distribute multiple low-cost “one application” file servers throughout the enterprise. As the organization grows, more servers are added to support the growth. However, server utilization is low, averaging 15%, because many servers are over-provisioned in an attempt to reduce risk and provide scalability. These servers are also unconsolidated, which makes management difficult, backup and recovery complex and costly, and high availability a huge challenge to maintain.

Over 80% of Windows infrastructures and over 95% of Windows servers store their data on internal or captive external disk subsystems. The use of network attached storage (NAS) and storage area networks (SAN) is the exception. With predominantly captive/internal storage, Windows storage utilization averages 25-35%. Large Windows infrastructures (90TB+ usable storage) may have utilization as low as 14%.

NAS Consolidation
NAS Consolidation allows additional storage capacity to be added to a network that already utilizes servers, without shutting them down for maintenance and upgrades. The server processes the data and the NAS device delivers the data to the end user.

NAS Consolidation reduces the number of servers being used in order to improve performance, availability, scalability and management service levels. For example, an enterprise may consolidate seven file servers onto one NAS device. Consolidating servers in this way can increase utilization to as much as 60-80%.
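
As a back-of-the-envelope illustration of that utilization math (all figures below are assumed for the example, not measurements):

```python
# Illustrative arithmetic only: seven under-used file servers consolidated onto
# a single NAS head. Capacities and utilization figures are made up to show how
# the utilization math works.

servers = [2000] * 7          # seven file servers, 2 TB raw each (GB)
utilization_before = 0.15     # ~15% average utilization per server

data_stored_gb = sum(servers) * utilization_before     # 2,100 GB actually in use
nas_capacity_gb = 3000                                  # one 3 TB NAS device
utilization_after = data_stored_gb / nas_capacity_gb    # 0.70 -> 70%

print(f"data actually stored: {data_stored_gb:.0f} GB")
print(f"utilization after consolidation: {utilization_after:.0%}")
```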

NAS Consolidation Targets Windows & Unix Platforms
Within a Windows environment, radically reducing the Windows file server count increases utilization and productivity while cutting costs, management overhead and day-to-day complexity. It also allows faster access to data because the data is centralized, along with better data retention, Sarbanes-Oxley compliance, cleaner organizational structure and stronger data protection.

For example, a Network Appliance solution is compatible with Windows 2000, Windows 2003 and legacy Windows NT, and simplifies migration to Windows from UNIX® platforms. It looks and acts just like any other Windows file server in a Windows environment, relies on the Windows domain controller for authentication of Windows clients, and integrates seamlessly into a Windows Active Directory structure. Because this is supported natively, security administration and server overhead are drastically reduced. As more organizations embrace and adopt .NET, this type of solution will allow them to scale and migrate to .NET server technologies.

Within a UNIX environment, using NAS Consolidation to simplify the storage management infrastructure improves data protection, reduces performance bottlenecks and allows data to be shared across platforms.

For example, a Network Appliance solution can deliver high availability and reliability to support critical business operations, rapid restores and efficient backups, and allows heterogeneous sharing of data between UNIX and Windows environments. Consolidating both UNIX and Windows onto a single storage solution also allows interoperability testing between platforms, migration of data from one platform to another, and easier management of data across both platforms.

In both UNIX and Windows environments, consolidating large amounts of structured and unstructured data on a NAS solution lets IT organizations take advantage of the storage solution’s backup utilities, such as Network Appliance’s Snapshot and SnapManager, to back up and, most importantly, recover data more effectively and efficiently, increasing availability and service levels for customers and end users.

Next Steps
Once a NAS Consolidation solution is in place, the organization can modify its IT infrastructure to further reduce total cost of ownership by implementing one or more of the following solutions.

Tiered Storage
A tool-assisted approach can be used to identify and classify the organization’s different types of content and data and how often they are accessed. This information then helps the organization prioritize its data and divide it into Tier 1, Tier 2 and Tier 3. Tier 1 data would be placed on high-availability storage, which is obviously more expensive. Tier 3 data would have low availability and, hence, would be the cheapest to store.

By using a tiered approach, the organization can prioritize data and ensure that it is only paying high storage costs for priority data that should be easily accessible.

Placing tiers on a NAS Consolidation Solution optimizes storage capacity and reduces redundancies, which in turn reduces overall expenditure.
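
As a simple illustration of such classification, the sketch below assigns files to tiers by how recently they were accessed. The thresholds and the share path are illustrative assumptions that a tool-assisted assessment would refine.

```python
# Minimal tier-classification sketch. Thresholds and the share path are
# illustrative assumptions, not a recommendation.

import os
import time


def classify_by_last_access(path, now=None):
    """Assign a file to Tier 1/2/3 based on how recently it was accessed."""
    now = now or time.time()
    idle_days = (now - os.path.getatime(path)) / 86400
    if idle_days <= 30:
        return "Tier 1"   # hot data on high-availability storage
    if idle_days <= 180:
        return "Tier 2"   # warm data on mid-range storage
    return "Tier 3"       # cold data on low-cost storage


# Example: walk a share (hypothetical path) and count files per tier.
counts = {"Tier 1": 0, "Tier 2": 0, "Tier 3": 0}
for root, _dirs, files in os.walk("/shares/finance"):
    for name in files:
        counts[classify_by_last_access(os.path.join(root, name))] += 1
print(counts)
```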

Server Consolidation
Server Consolidation allows organizations to efficiently manage and optimize server and storage resources across the enterprise.

Server Consolidation can increase server utilization to 60-80%. Increasing server utilization lowers costs and makes provisioning for server and storage capacity faster, making it easier to respond to new business requirements while improving time to market. Availability also improves, since consolidated platforms are designed for near-zero downtime.

Consolidating server and storage platforms realizes business benefits including streamlined performance, centralized systems management, reduced total cost of ownership, and improved security and resource utilization.

Server Consolidation is more than simply replacing a collection of smaller servers or storage devices with fewer, larger ones. It enables organizations of all sizes to simplify and optimize their systems infrastructure through the implementation of clearly defined processes.

As companies migrate to storage-centric environments and require the flexibility to grow their enterprise, storage consolidation cannot be overlooked. Companies demand the ability to fully utilize and grow their storage assets to meet future requirements and fully leverage their technology investments. This is where Network Attached Storage (NAS) is important.

Content Management
Using a NAS Solution allows organizations to go further down the road with content management and free up capital and resources to go after larger projects.

By consolidating structured and unstructured data onto a NAS device, the data becomes centrally located. Once the data is in a centralized storage container, data retention policies can be determined by using a content management solution to look at how content is created and exploited and, most importantly, what to do with that data after it has been exploited. Because the data is stored centrally, it can be moved to other devices such as tape, less expensive storage, or even a hierarchical storage management (HSM) solution.

Consolidation of Backup, Recovery & Archiving
By consolidating servers and the data residing on those servers, such as home directories, structured data such as databases and unstructured data such as Word documents, onto a NAS solution, the data is placed in a central repository. The data can then be replicated to another NAS device for backup and recovery, improving Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With a tool-assisted approach, customers can also analyze how often the data is accessed and how critical it is, determine the right classes of data and associate service levels with each class. At that point, tiered storage combined with tiered backup solutions such as the Network Appliance R200 addresses this strategy, with robust software capabilities that improve RTO/RPO and recovery in a disaster. And because the data is now in one location, it can be pushed to tape or handed off to an HSM solution more effectively and efficiently, reducing the cost of storing data on a tier of storage that may be overkill.
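
To make the RPO side of that concrete, here is a small arithmetic sketch; the data classes, replication intervals and RPO targets are assumptions chosen for the example, not guidance.

```python
# Illustrative arithmetic only: checking whether a replication schedule meets
# each data class's Recovery Point Objective (RPO). Classes, intervals and
# targets below are assumptions for the sake of the example.

replication_interval_hours = {
    "databases":        1,    # replicated hourly
    "home_directories": 12,   # replicated twice a day
    "archive":          24,   # replicated nightly
}

rpo_target_hours = {
    "databases":        2,
    "home_directories": 24,
    "archive":          48,
}

for data_class, interval in replication_interval_hours.items():
    # Worst-case data loss equals the time since the last replication cycle.
    worst_case_loss = interval
    status = "meets" if worst_case_loss <= rpo_target_hours[data_class] else "misses"
    print(f"{data_class}: worst-case loss {worst_case_loss}h {status} "
          f"RPO of {rpo_target_hours[data_class]}h")
```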

File Virtualization
File Virtualization, the ability to virtualize file systems across a heterogeneous storage environment, is an up-and-coming technology. While NAS Consolidation can solve business challenges by reducing costs and complexity, File Virtualization enables organizations to share files across heterogeneous storage and, hence, get more out of it. This is a solution to consider down the road when implementing tiered storage.

The Benefits of NAS Consolidation
There is a lot that organizations can do to optimize their storage and server environments. If an organization is simply looking to reduce complexity and costs, a NAS Consolidation solution will satisfy its business requirements. Once NAS Consolidation has been implemented, the IT infrastructure simplified and overhead costs significantly lowered, it may be time to consider the further storage and server options outlined above.

In order to achieve lower total cost of ownership, expected ROI and long-term gains in reliability, scalability and cost savings, companies must remember that the key is understanding their IT infrastructure. NAS Consolidation will be most effective when the organization has obtained a baseline of current server performance and can pinpoint the types of applications being used and the utilization of resources (including bandwidth).

Indeed, for the organization looking for an uncomplicated, low-cost solution, NAS Consolidation is a great option.

Server Virtualization

By Jaime Gmach, Evolving Solutions

The Healthcare Server Consolidation Dream
Because the storage and storage management needs of many Healthcare Information Systems are more than doubling each year, IT professionals dream of robust data on-demand environments in which their storage area networks can process data from chemistry analyzers to radiology picture archiving information systems while, at the same time, maintaining patient data records and email systems.

The Reality of Storage Virtualization
Luckily, the dream world of server “morphing” – or server virtualization – is becoming a reality for healthcare IS professionals. Although most healthcare IS environments are not yet taking advantage of virtual server expansion and contraction capabilities today, it is possible to “borrow” CPU and/or memory capacity from servers that are not currently being “taxed” and return that same capacity to its original “owners,” in its original state. Imagine healthcare information server systems being spoofed into thinking they have unlimited CPU and memory capacity and never again surpassing processing or workload thresholds!

Engineers at Evolving Solutions, Inc., a data on-demand and storage consulting company that works with healthcare IT professionals, predict that by early 2005, servers that auto-monitor and auto-adjust for data on-demand requirements will appear far more frequently in healthcare IS environments. More than simply a foray into virtualization, they feel we will soon witness a complete leap into “autonomic computing”.

Server Virtualization – Why Not Now?
Many healthcare IS professionals may be wondering: if server virtualization is available today, why aren’t more healthcare IT environments taking advantage of this type of money-saving, resource-sharing solution? Because it is as new a concept now as hybrid vehicles were ten years ago. Ten years from now, hybrid vehicles will no doubt be commonplace. Many IS professionals, however, do not want to wait ten to twenty years to virtualize their healthcare IT environments.

The following three steps are designed to get your healthcare information system driving in the direction of autonomic computing.

Server Virtualization – The First Steps

Step 1 – Assess & Validate
Conduct an environmental assessment to define server processing needs. Poll all servers to identify current CPU, memory, adaptor and file-system capacity, along with total used and unallocated disk space (be sure to account for all archive file space, as it often takes up 30%-40% of all data storage – much of it in duplicate and triplicate form). Identify CPU, memory and adaptor usage peaks, as well as read, write and wait cycle peaks.
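
For a sense of what that polling might look like on a single server, here is a minimal sketch assuming the Python psutil library is available; a real assessment tool would run it on every server, on a schedule, and record peaks rather than a single sample.

```python
# Minimal per-server polling sketch, assuming the psutil library is installed.
# A real assessment would aggregate this across all servers and track peaks
# over time, not take one sample.

import psutil


def sample_server():
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),  # 1-second CPU sample
        "memory_percent": mem.percent,
        "disk_used_gb": disk.used / 1e9,
        "disk_free_gb": disk.free / 1e9,
    }


print(sample_server())
```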

Step 2 – Rationalize and Critique
Critique your current server environment. Identify and consolidate processing-compatible applications onto single servers, or virtualize your existing multi-server environment to share processing attributes from a common pool. The second choice reduces the need to purchase a new server for every new application. As a result, you would increase utilization of your existing servers from a typical 10-20% to a more efficient 40-50%. More importantly, you will drastically decrease your “unexpected” outages while turning your one-to-one, limited-growth environment into a completely flexible and scalable solution without throwing out your existing investment.

Identify mission-critical servers. Leave those servers in a one-to-one relationship for your heavy-hitting applications. Then consolidate your non-heavy-hitting applications and virtualize the remaining servers to form a common pool of hardware resources. Finally, configure that CPU, memory and adaptor resource pool to be shared with the heavy-hitting servers and applications whenever it is needed.
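
As an illustration of that triage, the sketch below splits a small server inventory into dedicated servers and pool candidates. The 60% peak-CPU threshold and the sample inventory are assumptions, not a recommendation.

```python
# Illustrative triage only: split an inventory into mission-critical servers
# (kept one-to-one) and consolidation/virtualization candidates. Threshold and
# data are assumptions for the example.

inventory = [
    {"name": "his-db01",   "peak_cpu": 85, "mission_critical": True},
    {"name": "pacs-app01", "peak_cpu": 70, "mission_critical": True},
    {"name": "file01",     "peak_cpu": 18, "mission_critical": False},
    {"name": "print01",    "peak_cpu": 9,  "mission_critical": False},
    {"name": "intranet01", "peak_cpu": 22, "mission_critical": False},
]

keep_dedicated = [s["name"] for s in inventory
                  if s["mission_critical"] or s["peak_cpu"] > 60]
pool_candidates = [s["name"] for s in inventory if s["name"] not in keep_dedicated]

print("dedicated:", keep_dedicated)
print("virtualize into shared pool:", pool_candidates)
```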

Step 3 – Stop Investing
Stop thinking your only storage solution is to buy another server. Chances are you are not taxing your existing servers enough. Start “carpooling” your data and available resources!

Tap into your existing hardware pool and reduce the number of servers you feel you have to buy simply to increase on-demand processing capacity. Odds are high that you don’t need to add a server to increase your CPU and/or memory horsepower, or even add to your existing server pool. Chances are you are positioned to cascade many of your existing servers and reduce your related server budget for years to come… starting today!

Autonomic Computing
Soon, healthcare production-level servers will be configured to perform internal “automated health checks” from I/O processing needs to page and buffer credit settings. They will identify their own CPU, memory, and adaptor requirements. Virtualized healthcare servers will reach out to idle servers and borrow capacity in order to complete information processing tasks. Then, without human prompting, these virtualized servers will return the capacity when it is no longer needed.
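
A toy sketch of that borrow-and-return cycle is below; it is purely illustrative of the idea, not of how any particular virtualization platform implements it.

```python
# Toy simulation of the borrow-and-return idea: a busy server draws spare CPU
# capacity from a shared pool contributed by idle servers and gives it back
# when the peak passes. Real platforms do this with hypervisor-level controls.

class SharedPool:
    def __init__(self):
        self.spare_cpus = 0

    def contribute(self, cpus):   # idle server lends capacity
        self.spare_cpus += cpus

    def borrow(self, cpus):       # busy server requests capacity
        granted = min(cpus, self.spare_cpus)
        self.spare_cpus -= granted
        return granted

    def give_back(self, cpus):    # capacity returned after the peak
        self.spare_cpus += cpus


pool = SharedPool()
pool.contribute(4)                # idle reporting server lends 4 CPUs
borrowed = pool.borrow(2)         # imaging server borrows 2 CPUs for a peak
print(f"borrowed {borrowed} CPUs, {pool.spare_cpus} left in pool")
pool.give_back(borrowed)          # returned without human prompting once load drops
```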

The ultimate goal of server virtualization is autonomic computing: capacity and healthcare data on demand, regardless of size, processing demands, resource needs, time of day or night, or human availability.

About the Writer
Jaime J. Gmach – President, Evolving Solutions
Jaime Gmach co-founded Evolving Solutions in January of 1996 after spending the previous ten years sharpening his entrepreneurial skills in various elements of the technology industry. He previously served in roles ranging from Customer Engineer and Director of Technical Services to Sales Manager and finally to President of Evolving Solutions. Jaime’s strong technical perspective comes from years of face-to-face interaction with clients to design and implement their desired business solutions.