Tools exist for migrating to a hyperconverged infrastructure, just as they did for traditional systems. "The procedures differ little from those of non-converged systems, but they are driven and run much faster, with fewer constraints," says Fabrice Ferrante, Head of Cloud Practice Architects at Capgemini France.
The advantages of the three types of hyper-converged architectures. Source: NetApp, 2018
Initially, as before, it is necessary to analyze, map, and build a 'blueprint': an architecture and a migration path. "The build phase is significantly accelerated, as is the deployment phase. Some companies take the opportunity to change or consolidate their hypervisors. And the test phase almost disappears."
Dedicated Nodes and Node Clusters
One of the priorities in building a hyper-converged infrastructure is to distribute the workloads properly across nodes, cluster by cluster, in order to keep administration of the whole simple and consistent. The number of nodes that can be aggregated differs from one provider to another. And even though, in principle, it is possible to distribute VMs across several nodes without particular constraints (thanks to Nutanix's own NDFS file system, among others), it is advisable to proceed logically by dedicating specific nodes to specific applications.
VDI or transactional?
This is the case for supporting VDI (virtual desktop infrastructure) workstations or an IP telephony platform: these will be assigned to dedicated nodes, distinct from those supporting transactional applications and their databases.
"Between VDI and transactional applications, implementing hyper-convergence in a company's IS does not target the same types of nodes," explains Fabrice Ferrante. "Typically, we assign nodes, and node clusters, per application or application platform. For VDI, we usually dedicate nodes, or even an entire cluster."
Likewise for transactional workloads: "We must dedicate nodes, but not too many, to keep flexibility and optimize costs. We can dedicate two or three clusters, but no more, if we do not want to fall back into the complexity of administering the whole. In fact, the two main benefits of hyper-convergence are being able to manage and anticipate resource provisioning, and to accelerate deployment."
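As an illustration of this placement rule (cluster names and workload types below are hypothetical, not taken from any vendor's product), dedicating a cluster per application platform can be sketched as a simple mapping:

```python
# Hypothetical placement sketch: each workload type gets its own dedicated
# cluster, mirroring the "dedicate nodes per application platform" advice.

DEDICATED_CLUSTERS = {
    "vdi": "cluster-vdi",             # VDI often gets an entire cluster
    "telephony": "cluster-voip",      # IP telephony kept apart as well
    "transactional": "cluster-oltp",  # transactional apps and their databases
}

def place(workload_type: str) -> str:
    """Return the cluster dedicated to this workload type."""
    try:
        return DEDICATED_CLUSTERS[workload_type]
    except KeyError:
        raise ValueError(f"no dedicated cluster for {workload_type!r}")

print(place("vdi"))  # -> cluster-vdi
```

Keeping this mapping small and explicit is the point: two or three dedicated clusters remain easy to administer, while an open-ended list would reintroduce the complexity hyper-convergence is meant to remove.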
Start from an application map
Philippe Incherman, regional pre-sales manager at Nutanix, recommends starting with a classic application mapping "so that they can run on the platform and evolve without technological leaps. Our solutions make it possible, with a mouse click, to mix workloads across different technologies – SSD, NVMe and soon 3D XPoint controllers."
In general, hyper-converged infrastructures are well suited to projects renovating or creating highly virtualized data centers, with replication between remote sites, up to and including a disaster recovery plan (PRA) and support for a private cloud.
Similarly, IT development and test platforms, especially DevOps-oriented ones, are quick to take advantage of hyper-converged infrastructures: being able to deploy rapidly, to a private or public cloud, for example.
Two key steps to follow
As a first step, the aim is to manage services globally while sharing resource pools (CPU, memory, storage, network, etc.), which the solutions from VMware, Nutanix, HPE SimpliVity, NetApp HCI, and others facilitate.
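A minimal sketch of what such a shared pool means in practice (the class, node sizes, and numbers below are illustrative assumptions, not any vendor's API): every node joining the cluster grows the CPU, memory, and storage pools at once, and services are provisioned from the aggregate.

```python
# Hypothetical sketch of a shared resource pool: nodes contribute CPU,
# memory, and storage, and services draw from the aggregated totals.

class ResourcePool:
    def __init__(self):
        self.cpu_cores = 0
        self.memory_gb = 0
        self.storage_tb = 0

    def add_node(self, cpu_cores, memory_gb, storage_tb):
        """Joining a node grows every pool at once (hyper-converged scaling)."""
        self.cpu_cores += cpu_cores
        self.memory_gb += memory_gb
        self.storage_tb += storage_tb

    def reserve(self, cpu_cores, memory_gb, storage_tb):
        """Provision a service from the shared pool, failing if short."""
        if (cpu_cores > self.cpu_cores or memory_gb > self.memory_gb
                or storage_tb > self.storage_tb):
            raise RuntimeError("insufficient shared resources")
        self.cpu_cores -= cpu_cores
        self.memory_gb -= memory_gb
        self.storage_tb -= storage_tb

pool = ResourcePool()
pool.add_node(32, 256, 10)   # node 1 (illustrative sizes)
pool.add_node(32, 256, 10)   # node 2
pool.reserve(8, 64, 2)       # one service's worth of resources
print(pool.cpu_cores)        # 56 cores remain in the shared pool
```

This is what makes provisioning easy to manage and anticipate: capacity planning becomes a question of how many identical nodes to add, not of sizing separate compute and storage silos.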
In a second step, the data must be optimized at the source and brought closer to the processors to improve performance, while preserving flexibility through VM mobility. It must be possible to deploy VMs very easily and then back them up, move them, and replicate them in record time. And with the new generation of data centers, the system needs to know where the workloads are, and on which VMs.
Move the data where it needs to be
It must be possible to move the data very simply, to where it is needed, as in a private cloud, confirms Jean-François Marie. "Our HCI offering is in line with this priority: it is NetApp's 'Data Fabric' strategy, making it easy to connect to the cloud, whether in a central data center or a peripheral 'edge' site."
Ideally, two hyper-converged bases should be configured on two remote sites, with replication between them both at the level of the storage nodes and at the application level.
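Such a two-site setup could be described as follows (site names, modes, and the structure itself are hypothetical illustrations, not a real product's configuration format), declaring replication at both of the levels mentioned above:

```python
# Illustrative two-site topology: replication declared at both the
# storage-node level and the application level, as recommended above.

topology = {
    "sites": ["site-a", "site-b"],  # two hypothetical remote sites
    "replication": [
        {"level": "storage", "source": "site-a", "target": "site-b"},
        {"level": "application", "source": "site-a", "target": "site-b"},
    ],
}

def replicated_levels(topo):
    """Return the set of levels at which replication is configured."""
    return {rule["level"] for rule in topo["replication"]}

print(sorted(replicated_levels(topology)))  # ['application', 'storage']
```

A recovery plan that relies only on storage-level replication can miss application-level consistency, which is why the article's sources insist on declaring both.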
Example of a hyper-converged infrastructure (HCI). Source: NetApp
This is because it is always the application that determines the required level of integrity of its data. The use of flash/SSD drives has improved the response times of the most critical applications to values on the order of one hundred microseconds, rendering synchronous replication pointless.
"The application must regain control, which admittedly makes setting up a recovery plan more difficult. Technical teams must therefore focus on these complex tasks and have simple and agile infrastructure solutions. This is one of the keys to the success of hyperconvergence," says Jean-François Marie. NetApp HCI is distinguished by a node-based architecture that enables compute and storage resources to be expanded independently and dynamically, on demand.