There are many considerations for building a private cloud, especially with a tight budget. Keep costs in check by carefully planning out hardware, capacity, storage and networking
Identifying the most cost-effective approach to the cloud is not easy. Budgetary constraints and common sense indicate that enterprises will take a phased approach to the cloud, except in the occasional greenfield installation. And when planning out cloud infrastructures, organizations should take into account hardware, capacity, storage and networking requirements.
A large enterprise might break a cloud installation into three annual phases to allow future replacement cycles to take place as a rolling three-year upgrade, which evens out acquisition costs over time. Additionally, legacy systems need to coexist with the new cloud infrastructure. While mainframes rarely figure into the cloud equation, storage arrays can be repurposed -- especially newer purchases.
Virtualization originally drove enterprises to servers without drives, but the realities of virtualized I/O performance created high-end server configurations with local fast storage -- typically solid-state drives (SSD) or flash. These "instance stores" are surrogates for drives in un-virtualized systems.
Likewise, the processor structure differs depending on the server's target service. For big data analytics, high-end processors with large memory capacity are the best configurations; Web servers and general computing could use inexpensive, diskless, low-core-count x64 or ARM64 engines packed as 1/2U servers.
Segregating use cases into big engine/small engine operations allows IT teams to create two or more heterogeneous sub-clouds. And managing them from a single console shouldn't be too difficult.
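As a rough illustration of how those pools might be described, consider the sketch below. The profile names, memory sizes and storage choices are hypothetical placeholders, not sizing recommendations.

```python
# Hypothetical hardware profiles for two heterogeneous sub-clouds.
# All figures are illustrative assumptions, not recommendations.

SUB_CLOUDS = {
    "analytics": {            # big-engine pool for big data workloads
        "cpu": "high-end x64, high core count",
        "memory_gb": 512,
        "local_storage": "fast SSD/flash instance store",
    },
    "general": {              # small-engine pool for web and general compute
        "cpu": "low-core-count x64 or ARM64",
        "memory_gb": 64,
        "local_storage": "diskless (network boot)",
    },
}

def pool_for(workload_type: str) -> str:
    """Map a workload class to the sub-cloud that should host it."""
    return "analytics" if workload_type == "big_data" else "general"

print(pool_for("big_data"))   # -> analytics
print(pool_for("web"))        # -> general
```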
Capacity planning for private cloud
Capacity planning is the next step in your private cloud project. A large part of successful capacity planning is to look at daily, weekly and monthly cycles to establish variations in the workload. This should give an instance count for level-loading.
Next, look at peak excursions and determine how to handle them. This includes flexible start times and job priorities, which can be baked into orchestration policies. This analysis yields the number of instances required for an evenly loaded system with typical excursions.
It's also wise to add extra units for unexpected events, plus a small quantity to cover failures, and to lay down a plan for adding more units as expansion demands. Firewalls, software-defined networks and storage require additional instances. Perform a similar utilization analysis for each sub-cloud in your project.
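Here is a back-of-the-envelope sketch of that arithmetic. The sampled demand figures, buffer percentages and overhead count are assumptions for illustration only.

```python
# Rough capacity estimate for one sub-cloud.
# All inputs are illustrative assumptions, not measured values.

hourly_instance_demand = [40, 42, 38, 55, 70, 65, 50, 45]  # sampled over daily/weekly cycles

baseline = round(sum(hourly_instance_demand) / len(hourly_instance_demand))  # level-loaded count
peak = max(hourly_instance_demand)            # typical excursion to absorb
failure_buffer = max(1, round(peak * 0.05))   # small allowance for failed nodes
event_buffer = max(1, round(peak * 0.10))     # headroom for unexpected events
overhead = 4                                  # firewalls, SDN controllers, storage services

total_instances = peak + failure_buffer + event_buffer + overhead
print(f"Level load: {baseline}, peak: {peak}, total to provision: {total_instances}")
```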
Defining storage requirements
Networked storage is another element in cloud infrastructure. The days of expensive "enterprise" drives are disappearing as SSDs replace high-end Serial-Attached SCSI (SAS) disks. Auto-tiering and caching software is changing the tiering structure to a (smaller) primary SSD array and a (larger) secondary array of inexpensive bulk drives. Even with less capacity in the primary tier, all of the active data will fit in most environments and will be delivered 1,000 times faster.
Deduplication is a way to expand the capacity of storage by as much as six times. And with drives jumping from 1 TB to 10 TB in just the last three years, the number and physical footprint of new-age arrays can be smaller.
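To see how tiering and deduplication interact in a sizing exercise, here is a minimal sketch. The working-set size, total footprint, 6:1 dedup ratio and drive capacity are assumptions used only to illustrate the calculation.

```python
# Illustrative storage sizing: small SSD primary tier plus larger bulk tier,
# with deduplication expanding effective capacity. All figures are assumptions.

active_data_tb = 20          # hot working set that should live on the SSD tier
total_data_tb = 150          # full data footprint
dedup_ratio = 6              # "as much as six times" capacity expansion
drive_size_tb = 10           # modern high-capacity drive

ssd_tier_tb = active_data_tb                      # primary tier only needs the active data
raw_bulk_tb = total_data_tb / dedup_ratio         # raw capacity needed after dedup
bulk_drives = int(-(-raw_bulk_tb // drive_size_tb))  # ceiling division for drive count

print(f"SSD tier: {ssd_tier_tb} TB; bulk tier: {raw_bulk_tb:.0f} TB raw "
      f"({bulk_drives} x {drive_size_tb} TB drives)")
```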
Storage prices also are dropping rapidly, and for many similar reasons. Additionally, software-defined storage is poised to decouple high-end features from the arrays and eliminate the need for complex and expensive high-end arrays.
Networking options continue to advance
Much like storage, networking is also going through a transition -- and at a much faster pace. Software-defined networks promise to make network configuration agile and allow fast, automated orchestration of virtualized networks. The compute functions move onto standard virtual machines, while switching is done on commodity silicon.
All of this bodes well for infrastructure costs in your new cloud. However, there are still some options to save more money and work. Modular systems, such as containerized data center pods or rack-level installations, save a great deal of start-up effort and can be implemented faster than traditional approaches.
Converged systems take this a step further, defining a combination of servers, networking and storage that are pre-integrated and managed through a common console. These limit the need for separate organizational silos, with corresponding improvements in efficiency and staffing levels.
Planning the infrastructure for the cloud is not magic; it requires a systematic approach. The rapid evolution of hardware technologies can complicate the decision-making process. Buying only what you need and evaluating the cloud project quarterly can ease the process and help to maintain optimal performance in your private cloud.
Closely watching your private cloud network performance is important, but often overlooked. Finding the right monitoring tools can save you a lot of hassle.
Monitoring is a crucial element of any private or hybrid cloud performance strategy. Cloud performance can make or break your reputation with users, and make your job as a cloud admin complicated. Without a clear picture of workload and network performance, it's impossible to justify configuration or architectural changes -- such as workload balancing -- or quantify the effectiveness of quality of service (QoS) implementations or more far-reaching technologies, such as software-defined networking (SDN) for the private cloud.
Performance monitoring is easier in the private cloud, where an organization has complete access to systems and the software stack. Monitoring can be far more difficult in public cloud or hybrid cloud environments, because cloud providers expose far less of their infrastructure to users.
Still, even if your company decides to use public cloud services, application performance monitoring (APM) tools can offer a glimpse of performance behaviors -- especially when such results are compared to the performance of the same workload in a private cloud setting.
These objective performance comparisons can quickly help IT administrators determine the best place to run each workload, looking particularly at mission-critical applications. For example, suppose a workload uses 1.5 compute units in the public cloud to accomplish the same amount of work performed by 1 compute unit in a private cloud. It's not difficult to determine the most cost-effective computing site given a workload's importance, cloud burst needs and other factors.
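A quick way to frame that comparison is sketched below. The compute-unit counts follow the example above, but the hourly rates are hypothetical and would come from your own public cloud bill and private cloud cost model.

```python
# Hypothetical placement comparison: the same workload measured in both clouds.
# Rates are illustrative assumptions; unit counts follow the example in the text.

public_units = 1.5        # compute units the workload consumes in the public cloud
private_units = 1.0       # compute units for the same work in the private cloud
public_rate = 0.20        # $ per compute-unit hour, public (assumed)
private_rate = 0.25       # $ per compute-unit hour, private (assumed, incl. amortized hardware)

public_cost = public_units * public_rate
private_cost = private_units * private_rate

cheaper = "public" if public_cost < private_cost else "private"
print(f"Public: ${public_cost:.2f}/hr, private: ${private_cost:.2f}/hr -> "
      f"run this workload in the {cheaper} cloud")
```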
About the author: Stephen J. Bigelow is the senior technology editor of the Data Center and Virtualization Media Group. He can be reached at sbigelow@techtarget.com.