As a concept, SDDC reaches back to at least 2012, when it became a hot topic at storage and networking conferences. As a reality, however, SDDC is an evolving field, one on which many IT pros have nonetheless pinned their hopes.
In 2015, Gartner analyst Dave Russell cautioned that “[d]ue to its current immaturity, the SDDC is most appropriate for visionary organizations with advanced expertise in I&O engineering and architecture.” At the same time, Gartner predicted that 75% of Global 2000 enterprises will require SDDC by 2020, putting us about halfway along the trajectory to widespread adoption, at least among larger businesses. A 2016 MarketsandMarkets study reached conclusions supporting such a “hockey stick” growth chart, projecting a 26.5% CAGR that would bring the market to $83.2 billion by 2021.
By most measures, SDDC is at a transitional phase: virtualized servers and hyperconverged appliances are reaching maturity, but pushing the software-defined model into less charted realms, such as software-defined power and self-aware cooling, remains mostly on the whiteboard.
A conservative adoption strategy can track the changing status of emerging technologies, leaving guinea pig experimentation to those with the funds and inclination to undertake it and allowing time for winning and losing technologies in each area to shake out. This approach provides greater clarity for those positioning themselves a step back from the bleeding edge.
Server virtualization, for example, is reaching its peak and has become the de facto choice for organizations of all sizes. Software-defined storage falls in line as well, with technologies frequently offering a favorable upgrade pathway from traditional solutions.
Software-defined networking is gaining stability as the explosion in global IP traffic, projected by Cisco to reach 3.2 zettabytes by 2021, coincides with rapid growth of east-west traffic within the data center, necessitating advancements. Technologies such as VMware’s NSX and network functions virtualization (NFV) are deemed by most experts mature enough to serve as a primary software-defined networking backbone.
These developments provide viable SDDC entry points for the enterprise, as well as for IT professionals seeking to future-proof their own skill sets. For instance, server virtualization is no longer optional for businesses or systems administrators. In fact, reaching 90% virtualization is a good indicator that an enterprise is ready for the transition to SDDC.
From there, converged racks of software-defined storage, networking, and compute, as well as hyperconverged appliances, are also enterprise-ready. Hyperconvergence, whether in its currently vendor-specific form or a potential future, more vendor-agnostic iteration, will likely be a requirement as enterprises absorb the flood of data from IoT, respond to accelerating market changes with continuously deployed solutions, and add capacity to serve internal and external customers. Its power comes from being preconfigured and engineered for scalability: interoperable modules are added under a hypervisor for Lego-esque expansion.
Despite its potential, the transition to hyperconvergence may not be complete until 2025 or later. Even then, the chances of 100% hyperconvergence are essentially nil, as it is not appropriate for every application. For example, ROI is best for centralized workloads but dwindles at smaller scales. As edge computing takes hold, adoption of hyperconvergence will be tempered by IoT-centric and other technologies that cannot be incorporated into such a virtual box.
SDDC’s Final Frontier?
An important next stage in SDDC’s implementation will be to bring total virtualization to the data center facility itself in order to achieve the full potential of “software-defined everything.” Through virtualization, utilization rates and equipment densities have multiplied. In a sense, the space problem in the data center has been solved: miniaturization via on-premises hyperconvergence, especially when paired with cloud overflow, is keeping pace with increased demand.
With these changes, the pressure on power and cooling systems has only grown. Emerging possibilities in the area of software-defined power could soon enable use of spare or stranded capacity. With dynamic redundancy, power systems currently experiencing only 50% utilization may be able to approach 100%, resulting in significant operational and capital savings. While such developments have been primarily vaporware to date, hyper-scaler technology and data center infrastructure management (DCIM) are just now reaching a stage to invite commercialization of this next stage of data center virtualization.
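The utilization math behind dynamic redundancy can be sketched with back-of-the-envelope arithmetic. The figures below are hypothetical, not vendor data: in a conventional 2N power design, each feed is held at or below 50% load so the surviving feed can carry everything if its twin fails, which strands the other half of the rated capacity.

```python
# Illustrative arithmetic only; all figures are hypothetical assumptions.
# In a 2N-redundant power design, each feed runs at ~50% so either feed
# alone can carry the full critical load during a failure.

feed_capacity_kw = 1000      # rated capacity of each redundant feed (assumed)
steady_state_load_kw = 500   # typical 50% utilization under 2N redundancy

stranded_kw = feed_capacity_kw - steady_state_load_kw

# Software-defined power aims to place preemptible, non-critical load into
# that idle reserve, shedding it automatically if a feed fails. A safety
# margin (10% here, an assumption) keeps the reserve from being fully used.
preemptible_kw = stranded_kw * 0.9
utilization = (steady_state_load_kw + preemptible_kw) / feed_capacity_kw

print(f"Stranded capacity per feed: {stranded_kw} kW")
print(f"Utilization with dynamic redundancy: {utilization:.0%}")
```

Under these assumptions, 500 kW of stranded capacity per feed becomes available for preemptible work, lifting utilization from 50% toward the theoretical ceiling, which is where the projected operational and capital savings come from.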
The Bottom Line
The SDDC transition already underway holds great promise to boost enterprises’ ability to absorb technological and business-driven change. Will there be developments beyond the software-defined revolution? Absolutely. But for IT professionals standing on the bridge navigating for tomorrow, harnessing the power of SDDC is likely the best option currently available for future-proofing the data center.
Paul Mercina brings over 20 years of experience in IT center project management to Park Place Technologies, where he has been a catalyst for shaping the evolutionary strategies of Park Place’s offering, tapping key industry insights and identifying customer pain points to help deliver the best possible service. A true visionary, Paul is currently implementing systems that will allow Park Place to grow and diversify their product offering based on customer needs for years to come.