Aging does not mean obsolete
Park Place Hardware Maintenance
As DevOps teams deliver increasingly powerful and resource-hungry applications that drive competitive edge, IT administrators are frequently tasked with deploying these applications onto already-stretched storage infrastructures. Rushed change management planning – or a lack of foresight into how many users will adopt and benefit from a newly delivered app – can quickly lead to insufficient input/output (I/O) performance from the storage layer underneath. This shortfall in I/O capacity hinders digital enablement and jeopardizes seamless app adoption across the organization. And if an app’s development has been outsourced, the complexities of layering it back into an existing internal storage infrastructure can be greater still.
Historically, in a typical mid-sized enterprise, 80% of resources (IT time, the latest hardware innovations, and budget) are dedicated to delivering the 20% of applications that are the organization’s lifeblood, such as its ERP and CRM systems. These top-tier applications are regularly ‘treated’ to upgrades such as the latest SSDs to guarantee maximum performance and total uptime.
However, for non-critical applications such as file and print, data archival and on-premises Exchange email, there is a greater tolerance for slower delivery and data retrieval. Accepting that peak performance is less of an issue here, these apps are ideal candidates to be tethered to reliable legacy primary storage systems that continue to work well past their vendor-stamped End of Life (EoL) date. That is not to imply that the role and value of these applications are insignificant. Quite the opposite, in fact: they are intrinsic to an organization’s working day. But given how they are typically used, modest internal performance delays are deemed more acceptable. Data archiving is another workload where performance and access are rarely provided at premium speeds; the focus instead is on accurate location and tagging of archived data. Extending the life of reliable storage hardware by assigning it to less-demanding apps also helps organizations meet sustainability goals, avoiding wasteful rip-and-replace programs simply because three years have passed since the procurement date.
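As a purely illustrative sketch of this kind of tiering decision – the application names, attributes and tier labels below are hypothetical examples, not a Park Place policy – a simple placement rule based on business criticality and latency tolerance might look like this:

```python
# Hypothetical illustration: map applications to storage tiers based on how
# tolerant each workload is of slower I/O. Names and labels are examples only.
from dataclasses import dataclass


@dataclass
class App:
    name: str
    business_critical: bool   # e.g. ERP, CRM
    latency_tolerant: bool    # e.g. file and print, archive, on-premises email


def assign_tier(app: App) -> str:
    """Return an illustrative storage tier label for an application."""
    if app.business_critical and not app.latency_tolerant:
        return "Tier 1: latest SSD / all-flash arrays"
    return "Tier 2: reliable post-EoL arrays under third-party support"


apps = [
    App("ERP", business_critical=True, latency_tolerant=False),
    App("File and print", business_critical=False, latency_tolerant=True),
    App("Data archive", business_critical=False, latency_tolerant=True),
]

for app in apps:
    print(f"{app.name}: {assign_tier(app)}")
```

In practice the decision weighs more factors than two flags, but the principle is the same: only the genuinely latency-sensitive minority of apps needs the newest hardware.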
At Park Place Technologies, this is a storage practice we frequently recommend, successfully extending the life of Dell EMC CLARiiON, Celerra, VNX, Isilon, Data Domain, Pure Storage, Hitachi, IBM, HPE and NetApp FAS storage systems through our leading Storage Hardware Support Services. For maximum flexibility, these systems are backed by customizable service response levels matched to Tier 2 application needs. For instance, when supporting applications such as Exchange email, we are frequently asked to provide proactive monitoring of the underlying storage devices, spotting and fixing anomalies such as new I/O performance conflicts or surges in demand that could lead to bottlenecks and sluggish performance. We also flag when a component within the storage stack appears to be on the verge of failure. With this level of monitoring, the age stamped on the hardware becomes less relevant, problems with data availability and data redundancy are overcome, and non-critical apps are restored to their former glory.

There is also a lot to be said for unifying the many storage vendor badges under one support contract. Doing so not only lowers costs but also delivers greater leverage, more holistic capabilities and higher service levels, with guarantees on first-time fixes for total reassurance. Under a single support contract, all departments receive the same unified support experience for storage management. This extends across devices through the Central Park monitoring portal, which can even send alerts to the mobile devices of selected IT storage managers.
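To make the kind of proactive I/O monitoring described above a little more concrete, here is a minimal, hypothetical sketch of one common technique: comparing each new latency sample against a rolling baseline and flagging sudden surges. It is illustrative only and does not represent Park Place’s actual monitoring tooling; the window size and surge threshold are assumed values.

```python
# Hypothetical sketch: flag I/O latency surges against a rolling baseline.
# Window size and surge factor are illustrative assumptions.
from collections import deque
from statistics import mean


class LatencyMonitor:
    def __init__(self, window: int = 60, surge_factor: float = 2.0):
        self.samples = deque(maxlen=window)  # recent latency samples (ms)
        self.surge_factor = surge_factor     # multiple of baseline that counts as a surge

    def observe(self, latency_ms: float) -> bool:
        """Record a sample and return True if it looks like an I/O surge."""
        is_surge = (
            len(self.samples) == self.samples.maxlen
            and latency_ms > self.surge_factor * mean(self.samples)
        )
        self.samples.append(latency_ms)
        return is_surge


monitor = LatencyMonitor(window=5)
for ms in [4.1, 3.9, 4.3, 4.0, 4.2, 12.5]:
    if monitor.observe(ms):
        print(f"Possible I/O bottleneck: {ms} ms against recent baseline")
```

Real monitoring platforms layer far more context on top of checks like this – per-array counters, component health data and trend analysis – but the underlying idea of detecting deviations from an established baseline is the same.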