AI is the hot thing. Don’t take our word for it. Edelman surveyed C-suite executives and found that more than 9 in 10 characterize AI as the “next technological revolution,” while Gartner says AI implementations grew fourfold over the past four years.
There are some teething pains, of course. Enterprises today often skip the strategy and aren’t always successful when applying AI. So if you’re among the newcomers struggling to chart an AI path for your organization, you’re not alone. You may even have basic questions about how to move forward. We’re going to try to answer some of the most common ones here on the blog.
Cloud Comes First to Mind
Among the first questions many IT leaders raise is, “Can’t I just outsource this to the cloud?” If there was ever an application for which a cloud-first strategy was appealing, it’s the still-emerging and not fully understood field of AI. IT pros realize it’s not yet possible to follow the breadcrumbs of best practices toward an optimal result, and many would like to skip the trial and error in favor of a more turnkey solution.
In this scenario, deploying AI in the cloud has many advantages, including:
- Current technology, which can be tough for organizations, especially SMBs, to afford on their own, at least until the price point on AI-ready systems drops.
- Ease of use delivered by hyperscale providers that want to increase subscriptions and, therefore, try to make their tools as user-friendly as possible.
- Low-risk buy-in with ready scaling appropriate to early-stage pilots and roll-outs. In particular, the cloud eliminates the need for large, up-front capital investments.
- No facilities upgrades to account for the increased cooling and other needs of high-density AI architectures.
Closing the Circle with On-Premises
Cloud is often the first step to AI, but for many IT organizations, it won’t be the last. Looking back, we may mark circa 2020 as the moment IT cried “back to the data center.” As with the “back to the land” organics movement of the 1970s, which continues to shape today’s food culture, IT is reassessing the value of an approach that was, for a time, considered outmoded. The “data center is dead” era, a theme we have frequently discussed, may be over.
In fact, some big names have been underscoring the advantages of on-premises AI implementations. Michael Dell, for one, has long opposed the use of public cloud for predictable workloads, and he applies the same logic to AI—it will cost less in your own data centers, he says, than in space rented by the teraflop.
Moreover, IBM is monitoring customer results and has noted that the organizations “deriving the most value from data are building their data management and AI platforms close to where the data resides.” Considerations such as latency take on outsized importance in AI, which consumes data differently than traditional data analytics. Rather than relying on discrete batch loads, high-performance neural networks require access to an underlying data fabric, often updated in real time, which can be difficult to support via the cloud.
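To make the latency point concrete, here is a back-of-envelope budget for a real-time inference request. Every number in this sketch is an illustrative assumption, not a benchmark: round-trip times and inference cost vary widely by workload and network.

```python
# Back-of-envelope latency budget for a real-time inference request.
# All figures below are illustrative assumptions, not measured values.

CLOUD_ROUND_TRIP_MS = 40.0   # assumed WAN round trip to a cloud region
LAN_ROUND_TRIP_MS = 0.5      # assumed round trip within an on-prem data center
INFERENCE_MS = 8.0           # assumed model inference time (same either way)
LATENCY_BUDGET_MS = 20.0     # assumed end-to-end budget for the use case

def total_latency(round_trip_ms: float, inference_ms: float = INFERENCE_MS) -> float:
    """Total per-request latency: one network round trip plus inference."""
    return round_trip_ms + inference_ms

cloud_total = total_latency(CLOUD_ROUND_TRIP_MS)   # 40.0 + 8.0 = 48.0 ms
onprem_total = total_latency(LAN_ROUND_TRIP_MS)    # 0.5 + 8.0 = 8.5 ms

print(f"cloud:   {cloud_total:.1f} ms (within budget: {cloud_total <= LATENCY_BUDGET_MS})")
print(f"on-prem: {onprem_total:.1f} ms (within budget: {onprem_total <= LATENCY_BUDGET_MS})")
```

Under these assumed numbers, a single cloud round trip alone blows a 20 ms budget before the model even runs, while the on-prem path leaves ample headroom. That is the shape of the argument, even if the specific figures differ for your workload.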
Admittedly, both IBM and Dell are hardware manufacturers with a vested interest in keeping enterprises at the hardware acquisition trough. We don’t want to overstate the case against AI in the cloud, but rather to point out that a “cloud first” or “cloud only” strategy does not always apply.
The truth remains that for many organizations, offerings from hyperscale providers like Google and Amazon are the best entry point into AI, and for some, these services will remain the go-to over the long term. But as AI takes hold, there are numerous use cases demanding on-premises processing and storage, so enterprises with high AI ambitions will want to begin now to prepare their data centers for the high-density architectures they will soon need to house.