The shift to the cloud, and the consequent boom in the sector, was held together by a grand promise: any company could digitally transform itself and keep its data secure on the cloud. But the cost of such transformation is rising, driven higher by the spate of generative AI tools now added to the mix.
Big Tech companies with fat cloud bills face something of a catch-22: they cannot opt out for fear of being left behind, so they are looking for more ways to cut costs.
Making in-house AI chips to cut costs
On 11 July, at a semiconductor conference in San Francisco, IBM said it was considering using its in-house AI chips to lower the costs of cloud computing. Mukesh Khare, a general manager of IBM Semiconductors, said in an interview with Reuters that the company may use a chip called the Artificial Intelligence Unit in its new enterprise AI platform, Watsonx. Khare noted that the chip's greater energy efficiency would address one of the big pitfalls of the old Watson system: high costs.
IBM took its cue from other tech giants like Google, Microsoft and Amazon, all of whom are designing their own AI chips in the hope of saving money on their AI push. Until now, demand has been concentrated on a small pool of specialised chips, chiefly graphics processors, or GPUs, from NVIDIA. But the field is widening to accommodate that demand. Microsoft has reportedly accelerated Athena, its project to design its own AI chips. The Satya Nadella-led company hopes to make them available internally and to OpenAI by next year.
Shift to on-premises
“AI and ML require specialized resources which are extremely expensive to build on-premises, even for very large enterprises. On the other hand, given the heavy use of AI/ML by competitors, enterprises do not want to be left behind in the race. The cloud offers an ideal solution for enterprises that need to strengthen the infrastructure required to build AI/ML into their growth roadmap,” said Dharmendra Chouhan, Director of Engineering at big data cloud platform Kyvos Insights.
“The basic nature of the cloud is such that the more one uses it, the more the enterprise pays. This does not mean that an enterprise cannot control cloud-spend if it wants to leverage the power of AI/ML,” he added. According to Chouhan, it is essential for clients to first figure out what they themselves need. “The choice of tools is one such factor. There are many open-source and commercial AI/ML solutions that look attractive from a functional point of view. However, one needs to also compare the cost of creating and using models while choosing a tool.”
He explained, “The next thing enterprises need to consider for maximizing ROI is not trying to do everything on their own. There are, and will continue to be, many open-source and paid models that can be used as a base and then trained on the specific data of that enterprise. The choice of a cloud provider is also an important consideration. Since hardware is a major part of the cost, one needs to identify the right cloud provider and also factor in the cost of specialized hardware provided by them.”
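Chouhan's point about factoring in provider hardware costs can be illustrated with a back-of-the-envelope comparison. The sketch below is a simplified model; the provider names, rates and usage figures are hypothetical, not real pricing:

```python
# Illustrative sketch of comparing an AI/ML workload's monthly cloud bill
# across providers. All rates and usage figures are hypothetical.

def workload_cost(gpu_hours, hourly_rate, storage_gb, storage_rate_per_gb):
    """Simplified monthly cost: specialised-hardware compute + storage."""
    return gpu_hours * hourly_rate + storage_gb * storage_rate_per_gb

# Hypothetical providers with different specialised-hardware pricing
providers = {
    "provider_a": {"hourly_rate": 2.50, "storage_rate_per_gb": 0.02},
    "provider_b": {"hourly_rate": 3.10, "storage_rate_per_gb": 0.01},
}

gpu_hours, storage_gb = 400, 5000  # assumed monthly usage
for name, p in providers.items():
    cost = workload_cost(gpu_hours, p["hourly_rate"],
                         storage_gb, p["storage_rate_per_gb"])
    print(f"{name}: ${cost:,.2f}/month")
```

Even this toy model shows why the cheapest hourly GPU rate alone is not decisive: the provider with pricier compute can still win once storage (or egress, support and reserved-capacity discounts, omitted here) is factored in.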