
IBM – Red Hat OpenShift will Draw Big in the Hybrid and Multi-cloud Management Space: Vikas Arora, IBM

It may sound odd, but IBM has now come clean on its cloud strategy. Coming off consecutive quarters of shrinking revenue and lagging behind arch-rivals in the cloud space, IBM seems to be finding its mojo through a comprehensive hybrid cloud strategy and the innovation it has unlocked with the Red Hat deal. The company is placing big bets on a containerised approach, Watson Anywhere, and Cloud Paks.

I spoke to Vikas Arora, Vice President, Cloud and Cognitive Services, IBM India & South Asia, and discussed a wide range of issues, including IBM’s strategy to address the challenges faced by enterprises.

Below are the excerpts:

DynamicCIO (DCIO): Directionally, if I were to ask where IBM Cloud fits into the current scheme of things from a user’s perspective, how would you describe it?

Vikas Arora (VA): In the last 4-5 years, the shift towards cloud has been noteworthy. At the same time, however, not all workloads are being put in the cloud; critical ones still run on-premise. That establishes the fact that the future will be hybrid. Secondly, every customer we work with is using multiple clouds. On average, an Indian enterprise has anywhere between 5-10 different cloud instances, a combination of SaaS and infrastructure clouds. So, multi-cloud is a reality. Another key development the industry is witnessing is AI at scale. To make sense of data and gain real insights from it, data needs to be on a scalable, elastic platform. Applying AI to small data sets doesn’t work. To apply AI beyond the conversational form, to business problems like supply chain, you need a robust data platform. For this, we need to relook at all previous forms of data warehouses and put data in the cloud. Modernising the data platform is a reality in today’s context.

This is challenging. One of the key challenges is data security. Security posture and visibility across all environments is a key consideration; security has become a primary issue, whether mandated by a regulator or driven by consumer data privacy. Another challenge is data management. Managing numerous cloud instances is a huge hassle for a CIO/CTO, and an organisation can’t have different people manage different setups.

As organizations turn to the cloud, they’ll be looking for solutions that serve the needs of their specific industry. For highly regulated industries in particular, this means features that offload the burdens of compliance. We’re going to see more and more industry-specific characteristics emerging.

Open source technology will have a decisive impact on cloud. Enterprises are turning to open source software to modernise their infrastructure and accelerate their adoption of hybrid multi-cloud.

IBM’s cloud strategy focuses on hybrid and multi-cloud environments that allow enterprises to use the platforms and services they have already invested in. It also offers a large number of common open source services for ubiquitous integration, management, security, and data management. This approach is critical as more workloads move into the cloud. The foundation for hybrid is a cloud-agnostic strategy that enables businesses to use any existing or new service. To build a hybrid, multi-cloud strategy, IBM is now combining its open source strategy with its depth of product services for integration, connectivity, and management.

IBM has decided to focus on these core themes and bring them together into a strategy and a portfolio that address the industry’s challenges. At IBM, we help companies transition between public and private clouds without major hassles. In that context, the Red Hat acquisition is pivotal. The only way to do seamless transitions of workloads, private to public, public to public, or public back to private, is to follow a container architecture. The most prevalent container orchestration platform available today is Red Hat OpenShift. If you are modernising your existing platform or are a cloud-native company, chances are you’d be using OpenShift. Keeping that in mind, the entire software portfolio of IBM is OpenShift-ready and containerised: today WebSphere, DB2, Tivoli, and MQ are all completely containerised on OpenShift.

DCIO: When you say containerized, what changed from the traditional architecture of the products to make them containerized?

VA: Moving from a monolithic application architecture to a 12-factor microservices architecture, you had to bring in different elements. We had to look at the underlying platforms this application code runs on. The question was whether it could sit in a container, or whether it still needed to sit on bare metal or a VM. We were already powering a lot of mission-critical application workloads through application servers. Now, if you had to containerise a core application workload, like core banking, you could not just containerise the core and assume everything below it was containerised. The underlying platform, the integration platform, and the API connection platform all had to be containerised. So we containerised the underlying platform on which the whole code rested, and the microservices architecture made the applications portable. Architecturally, and at the platform level, embracing containers was a big need, and we started doing that. Today, all our products are container-ready.
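As a concrete illustration of the shift Arora describes, a containerised service is packaged together with its runtime and then declared to the orchestrator. The sketch below is hypothetical (the names, image, and ports are illustrative and not IBM artifacts); it shows a minimal Kubernetes/OpenShift Deployment for one microservice carved out of a monolith, with configuration externalised in 12-factor style:

```yaml
# Hypothetical sketch: one microservice from a formerly monolithic stack,
# repackaged so the app and its runtime ship as a single container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: core-banking-api          # illustrative name, not a real product
spec:
  replicas: 3                     # orchestrator handles scaling, not the app server
  selector:
    matchLabels:
      app: core-banking-api
  template:
    metadata:
      labels:
        app: core-banking-api
    spec:
      containers:
      - name: app
        image: registry.example.com/banking/core-banking-api:1.0  # app + runtime in one image
        ports:
        - containerPort: 9080
        env:                      # 12-factor: config comes from the environment, not the image
        - name: DB_HOST
          value: db2-service
```

Each layer Arora mentions, the application server, the integration backbone, and the API layer, would be declared as its own image and Deployment in the same way.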

That’s why the container strategy to support customers’ shift to hybrid cloud was an important one, and open standards, through the Red Hat acquisition, become a very vital part of it. Even if an organisation is locked into a cloud platform, it can move workloads as long as there is basic Kubernetes support, which is open by design. IBM came out with management products that provide a single pane of glass to manage multiple cloud environments. If I wanted to move a container, a VM, or even a bare-metal workload from one place to another, I could do that because of our strong cloud management layer.
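The portability Arora refers to rests on the fact that a spec written against core Kubernetes APIs is cluster-agnostic. A minimal, hypothetical example (names are illustrative):

```yaml
# Hypothetical sketch: a Service written against core Kubernetes APIs only.
# Because no vendor-specific fields are used, the identical file applies
# unchanged to OpenShift on-premise, IBM Cloud, AKS, EKS, or GKE --
# "moving" the workload amounts to re-applying it against another cluster.
apiVersion: v1
kind: Service
metadata:
  name: orders              # fronts a containerised "orders" workload
spec:
  selector:
    app: orders
  ports:
  - port: 80                # stable cluster-facing port
    targetPort: 8080        # the container's listening port
```

With standard tooling the move is `kubectl apply -f orders.yaml --context on-prem-cluster`, then the same command with a different `--context`; `--context` is kubectl’s standard flag for selecting the target cluster.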

Some of our recent wins have been because of the compelling combination of IBM services and IBM products. This helps us run and manage the entire cloud setup, which, in many ways, is comforting to customers. It is a huge differentiating factor for IBM.

DCIO: Are there examples where you’ve offered a combination of IBM products and services?

VA: A large financial services company based in Mumbai wanted to move a majority of its workloads, including all VMware instances, to the cloud. Since IBM had VMware support on its cloud, and because many VMware workloads today can automatically move to the public cloud within the same vCenter environment, we were able to do that. This is where our services and cloud came together, not only to move some of these workloads but also to manage them on the cloud.

Another similar example is Volvo Eicher, where IBM Services was already present and the company wanted to put more applications on the cloud. The combination of services and products came in handy.


I can also talk about a lot of DR cases. Customers are looking at modernising DR: instead of using a target location in a datacenter, they want to put their DR in a public cloud. A combination of our cloud products and the services to run and manage it becomes quite compelling for customers to evaluate.

DCIO: As Ginni Rometty said last year, Chapter 2 of digital journey is about moving from experimentation to true transformation. It’s about gaining speed and scale. In an inside-out approach, legacy orgs are modernising their core systems and architecting businesses for change. On the other hand, new age orgs are taking an outside-in approach which is largely driven by the market and demand for the new digital services. So, where does IBM have a bigger play?

VA: IBM is comfortably placed in both; there are business cases in both areas. First, let’s talk about modernising the core – the inside-out approach. Last year, one of the businesses that grew in very high double digits for IBM India was the data platform business. How did that happen? Some of the large companies, including banks, were looking at modernising their data platform infrastructure. IBM was present in those companies in a strong way with its technology, databases, appliances and more. These organisations wanted to move to the next level, which basically meant a modern information architecture that brings structured and unstructured data together and wraps this data backbone with services to help consume analytics as a service. That’s just the first step; organisations still have to get the other layers in place in order to move to Chapter 2. This, to me, is a classic example of the inside-out approach, where you take the huge monolithic investment you’ve made and organise it for use cases that are not possible today. That’s where IBM plays a role. Our incumbent position in many of these accounts became an asset because we were anyway managing those infrastructure pieces and just had to help modernise them. Because of our ability to manage multi-cloud, and a range of other infrastructure and software services, we have been able to manage the inside-out piece well.

Let’s now talk about outside-in. We have quite a few examples there, but one that came out recently was Federal Bank. IBM successfully built the bank’s infrastructure for API banking. This is a classic outside-in approach, where you build layers that allow vendors, partners, or suppliers to have seamless connectivity with your core systems. Federal Bank went on record to say that it used IBM’s hybrid cloud platform to accomplish this. With completely containerized products, we are able to serve such outside-in use cases well.

DCIO: Yet another aspect of Chapter 2 was ‘digital and AI’, and that’s where the Watson Anywhere concept comes in. There cannot be AI without IA (information architecture); the latter has to be in place to effectively utilise, run, and exploit artificial intelligence. How does Watson Anywhere play a role?

VA: In today’s context, data is increasingly seen as the single most vital asset that will help organizations drive digital transformation. Chapter 1 was really about digital transformation on the periphery, where you modernise the interface, the methods of interaction with stakeholders (consumers and partners), and a lot of other things in the mobile and social domains. But true digital transformation happens when you get insights from data and have the right analytics and data engine inside, so that businesses are able to make informed decisions. AI is seen as the tool to get value from raw data, and hybrid cloud is where all of this is happening.

One of the patient care companies in India – focused on diabetes – has its core application hosted on the AWS cloud. It wanted to infuse AI into this application, and it chose Watson. For IBM, such cases aren’t unique: the platform can be anything, and Watson Anywhere works on it seamlessly. This company wanted Watson APIs integrated with its code. The interesting part is that while AI can be used to unlock the value of data, the platform is hybrid, and this hybrid can be a mix of public and private cloud. That’s where real AI is moving after the experimentation stage. The initial use cases of AI were more like toys created to play with, but real value is coming out now with cases like these.

I recently came across a customer that wanted to apply AI to price prediction. Frankly, we failed at it initially because the underlying data infrastructure was missing. We tried with some source data, but obviously comprehensive models could not be created. That led us to rectify the information architecture, or IA, without which it was impossible to apply AI at scale.

I’d like to stress again that to have AI at scale, the core data infrastructure has to be addressed. It’s true that without IA, there cannot be AI. Since last year, this has been a big focus area for IBM. Correcting the data foundation easily takes a couple of years because an organisation already holds so much data; on top of that, the right governance and management are essential. Organisations need to relook at their fundamental information architecture. Once the information architecture is corrected, it is easy to apply an AI engine and derive real insights. As part of our strategy, we also containerized the entire Watson platform, giving birth to ‘Watson Anywhere’. Now companies can run Watson on IBM Cloud, Azure, or AWS. You can even run Watson in your own datacenter. This has made Watson more accessible to companies.

DCIO: What sort of organizations is IBM engaging for transforming their data platforms?

VA: Banking is a big sector; telecom is another. These have also been our traditional strongholds, and they forayed into data platform investments earlier than others. Historically, some of the largest data warehouses have been run by telcos and banks. These sectors have also been ahead on the maturity curve, having a data strategy, and are now looking to modernise. As digital transformation continues to spread, other sectors are opening up too, including healthcare, consumer goods, retail, etc.

DCIO: You mentioned it takes a couple of years to put the data (information) infrastructure in place. What goes on in these two years?

VA: In a lot of these cases, there is a combination of a future aspiration and an existing pain point that companies want to address simultaneously, and that makes business sense from an ROI perspective. The business objective is to increase the speed of decision-making. The challenge is having the right data (information), the agility of the platform to seek that information, and the ability to translate it into effective insights by infusing artificial intelligence. No organisation today can afford to address this linearly; it would be unwise to wait for years to see a single outcome. The process should ideally be broken into smaller sets to have quick wins and successes. Take the example of a bank: either the bank is growing and needs information to reach the user faster, or the requirements of the business have changed and it wants a lot more from the data it has, in terms of predictions and insights. An organisation has to define these milestones. IBM’s Global Business Services (GBS), the consulting arm, does a lot of this work; they help companies define business outcomes.

DCIO: Let’s now come on to the IBM-Red Hat deal. Different industry analysts describe it differently. Some say that it was IBM’s masterstroke to win the cloud war. However, my question is, what does this combination of IBM and Red Hat OpenShift mean to the users now? How can enterprises, using OpenShift, scale across IBM’s global footprint of datacenters and multiple zones for consistently monitoring logs and securing the applications?

VA: IBM and Red Hat have been working together for a very long time, even before the deal. IBM, on its own, has also been participating in and contributing to large open source projects; IBM’s blockchain initiative is one of the best examples. We have always been a keen contributor to open source. The other reality here is that Linux has been the predominant operating system (OS) in enterprises. According to sources, over 60% of services in Azure actually run on Linux, which was unimaginable some time ago. Within Linux, Red Hat Enterprise Linux is the predominant distribution. Now, where is this whole install base of Linux going? It is basically going the container route. It won’t be an exaggeration to say that OpenShift is the new Linux. If the industry is moving towards containers on Linux/open source, and it is going to happen across the spectrum of hybrid cloud, it made sense for IBM and Red Hat to come together.

Since the deal, IBM has come up with something called Cloud Paks. IBM Cloud Paks are enterprise-grade containerized software, combining container images with enterprise capabilities for deployment in production use cases, with integrations for management and lifecycle operations. For example, IBM Cloud Pak for Applications runs on Red Hat OpenShift. Similarly, there are Cloud Paks for Data, Integration, Automation, Multi-cloud Management, and Security.

In Cloud Paks, companies get an integrated suite of products that helps them build, deploy, and run cloud-native applications. The Cloud Pak for Integration helps applications integrate through a containerized backbone. Similarly, the Cloud Pak for Automation is about bringing business process or workflow automation, completely cloud-native and cloud-ready. The Cloud Pak for Multi-cloud Management lets you manage your multi-cloud environments.

Recently we launched a Cloud Pak for Security too. This is a comprehensive platform with security for all layers including the orchestration and response layer, completely containerized using Red Hat OpenShift.

Since these are containerized, they can be deployed anywhere. You can deploy a Cloud Pak in your own datacenter or on a bare metal server. You could deploy it on IBM Cloud or on Azure, because the underlying backbone is OpenShift.

DCIO: So finally, coming to the Indian landscape, even today there’s a slight tone of resentment against moving core applications to the cloud. Do you see that changing? Are organizations looking at putting core applications on the cloud? How are they reacting to it?

VA: The notion that ‘cloud is synonymous with public cloud’ needs to evolve. Today, hosted environments and private DC environments are also becoming very cloud-like. Using OpenShift brings a lot of those characteristics: fault tolerance, resilience, monitoring, everything that you’d typically see in a cloud environment. Our data suggests that not more than 20% of enterprise workloads have moved to the public cloud despite the last 10 years of cloud momentum. I do not see the concept of hybrid going away in the foreseeable future. Why are mission-critical or core applications not being hosted on the cloud? The reason is simple: what isn’t broken shouldn’t be fixed. For a lot of organisations, their data security policy or regulations don’t allow them to expose data on the cloud. But with the ability to containerise core workloads, the IT infrastructure that actually runs and empowers the business is becoming cloud-like. Some mission-critical workloads have actually started moving down that path; some will, over a period of time. Companies need to come out of their hesitation to finally move to the public cloud. So, in my opinion, this movement will continue, but expecting a complete transition to cloud would not be appropriate. It is going to be hybrid in nature.

DCIO: Are enterprises shifting their focus from VMs to containerised environments?

VA: I am certainly witnessing that, at least for new workloads that are meant to be cloud-native or built on a microservices architecture. Even if they are running out of a datacenter, architectural considerations are now key. When organisations are looking at building new stuff, containers are becoming their first choice.

