IT Leadership Opinion

‘Winning the Lottery’ Approach Doesn’t Work. CIOs Need to Fix Smaller Problems, One at a Time: Craig Stires, AWS

Say kia ora (hello) to Oscar and chances are you’ll get a prompt reply. Oscar is an AI-powered chatbot deployed by Air New Zealand that can respond in as many as 450 ways. In February 2017, the airline announced the chatbot to assist customers and offer a more personalised experience than searching online or speaking to agents over IVR. For Air New Zealand, facing stiff competition from a dozen new airlines that have entered the region, a seamless customer experience powered by emerging tech could be a game changer. Oscar is powered by AWS machine learning. At the recently concluded re:Invent 2018, Andy Jassy, CEO of AWS, sent out a strong message when he launched Amazon SageMaker Ground Truth, built to create highly accurate training datasets and reduce data-labelling costs by up to 70% using machine learning. He also announced the launch of AWS Marketplace for machine learning, which offers more than 150 algorithms and models that can be deployed directly to Amazon SageMaker.

During re:Invent 2018, I caught up with Craig Stires, Head of Analytics, AI and Big Data for the AWS APAC region, in Las Vegas. Even before a formal conversation could start, Craig was excited to tell the story of Air New Zealand, a project he was closely involved with. “It was frustrating for the airline’s customers to navigate websites. All the IVRs (interactive voice response systems) were terrible. The answer was a chatbot. Building a chatbot is easy, but creating one that can live up to customer expectations is a daunting task,” Craig said.

When Oscar was first launched, it could answer only about one in seven questions correctly. But rather than giving up at a 14% success rate, the airline examined which questions were being answered correctly, then applied machine learning to figure out where the bot was failing. Some of those were routine questions: “When is the next flight from Auckland to Sydney?” “If I’m a business class traveller, how much luggage can I take on the flight?” The conversations were mapped, and natural language understanding was applied to analyse the nature of the questions where it failed. Within a year, the airline attained a 75% success rate. From one out of seven to three out of four was quite a jump, and a delightful experience for customers. “Air New Zealand picked very specific problems to fix. They didn’t throw up their hands and quit when they realised they didn’t get it right the first time,” said Craig.
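The loop Craig describes – measure which questions the bot handles, inspect the failures, fix the most common ones – can be sketched in a few lines. Everything below (the intents, keywords and sample questions) is a hypothetical illustration of the measure-and-fix pattern, not Air New Zealand’s actual system, which uses AWS machine learning rather than keyword rules.

```python
# Hypothetical sketch of the measure-and-fix loop behind a chatbot.
# Intents, keywords and questions are illustrative only.

INTENTS = {
    "next_flight": {"next", "flight"},
    "baggage_allowance": {"luggage", "baggage"},
}

def classify(question):
    """Return the first intent whose keywords overlap the question, else None."""
    words = set(question.lower().rstrip("?").split())
    for intent, keywords in INTENTS.items():
        if keywords & words:
            return intent
    return None  # unanswered: a candidate for the next round of fixes

def success_rate(labelled):
    """labelled: list of (question, expected_intent) pairs."""
    return sum(classify(q) == intent for q, intent in labelled) / len(labelled)

labelled = [
    ("When is the next flight from Auckland to Sydney?", "next_flight"),
    ("How much luggage can I take in business class?", "baggage_allowance"),
    ("Can I bring my dog on board?", "pet_policy"),
]
failures = [(q, i) for q, i in labelled if classify(q) != i]
# success_rate(labelled) is 2/3 here; 'failures' tells you what to fix next.
```

The point is not the keyword matching but the loop: score against labelled conversations, look at what failed, add the missing intent, and re-measure – the same iteration that took Oscar from one in seven to three in four.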

What’s the key message here? CIOs may be missing some of the broader context while trying to push change in the organisation through technology investments. “It’s important to delve into why we are building what we are building,” says Craig, emphasising that CIOs should pick smaller problems to solve rather than take a big-bang approach.

Technology execs with a ‘winning the lottery’ mentality aren’t at any advantage. We assume that throwing data into a black box yields magical outcomes. “Organisations that go down that path rarely succeed. The ones who pick up real problems and make them a focal point succeed,” he underlines.

This ‘winning the lottery’ remark set the tone for my conversation with Craig, and I built on it to dive deep into an important issue: how CIOs should make sense of data, and how machine learning and AI will help them create the winning formula.

Rahul Mani (RM): The biggest challenge for CIOs, whether of a small, medium or large organisation, is to make sense of data through a process of automation. Where do they go wrong?

Craig Stires (CS): Universally, it’s a huge challenge to make sense of what you have (data). Secondly, how can you be really smart about going into that data and looking for the right answers? Tons of digital exhaust is being generated from systems like ERP, geospatial data, social streams, sensors and so on, and there is a huge upfront cost in collecting and storing all of it. That isn’t the right approach. Great companies take smaller samples of data, not the entire dump. It is advisable to capture data for a specific period and use that for testing. I strongly believe there has to be a business case attached to the data we’re collecting. At AWS, we talk about ‘knowing what you have’. There’s a need for cataloguing data. I’d like to mention AWS Glue, which does a few different things, one of which is cataloguing data. To get good, clean data, it should be in a form that you can test and validate. The likes of Netflix, Airbnb and other such companies are really good with data and adopt these practices, and other enterprises are following them today. Similar to the Air New Zealand example I mentioned, we work with many other airlines that have huge amounts of data beyond their transactional data. That sort of data is so prolific that if they captured all of it, the costs would be crazy. So they start with some very specific things they plan to fix, pull smaller data samples and validate them.
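Craig’s advice – pull a small, time-bounded sample and validate it before committing to capturing the full dump – can be sketched as follows. The record shape and the validation rules here are hypothetical, purely to show the order of operations.

```python
import random
from datetime import date

def sample_window(records, start, end, k, seed=0):
    """Randomly sample up to k records from a specific time window,
    instead of ingesting the entire dump."""
    window = [r for r in records if start <= r["date"] <= end]
    return random.Random(seed).sample(window, min(k, len(window)))

def validate(record):
    """Hypothetical checks: required fields present, amount non-negative."""
    return "id" in record and record.get("amount", -1) >= 0

# Illustrative 'digital exhaust': 100 synthetic transaction records.
records = [
    {"id": i, "date": date(2018, 1 + i % 12, 1), "amount": i * 10}
    for i in range(100)
]
sample = sample_window(records, date(2018, 1, 1), date(2018, 6, 30), k=10)
valid_share = sum(validate(r) for r in sample) / len(sample)
```

The pattern matters more than the code: sample first, validate cheaply, and only then make the business case for capturing – and paying to store – more.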

That’s where managed services like Amazon SageMaker help immensely. SageMaker does a number of things to make the life of data scientists easier. Most of us don’t necessarily know where to start, what the best model is, or what the best approach is. What’s important is that as we move data into the data lake, we know the intention behind it. Remember, it’s never about capturing everything; it’s about capturing the best stuff, having a way to validate it, being able to test a few different approaches quickly, and then having a fully managed service that allows you not only to test and validate but also to deploy at scale. Data cataloguing is an important link in having a framework where you can test quickly in a sandbox and then move to a fully managed service to deploy at scale. Those are some of the components that set apart the businesses that get really good at data from the ones that struggle with it.

RM: As you said, digital exhaust, be it transactional or otherwise, keeps accumulating, and the cost of collecting this data is a huge challenge for businesses. That’s where data classification matters. How do technologies like AI and ML make a difference in bridging this gap?

CS: This is the crossover where we help bridge not only the technology gap but also the gap between the CIO and the CFO. Traditionally, the CFO asks the CIO uncomfortable questions about big capital outlays and the ROI for the business. It is true that a lot of cost is incurred in collecting and storing data. In traditional models, we have to lay out and buy a lot of infrastructure, upskill a lot of people to run it, and so on. All of it is really hard; it is an overhead for CIOs and a point to justify to the CFO. In some large conglomerates, we work with the CFO to help them understand how to reduce the risk of investment. With this logic, CIOs can go back and tell CFOs: “What if we use something like SageMaker or AWS Glue?” These services run on AWS infrastructure, so you may be able to spin up an experiment for just a few hundred dollars. That gives the CIO the ability to run a few experiments at low cost and scale the successful ones when required. A big-bang approach that costs millions is a hard discussion to have. Technologies like machine learning and artificial intelligence are helping take that risk out of the business.

RM: That brings me to an important point. Once the case is established, how can AWS bring value by offering that scale?

CS: That’s where the three pillars – people, process and technology – come into play. Traditionally, a business can get stuck on any one of the three. For example, when we talk of scale, what would it take to go from running one model to running 100? A large payment processing company that works with us has customised fraud models deployed at different locations – 100,000 custom fraud detection models deployed across their networks. In a traditional model, you’d need a lot of data scientists to do that, which means more resources and skills, followed by more process and technology infrastructure. You’d begin with buying the right technology, setting up operations, having failovers, DR and what not. Whereas when you work with the fully managed services of AWS, it’s there and available. So when you want to go from one to 1,000 to 100,000, we have the capacity; it’s just a matter of spinning it up. Instead of taking weeks or even months, something like this can be done in minutes. A managed service like SageMaker, which helps automate, delivers performance and offers an option to redeploy those models, is a big timesaver. In this case, a few individuals might be able to do the work that was once possible only with a large fleet.
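The idea of many small location-specific models, each trained on local data but managed uniformly, can be illustrated with a toy sketch. The ‘model’ below is just a per-location mean-and-spread threshold – a hypothetical stand-in, not the payment company’s actual fraud models.

```python
from statistics import mean, stdev

def train_location_model(amounts):
    """Fit a trivial per-location 'model': the local mean and spread."""
    return {"mean": mean(amounts), "stdev": stdev(amounts)}

def is_suspicious(model, amount, z=3.0):
    """Flag amounts far above what is normal for this location."""
    return amount > model["mean"] + z * model["stdev"]

# Two locations instead of 100,000; the management pattern is the same.
history = {
    "auckland": [10, 12, 11, 13, 12],
    "sydney": [100, 110, 95, 105, 102],
}
models = {loc: train_location_model(a) for loc, a in history.items()}

# The same transaction amount can be suspicious in one location and
# perfectly normal in another -- which is why the models are customised.
```

In production each toy threshold would be a real model, but the registry-of-models pattern is what a managed service automates: training, deploying and redeploying one model per location without a proportional army of data scientists.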

The story so far was quite convincing. However, as a reality check, many corporations are still not ready to put their data in the cloud for a variety of reasons, including fear of losing control over data, data privacy concerns and vendor lock-in. They want to continue with a legacy on-premises model, but that has started throwing up enormous challenges and roadblocks. How could these concerns be addressed? I asked Craig.

RM: Cloud sounds like a very convincing model from an ROI point of view. But what’s the guarantee that this model will work amid concerns over data privacy, control and lock-in?

CS: Every CIO has had his or her own journey to get to where they are, and they probably have pretty valid reasons to believe in what they’ve done. But if we look at our core business, we run global infrastructure securely at scale. Over time, we have seen digital natives move quickly to the cloud because it made sense for them. Now you even have examples of banks running their core banking systems in the cloud. What’s important to understand is that their core business is not running a data centre; it is providing unique products or services to customers. A hundred years ago, a lot of organisations thought they had to generate their own electricity to run manufacturing or shop floors. Now that electricity is commoditised and available as a service, that sounds laughable. The same is true for cloud adoption. The competencies we’ve built – governance, auditing, multiple instances in different seismic zones – would be unimaginable for companies to build on their own. Given the scale at which we build and the security we provide, businesses, sooner or later, will switch to it.

RM: Tell us something about deep learning models. How will they address complexities like recognising images, text and sound? What is AWS doing in these areas?

CS: Deep learning presents a bigger picture of the world around us. Our artificial intelligence layer wraps a number of fully managed services around language and vision. As a business, we have been building machine learning models and techniques for decades now. How we run our own business is an exemplary story of transformation: how we design fulfilment centres, how we optimise routes for our delivery vehicles, how we make a pretty good guess of what you’re going to order, and how we make sure we stock the right things in the fulfilment centres near you are all part of our exhaustive work with ML and AI. Now there’s drone delivery: we have to make sure that where the drone lands is grass and not water. These are really important things to solve. The deep learning space will continue to evolve and solve many more complex problems. We’re being pushed really hard by demanding customers who want solutions to the most complex real-world problems – self-driving vehicles, for example. Traditionally, you want a human driving a car because machines haven’t been reliable enough. With AI/ML, we are able to do multi-spectrum light analysis. There are more cases where machine learning, and specifically deep learning, will be able to solve issues that were hard for humans to solve reliably, and also address issues of scale.

I was almost at the end of my conversation with Craig when it struck me to go back to the first example he gave: Air New Zealand. I wondered if there were similar examples of companies solving small but important problems for their customers using AWS.

RM: Do you have more examples similar to Air New Zealand?

CS: There are quite a lot, but let me mention a few interesting ones. There is a Thailand-based insurance-tech start-up with a curious name, Sunday Insurance. They do completely AI-based claims underwriting – they have zero human insurance underwriters. The process is built on multiple years of data collected from the parent company, and they run fully on AWS. With just 70–80 people, they are able to offer insurance to very large telcos and ride-sharing companies. Sunday Insurance’s CEO is very vocal about it.

Indian company Policy Bazaar uses our AI-based text-to-speech service: if users want policies read out to them, that’s what it does. They are also building the next version of their AI engine to make it easy for sellers in the used-car market. Their Cyclops engine, a deep learning-based ensemble of different neural networks, uses image recognition to identify a car’s model and trim, even from partial photos. It will even recommend taking a photo from a particular angle to increase the likelihood of a faster sale, because customers like those types of pictures.

RM: The examples you picked are mostly digital natives. Are there traditional companies to which AWS has offered AI/ML solutions?

CS: There could be many, but take banks, for instance. They have a lot of regulatory issues to handle, and there has been hesitation around the cloud. But all of a sudden they face competition from digital-only banks and fintech companies, so there is a big push for them to modernise and evolve. That’s where the adoption of AI/ML is really showing great uptake. Banks that are thinking beyond websites and mobile to create a better customer experience are opting for cognitive technologies – using AI/ML to do risk profiling of customers, for example. There are similar examples from other industries too.

By the end of my conversation, it was clear that if enterprises want to take advantage of emerging tech without the heavy lifting involved in people, process and technology, the fully managed services offered by AWS make a lot of sense.

Within AWS there is a core mission to put machine learning in the hands of every developer. It reflects the company’s commitment to build on the success of SageMaker, which is being used by over 10,000 customers despite existing for just over a year.

During his keynote, Andy mentioned that “more machine learning happens at AWS than anywhere else”, and it’s quite evident from the available use cases of SageMaker, which helps accelerate the development of machine learning models for personalisation, fraud detection and more. Perhaps that’s why the machine learning practice at AWS has launched over 200 product features and services since SageMaker launched last year.
