This session of the tech talk podcast features special guest Chris Psaltis, CEO of Mist.io, talking all things multi-cloud, public cloud, rising costs and the ‘trillion dollar paradox’ (and more!)
Nati Shalom: Hello everybody and hello Jonny, this is Nati here today with Chris from Mist.io. We’re here to talk about my favorite topic, multi-cloud, and we’ll also talk about one of the topics that has been blowing up the internet the last couple of days, or weeks I should say: the trillion dollar paradox, which really opens an interesting discussion about cloud cost and the promise of public cloud to reduce cost. Is it really a promise? Is that something that we should be aware of? So there are a lot of interesting topics to cover today. Obviously, we’ll also talk about multi-cloud, Kubernetes and the regular stuff that we usually talk about, so don’t worry, we’ll cover that as well. So without further ado, Chris, go ahead and introduce yourself.
Chris Psaltis: Hi Nati, thank you for hosting me, I’m Chris Psaltis, I am the co-founder and CEO of Mist.io. At Mist.io we’re building an open source multi-cloud management platform, and we’ve been working together for some time now. I haven’t been in Tel Aviv recently, I hope I’ll be able to do it soon, but until then this recording will do just fine.
Nati: Yes, so interestingly enough, I mean, we met through Geva Perry, I believe, a mutual friend, who I understand is still working with you in some capacity, and that was really the beginning of multi-cloud. I was personally looking into multi-cloud to measure our R&D cost, and that was kind of the beginning of our cloud journey. I think the click was right there, and then we started to collaborate around OpenStack and the OpenStack events in Tel Aviv. I must say that my first time in Greece after a very long time was actually almost a month ago, visiting an island there with my wife, which was perfect, and now I’m really eager to go back, and hopefully then we’ll have a chance to meet, not just in Tel Aviv but also in Athens. So anyone who hasn’t visited Athens, you’re definitely missing a real city. So with that, let’s start covering the topic of today. Chris, why don’t you take us through your personal journey first? How did you get into multi-cloud and the topics that we’re discussing right now?
Chris: So we started building what is now our main product in 2012; back then we weren’t a company yet. We were basically another company doing consulting for customers around the world. We were developing web applications and maintaining them as well as the underlying infrastructure. Back then people started slowly migrating to the cloud, and we had a customer in Australia in a local data center, somebody in Europe on an on-prem facility, and some people in the States on AWS. So it was already getting rather complicated, and we started building Mist in order to solve our own problem: to be able to manage all this infrastructure across the world from a single tool. That’s how we started, and at some point down the road we said, if we need it, maybe there are other people who need it as well. So that’s how the company started. But as you said, back then it was still very early days for cloud, let alone multi-cloud. Now the situation is certainly much different. I mean, over the years things have changed a lot, and practically multi-cloud moved from denial to acceptance, and I think if it’s not everywhere already, it’s going to be really, really soon.
Nati: Definitely, and it is interesting to see the evolution of that. Back in the days of cloud storage and VMs and networking, talking about cloud in general still meant you had to justify whether cloud was going to happen or not, and now we’re talking about the trillion dollar paradox, which means that it didn’t just happen; we wouldn’t be talking about a trillion dollar paradox if it weren’t so successful.
Chris: Did we get back to square one? Do you feel that we’re having this discussion that was…?
Nati: We’ll get to that. We’ll get to that. But again, I think that we wouldn’t be talking about this if the public cloud wasn’t such a big movement, and a positive one. A lot of the unicorn companies wouldn’t exist if it weren’t for cloud. The ability to scale from start-up to unicorn at the speed you can today wouldn’t have been possible before. So a lot of the credit goes first to the fact that without cloud, we wouldn’t be here to talk about this at all. But with maturity, I think we’re getting to the point at which people are looking back in retrospect and asking: does the cloud meet its promise of doing things in a way that is also cost-efficient? It does meet the promise of agility, but can we neglect the cost factor? Can we do better? As we run in the cloud, what is the impact of cost, of infrastructure cost, on our business? When we move to the cloud, a lot of those questions almost go away because the cost doesn’t feel apparent. If we were running our own data center, we would have to buy servers, so we would know the cost of everything. In the cloud it just scales, someone is giving us that continuous ability to scale, and all of a sudden we look at the bill and we’re saying, whoa, what’s going on here? So let’s talk about the trillion dollar paradox, the article that was written by Martin Casado and Sarah Wang. What do you think about it?
Chris: It’s certainly very interesting. It’s a point of view that you don’t see discussed a lot: the economics of cloud, that is, and whether there is a way to improve it or avoid it altogether. So it was certainly interesting. I’m not 100% in agreement with everything in this article, but it’s certainly very valuable for pushing the discussion forward and raising some of the things that we didn’t discuss so far, probably because we were so happy about moving fast with public cloud that we kind of forgot some of the key financial aspects of it. So overall I’m 50/50, let’s say. I’m glad that we’re discussing some things that we previously didn’t, but I’m not 100% on board with everything presented in the article, and we’ll get the chance to discuss the details as we go.
Nati: Yes, I think the interesting angles are embedded in the article itself. It is a very long one, and it does require a good amount of your time to go through the details. It talks about the cloud being cheaper at the beginning and then becoming more expensive, to the point at which, and I think that’s the interesting angle of this article, we’re not just measuring how much it costs us to run infrastructure, we’re measuring the cost of infrastructure versus the revenue. When you do it that way, because it’s a ratio of revenue and because there is a multiplier effect on the valuation of companies based on that specific ratio, the impact isn’t just the dollars that you’re spending on the cloud, it’s the profitability, and therefore the multiplier in terms of valuation. So it did some interesting extrapolation of the numbers that makes the impact look much bigger than it would if we were just looking at the dollars of spending on cloud, which would probably be in the millions. The fact that it measures cost against company valuation, that was an interesting angle that I haven’t seen before, and that’s an interesting exercise.
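To make the valuation argument concrete, here is a hypothetical back-of-the-envelope sketch of the mechanism described above. All the numbers, including the multiple, are invented for illustration and are not from the article or the episode:

```python
# Hypothetical back-of-the-envelope model of the "multiplier" argument:
# cloud spend eats into gross profit, and because software companies are
# often valued on a multiple of gross profit, each dollar of cloud spend
# "costs" far more in market cap than it does in cash.
# All numbers below are invented for illustration.

def valuation_impact(revenue, cloud_spend, gross_profit_multiple):
    """Return (share of revenue spent on cloud, implied market-cap impact)."""
    margin_drag = cloud_spend / revenue
    # Every dollar of cloud spend reduces gross profit by a dollar,
    # which the market then multiplies when valuing the company.
    market_cap_impact = cloud_spend * gross_profit_multiple
    return margin_drag, market_cap_impact

drag, impact = valuation_impact(
    revenue=1_000_000_000,     # $1B annual revenue
    cloud_spend=200_000_000,   # $200M annual cloud bill
    gross_profit_multiple=10,  # assumed valuation multiple on gross profit
)
print(f"Cloud spend is {drag:.0%} of revenue")
print(f"Direct cost $200M, implied valuation impact ${impact / 1e9:.1f}B")
```

The point of the sketch is only the amplification: a $200M bill reads as a $2B valuation question once it is treated as a ratio of revenue rather than a line item.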
The other thing is that they put a number there which says you save 50% if you’re taking workloads off cloud. Obviously, I think that part is not necessarily accurate for everyone; it probably reflects some average across certain workloads, I’m assuming. So that’s another thing to keep in mind about the assumptions. Again, I must say that a lot of it is based on real data, but as statistics go, what really matters is not just whether it’s real data, it’s what type of data you’re looking at and how well that data represents the rest of the industry, or a certain part of the industry and a certain part of the workloads.
Chris: That’s part of the critique I’ve read, and I had it myself: who is the model? Who is the example here? If Dropbox, or companies similar to Dropbox, is the example, can this be considered your usual use case, or is it more of a black swan? So this is part of what somebody needs to consider while reading this article: does your organization look more like Dropbox or like something else? And obviously, on the market cap discussion and all that, I’m no expert, so I cannot really comment. But I can safely say, as you brought up earlier, that maybe something is being lost to public cloud right now in terms of market capitalization, but do we ignore everything that was possible because of cloud, and everything that companies are gaining because they are on the cloud? So it’s a mixed thing: you’re losing something, you’re gaining something, and my feeling is that you’re gaining much more than what you’re losing. But it’s really hard to prove that by pulling numbers. So yes, I would generally recommend some caution. I mean, the main problem with this article is having people go back to their CTOs and CIOs and say, hey, look, we’re leaving a lot of money on the table, let’s repatriate everything, let’s put everything on-prem again. I think this is the wrong way to read this article and I’d like to point that out.
Nati: Yes, I think they’re also trying to state that this is not what they meant. What they’re basically saying is: we’re not telling you to take everything off the cloud, we’re trying to open your mind to this factor, and cost should be one of the key KPIs, the key performance indicators, alongside the other KPIs that you’re being measured on, and not secondary as it is in many cases, because people treat optimization as an afterthought. They say, we’ll deal with that later, because we’ve been measured on how fast we can deliver stuff and we’re not really measured on how efficient we are with the way we’re running things. That’s kind of the reason why we’re in this discussion: efficiency is something that in many organizations is not well thought of and not even incentivized; the people within the organization are not really incentivized to think about it.
Chris: This doesn’t apply just to cloud. I mean, cloud was never supposed to be about cost savings; I don’t remember anything like that. Even in the very early days, it was supposed to be about speed, agility, and doing things faster, practically. Yes, at some point it was also about CapEx versus OpEx, but that’s probably gone nowadays; now you can get a server from Dell or HP or whoever in an OpEx way. But it was never about raw cost. It was never about, oh, this server will be cheaper than buying, let’s say, a refurbished machine and putting it in your office. And efficiency applies both at the cloud level and in your on-prem data center, or whatever other type of infrastructure you might have. Building a data center is not cheap; you might be spending, I don’t know, tens of millions of dollars. If you’re not utilizing it, aren’t you leaving money on the table? But then you also have to deal with the fact that, as you said earlier, a data center is probably much easier to understand in financial terms, because you buy the metal, you pay the bill for electricity and all that. With cloud, it’s much more complicated and much harder. Usually, you don’t have an idea of exactly what you’re going to spend in the coming billing period. So in the case of public cloud, efficiency became more important because it’s baked into the pricing and the cost model of the cloud itself. If you’re not efficient from day one, you will soon notice a problem with your bill. It’s not so immediate in the on-prem case.
Nati: Yes, and again, the recommendation there is saying, okay, if you’re not efficient, this could touch your bottom line, and as you said, this is true not just for the public cloud, it’s also true for any other option. In this case, I think they’re giving a discount to the off-cloud workload, assuming that it is very optimized, whereas it takes time to get to the point where costs are optimized to the level where they can actually be 50% lower than the cost of public cloud. The other thing that I think is... yes.
Chris: So just to stop you here, I think this is another point that’s probably not very well illustrated, and it has to do with the workload itself and the lifecycle of the workload, or the application, or the service, call it what you like. When you get to a point where you can bring an application back from the cloud to on-prem, it’s probably during the late stage of its lifecycle. You are at a point where you understand very well how the application operates, what is required in terms of resources, how the needs evolve over time, whether there are any periodic spikes, and so on. During those late stages in the lifecycle of your workload, such migrations, off cloud or to the cloud, are practically better because you understand the baseline, you understand what’s involved, you understand how the application is running and what’s required to run it. So when the time comes to move it on-prem, you don’t have to over-provision like the 100 extra large instances you were running on AWS; you can do it on something smaller, and you know what’s expected, so you can respond. This is something that I believe must be taken into account when you’re planning something like that. So I would say that during the early stages of your workload’s lifecycle, it’s probably better to keep it on cloud, so you can move faster, iterate, and make architectural changes. But then as time goes by and you learn more and more about the workload, at some stage during this process it becomes easier and might even make sense to take it off the public cloud and move it on-prem.
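One rough way to frame the late-lifecycle repatriation decision described above is a break-even calculation: up-front hardware spend plus monthly operations versus the steady-state cloud bill. This is a hypothetical sketch with invented figures, not a real costing model:

```python
# Rough break-even for repatriating a stable, well-understood workload.
# Cumulative cloud cost grows as C * t; on-prem is capex up front plus
# ops per month: H + O * t. The curves cross at t = H / (C - O).
# All figures are invented for illustration.

def repatriation_break_even(monthly_cloud_cost, hardware_capex, monthly_ops_cost):
    """Months until cumulative on-prem spend drops below cumulative cloud spend.

    Returns None if the cloud stays cheaper (ops alone exceed the cloud bill).
    Assumes a steady workload with no growth or spikes -- exactly the
    'late lifecycle' condition where the baseline is well understood.
    """
    monthly_savings = monthly_cloud_cost - monthly_ops_cost
    if monthly_savings <= 0:
        return None
    return hardware_capex / monthly_savings

months = repatriation_break_even(
    monthly_cloud_cost=100_000,  # steady-state public cloud bill
    hardware_capex=1_200_000,    # servers, networking, racks, paid up front
    monthly_ops_cost=40_000,     # power, space, staff
)
print(f"On-prem breaks even after {months:.0f} months")  # 20 months here
```

The model only makes sense once the baseline is known; for a workload still growing or spiking unpredictably, the capex term itself is a guess, which is Chris’s point about early-stage workloads.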
Nati: Yes, so if I summarize, the point you’re making, which is also one of the assumptions, but not necessarily a generic one, is about the workload itself. The cloud was built to be general-purpose infrastructure, and therefore it’s not necessarily optimized for all types of workloads. But if you know your workload and it is very, I would say, specific, you could definitely create something that is more optimized. Usually that happens at the stage at which you have a lot of data on how you use this workload, and only then can you optimize, which is not necessarily a deficiency of public cloud; it’s more about the maturity of your own company or business. In that case, by all means, go ahead and optimize that specific workload. It does not necessarily mean that everything in a public cloud is going to cost two times what everything would cost in your on-prem environment, which is how the article can be read even though they’re trying to make it clear that this is not the claim. So I think Martin and Sarah emphasized the impact of infrastructure cost and provided that interesting angle, but we need to read it with a grain of salt and not read it as: you need to get off the public cloud. That was not the intent.
The other thing, at least when I try to analyze it: one of the recommendations is optimize, optimize, optimize, and automate the cost optimization. But I think you and I have enough time in the industry to know that this is not happening all the time. Yes, people are putting automation in place, but they’re still not necessarily seeing the value of automation, definitely not in cost. Why is that?
Chris: Because it’s hard, and not all problems can be solved with automation. Automation is not the answer to every problem. Automation is something that happens at a stage in the maturity of your team where you know what it is that you’re doing, you have done it manually probably 1,000 times, and now you try to automate it so that it works. But it brings an additional layer of complexity, and if you haven’t been doing anything relevant to cost so far, how can you automate that? There’s no magic bullet here; the automation that you build is a mirror of your organization’s experience. If you don’t have the relevant experience, just answering every problem with automation won’t solve anything; it will probably be much harder to get started, at least initially. So it’s more about your team, the experience they have and the practices they currently follow, rather than a problem with automation itself.
Nati: Yes, so basically what you’re saying is that automation in itself is an engineering practice. What I’m seeing over and over again is that people look at automation and say, okay, we’ll put the DevOps team on it and they’ll figure it out. It’s not that common to think about it as a well thought out engineering exercise. It’s iterative: you build templates and scripts, they evolve, and at some point you find yourself very inefficient with them and it’s very hard to change. So automation is an engineering practice and should be treated as such, very much like any other piece of your code. The second part you touched on is that automation is a mirror of the organization, which is a very accurate statement. I would say that in the same way that you can automate manual stuff, the downside is that you can also automate the wrong thing much more easily. Automation doesn’t know what is right and what is wrong; at the end of the day, it’s a tool, and you can use it to spin up VMs, and you can spin up VMs and never kill them, and the cost will just go up. With Kubernetes specifically, I hear it a lot: people start the journey with Kubernetes and all of a sudden I hear them coming back saying, what happened here? We thought that going with Kubernetes was going to be much more efficient, but we’re finding that the number of VMs and the workload we have to put on the platform continuously grows, and much faster than when we were working with VMs.
Chris: This is a common misconception, I think, with Kubernetes, or not exactly Kubernetes itself; it’s a common misconception with higher level and more complicated systems like Kubernetes. They are presented as the solution to every complicated problem, but what people tend to omit is that Kubernetes is a hugely complicated platform by itself. It will change so many things that you’re currently doing, if you’re coming from another world, that it’s going to be really, really hard to stay on top of everything, if not impossible. So yes, it can be more powerful and it can be really efficient, but getting there is a journey; it won’t happen overnight. To begin with, you will have to deal with the overhead of a new type of complexity that was just introduced into your stack: new tooling, new ways to do monitoring, new ways to do provisioning. Automating the provisioning process of something is pretty much straightforward, but to get back to your example earlier, I’m guilty as charged. The easier it is to create new resources and spin up new stuff, the more you will do it. But does that mean you’re doing it in an optimal way? Probably not. So there needs to be some sort of fail-safe, let’s say, where you actually provision what you need when you need it, and you don’t just do it because it’s easy; you do it because it’s really needed. So yes, it’s a really hard problem to solve.
Nati: Great. So we talked about the thesis on cost and why it can be an important factor. We also talked about the fact that the thesis itself is not necessarily generic and does not apply to every workload. And we talked about one of the main conclusions that comes out of it, optimize, optimize, automate, automate: that it’s not a silver bullet and not a magic solution. You have to be very careful about how you do it, and the tools for doing it can work against you. In the same way that the tools were meant to optimize things, they can actually cause you to spend more than you originally spent. Kubernetes is a very good example: people went from VMs to containers to Kubernetes, which for the exact same workload would run much more efficiently. But the trend was that because it’s easier to spin up a container than a VM, you end up spinning up much more workload than you really need to, and therefore your total spend can actually grow.
So again, the main lesson from all that, I think, is that the cloud doesn’t save you from the need to take ownership of your own stuff and your own business, and if you think there’s a magic thing here, you’re approaching it the wrong way. Now that brings me to another article, one that you wrote about multi-cloud misconceptions. Why is that relevant to this discussion? Because if I take the main lesson from the cost article, it is that if you really want to optimize per workload, you shouldn’t run everything in one public cloud, which means a multi-cloud or hybrid cloud model. No matter how you look at it, that’s the main conclusion. The main takeaway is that multi-cloud is a key component in cost optimization, because then you can separate workloads between a generic environment, meaning the public cloud, and a specialized, highly optimized environment, meaning something else, whether it’s on-prem or elsewhere, that’s less important. And that kind of brings me to the; yes, sorry.
Chris: Or it can be just another cloud. I mean, not everything costs the same on all cloud providers, and performance is not the same on all cloud providers either. So it’s pretty realistic to assume that in some cases some applications might be better on public cloud A, and another application on public cloud B. So the whole optimization thing that you brought up is indeed very strongly connected to the whole multi-cloud space. And I use multi-cloud here in a more general way; by multi-cloud in this case I mean something like a mix of infrastructure technologies, whether public cloud, on-prem, or even different services from a single public cloud vendor. So multi-cloud in this situation is more about: how do I mix and match the right infrastructure technology for the needs of the workloads that I’m running, in order to achieve some tangible business outcome? In this case, it would be something like improving my market cap, but it could be something totally different as well. You’re not trying to solve a problem in a lab environment just for the sake of it. You’re trying to solve a problem in a business environment, where whatever you do must have an impact on the bottom line of your organization, whatever you consider to be your big KPI or the metric that you’re chasing.
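Chris’s point that different workloads may be cheaper on different providers can be framed as a tiny placement problem. A toy sketch; the provider names, resource classes, and prices are all made up (real pricing varies by region, instance family, and commitment):

```python
# Toy workload placement: pick the cheapest provider per workload.
# Provider names and prices are invented; real pricing differs by
# region, instance family, discounts, and egress patterns.

PRICE_PER_UNIT = {           # $/hour per "unit" of each resource class
    "cloud_a": {"compute": 0.045, "gpu": 2.80, "egress_heavy": 0.12},
    "cloud_b": {"compute": 0.050, "gpu": 2.10, "egress_heavy": 0.09},
}

def place(workloads):
    """workloads: {name: (resource_class, units)} -> {name: (provider, $/h)}"""
    placement = {}
    for name, (resource, units) in workloads.items():
        provider = min(PRICE_PER_UNIT,
                       key=lambda p: PRICE_PER_UNIT[p][resource])
        placement[name] = (provider, PRICE_PER_UNIT[provider][resource] * units)
    return placement

result = place({
    "web-frontend": ("compute", 100),
    "ml-training": ("gpu", 8),
    "video-cdn": ("egress_heavy", 500),
})
for name, (provider, cost) in result.items():
    print(f"{name}: {provider} at ${cost:.2f}/h")
```

In practice the decision also weighs data gravity, egress between clouds, and team familiarity, which is why the "tangible business outcome" framing matters more than the raw price table.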
Nati: So what are we saying to people, especially after this article to people who are saying, I don’t need multi-cloud, Amazon gives me everything or Google gives me everything or Azure gives me everything?
Chris: I think they should keep their eyes open. I totally get the concerns people have. I mean, if I’m already having trouble running my applications on a single cloud, imagine running them on multiple clouds. So it’s very easy to understand why people try to avoid the complexity, but in some cases, in order to drive bigger outcomes, it’s practically inevitable, and probably you’re already running multi-cloud without even knowing it. So instead of trying to avoid it, I would recommend embracing it, but doing it in a conscious way: can I improve something by going multi-cloud or not? If I can, let’s try it. It doesn’t have to be something big; starting small is the best way to do any project. So start slow and see.
Nati: Let’s try to analyze it. We’ve been in this space for a while, almost a decade or even more, and at the beginning, and for a long time, multi-cloud was just a buzzword. No one was actually doing multi-cloud, or maybe they were doing a little bit of it, but mostly edging around it. All of a sudden, I’m starting to see almost every start-up having some sort of multi-cloud strategy and talking about it, and definitely the unicorns. Why is that? What changed? Because at the same time the cloud became richer and gave you almost everything, why is multi-cloud becoming more widely accepted and adopted than it was in the past?
Chris: I think it’s just a matter of history evolving and moving forward, and people maturing and understanding the cloud. Multi-cloud was there since the very early days; that’s why we started doing Mist, by the way. So it wasn’t science fiction or anything. You had people back in 2010, I guess, who were facing multi-cloud challenges already. It wasn’t a very common issue then, but over time it has become one. People are using more cloud than ever, and cloud adoption continues to rise. So we get situations where companies have to combine services on different providers for several reasons, and now we also have the level of maturity, back to the article we were discussing earlier, to put down the numbers and ask: does it make more sense than running it on-prem or not? As companies become more mature and gain experience working with a cloud operating model, such questions arise, they need to do something about it, and part of the answer is multi-cloud. Now, what exactly multi-cloud means in each case might differ; I mean, the way each company uses it. It could be, for example, a multi-cloud application, which can run on Google, Azure, and AWS without any problems. But there could also be some higher-level multi-cloud infrastructure system, like Kubernetes, where you can run nodes on different cloud providers. So multi-cloud can include lots of concepts, and they don’t all mean the same thing, but a multi-cloud strategy is really needed right now when bringing something to the market, because people are doing it and they will keep doing it. So you need to have an answer for that.
Nati: So in your view, what role do Kubernetes, Terraform, and Ansible, for example, play in the adoption of multi-cloud?
Chris: So let’s begin with the lower level, which to me is the Kubernetes example. Kubernetes is practically something that we were lacking back in the hypervisor days. We wanted a way to deploy applications in a replicable environment across any platform. Back then, we would probably have asked for a hypervisor that can run anywhere, plus some packaging system on top, and that’s what we ended up with in containers. That was the problem they were trying to solve. Kubernetes comes and sits on top of your existing infrastructure, whatever that is, and then you just have to deal with the Kubernetes API: one interface, several cloud providers underneath it, and that becomes, let’s say, your hypervisor from the old days. So Kubernetes is your platform. Is it a good idea to run it across multiple clouds? I don’t know; it doesn’t sound really good, to be honest, and in some cases it might cause more trouble than it solves, but it is certainly a possibility, and it’s a way to achieve multi-cloud by doing practically nothing, just installing Kubernetes.
Nati: Exactly, exactly. In my view, Kubernetes played a major role in abstracting a lot of the complexity, especially once it got embraced by the public cloud providers themselves, because Kubernetes itself is still very complex. Once it became a managed service, EKS, AKS, GKE, that removed a lot of the barrier that I’ve seen, for example, limit OpenStack’s adoption. OpenStack had a very nice promise and a lot of nice features, but getting OpenStack to run was a nightmare, and if Kubernetes had stayed at the level where you have to build Kubernetes yourself to run it, I think it would have ended up in the same place as OpenStack. The fact that the public clouds embraced it and made it a managed service that you don’t have to think about when you want to run it removed a huge amount of the barrier. And now, interestingly enough, I’m starting to see models that I didn’t really think of before. For example, with spot instances there are products, I forget the exact names, that let you run spot instances across multiple clouds under Kubernetes, so Kubernetes becomes some sort of abstraction over multi-cloud. And if you go to Amazon, you can run a serverless engine behind Kubernetes and kind of abstract that as well. There are many different ways in which Kubernetes allows you to use a lot of the cloud resources without being tied to them.
So that’s, in my view, a huge change that made the barrier to entry much lower, and therefore we can start to see more and more companies that are not necessarily large enterprises embracing it. Another thing, and that’s something I wanted to ask whether you’re seeing as well, is the drive from regulation. As enterprises move to the cloud, and this applies especially to SaaS companies, customers have become very sensitive about where the data is running. If you were a SaaS company before, say even Salesforce, you’d say: yes, I’m running on Salesforce, Salesforce manages all my data, my customer data, whatever. Today, that’s not enough. Today customers go to Salesforce and ask: where do you store the data? You run it on your own cloud? No, no, I want it to run on Azure; I want my data to be on Azure. So what I’m starting to see is that SaaS vendors and public clouds are now being forced to support this kind of multi-cloud because of regulatory concerns, because of those types of things. It’s not necessarily a choice, that’s what I’m basically saying. It’s not always a choice; sometimes it is a business decision, and especially in the SaaS world that is happening right now.
Chris: In heavily regulated environments this was always a big issue, and especially recently it has become even bigger, in the EU specifically. So this has certainly gained a lot of traction, and on one hand, I think it influenced cloud providers themselves to be more open and adopt more open source technologies, let’s say. But that’s only part of it. I think the other major part is for cloud providers who are lagging behind the competition to be able to bring in workloads more easily. So I think there are two main things driving this change as we speak. One is heavily regulated environments, where it is very critical where you keep your data, where you process your data, and where your applications are running, and there are more examples of these than you might think. We usually think of banks, financial institutions, et cetera, but nowadays any sufficiently advanced IT company in the EU pays a lot of attention to those things, even if they’re not doing financial services themselves. So it’s certainly growing in terms of adoption, and this is driving all the changes on the multi-cloud side that you mentioned, because Azure can be anywhere, but at the end of the day it’s still a US company.
So yes, those changes are happening really, really quickly and they will certainly drive the way the landscape looks down the road. That’s also where Kubernetes plays in, because if you standardize against this more general, high-level interface, then it would probably be better to just run Kubernetes even in an air-gapped environment, because your team already knows how to do it. They already have the experience, they already have the tools. But then the question becomes, do I move everything to Kubernetes, or what happens to my mainframe? Do I need to adopt Kubernetes just because it offers me this multi-cloud layer? Do I need everything else that comes with it? I mean, do I really need Kubernetes just for the multi-cloud interface, or am I planning to take advantage of more than that?
Nati: I’m going to get to that in just a bit. First I want to summarize what we’ve discussed so far. We started with the extreme use case: a company that scales and therefore finds itself paying a lot of money to the public cloud. That led to the point that if you want to optimize once you’re at scale, you probably shouldn’t run everything in a public cloud; you need to be selective about the workloads you can optimize and run them in a specialized environment to reduce cost. That, I would say, is a very advanced case and applies to companies at extreme scale that have a set of repeatable workloads they can really optimize much better than the cloud can. Then we talked about multi-cloud in a more general sense, and what brought more companies to adopt multi-cloud even without such an extreme case, and I think we got to the point that the barrier to entry to run on multi-cloud has dropped: with the introduction of Terraform, with the introduction of Kubernetes and those tools, and with the fact that the public clouds themselves, not just the tools, now provide managed versions of them, it became an option that a lot of companies can adopt, because the toll of doing it is not that expensive.
Then we talked about regulation and other business factors that now force companies to work in that way. So it’s not necessarily a choice; it’s something you’re almost forced into as part of your business, and that, I think, is a summary of the trends or the different drivers of multi-cloud. We also talked a lot about the SaaS companies’ drive for multi-cloud, which is, again, a business requirement and slightly different from enterprise multi-cloud; the driver there is different. That gets me to the technical bit of this discussion. I know that you’ve been running Mist.io as a CMP, and in that first segment we at Cloudify were in the orchestration space, and I think I can say in all fairness that Terraform taught us a lesson, a different way to do multi-cloud. So where do you think Terraform succeeded and CMPs failed in their approach to multi-cloud?
Chris: I don’t think in black and white, to be honest. I don’t know if it’s because I’m Greek or not, but I don’t think in black and white. I believe there are a lot of interesting shades of gray in between that you need to explore, and that’s where the really strong lessons probably are. If you just think of it like, oh yes, this succeeded and this failed and this changed, blah, blah, blah, you’re probably missing some context. So I think multi-cloud has evolved in a couple of ways at different parts of the stack. At the lower level of the stack you have something like Kubernetes, which you could say is multi-cloud out of the box: no need for Terraform, no need for anything else, you just have some YAML to worry about, and that’s it. Then you have a second layer, slightly higher than that, which is where Terraform and everybody similar to Terraform, like Cloudify or Ansible, live, where you have this abstraction of your workflow. So yes, you have full access to the underlying infrastructure, but the workflow is the same: you still use the same tools, the same workflow, everything works out of the box.
So, if you need to change platforms tomorrow, you go back to your templates and your blueprints, and you just change the relevant parameters. Unfortunately, this is not that easy either, because every platform is not the same. There are platform-specific things that you need to take into account; for example, networking: each cloud provider does networking in a totally different way. So even though you have the workflow sorted out, you still need to learn about the inner workings of the platform, the cloud that you’re using. And then you also had the CMP layer, which until recently was mostly about integrating with the native APIs of whatever cloud you were using, and then the limitation becomes, okay, so what do you do? Do you abstract everything and generalize, and say this is what a network looks like, this is what storage looks like, this is what a VM looks like across all the providers I’m supporting, or not? And if you do this type of generalization, what information are you losing? Across these three options that I’ve described, there were mistakes everywhere.
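To make the generalization trade-off concrete, here is a minimal Python sketch of what a CMP-style normalizer does. All class and field names here are hypothetical, and the raw payload only loosely imitates a cloud API response: the provider-specific record is mapped onto a lowest-common-denominator model, and anything without a generic equivalent is parked in a catch-all `extra` dict, which is exactly the information you risk losing or ignoring.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class GenericNode:
    """Lowest-common-denominator view a CMP might expose for any cloud VM."""
    id: str
    name: str
    state: str                  # e.g. "running", "stopped"
    public_ips: list
    extra: dict = field(default_factory=dict)  # provider-specific leftovers

# Fields the generic model knows how to represent (hypothetical names).
GENERIC_FIELDS = ("InstanceId", "Name", "State", "PublicIpAddress")

def normalize_instance(raw: dict) -> GenericNode:
    """Map a provider-style instance record onto the generic model.

    Anything with no generic equivalent (tenancy, placement groups, ...)
    ends up in `extra` -- the information-loss trade-off discussed above.
    """
    return GenericNode(
        id=raw["InstanceId"],
        name=raw.get("Name", raw["InstanceId"]),
        state=raw["State"]["Name"],
        public_ips=[raw["PublicIpAddress"]] if raw.get("PublicIpAddress") else [],
        extra={k: v for k, v in raw.items() if k not in GENERIC_FIELDS},
    )
```

The design choice is visible in `extra`: a CMP can either surface those leftovers per provider, breaking the clean abstraction, or hide them, losing capabilities the underlying cloud actually has.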
Nati: I think if I need to analyze Terraform’s success, it starts with identifying the main user, which is the DevOps person and not the IT person, whereas CMPs, I think, were really targeting IT. Terraform started with the DevOps people, and for a DevOps person the CLI is the interface today, not the console, not anything else. They realized that very early on and created an interface that caters to that. It is very optimized for that use case, for that user running multi-cloud as part of the CI/CD pipeline, not as an IT operation where you need to spin up environments and so on.
Chris: Do you mean that they took the user out of it?
Nati: Almost, almost, and even if you think about CMPs, we were kind of in the middle between the two worlds. That’s another discussion, but I think the key point is, if you look at what makes a technology successful at the end of the day, it’s really understanding the user and catering to the end consumer you’re targeting in a very accurate way. That’s really the difference. Even Facebook, when you look at Facebook, why they succeeded is because they realized that their consumers, at least at that time, didn’t like advertisements. It’s funny to talk about it today when all they do is advertisement, but the key reason they succeeded is that all the other competitors at that time were running on the advertising business model and they were not, and therefore they were able to cater to a billion users and not just millions, and the rest is history. It’s similar, I think, with Terraform: the user is in CI/CD, and the user therefore needs a CLI, not a UI. They didn’t have any UI, they didn’t have anything along those lines, just a command-line interface, and that nuance, that accuracy about the user, was a key factor in their success. It plugged into CI/CD very nicely, and it was driven as code, as developers were accustomed to, not through a UI, and that maybe explained Terraform’s success then, and still explains their success now.
Chris: I totally agree with that. The user interface, let’s say the user experience, was great; it was a specific tool doing a very specific job in a very efficient way. Now, did they solve the general problem? No, but they had a lot of success, and they still do, obviously, within this context. Now the question becomes, what do you do to solve the general problem? Can you grow something that’s so focused, so well done and so efficient to cover everything else as well? That’s where I think the interesting next step for multi-cloud will be. How do you combine all those different layers, the CMP, the configuration, the orchestration, and even the virtualization layer, and I could add that there are some new-generation hypervisors coming out. How do you combine all of that in a way that makes sense and is relevant to your engineers, but also relevant to your management, who need to keep track of your metrics? That’s the great thing about the space, and why it’s still interesting, to me at least, after so many years.
Nati: Yes, I think what we’ve seen is that these tools played, and still play, a very important role in reducing the barrier to entry and enabling that jump. To your point, what we’re starting to see right now is the next evolution. What’s coming up next after Kubernetes and Terraform, what are we looking at? When I look at that, I think there are two main vectors driving that next thing. One of them is the workload itself, which is becoming more machine learning, AI, all those things that are highly distributed and very different from, I would say, the regular web application that we used to have in the past. The second part that I think is driving it is people, because of that complexity. I’m starting to see many companies saying Kubernetes scaling is complex. Why? Because you have to actually program it, tell it how many replicas to run. And again, it’s all relative, because scaling before Kubernetes was much more complex, but why are they saying it’s complex? They compare it to Lambda services, in the case of Amazon, or to other technologies that hide that complexity and already have mechanisms to provision infrastructure and manage scaling out of the box. So the question becomes, in the world we’re moving to tomorrow, serverless, where everything is a service, am I still going to have the same need for Kubernetes, or am I just going to run everything serverless and consume the rest as services?
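The “tell it how many replicas” point can be sketched in a few lines of Python. This is a toy HPA-style rule, not any real Kubernetes API; the function and its parameters are purely illustrative. The point is that even with autoscaling, the operator still has to choose the per-replica capacity and the min/max bounds, which is exactly the knowledge a Lambda-style service hides from you.

```python
import math

def desired_replicas(requests_per_sec: float,
                     capacity_per_replica: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Toy autoscaling rule: enough replicas to absorb the current load,
    clamped to bounds the operator must still pick up front."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# With Kubernetes-style scaling, choosing capacity_per_replica and the
# min/max bounds is your job; with serverless, the provider hides this
# whole calculation behind the invocation model.
```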
Chris: Yes, it’s really hard to say because it depends on the use case a lot. So it’s really, really hard to say. I can safely say that even if you don’t know it, Kubernetes will probably be running somewhere in the background, like VMs are still running in the background somewhere, like bare metal is still running in the background somewhere. It’s not that they went away; they were just hidden underneath a higher level of abstraction and maybe a simpler user interface, because all those serverless frameworks need some sort of container orchestration layer to run on. So there is still this thing running in the background and somebody is running it for you. It’s not you right now, but there’s somebody there, and if you would like to replicate this experience on premises, then you would need to do the same: you still need the bare metal and the cables and the VMs and Kubernetes, and then Knative on top. So it didn’t go anywhere, it’s still there. Maybe it’s not as visible, because you can get it as a service instead of hosting it yourself, but it’s still there and I think it will be around for quite some time. I don’t know what the next big thing will be, if it’s serverless or if it’s something else, but in any case, I’m seeing that the rate of innovation is becoming much, much higher. We went from mainframes to VMs in 20 or 30 years, and then we went from containers to Kubernetes, serverless, managed Kubernetes, fully managed Kubernetes, managed containers and I don’t know how many different kinds of services, in less than eight years.
So it’s accelerating a lot, and it’s really hard to tell what the outcome will be. I can safely say that it’s not winner-takes-all, because each organization is different, each workload is different, with different needs. Cloud is not a commodity. You cannot just take one kilo of cloud from Google and one kilo of cloud from Amazon and then compare them; that will never happen. Since it’s not a commodity, and there is differentiation between all those services and different needs among their users, the possibilities are practically endless. Yes, there might be some predominant paradigm, some let’s-start-from-there kind of thing, but in general I’m sure the complexity will increase a lot. So from the end-user’s perspective, it would be better to just prepare for it and start building the relevant skill sets internally, rather than fight it or bet all your money on a single platform that you think will win.
Nati: Yes, I think it touches on your comment to me earlier, when I asked you about the Terraform article, that it’s not a black and white thing, and that this too is something to look at in grayscale.
Chris: We like to argue, so we see things differently.
Nati: For me, it touches on a sentence that I use a lot, which is that the only constant is change. When we look at serverless versus Kubernetes versus Terraform, we try to think about it in black and white: Docker will replace VMs, Kubernetes will replace VMs, Kubernetes will replace Swarm, by the way, which you can still find running in different places, and now we’re saying serverless will replace Kubernetes, and things along those lines. The reality is, if you really put a probe into almost every company, you’ll find that they’re not running on one platform; you’ll find that they have legacy things and new things and existing things. That’s always going to be the case, and I think the main advice is to realize that this reality of having many tools and many different stacks will not change, and therefore you need to start thinking about how you manage things in this continuously evolving environment, rather than trying to jump from one technology to the other, which is almost impossible.
That leads to, I’d say, the last point in the trends around multi-cloud that I wanted to discuss, and that’s the public clouds themselves: they’re starting to see the demand for multi-cloud. Obviously they’re not standing still, and we can see Google coming out with Anthos and Azure coming out with Azure Arc. They’re still in their infancy, I think, in terms of maturity and how hard they’re pushing them, but the providers are realizing that multi-cloud is in demand and they want to be part of it. They don’t want to be marginalized by multi-cloud. What do you think about that trend?
Chris: Yes, it’s very interesting, because it kind of changes the whole story around what a public cloud is. It practically changes the term single-handedly. You could think of a public cloud as a data center running somewhere in some other company’s building, and now you have this box inside your office, in front of you, running the same API that AWS is running. So is this a small public cloud on-prem? Is it multi-cloud, as you said? Is it edge? What is it? I think it was a very, very interesting move, and I’m expecting a lot of investment in the future around those technologies, practically bringing the public cloud as close to you as possible. So yes, that’s certainly here to stay, if not to get even more complicated and bigger; all of those offerings have started bringing Kubernetes onto those devices as well, and they are bringing more services and more APIs to them. At some point it’s going to be, if not a 100% compatible environment, then one where most of the services that matter to you on a daily basis will be there, and this will greatly impact the way we think of hybrid, multi-cloud or whatever. How will your perception change if you can have a small AWS running next to you? Will this push you to become more single-cloud versus multi-cloud, for example, or will it just serve a specific use case while you still have everything else running in parallel on other platforms?
So I believe this might complicate the situation a little bit, but it doesn’t change the basic premise of multi-cloud, which is getting the best tool for the job. Now cloud providers can offer me something on-prem where I don’t have to worry about data issues, for example, or worry so much about latency, but does this specific AWS service do the trick for me? That question is still there. It might be easier to just buy a rack, bring it on-prem and run the same APIs, but does it solve the problem, or is it still better to buy from somebody else? So I don’t think it practically changes anything. It gives you more options, which means more complexity and more things you need to think about, but at the end of the day I don’t see any real risk to, or setback for, multi-cloud from these products.
Nati: Yes, I think there’s another angle to this. We started the discussion about who needs multi-cloud, and you and I have had a lot of those discussions, and obviously the public clouds themselves have no incentive to do multi-cloud unless they see a demand for it and believe that they would lose business if they don’t do it; otherwise it’s cannibalizing their own business. So the fact that they recognized the need and are putting investment in place to drive it is, first of all, an indication that the demand is there and it’s very solid. It’s not debatable. Whether the approach that they’re taking, specifically with Anthos, which is very Kubernetes-centric, will be successful, or Azure Arc, which I think is a little more open in terms of its architecture, the jury’s still out on those options and how they’re going to do it. But the fact that, broadly speaking, the public clouds are becoming hybrid cloud providers is a huge change in the game, and I think that opens a lot of interesting options that weren’t there before.
I would say, in my view, the key takeaway that I take from a lot of those discussions, and this one specifically, is that there’s still a need for you as a business to own your workload and be careful about how much you tie yourself to a particular cloud. There will be cases in which you’ll be willing to pay the cost of coupling, because the value of a particular component is high, but you need to be very careful with those choices, because very quickly, as you grow, that stickiness is going to come back to haunt you, and obviously the public clouds will always try to make you stickier, even when they’re offering you a multi-cloud strategy.
Chris: The discussion around vendor lock-in is probably an episode by itself.
Nati: Exactly. Exactly. So to your point, it’s not black and white. In the same way, not all workloads need to be decoupled; some workloads can be coupled, but you need to be very, very careful before coupling a workload that is critical to you to a specific cloud, because it’s going to come back to haunt you. And back to the original point of the discussion, where we talked about the paradox, that is the classical case: what happens when you scale? You have to think about it early on, and don’t be naive about it, saying yes, I’ll grow and then I’ll worry about it. There are more options right now to do it the right way early on than there were before, and it is not as complex as it used to be. But you have to plan for it, you have to be ready for it.
Chris: It doesn’t have to be a very, very intensive effort. The way I usually put it is: when you enter a room, which in this situation is, let’s say, the cloud room, look for the exit. Where’s the exit? Is there an exit? You don’t need to open the exit door, you just need to check that it’s there and that it looks like it can open. I mean, I don’t know how good a window is, but it’s an option, take that into account as well. Obviously, as you progress and you improve the architecture and the application and all that, this will become more interesting and more of a requirement, but at least initially, when you get in, just look for the exit. If there’s a real exit there, you’re probably okay.
Nati: Excellent. Well, I think that’s an excellent summary of a great podcast and great discussion. I’m very happy that we had that. Thank you very much, Chris. I hope the audience enjoyed it as well, as much as I did and next time in Athens, right?
Chris: Yeah, yeah, yeah. Great. That would be great. Thank you very much. I had a great time.
Nati: Sure. Thank you very much, Chris.