Podcast | Episode Six: Orchestration Language Comparison

Episode six of the Cloudify Tech Talk podcast takes a deep dive into orchestration languages, offering a thorough roundup with our in-house experts.

What cloud providers do you work with? GCP? AWS? Azure? Which services? Are you using managed Kubernetes, like GKE, AKS, OpenShift, or EKS? How do you manage your cloud resources? How do you manage the release cycle of the infrastructure and applications that you deploy? More burning questions? Listen below:

_______

_______

Additional Resources:

Blueprint examples

Orchestration Mapping Image

_______

Transcript:

Guys, welcome to a very special episode of the Cloudify Tech Talk Podcast. Today, we’re taking a very exciting, deep dive into orchestration languages, particularly how they compare between [00:11 unclear] CloudFormation, Azure ARM, et cetera, et cetera. So to talk a little bit more on this and to intro who’s talking, I’m gonna hand off to Nati Shalom, our CTO.

Thanks so much, Jonny. So with me is Alex, a regular guest on this podcast. Alex?

Hey guys.

And Trammell, who’s been a guest in other discussions that we had on this topic. So Trammell, why don’t you introduce yourself?

Hey Nati.

Trammell has been, you know, kind of dealing with a lot of the integration with, let’s say, orchestration tooling that we’ve done over the years, and specifically over the last release of Cloudify. And I had a discussion with Trammell about the differences between those different orchestrators, and from his experience working on those integrations, we thought that a lot of the lessons and the [01:05 unclear] experience will probably be worth a discussion on the podcast. And that’s how the idea of the podcast came about.

So there is also a nice blog that Trammell did around that, and we will be discussing that. [01:20 unclear] to that, Alex was working on creating some sort of a language binding, which we’ll talk about later on, how we can integrate [01:30 unclear], which kind of opens up a different approach to how we deal with, you know, kind of the [01:40 unclear] integration, code completion, and other things that are related to DSLs, in an approach that I think is different than others, like Pulumi and potentially Terraform too, which is more proprietary to their language, and we’ll talk a little bit about that.

So we’re going to have this mix of topics around DSLs. Why do we have specific DSLs for automation? What is the difference between each of the domain languages? And that will take us to the discussion. So we’ll start with you, Trammell, with the question that I was already kind of plugging in: why is there a DSL for automation? Why can’t we just use, you know, any language?

Well, I mean, what I can talk about is what was my personal experience with automation before we started to use things like configuration management systems. When I started off as a system admin, it was mostly scripts, Perl or bash, sometimes Python, where we did all the bringing up a VM, [03:02 unclear] software, checking it, and stuff like that. And then we started using, you know, moved to something a little bit more reasonable, like Puppet. First of all, with scripts you have to be able to go through lines of code and spot syntax errors or punctuation problems in addition to logical issues and design issues; it’s just a lot more code to go over, it’s a lot less readable.

It’s a lot less easy to transport from one infrastructure to another. Integrating with other tools is also a big problem; say that you want to integrate with a monitoring system or you want to integrate with CI/CD systems, it can be a big challenge when you’re just working with scripts, deployment scripts, to build an entire system and then be able to maintain it for additional deployments, scaling, updating, all this type of stuff. It’s a lot more work with a script.

I would say [04:51 unclear] that is not usable or easy to pass [04:54 unclear] kind of work. So getting the initial script to work is probably the easiest [05:01 unclear] and maintainable, that’s kind of where we hit the wall there. Right.

Yeah. I mean, on the other hand, it’s kind of a lot more straightforward. You know, for the most part, programming languages are relatively similar in terms of the types of features that you have to be familiar with, and limited also. So if a person comes to a new job knowing, you know, bash, you can probably learn Perl or Ruby or Python relatively easily and start supporting those scripts. So on the one hand, you know, the preexisting knowledge that you need in order to support scripts is a lot smaller, I think.

On the other hand, a CMS really has its own way of looking at things. And in some cases it can be very, very different from how users are used to thinking about things. Just coming to Ansible from Puppet, for example, can be a huge change in the way you think about the components of the CMS, and even changing between versions of a CMS. I remember, you know, moving from, I don’t remember, I think it was like Puppet 2 to Puppet 3, was just like a ridiculous change that was like learning something completely new. So in some sense, working with only programming languages or only scripting languages is a lot simpler, but it also limits you at the level of scale, I think, and bringing it to different systems or changing [07:19 unclear] really is where a CMS permits a lot more possibility.

So I think, Alex, you know, if you want to add anything from your previous experience.

I’ll add that, if we’re looking at not just, like, Java, Python, of course we can do anything we want, and each one has its own best practices [07:49 unclear] can be very challenging [07:54 unclear] the other developer. The DSL, I think what it brought to the table, especially in all the orchestration tools, is the [08:09 unclear] the automation from [08:13 unclear]; it’s very easy to describe and understand what we want to provision or what we want to automate. And then there are really places or ways to meet and understand [08:35 unclear]

When you talk about an intent-based system, the whole idea is that I describe, in a way that is [08:45 unclear] and speaks to that end goal, I want to run a VM. I want to see, in the language, that that’s what I’m doing, and that this VM contains X or has these properties related to it. I want to describe it in that language, in those types of terms, and not write kind of a script that calls it, where only the one who wrote it would know about it; that’s from a readability perspective.

The other thing that is more important is that when you have a way to describe a system, there are many things that you could attach on top of that without writing code, for example, what we call generic workflows, installing and uninstalling of a system. If you have a way to describe it, then the workflow of how to create it or destroy it, if you like, can be completely decoupled from the actual description and can be generic. Whereas if it’s an imperative language, as in the case of, by the way, Ansible, you have to program it for every use case or every case; even for the same service you have to write install separately, then uninstall, and you have to program every step of the way.

Whereas in a description language, you’re basically saying this is the system; in order to uninstall it, all you need to do is just go through that tree of dependencies and call the [10:07 unclear]; I don’t have to know what that tree that stands behind it is. Then if that tree happens to have [10:14 unclear] and containers and Kubernetes in one case, or a database in another case, that workflow would be the same. It traverses the tree, calls the right lifecycle operation in the right order, or does the reverse for uninstall, and those are basically things that you could do when you have a language that was built for that. When you don’t have that language, it’s going to be very hard to do those types of things, because there are going to be many ways in which you describe the systems, and therefore you could not do generic things in a consistent way across those different variations.
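
For readers following along, here is a minimal sketch (illustrative, not taken from the episode) of what such a declarative description looks like in a TOSCA-style DSL like Cloudify’s; the node type and relationship names follow Cloudify’s built-in naming, but treat the blueprint as a shape rather than a reference:

```yaml
# Two nodes and one dependency edge; the generic install workflow walks this
# graph (host first, then the database on top of it) calling each node's
# lifecycle operations, and uninstall simply walks it in reverse.
# (A real blueprint would also import the plugin/type definitions.)
tosca_definitions_version: cloudify_dsl_1_3

node_templates:
  db_host:
    type: cloudify.nodes.Compute          # "I want a VM" - the what, not the how
  database:
    type: cloudify.nodes.DBMS             # the database that runs on that VM
    relationships:
      - type: cloudify.relationships.contained_in
        target: db_host                   # the dependency edge the workflow traverses
```

Nothing about installation order is programmed here; it is derived from the dependency tree, which is exactly the point being made above.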

Most people probably wouldn’t understand what I’m talking about right now; only those who are really deep into orchestration probably understand that. But the end result is that to get a system to work, to get repeatable automation around a specific task of how to create it for development, and then usually the exact same description for production, only with slight variation, it’s much easier to do it with a DSL than to do it with a native language. And that’s, I think, what led many others to write their own DSL, and the fact that almost every automation tool has its own DSL.

It’s also an indication of that.

So that’s [11:30 unclear] DSL, because the other question that comes out of that is, okay, I’m now convinced that it needs to be [11:40 unclear]. I would expect that the difference between one automation tool and the other wouldn’t be that big, and I would expect that there would be some automation language that everyone would be using. Right now, they all go to YAML or JSON, but not much more than that. Why haven’t we seen all of those automation tools that…let’s name what I’m talking about: we’re talking about Kubernetes, we’re talking about Terraform, Ansible, CloudFormation [12:16 unclear]. Why have they come up with completely different ways of describing things?

I think that, well, you know, well, let’s leave Azure ARM, CloudFormation and Ansible out of the group, because they basically use YAML or JSON without any sort of, there’s nothing proprietary or original there. Then you have, you know, say Chef, Puppet, Terraform; these all use their own languages that are, you know, similar to some existing languages. And what I’ve understood is that originally there were a lot of complaints from users about the difficulty of reading or writing something in JSON. Actually, a lot of users find writing simple stuff in YAML to be quite simple, but developing for YAML to be a lot more difficult, because there are all types of weird abstract types and stuff like that, which is connected to, really, the power of YAML, that it has so much you can do with it; but JSON, it’s just not particularly readable. And so, a lot of [14:12 unclear] for writing their own DSL in order to maximize readability and minimize the syntax errors, which is something at which they succeeded.
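
As a quick side-by-side illustration of that readability argument (a made-up resource, shown only for comparison):

```yaml
# The JSON form of a small resource definition:
#   {"server": {"image": "ubuntu-20.04", "flavor": "m1.small", "tags": ["web", "prod"]}}
# and the same thing written as YAML:
server:
  image: ubuntu-20.04
  flavor: m1.small
  tags: [web, prod]
```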

I think it’s also a genesis problem; meaning that when those different projects started, the DSL was built to describe their API and they kind of evolved from that. So they all started with, you know, the API, and then the DSL was a way to describe their API. And I think that kind of led them to maybe use some substrate that is JSON or YAML, but all the types and definitions were really derived from the API, and that’s why it came out differently in each of those languages.

So that still exists, and I think will exist for a while until we get to that common, I would say, ground where a lot of them will start to look similar enough that we can start to generalize. On the other hand, I think, you know, pretty well [15:22 unclear] actually started from a different direction. It didn’t have an API to bind to; it started as a language, as a DSL specifically, and actually, that was an advantage and a disadvantage. It started as a language that is not bound to an API. It was kind of built as that foundation for everything, but that was the downside of it as well, because it wasn’t really built around a certain API. It wasn’t optimized to describe that in a very descriptive way; descriptive, meaning that you don’t need a lot of code to describe a certain goal or certain automation, or that you have a way to cover the breadth of options and APIs and parameters that are described in each of the systems.

So with TOSCA, you really had to create a lot of those custom node types per environment, which made this generic approach, I would say, too early in the market, and I think that’s kind of what made TOSCA less successful than I think it could have been for that purpose, because it really wasn’t built around an API, it wasn’t really built around a system. It was really built for environment [16:41 unclear] and it was too generic in that respect. That’s kind of where Cloudify took a twist on that. So we’ve kind of taken TOSCA and made it more specific to environments and more specific to use cases and things of that sort, which I think made it much more applicable, and we’ll talk about it later.

I mean, I think that, you know, TOSCA is a very unique case, because it doesn’t come from the software development side and it also doesn’t come from the application side, meaning it doesn’t come from the service and it doesn’t come from the client. And, like you said, it’s not like Cloudify developed TOSCA, or AWS did; somebody else came along and was like, we need to solve a problem, because, you know, there is no standard way of describing applications and infrastructure, and it’s the wild West out there, and we would like to try to solve that problem. It came from that intention more than it really came from, like, you know, a desire to develop a product. [18:15 unclear/crosstalk]

So far, I think, without the DSL [18:22 unclear] the process would be, one, much more complex and unreadable; and second, it would be much harder to do generic stuff around the automation scripts, and therefore we’d have to program everything, which would make automation processes much more complex. We also discussed why there are different types of DSLs, or so many types of DSLs, even though they may be using the same substrate, like YAML or JSON. And I think what we’re describing is two main trends that led to that. One of them is the genesis of each one of them. In the case of Kubernetes, it was pretty much a language to describe the Kubernetes API. In the case of Terraform, it was a language to describe the infrastructure of each of the clouds.

So it was really built around that. It was taking the Amazon API and starting to build a language around what that API looks like, and then extending it to other clouds. And that was kind of how the HCL language was built.

HCL is actually, it’s also different from Puppet, it’s different from TOSCA and it’s different from, you know, JSON; it actually came along because it was a preexisting language used in products like Consul and another product from [19:44 unclear]. They used it as, let’s say, their language.

I’m assuming you’re talking about background(?)

Yeah. Back [19:51 unclear]. So it’s a good language. But the other thing that’s interesting about Terraform is, Terraform, you know, recognizing the issue with integration, Terraform does have the ability to format its input and output in JSON. So this allows integration, and we’ll talk about that later.

Excellent. Let’s talk about…I think Chef started from a different direction. It started from a realization; it was pretty early on, from the time in which everyone was starting to write a website in Groovy or, rather, in Ruby, and that kind of led the direction towards the DSL, which was, you know, you don’t have to write scripts, you can write everything in Ruby, and if you’re already developing your website in Ruby, why don’t you use Ruby to also manage your infrastructure, which changed over time. And therefore that’s why I think it’s [20:53 unclear] today, but at the time that [20:55 unclear]

So we can see that the genesis of each one of them was really kind of catering towards either an API or a set of use cases, like infrastructure. As I’ve mentioned, TOSCA is kind of different from many of them. It was really looking into, you know, kind of a more pure system orchestration and trying to create a standard around that at a time in which I think industry automation was in very, very early days, so it was a bit too early, I think, in the market to try and find something of that sort. Having said that, there are a lot of things in TOSCA that I think are less known to people but are very powerful and becoming more relevant today than they were in the past because of the maturity of the market, such as the fact that it has types like any other language.

You could create inheritance, you could create interfaces, you could create relationships, things that are very powerful for those that are familiar with [21:56 unclear]. And we’ll talk a little bit about where those things become more relevant. It didn’t catch up with the trend toward microservices, which again, we’ll talk about how Cloudify took an approach to evolve TOSCA to, let’s say, modern systems and modern architecture, which I think makes it more relevant today with what we call the Cloudify DSL. But let’s continue with that. I think we covered the two main questions, so we’re getting now to the actual comparison between the different platforms.

So, Trammell, you’ve done the integration work, and as part of our work on Cloudify 5, where we took the approach of making Cloudify an orchestrator of orchestrators, we needed to integrate with many of those orchestration systems. And as part of that, we had to understand and learn and integrate with each of those things, which kind of brought us to an interesting position in the market, in that we have good knowledge of all of them today, or many of them today. And you specifically have done a lot of that work, so you have that insight. Maybe you could share with us kind of the differences, what you’ve seen in each of those platforms. And again, you wrote a blog post about it, a Medium post about it, so we’ll share it as a follow-up to this discussion. So go ahead, Trammell.

Yeah. I have a blog post that I wrote comparing the DSLs for infrastructure, cloud infrastructure specifically. Cloudify is a unique case because of the flexibility of Cloudify; it’s more than just a tool for, you know, managing the installation of applications or anything. It’s, you know, to use the tagline, like a workflow engine that allows you to define your own workflows and allows you to define how you interact with different APIs. It’s very much like, it’s almost an SDK, it’s a, you know, toolkit for developing how you automate your systems.

And because of that, Cloudify, even though it started off, you know, as something like Terraform, in that it installs and uninstalls and manages infrastructure, we also started finding a need among our users to actually manage other systems. For example, not just AWS, but also OpenStack and also GCP. And also to meet the needs of users who are not part of a centralized development team. For example, you might have a large company and they have some teams that are using Ansible and some teams that are using Puppet and some teams that are using CloudFormation and some teams that are using Terraform, and the company wants to start bringing all these people together. But there’s internal politics, and, you know, these teams each think theirs is the best, and really providing a way to make everybody happy, but also interact and also be able to integrate with one another.

This is really kind of the idea that made Cloudify come up with this idea of orchestrator of orchestrators. So, this is also tied to something which is a part of TOSCA, which is the idea of having, you know, service chaining, putting together different services and being able to automate them from different parts of an organization, and between different organizations.

I think those terms, again, for those not coming from that world: service chaining, meaning the ability [26:30 crosstalk], so it basically means that you create a service and then you pass the outputs or inputs from one service to the other, in a certain order, which is [26:44 unclear] you’re defining that, you know, needs to happen in a certain sequence, and that sequence is where the word chain is coming from. Service chaining meaning services that need to be orchestrated in a certain order and pass parameters to one another in certain [27:01 unclear]

Yeah, exactly. So we had users coming to us and saying, look, I already have, you know, whatever, 2,000 Terraform templates or 2,000 CloudFormation templates. It will take us ages to translate them to another language; how are you going to help me? And so, in the case of CloudFormation, it’s quite simple. We support the API for CloudFormation, which enables the user to bring their CloudFormation templates, a file or a link to an S3 [27:45 unclear], and just install them and then take the outputs from the CloudFormation stack and feed them into other systems. So create, you know, [27:59 unclear] with the load balancer, and then send that information to Ansible and let Ansible start configuring the application behind the load balancer or something like that, or installing the system, installing the database after the infrastructure has been brought up.

And so having an orchestrator that’s actually able to define the entire workflow from, you know, this development team and this development team and this development team and orchestrate the integration between these different products is what Cloudify was able to do.
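
A rough sketch of what that chaining can look like inside a single Cloudify blueprint; the node type and property names here are meant to be illustrative of the plugins’ style, not an exact reference, and the bucket URL and output name are placeholders:

```yaml
node_templates:
  infra_stack:
    type: cloudify.nodes.aws.CloudFormation.Stack   # reuses the existing CFN template as-is
    properties:
      resource_config:
        StackName: web-tier
        TemplateURL: https://example-bucket.s3.amazonaws.com/web-tier.yaml

  app_config:
    type: cloudify.nodes.ansible.Playbook            # configures the app behind the LB
    properties:
      run_data:
        lb_dns: { get_attribute: [infra_stack, outputs, LoadBalancerDNS] }  # CFN output fed to Ansible
    relationships:
      - type: cloudify.relationships.depends_on      # run only after the stack exists
        target: infra_stack
```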

Let’s get into the differences between them; you’ve done that. I want to run a VM. Let’s take that single task. How is that different between each of those platforms, like, let’s take Terraform and Ansible, and obviously Cloudify, and [28:59 unclear] and CloudFormation? What would be the difference between each one? It’s a more down-to-earth type of question.

It always comes down to how the cloud is designed. There are some clouds, and there are even, you know, certain regions in certain clouds, that have different capabilities and different steps that you need to go through in order to get a working system. In the case of AWS, you know, a few years ago, and still today in places, there was EC2-Classic. With EC2-Classic, you can just turn on a VM and kind of attach a [29:41 unclear] IP to it so that you can start working on it.

EC2-Classic is not available in every region, and so what you really need is, you need to install the VPC, you need to install the [29:58 unclear], you need to configure the route tables on top of it, you need to configure that in order to get external access to your subnet, and then you can start bringing up VMs and stuff like that. So if you want to use Terraform, there’s, you know, Amazon publishes their two-tier examples for CloudFormation.

If you’re only working in Amazon, that’s probably going to be fine. However, you immediately start running into the issue of needing to understand what the different components of the language are, with the different keys and stuff; I mean, in terms of the console, because your experience in the console is going to be different from your experience in the CloudFormation template. You have to know what the name of the type is. You have to know what properties that type needs. You need to know how to ensure dependency between certain types, so that you don’t install something before something else that it depends on.
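
To give a feel for how much of that you have to spell out, here is a stripped-down CloudFormation-style sketch of “just a VM” once a VPC is required (resource values are placeholders):

```yaml
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.1.0/24
  Igw:
    Type: AWS::EC2::InternetGateway
  IgwAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref Vpc
      InternetGatewayId: !Ref Igw
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref Vpc
  DefaultRoute:
    Type: AWS::EC2::Route
    DependsOn: IgwAttachment              # explicit dependency ordering
    Properties:
      RouteTableId: !Ref RouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref Igw
  SubnetRoutes:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref RouteTable
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      SubnetId: !Ref PublicSubnet
      InstanceType: t3.micro
      ImageId: ami-00000000000000000      # placeholder AMI ID
```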

And then in CloudFormation, a lot of stuff is already pre-published, but it’s something you have to learn how to do. [31:37 unclear] You also have the ability to work with YAML or with JSON; there are examples in the documentation in both languages. It also has some useful features in the language, like GetAtt, which is a way of getting a property from another resource in the template. You have, um, you know, inputs; you have ways of constraining inputs to a selection of valid values, so that, for example, if you have a list of images that you want to constrain your user to using, you can provide a list of values for a particular parameter. Or you can also create maps or lists of which AMIs or which instance types are valid for a particular region.

So it has a lot of useful features, a lot of good documentation that you can use for CloudFormation. It’s a little bit verbose and it can sometimes be quite clunky to read. So, you know, I’ll leave that kind of criticism there, but it’s a proprietary language, and it matches almost exactly the features that you can get in the console.
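
For reference, the features called out above, constrained parameters, per-region mappings, and GetAtt, look roughly like this in CloudFormation YAML (values are placeholders):

```yaml
Parameters:
  InstanceType:
    Type: String
    Default: t3.micro
    AllowedValues: [t3.micro, t3.small]        # constrain the user to valid values
Mappings:
  RegionAmi:                                    # which AMI is valid in which region
    us-east-1: { Ami: ami-00000000000000001 }
    eu-west-1: { Ami: ami-00000000000000002 }
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !FindInMap [RegionAmi, !Ref "AWS::Region", Ami]
Outputs:
  PublicDns:
    Value: !GetAtt WebServer.PublicDnsName      # read a property from another resource
```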

So basically what you’re saying, kind of taking what you just said, is that the infrastructure-specific languages like CloudFormation, [33:34 unclear] probably falls in the same category, and you’ll talk a bit about that. The main benefit is that if you’re using those platforms, then they have the benefit of being natively integrated into the platform; by natively, meaning that it also integrates with a lot of the ecosystem. So for example, in the case of AWS, there are a lot of governance tools, cost analysis tools, and monitoring tools, and all of them would kind of speak to that thing, and CloudFormation can also integrate with them natively. So it’s not just, you know, calling a set of APIs, it’s an entire ecosystem that you’re binding into. Which brings a different discussion about the approach of multicloud orchestration in comparison to HCL.

But before that, let’s finish that round and talk about Azure ARM and how it’s different.

So Azure ARM matches very, very closely to CloudFormation; obviously, some syntax differences, some differences in types for resources. So how you bring up, say, a network in Azure ARM is different from how you bring up a VPC, not just by virtue of the API parameters but also in terms of DSL features like the name of the resource. But it would also be slightly different because of how Azure thinks of a particular type of resource like a network. Azure doesn’t have the VPC; they have virtual networks, which is the more classic, which is actually Azure being, you know, more like a classic virtualization platform, as opposed to AWS, which is, you know, a lot different. But anyway, so yeah, a lot of the same features in the language, like getting an attribute or constraints on parameters, but, you know, certain types of syntax things are slightly different here and there.

Again, if you want to install an application on either of them, you need to use one of their products for installing an application, which means learning that product and also learning a new set of tools and also getting a certain amount of vendor lock-in.

So let’s talk about HCL, and HCL took the [36:33 unclear], basically took the approach of creating one common language across those different clouds.

Terraform has a certain amount of abstraction that allows you to use different clouds with the same language. And so if you know the syntax for one, for, you know, say, AWS, then you can use the same syntax in Azure for Terraform. I’ll qualify that by saying that the parameters of an API object in Azure will be different than the parameters of an API object in AWS in Terraform. There’s just no getting around the fact that, ultimately, every single service and resource in a cloud is a product and has a unique set of API parameters that are expected or can be handled. And that’s something that you’re never going to get away from unless you use some tool that abstracts all those things.

So it gives you that common framework, but it’s not like you have a [37:59 unclear], meaning that the way you run a VM in AWS would look exactly the same as the way that you would run it on Azure, et cetera. It would look different. HashiCorp, just the language itself, gives you access to those native APIs in a common framework, if you like, but it’s not a common abstraction, which is kind of confusing, because some people think that, you know, if you have multicloud orchestration, then the multicloud orchestration would be something that will allow you to abstract away the differences between the clouds. And that’s not really the case. Can you expand on that a little bit?

Sure. So for example, in Azure, when you want to define a virtual network, you’re actually just defining the network. You’re not defining, you’re actually saying, I will need a network, and it does provision a network in Azure, but you don’t provide any sort of parameters, like [39:08 unclear] like the CIDR, the tenancy. You just say, I need a virtual network. In AWS, when you want a VPC, you provide a subnet or a CIDR, you define that this is going to be the CIDR of this VPC, or you will say that this is dedicated tenancy, meaning that it will exist on dedicated machines and you will not have any shared tenancy with other users, and values like that.

So an abstraction would actually say, like, I don’t really care about what network block I get, I don’t care about what, you know, kind of tenancy is going on, and, you know, make it the same experience for me on AWS or GCP or OpenStack; it doesn’t matter. For Azure, when you actually want to start talking about how the network is subnetted and stuff like that, you actually talk about that at the subnet level, not at the network level or the virtual network level.

Got it. So what you’re basically saying, if I summarize the difference between what we discussed already, which is CloudFormation and ARM, the native cloud orchestration as we call it, and then HCL: the benefit of each native cloud orchestration is that it’s native, meaning that it has the full breadth of APIs and services exposed to you, because it’s part of the release cycle of each of those services and the cloud providers support them. And the disadvantage of that is that it’s very proprietary to the cloud vendor. One approach to creating multicloud orchestration, then, is the HCL approach, which is to create a common framework around them.

The approach that we’ve taken in Cloudify is really to recognize the fact that there is a difference between the different environments and pull in the native, if you’d like, orchestration at the right time. In that case, we’re not trying to create a common orchestration across all of them, but still provide consistency between the different clouds, and how you layer an application workload on top of them would be completely common between the environments. We can kind of minimize the differences, but not abstract them completely. And that’s kind of a different approach to multicloud from those options. Let’s talk a little bit about Ansible and Kubernetes and their approach.

Sure. We can talk about Ansible and Kubernetes. One thing that I wanted to just add to what you said just now, about how Cloudify is unique, is that Cloudify kind of plays both games; it allows you both to have kind of API visibility into the object, like a VPC or network, but through our component type, you can also just, you know, say you don’t really care so much; you just need an environment. So that’s something that, you know, we’ll probably talk more about later, but I wanted to point that out.

You mean that the language itself, in the case of Cloudify, can have a portion of it described in Azure ARM or CloudFormation, or even Terraform, for that matter, and another portion described in, you know, kind of, more of a TOSCA language around that. So it’s a way to mix and choose the right tool for the job and not try to have one language.

Yeah. So going back to Ansible and Kubernetes; so Kubernetes and Ansible, how do you compare them? Let’s say that they are both using YAML. Like Cloudify, they both strictly use YAML and there’s no proprietary language there; the DSL sits at the level of what syntax you use to define a particular resource. And in Ansible, what you normally talk about is a list of tasks that you wish to run. And these can be packaged as roles that are then called from a master playbook. So, for example, if you have a role for [44:04 crosstalk]

Yeah, because Ansible’s task system is more imperative. Imperative, meaning that you’re, you know, kind of describing almost literally the steps of the work that you’re going to do. In the declarative approach, you’re describing the intent, and the orchestrator [44:27 unclear] works out the steps automatically, so it’s kind of auto-generated out of the intent. In the case of an imperative system, you actually define what those steps will be. So I think Ansible would be closer to the imperative approach, even though it has some intent capabilities. Kubernetes is more on the declarative approach.

Yeah. Kubernetes is a lot more declarative. Kubernetes is not talking about a set of steps. It’s rather talking about a set of resources that you need to exist. It’s a lot more similar to Terraform or Cloudify or CloudFormation in that sense, because for Kubernetes, a lot of it is different configurations of container-based systems and sometimes just API objects, but for the most part, you’re talking about resources that you want to exist.

Ansible, like you said, is imperative; you’re saying run task A, run task B, and you can package that stuff as jobs, or package that stuff as roles. But then those roles are, again, imported and called as tasks. Kubernetes, you know, is extremely unique in the way that they designed how the system works as a control flow. And so, no matter what you give it, Kubernetes is just going to sit there and try to actualize what it is that you’ve designed. So if, you know, you say that you have two resources, one of which is actually dependent on the other, the dependency isn’t necessarily drawn inside of the language.

The dependency is arrived at through the control flow. So it may, you know, in actuality, try to instantiate both of them concurrently. The one that has, like, a hard dependency will fail the first time, and then on the second try, it’ll make the connection and will come up. Kubernetes is really designed like that; not just the language, but also the engine; that declarative concept is really built into the engine itself.
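
To put that imperative-versus-declarative contrast in concrete terms, here is the same rough goal, “run nginx”, expressed both ways (purely illustrative):

```yaml
# Ansible (imperative): an ordered list of steps to execute.
- name: Install nginx
  ansible.builtin.apt:
    name: nginx
    state: present
- name: Start nginx
  ansible.builtin.service:
    name: nginx
    state: started

# Kubernetes (declarative): a resource you want to exist; the control
# loop keeps retrying until the actual state matches this description.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels: { app: nginx }
  template:
    metadata:
      labels: { app: nginx }
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```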

And how would you explain the need for Helm on top of the Kubernetes templates? Why was Helm even needed?

I think it’s helpful to think of Helm, like, Helm, you know, introduces itself as a package management system for Kubernetes, and that’s really kind of helpful and kind of not helpful. Because on the one hand, if you’re just looking to install something on Kubernetes, or not even Kubernetes, let’s say that you just want to install some application and you don’t care if you’re using Ansible, or if you’re installing it on, you know, I don’t know, GoDaddy, or if you’re installing it on Kubernetes or on a Windows cluster; let’s say you just want to install some application or some stack. Helm enables you to package your product or your tool set, your stack, for Helm. It’s a way of packaging a set of tools or a set of packages together, and delivering them.

Behind the scenes, it is calling the Kubernetes API, and it’s basically kind of a way to give its users the same service templates that are defined in Kubernetes. It’s just providing a more dynamic way to define parameters and relationships.

Not just that, it also manages dependencies that might be required on the network or on the physical machines of the nodes, and makes sure that other components in the cluster have the necessary components in order to install a particular set of deployments or replication controllers or pods or a group of applications that need to run together.
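
A minimal sketch of what Helm layers on top of raw Kubernetes YAML: chart metadata with a dependency, user-overridable values, and a template that fills a Deployment from those values. The three files are shown together here for brevity, and the names, versions and repository are illustrative:

```yaml
# Chart.yaml - package metadata plus a dependency delivered with the chart
apiVersion: v2
name: my-app
version: 0.1.0
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami

# values.yaml - the parameters a user can override at install time
replicaCount: 2
image: nginx:1.25

# templates/deployment.yaml - plain Kubernetes YAML with templated values
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}"
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels: { app: "{{ .Release.Name }}" }
  template:
    metadata:
      labels: { app: "{{ .Release.Name }}" }
    spec:
      containers:
        - name: app
          image: "{{ .Values.image }}"
```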

Excellent. So let me summarize what we discussed so far. I think what we discussed is that, in summary, when we look at the automation landscape, there is a reason why a lot of the automation tools chose a DSL; we got to that earlier. The difference between the different DSLs came from the genesis of each of those platforms. In some cases, it was driven by the actual service that the language was bound to, or targeted to; again, in the case of CloudFormation and Azure ARM, that’s clearly the case. In the case of HCL, it came from the fact that they were really focusing on infrastructure automation and basically creating a common framework around multicloud APIs, and they were really built, unlike TOSCA I would say, kind of at the time in which those APIs stabilized. And therefore they could create a language that is very much targeted to that problem domain, which is now called infrastructure.

TOSCA was built as a language that would be generic for any automation. And in that respect, it has a lot of typical language capabilities like inheritance, interfaces, workflows, classes, whatever, and it’s, as I said, very generic. In the case of Kubernetes, the Kubernetes template, again, was built around their API, and it was really targeted to describe pods and services and describe the Kubernetes types of relationships. And Helm is the layer on top of the Kubernetes template that is adding the packaging, if you like, language into that, which is kind of all the dependencies that are needed to deploy an application and all the parameterization that is custom to that type of process.

Ansible took a more, I would say, imperative approach, kind of describing a task system. It was built around SSH, and it uses, again, YAML, and the task-based system is based on roles and playbooks, and that’s how you package your system. So if your system needs to do step one, step two, step three, and call different APIs, Ansible would be a good target for that, a workflow kind of system. If you want to deploy a cloud native service or a Kubernetes service, then obviously the Kubernetes API would fit into that. If you want to do multicloud infrastructure, HCL would be a good fit for that. If you want to do things in AWS, AWS CloudFormation would be a good fit for that. And if you want to do things in Azure, probably Azure ARM would be the right target for that type of environment.

And let’s talk a bit about the Cloudify approach to all that. I think we touched on the fact that Cloudify really recognizes, in the new release that we’ve built, which is called Cloudify 5, that there’s not going to be one language to rule them all, and there are going to be a lot of those domain automations, for the vision that I mentioned earlier. And we kind of built a layer there that allows you to create interoperability between those layers and kind of choose the right tool for the job, even from within the same language. So it does create that ability to also interoperate between HCL and CloudFormation and Azure ARM in the same, if you’d like, template. It’s similar with Ansible and the rest, and Helm, which is coming soon.

Maybe, Alex, you want to say a few words about that before we move to the second part of the discussion?

I had something in mind.

It’s the orchestrator of orchestrators kind of world.

Yeah. So regarding that, we do understand that, first of all, as Trammell and you said earlier, many times companies already have tools in place, so it’s very important to have a kind of [54:04 unclear] to be able to work and integrate with existing solutions on the customer side, especially when, you know, they have some technical personnel who are already familiar with some technology, and there is a learning curve to go with a new one. But on the other side, you do have all this ecosystem of different orchestration tools, and you want to actually build a layer on top of it that facilitates what Trammell mentioned earlier, like a service chain, where you have different orchestration tools and each one is responsible for different aspects of automation.

But you do want to actually delegate; for example, if it’s infrastructure, to Terraform, but once it comes to the application level, you want to take the values and actually chain them into the Kubernetes orchestration, and all kinds of these things.

Okay. And you’re working on a secret project around that, which we’ll keep for another podcast, that is kind of taking advantage of those capabilities. And I think it has the potential to turn everything that we discussed on its head. But again, we’re going to keep that teaser for the next podcast. This gets me to the second part of the…by the way, thank you very much, Trammell, for all these insights. And I’m sure there’s a lot more to that. And as I said, there is a blog post that we’ll point to, which covers a lot of the things that we’re discussing.

And the second part of the discussion is going to be much shorter. We’ll be talking about the dilemma that we opened up with: should we use a native language to do automation, or should we use a DSL? There is an approach of taking a native language like Java or .NET or whatever. In the case of Pulumi, that’s kind of the approach that they’ve taken. There is an advantage. The main advantage is that it’s a native language, meaning that you have the full power of the language in terms of capabilities: you can do loops, you can do iterative processes, you have access to all the libraries that the language already provides. But there’s also a downside to that.

And, Alex, you also took some sort of approach to address that within the DSL approach, based on the YAML file or JSON file, and kind of try to get the best of both worlds. So maybe we could say a few words about the pros and cons of the Pulumi approach, I would call it that way, which is kind of using a native language instead of a DSL. So, a few points there, and Trammell, if you have something to say about that. I think the main difference, from what I’ve seen, is, as we discussed earlier, how do we create some generic workflow? How do we create governance around the automation? Now, if it’s a native language, the problem is that you don’t have control over how the developer will really be writing it. They can write Java code that does many other things and then calls some API to [57:35 unclear]. How do you ensure, in that case, that it is calling the VM in the right way? How do you ensure what it is actually doing? How could you even model it for visualization, how could you create and show the topology that was created out of those API calls in a certain way and not be manipulated, when a lot of the knowledge of how the things have been created is in code?

For example, we could have a for loop that is, you know, creating [58:08 unclear] instances, and now you need to be able to parse it and understand that it is actually creating X amount of nodes, but you don’t really know how many nodes will be created. And therefore, you cannot really say what the cost estimation of the infrastructure it’s going to create will be, or, syntactically, you know, kind of make sure that it’s not abusing the cloud or using things that it’s not supposed to use. It’s going to be much harder when you go with that approach, which kind of gets you almost out of the DSL world.

Yes, of course; once you’re going to a native language approach, you have all those disadvantages; especially, you’re prone to all kinds of errors that you would have with a regular script or any other program, let’s put it that way. Lots of errors might come from the runtime properties or from the business logic that you are incorporating through the language itself. It becomes very hard; it’s no longer a DSL approach. Even if inside the program it’s very easy to understand, to govern it, to make something more generic or put some rules on top of it, definitely has a complexity. And I guess for some developers it might be easier, because, as they said, they’d have more [59:52 unclear]; it depends on who’s looking at it.

YAML development also can be cumbersome. I mean, maybe it’s easier to parse it, and easier for the orchestrator to understand it, and therefore model it, and therefore do all the governance things that we mentioned, but developers don’t like it because, you know, there’s no code completion. It’s a very [1:00:26 unclear]; it doesn’t fit into the IDE world that they’re accustomed to, and then it’s kind of like creating a completely different domain that is not very developer friendly, I would say, at the basic level. And I think that’s a good segue to what you have done on that.

Okay. So for all the topics you mentioned, there is a solution, let’s say, there are things that provide such developer tooling for YAML or JSON. There is the option to add all those capabilities, like auto-completion, type detection, type validation, descriptions, and IDE integration. And so what we did for Cloudify is we actually leveraged an open repository called the Schema Store.

And so actually we provide the integration both with Visual Studio Code and the [1:01:36 unclear], probably the most common IDEs, but for sure the same technology can be used for Sublime, for Atom, and other IDEs. And so the idea is actually to provide a JSON schema. And a JSON schema provides a kind of description of your YAML: like, what are the properties, what are the types; you can then add different preconditions, like that under a certain condition the hierarchical nested objects [1:02:11 unclear] differently. And it’s very powerful, and probably we can then add an article on that, pointing to our repository where we are using it and how to implement it, definitely.

It kind of simplifies the development: you have auto-completion, you have, with the [1:02:39 unclear], what is required and what is not; you don’t have to go each time to the [1:02:45 unclear] to understand what the properties are or what they’re meant to be, and it can be provided through the IDE tool itself. But of course it requires some additional work from the developers who are providing this kind of automation around YAML files.
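
For a sense of what that schema work looks like, here is a tiny JSON Schema fragment of the kind an IDE plugin consumes (shown in YAML form for readability; a Schema Store entry is itself a JSON document, and the property names below are illustrative rather than Cloudify’s actual schema):

```yaml
$schema: "http://json-schema.org/draft-07/schema#"
type: object
required: [tosca_definitions_version, node_templates]
properties:
  tosca_definitions_version:
    type: string
    enum: [cloudify_dsl_1_3]            # the IDE can flag an unknown version
    description: DSL version of this blueprint
  node_templates:
    type: object
    additionalProperties:               # every node template...
      type: object
      required: [type]                  # ...must at least declare a type
      properties:
        type:
          type: string
          description: The node type, e.g. cloudify.nodes.Compute
        properties:
          type: object
```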

So to summarize that, and this is where we’re going to wrap up right now. By the way, there’s already a Medium post that Alex shared with all the details, and we’ll share it as part of the podcast. It’s kind of the best of both worlds. So rather than going the language approach and then ending up writing yet another script, you can stay in the DSL world, with all the benefits that come with it, but layer on top the language features, like code completion, validation, IDE integration, all those things.

In that case, I think we kind of get a better balance between what we’re doing and that. I would also say that because it’s based on an open kind of framework that touches many platforms, it’s not proprietary. That’s another benefit; it means that the IDE integration that you’re going to have with, let’s say, this approach with JSON Schema would fit with other tools in your ecosystem; it’s not gonna be specific to Cloudify or specific to a certain platform. There’s a wide range of platforms that are supported on that, and that’s also important. You don’t want to have a language binding that is specific to a platform or product; you want a language binding that will cover the entire DevOps toolchain ecosystem. And that’s a great benefit, I think, of that framework.

And with that, I wanted to kind of close the discussion. Again, we had, in my view, a very wide topic to cover here, a lot of ground to cover. Obviously, we didn’t have time to delve into each one of them. We tried to give you a glance at the landscape of automation. And again, the experience that we had in Cloudify 5 of integrating all those automations gave us an interesting perspective that I felt would be interesting to the rest of the audience here. So thank you very much, Trammell, for sharing a lot of that experience, and to Alex for doing the work, and also sharing that experience.

I think many people would find benefit in that, regardless of Cloudify, in the way you’re designing your multicloud approach or strategy, or in the way you’re choosing a certain tool for the job, and at least have some criteria for understanding which tool is best for which job.

Hopefully we covered that to some degree today. And, as in any case, we welcome feedback and comments, and if you have questions that you want to follow up on, or suggestions for other topics on that front, then let me know, and we’ll be more than happy to cover that in the next podcast. I would say that in the next podcast, Jonny will be hosting [1:05:53 unclear]. It’s going to be a very interesting discussion about their multicloud approach and how they’re dealing with big data automation, and a lot of things of that sort; it’s going to be very similar in nature to the work we’ve done with [1:06:10 unclear] from Next Insurance. So we’re going to have a mix in that podcast, where we’re going to discuss some specific technology coverage, but also guest speakers, especially R&D managers, who will talk about their real-life experience: how they approach things, what are their dilemmas, what would they do differently. So hopefully you’ll find it a good forum to both learn new things and also learn from other people’s experience. With that, I’ll hand over to you, Jonny, to close this.

Well, thanks, Nati. Yeah. We’re looking forward to that next session. I think WalkMe is really an amazing brand and what they’re doing is fantastic, and I’m sure taking a deep dive into their backend will be really, really, really interesting. Once again, just to reiterate what Nati said, thanks so much to everyone taking part; really interesting session. Keep your eyes peeled for the next session, which will be within the next few weeks. And, as previously said, any and all supporting materials will be attached to this podcast at www.cloudify.co/podcasts. And in the meantime, stay healthy, stay safe, thanks for tuning in, and we’ll catch you next time.



