How do you really feel about Jenkins? The third episode of the Cloudify Tech Talk podcast talks all things Jenkins – and of course how it fits into Cloudify. Click here to see Jenkins and Cloudify in action (supporting material for this podcast). Click here to learn more about Cloud Provisioning.
Guys, welcome to episode three of the Cloudify tech talk podcast. I know at the end of episode two, we promised you a serverless adoption integration session, but we had loads of questions about Jenkins. So, why not? We’re going to launch into Jenkins in this session. We’re going to be talking about CICD, cloud-native GitOps, and much more. As always, everything can be found on cloudify.co/podcast.
All supporting materials for this session will be there too. So, without further ado, I’m going to hand it over to my cohost Ilan Adler who is going to introduce our speakers so we can take a deeper dive into Jenkins and on over to you.
– Thank you, Johnny. It’s my pleasure to be here. As before, we’re joined by Nati Shalom, Alex Molev, and Isaac Schumpeter from the Cloudify team. Today, as I said, we’re going to take a deep dive into CICD and Jenkins, and at the end tell you a little more about the Cloudify integration with Jenkins and why it’s useful. But first, we’re going to start with an overview of different things we’re seeing in the CICD/Jenkins world. And I can start it off with our panel for today: which tools are we seeing being used to drive CICD today?
– So maybe I’ll take a first stab at that. I think the best-known one is Jenkins. But Jenkins started in a pre-cloud-native world. And what we’re starting to see, especially over the past two years, is that the move to cloud-native opened up the market for other alternatives that made Jenkins look old-fashioned and complex. We started to see GitOps in the world. We started to see GitLab coming with their offering, Circle CI with a hosted version, and other startups that we know, like Codefresh and companies along those lines, starting to come up and suggest a more agile way to deal with CICD.
And there are interesting differences between them. I think the world is torn between the legacy one, the privately-oriented one, the hosted one, and the one that is more cloud-native. That’s my view of the categories.
And so Ilan, I think you threw this interesting Reddit article at me this morning about why so many people hate Jenkins, which I found interesting because it opened up an interesting discussion there. So maybe, Alex and Isaac, why do you think people hate Jenkins? They still use it heavily, but they hate it.
Jenkins is about nine years old. It came in before all the new technologies such as Kubernetes and containers. It’s a very old solution for CICD, and until recently, maybe three years ago, it was pretty much a Java-based application that runs on static servers. It’s very complicated to configure, plugin support is spotty, and all the configuration is done through the UI. It’s very hard to maintain, especially when it comes to large-scale organizations running thousands of jobs; the Jenkins cluster is a headache to maintain.
So, I think it’s because for all these years, Jenkins was the only solution to go to, and people had such a hard time maintaining it that they remember the bad experience to this day. Jenkins went through different iterations. And even now, when people are presented with newer, more modern options such as running on Kubernetes, not everyone is familiar with them, and more and more people are willing to go straight to more modern solutions such as Circle CI and GitLab, especially given that they are managed.
Also, another aspect of Jenkins is that extending it is not very easy. As Alex said, Jenkins is based on plugins, and plugins have to be written in Java. The actual framework for building a plugin is not very well documented, which leaves an organization limited to the offerings that exist in the Jenkins plugin market, and those may not be adequate for more modern applications and frameworks.
Yeah. I think, Isaac, you had the pleasure of writing a plugin yourself.
I had an immense pleasure doing so. One of the best times of my life. Just one more thing; another reason why Jenkins is still used. We have to remember that probably a good chunk of the code worldwide is still not hosted on GitHub, GitLab, and the like. There are lots of organizations that still host their codebases themselves. They have their own hosted source control. Some of them are not even on Git. And so, other solutions may not be relevant to them. That’s also why…
That’s an excellent point.
… And still there. So, when you have a well-established CICD tool in your organization, it’s going to take a while before you shift to using external tools. These tools have been there for years.
I would also say that the tools created in a pre-cloud-native world, because the environment was much more complex, have a lot of options and flexibility built into them to deal with more complex use cases. That’s what we’re seeing that a lot of the cloud-native tools lack to some degree. And enterprises tend to be on the complex side of things because they’re never purely cloud-native. So, I think many would still find Jenkins’ flexibility to be a benefit, and that’s what would lead them towards it. That’s another thing I’m seeing in the market right now.
Also, when it comes to Jenkins, you have great flexibility in how you compose jobs and pipelines. You can take application components from different technologies and deal with them together, whereas as far as I know, GitLab CI/CD and Circle CI work on the actual repository that they’re currently running with. They clone the repository and do stuff with it. Once you get into more complex things, it gets more complicated to use these tools.
Jenkins, by contrast, is not tied to a particular repository. You compose a job telling it exactly what to do. You can even build a package or deploy a test application that is based on multiple code sources. So that flexibility exists there. I’m not sure about the GitLab CI/CD framework, but if they’re not there yet, they probably should be.
We started to talk about the differences between Jenkins, GitLab, and Circle CI. I think another pain point most people have with Jenkins is that they don’t realize the project has kept evolving the whole time.
So, some people dismiss Jenkins because it has an old UI, whereas GitLab, Circle CI, and all the new CICD tools are single-page applications. But Jenkins then came up with a new UI, Blue Ocean. And as GitLab and Circle CI started to use more cloud-native approaches, I don’t think everyone using Jenkins is aware that Jenkins can be run on Kubernetes as well, with independent pods.
One of the struggles with Jenkins is that jobs usually shared the same machine. Once many jobs from different domains run on the same machine, they will probably have some overlap of resources, or some library that has to be installed differently. Of course, adding more machines with more agents is an approach, but then managing all of these becomes a nightmare.
And a thing developers especially remember is that in large-scale environments, where you have multiple projects with different tech stacks, some jobs will always have troubles and failures, and finding that needle in a haystack is a real nightmare. Whereas the newer, more cloud-native flavor of Jenkins that runs on Kubernetes solves this and is more similar to the Circle CI and Codefresh solutions, in that it’s very easy to maintain and add new jobs, and it handles large-scale workloads better.
Thanks. One last thing, just to drive home both Alex’s and Isaac’s points: another thing I’ve seen come up a lot is feature creep. Because of the plugin model, developers need to do something specific, there are lots of plugins, and they just install plugins for their particular workloads. And again, the Jenkins system becomes clunky and slow to use. So that’s another common criticism I’ve heard of Jenkins.
Another thing that I wanted to move on to in terms of infrastructure and how Jenkins ties into infrastructure; Nati, I saw there was a recent survey about common infrastructure orchestration tools by Gartner.
What Gartner did was classify the different infrastructure orchestration tools into three categories: infrastructure orchestration, configuration management, and cloud orchestration. In infrastructure orchestration, there’s Cloudify, there’s the Terraform world, and a few others were mentioned.
In configuration management, there is the [XXXXX], those types of tools, and in the cloud-managed or cloud-centric category, you’ll find CloudFormation in the case of AWS and Azure ARM in the case of Azure. Google had its own thing. It’s interesting in the sense that, if you’ve followed Gartner for a while, until now when they covered that space there was just one category, which was the cloud management platform. They didn’t cover the rest, because the assumption was that if you’re in the automation space, probably everything needs to be in one bucket which covers everything to do with automation. And I think this was one of the first realizations that the world is becoming multi-domain in nature.
That multi-domain nature of orchestration means tools specialized in different areas, and there’s not going to be one platform to rule them all. I’ve repeated that in almost every podcast we’ve done, and we keep coming back to it. We just talked earlier about Jenkins versus the rest of the platforms, and one of the questions is: is there one Jenkins, one tool, to rule them all? Continuing with the Gartner findings, we see that many organizations have more than one CICD tool. That was a surprise when I started to do those interviews.
And part of the reason, which ties into the previous discussion we had, is that different tools have different sweet spots, and not everyone is happy with one tool that does everything. The majority, 58%, is still using Jenkins, which is pretty huge, but close behind there’s GitLab at 34%. By the way, Isaac, I think that correlates with what you said, that a lot of organizations are still running their source control privately. So, the combination of Jenkins and GitLab represents the majority, more than 80% of users.
Indeed. There is a tendency now for GitLab, for example, to offer self-managed solutions for enterprises, where an enterprise runs their GitLab inside their own network, off the external network. This is happening. And that of course opens them up to using GitLab’s CICD tools as well.
So, I think that’s consistent with the Gartner analysis you mentioned, Ilan, on multi-domain automation tools. What we’re starting to see is also domain-specific CICD tools.
So basically, what you’re saying is that we’re seeing it become more and more commonplace today for organizations, or at least large-scale organizations, to have more than one CICD tool?
I think the clear domains we’re seeing are the cloud-native part and the flexible part: Jenkins being the Swiss army knife, if you’d like, and then others come with their own twist and the specialized domain they’re addressing.
This is one reason. Another reason to have more than one tool is mergers and acquisitions. That’s very popular in the financial world. You see one company buying another one, and now you have two CICD tools or everybody pulling in their directions, as usual. So that’s another reason why you would want to have more.
And one of the things is that once you started to use something, even as a startup, it’s very hard to move away from it.
Yes. Once you’re heavily invested in it, of course, everybody thinks their tool is the best. And unless the business or the actual stakeholders see the business value, the actual dollar value, in switching, they won’t invest money in doing so.
That makes total sense. What do you guys see as the common release cycle used today in typical software development and developing the CICD around these tools?
I think that goes to Alex. Alex, you were running those things.
I guess it depends on the company. With faster companies, releases can happen a few times a day. With each PR, they’ll run all the unit tests and integration tests to see that everything is fine, and then they can easily release to production. Some release cycles I’m aware of use a Kanban approach, where once a feature is ready, it can be released. For startups using a sprint-based or Scrum style, business value will usually be delivered by the end of the sprint. This is normally two weeks, but sometimes they use smaller or bigger windows, so a version is released at the end of each sprint. Some companies use versioning, where they deploy or release a version once every few months, so their release cycle is affected as well. But even for companies doing a release every few months, the CICD cycle is pretty much the same: unit tests, build, integration tests, just to make sure that everything is tight and running.
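The cycle Alex describes (unit tests, then build, then integration tests, stopping as soon as anything fails) can be sketched as a minimal fail-fast pipeline runner. This is a hypothetical illustration in Python, not the API of any tool mentioned here; real stages would shell out to test and build tools.

```python
# Minimal sketch of a CI pipeline: run stages in order, stop on first failure.
# Stage names and the lambda bodies below are hypothetical illustrations.

def run_pipeline(stages):
    """Run each (name, fn) stage; return (names_of_passed_stages, succeeded)."""
    passed = []
    for name, stage in stages:
        if not stage():
            return passed, False  # fail fast: later stages are skipped
        passed.append(name)
    return passed, True

# A real pipeline would invoke pytest, a compiler, integration suites, etc.
stages = [
    ("unit-tests", lambda: True),
    ("build", lambda: True),
    ("integration-tests", lambda: True),
]

passed, ok = run_pipeline(stages)
```

The fail-fast behavior is what keeps a broken commit from ever reaching the later, more expensive stages.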
Maybe I’ll refer to the CNCF, the Cloud Native Computing Foundation, again. They published a survey asking many in the Kubernetes user community what their release cycles were, and they came up with the following numbers: 27% were doing weekly releases, up from 15% in 2018, so a bit more than ten points up this year. Monthly release cycles, on the other hand, are starting to decline, moving down from 18% to 16%. So we see that the higher frequencies, meaning daily and weekly, are becoming the more popular choice. And as you said, Alex, when we talk about ourselves, it’s a couple of times a day, not just daily or weekly. So we’re seeing a trend towards shortening release cycles across the board, where the majority, at least in this survey, is on daily and weekly and less on monthly, which is interesting, by the way.
I think one of the reasons for it is that companies have realized that the longer they hold code back from users, the higher the chance that there’ll be bugs or that something stops working. And with current tools, it’s pretty easy to keep the wheels moving, constantly releasing new features to customers and getting feedback on each feature.
And Alex, I think we had a couple of discussions on the approach towards CICD. Originally, I was thinking nothing had changed since we moved to cloud-native in terms of how we manage code, and therefore I didn’t expect that it would make much difference, whether I’m using cloud-native or not, to the way I manage my software delivery. Apparently, it does, and it comes from an interesting angle. I know that you’ve done both before and after, so you’re probably the best person to talk about the transition and its impact on how you manage releases in a cloud-native environment.
So usually in a cloud-native environment, one of the things you expect to achieve is constant code delivery to production. It gives a few benefits: one is that the code is always in use; the other is visibility, where the product manager and the UI/UX people can always check what features are working and quickly introduce new changes.
Why is Kubernetes relevant for that? How does Kubernetes change the way you do release cycles?
So, Kubernetes has a massive impact here. Let’s look at what we had before. We had virtual machines running the applications, so it took a long time to deploy applications onto the virtual machines, and usually the installation didn’t go as smoothly as we needed. Releasing features gradually was very hard, especially anything involving small changes. For example, with a virtual machine architecture, one release could take a day. I’ve even seen places where a release went on for a week. That was at one of the places I used to work, not because it takes that long to release the application, but just to make sure that everything runs correctly.
There are too many moving parts that have to be checked. With Kubernetes and containers, one of the big helps was making sure that your development environment, test environment, and production environment were always aligned. They’re very similar. So you know for sure that once you’ve developed and tested something, you can take the same image, deploy it on Kubernetes, and Kubernetes will make sure the container runs the same way it ran in the other environments. I think that’s the biggest impact: making sure the environment itself is the same from development to production.
I think there is also the scaling aspect of that, and the cost aspect. At least I know that you’ve been restructuring the way we deal with this, and that auto-scaling capabilities play into that.
So auto-scaling is a really good point. Again, comparing virtual machines to Kubernetes, with VMs it is very hard to figure out the right scale and then dynamically grow and shrink the fleet; Kubernetes does this pretty easily. A good example is the journey the Jenkins folks went through. As I mentioned before, it used to be static virtual machines running the jobs, and each agent in a Jenkins cluster had to be configured directly for the specific jobs it ran. And if we found ourselves with a large number of jobs, we would probably have to wait.
With the new approach, which I think is around three years old already, they introduced a cloud solution that allows running Jenkins on Kubernetes, with all the agents running on Kubernetes as well. The pods are independent. All you need to do is define pods that satisfy the requirements of a particular job, and Kubernetes will take care of everything required to run that job.
For example, if you have a thousand jobs, it will scale the nodes, and if you have limited nodes, it will hold jobs in pending until resources are free. Then new nodes are provisioned and the jobs run gracefully. So, with the performance and scalability Kubernetes provides, companies can solve these issues in a very elegant and simple way.
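The behavior Alex describes, where jobs beyond the current node capacity are held in pending until resources free up, can be modeled very roughly like this. It's a toy sketch, not the real Kubernetes scheduler, which considers resource requests, affinity, taints, and much more:

```python
# Toy model of capacity-based scheduling: jobs beyond the available
# capacity stay pending until resources free up.

def schedule(jobs, capacity):
    """Split jobs into (running, pending) given the current capacity."""
    running = jobs[:capacity]
    pending = jobs[capacity:]
    return running, pending

# 5 jobs arrive, but only 3 slots are free right now.
running, pending = schedule([f"job-{i}" for i in range(5)], capacity=3)
# The 2 pending jobs start once earlier jobs finish or new nodes are provisioned.
```

The real value in the Kubernetes-based setup is that "capacity" is elastic: a cluster autoscaler can provision new nodes when the pending queue grows, which is exactly the graceful behavior described above.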
Isaac, you mentioned GitLab and some of the other tools. Can you summarize for us, or tell us a little more about, the difference between GitOps and Jenkins as approaches?
I’m not sure it’s a difference in approach; let me just frame the subject. Jenkins is an engine that lets you build jobs and pipelines and run them at specific times, or as triggers from certain events. It’s essentially a job manager, whereas GitOps is an approach. And that approach is very close to what Alex was mentioning before: the requirement to have environments that are consistent across SDLC stages, like test, dev, prod, and so on. I have been on many projects in the past where consistently maintaining environments was a full-time job for people, and every little hiccup between such environments caused huge delays and additional costs. The way to deal with it is basically to remove as much of the human factor as possible.
Just use computers for what they’re good at: automating things. That’s where infrastructure as code comes into play. You describe your environment in a way that can be source-controlled (that’s important) so you can diff between versions and so on. So, you describe your environment that way, and then you need an engine to create an environment based on that description.
That completely removes the human factor from the actual preparation of the environment, and that’s how you achieve really good consistency between environments. That’s really what GitOps is all about: storing your ops in Git and controlling your infrastructure from there. The approach is about storing your infrastructure definitions and then having a tool, or a set of tools, to provision your environments. So, GitOps doesn’t talk about application delivery, it talks about infrastructure delivery, whereas Jenkins can be used for the delivery of anything.
Yeah, it sounds very similar to the discussion we had about imperative versus declarative, where imperative is probably Jenkins, where you specify the steps and build processes in a workflow style, and with GitOps, you define the intent and the structure of where you want to go, and it figures out how to get there, in a sense, if I’m understanding what you just said.
What you just described about GitOps is basically the pull approach, where something monitors your desired state as defined in the code and then finds a way to apply it. Whereas the push approach of GitOps is similar to what Jenkins is doing: I have a description of an environment, and I run a process to provision it or reconcile the differences.
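The pull model described here amounts to a reconciliation loop: an agent reads the desired state from Git, compares it to the actual environment, and applies the difference. A minimal sketch of one reconciliation step, with hypothetical state shapes (real GitOps tooling works against full Kubernetes manifests), might look like this:

```python
# Sketch of one GitOps reconciliation step: diff desired state (from Git)
# against actual state and emit the actions needed to converge.

def reconcile(desired, actual):
    """Return (action, resource_name) pairs to converge actual toward desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))       # in Git, missing from the env
        elif actual[name] != spec:
            actions.append(("update", name))       # env has drifted from Git
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))       # in the env, removed from Git
    return actions

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
plan = reconcile(desired, actual)
```

Run in a loop, this is the pull approach: drift is detected and corrected continuously, with no human pushing changes. The push approach runs the same diff-and-apply once, as a pipeline step.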
Interesting. I think what we’re starting to see, and I think there are analogies to the Cloudify experience here, is that as systems become more complex, you have to use some level of abstraction, which is the declarative, intent-based, infrastructure-as-code approach, whereas if you want to maintain more control, you tend to be more imperative. What we’re starting to see is a mix between the two approaches, because everyone realizes that they need both. You can’t go completely with one side versus the other. Sometimes it’s good to be imperative, sometimes it’s good to be declarative, and the question is how to mix the two, not so much how to split them into completely different camps. So I think that opens up another kind of question, Ilan, on the infrastructure side that I think you touched on. Maybe you could lead that.
So yeah, that’s an excellent segue into some of the next questions we had. In terms of this approach, what’s the current approach for actually managing infrastructure as part of the CICD tooling, and how does the infrastructure play a part in that whole process?
Again, I think it ties into the question you asked before about the Gartner article and the fact that everything is becoming more domain-specific. In the early days, when Jenkins came to the world and cloud came to the world, everyone was showing how they could spin up their VM, push code into that VM, make API calls, et cetera.
To cut a long story short, we’re getting to the point where the degree of complexity of managing infrastructure is such that, if you start to mix it into your build processes, things become messy pretty quickly; you lose control over what’s going on there, and you end up with a lot of duplicate work across the board. So what we’re starting to see is a much clearer separation, even by the tools themselves. The GitOps tools you mentioned already integrate with Terraform and Ansible for that type of task, and there is a growing number of plugins, in the case of Jenkins, to do the same thing.
And I believe Circle CI has, I’m assuming, a similar approach to dealing with infrastructure. But the general idea is that infrastructure is complex. You don’t want to mix the way you do a build into the way you do infrastructure. It’s good to separate the two but tie them together in a loosely coupled way. They’re not completely separate, but they shouldn’t be tightly coupled either. There needs to be a better separation between the two, because once they become spaghetti, everything gets much more complex to manage.
Yeah. And that ties into a point Isaac made earlier: the more complex it becomes, the more people have full-time jobs just managing that complexity and the manual work. And it becomes even more complicated, Nati, with CICD in a multi-cloud environment, when it’s not even just one uniform infrastructure but various types of cloud infrastructure, whether private cloud or public cloud. What are the best practices? How does that look? Alex, can you guys take us through some of this approach?
I can start with what I’ve seen so far. What I’d say is that everyone wants consistency across the different tools and different environments, and a way to manage it. In reality, you start your journey some way, especially when you start with the cloud. The cloud provides a lot of tools, and that leads you to use those tools. So you end up using, say, Azure, and I’ve seen even large enterprises start this way; Azure’s tools and Amazon’s tools for CICD are different, and there isn’t a common tool to manage all of them. But I think that will change. When people and organizations realize the level of stickiness in their code, because of what we mentioned earlier, that it’s not easy to get away from it, they will try to take back control over that layer.
It’s a normal tendency when such a tool or such an approach comes into the market to try to fit the problem into the tool instead of the other way around. No one tool can do everything right. And organizations are realizing that.
Yup, totally agree.
– I think that when it’s a single cloud environment, it’s pretty straightforward, but when you go into multi-cloud environments, from my perspective, the last thing you want is multiple CICDs where you have to remember and track where the job for each particular cloud or environment lives. You usually want a single pane of glass where you can look and understand what’s happening in your environment and in your CICD, because it’s like monitoring: you want to see that everything across development goes as smoothly as possible. If you have multiple departments, you can work with several CICDs, each department responsible for a separate one. But usually, when you have a single team or department responsible for an application running on multiple environments, what you prefer is a single place to observe and understand what is happening.
Excellent. And I think that also ties to the fact that this is where the separation between infrastructure and CICD becomes even more logical, because what you don’t want to do is tie your infrastructure, code, and everything else into a specific cloud provider at a very deep level.
Especially now, when you have all kinds of abstractions above multiple clouds, it’s very easy to have one orchestrator that provisions across your clouds.
And you mentioned Kubernetes. I think Kubernetes is probably the ultimate abstraction in that respect.
That’s a great point about avoiding lock-in at the strategic level. I’m seeing more and more people mention as replacements for Jenkins things like Azure DevOps and some of the other cloud-specific ones we mentioned. And obviously, the end goal there is to lock enterprises into very specific CICD tools. To take this even further, there are also use cases we’re seeing here at Cloudify with edge as part of the CICD pipeline. Do you see any specific use cases? Is this relevant? I’d like some thoughts on that.
I’ll start with my personal experience. When I use Google Wi-Fi today, I get alerts on my phone that say “your Wi-Fi got better” because it was pushed a new version. I didn’t have to call an IT technician; it just happened. The Wi-Fi sits in my house, and the push was done from somewhere in the cloud. I have no idea where. I just connected it. That’s the world we’re getting into. Devices that used to be siloed, that you bought at some point, installed, plugged into your power supply, and that then stayed completely off the network, are now connected to your Wi-Fi and controlled by your phone.
And therefore, they behave like any other compute resource you have in the cloud, only they don’t live in your central cloud; they live in the outer cloud, if you’d like. That raises the question of how you manage those processes. Again, looking at organizations I’ve dealt with, they have to write a lot of custom work. They don’t use Jenkins to push code to the Wi-Fi device; they have special software doing that. But there’s no reason for that. There’s no reason, in my view, to treat the edge differently than we treat other processes from a release-cycle perspective. And if you extend it from this Wi-Fi example to machine learning: you have IoT devices, like cameras or sensors, scattered around to give you feedback on traffic or temperature or other variables you want to measure, and you start to run code closer to the edge to do some analytics. That code itself needs to be updated continuously, because you continuously improve the algorithms behind that logic.
So again, this needs to be part of a release cycle, and today it’s mostly done as custom work. We can extend it to CDN and others. It’s becoming more and more common to have a distributed type of environment, with edge being the extreme case showing that most release-cycle work for the edge is done through custom solutions, but it needs to be more standardized. That’s where I think CICD also needs to be extended to support these use cases, which are becoming more mainstream than ‘edge’ cases. Pun intended.
And then one more thought about that. In terms of managing the network as part of this, is there also a specific angle that you see here?
So, if you look at many of the clouds themselves, they provide distributed data centers, multi-site, multiple regions across the globe. Azure and AWS specifically come with extensive network services as part of that. So even thinking about it from that angle, there is no reason why you would manage those network services from the cloud providers any differently than you manage the compute resources or the database resources on those clouds. That becomes another resource you need to manage.
And there are many nuances, like security, scale, distribution, and latency, that you need to deal with. But that doesn’t necessarily mean you need a completely different process; it just means the processes need to be adjusted to meet those requirements.
So, it’s just another level of complexity that needs to be addressed as we move more and more from the cloud to the edge. I want to cover one last section that we have here. So, what’s Cloudify? Cloudify is a multi-cloud orchestrator, and we offer what we call environment as a service. But I want all three of you, if you can, to tell us a little more about the Cloudify-specific approach to CICD, the integration, the value, some of the best practices. If you guys can address that.
We fairly recently released a plugin for Jenkins. We built a Java client for our REST service, and we use that through a Jenkins plugin that we wrote. What it gives you is the ability to interact with Cloudify in a way that is very familiar, because you’re already a Jenkins user. It allows you to compose jobs and pipelines based on build steps as you usually would. It’s just that the interaction with Cloudify is much better, because you have a build step, for example, for uploading a blueprint, creating environments, deleting environments, and so forth. That’s the level of support we have today for Jenkins, and we also plan to support other CICD tools in the future.
And I think we have two aspects of integration there. We have something that allows us to integrate seamlessly between the different orchestrators: Ansible, CloudFormation, Azure ARM, Terraform. And I think one of the… Go ahead, Isaac.
Sorry, that’s the second level. What I was describing was the first level, where we offer discrete build steps, or building blocks. The second layer is something cool that we did there. We provide build steps that allow you to invoke other orchestrators such as Azure ARM, Terraform, CloudFormation, and so forth. The way we do that behind the scenes is by using Cloudify. We have a blueprint that can accept any ARM template, CloudFormation template, Terraform template, or Ansible playbook. We provide a Jenkins-oriented mechanism to configure these steps. For example, to run Terraform, all you have to provide is a URL to the template and the parameters for your template.
We end up using Cloudify to orchestrate that. So, right off the bat, just by installing that plugin, you can create any environment using Terraform, Azure ARM, CloudFormation, or Ansible right through Cloudify, without even knowing that it is Cloudify behind the scenes.
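For example, invoking Terraform through such a generic build step could look roughly like this. The `cfyTerraform` step name and its parameters are hypothetical placeholders, not the plugin’s confirmed API:

```groovy
// Hypothetical step name and parameters, shown for illustration only.
stage('Provision network with Terraform') {
    steps {
        cfyTerraform(
            // URL of your Terraform template archive
            templateUrl: 'https://example.com/tf/network.zip',
            // Inputs passed through to the template's variables
            variables: [region: 'us-east-1', cidr: '10.0.0.0/16'])
    }
}
```

The point is that the Jenkins job only supplies a template URL and parameters; Cloudify wraps the template in a generic blueprint behind the scenes, so the job never touches Cloudify’s own DSL directly.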
Excellent. And I would add that, since we mentioned edge at the beginning: one of the things we had done before the integration with Jenkins was to handle pushing updates to remote devices spread across the globe. We called it Cloudify Spire. So, it became a very natural fit. Now that we’ve done the plugin for Jenkins, we can plug Spire into it and extend the same thing that Isaac mentioned out to the edge.
So, one of the things that we’ll be doing is using Cloudify in our own CICD. The main goal of CICD is continuous integration and delivery. None of the tools, whether new ones or legacy tools like Jenkins, handle provisioning environments, and that’s exactly where we’ll be using Cloudify. So, as part of our CICD, we are delegating environment provisioning to Cloudify, and the end goal is to enable our developers to merge code to master as fast as possible by changing our pipeline so that once code is merged to master, we can validate it.
And we will do it by provisioning a new environment for each merge to master, so that we can run the integration tests on the latest code and see whether it works. At the end of each integration test run, if everything is fine, we delete the environment itself. So, it’s like having an ad hoc environment on the go. Even if, for example, multiple developers merge to master at the same time, we can check each merge simultaneously without one interfering with another. So, it increases the velocity of deploying to production.
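That per-merge flow can be sketched as a pipeline that creates an environment, runs the integration tests, and always tears the environment down afterward. Again, the Cloudify step names and the test script path here are illustrative assumptions:

```groovy
// Sketch of an ephemeral-environment-per-merge pipeline.
// createCloudifyEnv / deleteCloudifyEnv are assumed step names;
// run-integration-tests.sh is a hypothetical test entry point.
pipeline {
    agent any
    environment {
        // One environment per build, so concurrent merges don't collide
        DEPLOYMENT_ID = "it-env-${env.BUILD_NUMBER}"
    }
    stages {
        stage('Provision test environment') {
            steps {
                createCloudifyEnv(
                    blueprintId: 'integration-env',
                    deploymentId: env.DEPLOYMENT_ID)
            }
        }
        stage('Integration tests') {
            steps {
                sh './run-integration-tests.sh'
            }
        }
    }
    post {
        // Delete the ad hoc environment whether tests pass or fail
        always {
            deleteCloudifyEnv(deploymentId: env.DEPLOYMENT_ID)
        }
    }
}
```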
Excellent. And that pretty much concludes the session, guys.
We’re a little over time, so we can, if someone wants to tackle this quickly, what’s the impact of integrating Cloudify into the organization? We mentioned the new Jenkins plugin. How is it looking? Is there an initial blueprint, pardon the pun, to do it?
The answer most people expect is that it’s a long journey: we’re introducing a new tool, and that’s going to be a long and risky process. That’s because the approach we’ve seen in the market so far, and this ties to the earlier discussion we had in this podcast, has been to release a completely new CICD tool whenever there are different requirements. So the first and clearest statement is that this is not yet another CICD. We’re not trying to position Cloudify as a replacement for Jenkins, Circle CI, or any of those tools. What we’re trying to do is augment those tools with things that Cloudify can bring that we think are missing in those platforms, and to provide a consistent way to manage infrastructure across them.
So even if your organization is using multiple CICD tools, delegating that piece to a common place would give you some consistency without having to agree on, or try to consolidate onto, one tool. On the flip side, we’re not trying to position Cloudify as the central thing that everyone goes through. The approach we’re taking, precisely because a CICD is already in place, allows us to integrate in a much more incremental way.
So, think about Cloudify as just another build step in your CICD. If you see that it’s helping, saving you time, and providing value, you can extend it to other build steps or other use cases within the organization, and do that integration incrementally rather than with a rip-and-replace type of approach. That’s the nice thing about tying Cloudify in as a plugin within Jenkins instead of positioning it as yet another CICD, which is a much harder journey.
I’ve seen many times with CICD tools that part of the pipeline itself is provisioning the environment and infrastructure, in the best case, when they’re not just using a static one. And usually what happens is that provisioning a new environment, both application and infrastructure, is wrapped up in scripts, and scripts are not the best way to validate that an environment is up and running. Most of the time it will probably work as it should, but what happens in the cases where it doesn’t? Scripts don’t give you good validation or lifecycle management for this.
And I think where Cloudify can be of benefit is in making sure that the environment is up and running and in owning the whole lifecycle of provisioning. The CICD can delegate environment provisioning to Cloudify, then get control back and continue with the remaining steps.
Thank you, Alex. Just to close it out: I also hear that from different customers and users who have run into this mess and are coming to us to help them get out of it and do things in a more streamlined way. So, thank you so much. I’ll hand it over to Johnny to take us home.
Thank you so much, guys. Thanks to everyone who contributed to this really interesting and insightful session on Jenkins. I even learned some stuff from it. In the next session, we’re going to get back on track and take a deeper dive into serverless adoption and integration. Just to remind you, as I said at the beginning of the podcast, all supporting material will be on the podcast page; that’s cloudify.co/podcast. And I think there’s also a way for you to contact us there with suggestions, conversation topics, and anything you’d like to hear us talk about on these podcasts. So, this is us thanking you for listening. Keep well and stay safe. Thanks, bye.