Putting Your Cloud in Order: Multi-Cloud Governance and Managed Solutions
- August 20, 2018
- Posted by: Ilan Adler
- Category: Governance, Multi-Cloud
Cloud governance and cloud management are terms that IT practitioners and leadership are going to be confronted with over the coming years. These concepts are interlinked and encompass a number of areas of concern. Understanding the difference between cloud governance and cloud management and how this difference impacts real-world IT operations requires knowledge of the overall evolution of IT as well as an exploration of problems facing organizations today.
At its core, cloud governance means ensuring that applications and services located on a public or service-provider cloud behave as expected. It is achieved through cloud management solutions, which typically enforce governance via policies, profiles, Access Control Lists (ACLs), encryption, and other security features.
In more technical terms, cloud governance is a catch-all term for both the theoretical aspects of engaging cloud infrastructure (design, architecture, security and privacy policies, etc.), as well as the practical side, which includes implementation, monitoring, alerting, incident response, and remediation.
Cloud management, in contrast, has traditionally been concerned only with the implementation portion of this process. Monitoring, alerting, incident response, remediation, regulatory compliance, security, etc. have often required third-party products. In other words, cloud governance solutions will, by necessity, employ some form of cloud management, but not all cloud management solutions will provide cloud governance capabilities.
As organizations connect their on-premises infrastructures with their cloud providers, as well as engage more than one cloud provider, the gap between cloud management and cloud governance becomes ever more important. Today’s organizations need to apply their designs, security, privacy and regulatory compliance requirements across multiple infrastructures, all while achieving cross-infrastructure workload orchestration.
A Brief History of IT Governance
To understand some of today’s governance challenges, it helps to revisit IT history. The story behind the cloud revolution stretches as far back as the 1960s, a time of early mainframes and minicomputers. It was a simpler era: regulation regarding data use, privacy, and data sovereignty was more or less non-existent.
There was little need for regulation because there simply weren’t that many computers to regulate. Networking—let alone the internet—was a concept in its infancy, and exchanging data between computers was rare.
At the time, computers were also not generally used in an interactive, real-time manner. Batch processing was the standard. While users could submit jobs for processing, execution was controlled by a class of systems administrators who jealously guarded access to their computers. Jobs that would crash a computer, or even those that would take too long, were not permitted.
In other words, IT governance at the time was accomplished through highly manual processes. As with all manual processes, this didn’t scale well.
The microprocessor was introduced in the 1970s, and by 1979, a flood of what we would today call personal computers had entered the market. The availability of real-time, interactive computers that individuals could use without permission from systems administrators transformed businesses of the early 1980s.
Computer ownership and use exploded, as did the amount of data generated. By the mid-1980s, it was increasingly commonplace to own a computer at home, and floppy disks made transporting data between computers both simple and affordable. That development, along with the emergence of the first corporate networks, confronted the IT industry with its first governance crisis.
Data left the company premises on a regular basis. It was also increasingly easy to find an unattended computer within a business that had not been networked with the others, and which was thus not regularly monitored. Eventually, directory services emerged, with the dominant solution of the day being Novell NetWare.
Directory services sped the adoption of the client/server model of IT, with files being stored centrally and users needing to authenticate against the central IT directory in order to access their data. Systems administrators wrested control of corporate IT back from the chaos caused by everyone having their own PC. Over the next two decades, vendors developed new tools that allowed systems administrators to deepen their control over all aspects of IT.
Just as IT teams began getting a handle on governance issues, practical management issues started to consume their time. By the late 1990s, the number of workloads under management was beginning to get out of control, especially as each workload required its own physical server. VMware made virtualization for x86 servers mainstream, and it was an ease-of-use revolution for wrangling infrastructure.
Virtualization also helped control costs because it commoditized server, storage, and networking vendors. By 2006, virtualization was also helping with governance issues by making the use of service blueprints, templates, snapshots and cloning simple and affordable. With IT infrastructure management back under control, IT vendors and practitioners turned their eyes once more to governance issues, which by this point had become a problem again.
In the mid-2000s, always-on broadband internet was mainstream, but IT security was lax. Malware outbreaks were disappointingly common, and the value of data had become so widely recognized that both organized crime and state actors devoted significant resources to breaching corporate systems. IT teams the world over refocused on security. Regulations started to emerge on privacy, encryption, and the use of data. The industry’s models of what IT governance should look like changed, and vendors developed products accordingly.
Cloud Computing and Going Through The Loop Again
In 2006, Amazon unveiled AWS, ushering in the era of cloud computing. Complex IT infrastructure could now be ordered in a self-service fashion and in real time. No longer was a systems administrator necessary to create a server or deploy an application or multi-layered service.
Anyone with a credit card could stand up enough IT to power an entire company. This eventually included data warehousing and powerful analytics solutions powered by machine learning and artificial intelligence. Once again, employees were able to completely bypass systems administrators—especially their governance controls—doing whatever they wanted with data, and sharing it with whomever they wished.
This became—and remains—a significant concern for organizations of all sizes, especially in light of the increasing number of regulatory compliance regimes that came into force in the 2010s. The E.U.’s General Data Protection Regulation (GDPR) alone allows for fines of €20 million, or 4 percent of annual global top-line revenue, whichever is greater.
Today, cloud governance solutions are emerging. At the same time, however, the number of workloads under management—both on-premises and in the cloud—has exceeded the capabilities of traditional management solutions. To cope, IT teams are increasingly turning to composable workloads and infrastructure as code solutions. Both of these solutions allow for the management of workloads at a scale that the templates and clones of the virtualization era can’t handle.
Further complicating today’s landscape is that many organizations no longer consider a single infrastructure to be sufficient. Organizations are using on-premises infrastructures in conjunction with public clouds, and they are engaging multiple public clouds simultaneously.
Currently, we are at another tipping point in the evolution of IT. Just as virtualization commoditized server vendors as part of solving the infrastructure management problems of the last decade, much of the next decade will be defined by multi-cloud management solutions that enable the commoditization of infrastructure providers. Already, significant growth in this area has begun. Welcome to the hybrid multi-cloud era: it comes with its own problems.
If organizations can restrict their employees to a single infrastructure, IT governance is fairly well defined. The management tools for a single infrastructure incorporate much of what is needed to accomplish IT governance. What isn’t included in the management tools for a given infrastructure is provided by third-party software. Well known solution stacks exist to get a handle on everything from an on-premises VMware infrastructure to Amazon’s AWS.
Working with multiple infrastructures, however, remains a problem. Increasingly, data must be shared among workloads that operate on different infrastructures. Data also needs to be shared with suppliers, contractors, auditors, and customers—and all in a secure and verifiable fashion. Security needs to be coordinated across infrastructures. At the same time, workloads need to be supported, maintained and orchestrated, no matter where they execute. This is the job of Multi-Cloud Management (MCM) solutions.
The Importance of Unified Authentication
Just as directory services were necessary for imposing order on the chaos of the early PC era, they are also at the core of governance for the hybrid multi-cloud era. Today, directories are called identity services. They enable Role-Based Access Control (RBAC), which, in turn, makes possible everything from user access control to encryption, policies, and profiles.
Each infrastructure has its own identity service. Making the identity services interoperate between infrastructures—often called Unified Authentication (UA)—is one of the most important functions of MCM solutions.
Management of hybrid multi-cloud deployments would be difficult—bordering on impossible—without UA. A developer, systems administrator, or even end user would have to know and keep track of credentials for each infrastructure they used. Monitoring that usage—and thus being able to audit it for regulatory compliance or security purposes—would require correlating users among infrastructures.
Some functionality, such as examining log-in activity across all infrastructures in order to spot suspicious behavior, may not be possible without UA. Generating reports regarding who has accessed what data and when often requires post-processing, or even manual intervention without UA.
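To make the correlation problem concrete, here is a minimal sketch (all provider names, usernames, and the mapping itself are hypothetical) of attributing per-infrastructure login events to a single canonical identity—the kind of bookkeeping a UA layer handles automatically, and that must otherwise be reconstructed by hand:

```python
from collections import defaultdict

# Hypothetical mapping from per-provider usernames to one canonical identity.
# With UA, the identity service maintains this mapping; without it, auditors
# must rebuild it manually for every compliance report.
IDENTITY_MAP = {
    ("aws", "jdoe"): "jane.doe",
    ("azure", "jane.d"): "jane.doe",
    ("on-prem", "jdoe2"): "jane.doe",
}

def correlate_logins(events):
    """Group raw login events (infrastructure, username, timestamp)
    by canonical identity so activity can be audited in one place."""
    by_identity = defaultdict(list)
    for infra, user, ts in events:
        identity = IDENTITY_MAP.get((infra, user), f"unknown:{infra}/{user}")
        by_identity[identity].append((infra, ts))
    return dict(by_identity)

events = [
    ("aws", "jdoe", "2018-08-20T09:00"),
    ("azure", "jane.d", "2018-08-20T09:05"),
]
# Both logins are attributed to the same person, so cross-infrastructure
# activity—such as suspicious log-in patterns—becomes visible.
print(correlate_logins(events)["jane.doe"])
```

Without the unified mapping, each event would look like an unrelated user, and spotting suspicious cross-infrastructure behavior would require exactly the post-processing described above.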
Real World Challenges
If UA is the strong core of making MCM practical, orchestration is the framework upon which next-generation governance is being built. Beyond simply being able to say which data is where, who has access to it, and who has accessed it, there are requirements to demonstrate that workloads are secure, regardless of where they are deployed.
Verifiability of workloads is increasingly intertwined with reproducibility, and both are concepts important for how MCM solutions manage and orchestrate workloads. The infrastructure that exists below the application—the operating system, hypervisor, networking, storage, etc.—is defined in code, and used to generate identical versions of a given application platform across multiple infrastructures.
With modern MCM solutions, security for a given type of workload is defined once in a central policy, and applied regardless of where that workload is instantiated. Access is defined in a policy and enforced using RBAC and UA. Networking, storage, data protection, reporting, analytics, and automated auditing are also applied as uniformly as possible.
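The define-once, apply-everywhere pattern can be sketched roughly as follows. This is an illustration, not any vendor’s actual API: the policy keys and per-provider settings are invented for the example.

```python
# Hypothetical central policy for one class of workload, defined once
# and translated into each infrastructure's native controls at deploy time.
POLICY = {
    "workload": "payment-service",
    "encryption_at_rest": True,
    "allowed_roles": {"payments-admin", "auditor"},
}

def render_for(provider, policy):
    """Translate the central policy into provider-specific settings.
    Provider names and setting keys are illustrative only."""
    if provider == "aws":
        return {
            "KmsEncryption": policy["encryption_at_rest"],
            "IamRoles": sorted(policy["allowed_roles"]),
        }
    if provider == "on-prem":
        return {
            "disk_encryption": policy["encryption_at_rest"],
            "ldap_groups": sorted(policy["allowed_roles"]),
        }
    raise ValueError(f"unsupported provider: {provider}")

# The same intent is enforced everywhere the workload lands.
print(render_for("aws", POLICY))
print(render_for("on-prem", POLICY))
```

The design point is that operators edit only the central policy; the translation layer keeps every infrastructure consistent, which is what makes uniform reporting and automated auditing tractable.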
While all of the above is possible manually for small deployments across multiple infrastructures, keeping all of the various balls in the air quickly moves beyond human capability as the number of workloads increases. With innumerable pressures on organizations to constantly scale their IT, MCM solutions are quickly becoming a real world necessity.
The Importance of Standards
Standards make MCM solutions possible. Standards let us “put to bed” new technologies and focus on the next leap forward. MCM wouldn’t be possible, for example, if each infrastructure that organizations seek to manage hadn’t solved the governance and management challenges inherent to its ecosystem. Fortunately, most infrastructure providers—both on-premises and off—have opened up the orchestration of their platforms via Application Programming Interfaces (APIs).
Standards are also absolutely critical to regulatory compliance efforts. For example, there’s no point in specifying FIPS 140-2 compliance in a policy if the relevant encryption standards aren’t supported. It isn’t enough that each of the infrastructures in use can be managed by one’s MCM.
Each of the infrastructures in use must either meet the needs of the regulatory compliance regimes with which organizations must comply, or an MCM solution needs to be fully aware of the compliance status of the infrastructures in question. This way, a compliance-aware MCM solution can prevent workloads that must meet certain requirements from being deployed to infrastructures that can’t meet those requirements.
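A compliance-aware placement check can be sketched in a few lines. The infrastructure names and certification sets below are hypothetical; real MCM products would source this data from audited certification records rather than a hard-coded table:

```python
# Illustrative table of which compliance regimes each infrastructure
# is certified for (names and certifications are made up).
INFRA_COMPLIANCE = {
    "cloud-a": {"GDPR", "FIPS-140-2"},
    "cloud-b": {"GDPR"},
}

def eligible_targets(required, infras=INFRA_COMPLIANCE):
    """Return only the infrastructures whose certifications cover
    every regime the workload requires."""
    return [name for name, certs in infras.items() if required <= certs]

# A workload needing both GDPR and FIPS 140-2 may only land on cloud-a.
print(eligible_targets({"GDPR", "FIPS-140-2"}))
```

Gating deployment this way is exactly the behavior described above: a workload with unmet requirements simply has no eligible targets, and the MCM solution refuses to place it.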
Standards allow administrators to administer via policy while letting the MCM solution handle the rest. Just as cloud computing moved administrators away from mundane infrastructure tasks, MCM vendors increasingly recognize that keeping track of which workloads and infrastructures support which standards is every bit as much a “keeping the lights on” exercise as swapping out dead hard drives.
Multi-Cloud Management and orchestration is one of the core challenges facing organizations as we head into the next decade. The most important tech vendors of the 2020s will likely be those that provide MCM solutions.
Organizations that resisted virtualization ended up at a disadvantage compared to those that embraced it. Similarly, organizations that embraced cloud computing are able to refocus their IT talent today on automating previously non-automated business processes and even engaging in revenue-generating activity.
Over the next few years, organizations that ignore the utility and the importance of MCM will find themselves at a similar competitive disadvantage compared to those organizations that embrace MCM. The evolution of IT is cyclical. We are fortunate that IT history is well documented, so we might learn from it.