Linux containers: isolation as a technological breakthrough

Imagine that you are developing an application on your laptop, where the working environment has a particular configuration. The application relies on that configuration and depends on certain files on your machine. Other developers may have slightly different configurations. In addition, your organization runs test and production environments with their own configurations and file sets. You would like to emulate these environments as closely as possible, but you certainly do not want to reproduce heavy, complex servers on your own machine. So how do you make the application work in every environment, pass quality control and reach production, without running into a pile of problems along the way that demand constant code rework?
 
 
 
 
The answer: use containers. Along with your application, a container carries all the necessary configuration (and files), so it can easily be moved from development to testing and then to production without fear of side effects. Crisis averted, everyone wins.
 
The story begins with FreeBSD Jail, which appeared in 2000. It allows you to create, inside a single FreeBSD operating system, several independent systems that share its kernel: the so-called "jails". Jails were conceived as isolated environments that an administrator could safely hand over to internal users or external clients. Because a jail is built on the chroot call and is a virtual environment with its own files, network and users, processes cannot break out of it and damage the underlying OS. However, due to design limitations, the Jail mechanism never provided complete process isolation, and over time ways to "escape" from a jail were found.
 
 
But the idea itself was promising, and already in 2001 the VServer project appeared on the Linux platform, created, in the words of its founder Jacques Gélinas, to run "several standard Linux servers on the same machine with a high degree of independence and security." Linux thus gained a foundation for running parallel user environments, and what we now call containers gradually began to take shape.
 
 

On the way to practical use


 
A major and rapid step toward practical isolation came from combining existing technologies: in particular, the cgroups mechanism, which operates at the Linux kernel level and limits the system resources consumed by a process or group of processes, and the systemd init system, which is responsible for setting up user space and managing processes. The combination of these mechanisms, originally designed to improve overall manageability in Linux, made it possible to control isolated processes far better and laid the groundwork for cleanly separating environments.
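 
To make this more concrete, here is a minimal sketch, written in Python purely for convenience, of how systemd exposes cgroup resource controls: it asks systemd-run to start a command in a transient unit with memory and CPU limits attached. The stress-ng command and the specific limits are illustrative assumptions, and the call typically needs root privileges.

    import subprocess

    # Run a memory-hungry command in a transient systemd scope with cgroup limits.
    # Assumes a Linux host with systemd; stress-ng and the limit values are only
    # illustrative. Typically requires root (or a user manager with delegation).
    subprocess.run(
        [
            "systemd-run",
            "--scope",                 # transient scope unit tracked by systemd
            "-p", "MemoryMax=256M",    # memory ceiling enforced via cgroups
            "-p", "CPUQuota=50%",      # CPU ceiling enforced via cgroups
            "--",
            "stress-ng", "--vm", "1", "--vm-bytes", "512M", "--timeout", "10s",
        ]
    )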
 
 
The next milestone in container history came with the development of user namespaces, which allow the user and group IDs assigned inside a namespace to be kept separate from those outside it. In the context of containers, this means that users and groups can have the privileges to perform certain operations inside the container, but not outside it. This is similar to the Jail concept, but more secure thanks to the additional process isolation.
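 
A minimal sketch of this idea, assuming a Linux system with unprivileged user namespaces enabled and the util-linux unshare tool installed: the same unprivileged user is mapped to UID 0 inside a new user namespace, so the process looks like root there while gaining no extra privileges on the host.

    import subprocess

    # Compare the UID seen on the host with the UID seen inside a new user
    # namespace created by `unshare`. Purely illustrative.
    on_host = subprocess.run(["id", "-u"], capture_output=True, text=True)
    in_ns = subprocess.run(
        ["unshare", "--user", "--map-root-user", "id", "-u"],
        capture_output=True, text=True,
    )
    print("UID on the host:          ", on_host.stdout.strip())
    print("UID inside the namespace: ", in_ns.stdout.strip())  # prints 0
    # The mapping itself is recorded in /proc/<pid>/uid_map of the namespaced process.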
 
 
Then came the Linux Containers project (LXC), a virtualization system that offered a number of much-needed tools, templates, libraries and language bindings, dramatically simplifying the use of containers in practice.
 
 

The appearance of Docker


 
In 2008 the Docker company (then still called dotCloud) joined the scene with its eponymous technology, combining the achievements of LXC with advanced tools for developers and making containers even easier to use. Today Docker's open source technology is the most popular tool for deploying and managing Linux containers.
 
 
Along with many other companies, Red Hat and Docker participate in the Open Container Initiative (OCI), a project that aims to unify and standardize container technology.
 
 

Standardization and the Open Container Initiative


 
The Open Container Initiative operates under the auspices of the Linux Foundation. It was established in 2015 "with the goal of creating open industry standards for container formats and runtime environments." At the moment its main task is developing the specifications for container images and runtime environments.
 
 
The runtime specification defines the composition and structure of a filesystem bundle and how that bundle should be unpacked and run by a compliant runtime. Essentially, this specification exists so that a container behaves as intended, with all the necessary resources available and in the right places.
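 
As a rough illustration, the sketch below (Python, with illustrative names) assembles the skeleton of such a filesystem bundle: a rootfs directory plus a minimal config.json of the kind the runtime specification describes. A real bundle, such as one generated by runc spec, contains considerably more detail.

    import json
    from pathlib import Path

    # Skeleton of an OCI filesystem bundle: a root filesystem plus config.json.
    # Field names follow the OCI runtime spec; the values are illustrative.
    bundle = Path("my-bundle")
    (bundle / "rootfs").mkdir(parents=True, exist_ok=True)  # unpack an image here

    config = {
        "ociVersion": "1.0.2",
        "process": {
            "args": ["/bin/sh"],   # what the runtime executes inside the container
            "cwd": "/",
            "user": {"uid": 0, "gid": 0},
        },
        "root": {"path": "rootfs", "readonly": True},
        "hostname": "demo",
    }
    (bundle / "config.json").write_text(json.dumps(config, indent=2))
    # A compliant runtime can then run it, e.g.:  runc run -b my-bundle demo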
 
 
The container image specification defines the standards for "the image manifest, filesystem serialization and image configuration."
 
Together, these two specifications define what is inside a container image, as well as its dependencies, environment variables, arguments and other parameters needed to run the container correctly.
 
 

Containers as an abstraction


 
Linux containers are another evolutionary step in how applications are developed, deployed and maintained. By providing portability and version control, a container image helps ensure that an application that runs on a developer's computer will also work in production.
 
 
Requiring far fewer system resources than a virtual machine, a Linux container comes close to it in isolation capabilities while making multi-tier, composite applications much easier to maintain.
 
 
The point of Linux containers is to speed up development and help you respond quickly to business requirements as they arise, rather than to supply one specific piece of software for solving every emerging problem. Moreover, you can package not only entire applications into containers but also individual parts of an application or its services, and then use technologies such as Kubernetes to automate and orchestrate these containerized applications. In other words, you can build monolithic solutions, where all of the application logic, runtime components and dependencies live in a single container, or you can build distributed applications from many containers that work together as microservices.
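 
To give a feel for what "a service in a container" looks like in practice, here is a small sketch using the Docker SDK for Python (pip install docker); the image, port mapping and container name are illustrative assumptions, and a local Docker daemon is assumed to be running.

    import docker

    # Start one service as an isolated container, then tear it down.
    client = docker.from_env()

    web = client.containers.run(
        "nginx:alpine",          # the image carries the service and its dependencies
        name="demo-web",
        detach=True,
        ports={"80/tcp": 8080},  # publish container port 80 on host port 8080
    )
    print("started", web.short_id)

    web.stop()
    web.remove()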
 
 

Containers in production environments


 
Using containers in production is a great way to speed up the delivery of software and applications to customers, but it naturally raises the responsibility and the risks. Josh Bressers, a security strategist at Red Hat, explains how to keep containers secure.
 
 
"I have been dealing with security issues for a very long time and they are almost always ignored as long as the technology or idea does not become mainstream," complains Bresser. - Everyone agrees that this is a problem, but the world is so organized. Today the world is seized by containers, and their place in the general picture of security begins to clear up. I must say that the containers are not something special, it's just another tool. But since today they are in the center of attention, it's time to talk about their safety.
 
 
"At least once a week someone assures me that it is safe to run workloads in containers, so there is no need to worry about what is inside them. In reality that is completely wrong, and this attitude is very dangerous. Security inside the container matters just as much as security in any other part of your IT infrastructure. Containers are already here, they are in active use and spreading at an astonishing pace, yet there is nothing magical about their security. A container is only as secure as the content running inside it. So if your container is full of vulnerabilities, the result will be exactly the same as on bare metal with the same pile of vulnerabilities."
 
 

What's wrong with container security


 
Container technology changes the established view of the computing environment. The essence of the new approach is that you have an image containing only what you need, and you launch it only when it is needed. There is no longer any extraneous software that got installed at some point, runs for no clear reason and can cause serious trouble. In security terms this is about the "attack surface": the less you run inside a container, the smaller that surface and the better the security. However, even if only a few programs run inside the container, you still need to make sure its contents are not outdated and riddled with vulnerabilities. The size of the attack surface does not matter if something installed inside has serious security holes. Containers are not all-powerful; they need security updates too.
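 
One way to get a feel for the attack surface is simply to count what is installed in an image. The sketch below uses the Docker SDK for Python with illustrative image tags to compare a minimal image with a general-purpose one; the exact numbers will vary, and a local Docker daemon is assumed.

    import docker

    # Count installed packages in a minimal image versus a general-purpose one.
    client = docker.from_env()

    alpine_pkgs = client.containers.run(
        "alpine:3.18", ["sh", "-c", "apk info | wc -l"], remove=True)
    ubuntu_pkgs = client.containers.run(
        "ubuntu:22.04", ["bash", "-c", "dpkg -l | grep -c '^ii'"], remove=True)

    print("packages in alpine:", alpine_pkgs.decode().strip())
    print("packages in ubuntu:", ubuntu_pkgs.decode().strip())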
 
 
Banyan published a report titled "Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities." Thirty percent is a lot. Because Docker Hub is a public registry, it holds a huge number of containers created by all sorts of people, and since anyone can publish to such a registry, nobody guarantees that a freshly published container is free of old, "leaky" software. Docker Hub is both a blessing and a curse: on the one hand it saves a great deal of time and effort when working with containers; on the other, it offers no guarantee that the image you pull is free of known security vulnerabilities.
 
 
Most of these vulnerable images are not malicious; nobody put "leaky" software into them on purpose. Someone simply packaged software into a container at some point and uploaded it to Docker Hub. Time passed, and a vulnerability was found in that software. Unless someone keeps track of this and updates the image, Docker Hub will remain a breeding ground for vulnerable images.
 
 
When containers are deployed, base images are usually pulled from a registry. With a public registry you cannot always tell what you are dealing with, and in some cases you get an image with very serious vulnerabilities. The contents of a container really do matter. That is why a number of organizations have started building scanners that look inside container images and report the vulnerabilities they find. But scanners are only half the solution: once a vulnerability is found, you still have to find and install a security update for it.
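 
As an illustration of where a scanner fits in, the sketch below shells out to one such tool before deployment. Trivy is named here merely as an example of an image scanner, and the image name and severity filter are assumptions; any tool that inspects image layers for known CVEs could take its place.

    import subprocess

    # Scan an image for known vulnerabilities before deploying it.
    scan = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL",
         "docker.io/library/python:3.8"],
        capture_output=True, text=True,
    )
    print(scan.stdout)
    # A CI pipeline would typically fail the build here on critical findings and
    # rebuild the image with updated packages instead.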
 
 
Of course, you can abandon third-party containers altogether and build and maintain everything yourself, but that is a heavy undertaking and can seriously distract you from your main goals. It is usually better to find a partner who understands container security and knows how to solve these problems, so you can focus on what you actually need.
 
 

Red Hat solutions for containers


 
Red Hat offers a fully integrated platform for adopting Linux containers, suitable both for small pilot projects and for complex systems built on orchestrated multi-container applications: from the operating system for the host where containers run, to verified container images for building your own applications, to orchestration platforms and management tools for production container environments.
 
 

 
 

Infrastructure


 
 
Host
 
Red Hat Enterprise Linux (RHEL) is a Linux distribution with a worldwide reputation for trust and certification. If you only need to support container applications, you can use the specialized distribution Red Hat Enterprise Linux Atomic Host. It covers building container solutions and distributed systems/clusters, but does not include the general-purpose operating system functionality available in RHEL.
 
Inside the container
 
Using Red Hat Enterprise Linux inside containers ensures that regular, non-containerized applications deployed on the RHEL platform will work just as well inside containers. If your organization develops its own applications, RHEL inside containers lets you keep the usual level of technical support and updates for containerized applications. It also provides portability: applications will run seamlessly wherever RHEL is available, from the development machine to the cloud.
 
Data storage
 
Containers can require a lot of storage space. They also have one design drawback: when a container crashes, the stateful application inside it loses all of its data. Integrated with the Red Hat OpenShift platform, the software-defined storage product Red Hat Gluster Storage provides flexible, managed storage for containerized applications, eliminating the need to deploy a separate storage cluster or spend money on costly expansion of traditional monolithic storage systems.
 
Infrastructure-as-a-service (IaaS)
 
Red Hat OpenStack Platform integrates physical servers, virtual machines and containers into one unified platform. As a result, container technologies and containerized applications are fully integrated with the IT infrastructure, opening the way to full automation, self-service and resource quotas across the entire technology stack.
 
 

Platform


 
 
Container application platform
 
The Red Hat OpenShift platform integrates key container technologies, such as Docker and Kubernetes, with the enterprise-class Red Hat Enterprise Linux operating system. The solution can be deployed in a private cloud or in public cloud environments, with Red Hat support. It also supports both stateful and stateless applications, making it possible to move to containers without re-architecting existing applications.
 
The solution "all in one"
 
Sometimes it is better to get everything at once. That is exactly what the Red Hat Cloud Suite package is for: it includes a platform for developing container applications, infrastructure components for building a private cloud, tools for integrating with public cloud environments, and a common management system for all of these components. Red Hat Cloud Suite lets you modernize corporate IT infrastructure so that developers can quickly build and deliver services to employees and customers, while IT staff retain centralized control over every component of the IT system.
 
 

Management


 
 
Management of hybrid clouds
 
Success depends on flexibility and choice. There are no universal solutions, so when it comes to corporate IT infrastructure it always pays to have more than one option. By complementing public cloud platforms, private clouds and traditional data centers, containers widen that choice. Red Hat CloudForms lets you manage hybrid clouds and containers in a scalable and comprehensible way, integrating container management systems such as Kubernetes and Red Hat OpenShift with Red Hat Virtualization and VMware virtual environments.
 
Automation of containers
 
Creating and managing containers is often monotonous and time-consuming. Ansible Tower by Red Hat lets you automate this work, removing the need to write shell scripts and perform operations by hand. Ansible playbooks can automate the entire life cycle of a container, including build, deployment and management, freeing you from routine tasks and leaving time for more important things.
 
 
Red Hat releases containers and most of its deployment and management technologies as open source software.
 
 
Linux containers are another evolutionary step in how we develop, deploy and manage applications. They provide portability and version control, helping to ensure that what works on a developer's laptop will also work in production.
 
 
Do you use containers in your work, and how do you rate their prospects? Share your pains, hopes and successes.