LINUX CONTAINERS

What is a Linux Container:
Linux containers take a different approach from traditional virtualization technology. Simply put, this is OS-level virtualization, which means all containers run on top of one Linux operating system. We can start containers on a bare-metal machine or inside a running virtual machine. Each container runs as a fully isolated operating system.

In container virtualization, rather than running an entire guest operating system, containers isolate the guest but do not virtualize the hardware. To run containers one needs a suitable kernel and user-space tools; the kernel provides process isolation and performs resource management. Thus all containers run under the same kernel, but each still has its own file system, processes, memory, etc.

Linux-based containers are mainly built on two kernel concepts:

1. Namespaces
2. Cgroups (Control Groups)

There are six types of namespaces (a short demonstration follows the list):
1. PID namespace
2. Net namespace
3. IPC namespace
4. Mnt namespace
5. UTS namespace
6. User namespace
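
As a quick illustration of namespaces (a minimal sketch, assuming the unshare utility from util-linux is available, as it is on CentOS 7), we can place a shell in its own UTS namespace and change its hostname without affecting the host:

# unshare --uts /bin/bash

# hostname demo-container

# hostname

# exit

Inside the unshared shell the hostname changes to demo-container, but after exiting, the hostname of the host system is unchanged, because the change was confined to the new UTS namespace.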


Cgroups are already a well-known technology. We can find more details about cgroups here.
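
As a rough sketch of what cgroups do (assuming the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory, as it is by default on CentOS 7), we can create a group, give it a memory limit, and move the current shell into it:

# mkdir /sys/fs/cgroup/memory/demo

# echo 268435456 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

# echo $$ > /sys/fs/cgroup/memory/demo/tasks

Every process started from this shell is now limited to 256 MB of memory. Container runtimes do this kind of bookkeeping for us; for example, docker run -m 256m applies an equivalent memory limit to a container.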

There is another important aspect of containers, the "copy-on-write" file system, which is also a well-established technology.
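
Once Docker is installed (see the installation section below), the effect of copy-on-write is easy to observe. As a small sketch (the container name cow-test is chosen just for illustration), docker diff lists only the files a container has changed relative to its read-only image layers:

# docker run --name cow-test ubuntu:14.04 touch /root/newfile

# docker diff cow-test

Only /root and /root/newfile show up as changes; the rest of the image is shared, unmodified, with every other container based on it.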

Docker Architecture:

1. Docker Daemon: The docker daemon is responsible for all container operations. It runs on the docker host machine. We can't interact with it directly; all instructions have to be sent to it through the docker client.

2. Docker Client: It is the main interface to connect with the docker daemon. It can be installed on the same machine or on a different machine.

3. Docker Image: A docker image is like a golden template. An image consists of an OS (Ubuntu, CentOS, etc.) and the applications installed on it.

4. Docker Registry: This is a repository for docker images. It can be public or private. The public registry maintained by Docker is called Docker Hub; users can upload and download images from there.

5. Docker Container: It is created on top of a docker image and is completely isolated. Each container has its own user space, networking, and security settings associated with it. A short walk-through of how these components interact follows below.
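
The following walk-through is a sketch, assuming Docker is already installed as described in the next section and that the ubuntu:14.04 image is available on Docker Hub:

# docker pull ubuntu:14.04

# docker images

# docker run -d ubuntu:14.04 sleep 300

# docker ps

The client sends each command to the daemon: docker pull fetches the image from the registry, docker images lists the images stored on the docker host, docker run creates a container from the image, and docker ps lists the containers the daemon is currently running.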


INSTALLATION OF DOCKER

Docker can be installed on CentOS 7 without activating any extra repository; we only need to update the system:

# yum update -y

# yum install docker docker-registry

# systemctl start docker

# systemctl enable docker

# systemctl status docker
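
Once the service is up, we can confirm that the client can talk to the daemon (output details will vary by version):

# docker version

# docker info

docker version reports both the client and the server (daemon) versions, and docker info summarizes the images, containers, and storage driver on the host.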


Now we can launch a container using the following command:

# docker run ubuntu:14.04 /bin/echo 'Welcome to the Container world'


1. docker run ==> the command to run a container.

2. ubuntu:14.04 ==> the image name. Docker first looks for the image locally; if it is not there, it pulls it from Docker Hub.

3. /bin/echo 'Welcome to the Container world' ==> the command which will be executed inside the container.
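
After the command finishes we can check what happened on the host (a quick verification, assuming the command above was run as root):

# docker images

# docker ps -a

docker images now shows the ubuntu:14.04 image that was pulled, and docker ps -a shows the container in the Exited state, since it stopped as soon as the echo command completed.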

Creating an interactive container

The previous command created a container, but it was not interactive. To create an interactive container we can use the following command.

# docker run -it ubuntu:14.04 /bin/bash


1. -i ==> Make it interactive (keep STDIN open).

2. -t ==> Allocate a pseudo-terminal.

3. /bin/bash ==> The command which we want to execute inside the container.
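
Typing exit inside the interactive shell stops the container, but it is not deleted, so we can list it and start it again later (the container ID below is a placeholder taken from the docker ps -a output):

# docker ps -a

# docker start -ai <container-id>

docker start -ai starts the stopped container again and reattaches our terminal to it.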



Done...









