Linux LXC 2.0
"LinuxContainers.org Infrastructure for container projects. linuxcontainers.org is the umbrella project behind LXC, LXD, LXCFS and CGManager."
"The goal is to offer a distro and vendor neutral environment for the development of Linux container technologies. The project provides resources such as build machines, CI infrastructure, mailing-lists, website, e-mail, ... to the various projects under it."
"LXC is the well known set of tools, templates, library and language bindings. It's pretty low level, very flexible and covers just about every containment feature supported by the upstream kernel."
- From https://linuxcontainers.org/
LXC (Linux Containers) is a system built on the Linux kernel that allows the creation and management of virtualized Linux systems on a parent host. However, unlike other virtualization tools, LXC does not use hardware emulation, and an LXC container shares the kernel with the host. As a result, an LXC container is incredibly lightweight and easy to get started with.
LXC is similar to other OS-level virtualization technologies on Linux such as OpenVZ and Linux-VServer, as well as those on other operating systems such as FreeBSD jails. In contrast to OpenVZ, LXC works with the vanilla Linux kernel, requiring no additional patches to be applied to the kernel sources. Version 2.0.3 of LXC, which was released on 28 June 2017, is a long-term support version and is intended to be supported until the 1st of June 2021.
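Because the container shares the host's kernel, a quick sanity check (once a container is running, as we do later on this page) is to compare the kernel release reported inside and outside the container. The session below is illustrative and assumes the container named myLXC that we create shortly:

k@laptop:~$ uname -r
3.13.0-40-generic

ubuntu@myLXC:~$ uname -r
3.13.0-40-generic

Both prompts should report the same kernel release, since there is no guest kernel and no hardware emulation involved.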
That said, the most widely used container platform today is Docker. For a broader comparison, see the list of operating-system-level virtualization implementations.
Though LXC 2.0.3 is available, we're going to stick with LXC 1.0.8, which was released on the 9th of November 2015.
We need to install the tools required to create and manage LXC containers. On Ubuntu, the lxc package also sets up the default network bridge for the containers.
$ sudo apt-get install lxc
Once the package installation is complete, we can check whether the kernel and its configuration are ready by running lxc-checkconfig:
$ lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.13.0-40-generic
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
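As the note at the end of the output suggests, lxc-checkconfig can also validate a kernel configuration other than the one currently running, for example a newly installed kernel we have not booted yet (the kernel version in the path below is hypothetical):

$ CONFIG=/boot/config-3.16.0-30-generic lxc-checkconfig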
We can create an LXC container using the lxc-create command. We'll use the ubuntu template to create and populate a new container named 'myLXC':
k@laptop:~$ sudo lxc-create -t ubuntu -n myLXC
Checking cache download in /var/cache/lxc/trusty/rootfs-amd64 ...
Copy /var/cache/lxc/trusty/rootfs-amd64 to /var/lib/lxc/myLXC/rootfs ...
Copying rootfs to /var/lib/lxc/myLXC/rootfs ...
...
Since this is the first time we are installing LXC, the creation may take a while because it downloads all of the required components. However, since they are cached, subsequent creations will be much quicker.
We have some messages from the output regarding username and password:
##
# The default user is 'ubuntu' with password 'ubuntu'!
# Use the 'sudo' command to run tasks as root in the container.
##
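Incidentally, the ubuntu template we passed to lxc-create accepts its own options after a '--' separator. For instance, to pin the release and architecture explicitly (trusty/amd64 here, matching the cache path in the output above), something like the following should work; the container name myLXC2 is just an example:

$ sudo lxc-create -t ubuntu -n myLXC2 -- -r trusty -a amd64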
We do not want to remove the container right now, but when the time comes, lxc-destroy does it:
$ sudo lxc-destroy -n myLXC
Just to stop the container:
$ sudo lxc-stop -n myLXC
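For convenience, stopping and destroying can be chained once we really are done with a container; depending on the LXC version, lxc-destroy may also accept -f to force-stop a running container first. Both lines below are optional examples, not steps of this tutorial:

$ sudo lxc-stop -n myLXC && sudo lxc-destroy -n myLXC
$ sudo lxc-destroy -n myLXC -f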
We can check the status of our containers with lxc-ls --fancy:
k@laptop:~$ lxc-ls --fancy
NAME  STATE  IPV4  IPV6  AUTOSTART
----------------------------------
We do not see any running container in the output; that's because our container is currently in a stopped state.
Let's get our container running using lxc-start.
This will set up the container according to the configuration previously defined with the lxc-create command or with the configuration file parameter. If no configuration is defined, the default isolation is used. If no command is specified, lxc-start will use the default /sbin/init command to run a system container.
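For reference, the configuration that lxc-start reads by default lives in /var/lib/lxc/<name>/config. The abridged sketch below shows the kind of LXC 1.0-style keys an ubuntu-template container typically ends up with; the values are illustrative, not copied from a real file:

# /var/lib/lxc/myLXC/config (excerpt, illustrative)
lxc.rootfs = /var/lib/lxc/myLXC/rootfs   # where the container's filesystem lives
lxc.utsname = myLXC                      # container hostname
lxc.network.type = veth                  # virtual ethernet pair to the host
lxc.network.link = lxcbr0                # attached to the default LXC bridge
lxc.network.flags = up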
k@laptop:~$ sudo lxc-start -n myLXC -d
Note that we started the container in the background (the -d flag); we can attach to its console at any later time.
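One way to do that without going through a login prompt is lxc-attach, which runs a command (a shell by default) inside the running container directly from the host:

$ sudo lxc-attach -n myLXC               # root shell inside myLXC
$ sudo lxc-attach -n myLXC -- uname -a   # or run a single command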
We can issue lxc-ls --fancy command to see if our container has been started:
k@laptop:~$ sudo lxc-ls --fancy
NAME   STATE    IPV4       IPV6  AUTOSTART
------------------------------------------
myLXC  RUNNING  10.0.3.19  -     NO
and we'll see the container is running and has been assigned an IP address. Note that the assigned IP belongs to an internal bridge, meaning that the container is not reachable from outside the host.
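On a stock Ubuntu install, the lxc package creates this internal bridge as lxcbr0 and NATs the 10.0.3.0/24 network behind it, which matches the 10.0.3.19 address above. To take a quick look at it from the host:

$ ip addr show lxcbr0    # the bridge and its 10.0.3.x gateway address
$ brctl show lxcbr0      # container veth interfaces attached to it (requires bridge-utils)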
We can also access our container with lxc-console:
$ sudo lxc-console -n myLXC
This drops us into a regular login prompt for the container, where we can use the username/password combination given during the creation of the container (ubuntu/ubuntu):
Ubuntu 14.04.1 LTS myLXC tty1

myLXC login: ubuntu
Password:
...
ubuntu@myLXC:~$
Once logged in, we have a regular bash prompt from which we can do almost anything we would on the host machine. Some of the tools may be missing, but we can install them using 'apt-get' if needed.
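For example, installing a missing tool inside the container looks exactly like it does on the host (htop here is just an arbitrary example):

ubuntu@myLXC:~$ sudo apt-get update
ubuntu@myLXC:~$ sudo apt-get install -y htop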
Let's see what processes are running within our container:
ubuntu@myLXC:~$ top
top - 16:43:59 up 24 min,  1 user,  load average: 0.64, 1.05, 1.35
Tasks:  16 total,   1 running,  15 sleeping,   0 stopped,   0 zombie
%Cpu(s): 15.9 us,  7.3 sy,  0.0 ni, 75.3 id,  1.6 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   3630880 total,  3392964 used,   237916 free,   174240 buffers
KiB Swap:  3770364 total,     1572 used,  3768792 free.   876592 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1 root      20   0   33112   2432   1460 S   0.0  0.1   0:00.71 init
  186 root      20   0   15392    744    420 S   0.0  0.0   0:00.08 upstart-so+
  213 root      20   0   19476    396    212 S   0.0  0.0   0:00.05 upstart-ud+
  224 root      20   0   49272   1328    904 S   0.0  0.0   0:00.01 systemd-ud+
  272 root      20   0   15276    396    196 S   0.0  0.0   0:00.02 upstart-fi+
  305 root      20   0   10232   2404    116 S   0.0  0.1   0:00.00 dhclient
  317 syslog    20   0  255844   1120    696 S   0.0  0.0   0:00.00 rsyslogd
  378 root      20   0   12788    848    700 S   0.0  0.0   0:00.00 getty
  381 root      20   0   12788    848    700 S   0.0  0.0   0:00.00 getty
  382 root      20   0   12788    844    700 S   0.0  0.0   0:00.00 getty
  394 root      20   0   61364   3060   2388 S   0.0  0.1   0:00.02 sshd
  415 root      20   0   12788    852    704 S   0.0  0.0   0:00.00 getty
  418 root      20   0   63132   1720   1248 S   0.0  0.0   0:00.07 login
  425 root      20   0   23656    884    672 S   0.0  0.0   0:00.00 cron
  465 ubuntu    20   0   21092   2052   1604 S   0.0  0.1   0:00.02 bash
  475 ubuntu    20   0   22720   1404   1068 R   0.0  0.0   0:00.01 top
When we are done with our new container, the manual says we can exit its console and return to the host by typing Ctrl-a followed by q, but that does not seem to work here.
So, we can use the poweroff command instead:
ubuntu@myLXC:~$ sudo poweroff
[sudo] password for ubuntu:

Broadcast message from ubuntu@myLXC
        (/dev/lxc/tty1) at 17:40 ...

The system is going down for power off NOW!
ubuntu@myLXC:~$ lxc_container: console.c: lxc_console_cb_tty_master: 672 Input/output error - failed to read

k@laptop:~$
Ref: What's LXD?
LXD is a container "hypervisor" and a new user experience for LXC.
LXD isn't a rewrite of LXC, in fact it's building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.
It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.
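LXD is not used in this tutorial, but to get a rough feel for its workflow, a minimal session looks roughly like the following. It assumes the lxd package is installed; note that the LXD client binary is also called lxc (not to be confused with the low-level lxc-* tools used above), and the container name 'first' is just an example:

$ sudo lxd init                       # one-time setup (storage, networking)
$ lxc launch ubuntu:16.04 first       # download an image and start a container
$ lxc list                            # show containers, their state and IPs
$ lxc exec first -- bash              # get a shell inside the container
$ lxc stop first && lxc delete first  # clean up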
CGManager is a central privileged daemon that manages all our cgroups for us through a simple D-Bus API.
It's designed to work with nested LXC containers as well as accepting unprivileged requests, including resolving user namespace UIDs/GIDs.
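On Ubuntu, the cgmanager package also ships a small command-line client, cgm, that talks to this D-Bus API. The sketch below shows what managing a cgroup through it might look like; the group name 'mygrp' is made up and the exact subcommands can vary between cgmanager versions:

$ sudo cgm create cpu mygrp        # create a cgroup under the cpu controller
$ sudo cgm movepid cpu mygrp $$    # move the current shell into it
$ sudo cgm remove cpu mygrp        # remove it again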