After reading a recent Q&A on DistroWatch.com, I have been considering the advantages of scheduled-release (fixed-release) distros, especially Debian, which is famed for its stability. The following is a quote from DistroWatch.com:
Rolling releases do tend to have their downsides though. For instance, it is difficult for third-party developers to create software for rolling-releases as a rolling-release distribution is a moving target. It is difficult to target and support an operating system which is changing on a regular basis.
I agree with this point, because I had to change the source code of an old project of mine that uses CEGUI. Moreover, the latest PHP code may not work on a web hosting server because of version differences. Some people argue that a rolling release like Arch Linux is not stable, yet I am very satisfied with its stability. The only issue is the compatibility of our own source code with other distros and OSes.
Because of this problem, Debian has always interested me. Yet Debian packages are normally not up to date, and there is no Mozilla Firefox in its official repository because of Debian's strong philosophy.
Running different OSes in virtual machines such as VirtualBox and QEMU with KVM is a very good solution. However, both give a strong feeling of an OS (the guest) running inside another OS (the host). VirtualBox is really useful because configuring a bridged network is very easy. Moreover, USB device sharing allows Windows in VirtualBox to handle printing and to use an interactive projector (which was beyond my expectation). On the other hand, setting up a bridged network in QEMU is difficult. (The Android Emulator, however, is built on QEMU.)
VirtualBox has some disadvantages. 3D graphics do not work properly in VirtualBox (I did not try 3D graphics in QEMU), although we can still install Direct3D drivers through the VirtualBox Guest Additions. Furthermore, both VirtualBox and QEMU require creating virtual hard drives, which means transferring files requires a networking solution such as FTP or SSH. VirtualBox does offer an easier option, though: mounting a shared folder from the host.
Interestingly, when I came across Linux Containers (LXC), it showed an interesting alternative to VirtualBox and QEMU. Unlike VirtualBox or QEMU, the guest OS runs almost side by side with the host OS.
Running GVim in LXC on Arch Linux
Setup LXC network
There are some useful tutorials and documents on the Internet about setting up LXC on various Linux distributions. However, I ran into some difficulties when setting up LXC on Arch Linux. The Arch Linux kernel does not enable the user namespace support that unprivileged LXC needs, so LXC has to run with root privileges.
(I will not explain the basic steps, such as using the commands lxc-create, lxc-start, lxc-stop, etc.)
After creating the container, network support is also very important, because by default the container cannot access the network. The easiest way is to use a bridge.
To set up the bridge, we can use netctl. Create a profile with a static IP:
#In /etc/netctl/lxcbridge
BindsToInterfaces=(wlan0) #Depends on which connection we want to bind; do not use multiple interfaces
sudo netctl start lxcbridge
This will create a bridge interface, br0.
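Putting the pieces together, a complete netctl profile might look like the sketch below. The Description, Connection, IP, and SkipForwardingDelay lines are my additions based on the usual netctl bridge profile layout; the bridge address 10.0.2.1/24 is chosen to match the container gateway used later.

```shell
#/etc/netctl/lxcbridge -- a sketch; adjust names and addresses to taste
Description="Bridge for LXC containers"
Interface=br0
Connection=bridge
BindsToInterfaces=(wlan0) #Depends on which connection we want to bind; do not use multiple interfaces
IP=static
Address='10.0.2.1/24' #the bridge address; the container's gateway must match this
SkipForwardingDelay=yes #skip the spanning-tree forwarding delay
```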
Then, similar to QEMU, iptables and IP forwarding are required:
sudo sysctl net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE #where wlan0 can be another interface
Note: the iptables rule is a must, regardless of whether the iptables service is started or not.
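The sysctl setting above is lost on reboot. To make it persistent, it can go into a sysctl drop-in file (this is my addition, not part of the original steps; the file name is arbitrary):

```
#/etc/sysctl.d/40-ip-forward.conf -- make IP forwarding survive reboots
net.ipv4.ip_forward = 1
```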
Then, in the “config” file of the created container, we have to set up the networking:
lxc.network.ipv4.gateway=10.0.2.1 #based on the bridge address
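For completeness, the surrounding network section of the container's config might look like the sketch below (legacy lxc.network.* keys, as used elsewhere in this post). The container address 10.0.2.10 is just an example; any free address in the bridge subnet works.

```
#in the container's config file -- a sketch with example addresses
lxc.network.type = veth
lxc.network.link = br0 #the bridge created by netctl
lxc.network.flags = up
lxc.network.ipv4 = 10.0.2.10/24 #any free address in the bridge subnet (example)
lxc.network.ipv4.gateway = 10.0.2.1 #based on the bridge address
```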
Because we are running as root, the lxc-usernet file does not need to be configured.
In order to run GUI applications, we can add the following to the config file:
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file
After installing a GUI application such as GVim in the guest, run it with the display pointed at the host's X server:
DISPLAY=:0 gvim #assuming the host X server runs on display :0
This will run the GUI application as if it were on the host.
Setup LXC sound
In order to play sound, we can go through PulseAudio. (I mostly referred to this page.)
The easier way is to run paprefs on the host OS and tick “Enable network access to local sound devices” in the “Network Server” tab. However, it can also be done on the command line:
pactl load-module module-native-protocol-tcp #no root required
(Other tutorials pass extra parameters to this module, but in my case the above command is sufficient.)
Note: the above command is run on the host OS.
Once the module is loaded, we can set an environment variable in the guest (container):
export PULSE_SERVER=192.168.1.2 #where the IP address is the host IP address
Then, running mplayer on any audio file plays the sound successfully.
Please note that LXC does not work with non-Linux OSes such as Windows. There are various OS templates available for containers, such as Debian, CentOS, Fedora, Gentoo, OpenMandriva, openSUSE, and Ubuntu. It is very interesting that we can use different package managers on one computer, working almost seamlessly alongside the host OS.
So far I have not tested OpenGL in a container. But running glxinfo shows the same output as on the host. I expect that it is using the host display, so OpenGL should have no problem.
My next intention is to make the container accessible through the host's LAN. I need some time to figure that out.