Migrating Arch Linux from 32-bit to 64-bit

Recently, I decided to migrate my Arch Linux installation from 32-bit to 64-bit. Several reasons led me to this decision.

Firstly, the Arch Linux official site announced that 32-bit support will be dropped, and the wiki recommends that users run 64-bit if their processor supports it.

Secondly, newer distros such as KaOS and Evolve OS support only one architecture: 64-bit. This suggests that 32-bit is becoming a less common target.

Thirdly, my favourite software, FreeFileSync, provides a pre-built binary package only for 64-bit, though it can be built from source on 32-bit. Moreover, Opera on Linux is actively developed only for 64-bit: the current version is 27.0, while the 32-bit version is still 12.16. That means 32-bit users have no chance to use the latest Opera.

Next, Docker, which is officially supported by Arch Linux, is 64-bit only, although a 32-bit build can be installed through the AUR. I tried the 32-bit Docker and pulled an Ubuntu image from the Docker registry, but the image turned out to be 64-bit, so it did not work in my container. In fact, most of the official images on the Docker registry, including Fedora, are 64-bit. That is why, when testing Docker on my 32-bit Linux, I had to use debootstrap to build a 32-bit Ubuntu root filesystem and then import it into Docker.
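The debootstrap-and-import step can be sketched roughly as follows; the release name, target directory, and the "ubuntu32" tag are my own placeholder choices, and the commands need root and a working network:

```shell
# Build a minimal 32-bit (i386) Ubuntu root filesystem with debootstrap,
# then stream it into Docker as a new image called "ubuntu32".
debootstrap --arch=i386 trusty ./ubuntu32-rootfs http://archive.ubuntu.com/ubuntu/
tar -C ./ubuntu32-rootfs -c . | docker import - ubuntu32

# Verify that the imported image really is 32-bit.
docker run --rm ubuntu32 dpkg --print-architecture
```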

For these reasons, and given the trend, 32-bit software will be less and less likely to be supported, so migrating to 64-bit has to happen sooner or later.

The way to 64-bit

The Arch Linux wiki offers some methods for migrating between the two architectures, 32-bit to 64-bit or vice versa. I did not use those methods, due to limited disk space in my partitions, so I had to find another way.

I have two laptops: a personal one and a work one. The work laptop is my primary machine and is always updated to the latest packages. The personal laptop is older, but has a good Nvidia graphics card. I tried migrating the personal laptop first, to check whether my method worked, so that migrating the work laptop would then go smoothly.

Firstly, I downloaded the latest Arch Linux installation live media and created a 64-bit virtual machine in VirtualBox, then installed Arch Linux 64-bit on it. I also installed all the packages that were explicitly installed on my working laptop.

The next thing I did was retrieve the /var, /opt, and /usr directories from the virtual machine. These directories contain the binaries needed to run the OS, and I wanted to preserve all the configuration in /etc so that there would be less to reconfigure after installation. /bin, /lib, and /lib64, on the other hand, are only symbolic links, so I skipped them; I could recreate the links myself.
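The post does not say how the directories were copied out of the guest; one plausible way, assuming SSH is running in the VM and the guest's address is the placeholder 192.168.56.101 on a VirtualBox host-only network, is rsync with permissions, ACLs, and extended attributes preserved:

```shell
# Pull the three directories out of the VM, preserving ownership,
# ACLs (-A) and extended attributes (-X) so the system still boots.
rsync -aAX root@192.168.56.101:/var/ ./vm-root/var/
rsync -aAX root@192.168.56.101:/opt/ ./vm-root/opt/
rsync -aAX root@192.168.56.101:/usr/ ./vm-root/usr/
```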

Then I booted SystemRescueCD, deleted all the files in /var, /opt, and /usr, and replaced them with the files retrieved from the virtual machine. After that, I rebooted with the Arch Linux installation live media, mounted the partition, ran arch-chroot, and used mkinitcpio to regenerate the initramfs. On reboot, however, lxdm failed to load. After several tries, I concluded that my hypothesis had failed. So I decided to reinstall everything, keeping a backup of /etc to refer to later.
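The chroot-and-regenerate step from the Arch live media looks roughly like this; /dev/sda1 is a placeholder for the actual root partition, and "linux" is the stock kernel preset:

```shell
# Mount the installed system and regenerate its initramfs
# from the live environment, then reboot into it.
mount /dev/sda1 /mnt
arch-chroot /mnt mkinitcpio -p linux
umount /mnt
reboot
```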

Arch Linux is good, but it has one drawback: installation requires an Internet connection. The installer script, pacstrap, downloads the necessary packages and installs them to the partition, and the "base" packages alone are around 150 MB. If you have a limited Internet quota, a slow connection, or no connection at all, that is a serious problem. So far, I have not found a better solution for this.

However, I had already downloaded all the other packages, such as LibreOffice, Xfce, and Firefox, inside VirtualBox. I simply copied that cache to the /var directory and then installed the packages I needed. Since they were cached, most of them did not have to be downloaded again. This saved a lot of time, and the same cache can be reused in the next OS installation.
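The cache-copying trick works because pacman keeps every downloaded package in /var/cache/pacman/pkg and checks there before downloading; the source path below is a placeholder for wherever the VM's cache was copied to:

```shell
# Seed the new installation's pacman cache with the packages
# already downloaded inside the VirtualBox guest.
mkdir -p /mnt/var/cache/pacman/pkg
cp /path/to/vm-cache/*.pkg.tar.xz /mnt/var/cache/pacman/pkg/
# Subsequent "pacman -S <package>" calls will install from the
# cache instead of downloading again.
```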

Finally, the /home directory was preserved and works fine. Only the /etc configuration, such as that for Apache and PHP, had to be redone manually.

Using the same method, I then successfully installed Arch Linux 64-bit on my working laptop, though some further configuration was still needed.


What you need to know about image file formats



Many people work with image files, and sometimes even need to create images (drawings). When an image is finished, most people save it in JPEG format. A JPEG file carries the file extension .jpg, .jpeg, or even .jpe; .jpg is the most common one. Windows users may not see these file extensions at all.



At first glance there is no difference between the two formats, because hello.jpg was saved with a compression quality of 90 (meaning roughly 90%). But it is exactly this compression level that makes a big difference. Even at quality 90, the difference shows once I enlarge the two images and compare them:


Here we can see the difference between the two: the text in the PNG is very "clean", while the JPEG's is somewhat "dirty". That is the result of the quality-90 compression. If the user wants a smaller file size and lowers the compression level below 90, the image gets even "dirtier". The reason is that JPEG uses lossy compression: during compression, less important information is discarded so that the file size shrinks. PNG, by contrast, uses lossless compression.
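The comparison is easy to reproduce with ImageMagick, assuming it is installed and hello.png is the lossless original:

```shell
# Save the lossless PNG as a JPEG at quality 90, then crop and
# enlarge the same region of both files to expose the artefacts
# that lossy compression leaves around sharp edges such as text.
convert hello.png -quality 90 hello.jpg
convert hello.png -crop 100x50+10+10 -resize 400% zoom-png.png
convert hello.jpg -crop 100x50+10+10 -resize 400% zoom-jpg.png
```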



Raster vs Vector

Fewer people understand these two concepts: raster images (bitmaps) and vector images. The JPEG and PNG formats above are both raster formats, while vector images are something most people rarely encounter. The most common vector format that I know of is SVG. Ordinary image editors cannot edit SVG; you need dedicated software such as Inkscape.

The biggest difference between raster and vector images is that the better the quality of a raster image, the larger its file size, whereas a vector image has no such problem. It is like taking photos: a 13 MP shot and a 5 MP shot differ greatly in file size, yet a vector image's file size hardly changes with the image's dimensions. That said, a vector image cannot be used for real-world pictures such as photographs. In short, with a vector image the image's size has no effect on quality, while with a raster image the size affects the quality. Here is a simple example.

The edge of a vector image after zooming in
The edge of a raster image after zooming in

Here you can see the advantage of a vector image: no matter how much you zoom in, there are no jagged edges (of course, what is posted here is not a real SVG).

Software like Inkscape can usually export to other formats, among them PNG and JPEG. But exporting to PNG or JPEG loses all the text information, so the text in the image can no longer be copied and pasted. The ideal choice is therefore to export to PDF, which preserves both the text data and the vector data: because the text is preserved, it can be copy-pasted; because the vector data is preserved, the image can be enlarged without losing quality.
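Inkscape can do these exports from the command line as well; the flags below are from Inkscape 1.x (older 0.9x releases used `--export-pdf=` / `--export-png=` instead), and drawing.svg is a placeholder filename:

```shell
# Export the same SVG to PDF (keeps text and vector data)
# and to PNG (rasterises, losing the text information).
inkscape drawing.svg --export-type=pdf --export-filename=drawing.pdf
inkscape drawing.svg --export-type=png --export-filename=drawing.png
```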



This is fairly common knowledge. When we need to submit documents that we do not want others to modify, we usually convert them to PDF. Recent versions of Microsoft Word can export to PDF directly; if not, software such as PDFCreator can be installed.



In short, the original source formats, such as DOCX/DOC, AI (Adobe Illustrator's format), and PSD (Adobe Photoshop's format), retain all the data from the original creation, so they are generally not given to third parties. When handing files to a third party, the better choices are PDF, JPEG, and PNG: JPEG for photographs; PNG for drawings; and PDF for drawings that are meant to be printed and whose text you want to preserve.

Web, cloud, virtualization, Docker, and Linux

From time to time, I feel that I have to choose the "best" Linux distro, and the variety of Linux distros somehow annoys me: why not just combine all the best features into one powerful OS? (That is why I keep going back and forth between distros like Arch Linux and Debian.) But with recent technologies such as LXC and Docker, I have found that the variety of Linux distros is actually a good thing: the distros diverge and explore new solutions to our daily problems.

The web is a cross-platform solution: whatever OS you use, as long as you have a web browser compatible with the web standards, you can browse the web and use its services properly. That is why the Internet is so important and so popular in our daily lives. Cloud storage then lets us keep our files in the cloud, with client software synchronising them across multiple devices. Synchronisation solves the problem not only for files, but also for contacts, calendars, task lists, email, notes, photos, and more.

OpenStack is a project that connects cloud computing with virtualisation. Previously, when I thought of "virtualisation", I pictured virtual machines such as QEMU, VirtualBox, or VMware. In the Linux world, besides cross-platform virtual machines like VirtualBox, there are other terms related to virtualisation that I am personally not very familiar with: KVM, libvirt, Vagrant, Xen, and so on. Recently, LXC and Docker grabbed my attention because they are OS-level virtualisation: instead of a virtual machine, what we create is called a "container". Furthermore, a container uses the same kernel as the host OS, even when the container image is based on another distro.
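The kernel sharing is easy to observe for yourself, assuming Docker is installed and a debian image is available:

```shell
# Print the kernel release on the host, then inside a container:
# both commands report the same version, because the container
# runs on the host's kernel rather than its own.
uname -r
docker run --rm debian uname -r
```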

As DistroWatch.com has mentioned, a rolling release is a difficult target because it keeps changing. That is why some users prefer a fixed release over a rolling one: they want something that works tomorrow the way it works today; in a word, consistency.

With solutions like Docker and LXC, a rolling release may no longer be a problem, because we can use a rolling-release distro to create containers based on a fixed release: running Debian on Arch Linux, for instance, or vice versa. That means a solution developed against a fixed release will also work on any distro, as long as LXC or Docker runs there.
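On a rolling Arch Linux host, for example, a fixed-release Debian userland is one command away (assuming Docker is set up; the image tag is whatever stable release you target):

```shell
# Start an interactive Debian container on the rolling host.
# Inside it, apt-get and the Debian package set behave exactly
# as on a fixed-release Debian machine.
docker run --rm -it debian /bin/bash
```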

Just as the cloud solves the cross-platform problem, the container, in my opinion, is a good solution to the problem of differing distros.