NVidia and hibernation issue, partially solved


In my previous post, I mentioned NVidia and xcompmgr; however, that is not the true reason why Chrome stops updating the display.

The root cause is partially found. The issue is related to the Optimus laptop (dual graphics cards, NVidia and Intel). Under unknown conditions, resuming from hibernation causes the Intel graphics card to stop working properly. This can be checked by running “glxgears” after resuming: you will see that OpenGL fails to refresh the display.

However, if Bumblebee is installed, then we can run “optirun glxgears”, and this works around the graphics card issue.

Child processes

Now there is a tricky issue. Because I use GNOME Do, and it is not started with “optirun”, applications launched through GNOME Do do not use the NVidia graphics card. As a result, I need to quit GNOME Do and restart it with “optirun”, so that all the applications launched by GNOME Do use the graphics card correctly.

Run with NVidia only

Unfortunately, I failed to start the X window system with the NVidia graphics card alone. And I did not disable the Intel graphics card, because that would defeat the purpose of an Optimus laptop. As a result, I cannot confirm whether the display refresh issue still exists when using only the NVidia graphics card.

But so far, I use “optirun” to run an application whenever the graphics card fails to refresh its display.

Firefox or Chromium (software development)?


I recently switched from Chromium to Firefox as my primary web browser. Then I switched back to Chromium again.

Chrome is often claimed to consume a lot of memory, and recent Firefox updates claim to be faster and to consume less memory. That is why I switched to Firefox. I agree that it is much faster than before. However…

I faced a critical issue. One less important issue worth mentioning first is that Firefox does not support Google Hangouts.

The critical issue I faced is related to JavaScript. During web development, or even when visiting CircleCI (which I believe makes heavy use of JavaScript), if the JavaScript has severe errors, whatever browser you are using will stop responding or slow down. But Chrome (Chromium, I mean) handles the issue differently from Firefox. The whole computer slows down temporarily (maybe for several minutes); then, in the end, the page is shown as “dead” and I regain control over my computer.

Under the same conditions, Firefox expands its memory usage (possibly exponentially) because of the errors. The computer then slows down and stops responding until I do a hard reboot. Based on my observation, the memory grows until it uses all the RAM. When no RAM is available, memory is immediately pushed into swap. Because swap lives on the hard drive, it becomes much slower for me to switch to a terminal to kill Firefox. And even if I do manage to switch to a terminal, typing the command and seeing the response takes approximately forever, while swap usage keeps growing non-stop.

As a web developer, I prefer to use Chrome.

NVidia and probably xcompmgr


I have a Dell Vostro 5459 running Arch Linux. Previously, whenever I hibernated, resuming would produce a black screen, and I could do nothing.

Then I believed that one of the NVidia updates fixed this issue.

However, very soon after, I faced another issue: resuming from hibernation causes Chromium to freeze its content, or the content does not redraw. This happens not only to Chromium, but also to Opera and SMPlayer. I thought it was caused by NVidia. I tried a lot of solutions and found nothing on the Internet. I also installed “bbswitch”; nothing solved it.

But just now, before hibernating, I tried to exit every application related to the display or possibly doing graphics work. Then I remembered that I always run “xcompmgr”, as it enables the composite feature on OpenBox. I killed it and hibernated. After resuming, Chromium works fine.

So, possibly it is “xcompmgr” that has caused the trouble all along since the NVidia fix. To be confirmed.

Complexity and simplicity


When we develop a solution or a system, we are prone to choose a simple solution, because a simple solution is just better than a complex one. However, most of the time we choose a simple solution inappropriately, and this gradually causes more trouble as the system grows.

The complexity of a solution should depend on the complexity of the problem itself, not the other way round. For example, we cannot create an operating system with a single programming statement. We also cannot create an operating system with just a single source file. Because an operating system is very complex (managing devices, memory, processes, etc.), no simple solution can fulfil the requirements.

That is why global variables are usually discouraged: they become difficult to manage as your source code grows. However, if the problem is simple and global variables solve it efficiently, then the approach is acceptable.

The human mind is limited; we cannot process too much information. Hence, if a source file contains a lot of global variables (or, similarly, if a function has too many parameters), we cannot process the information well, because it is complex. And when a function is too long, with hundreds of lines of statements, we cannot remember what happened at the beginning of the function. However, if we organize the variables and parameters properly, then we can process the source code much better.

As the UNIX philosophy says, “Do One Thing and Do It Well” (DOTADIW) (so do microservices); this is how we ought to design our solutions. We simplify the solution, not the problem, because the problem cannot be changed. As a result, a very complex problem will need a lot of simple solutions or services to be built.

In reality, a life form like a human is complex; that is why we have multiple systems such as the digestive, respiratory, and circulatory systems, each focusing on one task. A lower life form like an amoeba, however, is very simple; we cannot expect the biology of an amoeba to work for a human. Similarly, a large organization needs a very complex management system (not in terms of software), compared to a small organization. You cannot expect the CEO of a large organization to be in contact with thousands of employees every day, but in a small organization the CEO can contact everyone in the team.

Therefore, if a problem is complex, or the system requirements are complex, we can only “divide and conquer”: break the main problem down into sub-problems, then solve each sub-problem with a smaller and simpler solution.

Pyramid, tree, or pipeline

When a community grows, it ends up as a pyramid-like hierarchy. When a file folder grows, it ends up as a tree structure. If the data flow is linear, then a pipeline is the appropriate solution. As the system grows, information needs to be passed from unit to unit. This is inefficient for conveying a message, but efficient to manage.

(But in reality, pyramid hierarchy is troublesome, because human is full of flaws and corruptions.)

Pure function

Interestingly, when learning ReactJS, using the pure function style of development makes managing the code much simpler. All the inputs of a pure function are immutable, or read-only; that means you will not create a side effect on the parent component or the caller. Similar to microservices, we just need to focus on the functionality of each component.

C++ future


Recently I have been updating my hobby project Med, a memory editor for Linux, still under heavy development with various bugs.

In this project, I use several C++1x features (compiled with the C++14 standard). The most recent notable feature is multi-threaded scanning. In memory scanning, scanning through the accessible memory blocks sequentially is slow. Therefore, I need to scan the memory blocks in parallel, which means creating multiple threads to scan through the memory blocks.

How many threads do I need? I make it a variable n, defaulting to 4. This means that when scanning starts, n threads begin scanning asynchronously. When one of the threads finishes, the next thread starts scanning the next memory block, and so on until the end.

I design the solution top-down, and implement it bottom-up. To meet the requirement above, I created a ThreadManager (header file here). The ThreadManager basically lets me queue the tasks that I am going to launch in parallel with n threads. After queuing all the tasks, I just need to start, and they will run in parallel. That is all ThreadManager does. If a mutex is needed, it is the task that has to handle it, not the ThreadManager; the ThreadManager just makes sure the tasks run in parallel.
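To make the idea concrete, here is a minimal sketch of a ThreadManager-like class, written from the description above; the class name and methods are my guesses at the interface, not the actual code from the Med project. n worker threads repeatedly claim the next queued task until the queue is drained.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Sketch of a task queue run by n worker threads (hypothetical
// interface, not the real Med ThreadManager).
class ThreadManager {
public:
    explicit ThreadManager(int n) : n_(n) {}

    void queueTask(std::function<void()> task) {
        tasks_.push_back(std::move(task));
    }

    void start() {
        std::vector<std::thread> workers;
        std::size_t next = 0;
        std::mutex m;
        for (int i = 0; i < n_; i++) {
            workers.emplace_back([&]() {
                while (true) {
                    std::function<void()> task;
                    {
                        std::lock_guard<std::mutex> lock(m);
                        if (next >= tasks_.size()) return;  // queue drained
                        task = tasks_[next++];              // claim the next task
                    }
                    task();  // run outside the lock, so tasks overlap
                }
            });
        }
        for (auto& w : workers) w.join();  // wait until every task finishes
    }

private:
    int n_;
    std::vector<std::function<void()>> tasks_;
};
```

The mutex here only guards the queue index; any shared state touched inside a task is the task's own responsibility, matching the division of labour described above.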

This is the simple test that uses the ThreadManager.

Technically, several important C++ standard libraries are used: vector, functional, future, mutex, and condition_variable. Vector is an STL container that lets me store a list of items (just like an array or list).

Since C++11, the language supports lambda expressions. Then, using functional, I can use the std::function template to store any function object.

std::function<void()> fn = []() {
  for (int i = 0; i < 4; i++) {
    std::this_thread::sleep_for(std::chrono::milliseconds(300));
    std::cout << "thread1: " << i << std::endl;
  }
};

The code above initializes a variable fn that stores an anonymous function. Previously, this would be done with a callback function, which makes the code difficult to manage. By using std::function and std::vector, I can store all the anonymous functions in a vector.

Future is a very interesting library. If you are familiar with JavaScript promises or C# async, it is similar to these (futures and promises). Whenever a task is started, it returns a future, because we don't know when the task will end. We could also use a loop to poll whether a task has ended, but that would be overly complicated. With a future, you can simply wait for the task to end and retrieve its result.

Using future, I do not need to create threads directly (even though it is called ThreadManager). I can use the async function to run a callback asynchronously; it is async that returns the future, and it accepts a lambda expression as its function argument. Great C++11.

C++11 supports mutexes (mutual exclusion) and condition variables. A mutex can prevent race conditions. When we use multi-threading, most of the time the threads share some resource, and reading empty data may crash the program. Therefore, we need to make sure that, while reading or writing, the resource is not accessible by other threads. This can be done by locking the mutex, so that other threads cannot continue. After the operation, we unlock the mutex, and another thread can lock it and continue. Hence, only a single thread can access the resource at a time.

A condition variable is used together with a mutex. We can use a condition variable to make a thread wait until a condition is fulfilled. When a wait is performed (through a unique lock on the mutex), the mutex is released and the thread blocks. The thread waits until the condition variable notifies it to perform the condition check; if the condition is fulfilled, the thread re-acquires the mutex and continues.

In ThreadManager, my previous code used a loop to check the condition: if the condition did not allow the next thread to run, it would sleep a while and check again. This method wastes CPU resources, because it keeps checking the condition. By using a condition variable and a mutex, I can simply stop the thread until it is notified to continue.

Yeah. Modern C++ is cool!

Academic people should use Git and TeX


Mr Torvalds created two amazing things: Linux and Git. The former is an OS kernel; the latter is a version control system. Unfortunately, neither is prevalent in Malaysia.

When I was a lecturer, creating a new programme with its various courses was truly exhausting. The worst part was recording the changes to the documents for the government agency's accreditation. If you are systematic, you back up the files. But backing up the files does not tell you what changes you made, unless you create another note for each change, which would be double the work. If you say you can use Microsoft Word's feature to compare documents and see the changes, that is totally impractical when the two documents are big and the changes are vast.

What is the best solution? In practice, you need to ask your boss to step down and replace all your colleagues 😉, because if your boss doesn't understand your solution, he and your colleagues will treat you as an idiot.

In the situation above, the best solution is to use TeX and Git. TeX allows you to create your document in plain text, and plain text is essential for Git. Git allows you to keep track of the changes you have made, and you can see the difference between versions line by line in text format.

Git allows you to work collaboratively with your team members on the same project by preventing conflicts. Preventing conflicts doesn't mean there will be no conflicts, but you will detect them earlier and resolve them manually.

With TeX, unlike WYSIWYG application software such as Microsoft Word or LibreOffice Writer (I don't think Malaysians use LibreOffice), we create the document using a markup language and set up the paper style through some complex TeX statements. Though the setup may be exhausting and TeX has a steep learning curve, the results are sustainable for the long term. The document style can be reused by the whole institution, especially if students are provided with the thesis format in TeX form. Moreover, TeX skills are useful for publishing papers in journals and conferences: you can easily port your content to another TeX style, such as the IEEE conference paper style.

Sadly, most people pick easy-to-learn tools to do complex tasks, yet feel proud and are not open-minded enough to learn useful skills.

An academic institution should offer training for staff, lecturers, and students to learn TeX. It would be even better to offer training in Git, Linux, and the command line. Open-source software can reduce students' financial burden, avoid pirated software, and prevent virus infections.

However, these are not implemented in most academic institutions. As a result, users spend hours editing document styles, generating tables of contents, or even preparing tables of contents manually. It is sad to create a table of contents manually: any change to the pages means you have to edit the table of contents again. If you use TeX, you can focus on the content instead of the styling.

Lastly, because the culture focuses on the outlook of the document, such as styling, instead of the quality of the contents, implementing Git and TeX is just an unrealistic approach. Great Microsoft Word, you are a legend.

PHP programming


PHP was a great programming language for web development. It surpassed VBScript for ASP and Perl for CGI. It is favoured because its syntax is based on C and C++. It supports both the procedural and object-oriented programming paradigms. A lot of its functions resemble C functions, such as printf, fprintf, sprintf, fopen, etc. Similarly, it can work directly with C libraries such as expat, zlib, libxml2, etc. A lot of great content management systems (CMS) are written in PHP, such as Drupal, WordPress, and Joomla.

However, a lot of new programming languages have emerged and are surpassing it.

Taken from http://skillprogramming.com/top-rated/php-best-practices-1234

Arrays are passed by value

Because PHP syntax is very similar to C and C++, it can use the “&” reference operator to pass a parameter by reference into a function. But this is very different from other languages such as Python and JavaScript. In Python and JavaScript, function parameters behave as if passed by value for all primitive data types, such as integers, floats, strings, and booleans; complex data types like objects and arrays are passed by reference, meaning they are mutable inside the function, including the Date object in JavaScript.

function change_array($arr) {
    $arr[0] = 100;
}

function change_object($obj) {
    $obj->value = 100;
}

function change_many_objects($arr) {
    $arr[0]->value = 100;
}

function change_object_array($obj) {
    $obj->array[0] = 100;
}

class MyObj {
    var $value;
    var $array;
}

function main() {
    $arr = [1, 2, 3];
    $obj = new MyObj();
    $obj->value = 10;

    change_array($arr);
    change_object($obj);

    echo $arr[0], "\n"; // still 1, not changing
    echo $obj->value, "\n"; // changed to 100

    $arr_obj = [ new MyObj(), new MyObj(), new MyObj() ];
    $arr_obj[0]->value = 10;
    change_many_objects($arr_obj);
    echo $arr_obj[0]->value, "\n"; // changed to 100

    $obj_arr = new MyObj();
    $obj_arr->array = [1, 2, 3];
    change_object_array($obj_arr);
    echo $obj_arr->array[0], "\n"; // changed to 100

    $obj_a = new MyObj();
    $obj_a->value = 10;
    $obj_b = $obj_a;
    $obj_b->value = 20;
    echo $obj_a->value, "\n"; // 20
    echo $obj_b->value, "\n"; // 20

    $obj_c = &$obj_a;
    $obj_c->value = 30;
    echo $obj_a->value, "\n"; // 30
    echo $obj_b->value, "\n"; // 30
    echo $obj_c->value, "\n"; // 30
}

main();

In the example above, the function change_array() does not modify the array being passed, because the array is passed by value, unless we use the “&” reference operator.

The function change_object(), however, does change the object being passed.

One of the key-points of PHP 5 OOP that is often mentioned is that “objects are passed by references by default”. This is not completely true. […]

(from PHP manual)

So, basically, function parameters are passed by value, even if the parameter is an array. But objects are dealt with differently: we can treat an object variable as a pointer, if you are familiar with the C++ “new” operator. In C++, “new” creates an instance and returns a pointer to it. If we understand this concept, then this is how it works in PHP (as far as I know).

Consequently, although the parameter of change_many_objects() is an array, and the array is passed by value, the function still changes the values of the objects within the array. This is because the array stores the “pointers” to the object instances; the function changes the instances pointed to by the pointers stored in the array.

In summary, PHP treats arrays as values, which is different from Python, JavaScript, and even C and C++. However, PHP treats objects as pointers; that is why an object is mutable when it is passed to a function.

Other limitations

PHP was created before RESTful APIs became common. PHP focuses on GET and POST, but not methods like PUT and DELETE, and the available HTTP methods are server dependent. As a result, an HTTP server such as Apache requires some extra configuration for PHP to work (PHP 5.4 did later add a built-in development web server, but it is not meant for production). This is unlike Node and Ruby on Rails: Node itself has an HTTP module, and Ruby on Rails ships with the WEBrick HTTP server.

Compared to languages like Python, Node (JavaScript), Ruby, and Lua, PHP lacks a true REPL (read-eval-print loop). An interactive shell is different from a REPL: with a REPL, the call to print the result to the console can be omitted, because the REPL prints the value of each expression automatically.