Some modifications to SSD-Tensorflow

In the previous article, I introduced a new library for Object Detection. But yesterday, after I added slim.batch_norm() into ‘nets/ssd_vgg_512.py’ like this:
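The change was roughly like this (a sketch from memory; the real arg_scope in ‘ssd_vgg_512.py’ has more entries, and the ‘is_training’ plumbing here is my own):

```python
import tensorflow as tf

slim = tf.contrib.slim

# Wrap slim.conv2d with a batch_norm normalizer inside the net's arg_scope.
def ssd_arg_scope(weight_decay=0.0005, is_training=True, data_format='NHWC'):
    with slim.arg_scope([slim.conv2d],
                        activation_fn=tf.nn.relu,
                        normalizer_fn=slim.batch_norm,
                        normalizer_params={'is_training': is_training},
                        weights_regularizer=slim.l2_regularizer(weight_decay)):
        with slim.arg_scope([slim.conv2d, slim.max_pool2d],
                            padding='SAME',
                            data_format=data_format) as sc:
            return sc
```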

Although training still ran correctly, the evaluation reported errors:

For quite a while I wondered why adding a simple batch_norm would make the shapes incorrect. Finally I found this page from Google. It said this type of error is usually caused by an incorrect data_format setting. Then I checked the code of ‘train_ssd_network.py’ and ‘eval_ssd_network.py’ and got the answer: the training code uses ‘NCHW’ but the evaluation code uses ‘NHWC’!
After changing the data_format to ‘NCHW’ in ‘eval_ssd_network.py’, the evaluation script ran successfully.
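For reference, the fix amounts to one line. In my copy the format was a constant near the top of ‘eval_ssd_network.py’ (treat the exact name and location as an assumption; in your copy it may be a command-line flag):

```python
# eval_ssd_network.py: make evaluation use the same layout as training,
# otherwise batch_norm sees transposed shapes and the graph fails.
DATA_FORMAT = 'NCHW'  # was 'NHWC'
```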

Choosing an Object Detection Framework written in Tensorflow

Recently I needed to train a DNN model for an object detection task. In the past, I used the object detection framework from Tensorflow’s ‘models’ repository. But there are two reasons I couldn’t use it for my own project:

First, it’s too big. There are more than two hundred Python files in this subproject. It contains different types of models, such as ResNet and MobileNet, and different types of detection algorithms, such as RCNN and SSD (Single Shot Detector). An enormous codebase usually means it is hard to understand and customize, and it could take a lot of time to build a new project by referencing a big old one.
Second, it uses too many configuration files, so even trying to understand these configs would be tedious work.

Then I found a better project on GitHub: SSD-Tensorflow. It’s simple: it includes fewer than fifty Python files, and you can start training with just one command:
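Something like the following (the paths and dataset names are placeholders from my setup; the project’s README lists the full set of flags):

```bash
python train_ssd_network.py \
    --train_dir=./logs/ssd_512_vgg \
    --dataset_dir=./tfrecords \
    --dataset_name=pascalvoc_2012 \
    --dataset_split_name=train \
    --model_name=ssd_512_vgg \
    --batch_size=16
```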

No configuration files, no ProtoBuf formats.
Although simple and easy to use, this project still has the capability to be extended. We can add a new dataset by changing the code in the ‘datasets’ directory and add new networks in the ‘nets’ directory. Let’s see the basic implementation of its VGG-based SSD:
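An abbreviated sketch of the network definition (the real ssd_net() in ‘nets/’ continues through many more blocks plus the multibox prediction layers):

```python
# Each VGG block is a few 3x3 convolutions followed by pooling; the
# end_points dict keeps the feature maps SSD will predict boxes from.
end_points = {}
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
end_points['block1'] = net
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
end_points['block2'] = net
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
end_points['block3'] = net
```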

Quite straightforward, isn’t it?

Finding core-dump file


On a new server, my program got a ‘core dump’. But I couldn’t find the core-dump file in the current directory as usual.
First I checked the ‘ulimit’ configuration:
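The check is just one command; it printed ‘unlimited’ on that server, which is why it looked fine:

```bash
$ ulimit -c
unlimited
```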

Seems OK. The system should generate a core-dump file when the program crashes. But where is it?
Eventually, I found the answer: the core-dump file is generated following the pattern written in /proc/sys/kernel/core_pattern.
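On that server it looked something like this (the exact pattern differs per machine, so treat the suffix here as an assumption; ‘%e’ is the executable name, ‘%p’ the pid, ‘%t’ a timestamp):

```bash
$ cat /proc/sys/kernel/core_pattern
/var/coredump/core-%e-%p-%t
```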

That is why all the core-dump files are located in the /var/coredump/ directory. The pattern settings of the ‘core_pattern’ file are explained here.

Migrate blog to AWS’s ec2


My blog had been hosted on Linost since 2013. But recently the support staff from Linost notified me that my site had driven the CPU usage of the host machine to 100%, so the hosting system automatically ‘limited’ my resources, which actually means my site was totally shut down.
The first thing I wanted to do was log in to my host machine using SSH. But unfortunately, Linost doesn’t support SSH login. Without SSH and the usual Linux commands, how could I find the cause of the high load and resolve it?
Finally, I chose AWS’s ec2 for my new hosting machine. To reduce the cost, I chose ‘t2.nano’, the cheapest instance type. Although it only has 512MB of memory, it’s adequate for running a basic WordPress blog. Additionally, I bought a reserved instance by paying upfront for a whole year, which decreased the cost further (about a 50% discount).
Using ec2 has another advantage: I don’t need to install MySQL/Apache/PHP/WordPress by myself. With Jetware’s AMI (Amazon Machine Image), a basic WordPress blog can be launched with a few clicks. Jetware’s AMI uses LEMP (Linux/nginx web Engine/MySQL/PHP) as its basic software stack, and also includes phpMyAdmin for managing MySQL. This AMI is totally free. The only small defect is that the MySQL ‘root’ account is set up with an empty password. But we can fix that simply:
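One way to set the password (‘MyNewPassword’ is a placeholder, of course):

```bash
$ mysqladmin -u root password 'MyNewPassword'
```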

By typing ‘http://donghao.org/phpmyadmin/’ into the browser, I can manage MySQL easily:




That’s awesome! Thanks to Jetware.

Source code analysis for Autograd

Autograd is a convenient tool to automatically differentiate native Python and Numpy code.

Let’s look at an example first:
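A minimal example along these lines (reconstructed to match the numbers discussed below):

```python
import autograd.numpy as np
from autograd import grad

def f(x):
    return np.square(x) + 1.0

grad_f = grad(f)    # grad() returns a new function computing df/dx
print(grad_f(1.6))  # 3.2
```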

The result is 3.2

f(x) = square(x) + 1, its derivative is 2*x, so the result is correct.

The function grad() actually returns a ‘function object’, which is ‘grad_f’. When we call grad_f(1.6), it ‘traces’ f(x) by:
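Abridged from autograd’s ‘tracer.py’ as I read it (details vary slightly between versions):

```python
def trace(start_node, fun, x):
    with trace_stack.new_trace() as t:
        # Wrap the raw input in a box that records every operation on it.
        start_box = new_box(x, t, start_node)
        end_box = fun(start_box)
        if isbox(end_box) and end_box._trace == start_box._trace:
            return end_box._value, end_box._node
        else:
            warnings.warn("Output seems independent of input.")
            return end_box, None
```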




The ‘fun’ argument is our f(x) function.



In ‘trace()’, f() is actually called not with ‘x’ but with an ArrayBox object. The ArrayBox object has two purposes:

1. Go through all the operations in f() along with ‘x’, so it can get the real result of f(x)
2. Collect the corresponding gradients of all the operations in f()

The ArrayBox class overrides all the basic arithmetic operations, such as add/subtract/multiply/divide/square. Therefore it can catch all the operations in f(x).
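The idea looks roughly like this (a hedged sketch, not verbatim autograd source; ‘anp’ stands for autograd’s wrapped numpy module):

```python
class ArrayBox(Box):
    # Every magic method delegates to a traced primitive, so arithmetic
    # on a boxed value both computes the result and records the op.
    def __add__(self, other):
        return anp.add(self, other)
    def __sub__(self, other):
        return anp.subtract(self, other)
    def __mul__(self, other):
        return anp.multiply(self, other)
    def __truediv__(self, other):
        return anp.true_divide(self, other)
```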




After catching all the operations, ArrayBox can look up the gradients table to get the corresponding gradient of each one, and apply the chain rule to get the final gradient.

The gradients table is shown below:
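A few entries, abridged from ‘autograd/numpy/numpy_vjps.py’ (the real file also wraps these in broadcasting helpers): each primitive registers one vector-Jacobian product per argument:

```python
defvjp(anp.add,      lambda ans, x, y: lambda g: g,
                     lambda ans, x, y: lambda g: g)
defvjp(anp.subtract, lambda ans, x, y: lambda g: g,
                     lambda ans, x, y: lambda g: -g)
defvjp(anp.multiply, lambda ans, x, y: lambda g: y * g,
                     lambda ans, x, y: lambda g: x * g)
```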



Besides, Autograd has other tricks to complete its work. Take the function wrapper ‘@primitive’ as an example. This decorator makes sure users can add new custom-defined operations into Autograd.
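For example (adapted from autograd’s tutorial; ‘logsumexp’ here is user code, not part of the library):

```python
import autograd.numpy as np
from autograd import grad
from autograd.extend import primitive, defvjp

@primitive
def logsumexp(x):
    """Numerically stable log(sum(exp(x)))."""
    max_x = np.max(x)
    return max_x + np.log(np.sum(np.exp(x - max_x)))

# Tell autograd how to differentiate the new primitive.
defvjp(logsumexp, lambda ans, x: lambda g: g * np.exp(x - ans))

print(grad(logsumexp)(np.array([0.5, 1.0, 1.5])))
```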

The source code of Autograd is nice and neat. Its examples include a fully-connected network, a CNN, and even an RNN. Let’s take a glimpse of Autograd’s implementation of the Adam optimizer to get a feel for its concise code style:
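From ‘autograd/misc/optimizers.py’ (abridged; check your version for the exact text):

```python
def adam(grad, init_params, callback=None, num_iters=100,
         step_size=0.001, b1=0.9, b2=0.999, eps=10**-8):
    """Adam as described in http://arxiv.org/pdf/1412.6980.pdf."""
    flattened_grad, unflatten, x = flatten_func(grad, init_params)
    m = np.zeros(len(x))
    v = np.zeros(len(x))
    for i in range(num_iters):
        g = flattened_grad(x, i)
        if callback:
            callback(unflatten(x), i, unflatten(g))
        m = (1 - b1) * g + b1 * m          # First moment estimate.
        v = (1 - b2) * (g ** 2) + b2 * v   # Second moment estimate.
        mhat = m / (1 - b1 ** (i + 1))     # Bias correction.
        vhat = v / (1 - b2 ** (i + 1))
        x = x - step_size * mhat / (np.sqrt(vhat) + eps)
    return unflatten(x)
```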



Prediction of Red Wine Quality

On the Kaggle platform, there is an example dataset about the quality of red wine. I wrote some code for it using scikit-learn and pandas:
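A sketch of the script (the column names follow Kaggle’s ‘winequality-red.csv’; the random forest and the split here are illustrative choices, not necessarily the exact ones I used):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv('winequality-red.csv')
X = df.drop('quality', axis=1)
y = df['quality']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))

# Rank the features by importance.
for name, imp in sorted(zip(X.columns, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(name, imp)
```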

The results reported by the snippet above:

Looks like the most important feature for predicting the quality of red wine is ‘alcohol’. Intuitive, right?

Use PCA (Principal Component Analysis) to blur color image

I wrote an example of blurring a color picture by using PCA from scikit-learn:
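My first attempt looked like this sketch (the file name is a placeholder); it feeds the 3-D image array straight into PCA:

```python
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA

img = np.asarray(Image.open('lenna.jpg'))  # shape (H, W, 3)
pca = PCA(n_components=16)
pca.fit_transform(img)  # fails: PCA only accepts 2-D arrays
```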

But it reports:

The correct solution is to reshape the image into a 2-dimensional array, and inverse-transform it back after PCA:
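A working sketch under the same assumptions; the number of components controls how blurry the result is:

```python
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA

img = np.asarray(Image.open('lenna.jpg'), dtype=np.float64)  # (H, W, 3)
h, w, c = img.shape
flat = img.reshape(h, w * c)    # collapse to 2 dimensions

pca = PCA(n_components=16)      # fewer components -> stronger blur
reduced = pca.fit_transform(flat)
restored = pca.inverse_transform(reduced).reshape(h, w, c)

Image.fromarray(np.clip(restored, 0, 255).astype(np.uint8)).save('blurred.jpg')
```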

It works very well now. Let’s see the original image and the blurred image:



Original Image



Blurred Image

Do tf.random_crop() operation on GPU

When I run code like:
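The failing code was along these lines (shapes are placeholders), with the op pinned onto the GPU explicitly:

```python
import tensorflow as tf

with tf.device('/gpu:0'):
    images = tf.random_uniform([32, 256, 256, 3])
    cropped = tf.random_crop(images, [32, 224, 224, 3])

with tf.Session() as sess:
    sess.run(cropped)
```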

it reports:

Looks like tf.random_crop() doesn’t have a CUDA kernel implementation. Therefore I need to write it myself. The solution is surprisingly simple: write a function to random-crop one image by using tf.random_uniform() and tf.slice(), and then use tf.map_fn() to apply it to multiple images.
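A sketch of the replacement (the crop size and channel count are placeholders):

```python
import tensorflow as tf

CROP_H, CROP_W = 224, 224

def random_crop_single(image):
    # image: [H, W, C]; pick a random top-left corner, then slice.
    shape = tf.shape(image)
    offset_h = tf.random_uniform([], 0, shape[0] - CROP_H + 1, dtype=tf.int32)
    offset_w = tf.random_uniform([], 0, shape[1] - CROP_W + 1, dtype=tf.int32)
    return tf.slice(image, [offset_h, offset_w, 0], [CROP_H, CROP_W, 3])

def random_crop_batch(images):
    # images: [N, H, W, C]; crop each image independently.
    return tf.map_fn(random_crop_single, images)
```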

It can run on GPU now.

Regularization loss in ‘slim’ library of Tensorflow

My Python code using the slim library to train a classification model in Tensorflow:
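A condensed sketch of what I had (the model itself is a placeholder; the important parts are the ‘weights_regularizer’ and the bare cross-entropy loss):

```python
import tensorflow as tf

slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.int64, [None])
weight_decay = 1e-4

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    weights_regularizer=slim.l2_regularizer(weight_decay)):
    net = slim.conv2d(images, 32, [3, 3], scope='conv1')
    net = slim.flatten(net)
    logits = slim.fully_connected(net, 10, activation_fn=None, scope='fc1')

# The bug: only the cross-entropy loss is being minimized here.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```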

It works fine. However, no matter what value ‘weight_decay’ takes, the training accuracy of the model easily reaches higher than 90%. It seems ‘weight_decay’ just doesn’t work.
In order to find out the reason, I reviewed Tensorflow’s code for ‘tf.losses.sparse_softmax_cross_entropy()’:
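Abridged from ‘tensorflow/python/ops/losses/losses_impl.py’ (TF 1.x; the exact text varies between versions):

```python
def sparse_softmax_cross_entropy(labels, logits, weights=1.0, scope=None,
                                 loss_collection=ops.GraphKeys.LOSSES):
  with ops.name_scope(scope, "sparse_softmax_cross_entropy_loss",
                      (logits, labels, weights)) as scope:
    losses = nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits, name="xentropy")
    return compute_weighted_loss(losses, weights, scope, loss_collection)
```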

‘losses.sparse_softmax_cross_entropy()’ simply calls ‘tf.nn.sparse_softmax_cross_entropy_with_logits()’ and wraps the result. Then let’s look into the implementation of ‘compute_weighted_loss()’:
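Also abridged; the key line is the call to add_loss(), which drops the loss into the collection passed in (GraphKeys.LOSSES by default):

```python
def compute_weighted_loss(losses, weights=1.0, scope=None,
                          loss_collection=ops.GraphKeys.LOSSES):
  with ops.name_scope(scope, "weighted_loss", (losses, weights)):
    weighted_losses = math_ops.multiply(losses, weights)
    loss = math_ops.reduce_sum(weighted_losses)
    util.add_loss(loss, loss_collection)  # -> GraphKeys.LOSSES
    return loss
```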

The loss from ‘losses.sparse_softmax_cross_entropy()’ is added to the ‘GraphKeys.LOSSES’ collection. Then where do the regularization losses of the parameters go? Are they added to the same collection? Let’s check. All the layers written with ‘tf.layers’ or ‘tf.contrib.slim’ inherit from ‘class Layer’ and call ‘add_loss()’ when the layer calls ‘add_variable()’. Let’s check ‘add_loss()’ of the base class ‘Layer’:
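Abridged from ‘tensorflow/python/layers/base.py’ (TF 1.x):

```python
def add_loss(self, losses, inputs=None):
    losses = nest.flatten(losses)
    self._losses += losses
    # Note: the regularization collection, not GraphKeys.LOSSES.
    _add_elements_to_collection(losses, ops.GraphKeys.REGULARIZATION_LOSSES)
```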

It’s weird. The regularization loss from the weight variables is not added to ‘GraphKeys.LOSSES’ but to ‘GraphKeys.REGULARIZATION_LOSSES’. Then how can we get all the losses at training time? After grepping for ‘REGULARIZATION_LOSSES’ in the whole Tensorflow codebase, I came up with ‘get_total_loss()’:
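From ‘tf.losses.get_total_loss()’ (abridged):

```python
def get_total_loss(add_regularization_losses=True, name="total_loss"):
  losses = get_losses()                    # GraphKeys.LOSSES
  if add_regularization_losses:
    losses += get_regularization_losses()  # GraphKeys.REGULARIZATION_LOSSES
  return math_ops.add_n(losses, name=name)
```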

That is the secret of losses in ‘tf.layers’ and ‘tf.contrib.slim’: we should use ‘get_total_loss()’ to fetch the model loss and the regularization loss together!
After changing my code:
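The change, sketched against the snippet above:

```python
# Register the cross-entropy loss in GraphKeys.LOSSES, then minimize the
# sum of it and every regularization loss.
tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
total_loss = tf.losses.get_total_loss(add_regularization_losses=True)
train_op = tf.train.AdamOptimizer(1e-3).minimize(total_loss)
```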

The ‘weight_decay’ works well now (which means the training accuracy can no longer reach a high value easily).

Using multiple GPUs for training in a distributed environment of Tensorflow

I am trying to write code for training on multiple GPUs. The code is mainly from the example of ‘Distributed Tensorflow’. I changed the code slightly to run on GPUs:
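A condensed sketch of my version (the cluster addresses and the tiny model are placeholders; note the worker_device pinning each task to its own GPU):

```python
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('job_name', 'worker', "'ps' or 'worker'")
tf.app.flags.DEFINE_integer('task_index', 0, 'Index of the task')

cluster = tf.train.ClusterSpec({
    'ps': ['localhost:2222'],
    'worker': ['localhost:2223', 'localhost:2224']})
server = tf.train.Server(cluster, job_name=FLAGS.job_name,
                         task_index=FLAGS.task_index)

if FLAGS.job_name == 'ps':
    server.join()
else:
    with tf.device(tf.train.replica_device_setter(
            worker_device='/job:worker/task:%d/gpu:%d'
                          % (FLAGS.task_index, FLAGS.task_index),
            cluster=cluster)):
        # Placeholder model: one linear layer on random data.
        x = tf.random_normal([32, 10])
        w = tf.get_variable('w', [10, 1])
        loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
        global_step = tf.train.get_or_create_global_step()
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=global_step)

    with tf.train.MonitoredTrainingSession(
            master=server.target,
            is_chief=(FLAGS.task_index == 0)) as sess:
        while not sess.should_stop():
            sess.run(train_op)
```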

But after launching the script below:
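Something like this (‘train_dist.py’ is a placeholder name for the script above):

```bash
python train_dist.py --job_name=ps --task_index=0 &
python train_dist.py --job_name=worker --task_index=0 &
python train_dist.py --job_name=worker --task_index=1 &
```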

it reports:

Seems one MonitoredTrainingSession will occupy the memory of all the GPUs. After searching on Google, I finally got a solution: ‘CUDA_VISIBLE_DEVICES’.
First, change the ‘replica_device_setter’:
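Since every process will now see exactly one GPU, each task pins its ops to ‘gpu:0’ from its own point of view:

```python
with tf.device(tf.train.replica_device_setter(
        worker_device='/job:worker/task:%d/gpu:0' % FLAGS.task_index,
        cluster=cluster)):
    build_model()  # same model code as before (placeholder)
```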

and then use this shell script to launch training processes:
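Again with ‘train_dist.py’ as the placeholder name:

```bash
CUDA_VISIBLE_DEVICES=0 python train_dist.py --job_name=ps --task_index=0 &
CUDA_VISIBLE_DEVICES=1 python train_dist.py --job_name=worker --task_index=0 &
CUDA_VISIBLE_DEVICES=2 python train_dist.py --job_name=worker --task_index=1 &
```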

The ‘ps’ process will only use GPU0, ‘worker0’ will only use GPU1, ‘worker1’ will only use GPU2, and so on.