“kmem_cache_create: duplicate cache XXX”

In my kernel module, I initially wrote:
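Roughly, it looked like this minimal sketch (struct my_obj and the other names are hypothetical stand-ins; the point is that the cache name lives in a stack buffer):

    #include <linux/module.h>
    #include <linux/slab.h>

    struct my_obj {                       /* hypothetical object the module caches */
            int data;
    };

    static struct kmem_cache *my_cache;

    static int __init my_module_init(void)
    {
            char name[32];                /* temporary buffer on the stack */

            snprintf(name, sizeof(name), "my_obj_cache");
            my_cache = kmem_cache_create(name, sizeof(struct my_obj), 0,
                                         SLAB_HWCACHE_ALIGN, NULL);
            if (!my_cache)
                    return -ENOMEM;
            return 0;
    }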

On CentOS 7 this module works fine, but after porting it to CentOS 6 the module reports:

The key to this problem is the implementation of kmem_cache_create() in the 2.6.32 Linux kernel (used by CentOS 6):
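From memory, the relevant logic in mm/slab.c looks roughly like this (paraphrased, not a verbatim excerpt):

    /* the duplicate check walks the existing caches and compares the
     * caller-supplied name against each cache's stored name pointer */
    list_for_each_entry(pc, &cache_chain, next) {
            if (!strcmp(pc->name, name)) {
                    printk(KERN_ERR "kmem_cache_create: duplicate cache %s\n", name);
                    goto oops;
            }
    }

    /* ... and when the new cache is set up, only the pointer is kept;
     * the string itself is never copied */
    cachep->name = name;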

After creating a new cache, SLAB only stores the pointer to 'name'; it does not strdup() a copy. But the 'name' in my kernel module was a temporary variable on the stack, so the stored pointer soon dangles. When a later kmem_cache_create() runs the duplicate check against that stale pointer, the stack memory may well contain the very name being registered again, so the kernel considers the name "duplicated".
The correct code should look like:
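For example (again with the hypothetical names from the sketch above), pass a string literal or any buffer with static storage duration, so that the pointer SLAB keeps stays valid for the lifetime of the cache:

    static const char cache_name[] = "my_obj_cache";

    my_cache = kmem_cache_create(cache_name, sizeof(struct my_obj), 0,
                                 SLAB_HWCACHE_ALIGN, NULL);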

But why did the old kernel module not report any error on CentOS 7? Because the default memory allocator on CentOS 7 is SLUB, while on CentOS 6 it is SLAB, and they have totally different implementations.

“Cache flush bypassed!” from fio

fio is an effective tool for testing the I/O performance of a block device (or of a file system).

Today a colleague told me that fio had reported "Cache flush bypassed!", which he read as "all I/O bypasses the device cache". I couldn't agree, because the cache of a RAID card can usually only be changed by a dedicated tool (such as MegaCli), not by a test tool.

Reviewing the code of fio:
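the part that prints the message (in filesetup.c) looks roughly like this, from memory (paraphrased, not a verbatim excerpt):

    ret = blockdev_invalidate_cache(f);
    if (ret < 0 && errno == EACCES && geteuid()) {
            if (!root_warn) {
                    log_err("fio: only root may flush block devices."
                            " Cache flush bypassed!\n");
                    root_warn = 1;
            }
            ret = 0;
    }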

and the implementation of blockdev_invalidate_cache() is:
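which, on Linux, is essentially just a BLKFLSBUF ioctl on the block device (paraphrased from os/os-linux.h):

    static inline int blockdev_invalidate_cache(struct fio_file *f)
    {
            return ioctl(f->fd, BLKFLSBUF);
    }

BLKFLSBUF only flushes the kernel's buffer cache for that device; it has nothing to do with the DRAM cache on a RAID card.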

Therefore "Cache flush bypassed!" does not mean that all I/O will bypass the device's buffer; it actually means: "fio can't flush the kernel's cache for this device, so let's just ignore that and continue".
If you want to disable the DRAM cache on the RAID card, the correct way is to set the RAID card's cache policy to "Write Through":
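For example, with LSI's MegaCli the command is typically something like the following (check the exact option spelling of your MegaCli version):

    MegaCli64 -LDSetProp WT -LALL -aALL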

Use recv() instead of read() for socket

When we use read() to read data from a socket, like this:
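a minimal sketch, assuming a fixed-size struct msg and a connected, blocking TCP socket:

    struct msg buf;
    ssize_t ret;

    ret = read(sockfd, &buf, sizeof(struct msg));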

the read() may return a 'ret' smaller than sizeof(struct msg), even when the socket is not O_NONBLOCK: a blocking read() on a stream socket returns as soon as some data is available, not necessarily the full amount requested.
The correct way is:
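again as a sketch with the same hypothetical struct msg, using MSG_WAITALL so the kernel keeps blocking until the full amount has arrived:

    ret = recv(sockfd, &buf, sizeof(struct msg), MSG_WAITALL);
    if (ret != sizeof(struct msg)) {
            /* error, peer closed the connection, or the call was
             * interrupted by a signal */
    }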

Then recv() will wait until all sizeof(struct msg) bytes have been read (unless an error or a signal interrupts the call).

China Linux Storage & Filesystem 2015 workshop (second day)

Zheng Liu from Alibaba led the topic about ext4. The most important change in the EXT-series filesystems this year is that ext3 is gone: in the latest kernels (and actually already in CentOS 7.0), people can only get ext3 behavior by mounting ext4 with special arguments. The encryption feature for ext4 has been completed.

Robin Dong (yes, it's me) from Alibaba gave a presentation about cold storage (the slides are here). We developed a distributed storage system based on a small open-source project called "sheepdog", and modified it heavily to improve data-recovery performance and to make sure it can run on low-end but high-density storage servers.


(Photo: discussion during the tea break)

Yanhai Zhu from Alibaba (we really have done a lot of work on storage) led a topic about caching in virtual-machine environments. Alibaba chose bcache as the code base for developing new cache software.

Robin: Why Bcache? Why not flashcache?
Yanhai: I started with flashcache first, but flashcache did not fit our production environment. First, flashcache is unfriendly to sequential writes. Second, it uses a hash data structure to distribute IO requests, which splits the cached data in a multi-tenant environment. Bcache uses a B-tree instead of a hash table to store data, which suits our requirements better.

They use an aggressive write-back strategy on the cache. It works very well, because the cache sequentializes the write IOs and makes it easy for the backend to absorb pressure peaks.

The last topic was led by Zhongjie Wu from Memblaze, a famous Chinese startup working on flash-storage technology. It was about NVDIMM, one of the hottest hardware technologies of recent years. An NVDIMM is not expensive; it is basically a DDR DIMM backed by a capacitor. Memblaze has developed a new 1U storage server with an NVDIMM and many flash cards. It runs their own OS and can connect to clients over Fibre Channel or Ethernet. The main purpose of the NVDIMM is to reduce latency, and (of course) they use a write-back strategy.

The big problem they face with NVDIMM is that the CPU can't flush the data in its L1 cache to the NVDIMM when the whole server loses power. To solve this, Memblaze uses write-combining across the CPU cores; it hurts performance a little, but in the end it avoids losing data.


(Photo: all the attendees of CLSF 2015)

Articles from other attendees:

https://blogs.oracle.com/linuxkernel/entry/china_linux_storage_and_file1

China Linux Storage & Filesystem 2015 workshop (first day)

The first topic was led by Haomai Wang from XSKY. He first introduced some basic concepts of Ceph, and I caught the opportunity to ask some questions.

Robin (from Alibaba): Does Ceph cache all the metadata (the so-called "cluster map") on the monitor nodes, so that a client can fetch data with just one network hop?
Haomai: Yes. One hop, straight to the OSD.
Robin: If I use CephFS, is it still one hop?
Haomai: Still one hop. Although CephFS adds an MDS, the MDS does not store the data or the metadata of the filesystem; it only stores the context of the distributed locks.

Ceph also supports SMB 2.0/3.0 now. On Linux, it is recommended to use iSCSI to access a Ceph storage cluster, because using the rbd/libceph kernel modules would require updating the kernel on every client. Ceph uses a pipeline model for message processing, which is fine for hard disks but not for SSDs. In the future, the developers will use an async framework (such as Seastar) to refactor Ceph.

Robin: If I use three replicas in Ceph, will the client write the three copies concurrently?
Haomai: No. The IO first goes to the primary OSD; the primary OSD then issues two replicated IOs to the other two OSDs, waits until both come back, and only then tells the client "the IO succeeded".
Robin: OK, so we still have two hops… Is it difficult to change the OSDs to write at the same time, so that the latency of Ceph becomes lower?
Haomai: That would not be easy. Ceph relies on the primary OSD to keep write transactions consistent.

A future development plan for Ceph is deduplication at the pool level. Coly Li (from SUSE) said that deduplication is better done at the application level than at the block level, because the duplicated information has already been split apart at the block level. But the developers in the Ceph community apparently still want to make Ceph omnipotent.


(Photo: discussion about Ceph)

Jiaju Zhang from Red Hat led the topic about Ceph use cases in enterprises. Ceph has become the most famous open-source storage software in the world and is used at Red Hat, Intel, SanDisk (low-end storage arrays), Samsung, and SUSE.

The next topic, led by Asias He from OSv, was about ScyllaDB and Seastar. ScyllaDB is a distributed key/value store engine written in C++14 and fully compatible with Cassandra; it also runs CQL (the Cassandra Query Language). In the benchmark graph they showed, ScyllaDB is about 40 times faster than Cassandra. The asynchronous development framework behind ScyllaDB is called Seastar.

Robin: What’s the magic in ScyllaDB?
Asias: We shard requests to each CPU core and run with no locks and no thread switching. Data is zero-copy, and we use bidirectional queues to transfer messages between cores. The test results are based on the kernel TCP/IP network stack, but we will use our own network stack in the future.

Yanhai Zhu (from Alibaba): I think your test is not entirely fair: ScyllaDB is designed to run on many cores, but Cassandra is not. You should compare ScyllaDB against 24 Cassandra instances, not just one.
Asias: Maybe you are right. But ScyllaDB uses message queues to pass messages between CPU cores, so it avoids the cost of atomic operations and locks. And Cassandra is written in Java, which means performance drops when the JVM does garbage collection. ScyllaDB is written entirely in C++, so its performance is much steadier.

The last topic today was led by Xu Wang, the CTO of Hyper (a startup company in China working on running containers like VMs).
Hyper means "hypervisor" plus "Docker image": customers can now run Docker images on Xen/KVM/VirtualBox.


(Photo: the man on the right is Xu Wang)

Run docker on centos6


Docker uses the device-mapper thin-provisioning target as its default storage driver, so if we want to run Docker on CentOS 6, we have to update the kernel first. I used Linux kernel 4.11 and noticed that these kernel options should be set:
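For the device-mapper thin-provisioning driver alone, the key ones are roughly the following (the usual namespace and cgroup options that Docker needs have to be enabled as well):

    CONFIG_NAMESPACES=y
    CONFIG_CGROUPS=y
    CONFIG_VETH=y
    CONFIG_BRIDGE=y
    CONFIG_BLK_DEV_DM=y
    CONFIG_DM_THIN_PROVISIONING=y
    CONFIG_DM_SNAPSHOT=y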

After building the kernel and rebooting into it, I still couldn't launch the Docker service, and finally found the solution:

An example of a Mesos Python framework to calculate Pi

I have written an example of a Mesos framework in Python. It simply calculates Pi using the Monte Carlo algorithm.
The whole source code is at https://github.com/RobinDong/mesos-python-examples/tree/master/calculate_pi
At the beginning, I used Python threading in "launchTask()":
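A minimal sketch of that first version (the executor skeleton and helper names are a reconstruction, not the original code):

    import random
    import threading

    def monte_carlo_pi(samples):
        # count random points falling inside the unit quarter-circle
        inside = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / samples

    class PiExecutor(object):
        # sketch; the real executor derives from mesos.interface.Executor
        def launchTask(self, driver, task):
            def run_task():
                result = monte_carlo_pi(10000000)
                # ... report `result` back via driver.sendStatusUpdate()
            threading.Thread(target=run_task).start()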

But I found that the executors only used a small part of the CPU resources on the slave machines (about 100%~150%, which is too low on an 8-core server). At first I thought it might be limited by cgroups, but later the answer was revealed: a multi-threaded Python application effectively runs on a single core because of the GIL.

With no other choice, I had to change my code: the thread now launches a new process, the new process calculates the result, and finally the result is returned to the thread. It works fine in my Mesos cluster.
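Roughly, the reworked run_task looks like this (again a sketch, not the original code): the GIL only serializes threads inside one interpreter, so moving the computation into a child process lets it use a full core.

    import multiprocessing

    def run_task():
        # the thread spawns a worker process; the CPU-bound Monte Carlo
        # computation runs there, outside this interpreter's GIL
        pool = multiprocessing.Pool(processes=1)
        result = pool.apply(monte_carlo_pi, (10000000,))
        pool.close()
        pool.join()
        # ... report `result` back via the executor driver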

Override the __init__ of BaseHTTPRequestHandler in python

Currently I am writing some Python programs for the front end of our storage system. By using 'BaseHTTPRequestHandler' and a 'ThreadedHTTPServer', we can implement a simple multi-threaded HTTP server quickly. But after adding an '__init__()' to our 'MyHandler', it no longer works correctly:

Then I found this statement in the Python docs:

But thanks to that explanation and the code example, we can override the '__init__' of 'BaseHTTPRequestHandler' this way:
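A sketch of the pattern (the 'storage' attribute is just a placeholder for whatever state the handler needs): do our own initialization first and call the parent __init__ last, because BaseRequestHandler.__init__() immediately calls setup(), handle() and finish() to process the request.

    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
    from SocketServer import ThreadingMixIn

    class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
        pass

    class MyHandler(BaseHTTPRequestHandler):
        def __init__(self, request, client_address, server):
            # set up our own state *before* the parent __init__,
            # which runs the whole request-handling cycle
            self.storage = server.storage          # placeholder attribute
            BaseHTTPRequestHandler.__init__(self, request, client_address, server)

        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write("hello\n")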

Solve a USB network card problem

I have been doing some source-code porting work on the Linux kernel recently.
After I rebooted my server (Ubuntu 14.04) into the new kernel version 3.19.8, it could not be reached over ssh any more, only through the serial port.
The server uses a USB network card, so at first I suspected that some kernel driver for the USB NIC was missing from the .config file. Therefore I booted back into the old kernel and looked for useful information in 'dmesg':

The eth3 interface is using an MII port, so when I grep for "link up …lpa" in the 3.19.8 kernel source code, I find that the message must be printed by this code in drivers/net/mii.c:

The only place where the "ASIX AX88772B" driver calls mii_check_media() is in drivers/net/usb/asix_devices.c:

So far, the reason is that the system does not "reset" the USB network card after booting up. But why does it forget to reset the USB NIC only under the 3.19.8 kernel? After checking /etc/network/interfaces:

the answer is: the device name of the USB NIC was changed to "eth5" by udevd under the 3.19.8 kernel (the new kernel recognizes more network ports, so eth0 through eth4 are already occupied by the system), so the network scripts can't bring it up.

Solution:
Pin the name of the USB NIC by adding the content below to /etc/udev/rules.d/70-persistent-net.rules:
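for example (the MAC address below is just a placeholder; use the real address of the USB NIC):

    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="usbnet0"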

and add a stanza to /etc/network/interfaces so that "usbnet0" is brought up automatically:
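for example, assuming the interface simply uses DHCP:

    auto usbnet0
    iface usbnet0 inet dhcp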

puppet 3 certificate problem on centos 7

I configured the puppet master and agent by following these steps. But when I ran "puppet agent -t", it reported an error:

My OS version is CentOS 7 and the puppet version is 3.7.5. After trying the approach given in that answer, the problem still existed. Therefore, I am writing down the final working steps here for future reference: