Find Out What Is Using Your Swap

Have you ever logged in to a server, run free, seen that a bit of swap is in use and wondered what's in there? Knowing what's in there usually isn't very indicative of anything, or even overly helpful; mostly it's a curiosity thing.

Either way, starting from kernel 2.6.16, we can find out using smaps, which can be found in the proc filesystem. I’ve written a simple bash script which prints out all running processes and their swap usage. It’s quick and dirty, but it does the job and can easily be modified to work on any info exposed in /proc/$PID/smaps. If I find the time and inspiration, I might tidy it up and extend it a bit to cover some more alternatives. The output is in kilobytes.

#!/bin/bash
# Get current swap usage for all running processes
# Erik Ljungstrom 27/05/2011
SUM=0
OVERALL=0
for DIR in $(find /proc/ -maxdepth 1 -type d | egrep "^/proc/[0-9]") ; do
        PID=$(echo "$DIR" | cut -d / -f 3)
        PROGNAME=$(ps -p "$PID" -o comm --no-headers)
        for SWAP in $(grep Swap "$DIR/smaps" 2>/dev/null | awk '{ print $2 }')
        do
                let SUM=$SUM+$SWAP
        done
        echo "PID=$PID - Swap used: $SUM - ($PROGNAME)"
        let OVERALL=$OVERALL+$SUM
        SUM=0
done
echo "Overall swap used: $OVERALL"

This will need to be run as root for it to gather accurate numbers. It will still work if you don’t, but it will report 0 for any process not owned by your user. Needless to say, it’s Linux only. The output is ordered alphabetically according to your locale (which admittedly isn’t ideal since we’re dealing with numbers), but you can easily apply your standard shell magic to the output. For instance, to find the process using the most swap, just run the script like so:

$ ./getswap.sh | sort -n -k 5

Don’t want to see stuff that’s not using swap at all?

$ ./getswap.sh  | egrep -v "Swap used: 0" |sort -n -k 5

…; and so on and so forth

May 27th, 2011

Example Using Cassandra With Thrift in C++

Due to a very exciting, recently launched project at work, I’ve had to interface with Cassandra through C++ code. As anyone who has done this can testify, the API docs are vague at best, and there are very few examples out there. The constant API changes between 0.x versions don’t help either, nor does the fact that the Cassandra API and Thrift each have their own docs with nothing bridging the two. So at the moment it is very much a case of dissecting header files and looking at the implementation in the Thrift-generated source files.

The only somewhat useful example of using Cassandra with C++ one can find online is this, but due to the API changes, this is now outdated (it’s still worth a read).

So in the hope that nobody else will have to spend the better part of a day piecing things together to achieve even the most basic thing, here’s an example which works with Cassandra 0.7 and Thrift 0.6.
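The full example follows below the fold, but for context, the part that causes most of the head scratching is the transport stack: Cassandra 0.7 expects a framed transport, and the client class comes from the Thrift-generated code. A minimal connection sketch under those assumptions (header locations, the keyspace name and the host/port will depend on your setup):

#include <iostream>
#include <string>

#include <boost/shared_ptr.hpp>
#include <protocol/TBinaryProtocol.h>
#include <transport/TSocket.h>
#include <transport/TTransportUtils.h>

#include "Cassandra.h"  // generated with: thrift --gen cpp interface/cassandra.thrift

using namespace apache::thrift;
using namespace apache::thrift::protocol;
using namespace apache::thrift::transport;
using namespace org::apache::cassandra;

int main()
{
    boost::shared_ptr<TSocket> socket(new TSocket("127.0.0.1", 9160));
    // Cassandra 0.7 talks the framed transport by default; a buffered
    // transport will just hang or throw on the first call
    boost::shared_ptr<TTransport> transport(new TFramedTransport(socket));
    boost::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
    CassandraClient client(protocol);

    try {
        transport->open();
        client.set_keyspace("Keyspace1");   // assumes this keyspace already exists
        std::string version;
        client.describe_version(version);   // out-parameter style, as Thrift generates it
        std::cout << "Connected, API version " << version << std::endl;
        transport->close();
    } catch (const TException &tx) {
        std::cerr << "ERROR: " << tx.what() << std::endl;
    }
    return 0;
}

The other stumbling block is linking: the generated Cassandra.cpp, cassandra_types.cpp and cassandra_constants.cpp typically need to be compiled in alongside -lthrift.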

Read on →
May 21st, 2011

Site Slow After Scaling Out? Yeah, Possibly!

Every now and then, we have customers who outgrow their single server setup. The next natural step is of course splitting the web layer from the DB layer. So they get another server, and move the database to that.

So far so good! A week or so later, we often get the call “Our page load time is higher now than before the upgrade! We’ve got twice as much hardware, and it’s slower! You have broken it!” It’s easy to see where they’re coming from. It makes sense, right?

That is, until you factor in the newly introduced network topology! Today it’s not unusual (that’s not to say it’s acceptable or optimal) for your average wordpress/drupal/joomla/otherspawnofsatan site to run 40-50 queries per page load. Quite often even more! Every one of those queries used to hit a local socket; now each one pays a network round trip, so even a modest half a millisecond of added latency per query turns into an extra 20-25 ms per page load before the database has done any real work.

Read on →
Mar 29th, 2011

A Look at Mysql-5-5 Semi-synchronous Replication

Now that MySQL 5.5 is in RC, I decided to have a look at the semi-synchronous replication. It’s easy to get going, and from my very initial tests it appears to be working a treat.

This mode of replication is called semi-synchronous because it only guarantees that at least one of the slaves has written the transaction to disk in its relay log, not that it has actually been committed to its data files. It guarantees that the data exists by some means somewhere, but not that it’s retrievable through a MySQL client.

Semi sync is available as a plugin, and if you compile from source, you’ll need to do --with-plugins=semisync…. So far, the semisync plugin can only be built as a dynamic module, so you’ll need to install it once you’ve got your instance up and running. To do this, you do as with any other plugin:

install plugin rpl_semi_sync_master soname 'semisync_master.so';
install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
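
Installing the plugins only makes them available; they still have to be switched on. A minimal sketch of what typically follows (these are the stock rpl_semi_sync_* variables in 5.5; the timeout value is just an example):

-- on the master
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000; -- ms to wait before falling back to async

-- on the slave
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD; -- restart the IO thread so the slave registers as semi-sync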
Read on →
Oct 9th, 2010

GlusterFS Init Script and Puppet

The other day I had quite the head scratcher. I was setting up a new environment for a customer which included the usual suspects in a LAMP stack spread across a few virtual machines in an ESXi cluster. As the project is quite volatile in terms of requirements, number of servers, server roles, location etc., I decided to start off using Puppet to make my life easier further down the road.

I got most of it set up, and got started on writing up the glusterfs Puppet module. Fairly straightforward: a few directories, configuration files and a mount point. Then I came to the Service declaration, and of course we want this to be running at all times, so I went on and wrote:

service { "glusterfsd":
    ensure => running,
    enable => true,
    hasrestart => true,
    hasstatus => true,
}

expecting glusterfsd to be running shortly after I purposefully stopped it. But it wasn’t. So I dove into Puppet (Yay Ruby!) and deduced that the way it determines whether something is running or not is the return code of: /sbin/service servicename status

So a quick look in the init script which ships with glusterfs-server shows that it calls the stock init function “status” on glusterfsd, which is perfectly fine, but then it doesn’t exit with the return code from this function, it simply runs out of scope and exits with the default value of 0.

So to get around this, I made a quick change to the init script and used the return code from the “status” function (/etc/rc.d/init.d/functions on RHEL5) and exited with $?, and Puppet had glusterfsd running within minutes.
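
For illustration, the relevant bit of the change looks something like this (a sketch, not the full init script; the status function comes from /etc/rc.d/init.d/functions):

#!/bin/bash
# sketch of the status handling only -- the real init script does a lot more
. /etc/rc.d/init.d/functions

RETVAL=0
case "$1" in
    status)
        status glusterfsd   # returns 0 only when the daemon is actually running
        RETVAL=$?
        ;;
esac

exit $RETVAL    # previously the script simply fell off the end and always exited 0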

I couldn’t find anything when searching for this, so I thought I’d make a note of it here.
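
As an aside, if you’d rather not patch the init script at all, Puppet also lets you hand the Service resource an explicit status command, along these lines (using pgrep as the check is just one option):

service { "glusterfsd":
    ensure     => running,
    enable     => true,
    hasrestart => true,
    status     => "pgrep glusterfsd",
}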

Aug 9th, 2010

Legitimate Emails Being Dropped by Spamassassin in RHEL5

Over the past few months, an increasing number of customers have complained that their otherwise OK spam filters have started dropping an inordinate number of legitimate emails. The first reaction is of course to increase the score required for filtering, but that just opens the door to more spam. I looked in the quarantine on one of these servers and ran a few of the legitimate ones through spamassassin in debug mode. I noticed one particular rule which was prevalent in the vast majority of the emails. Here’s an example:

...
[2162] dbg: learn: initializing learner
[2162] dbg: check: is spam? score=4.004 required=6
[2162] dbg: check: tests=FH_DATE_PAST_20XX,HTML_MESSAGE,SPF_HELO_PASS
...

4 is obviously quite a high score for an email whose only flaw is being in HTML. But FH_DATE_PAST_20XX caught my eye in all of the outputs. So to the rule files:

$ grep FH_DATE_PAST_20XX /usr/share/spamassassin/72_active.cf
##{ FH_DATE_PAST_20XX
header   FH_DATE_PAST_20XX      Date =~ /20[1-9][0-9]/ [if-unset: 2006]
describe FH_DATE_PAST_20XX      The date is grossly in the future.
##} FH_DATE_PAST_20XX

Aha. This is a problem. With 50_scores.cf containing this:

$ grep FH_DATE_PAST /usr/share/spamassassin/50_scores.cf
score FH_DATE_PAST_20XX 2.075 3.384 3.554 3.188 # n=2

it’s no wonder emails are getting dropped! I guess this is the kind of problem one can expect when running a distribution whose packages are six years old while neglecting to update the rules frequently (or at least every once in a while)!

Luckily, this rule is gone altogether from RHEL6’s version of spamassassin.
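
In the meantime, two obvious ways to deal with it on RHEL5 are to pull in current rules with sa-update, or to neutralise the rule locally (the local.cf path below is the usual Red Hat location). Something along these lines:

# refresh the rule set instead of relying on what shipped with the package
$ sa-update && service spamassassin restart

# or zero out the offending rule in /etc/mail/spamassassin/local.cf
score FH_DATE_PAST_20XX 0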

May 26th, 2010

Control-groups in Rhel6

One new feature that I’m very enthusiastic about in RHEL6 is Control Groups (cgroup for short). It allows you to create groups and allocate resources to them. You can then bunch your applications into groups to your heart’s content.

It’s relatively simple to set up, and configuration can be done in two different ways. You can use the supplied cgset command, or if you’re accustomed to doing it the usual way when dealing with kernel settings, you can simply echo values into the pseudo-files under the control group.
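
For instance, assuming a group called group1 with the memory controller mounted under /cgroup/gen (as in the demo below), the two approaches look like this:

# with libcgroup's cgset
cgset -r memory.limit_in_bytes=536870912 group1

# or simply echo the value into the pseudo-file
echo 536870912 > /cgroup/gen/group1/memory.limit_in_bytes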

Here’s a control group in action:

[root@rhel6beta cgtest]# grep $$ /cgroup/gen/group1/tasks
1138
[root@rhel6beta cgtest]# cat /cgroup/gen/group1/memory.limit_in_bytes
536870912
[root@rhel6beta cgtest]# gcc alloc.c -o alloc && ./alloc
Allocating 642355200 bytes of RAM,,,
Killed
[root@rhel6beta cgtest]# echo `echo 1024*1024*1024| bc` >
/cgroup/gen/group1/memory.limit_in_bytes
[root@rhel6beta cgtest]# ./alloc
Allocating 642355200 bytes of RAM,,,
Successfully allocated 642355200 bytes of RAM, captn' Erik...
[root@rhel6beta cgtest]#

The first line shows that the shell which launches the app is under the control of the cgroup group1, so subsequently all its child processes are subject to the same restrictions.

As you can also see, the initial memory limit in the group is 512M. Alloc is a simple C app I wrote which calloc()s 612M of RAM (for demonstrative purposes, I’ve disabled swap on the system altogether). At the first run, the kernel kills the process in the same way it would if the whole system had run out of memory. The kernel message also indicates that the control group ran out of memory, and not the system as a whole:

...
May 13 17:56:20 rhel6beta kernel: Memory cgroup out of memory: kill process
1710 (alloc) score 9861 or a child
May 13 17:56:20 rhel6beta kernel: Killed process 1710 (alloc)

Unfortunately it doesn’t indicate which cgroup the process belonged to. Maybe it should?
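
The post doesn’t include alloc.c itself, but something along these lines reproduces the behaviour (the size and messages are modelled on the output above; writing to the memory after calloc() is what forces the pages to actually be charged to the group):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SIZE 642355200UL    /* roughly 612M, as in the demo above */

int main(void)
{
    printf("Allocating %lu bytes of RAM,,,\n", SIZE);

    char *mem = calloc(1, SIZE);
    if (mem == NULL) {
        perror("calloc");
        return 1;
    }

    /* touch every byte so the pages are actually charged to the cgroup */
    memset(mem, 1, SIZE);

    printf("Successfully allocated %lu bytes of RAM, captn' Erik...\n", SIZE);
    free(mem);
    return 0;
}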

cgroups doesn’t just give you the ability to limit the amount of RAM; it has a lot of tuneables. You can even set swappiness on a per-group basis! You can limit the devices applications are allowed to access, you can freeze processes, and you can tag outgoing network packets with a class ID in case you want to do shaping or profiling on your network. Perfect if you want to prioritise SSH traffic over anything else, so you can work comfortably even when your uplink is saturated. Furthermore, you can easily get an overview of memory usage, CPU accounting etc. for applications in any given group.
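
A couple of those knobs, again assuming the group1 setup from the demo above (and that the relevant controllers are mounted for it):

# never swap pages belonging to this group, regardless of the global vm.swappiness
cgset -r memory.swappiness=0 group1

# tag outgoing packets from this group's tasks with a class ID that tc can match on
cgset -r net_cls.classid=0x100001 group1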

All this means you can clearly separate resources and, to quite a large extent, ensure that applications won’t starve the whole system, or each other, of resources. Very handy: no more waiting half an hour for the swap to fill up and the OOM killer to kick in (and often choose the wrong PID) when customers’ applications have run astray.

A much welcomed addition to RHEL!

May 13th, 2010

Boot Loader Not Installed in Rhel6 Beta

Just a heads up I thought I’d share in the hope that it’ll save someone some time: when installing RHEL6 beta under Xen, be aware that pygrub currently can’t handle /boot being on ext4 (which is the default). So in order to run RHEL6 under Xen, ensure that you modify the partition layout during the installation process.
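
Concretely, keeping /boot on ext3 is enough; in a kickstart that would look something like this (sizes are just an example):

part /boot --fstype=ext3 --size=512
part /     --fstype=ext4 --size=8192 --grow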

This turned out to be a real head scratcher for me, and initially I thought the problem was something else as Xen wasn’t being very helpful with error messages.

Hopefully there’ll be an update for this soon!

Apr 10th, 2010

Building Hiphop-php Gotcha

Tonight I’ve delved into the world of Facebook’s HipHop for PHP. Let me point out early on that I’m not doing so because I believe I will need it any time soon, but because I am convinced, without a shadow of a doubt, that I will be approached by customers who think they do, and I’d rather not have opinions on, or advise against, things I haven’t tried myself or at least have a very good understanding of.

Unfortunately I set about this task on an RHEL 5.4 box, and it hasn’t been a walk in the park. Quite a few dependencies were out of date or didn’t exist in the repositories: libicu, boost, onig, tbb etc.

CMake did a good job of telling me what was wrong though, so it wasn’t a huge deal; I just compiled the missing pieces from source and put them in $CMAKE_PREFIX_PATH. One thing CMake didn’t pick up on, however, was that the flex version shipped with current RHEL is rather outdated. Once I thought I had everything configured, I set about the compilation, and my joy was swiftly cut short by this:

[  3%] [FLEX][XHPScanner] Building scanner with flex /usr/bin/flex version
2.5.4
/usr/bin/flex: unknown flag '-'.  For usage, try /usr/bin/flex --help

Not entirely sure what it was actually doing here, I took the shortcut of replacing /usr/bin/flex with a shell script which just exited after dumping $@ into a file in /tmp/, and re-ran make. Looking in the resulting file, these are the arguments flex was given:

-C --header-file=scanner.lex.hpp
-o/home/erik/dev/hiphop-php/src/third_party/xhp/xhp/scanner.lex.cpp
/home/erik/dev/hiphop-php/src/third_party/xhp/xhp/scanner.l

To me that looks quite valid, and there’s certainly no single '-' in that command line.

Long story short, flex introduced --header-file in a relatively “recent” version (2.5.33 it seems, but I may be wrong on that one, doesn’t matter). Unlike most other programs (using getopt), it won’t tell you Invalid option ‘--header-file’. So after compiling a newer version of flex, I was sailing again.

Feb 21st, 2010

Development; Just as Important as Dual Nics

There is a popular saying which I find you can apply to most things in life: “You get what you pay for”. Sadly, this does not seem to apply to software development in any way. Those of you who know me know that I work for a reasonably sized hosting company in the upper market segment. We have thousands of servers and hundreds of customers, so after a while you get quite a decent overview of how things work and a vast arsenal of “stories from the trenches”.

So here’s a small tip: ensure that your developers know what they are doing! It will save you a lot of hassle and money in the long run.

Without having made a science out of it, I can confidently say that at the very least 95% of the downtime I see on a daily basis is due to faulty code in the applications running on our servers.

So after you’ve demanded dual power feeds to your rack, bonded NICs and a gazillion physical paths to your dual controller SAN, it would make sense to apply the same attitude towards your developers. After all, they are carbon based humans and are far more likely to break than your silicon NIC. Now unfortunately it is not as simple as “if I pay someone a lot of money and let them do their thing, I will get good solid code out of it”, so a great deal of due diligence is required in this part of your environment as well. I have seen more plain stupid things coming from 50k pa. people than I care to mention, and I have seen plain brilliant things coming out of college kids’ basements.

This is important not only from an availability point of view, it’s also about running cost. The amount of hardware in our data centers which is completely redundant, and would easily be made obsolete with a bit of code and database tweaking is frightening. So you think you may have cut a great deal when someone said they could build your e-commerce system in 3 months for 10k less than other people have quoted you. But in actual fact, all you have done is got someone to effectively re-brand a bloated, way too generic, stock framework/product which the developer has very little insight into and control over. Yes, it works if you “click here, there and then that button”, the right thing does appear on the screen. But only after executing hundreds of SQL queries, looking for your session in three different places, done four HTTP redirects, read five config files and included 45 other source files. Needless to say, those one-off 10k you think you have saved, will be swallowed in recurring hardware cost in no time. You have probably also severely limited your ability to scale things up in the future.

So in summary, don’t cheap out on your development but at the same time don’t think that throwing money at people will make them write good code. Ask someone else to look things over every now and then, even if it will cost you a little bit. Use the budget you were planning on spending on the SEO consultant. Let it take time.

Feb 13th, 2010