Tuning LAMP Server Performance

Having trouble with a crashing web server? Is MySQL or Apache eating up your RAM and thrashing your disk with swapping? Don’t worry, there are good tools to help you get started tuning your LAMP server to avoid crashes.

There are two scripts I find invaluable for getting a first, fast opinion on the current status of a server. The trick is often to get the settings right so that they do not eat your RAM for breakfast. Here are two scripts to help you get your MySQL and Apache settings right.

MySQLTuner.pl will test your MySQL settings and suggest performance improvements. Using statistics from MySQL about the current state of performance, it suggests modifications to your settings.

Inspired by MySQLTuner.pl, Apachebuddy.pl does the same thing for Apache server settings. It checks your current settings, calculates average as well as maximum RAM usage, and suggests improvements.

These scripts do not, of course, replace knowledge. Use them as a first opinion, but then educate yourself about the settings before changing anything.

Both scripts have nifty URLs to download from:

Creating a MySQL Master – Slave connection

I’ve set up several MySQL Master-Slave connections and would like to share my procedure. Along the way I’ve learned how to handle a number of details and developed my own set of “best practices”.

The MySQL Master-Slave connection works under the premise that “a statement executed on the master should produce the exact same result when executed on the slave, given that their databases are identical”. For this to work we need to start with two identical servers and then make one follow the other.

We used MySQL Server 5.5.11 when creating the master slave connection in the guide below. Please consult the MySQL Documentation if you are using a different version.

Step 1: Setup servers
First of all you will need two MySQL servers. The standard community edition works fine. They should be of the exact same version, to avoid any problems that bugs in one or the other might introduce. If you introduce a slave into an existing MySQL server, you will need to plan for downtime for the duration of the “mysqldump” command.

TIP: Save the MySQL installation file if you want to add more servers later since you will need the exact same version.

Step 2: Configuration
Edit the my.ini file of the future master and add the following settings:

# Unique Server ID
server-id=1
# Name of binary log
log-bin=mysql-bin

The Server ID can be any number, as long as no two servers in the replication chain have the same number; in our case the slave must have a different number than the master.

The log-bin setting tells the server to make a binary log of every statement executed on the server.

Edit the my.ini file of the future slave and add the following setting:

# Unique Server ID
server-id=2

TIP: Add the setting relay-log=relay-bin to name the relay log. Otherwise MySQL by default uses [hostname]-relay-bin. The problem with the default is that if the host ever changes hostname, the replication will break. It also breaks if you want to make a copy of the slave to a second slave (unless you add the setting to the new slave as well).

As mentioned before, the Server ID of the slave needs to be different from the Server ID of the master. When these changes are done, restart the service on both MySQL machines to let the changes take effect. Use the following commands to restart the service:

Linux (requires super user access):

user@host:~$ service mysql restart

Windows (requires administrator privileges):

C:\> net stop mysql
C:\> net start mysql

After the changes you should see a binary log starting to grow in the data directory of your future master.

TIP: If you have made other modifications to the my.ini file, these need to be copied to the slave as well; otherwise the slave risks behaving differently from the master.

Step 3: Create a user
Replication uses a normal user with the replication privilege. I opted to create a new user for this, using the following commands:

mysql> CREATE USER 'slave'@'%' IDENTIFIED BY 'mytrickypassword';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%';

The user will be created on the master, but if you replicate all databases (as this guide does), the user will also be replicated to the slave.

TIP: You can use any password you like, BUT the password will be visible in plain text on the slave server! The file master.info, which will be created later in this tutorial, stores all the master information, including username and password.

TIP: Limit the slave user to a certain domain or IP so that security risks are minimized. In the above example the user slave can log in from any host.

Step 4: Copy database
Now the time-critical portion of this tutorial begins: from here until the data dump is complete, the master database will be unavailable for writing.

Execute the following command on the MySQL Master:

mysql> FLUSH TABLES WITH READ LOCK;
Now all tables are locked so that no transactions can occur. This is required since we need to make a full database dump of the current state of the MySQL Master. Next, execute the following command:

mysql> SHOW MASTER STATUS;
Write down the values of File and Position in the reply. An example would be:

File: mysql-bin.00001
Position: 1337

From the command line on the MySQL master issue the following command (change password etc as needed):

C:\> mysqldump --user=root --password=rootpassword --all-databases --master-data --result-file=mydump.sql

TIP: Are you using a non-UTF-8 encoding? Add --default-character-set=latin1 to the command line, where latin1 is the encoding you are using. If you do not supply an encoding, MySQL will assume UTF-8.

When the dump is complete and you have a file called mydump.sql you can unlock the tables. Issue the unlock command on the master:

mysql> UNLOCK TABLES;
The master server will now be on-line and working again.

Step 5: Create the slave
Copy the file mydump.sql to the slave server. When that is done, execute the following command from the mysql command line (you might have to specify the exact location of the mydump.sql file):

mysql> source mydump.sql

TIP: Do NOT use “mysql -u root -p < mydump.sql” from the normal command line, since that can corrupt the encoding (again, if you use a non-standard encoding).

The database on the slave is now identical to what the master was at a specific point in time. Now configure the slave to connect to the master and follow it from that point:

mysql> CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='slave', MASTER_PASSWORD='mytrickypassword', MASTER_LOG_FILE='mysql-bin.00001', MASTER_LOG_POS=1337;
Make sure that MASTER_HOST is the name or IP of the MySQL Master. MASTER_USER and MASTER_PASSWORD are the same as created in step 3 above. MASTER_LOG_FILE and MASTER_LOG_POS are the values you read in step 4 above.

TIP: Since we used the flag --master-data when creating mydump.sql, MASTER_LOG_FILE and MASTER_LOG_POS should already be set. The remaining settings are, however, still needed.

TIP: Unless you specifically need it, I recommend avoiding binary logging on the slave while it tries to “catch up” with the master. Also, the log-bin setting only triggers logging of commands executed directly on the server, not those arriving via replication. To make the slave write replicated statements to its own binary log, the setting log-slave-updates=1 must be added.

Start the slave with the following command from MySQL command line:

mysql> START SLAVE;
Step 6: DONE
Congratulations, your slave server is now replicating everything on the master server. Depending on how long it took between step 4 and step 5, the slave has most likely already caught up with the master. To check the status, run the following command on the slave server:

mysql> SHOW SLAVE STATUS\G
Especially noteworthy fields in this reply are “Slave_IO_State”, which tells us what the slave is up to (the most common value here is “Waiting for master to send event”), and “Seconds_Behind_Master”, which tells us how many seconds behind the slave currently is. If the slave server has been down, or restored from an old backup, this value can be very high. Normally it is zero, indicating that the slave is up to speed.

TIP: Did you know you can “daisy chain” MySQL servers? Just set up the slave as master to a new slave! There are, however, some further considerations for doing that; maybe a future blog post!

TIP: The slave server is perfect to use as a “live backup” in case the master should fail. You can also temporarily stop/lock the slave to make a complete database backup without having to worry about service downtime. The slave will catch up with the master again once restarted.

TIP: As with every safety measure in information technology, try this out before trusting it! I give NO GUARANTEE OF ANYTHING WRITTEN IN THIS GUIDE; you have to try and verify it yourself. This works for me, it doesn’t necessarily work for you.

More tips, comments or questions? Please feel free to comment below!


Today I had some very big and strange problems with Word. Seemingly at random it gave me an error message saying it was “out of memory” and that I had to save my work immediately so it would not be lost. With roughly 3 GB of RAM and 280 GB of free hard drive space, that was a rather amusing error message.

After much back and forth, I have arrived at the following procedure for troubleshooting oddities like this in Word:
Step 1: Delete normal.dot and start without any settings.
Step 2: Create a new user profile on the computer and start Word in this new “clean” profile.

Step 3: Burn the computer on a pyre during a ritual to drive the evil spirits out of it

Luckily, the problem was solved at step number 2.

Spring Cleaning on Facebook

I felt I needed to review how much Big Brother Facebook knows about me and my life. When I went into the privacy settings and edited the information about applications and websites, I was almost taken aback. There was a long, long list of applications that apparently had the right to read whatever they wanted about me. Many of them I remember saying yes to in a weak moment (who doesn’t want to be “Friends Forever”?), but almost as many I did not recognize at all.

In the absence of spring in the real world, I decided to declare a spring cleaning on the Internet instead. No sooner said than done: I have now removed virtually every application that had access to my information on Facebook. Most of them were inactive, but it still feels good to have them gone.

How many applications do you have yourselves? Check here:


Base64 Encoding for E-mail

Sometimes solutions are easier than you think. I had an image that needed to be encoded to Base64 to be part of an e-mail campaign. At first I considered writing a simple little program to convert images to Base64 and to create the header information that has to accompany the image in the mailing.

The easy solution was, instead of writing a program, to send the image to myself by e-mail. Then all I had to do was “view source” on the e-mail message and cut out the ready-encoded image, headers and all! Perfect and simple. Sometimes you can make life easy for yourself.

Microsoft Security Essentials

For everyone who missed it, I would like to point out that Microsoft Security Essentials has for some time now been available for download for all Windows owners here in Sweden as well. Previously availability was limited to certain parts of the world, but now it is at least available here.

I have always been against antivirus software and tried to avoid it as best I can. Today, though, you cannot go without protection, given all the threats coming from the Internet. For once I actually think this is a program that should be part of the operating system (unlike browsers and other frills that Windows often focuses on). Who better than Microsoft themselves to know where and how the system should be protected? I just hope they invest properly in this product so that it becomes at least as secure as the competitors, who have several years’ head start in antivirus handling.

For a couple of months I have been running Microsoft Security Essentials (MSE) on my home PC and it has worked great. All my previous antivirus programs have always made a lot of noise about updates or gotten in the way when certain programs were about to run. MSE is much quieter, and so far it seems to work well. It remains to be seen at the next virus outbreak whether it handles that well too!

File Management with Dropbox

Dropbox really is a delightful cloud service, thanks to the different interfaces you can use it through: for example as an ordinary folder on your computer (whatever operating system you run) or through the excellent web interface.

The best thing about Dropbox is that you can reach your files from behind almost any firewall or locked-down server machine. Corporate networks are often blocked for “unusual” traffic, and then it can be hard to upload or download the files you need for work.

Get your own Dropbox account and receive 2 GB of free space!

Make Use of Your Website Statistics

Don’t forget to follow the statistics from your website. What are your visitors looking for when they find your site? Using a good statistics tool such as Piwik or Google Analytics can open up a whole new way for you to see your own website.

Follow trends such as where your visitors come from. How many visitors do you get from that blog or forum that links to you? This can be good to know, since perhaps you should spend more time answering customers’ questions on the Internet, thereby leaving a trail for the next customer to follow.

A very interesting way to use statistics is, above all, to find things your visitors are looking for but that are not on your website. Say your company has put up a “quote” page. Seeing which searches bring visitors to this page can reveal what your prospective customers are after. If you see lots of searches for “fixed price quote”, maybe that should be highlighted on the website, if you offer that service.

Adjusting the page to the search terms it matches is a bit like adapting the goods in a shop to whoever comes in to buy. The customer is always right, even on the Internet!

How to avoid a page being cached

All web programmers have probably had trouble with browsers caching pages they ought not to. So what can we do about it? Well, in good old HTTP 1.0 we had a nice header that simply said:

Pragma: no-cache

Easy, huh? Yes. Probably too easy. If not browsers, then surely some proxy server will disobey that simple command and require that we explain it to them more thoroughly. This brings on the next HTTP header:

Expires: -1

Actually any invalid date format will do; the meaning should be interpreted as “this page has ceased to be” [mental image of John Cleese banging a parrot on the desk]. The only problem is that some misbehaving browsers and proxies still interpret this as “well, you might have written an erroneous date, so we will play nice and cache the page for you anyway”. Cue HTTP 1.1 and we have another header:

Cache-control: no-cache

Oh, remember this directive? Easy, huh? Heard it before. Yes, it’s too easy to be true as well. The problem with this one is that some misbehaving reverse proxies apparently fail to deliver these pages through the proxy, in what seems to be an inability to forward what they are not allowed to save. At least in my case it was a reverse proxy that seemed to think very little of pages it wasn’t allowed to keep. We had to give it “Cache-control: private” in order for it to actually pass the page on. The obvious problem with this is that it no longer prohibits the end user agent (as opposed to an in-the-middle proxy) from caching the page.

Now all available headers have failed in some way. Add to this that someone using HTTP 1.0 might try to send a Cache-Control header, which will fail since it is not part of 1.0, or conversely someone using 1.1 might send the Pragma header, which might be ignored since it was replaced by Cache-Control in 1.1.
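The pragmatic server-side move, then, is to send all three headers at once and let each client honour whichever it understands. A framework-agnostic Python sketch (the function name with_no_cache and the header-list representation are illustrative; attach the tuples to the response however your framework does it):

```python
# Belt and braces: HTTP 1.0 clients honour Pragma and Expires,
# HTTP 1.1 clients honour Cache-Control, so we send the whole trio.
NO_CACHE_HEADERS = [
    ("Cache-Control", "no-cache"),
    ("Pragma", "no-cache"),
    ("Expires", "-1"),  # any invalid date reads as "already expired"
]

def with_no_cache(headers):
    """Return a header list with the anti-cache trio appended."""
    return list(headers) + NO_CACHE_HEADERS
```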

What is a programmer to do? Well, since proxies have taught me not to rely on normal HTTP headers, the next step is into HTML and the http-equiv META tags. Let’s blast the browser with everything we have:

<meta http-equiv="Expires" content="-1">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Cache-Control" content="no-cache">

Now no proxy should ever interfere with our headers. The problem with Cache-Control and Pragma remains, so in HTTP 1.0 the former is ignored and in 1.1 the latter. If we include both we are safe, at least until they decide to change the whole thing in a future 1.2 version. We also send the Expires tag, which should make its way all the way to the browser without being cached. Hopefully at least one of these will be treated with respect by the browser; this is even partly recommended in an old KB article from Microsoft. Still, http-equiv is not as safe as real HTTP headers: it requires the browsers to support them, and some support them better than others (the article is old but still sends my head spinning in disbelief).

Disillusioned by the current state of cache control (not the header, the subject), I ended up doing what probably most people are doing already: appending a random 10-character string to every call I ever make, effectively fooling the browser into thinking this information might be important and making it update the page properly. Just append it to the back of every GET and include a random field in every POST.

http://www.example.com/page.php?nocache=f8k3a9d21x
http://www.example.com/page.php?nocache=q7b2c91e0z

Not the same page. Obviously. Please don’t tell any browser developers this or they might include a “random cache of everything in the known universe” feature in their next build.
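For the record, the cache-buster itself is only a few lines. A Python sketch (the parameter name nocache and the helper name cache_bust are arbitrary choices of mine):

```python
import random
import string

def cache_bust(url: str, length: int = 10) -> str:
    """Append a random token to a URL so caches treat each request as new."""
    token = "".join(random.choices(string.ascii_lowercase + string.digits, k=length))
    # Use "&" if the URL already carries a query string, "?" otherwise.
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}nocache={token}"

# Two calls to the same page yield two different URLs,
# e.g. /page.php?nocache=k3f8a2j9x1 and /page.php?nocache=z0q7b2c91e
```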