Make use of your website's statistics

Don't forget to follow the statistics from your website. What are your visitors looking for when they find your site? Using a good statistics tool such as Piwik or Google Analytics can open up a whole new way of looking at your own website.

Follow trends such as where your visitors come from. How many visitors do you get from that blog or forum that links to you? This is useful to know, since it may be worth spending more time answering customers' questions on the Internet, thereby leaving a trail for the next customer to follow.

A particularly interesting way to use statistics is to find things your visitors are looking for that are not on your website. Say your company has put up a "request a quote" page. Seeing which visitors reach this page can reveal what your prospective customers are searching for. If you see lots of searches for "fixed price quote", then perhaps this should be highlighted on the website, assuming you offer that service.

Adjusting a page to the search terms it matches is a bit like adapting the goods in a shop to whoever comes in to buy. The customer is always right, even on the Internet!

What does J mean?

Has anyone else been confused by random Js in their e-mails? I have received many e-mails that end with a J at the end of a line, often right where you would expect a smiley. At first I suspected it was one particular company's in-house way of typing jokes, since I couldn't find anything about J as Internet slang. And since I never asked at first, it felt silly to ask later, after so many Js had passed. One day, however, I started receiving Js from other companies as well! Now I was even more confused. How could these companies, with employees who were not particularly Internet savvy, share a style of Internet slang unknown to me? The clue lies in the character code for J: 0x4A.

[Image: character map showing that J and the Wingdings smiley share character code 0x4A]

It turns out that J and a Wingdings smiley share the same character code, 0x4A. What was happening was that people were most likely typing a smiley, which the e-mail system, Microsoft Exchange, automatically converted to a Wingdings smiley. I also examined the source code of the e-mail and found this to be true:

<span style='font-size:11.0pt;font-family:Wingdings;color:#1F497D'>J</span>

Unfortunately, neither my e-mail client nor Gmail rendered the J in Wingdings; I just saw a plain old J. So next time you see a random J, don't be confused: it's most likely a smiley!
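
Out of curiosity you can verify this yourself. Here is a small Python sketch of my own (the HTML fragment is made up, modeled on the e-mail source above) that confirms the character code and spots the tell-tale Wingdings span:

import re

# "J" really is character 0x4A
assert ord("J") == 0x4A

# A made-up fragment in the style of the e-mail source above
html = "<span style='font-family:Wingdings;color:#1F497D'>J</span>"

# A lone J wrapped in a Wingdings span was meant as a smiley
if re.search(r"font-family:\s*Wingdings[^>]*>J</span>", html):
    print("That J is really a smiley!")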

iPhone specific pages

I recently learned some tips and tricks for developing web pages for the iPhone. There is really only one major difference that must be in place for a page to work properly: the viewport meta tag.

<meta name="viewport" content="width=320; user-scalable=true" />

This meta tag tells the iPhone what dimensions the page is intended to be viewed at. If the tag is not supplied, the iPhone assumes a width of 980 pixels, so even a page with "small" content will be scaled down unless you also supply the viewport directive.
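
For reference, a minimal (made-up) page skeleton shows where the tag goes:

<!DOCTYPE html>
<html>
<head>
  <!-- Render at 320 pixels instead of the assumed 980 -->
  <meta name="viewport" content="width=320; user-scalable=true" />
  <title>iPhone-friendly page</title>
</head>
<body>
  <p>Content sized for a 320 pixel wide viewport.</p>
</body>
</html>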

If on iPhone, remember viewport!

How to avoid a page being cached

All web programmers have probably had trouble with browsers caching pages they ought not to. So what can we do about it? Well, in good old HTTP 1.0 we had a nice header that simply said:

Pragma: no-cache

Easy, huh? Yes. Probably too easy. If not the browsers, then surely some proxy server will disobey that simple command and require that we explain it more thoroughly. This brings us to the next HTTP header:

Expires: -1

Actually, any invalid date format will do; the meaning should be interpreted as "this page has ceased to be" [mental image of John Cleese banging a parrot on the desk]. The only problem is that some misbehaving browsers and proxies interpret this as "well, you might have written an erroneous date, so we'll play nice and cache the page for you anyway". Cue HTTP 1.1, and we have another header:

Cache-Control: no-cache

Oh, remember this directive? Easy, huh? Heard that before. Yes, it's too easy to be true as well. The problem with this one is that some misbehaving reverse proxies apparently fail to deliver these pages, in what seems to be an inability to forward content they are not allowed to save. At least in my case it was a reverse proxy that seemed to think very little of pages it wasn't allowed to keep. We had to send it "Cache-Control: private" for it to actually pass the page on. The obvious problem is that this no longer prohibits the end user agent (as opposed to a proxy in the middle) from caching the page.

Now all the available headers have failed in some way. Add to this that someone using HTTP 1.0 might send Cache-Control, which will fail because it is not part of 1.0, or conversely someone using 1.1 might send the Pragma header, which may be ignored because Cache-Control replaced it in 1.1.

What is a programmer to do? Well, since proxies have taught me not to rely on plain HTTP headers, the next step is into HTML and the http-equiv META tags. Let's blast the browser with everything we have:

<meta http-equiv="Expires" content="-1">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Cache-Control" content="no-cache">

Now no proxy should ever interfere with our headers. The problem with Cache-Control and Pragma remains, so under HTTP 1.0 the former is ignored and under 1.1 the latter. If we include both we are safe, at least until they decide to change the whole thing in some future 1.2 version. We also send the Expires tag, which should make its way all the way to the browser without being cached. Hopefully at least one of these will be treated with respect by the browser; this approach is even partly recommended in an old KB article from Microsoft. Still, http-equiv is not as safe as real HTTP headers, since it requires the browsers to support them, and some support them better than others (the article is old but still sends my head spinning in disbelief).
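
For completeness, here is a minimal sketch (the handler and port are my own, not from the original setup) of sending all three as real HTTP headers using Python's built-in web server:

from http.server import BaseHTTPRequestHandler, HTTPServer

class NoCacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # HTTP 1.1 directive; "private" may be needed for unruly reverse proxies
        self.send_header("Cache-Control", "no-cache")
        # HTTP 1.0 fallback for old clients and proxies
        self.send_header("Pragma", "no-cache")
        # An already-expired date; -1 is technically invalid but widely honoured
        self.send_header("Expires", "-1")
        self.end_headers()
        self.wfile.write(b"<html><body>Fresh every time</body></html>")

HTTPServer(("", 8000), NoCacheHandler).serve_forever()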

Disillusioned by the current state of cache control (not the header, the subject), I ended up doing what most people are probably doing already: appending a random ten-character string to every call I ever make, effectively fooling the browser into thinking the information might be important and making it update the page properly. Just append it to the end of every GET and include a random field in every POST.

Fireflake

Fireflake

Not the same page. Obviously. Please don't tell any browser developer this, or they might include a "random cache of everything in the known universe" feature in their next build.
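
The trick itself is tiny. Here is a sketch in Python (the URL and helper name are made up for illustration):

import random
import string

def cache_bust(url):
    """Append a random 10-character token so caches treat each call as unique."""
    token = "".join(random.choices(string.ascii_lowercase + string.digits, k=10))
    separator = "&" if "?" in url else "?"
    return url + separator + token

print(cache_bust("http://fireflake.com/page"))  # e.g. http://fireflake.com/page?x7k2m9qw4a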

Canonical links, SEO news

Google, Yahoo and Microsoft have together announced support for a new web development tag that lets you specify your canonical links. The point is to enable webmasters themselves to point out which page holds the original copy of a piece of information when multiple copies appear on the same site. In essence, if several links into the website can display the same content, you can now point the search engine to the page you would rather have indexed.

The code is quite simple: on each page where the information can be found, simply add the following tag:

<link rel="canonical" href="http://www.example.com/destination.php" />

This informs the search engine which of the pages is the true origin of the information and which are only redundant copies. For a more detailed explanation of the new tag, visit the official Google Webmaster Central.
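
For example (the URLs are hypothetical), all three of these addresses might serve the same content:

http://www.example.com/destination.php
http://www.example.com/destination.php?sessionid=1234
http://www.example.com/index.php?page=destination

By putting the same canonical tag on all three pages, you tell the search engine to index only the first one.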

I bet many CMS authors are digging into their code right now to add support for this new convention.

2009 – the year of the browsers

In 1989 we had zero web browsers as we know them today, although they were just about to be invented. In 1999 we had two web browsers fighting a death match, Internet Explorer and Netscape Navigator – a fight Netscape cleverly lost by dying and coming back as several open source reincarnations, of which Firefox is of course the best known today. 2009 is turning out to be yet another battle year for browsers, this time with many more of them! We have (in no special order) the newcomer Google Chrome fighting Firefox and Internet Explorer (mainly on the PC side). We have Opera, which has cut out a piece of the action on several systems but shines mostly on portable devices. Safari rules the Macintosh but is starting to get some interference from Firefox.

Well, that is now; what is next? I read a post about the current state of browser development, and many of the major browsers have a beta out that may go live sometime during the next year. While this might be very good news for home users, I am sure it will mean a lot of work for someone like myself who creates on-line applications. There used to be a lot of tuning to make web pages and applications look and work the same on the old "two major browsers"; now we have at least five! Unless the browser developers make a great effort to follow the standards, each web page has to compensate for how a particular browser parses the data.

In the past, Internet Explorer has seemingly ignored several standards intentionally, forcing programmers like myself to make pages look good in their browser. Internet Explorer is, after all, the dominating browser, and it has to work. The question is whether this strategy will be allowed to continue. For the sake of us programmers, I really hope that with five new browser versions about to be released, several of them will render basic pages using the same ruleset.

Zombie-safe homepage

It's been a while since I had time to post some useful information here. Meanwhile, so you don't turn into zombies, or even worse get attacked by them, here is some helpful code from Google themselves:

http://www.google.com/robots.txt

And, just in case they change the file, here are the top lines from their robots.txt:

User-agent: zombies
Disallow: /brains

User-agent: *
Allow: /searchhistory/
Disallow: /news?output=xhtml&
Allow: /news?output=xhtml
Disallow: /search
...

CSS layout made easy

While it hurts my nitpicking hand-coding image to admit it, some frameworks are too good to ignore. One that I found recently is BlueprintCSS. It's a very flexible framework with a license "you cannot refuse".
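
As a taste of what it offers, here is my own minimal sketch of Blueprint's 24-column grid classes (not from the original post; check the current documentation for exact class names):

<link rel="stylesheet" href="blueprint/screen.css" type="text/css" media="screen, projection">

<div class="container">
  <!-- 16 of 24 columns for the main content -->
  <div class="span-16">Main content</div>
  <!-- the remaining 8 columns; "last" removes the trailing gutter -->
  <div class="span-8 last">Sidebar</div>
</div>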

I've coded all CSS and HTML by hand (and still do!), and it is getting pretty tiresome to stumble over the same defects in every design I make. Too pressed for time, I've never developed a framework of my own; instead I have just copied and pasted bits and pieces from old code that I knew was working.

There are many other CSS frameworks out there, but somehow I fell for Blueprint, though I am far from done testing them extensively. If anyone has found another framework that is very good and flexible, please post a comment; I'd love to hear about it.

Google opens Knols – wiki on ads?

Google has opened the doors to the public beta of Knol. While Knols are very similar to wikis, there are two main design differences between the two. First, while wikis are "community authored", with a group of people behind most articles, Knols are author dependent: whoever posts a Knol first becomes the author and moderator of that article. Secondly, Knols have a direct interface to Google AdSense, making it easy for the author to include ads in the articles, something few community-based wikis allow (at least not to the benefit of the authors).

When writing Knols you have several options for each Knol you write (by the way, Google defines "knol" as a unit of knowledge).

As an author you can choose to include other authors, or have an open (or moderated) authoring system much like the wikis.

The information published in a Knol can be released under one of three licenses chosen by the author: "CC Attribution 3.0", "CC Attribution-Noncommercial 3.0" and "All rights reserved".

While I do not believe the wikis are in danger (and specifically not Wikipedia), Knol does open up much more towards companies that may until now have found it pointless to give their information away for free. Being able to publish copyrighted material without losing control over it, while at the same time earning from AdSense on those pages, might interest some.

Small update on Google Analytics

I've used the code from my previous post on almost all my sites for a couple of days now; all the statistics are still working, and I no longer experience slow loading times using Firefox with NoScript. Until I see either a change in Google's code (so that they use a single domain for their JavaScripts) or a new version of NoScript (that makes an exception for Google-related domains if you allow google-analytics.com), I will keep this code, as it greatly improves the performance of my website.