Blog Archives

Installing Hugs98 (Haskell) on Mac 10.6 Snow Leopard

I have to use a program called Hugs as part of one of my university modules. Getting it installed on my Mac proved to be a challenge, but I got it working. Here is a brief guide on getting it working on 10.6. NOTE: if you are a 10.5 user, the default MacPorts install should be fine for you.

Let's Begin

hugs98 is the MacPorts package that installs the Hugs program, so to get it you first need to install MacPorts. I won't go into the detail of that, but will point you to the instructions available here.

Installing hugs98 should then be as simple as running this command in the terminal:

[bash]sudo port install hugs98[/bash]

but on 10.6 it errors out pretty quickly with a build failure similar to:

[bash]---> Building hugs98
Error: Target returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_lang_hugs98/work/hugs98-plus-Sep2006" && /usr/bin/make -j2 all " returned error 2
Command output: cd src; /usr/bin/make all
make[1]: Nothing to be done for `all'.
cd libraries; /usr/bin/make all
cd ../cpphs; HUGSFLAGS=-P../libraries/bootlib HUGSDIR=../hugsdir ../src/runhugs -98 ../packages/Cabal/examples/hapax.hs configure --verbose --hugs --prefix='/opt/local' --scratchdir='../hugsdir/packages/cpphs' --with-compiler=../src/ffihugs
runhugs: Error occurred
ERROR "../libraries/bootlib/Foreign/Ptr.hs" - Error while importing DLL "../libraries/bootlib/Foreign/":
dlopen(../libraries/bootlib/Foreign/, 9): image not found

make[1]: *** [../hugsdir/programs/cpphs/Main.hs] Error 1
make: *** [all] Error 2

Error: Status 1 encountered during processing.[/bash]

All is not lost, however: the MacPorts bug tracker has some patches that fix the issue. Below are the instructions and files (so you don't have to browse to the issue page and fetch the files manually) to get hugs98 working on your 10.6 machine:

Download the 2 patch files from the issue tracker or here: patch-hugs98-Portfile, patch-libraries-tools-make-bootlib

Make sure to download them to your ~/Downloads folder (or amend the paths in the following instructions), then run the following commands in the terminal:

[bash]cd $(port dir hugs98)
sudo cp ~/Downloads/patch-libraries-tools-make-bootlib.diff files
sudo patch < ~/Downloads/patch-hugs98-Portfile.diff
sudo port clean hugs98
sudo port install hugs98[/bash]

This should take a while, and your terminal window should look like this:
[bash]anthonysomerset@Anthony-Somersets-MacBook:~$ cd $(port dir hugs98)
ant@ASMB:/opt/local/var/macports/sources/$ sudo cp ~/Downloads/patch-libraries-tools-make-bootlib.diff files
ant@ASMB:/opt/local/var/macports/sources/$ sudo patch < ~/Downloads/patch-hugs98-Portfile.diff
patching file Portfile
ant@ASMB:/opt/local/var/macports/sources/$ sudo port clean hugs98
---> Cleaning hugs98
ant@ASMB:/opt/local/var/macports/sources/$ sudo port install hugs98
---> Computing dependencies for hugs98
---> Fetching hugs98
---> Verifying checksum(s) for hugs98
---> Extracting hugs98
---> Applying patches to hugs98
---> Configuring hugs98
---> Building hugs98
---> Staging hugs98 into destroot
---> Installing hugs98 @plus-Sep2006_0
---> Activating hugs98 @plus-Sep2006_0
---> Cleaning hugs98
ant@ASMB:/opt/local/var/macports/sources/$ [/bash]

To launch Hugs, just type the following in the terminal 🙂

[bash]hugs[/bash]


Now go and enjoy your hugs98 geekery 🙂

SAN 2.0 Early Thoughts

Thought I would put down some of my early thoughts on the new SAN infrastructure, as I have now been using it for a couple of weeks in production.

When I first started using the new London clouds (London Zone E; Zone F opening soon), there was a very limited range of templates available.

One good bonus of the new SAN infrastructure is that Windows is now available. I have yet to get the chance to test it first hand, and for 99% of anything I do I am unlikely to need Windows anyway, but it's obviously a good move, as it will attract another stream of users (namely the Windows junkies).

My first step into SAN 2.0 territory was pretty smooth. I didn't have a cPanel template to choose from, but I normally install it on a scratch CentOS install anyway, so no problem for me. I didn't do any detailed timings, but it certainly felt like the install went much faster (most likely due to the faster and probably more consistent SAN performance), which, as anyone who knows how long a cPanel install from scratch takes will appreciate, is a good thing!

I have only been running the server a couple of weeks so far, but things seem to run much more stably: I have not had any of the strange temporary performance issues that I used to have on the old SANs. In fact, performance has been so stable that I have been running my server without LiteSpeed on 3 nodes (LiteSpeed causes issues with a reverse proxy domain I have set up, in that it just doesn't work with DNS-based reverse proxying, only IP), whereas before I "had" to run LiteSpeed to keep performance consistent without adding extra nodes that generally weren't needed.

The new tweaks to the control panel look really, really good. The new console is actually usable now, and I could quite happily use it for client servers over the trusty terminal. Sure, it won't ever totally replace a terminal or SSH client, but it's a huge step forward over the old version. The new bandwidth and CPU usage graphs are really nice looking (although I didn't mind the old graphs in any way, I was just annoyed that they used to break all the time).

The only negative of the new graphs is that you can only see the last 14 hours of data; there is as yet no way of changing the timescale or seeing further back over a longer period. The other thing about the new clouds is that the monthly bandwidth figure is, to my knowledge, no longer accurate: since I started my server 2 weeks ago, its total bandwidth supposedly comes to 6.4MB, and that includes the cPanel install, software updates and the constant syncing to S3 for this site's CDN. I have yet to raise a support ticket for this, as it's not critical for me; I don't get anywhere near the transfer limits.

All in all, it's been a long-awaited upgrade, and it's certainly worth dipping in to test. If you are a potential new customer, use the new SANs; don't even bother with the old cloud zones.

Now hopefully I should receive one of the stress toy robots soon 🙂

Spreading the Load

A nice geeky post again 🙂

I have been looking into different hosting alternatives this last week or two. I started looking after the major outage at the UK DC took all the UK zones/clouds out of action for anything from 2 hours to, in the worst case for me, 14 hours. Although I have not really wanted to, it would have been just unprofessional of me not to be aware of the current alternatives, in case the same thing happens again. I need servers in the UK and I need them to be reliable. I thought that putting servers in different UK zones would be OK (they didn't need load balancing or anything fancy); my theory was that if a zone went down I would only have to deal with 1, perhaps 2, servers or clients, and that's much more manageable. I wasn't planning for all 3 zones to go out at once, and a total of 7 servers that I manage for clients (including my own) went down at the same time. You can guess my phone got busy quickly! It really hurt that I was at a conference, suffering from the world's worst WiFi and mobile coverage in the UK (Brighton), right in the middle of it.

So I started looking and only came up with a few possibles. My requirements were simple: UK based, reliable, and competitive on cost, or at least on spec. The main few I found are covered below.

For my first requirement, UK servers, I had to eliminate Slicehost and Rackspace. I remember speaking to Rackspace live chat and asking whether they had a UK VPS service; while the response wasn't these words exactly, I got the impression the agent was thinking, "why would you want to do that?", and it looks like Rackspace don't have any imminent plans to bring one here.

Linode, next on my list, set up a London-based zone within the last 12 months or so, and they have pretty good pricing, but reasonably tight bandwidth limits that could mean paying more to make sure we don't have overage issues. They don't offer cPanel licences (or, for transparency, even though I hate it, Plesk), meaning you have to get them yourself and often pay 3-4 times as much as going direct through a provider. Going through their UI, I just didn't feel comfortable; sure, I could use it, it just felt overly complex to get things set up quickly and easily. That said, I do like that you specify how big disk images are (so you can have multiple images attached to a VPS, and they are movable), and they have some nifty features such as LiSH (Linode Shell), which is basically SSH access for when you mess up network or firewall settings; it's like having a screen, keyboard and mouse plugged directly into your VPS.

Next up is the host I got recommended a few months ago during another major issue (which, luckily, I was largely unaffected by). They are a more traditional VPS host, in that you generally get a fixed instance size, but you get good resources for your price (I am getting double the RAM and CPU of equivalently priced plans elsewhere). This comes at the price of not having hypervisor redundancy, but you are on top-of-the-range hardware and drives, and they are a company that has been around a lot longer and just seems to get on with the job and do it well. The thing is, it's not for normal people (read: people that don't really know what they are doing, who seem to be much more prevalent at "big" companies like Linode), which means support resources (FAQs, forums, etc.) are somewhat lacking. Their support is excellent, though, and although they don't have a big fancy control panel (it just looks like rebranded WHMCS for the billing and support side only), they importantly have an SSH-based emergency console and full slave DNS, which is a doddle to slave off your servers and white-label; I am actually researching the best way to code up a cPanel module to automate this too. Setup is not instant with them, but if you sign up within British office hours you generally get set up in under an hour (mine was 20 mins).
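For the curious, slaving DNS off your own server really is just a short zone entry on the secondary. A minimal hypothetical named.conf fragment for BIND, where example.com and 203.0.113.10 are placeholders for your domain and your master (i.e. your cPanel box), might look like:

```
// hypothetical named.conf fragment on the secondary nameserver;
// example.com and 203.0.113.10 are placeholders
zone "example.com" {
    type slave;
    masters { 203.0.113.10; };
    file "slaves/example.com.db";
};
```

The secondary then transfers the zone automatically whenever the master signals a change.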

So my outcome, for me, is that in reality all of them are a good choice of provider, and I would recommend the best tool for the job, which in a nutshell is:
- if you need the flexibility and a very easy management system
Linode - if you don't need to worry about the added expense of a server control panel
- if you don't need to worry about the management interface and just want guaranteed reliability and good prices

All in all, I am keeping most of my services where they are for now; I am more than happy with the service they provide. One client has moved to MediaTemple, and getting past Plesk as the control panel, it's a damn good service, even for something based solely in the US, and pretty swift. I have moved my personal server off, as I got better resources for the same price and I don't need to worry about the management interface. Besides, it's good to road test things properly 😉 This will also allow me to set up something that can monitor my servers and do stuff via the API that I may not necessarily be able to do if the server it's hosted on is down 🙂

It's always better to have your eggs in lots of baskets in the server hosting world, because hosts get targeted, DCs have issues, etc.; the more you can minimise that impact, the better. I will still be recommending my current provider to most people that ask me; however, I am now much, much more informed about the competition and can definitely give a much more objective recommendation to clients.

Amazon Cloudfront

I have been fighting in my spare time to set up Amazon CloudFront to take some of the hit off my server (I can then run it more efficiently and save money). I wish I could have used the Akamai offerings, but seeing as the idea was to save money, that would have defeated the purpose. Anyway, I shall post some brief thoughts now, and will later post some more info on how I got things going.

Firstly, CloudFront is a Content Delivery Network (CDN). Its core job is to speed up your website by serving all the static files on it (JavaScript, CSS, images and other files; no PHP). In a normal situation, putting these files on another server doesn't necessarily speed up a site, but the core difference with a CDN is that it is optimised to serve those files, and it has what are known as edge servers all over the world. The CDN caches all your files on these edge servers, and when someone requests parts of your website from the CDN, they are served by the server closest to them (a closer server is a faster server).

There are 2 types of CDN, Origin Pull and Point of Presence (PoP), and they work in 2 different ways (obviously). By far the simplest to set up and maintain is Origin Pull. The way this works is that you upload your site as normal, set the CDN to "pull" its files from your server (like a normal client), and then get your users to pull from a different CDN URL. For example, if your site is www.example.com and your CDN is at cdn.example.com, the CDN pulls your images from www.example.com as they are requested, but you provide links on your site pointing to cdn.example.com. Obviously, for a big site, rewriting all those links to images and CSS files etc. will be time consuming, so you would perhaps need a plugin if you are using a CMS; Rails even has this built in as a configuration line!

PoP is slightly different in that you still reference a CDN URL, but you have to prime the caches by uploading your data to your CDN provider. This takes the pull pressure off your server, but has the added complexity that you have to make sure files are uploaded. There are tools out there that will automate this part of the task, though, such as the CDN module for Drupal or W3 Total Cache for WordPress.
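To make the Origin Pull link-rewriting step concrete, here is a rough one-liner sketch using sed, where example.com and cdn.example.com are placeholder domains standing in for your site and its CDN hostname:

```shell
# create a sample page with an absolute image URL (placeholder domains)
printf '<img src="http://www.example.com/images/logo.png">\n' > index.html

# rewrite the static asset URLs so browsers fetch them from the CDN hostname;
# sed -i.bak edits in place and keeps a .bak backup of the original
sed -i.bak 's|http://www\.example\.com/images/|http://cdn.example.com/images/|g' index.html

cat index.html
```

A CMS plugin does essentially the same substitution at page-render time instead of editing files on disk.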

Amazon CloudFront is a PoP CDN and uses the S3 service as its origin, so you have to manually make sure your content is loaded up to S3 to make use of it; W3 Total Cache takes care of this, and the sneak peek I had of the next version even supported expires settings and gzip (more on that next time). The actual setup of S3 and CloudFront couldn't have been much easier: I got my Amazon Web Services (AWS) account up and running and enabled S3 and CloudFront, then created my bucket for hosting files and set up the CloudFront distribution attached to the bucket in no time. Even setting up the CDN CNAME record was quick and easy.
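For reference, the CNAME is just an ordinary DNS record pointing your CDN hostname at the distribution's domain. A hypothetical zone-file line, where cdn.example.com and d1234abcd.cloudfront.net are placeholders for your hostname and the distribution domain CloudFront assigns you, would look like:

```
; hypothetical zone-file entry; cdn.example.com and
; d1234abcd.cloudfront.net are placeholders
cdn.example.com.    3600    IN    CNAME    d1234abcd.cloudfront.net.
```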

Next is uploading content. For now I am using W3 Total Cache to take care of this, but I have had one or two issues with URLs not being rewritten properly that I have yet to address. There are, however, many tools that can be set up to run via cron or in the background to sync your files to S3; s3cmd and s3sync are 2 that come to mind first. Amazon have no origin pull, which I think is their first negative.
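As an example, a nightly s3cmd sync from cron might look like the fragment below; the path and the bucket name "my-bucket" are placeholders, and s3cmd needs your AWS keys configured (via s3cmd --configure) before this will work:

```
# hypothetical crontab entry: push /var/www/static/ to S3 every night at 3am,
# making the files public and letting s3cmd guess their MIME types
0 3 * * * s3cmd sync --acl-public --guess-mime-type /var/www/static/ s3://my-bucket/static/
```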

The next hiccup came when setting up the site: I changed the logo file, but because I had previously set expires for a week (which for some reason registered as a year :S), I was stuck with the old file until I decided to just rename it and force the change. There is a big problem here: I couldn't invalidate the cache on CloudFront, as is normal with most other CDN providers. This is a big minus point; the only way to invalidate the cache was to delete and recreate the CloudFront distribution, which then left me at the mercy of DNS propagation.

Next up is gzip. CloudFront won't automatically send files with gzip encoding unless you pre-gzip them and set the headers yourself. This means keeping 2 copies of all your CSS and JS files, one compressed and one uncompressed, and then rewriting your CSS and JS references appropriately depending on the browser, which is a large overhead (luckily solved in the upcoming version of W3 Total Cache).
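The pre-gzipping itself is simple; it's the header that trips people up, because CloudFront serves exactly the bytes you upload. A minimal sketch, with placeholder content and a placeholder bucket name "my-bucket" in the commented upload step:

```shell
# sample stylesheet (placeholder content)
printf 'body { margin: 0; }\n' > style.css

# keep the uncompressed copy and make a pre-gzipped one; the compression
# has to happen before upload, since CloudFront won't do it for you
gzip -9 -c style.css > style.css.gz

# hypothetical upload step (needs s3cmd with AWS keys configured):
#   s3cmd put --add-header='Content-Encoding: gzip' \
#         --mime-type='text/css' style.css.gz s3://my-bucket/css/style.css

# sanity check that the compressed copy round-trips
gunzip -c style.css.gz
```

Browsers that don't advertise gzip support then need the plain style.css instead, which is exactly the two-copies overhead described above.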

That said, CloudFront is decently fast and extremely cheap (although don't forget to factor in storage costs), and it can also host HTML files, which some providers such as Akamai won't do on their basic object caching services.

It's early days, and I wish I had a full-blown CDN like Akamai, but CloudFront wins the day for me with its cheap pay-as-you-go costs, which work out well for me.
