
Passengers don’t want to pay for wi-fi

According to a new survey, passengers believe they should not pay extra for wi-fi on flights.

The Holiday Extras survey found that a massive 84% of the more than four thousand travellers polled would rather go without inflight internet access than pay extra for the privilege.

25% even thought that it was inappropriate to charge because access to the internet is fast becoming a human right. 

27% of men and 23% of women felt that they would make productive use of wi-fi access on a flight, whereas others surveyed were ambivalent about the effect of access, stating that they would choose to ignore emails while they ate their inflight meal or watched a film. 10% admitted that knowing they had access to their emails and work would mean that they could not fully relax.

James Lewis, head of partnerships for Holiday Extras said: “There is undoubtedly an appetite for internet connectivity on planes but price could be the stumbling block.”

“Of course, airlines need to be sure that the cost of providing wi-fi on board can be justified; there’s no doubt that it’s a necessity for business travellers, but most holidaymakers like to utilise the time before the flight and the flight itself to get into the holiday mood – making the most of their valuable leisure time.”

 

Would you use wi-fi on a flight? And would you pay for the privilege?


Using WordPress with Amazon Opsworks and git


We’ve been spending a lot of time with Amazon Opsworks. We already use Amazon EC2 for most of our server and web application hosting, and were using Opscode Chef to handle a lot of the deployment – so Opsworks is a logical next step.

We’ve just set up SugarCRM using Opsworks, and that’s been really successful. Next up – moving our WordPress sites over.

Version controlling WordPress with Git

The first step for us was to begin managing our WordPress codebase with Git. The core WordPress code is already available as a git repository, of course (e.g. at the WordPress GitHub repo).

We followed David Winter’s guide to managing custom WordPress sites with git. The idea is to check out the WordPress core as a submodule in your git repo, and to set up separate git repos for your themes. Finally, he suggests version controlling the whole site directory, so that the configuration is managed and versioned as well.
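As a rough sketch of what that layout looks like on disk (the repo URLs, paths and release tag below are just examples, not the exact ones from the guide):

# a new repo to hold the whole site
git init wp-site && cd wp-site

# WordPress core as a submodule, pinned to a release tag
git submodule add https://github.com/WordPress/WordPress.git wordpress
(cd wordpress && git checkout 3.5.1)

# each custom theme as its own repo, also pulled in as a submodule
git submodule add git@bitbucket.org:example/our-theme.git wp-content/themes/our-theme

# commit the lot, including wp-config.php once it exists
git add . && git commit -m "WordPress core and theme as submodules"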

Once you have a Git repo for your WordPress site, you can add a new app in Opsworks. We use BitBucket for our version control, so we put the BitBucket repo address for our WordPress git repo into the Repository URL field in the App setup, and pasted our private BitBucket key into the “Deploy SSH Key” field as normal.
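If you’d rather script that than click through the console, the AWS CLI can define the same app – a minimal sketch, with a placeholder stack ID and repo URL (the deploy SSH key can be supplied in the app-source structure too):

# create the Opsworks app pointing at the BitBucket repo
aws opsworks create-app \
  --stack-id 00000000-0000-0000-0000-000000000000 \
  --name wordpress-site \
  --type php \
  --app-source Type=git,Url=git@bitbucket.org:example/wp-site.git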

Deploying WordPress on Opsworks

Deploying the app gives you a checked-out wordpress directory in a minute or two.

Next step is to manage the wp-config file using a custom chef cookbook. I’d like to put relevant data, like the database connection details, into custom JSON in Opsworks, and use that to populate the wp-config file. We’ve already done a similar setup using a SugarCRM cookbook for deploying Sugar on Opsworks, and that works really nicely – making it super-easy to set up subsequent sites. I’d like to get the same thing for all our WordPress sites — the idea is that, in a couple of clicks, you can bring up a new WordPress site, with new database credentials and custom themes. I’ll be working on this over the next few days, so stay tuned for that…
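To make the idea concrete — and this is only an illustration of the end goal, not the eventual Chef cookbook; the JSON keys, paths and credentials are made up — the stack’s custom JSON would carry the database details, and the deploy step would write them into wp-config.php, roughly like this:

# Hypothetical custom JSON set on the Opsworks stack:
# { "wordpress": { "db_name": "wp_site1", "db_user": "wp_user",
#                  "db_password": "changeme", "db_host": "db.internal.example" } }

# What the cookbook would effectively do at deploy time –
# render wp-config.php from those attributes:
cat > /srv/www/wordpress-site/current/wp-config.php <<'PHP'
<?php
define('DB_NAME',     'wp_site1');
define('DB_USER',     'wp_user');
define('DB_PASSWORD', 'changeme');
define('DB_HOST',     'db.internal.example');
$table_prefix = 'wp_';
define('ABSPATH', dirname(__FILE__) . '/wordpress/');
require_once(ABSPATH . 'wp-settings.php');
PHP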

Are you managing WordPress on Opsworks in a different way? Do let me know in the comments!

New gadget to prove mishandling of parcels

DropTag image from Cambridge Consultants

A new gadget has been developed to recognise when a package, suitcase or any other form of parcel has been dropped or mishandled in transit. 

DropTag has a motion sensor which detects mishandling of packages and will also send a warning message to the owner’s phone via the accompanying app.

The new device, which has just been granted a patent, looks set to have many useful applications if developed more fully. Read the full story on Villaseek.

Rise in demand for wi-fi in holiday cottages as parents work on vacation

Over two-thirds of parents (68%) work while on holiday, according to a new survey. The results coincide with a rise in demand for wi-fi in holiday cottages.

The Tots To Travel survey on the work/life balance of parents had over 500 respondents. An astonishing 41% said they would work for five or more hours a week while on holiday, 68% admitted to taking work calls, and 78% would check their emails.

The reasons for working while on holiday ranged from feeling under pressure from their employer to feeling that it would be good for their business.

For more information on the results, see the full article on Holiday Cottages.cc 

 


Disaster recovery to the Cloud – Amazon EC2 & ShadowProtect


I’ve spent the last week setting up a cloud-based backup system which will image a physical or virtual Windows server to Amazon EC2’s cloud-based servers, and allow a complete copy of that image to be spun up in the cloud if and when a company’s primary office becomes unavailable.

Disaster recovery and backup systems are usually measured using two key metrics – “Recovery Point Objective” (RPO) and “Recovery Time Objective” (RTO). The first is the amount of data you’re prepared to lose if a disaster strikes; the second is how long it takes to get a working system back. So if you do a tape-based nightly backup, you could easily lose a day of data, and it might take you two days to recover it. The RPO of the solution I’ve got here is 15 minutes. That’s achieved using ShadowProtect from StorageCraft – which takes a consistent snapshot up to 96 times a day – meaning that you shouldn’t lose any more than the last 15 minutes of your work, even if your office is wiped out in a fire or flood.

ShadowProtect can also create a hardware independent restore which means that the image of your server can run on a different system and still boot up. Even if it’s a physical server we’re backing up, the image can be run on a virtual server if your main server becomes unavailable.

And we’re extending that idea with Amazon EC2. We run ShadowProtect Image Manager software on our cloud-based Windows 2008 R2 server, which sits on a virtual EC2 instance in a data centre in Ireland. At client sites, ShadowProtect runs on their key servers, backing up in the first instance to a computer on their site. That computer also runs Image Manager, and transfers the incremental images over to the server at Amazon.

The tricky bit comes when you need to convert all those 15-minute snapshots into a cloned server at the EC2 end. ShadowProtect handles this for us — but there’s a problem. A vital part of the process is the hardware independent restore (HIR), which needs to run on the system we’re restoring to. Now, that software runs from a CD or USB key, and right now there’s no way to handle that boot process on Amazon.

That’s what I’ve spent the last week on. I’ve looked at various ways to do it, including running the hardware independent restore software on a new instance, and connecting that to the volume that will become our new computer. What I’ve settled on is to use Oracle’s VirtualBox software running on Amazon EC2.

Lots of people will tell you that running VirtualBox on Amazon EC2 simply won’t work. Well, it does. The problems come if you’re running a 64-bit server inside VirtualBox, which in turn sits on a virtualised computer in EC2. But running a 32-bit image in VirtualBox works just fine.

So – we’re logged onto the (64-bit) Windows 2008 R2 instance that we use to store all the server images, inside Amazon EC2. Then we load up VirtualBox inside that, and boot a fresh virtual machine off the ShadowProtect recovery disk ISO. We have a blank VHD disk in there as well (which will become the new server). And we have a second VHD disk which contains ShadowProtect’s continuous incremental backup files (.spi files) plus its initial backup (.spf file).
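For reference, the VirtualBox side can be driven from the command line with VBoxManage rather than clicked through the GUI – this is only a sketch, with made-up VM, disk and ISO names, and flags worth double-checking against your VirtualBox version:

# create and register the restore VM (a 32-bit guest, as discussed above)
VBoxManage createvm --name sp-restore --register
VBoxManage modifyvm sp-restore --memory 2048 --ostype Windows2008 --boot1 dvd --boot2 disk

# a blank VHD that will become the new server's root volume
VBoxManage createhd --filename new-server.vhd --format VHD --size 102400

# attach the blank VHD and the VHD holding the .spf/.spi backup chain
VBoxManage storagectl sp-restore --name SATA --add sata
VBoxManage storageattach sp-restore --storagectl SATA --port 0 --device 0 --type hdd --medium new-server.vhd
VBoxManage storageattach sp-restore --storagectl SATA --port 1 --device 0 --type hdd --medium backup-chain.vhd

# boot from the ShadowProtect recovery ISO
VBoxManage storagectl sp-restore --name IDE --add ide
VBoxManage storageattach sp-restore --storagectl IDE --port 1 --device 0 --type dvddrive --medium shadowprotect-recovery.iso
VBoxManage startvm sp-restore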

ShadowProtect restores those image files onto our new VHD file. We go through the HIR process as normal, make sure the partition we’ve restored to is active, and shut down the virtual machine again.

We’ve now got a VHD file, which is the root volume of our newly recovered server. Now, if it’s a 64-bit server (which it probably is), then there’s no way we can run this in VirtualBox. And even if it’s 32-bit, it would be slow (running a virtual machine inside another virtual machine). Certainly not much good for 100 staff who all want to log in to the servers from home the day after a client’s building has burned down.

So here’s where the Amazon part comes in. We use their VM Import / Export tools to pull our new VHD image into a fresh EC2 computer instance. The AWS API tools are installed on our master Amazon machine, so we run ec2-import-instance, specifying the new VHD image, and Amazon begins pulling that into a new instance.
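The import itself is a single command – shown here as a sketch with placeholder file, bucket and key names; the exact flags are worth checking against ec2-import-instance --help for your version of the tools:

# upload the recovered VHD to a staging S3 bucket and start the conversion
ec2-import-instance new-server.vhd \
  -f VHD -p Windows -a x86_64 -t m1.large \
  -b dr-import-staging \
  -o "$AWS_ACCESS_KEY" -w "$AWS_SECRET_KEY" \
  --region eu-west-1

# the conversion takes a while; watch its progress with
ec2-describe-conversion-tasks --region eu-west-1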

Once that process completes, there’s a new instance sitting in the Amazon Management Console, ready to be booted. We access that with Remote Desktop, in the same way we’d have accessed the original physical machine, and we’re away.

And that’s it — complete disaster recovery into the Amazon cloud, with an RPO of 15 minutes.

There are a few caveats. Firstly, StorageCraft claim a 2-minute RTO for their ShadowProtect solution. In other words, if you have a disaster, you can be up and running with a cloned replacement server in 2 minutes. Now, that may be true for the standby machine we’ve got on a client’s site — and that machine would always be the one you’d restore first, if that was possible. But if the client site is no longer accessible following a disaster and we need to do a remote restore on Amazon, that process is going to be longer.

The HIR part on VirtualBox takes probably 15 minutes, and then the import of the VHD into a new Amazon instance takes a couple of hours. This is very dependent on the amount of data we’re restoring, of course. All in all, we’re looking at an RTO of something like 4 hours — though I’m going to work on making the process slicker, and on automating as much of it as possible. It’s still miles better than getting those nightly backup tapes restored onto replacement hardware, but it’s not the miracle RTO that you’d get from a local ShadowProtect instance.

Another issue — while we have blazing-fast replacement servers in Amazon’s data centre, it’s trickier when the time comes to restore to physical servers at the client’s new offices again. To do that, we need to get the image files back out of the cloud and into the client’s new facility. Fine if they have a 100Mbit connection, but slow if not. An option is to use Amazon’s import/export facility (where they send your data on a hard disk from Ireland). The good news is that the replacement cloud servers are maintained for as long as needed, so that allows buildings to be set up, servers to be fixed and Internet connections to be commissioned without stopping staff being able to work on the temporary servers.

The advantage is clear, though: Off-site disaster recovery to a facility that has large amounts of bandwidth and servers that can be as powerful as needed. Plus, no need to have a stack of physical machines sitting in our office just waiting to be called up to replace machines at a client’s temporary site. Instead, we provision the replacements only when they’re needed. And that means the costs of a full off-site disaster recovery can be much lower than other solutions.

Tom

StorageCraft Shadow Protect and disaster recovery

Traditional backup systems for many SMEs involve a big tape drive, and a set of maybe 10 tapes that are religiously changed each night — perhaps encrypted and taken off site if things are working really well.

The backup jobs will kick into life in the early hours. Typically the backup software will send a report by email to let you know how everything went, and the process will continue each night, with someone swapping the next tape into the drive at some point the next morning.

That’s fine as far as it goes, but if there’s a problem with a server — or worse, a fire, flood or other disaster — at some point that evening, then you could lose a whole day’s work.

You also still need a working tape drive, and if there’s been a catastrophic problem with the server, you’d need new server hardware too.

The problem is, that can take a day to arrive, and then you’ve got to do the restore, which can take several hours. So as well as the lost day, you’re looking at perhaps two more days to fully restore everything. And another thing – if it’s a Windows server, quite often you’re only able to restore to identical or similar hardware. Stick your recovery image onto new hardware, and you’ll find that Windows won’t boot — unless you have specialist backup software that can handle it.

I’ve spent a lot of time recently evaluating and testing StorageCraft’s “ShadowProtect” software. I was impressed at how easy it is to restore an image of a server. What’s really cool is that you can bring up a “VirtualBoot” of the backed-up server, using Oracle’s VirtualBox. So you can open up a window on your machine and boot up the server. This provides a really quick way to check that all is okay with your backup images.

And finally, ShadowProtect takes an image of the server every 15 minutes. That means that if there’s a disaster, then theoretically you should only lose 15 minutes of work. And because you can restore quite easily onto completely new, dissimilar hardware, it’s a lot quicker to get back up and running too.

We can move mountains!


We can also remove them!  For those of you just joining us, our website used to feature a really nice picture of a mountain.  Ok, actually it was a sand dune, but it looked a lot like a mountain because it was sand photographed in black and white.  And we had it up there because we’re DUNE Root and…well, you know…picture of aforementioned dune.

Anyhoo, we’re doing a little website housekeeping, and we decided it was time for a new look.  (And, it’s easier to read.  We’re getting older now, and the mountain made us squint a little.)  We hope you like it too!

The Univerge SV8100

We’ve been doing a lot of work with the NEC Univerge SV8100 phone system recently. It’s a fantastic, flexible system, currently used at one of our clients with 80 phones. We’re using a mixture of NEC’s digital handsets plus some new VoIP handsets – all of which work really nicely together.