theCruskit
https://thecruskit.com/

Scheduling OSX jobs with launchd
https://thecruskit.com/scheduling-osx-jobs-with-launchd/
Sat, 16 Jan 2016 00:19:03 GMT

Apple's preferred method of scheduling jobs is launchd (rather than cron). One of the main benefits of using launchd is that jobs scheduled to occur while the machine is sleeping will run once it wakes up. This is different to cron, where a job scheduled while the machine was sleeping simply doesn't get executed.

The steps below give a practical example of how to schedule a script that performs a backup of photos to another machine on a daily basis.

The script to be scheduled is: /Users/paul/remoteSyncPictures.sh
It is just a simple script that rsyncs the Photo Library to another machine as per below (assuming SSH keys have been configured for login):

rsync -av "/Users/paul/Pictures/Photos Library.photoslibrary" tv@macmini:"/Users/tv/Pictures/"  
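
Make sure the script is executable, otherwise launchd won't be able to run it:

chmod +x /Users/paul/remoteSyncPictures.sh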

We want to configure this script to run nightly (at 22:30) to back up the photo album.

To do this we need to define a launch agent. The launch agent is what tells OSX what to run and when. It is defined in a plist file (which is just an XML-based configuration file); the plist used for this script is provided below for reference. The default location for storing per-user plist files (so they get picked up automatically) is ~/Library/LaunchAgents (there are other locations for system-wide agents). It is generally recommended that the name of the plist file match the Label defined in the definition so that it is easy to trace them (note that labels also need to be globally unique).

So the plist file is named

~/Library/LaunchAgents/com.thecruskit.remoteSyncPictures.plist

and contains:

<?xml version="1.0" encoding="UTF-8"?>  
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">  
<plist version="1.0">  
<dict>  
    <key>Label</key>
    <string>com.thecruskit.remoteSyncPictures</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/paul/remoteSyncPictures.sh</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>22</integer>
        <key>Minute</key>
        <integer>30</integer>
    </dict>
    <key>StandardErrorPath</key>
    <string>/Users/paul/remoteSyncPictures.log</string>
    <key>StandardOutPath</key>
    <string>/Users/paul/remoteSyncPictures.log</string>  
</dict>  
</plist>  

The example above schedules based on StartCalendarInterval so that the job runs at the same time each day. It is also possible to schedule based on StartInterval, which runs the job with a defined interval between executions. The full range of properties that can be configured for a job is captured in the Apple OSX launchd.plist man page.

Once the configuration file has been created, it is necessary to load it. This is performed using the launchctl utility.
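
Before loading it, it's worth running a quick syntax check over the plist with plutil (which ships with OSX):

plutil -lint ~/Library/LaunchAgents/com.thecruskit.remoteSyncPictures.plist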

To load the job definition, use:

launchctl load ~/Library/LaunchAgents/com.thecruskit.remoteSyncPictures.plist

The job is now scheduled and will run at the desired time.

Once the job has been loaded, it can be manually triggered (rather than waiting for the scheduled time) by running:

launchctl start com.thecruskit.remoteSyncPictures
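
Since the plist redirects stdout and stderr to remoteSyncPictures.log, you can watch that file to confirm the job actually ran:

tail -f /Users/paul/remoteSyncPictures.log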

Check the status of scheduled jobs using:

launchctl list
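
The list can get long, so filter it for the label if you only want this job:

launchctl list | grep com.thecruskit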

To unload the job:

launchctl unload ~/Library/LaunchAgents/com.thecruskit.remoteSyncPictures.plist

The full range of options available using launchctl can be found on the Apple OSX man page for launchctl.

Another useful reference on launchd is Nathan Grigg's Scheduling Jobs with launchd page. It provides a bit more background on launchd as well as options for GUI interfaces for configuring the jobs.

Fixing 403 errors when using nginx with SELinux
https://thecruskit.com/fixing-403-errors-when-using-nginx-with-selinux/
Fri, 01 Jan 2016 04:51:48 GMT

I was trying to configure a new static content directory in nginx (so that I could use the letsencrypt webroot domain verification method), but kept getting 403 permission denied errors when accessing any files from the directory.

Eventually tracked it down to SELinux blocking access to the new directory that I'd created because it wasn't part of the policy applied to nginx.

The requests by nginx to read the file could be seen as being blocked in the /var/log/audit/audit.log file as:

type=SYSCALL msg=audit(1451621937.716:74556): arch=c000003e syscall=2 success=no exit=-13 a0=7ff62cb994ad a1=800 a2=0 a3=7ff62be64ed0 items=0 ppid=5698 pid=5699 auid=4294967295 uid=997 gid=995 euid=997 suid=997 fsuid=997 egid=995 sgid=995 fsgid=995 tty=(none) ses=4294967295 comm="nginx" exe="/usr/sbin/nginx" subj=system_u:system_r:httpd_t:s0 key=(null)  
type=AVC msg=audit(1451621939.616:74557): avc:  denied  { open } for  pid=5699 comm="nginx" path="/wwwroot/letsencrypt/.well-known/acme-challenge/foo.txt" dev="xvda1" ino=17646770 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=file  

The quick fix: change the context of the new directory /wwwroot so that nginx can read it:

chcon -Rt httpd_sys_content_t /wwwroot  

After making this change, nginx can then read the directory and will serve the files from it without any errors.

Checking the changes that were made: with a test file foo.txt in the directory, the original security context before the change was:

# ls -Z *
-rwxr--r--. root root unconfined_u:object_r:user_tmp_t:s0 foo.txt

And the new context after running chcon:

# ls -Z *
-rwxr--r--. root root unconfined_u:object_r:httpd_sys_content_t:s0 foo.txt
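
One caveat: chcon only changes the labels on the files themselves, so a later relabel (e.g. restorecon or a full filesystem relabel) can undo it. To make the mapping persistent, the usual approach is to record a file context rule and then apply it:

semanage fcontext -a -t httpd_sys_content_t "/wwwroot(/.*)?"
restorecon -Rv /wwwroot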

Fixing home ADSL latency under load with fq_codel
https://thecruskit.com/fixing-home-adsl-latency-under-load-with-fq_codel/
Thu, 22 Oct 2015 11:31:57 GMT

About the same time my Billion ADSL modem (5+ years old) went on the blink, I was listening to the Packet Pushers Podcast: Improve your home internet performance using CoDel and figured it sounded worth experimenting with fq_codel to see how it would impact my home internet performance.

The gist is that fq_codel is an algorithm to deal with the issues associated with Buffer Bloat, whereby excess buffering causes high latency spikes and poor overall network performance.

So I started looking at replacement ADSL routers for home that would support fq_codel. After a little research I was looking at getting a basic ADSL modem (eg: a TP-Link TD-8817 or W8968 or similar) running in bridge mode and then putting a router running DD-WRT or OpenWRT behind it to provide the fq_codel capability - there didn't seem to be many ADSL routers that natively supported fq_codel.

After looking around for a router that I could install OpenWRT on, I ended up finding the Ubiquiti EdgeRouter X, which supports fq_codel out of the box. So I went the easy path and picked one up so I wouldn't have to go through the hassle of getting a router and flashing it with OpenWRT.
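
For anyone who does go down the OpenWRT/DD-WRT path instead, the upload (egress) side of this kind of shaping boils down to a few tc commands - a rough sketch only (it assumes eth0 is the WAN-facing interface and uses the upload rate I settled on further below; treat it as a starting point, not a tested config):

# shape egress to just under the link's upload rate, then queue with fq_codel
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 1390kbit
tc qdisc add dev eth0 parent 1:10 fq_codel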

To start with I replaced the modem with a TP-Link W8968 ADSL modem/router and configured it so that it was working as a router (with WiFi disabled).

Latency Test Scripts

To see the effects of buffer bloat I used two methods:

  • the speed tests from DSLReports which measure latency (and give a bufferbloat score)

  • a little script I wrote that just runs a ping test and then every 5 seconds starts an upload and download task using curl. Then I just manually monitor the ping response time. Nothing fancy, just looks like:

ping www.internode.on.net &  
sleep 5  
echo Starting first stream  
curl -s -o /dev/null http://speedcheck.cdn.on.net/50meg.test -w "%{speed_upload}" &  
curl --connect-timeout 8 -F "file=@10meg.test" http://128.199.65.191/webtests/ul.php -w "%{speed_upload}" -s -o /dev/null &

sleep 5  
echo Starting second stream  
curl -s -o /dev/null http://speedcheck.cdn.on.net/50meg.test  -w "%{speed_upload}" &  
curl --connect-timeout 8 -F "file=@10meg.test" http://128.199.65.191/webtests/ul.php -w "%{speed_upload}" -s -o /dev/null &

sleep 5  
echo Starting third stream  
curl -s -o /dev/null http://speedcheck.cdn.on.net/50meg.test -w "%{speed_upload}" &  
curl --connect-timeout 8 -F "file=@10meg.test" http://128.199.65.191/webtests/ul.php -w "%{speed_upload}" -s -o /dev/null &

Testing with no fq_codel

In the configuration using just the TP-Link W8968 with a laptop connected directly to one of its switch ports (ie: without any fq_codel support enabled), the DSLReports result wasn't too bad - the latency went from 30ms to 90ms under load. Where I really noticed it was when I ran the test script: latency for the pings reached as high as 3 seconds. Not very helpful for my voip phone call quality!

Testing with fq_codel

After this, I moved to the desired configuration where the TP-Link modem was in bridge mode connected to the EdgeRouter-X with fq_codel enabled. After tuning the fq_codel settings - upload 1390kbps, download 13800kbps (pick upload and download numbers close to what you get without limits, then work up or down from there while testing the outcome) - the DSLReports test gave an A+ for bufferbloat.

The real test, though, was under load using the script, where the latency looks like:

PING www.internode.on.net (150.101.140.197): 56 data bytes  
64 bytes from 150.101.140.197: icmp_seq=0 ttl=57 time=46.533 ms  
64 bytes from 150.101.140.197: icmp_seq=1 ttl=57 time=46.341 ms  
64 bytes from 150.101.140.197: icmp_seq=2 ttl=57 time=46.162 ms  
64 bytes from 150.101.140.197: icmp_seq=3 ttl=57 time=46.399 ms  
64 bytes from 150.101.140.197: icmp_seq=4 ttl=57 time=46.082 ms  
Starting first stream  
64 bytes from 150.101.140.197: icmp_seq=5 ttl=57 time=46.765 ms  
64 bytes from 150.101.140.197: icmp_seq=6 ttl=57 time=94.550 ms  
64 bytes from 150.101.140.197: icmp_seq=7 ttl=57 time=60.278 ms  
64 bytes from 150.101.140.197: icmp_seq=8 ttl=57 time=59.467 ms  
64 bytes from 150.101.140.197: icmp_seq=9 ttl=57 time=70.207 ms  
Starting second stream  
64 bytes from 150.101.140.197: icmp_seq=10 ttl=57 time=113.590 ms  
64 bytes from 150.101.140.197: icmp_seq=11 ttl=57 time=64.240 ms  
64 bytes from 150.101.140.197: icmp_seq=12 ttl=57 time=81.501 ms  
64 bytes from 150.101.140.197: icmp_seq=13 ttl=57 time=81.129 ms  
64 bytes from 150.101.140.197: icmp_seq=14 ttl=57 time=74.700 ms  
Starting third stream  
64 bytes from 150.101.140.197: icmp_seq=15 ttl=57 time=128.408 ms  
64 bytes from 150.101.140.197: icmp_seq=16 ttl=57 time=66.781 ms  
64 bytes from 150.101.140.197: icmp_seq=17 ttl=57 time=77.045 ms  
64 bytes from 150.101.140.197: icmp_seq=18 ttl=57 time=65.625 ms  
64 bytes from 150.101.140.197: icmp_seq=19 ttl=57 time=57.672 ms  

Note that there is a spike in latency for one ping immediately after each stream starts, but the router quickly adjusts and the latency comes back down to only 20-30ms above the original - with 3 full rate upload and download streams running simultaneously. Compare that to the 3 seconds I was getting prior to enabling fq_codel!

Admittedly, I had to give up some of the pure top end download speed to get the lower latencies (I did try higher upload/download settings for fq_codel and was able to get closer to the original speed, but the closer I got, the higher the latency went). Now, though, I don't have to worry if I'm streaming tv or downloading stuff when the voip phone rings, as the latency stays low. Day to day, the latency has more impact than a slight loss of top end speed, so I'm happy to make the tradeoff.

If you haven't looked at how buffer bloat and latency are impacting your connection, it's worth running a couple of tests - and you may find it worthwhile investigating fq_codel for your connection too.

Fixing failed voip calls to mobiles from a Cisco SPA112 ATA
https://thecruskit.com/fixing-failed-voip-calls-to-mobiles-from-a-cisco-spa112-ata/
Thu, 15 Oct 2015 11:21:56 GMT

So, my old Billion adsl modem with a built in fxo port for voip went on the blink. I replaced it with a Cisco SPA112 ATA (in conjunction with a new ADSL modem which didn't have built in voip support).

However, after configuring the SPA112 with my Internode Nodephone details I was getting some funny behaviour. Calls to landlines and Optus network mobiles worked ok, as did incoming calls. Calls to Telstra mobiles, though, were failing - the mobile would just come up with a call failed message and then the call would either go through to voicemail or the mobile would ring again (but got a call failure if answered).

I got the same behaviour when configured using the simple wizard and with the full Internode config for a Sipura ATA.

Eventually found a forum post from vange, who experienced something similar on Optus broadband with a Cisco voip phone.

It turns out the solution is to change the rtp packet size from the default of 0.030 to 0.020. The setting can be found under Voice -> SIP -> RTP Parameters -> RTP Packet Size.

After changing this it all seems to work and it's happy days again.

Changing the Location of Referenced Folders in iPhoto
https://thecruskit.com/changing-the-referenced-folder-location-in-iphoto/
Tue, 20 Jan 2015 12:25:23 GMT

If you use Referenced Folders in iPhoto there will probably come a time when you want to move the folders with the photos in them to another location. An example would be moving them to an external drive.

There isn't a nice method for updating the locations of the referenced folders through iPhoto after moving the folders - when you open the iPhoto library after moving them, iPhoto will still show the photos and their thumbnails, but when you try to view the full image it will complain that it can't find it and ask you to locate it. Rather a pain when you need to locate each image in your library.

The easiest way to fix the references is to modify the iPhoto database directly to update the path. I'd suggest you backup the iPhoto library before performing the steps below so you can easily restore it if you don't quite get it right. Also make sure iPhoto is not running when you are performing the database updates...

iPhoto stores its information in an SQLite database located in the following folder (replace <path_to_library> with the location of your iPhoto library):
<path_to_library>/iPhoto Library.photolibrary/Database/apdb

The table with the folder locations for the images is RKMaster. You can see the image locations from a Terminal using:

cd <path_to_library>/iPhoto Library.photolibrary/Database/apdb  
sqlite3 Library.apdb 'select imagePath from RKMaster LIMIT 4'  

This will return the first 4 rows in the table, looking something like:

Cycling1.jpg|Users/myuser/Pictures/Cycling1.jpg  
cycling3.jpg|Users/myuser/Pictures/cycling3.jpg  
cycling2.jpg|Users/myuser/Pictures/cycling2.jpg  
IMG_0534.JPG|Users/myuser/Pictures/Suzhou/IMG_0534.JPG  

For this example, I am moving the referenced folders from /Users/myuser/Pictures to /Users/Shared/Pictures.

To change the location of the referenced folders in iPhoto, run an update against the table as per below. Change the paths in the replace arguments as required for your case.

cd <path_to_library>/iPhoto Library.photolibrary/Database/apdb  
sqlite3 Library.apdb "update RKMaster set imagePath = replace(imagePath, 'Users/myuser', 'Users/Shared')"  

You should now be able to open iPhoto and when you navigate through to the Photos in your referenced library and view the full photo it should work.

There's further information and more detailed steps in a post on the Apple Support Communities about Referenced Libraries.

Spring, @PathVariable, dots and truncated values...
https://thecruskit.com/spring-pathvariable-and-truncation-after-dot-period/
Thu, 27 Nov 2014 10:30:28 GMT

If you are using a Spring @PathVariable with a @RequestMapping to map the end of a request URI that contains a dot, you may wonder why you end up with a partial value in your variable - with the value truncated at the last dot. For example, if you had the mapping:

@RequestMapping(value = "/test/{foo}", method=RequestMethod.GET)
    public void test(@PathVariable String foo){
    }

and you made a request to /test/fred.smith, then foo="fred" and the .smith is nowhere to be seen. Similarly, if you requested /test/fred.smith.jnr, then foo=fred.smith and the .jnr would be absent. (Spring thinks these are file extensions and helpfully removes them for you.)

The simple fix for this is to add a regex to the RequestMapping variable definition as per:

@RequestMapping(value = "/test/{foo:.*}", method=RequestMethod.GET)
    public void test(@PathVariable String foo){
    }

Then for /test/fred.smith you will get foo="fred.smith".

Others have obviously encountered the same issue - see the StackExchange questions Spring MVC PathVariable getting truncated and Spring MVC Path Variable with dot is getting truncated for some additional information.

CentOS 7 AMI on AWS has SELinux enabled
https://thecruskit.com/centos-7-ami-on-aws-has-se-linux-enabled/
Tue, 18 Nov 2014 11:55:41 GMT

Having configured a working VagrantFile that could spin up a CentOS 7 image on Digital Ocean and install and configure Ghost + nginx (see cruskit/vagrant-ghost), it should have been a simple matter of adding the AWS Vagrant provider to get the image running on AWS as well...

It was easy enough to add the provider, and provisioning would run without errors, but nginx would return bad gateway errors whenever it tried to proxy to Ghost. The nodejs Ghost process thought it was up and running ok. Trying to access the Ghost port (2368) directly, however, didn't play so nicely - it wouldn't connect.

After a bit of troubleshooting, it turns out that the AWS CentOS 7 AMI has SELinux (Security Enhanced Linux) enabled, whereas it is disabled in the Digital Ocean image. SELinux has a preconfigured list of HTTP ports that it allows connectivity on, and 2368 is not one of them, so it was being blocked.

(To be fair, SELinux being enabled is mentioned in the AMI notes, but I missed it...)

So, to make it work it was necessary to add 2368 to the list of allowed http ports. This can be done via semanage using:

semanage port -a -t http_port_t  -p tcp 2368  

(It would also have been possible to disable SELinux by editing /etc/selinux/config and setting SELINUX=disabled and then performing a reboot, but building a reboot into a vagrant provisioning sequence would be a pain, and for a prod box it would be nice to leave SELinux enabled anyway.)

Some simple commands that can help when troubleshooting an issue like this:

Find out whether SELinux is running and its status:

sestatus  

Find things that SELinux is impacting:

cat /var/log/messages | grep "SELinux"  
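
If auditd is running, the individual denials also end up in /var/log/audit/audit.log, which can be searched with:

ausearch -m avc -ts recent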

List the configured ports in SELinux:

semanage port -l  

produces output (filtered for http) like:

http_port_t   tcp  80, 81, 443, 488, 8008, 8009, 8443, 9000  

Automatically applying updates to Centos 7
https://thecruskit.com/automatically-applying-patches-to-centos-7/
Mon, 17 Nov 2014 11:50:58 GMT

If you are like me, you probably don't log into your system every day and run yum update to check for updates to apply. There is an easy way to keep up to date, though, using yum-cron. yum-cron allows you to configure your system to periodically check for updates and automatically apply them.

Using yum-cron in Centos 7, you also have the flexibility to specify what level of upgrades you want applied, eg: all updates, security updates, minimal security updates, etc. For stability on your production servers you'll probably only want to go with security updates, but on dev servers where it's not as much of an issue just go with all.

Install it with yum -y install yum-cron and then edit the file /etc/yum/yum-cron.conf to set your options. Some of the options you'll want to change (the file is well commented to indicate what options are available):

update_cmd = security  
download_updates = yes  
apply_updates = yes  

Don't forget to make sure it's started and running:

systemctl restart yum-cron  
systemctl status yum-cron  
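
It's also worth enabling it so that it starts again automatically after a reboot:

systemctl enable yum-cron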

There's a good description of how to get going with yum-cron at linuxaria with more details on the configuration options available.

Using systemd to manage Ghost & nginx on CentOS 7
https://thecruskit.com/using-systemd-to-manage-ghost-nginx-on-centos/
Mon, 17 Nov 2014 10:49:28 GMT

So, you want to be able to easily stop and start Ghost & nginx on CentOS7? (at least I did when I was setting up this blog...)

Centos 7 uses systemd rather than the older /etc/init.d method of managing services and daemons, so it could be a little bit of a change from what you're used to (and a source of a little controversy).

systemd uses service files (unit definitions) that declaratively capture the expected behaviour of an application (eg: how to start/stop, automatic restart behaviour upon failure, etc) as its native method of configuration.

nginx comes with a systemd unit definition (see: /usr/lib/systemd/system/nginx.service), so if you've installed it from the standard repos using yum, you should be able to start, stop, restart and check the status of nginx using the following (as root or via sudo):

systemctl start nginx  
systemctl stop nginx  
systemctl restart nginx  
systemctl status nginx  

If you install Ghost by using the zip file installer then you will need to provide a unit definition for Ghost so it can be managed by systemd. An example of a unit definition is below (change user, group, paths to ghost as appropriate for your installation). Create the unit definition as:

/etc/systemd/system/ghost.service

[Service]
ExecStart=/usr/bin/node /ghost/index.js  
Restart=always  
StandardOutput=syslog  
StandardError=syslog  
SyslogIdentifier=ghost  
User=ghost  
Group=ghost  
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target  

Once you've created the ghost.service file you'll be able to manage ghost in the same way as nginx, ie:

systemctl start ghost  
systemctl stop ghost  
systemctl restart ghost  
systemctl status ghost  

Note that having Restart=always set in the unit definition means that if the node.js process running ghost dies for any reason, then systemd will automatically restart it.
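
If you change the unit file later, get systemd to re-read it with a daemon-reload, and enable the service so that ghost also starts automatically at boot:

systemctl daemon-reload
systemctl enable ghost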

Specifying SSD or Magnetic EBS volumes using the Vagrant AWS provider
https://thecruskit.com/specifying-ssd-volumes-using-vagrant-aws-provider/
Sun, 02 Nov 2014 10:23:32 GMT

The Vagrant AWS Provider allows you to use Vagrant to provision virtual machines in AWS. When provisioning a machine in AWS, you are able to specify whether to use Magnetic, General Purpose SSD or Provisioned IOPS SSD volumes, depending on where you want to be on the performance/cost curve.

General Purpose SSD is now the AWS default volume type (though when I was provisioning via Vagrant, it was creating Magnetic volumes if no specific parameters were set in the VagrantFile).

You can specify the volume type in your VagrantFile using the block_device_mapping parameter in conjunction with VolumeType. Valid values for VolumeType include:

  • gp2 : General Purpose SSD
  • standard : Magnetic Discs

The below example shows how to specify SSD discs (gp2) for an 8GB volume that will be deleted when the machine terminates:

aws.block_device_mapping = [{
  'DeviceName' => '/dev/sda1',
  'Ebs.VolumeSize' => 8,
  'Ebs.VolumeType' => 'gp2',
  'Ebs.DeleteOnTermination' => 'true' }]

For a Magnetic volume, use the value standard for the VolumeType.

There's more detail on possible parameter values included in the AWS API Docs. A complete example of a VagrantFile using the AWS provider with the VolumeType option can be found at https://github.com/cruskit/vagrant-ghost/blob/master/Vagrantfile.

Hello world
https://thecruskit.com/hello-world/
Sun, 02 Nov 2014 09:33:30 GMT

I thought it was about time to set up a place to capture some thoughts, as well as those little one liners that seem to take ages to find but make all the difference.

It would have been too easy to go with a hosted blog option (and it would also have provided limited learning opportunities), so instead I thought I'd spin one up myself.

This involved a tour through Vagrant, Centos (and SELinux), Ghost (and themes), Nginx, AWS (via Digital Ocean), Disqus, Google Analytics and New Relic. It's amazing what is readily available out there for use these days to make your life easier.

If you want to build one for yourself, I've captured the Vagrant configuration in a project on GitHub that will allow you to easily spin up a Ghost blogging server. I'll explain the project a bit more in a future post.

For now, "Hello World"!
