I feel like I’m in the homestretch of my migration off of my current hosting provider, FullControl. Nothing against these guys; they’ve been an absolutely stellar service provider. I just don’t need the dedicated virtual server I was paying for with them. It’s a short story that comes down to rightsizing my hosting provider to align with my current needs. I’ll tell the somewhat longer version of the story in this blog post, though, since there are a couple of interesting side notes along the way.

I have three requirements of a hosting provider. Once a provider fulfills these three requirements, I’m just looking to optimize on cost. My three requirements are:

  • Host WordPress blogs.
  • Provide Subversion source control services.
  • Support OSQA, which essentially means running Python and Django.

Both my ex-provider and my new provider met these three requirements, my ex-provider at the high end of the cost range and my new provider at the low end. Even at opposite price extremes, they have similar architectures: a single server that can host PHP, Python and MySQL. One is Windows and one is Linux, but they’re both standard hosting stacks.

When I first started thinking about moving hosting providers, I considered some slightly more esoteric approaches, especially as they relate to blog hosting. I did a bit of probing and they all fell short in one area or another, but they’re worth mentioning just for the irregular architectures they embody.

  1. WordPress on Windows Azure. You can most certainly host WordPress on Windows Azure with SQL Azure. Zach Owens is a Microsoft evangelist who supports this scenario and blogs all about it. It sounds interesting, but I get the sense that this is just some sort of Microsoft pet project and the floor could drop out from under it at any time.
  2. WordPress on Amazon EC2 Micro Instances. I loved the ideas Ashley Schroder presented in his blog post on clustering WordPress on EC2 micro instances. His approach and experiences are worth reading about and will cause you to think about and investigate the EC2 spot instance pricing structure, if nothing else.
  3. BlogEngine on EC2 Micro Instances using SQL Azure. A radical extension of Ashley’s ideas onto the Microsoft platform: host BlogEngine.NET on EC2 Micro Instances and talk to SQL Azure on the back end. This fell apart because of BlogEngine’s architecture, which many posts indicate doesn’t scale out at all due to limitations in the DAL and caching layers.

The more I thought about it, the more I just wanted a stack that simply works for my personal web apps. As exciting as the above options were, they sounded like massive black holes that would suck in my free time. I ultimately decided on a simple solution: the tried-and-proven Dreamhost (http://www.dreamhost.com), a Linux provider. I get what I need for less than $10 US per month, and I can spin up Amazon EC2 spot instances when I need a throw-away playground. The move over was a lot easier than I expected, consisting of the following three steps:

  1. Export the WordPress content from my old provider’s site and import it into my new site.
  2. Flip DNS over to point at my WordPress blog on the new provider’s site. This included flipping over the DNS for all of the binary content (e.g. images) that I host on Amazon S3 and reference with a CNAME entry on a beckshome subdomain.
  3. Flip the switch on the DNS routing for Google Apps after I noticed that my beckshome.com email had dried up for a couple of days.


I recently had the opportunity to look into and make use of the Microsoft System.Security.SecureString class. This class is one of those dark corners of the .NET Framework that you don’t think about on a day-to-day basis but are really glad that it’s there when your security auditor starts asking questions about how PII data such as social security numbers are protected while resident in memory. The SecureString class takes care of this problem, helping you avoid a situation where unencrypted sensitive String data is left lingering around on the .NET heap. However, since this class does reference unmanaged memory buffers, its use is not entirely intuitive. I’ve attempted to demystify things with the explanation, drawing and code snippets in this post.

The diagram below shows that, in the case of System.String, what you get is an unencrypted string located in managed memory. Due to the immutability of String objects and the nondeterministic nature of the .NET Garbage Collector, the need for one string may result in multiple string objects scattered across managed memory, waiting to be compromised.
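To make this concrete, here’s a minimal sketch of the problem (my own illustration; the variable names are made up). Each operation on an ordinary String creates a brand new immutable copy on the managed heap, and none of those copies can be explicitly erased:

using System;

class StringProliferation
{
    static void Main()
    {
        // Each of these operations creates a new immutable string on the managed
        // heap; the earlier copies are not overwritten, they just sit there waiting
        // for the garbage collector.
        string ssn = Console.ReadLine();                // copy #1: the raw input
        string trimmed = ssn.Trim();                    // copy #2
        string unformatted = trimmed.Replace("-", "");  // copy #3

        // Nulling the references does nothing to the character data itself; the
        // contents linger in memory until collected and are never zeroed out.
        ssn = trimmed = unformatted = null;
    }
}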

In the case of a SecureString, you don’t have an insecure String in managed memory. Instead, you get a DPAPI-encrypted array of characters in unmanaged memory. And, since SecureString implements the IDisposable interface, it’s easy to deterministically destroy the string’s secure contents. There are a limited number of .NET 4.0 Framework classes that accept SecureStrings as parameters, including the cryptographic service provider (CSP) classes, the X.509 certificate classes and several other security-related classes. But what if you want to create your own classes that accept and deal with secure strings? How do you deal with the SecureString from managed .NET code, and how do you ensure that you don’t defeat the purpose of the SecureString by leaving intermediate strings unsecured in memory buffers?

The simple console application below demonstrates how a SecureString can be properly used and disposed, with the SecureString contents being made available to managed code and the intermediate memory zeroed out when no longer needed.

using System;
using System.Security;
using System.Runtime.InteropServices;

namespace SecureStringExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Wrapping the SecureString with using causes it to be properly  
            // disposed, leaving no sensitive data in memory
            using (SecureString SecString = new SecureString())
            {
                Console.Write("Please enter your password: ");
                while (true)
                {
                    ConsoleKeyInfo CKI = Console.ReadKey(true);
                    if (CKI.Key == ConsoleKey.Enter) break;

                    // Use the AppendChar() method to add characters
                    // to the SecureString 
                    SecString.AppendChar(CKI.KeyChar);
                    Console.Write("*");
                }
                // Make the SecureString read only
                SecString.MakeReadOnly();
                Console.WriteLine();

                // Display password by marshalling it from unmanaged memory  
                DisplaySecureString(SecString);
                Console.ReadKey();
            } 
        }

        // Example demonstrating what needs to be done to get SecureString value to
        // managed code. This method uses unsafe code; project must be compiled
        // with /unsafe flag in the C# compiler 
        private unsafe static void DisplaySecureString(SecureString SecString)
        {
            IntPtr StringPointer = Marshal.SecureStringToBSTR(SecString);
            try
            {
                // Read the decrypted string from the unmanaged memory buffer
                String NonSecureString = Marshal.PtrToStringBSTR(StringPointer);
                Console.WriteLine(NonSecureString);
            }
            finally
            {
                // Zero and free the unmanaged memory buffer containing the 
                // decrypted SecureString
                Marshal.ZeroFreeBSTR(StringPointer);
                if (!SecString.IsReadOnly())
                   SecString.Clear();
            }
        } 
    }
}

This example should be useful in working SecureString into your own applications. Like any other security measure, there’s a cost to the additional security. In the case of the SecureString, there’s overhead in adding characters to the SecureString as well as in marshalling data out of unmanaged memory. The final reference example I’ll provide is from Microsoft’s SecureString implementation, specifically the code that initializes the secure string. From this code, you can clearly see the check for platform support, the buffer allocation, the pointer acquisition and the ProtectMemory() call, which invokes the native Win32 RtlEncryptMemory function.

[HandleProcessCorruptedStateExceptions, SecurityCritical]
private unsafe void InitializeSecureString(char* value, int length)
{
    this.CheckSupportedOnCurrentPlatform();
    this.AllocateBuffer(length);
    this.m_length = length;
    byte* pointer = null;
    RuntimeHelpers.PrepareConstrainedRegions();
    try
    {
        this.m_buffer.AcquirePointer(ref pointer);
        Buffer.memcpyimpl((byte*) value, pointer, length * 2);
    }
    catch (Exception)
    {
        this.ProtectMemory();
        throw;
    }
    finally
    {
        if (pointer != null)
        {
            this.m_buffer.ReleasePointer();
        }
    }
    this.ProtectMemory();
}
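To close the loop on the earlier point about framework classes that accept SecureStrings directly, here’s a minimal sketch that hands a SecureString to System.Diagnostics.ProcessStartInfo, whose Password property is typed as SecureString. The executable path, account name and password literal are placeholders for illustration only:

using System;
using System.Diagnostics;
using System.Security;

class SecureStringUsage
{
    static void Main()
    {
        using (SecureString password = new SecureString())
        {
            // In a real application, collect these characters from the user as in
            // the console example above; a literal is used here only for brevity.
            foreach (char c in "placeholder")
                password.AppendChar(c);
            password.MakeReadOnly();

            // ProcessStartInfo.Password is a SecureString, so the password never
            // needs to be round-tripped through an ordinary String.
            ProcessStartInfo startInfo = new ProcessStartInfo(@"C:\Windows\System32\notepad.exe")
            {
                UserName = "SomeLocalUser",   // placeholder account name
                Password = password,
                UseShellExecute = false       // required when supplying credentials
            };
            Process.Start(startInfo);
        }
    }
}

NetworkCredential offers a similar SecureString-based constructor if you’re passing credentials to network resources rather than starting processes.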


I’ve been sitting on my offsite backup upgrade for a long while now and finally decided to pull the trigger this week. I’ve used MozyHome for many years, but the Mozy rate hike six months back agitated me. Combine that with the fact that, for more money, I’m not even getting the amount of backup I used to get, and it was clearly time to move on, even though I’m nowhere near the 18 billion gigabytes of storage Mozy claims I’m using.

I looked at some side-by-side reviews of home backup products and found that GigaOM had the most useful ones. Their original review, done in 2009, compared the two top contenders at that point in time: MozyHome and Carbonite. I’ve included that link more for completeness, since I wasn’t really interested in either of those two players. GigaOM’s review of upstart providers Backblaze and CrashPlan was much more interesting and convinced me to go with CrashPlan as my new backup provider (bye, bye Mozy). I’ve always been intrigued by CrashPlan’s unique peer-to-peer backup option. With their unlimited offsite backup now extremely price competitive, and with an optional family plan, CrashPlan has all the features I’m looking for.

For local backups, Apple Time Machine to an external drive has always worked extremely well for me. However, Scott Hanselman’s recent podcast on Network Attached Storage (NAS) has left me wanting a Synology NAS device. Check out the NAS product features on Synology’s site and the incredible reviews of their products on Amazon.com. Some of the killer features that caught my eye include:

  • Hybrid RAID and easy storage expansion
  • Backup to Amazon S3
  • Built In FTP and WebDAV
  • Surveillance and IP Camera Recording (How Logical Is That?)
  • Apple Time Machine Support
  • Mobile Device Support
  • Ability to Function as an iTunes Server

This simple YouTube video “Be Your Own Cloud” sums up pretty well some of the challenges I’m trying to address.


One of the things I was really eager to do was to help one of our clients manage the archival and history of projects within their TFS repository. Historically, VSS volume sizes have gotten out of control over time, resulting in commensurately poor performance. Obviously, a SQL Server backing database offers lots of advantages over the Jet database engine, but even SQL Server performance will degrade over time as the history volume in long-running projects piles up.

I was hoping that TFS 2008 had built-in functionality to manage project archiving and history. Not only does TFS 2008 not have such a function, but the co-mingling of data (all the projects on a server write to the same database) means that it’s nearly impossible to break out which data belongs to which project and apply different types of information lifecycle management rules, such as modifying the type of storage used, applying specific backup criteria to different projects, or taking a project completely offline so that it no longer impacts the performance of the TFS database but can still be retained for historical purposes.

The good news is that, if you’re willing to make the move, TFS 2010 has functionality that explicitly addresses the requirement for TFS archiving and history management. TFS 2010 Team Project Collections allow you to organize similar projects into collections and, most importantly for our needs, allocate a different set of hardware resources to each team project collection. The benefit of this setup and its applicability to the intent of this blog post should be immediately obvious. The downside of this approach is that you can’t work (link work items, branch & merge, etc.) across project collections. An annotated version of a diagram from the MSDN Team Project Collections documentation can be found below.
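Beyond the diagram, a small code sketch makes the separation concrete: each team project collection gets its own URL, so a client connects to, say, an archive collection completely independently of the active one. The collection name and server URL below are made up for illustration; the calls are from the TFS 2010 client object model:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class CollectionConnectionSketch
{
    static void Main()
    {
        // Hypothetical collection set up to hold archived projects; it can live on
        // entirely different hardware than the collection for active projects.
        Uri archiveUri = new Uri("http://tfsserver:8080/tfs/ArchivedProjects");

        TfsTeamProjectCollection collection =
            TfsTeamProjectCollectionFactory.GetTeamProjectCollection(archiveUri);
        collection.EnsureAuthenticated();

        // Work against the collection's services as usual, e.g. version control.
        VersionControlServer versionControl = collection.GetService<VersionControlServer>();
        Console.WriteLine("Connected to " + collection.Uri);
    }
}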


I’ve included below my Amazon.com review of the book “Making It Big In Software: Get the Job, Work the Org, Become Great”. I diligently read this book from cover to cover and just couldn’t seem to like it. After a while it became pretty monotonous to work through what felt like a very academic handling of what could have been a very interesting topic. This is in stark contrast to the other book I’m reading now, “Delivering Happiness” by Zappos CEO Tony Hsieh, which is a pragmatic, blow-by-blow tale of how someone actually made it big by leveraging technology. My review:


I really wanted to like Sam Lightstone’s book “Making It Big In Software” and read it cover to cover, at times forcing myself to read on. There are some good points in the book, which at its best represents a blend of the interviewing style of “Founders at Work” and the pragmatic advice of “Career Warfare”. Unfortunately, the book is at its best far too infrequently to make it a recommended read.

Aside from lacking any original advice or insights beyond what is fairly common knowledge to folks who have spent a couple of years in the software industry, there are several other reasons I probably won’t be referring back to this book very often:

  • The questions were pretty much the same for every interview. That’s great for statistical comparability but really didn’t do much to draw out the stories from the interviewees. At one point, I found myself thumbing to the end of each interview to find out if the “Do you think graduate degrees are professionally valuable?” question was going to be asked again.
  • An earlier reviewer pointed out the value in the use of personas to illustrate examples. Done correctly, I agree that this is a very powerful technique. However, the software development antics of Moe, Larry, and Curly in this book seemed less like personas and more like an attempt to compensate for the lack of more illustrative examples.
  • Lots of borrowed material, much of it from the standard software journeyman’s body of knowledge and some of it from popular authors such as Stephen Covey, who seems to be a personal favorite of the author.
  • A chapter on compensation with salary ranges? C’mon, really? Aside from immediately dating the book, this is information that clearly could have been put out on a website and updated periodically so that the reference doesn’t get immediately stale.

This book may be of slightly more value (3 stars) to someone new to the field of software. I hope I’m not being unduly harsh, but I find it hard to see how folks who have been around the industry for 5 to 10 years could rate this book 4 or 5 stars.
