Couple of exciting things to link to from last week:

  • Docker – O’Reilly released an awesome Introduction to Docker video. Total runtime is less than 2 hours. Definitely a recommended watch if you’re looking to get familiar with Docker containers. Free to those who have an O’Reilly Safari subscription.
  • Azure VM Images – Microsoft announces a host of Azure Virtual Machine Images for Visual Studio. A great way to get exposure to the other Visual Studio SKUs or run the newer versions of VS without having to deal with the headaches of an install.
  • App Development – Awesome article on how independent mobile app development works and can be quite lucrative.
  • Feature Toggles – Good course from Pluralsight on implementing feature toggles in .NET. Something architects should be aware of, although the process and patterns are not widely documented; a minimal sketch of the core idea follows this list.
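
Since documentation on toggle patterns is thin, here is a minimal sketch of the core idea in C#. The FeatureToggles class and the "feature:" config key convention are my own invention for illustration, not something taken from the Pluralsight course:

using System.Configuration; // requires a reference to System.Configuration.dll

public static class FeatureToggles
{
    // Reads a toggle such as <add key="feature:NewCheckout" value="true" />
    // from appSettings; a missing or unparsable key means the feature is off
    public static bool IsEnabled(string featureName)
    {
        string raw = ConfigurationManager.AppSettings["feature:" + featureName];
        bool enabled;
        return bool.TryParse(raw, out enabled) && enabled;
    }
}

Callers then branch on the toggle rather than maintaining long-lived feature branches: if (FeatureToggles.IsEnabled("NewCheckout")) { /* new code path */ }.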


The new year brought with it the chance to reflect on technologies that I see making a splash in the coming year. I’m enthralled by big data and analytics but I’m not a data scientist; likewise, I only see so much value in the wearables themselves, although they’ll certainly feed the big data beast. My list of technologies is strongly influenced by my background in software and devops — without being a list of language or tool features.

  • Blockchains and Ethereum. Marc Andreessen’s piece on Bitcoin is inspiring. The guy who invented the web browser told Stanford students in a lecture that if he were hacking today, he’d be working on applications of blockchains (the cryptographic technology that underpins Bitcoin). Ethereum has some potential as a platform, leveraging the decentralized nature of the blockchain and building on it with programmable contracts (limitless possibilities, hence the excitement), expanding the currently narrow cryptocurrency focus of Bitcoin.
  • Cognitive Computing and Watson. When you see IBM’s machines win against Jeopardy champions (Watson) and chess grandmasters (Deep Blue before it), big data and predictive analytics look so passé. Cognitive computing and AI are where the big boys are putting their money, with Apple, Microsoft and Amazon in the game via the technologies behind Siri, Cortana and Echo, respectively. IBM clearly has the best hand in this deal; the question now is how they’ll play it. Will they be able to parlay their initial Watson developer and API offering into a fully public, commercial, pay-as-you-go service, or will they build a walled garden like they did with their cloud offering and get overtaken by competitors? Take a look at the details on the probability that computerization will lead to job losses in the next two decades and it’s hard to think about encouraging your kids to become airline pilots or accountants. However, creating, maintaining and training the machines on the corpus of knowledge necessary to perform these tasks will be big business.
  • Containerization and Docker. Amazon EC2 supports Docker containers as of November 2014. Microsoft Azure supports Docker too. This technology seems to have skipped all the ups and downs of the hype cycle and gone straight to productivity. Time will tell if that’s true or not, but Docker offers something for everyone: more efficiency (versus virtual machines) for the data center manager, a well-defined sandbox for system admins and a packaging story made for the DevOps cookbooks (a minimal Dockerfile sketch follows this list).
  • Microservices and ASP.NET vNext. Martin Fowler and the folks at ThoughtWorks have been driving the concept of microservices. Although Martin seems a bit conflicted himself about how this aligns with his first law of distributed objects, I’m a believer. If you look at the SOA projects we worked on 10 years ago, this is a natural progression and probably should have been the jumping-off point, as opposed to heavy-handed governance. What really appeals to me is the product-based “you build it, you run it” nature of microservices. It works great in places like Amazon and Netflix, but it’s hard to know if/how that translates to large enterprises. Technologies like Node have always been naturally amenable to a microservice-based approach; it’s good to see the managed-memory enterprise platforms getting on board as well, for example the lightweight, deploy-what-you-want .NET hosting container available as part of ASP.NET vNext.
  • ALM Service Bus and Tasktop. While not as sexy as the other technologies in my list, Tasktop seems to have homed in on a much-needed and lucrative corner of the enterprise software space: integration of enterprise ALM tools like Atlassian Jira, BMC Remedy, HP QC, Microsoft TFS and others. If Tasktop can deliver on this promise, they’ll certainly find takers. I have a project coming up that’s using Tasktop; it will be interesting to see how expectations and reality align.
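
Since Docker’s packaging story came up above, here is roughly what it looks like in practice: a minimal, hypothetical Dockerfile for a small Node app (the base image tag and file names are illustrative):

# Build on the official Node base image
FROM node:0.10

# Copy the app into the image and install its dependencies
COPY . /app
WORKDIR /app
RUN npm install

# Document the listening port and define the startup command
EXPOSE 3000
CMD ["node", "server.js"]

One docker build later, the same artifact runs identically on a laptop, in the data center, or on EC2 and Azure.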


I feel like I’m in the homestretch of my migration off of my current hosting provider – FullControl. Nothing against these guys; they’ve been an absolutely stellar service provider. I just don’t need the dedicated virtual server I was paying for with them. It’s a short story that comes down to rightsizing my hosting provider to align with my current needs. I’ll tell the somewhat longer version of the story in this blog post, though, since there are a couple of interesting corollaries along the way.

I have three requirements of a hosting provider. Once a provider fulfills those three requirements, I’m simply looking to optimize on cost. My three requirements are:

  • Host WordPress blogs.
  • Provide Subversion source control services.
  • Support OSQA, which essentially means running Python and Django.

Both my ex-provider and my new provider met these three requirements – my ex-provider at the high end of the cost spectrum and my new provider at the low end. Despite sitting at opposite price extremes, they have similar architectures: a single server that can host PHP, Python and MySQL. One is Windows and one is Linux, but both are standard hosting stacks.

When I first started thinking about moving hosting providers, I considered some slightly more esoteric approaches, especially as they relate to blog hosting. I did a bit of probing and they all fell short in one area or another, but they’re worth mentioning just for the irregular architectures they embody.

  1. WordPress on Windows Azure. You can most certainly host WordPress on Windows Azure and SQL Azure. Zach Owens is an evangelist for Microsoft who is supporting this and blogs all about it. It sounds interesting, but I get the sense that this is just some sort of Microsoft pet project and the floor could drop out from under it at any time.
  2. WordPress on Amazon EC2 Micro Instances. I loved the ideas Ashley Schroder presented in his blog post on clustering WordPress on EC2 micro instances. His approach and experiences are worth reading about and will cause you to think about and investigate EC2 spot instance pricing structure, if nothing else.
  3. BlogEngine on EC2 Micro Instances using SQL Azure. A radical extension of Ashley’s ideas onto the Microsoft platform: host BlogEngine.NET on EC2 Micro Instances and talk to SQL Azure on the back end. This fell apart on BlogEngine’s architecture, which many posts indicate doesn’t scale out at all due to architectural limitations in the DAL and caching layers.

The more I thought about it, the more I just wanted a stack that works for my personal web apps. As exciting as the above options were, they sounded like massive black holes that would suck in my free time. I ultimately decided to go with a simple solution: the tried-and-proven Dreamhost (http://www.dreamhost.com), a Linux provider. I get what I need for less than $10 US per month and I can ramp up Amazon EC2 spot instances when I need a throw-away playground. The move over was a lot easier than I expected, consisting of the following three steps:

  1. Export the WordPress content from my old provider’s WordPress site and import it into my new site.
  2. Flip over DNS to point at my WordPress blog on my new provider’s site. This included flipping over DNS on all of my binary content (e.g. images) that I host on Amazon S3 and redirect to with a CNAME entry from a beckshome subdomain.
  3. Flip the switch on the DNS routing for Google Apps after I noticed my beckshome.com email had dried up for a couple of days. (A sketch of the DNS records behind steps 2 and 3 follows this list.)
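
For anyone making a similar move, steps 2 and 3 boil down to a handful of records in the DNS zone. Here is a rough sketch of what mine looks like; the subdomain names and IP address are illustrative, while the MX hosts are Google’s standard published ones:

; WordPress blog at the new provider
www      IN  A      203.0.113.10            ; new provider's IP (illustrative)

; Binary content on Amazon S3, reached via CNAME from a beckshome subdomain
files    IN  CNAME  files.beckshome.com.s3.amazonaws.com.

; Google Apps mail routing
@        IN  MX  1  aspmx.l.google.com.
@        IN  MX  5  alt1.aspmx.l.google.com.
@        IN  MX  5  alt2.aspmx.l.google.com.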


I recently had the opportunity to look into and make use of the Microsoft System.Security.SecureString class. This class is one of those dark corners of the .NET Framework that you don’t think about on a day-to-day basis but are really glad is there when your security auditor starts asking questions about how PII data such as social security numbers is protected while resident in memory. The SecureString class takes care of this problem, helping you avoid a situation where unencrypted sensitive String data is left lingering around on the .NET heap. However, since this class does reference unmanaged memory buffers, its use is not entirely intuitive. I’ve attempted to demystify things with the explanation, drawing and code snippets in this post.

The diagram below shows that, in the case of System.String, what you get is an unencrypted string located in managed memory. Due to the immutability of String objects and the nondeterministic nature of the .NET Garbage Collector, the need for one string may result in multiple string objects scattered across managed memory, waiting to be compromised.

In the case of a SecureString, you don’t have an insecure String in managed memory. Instead, you get a DPAPI-encrypted array of characters in unmanaged memory. And, since SecureString implements the IDisposable interface, it’s easy to deterministically destroy the string’s secure contents. There are a limited number of .NET 4.0 Framework classes that accept SecureStrings as parameters, including the cryptographic service provider (CSP), X.509 certificate classes and several other security-related classes. But what if you want to create your own classes that accept and deal with secure strings? How do you deal with the SecureString from managed .NET code and how do you ensure that you don’t defeat the purpose of the SecureString by leaving intermediate strings unsecured in memory buffers?

The simple console application below shows how a SecureString can be properly used and disposed, with the SecureString contents made available to managed code and the intermediate memory zeroed out when no longer needed.

using System;
using System.Security;
using System.Runtime.InteropServices;

namespace SecureStringExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Wrapping the SecureString with using causes it to be properly  
            // disposed, leaving no sensitive data in memory
            using (SecureString SecString = new SecureString())
            {
                Console.Write("Please enter your password: ");
                while (true)
                {
                    ConsoleKeyInfo CKI = Console.ReadKey(true);
                    if (CKI.Key == ConsoleKey.Enter) break;

                    // Use the AppendChar() method to add characters
                    // to the SecureString 
                    SecString.AppendChar(CKI.KeyChar);
                    Console.Write("*");
                }
                // Make the SecureString read only
                SecString.MakeReadOnly();
                Console.WriteLine();

                // Display password by marshalling it from unmanaged memory  
                DisplaySecureString(SecString);
                Console.ReadKey();
            } 
        }

        // Example demonstrating what needs to be done to get a SecureString
        // value back into managed code. All of the work goes through the
        // Marshal class; no unsafe pointer code is needed
        private static void DisplaySecureString(SecureString SecString)
        {
            IntPtr StringPointer = Marshal.SecureStringToBSTR(SecString);
            try
            {
                // Read the decrypted string from the unmanaged memory buffer
                String NonSecureString = Marshal.PtrToStringBSTR(StringPointer);
                Console.WriteLine(NonSecureString);
            }
            finally
            {
                // Zero and free the unmanaged memory buffer containing the 
                // decrypted SecureString
                Marshal.ZeroFreeBSTR(StringPointer);
                if (!SecString.IsReadOnly())
                   SecString.Clear();
            }
        } 
    }
}
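
As an aside before we talk about costs: a few framework classes will consume the SecureString directly, so you never have to marshal its contents out yourself. System.Diagnostics.ProcessStartInfo exposes a Password property of type SecureString, and System.Net.NetworkCredential picked up a SecureString constructor overload in .NET 4.0. A quick sketch, with made-up user and executable names:

using System.Diagnostics;
using System.Net;
using System.Security;

class SecureStringConsumers
{
    static void LaunchAs(SecureString password)
    {
        // NetworkCredential holds the SecureString as-is (overload added in .NET 4.0)
        NetworkCredential credential = new NetworkCredential("someUser", password);

        // ProcessStartInfo accepts the SecureString directly, so the password
        // never materializes as a plain String on the managed heap
        ProcessStartInfo startInfo = new ProcessStartInfo("SomeApp.exe")
        {
            UserName = "someUser",
            Password = password,
            UseShellExecute = false // required when supplying credentials
        };
        Process.Start(startInfo);
    }
}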

The console example above should be useful in working SecureString into your own applications. Like any other security measure, there’s a cost to the additional security. In the case of the SecureString, there’s overhead in adding characters to the SecureString as well as in marshalling data out of unmanaged memory. The final reference example I’ll provide is from Microsoft’s SecureString implementation, specifically the code to initialize the secure string. From this code, you can clearly see the check for platform dependencies, the buffer allocation, the pointer creation and the ProtectMemory() call, which invokes the native Win32 RtlEncryptMemory function.

[HandleProcessCorruptedStateExceptions, SecurityCritical]
private unsafe void InitializeSecureString(char* value, int length)
{
    this.CheckSupportedOnCurrentPlatform();
    this.AllocateBuffer(length);
    this.m_length = length;
    byte* pointer = null;
    RuntimeHelpers.PrepareConstrainedRegions();
    try
    {
        this.m_buffer.AcquirePointer(ref pointer);
        Buffer.memcpyimpl((byte*) value, pointer, length * 2);
    }
    catch (Exception)
    {
        this.ProtectMemory();
        throw;
    }
    finally
    {
        if (pointer != null)
        {
            this.m_buffer.ReleasePointer();
        }
    }
    this.ProtectMemory();
}
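
As a postscript on the marshalling front: the COM BSTR route in DisplaySecureString() above is not the only option. The Marshal class offers an equivalent pair of calls that work against plain unmanaged Unicode memory, and the copy-use-zero pattern is identical. A sketch:

private static void DisplaySecureStringViaHGlobal(SecureString SecString)
{
    // Decrypt the SecureString contents into an unmanaged Unicode buffer
    IntPtr StringPointer = Marshal.SecureStringToGlobalAllocUnicode(SecString);
    try
    {
        // Read the decrypted string back into managed code
        Console.WriteLine(Marshal.PtrToStringUni(StringPointer));
    }
    finally
    {
        // Zero and free the unmanaged buffer
        Marshal.ZeroFreeGlobalAllocUnicode(StringPointer);
    }
}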


I’ve been sitting on my offsite backup upgrade for a long while now and finally decided to pull the trigger this week. I’ve used MozyHome for many years, but the Mozy rate hike 6 months back agitated me. Combine this with the fact that, for more money, I’m not even getting the amount of backup I used to get, and it was clearly time to move on, even though I’m nowhere near the 18 billion gigabytes of storage Mozy claims I’m using.

I looked at some side-by-side reviews of home backup products and found that GigaOm had the most useful reviews. Their original review, done in 2009, compared the two top contenders at that point in time: MozyHome and Carbonite. I’ve included that link more for completeness at this point, since I wasn’t really interested in either of those two players. GigaOm’s review of upstart providers Backblaze and Crashplan was much more interesting and convinced me to go with Crashplan as my new backup provider (bye, bye Mozy). I’ve always been interested in Crashplan’s unique peer-to-peer backup option. With their unlimited offsite backup now being extremely price-competitive and with an optional family plan, Crashplan has all the features I’m looking for.

For local backups, Apple TimeMachine to an external drive has always worked extremely well for me. However, Scott Hanselman’s recent podcast on Network Attached Storage (NAS) has left me wanting a Synology NAS device. Check out the NAS product features on Synology’s site and the incredible reviews of their products on Amazon.com. Some of the killer features that caught my eye include:

  • Hybrid RAID and easy storage expansion
  • Backup to Amazon S3
  • Built In FTP and WebDAV
  • Surveillance and IP Camera Recording (How Logical Is That?)
  • Apple TimeMachine Support
  • Mobile Device Support
  • Ability to Function as an iTunes Server

The simple YouTube video “Be Your Own Cloud” does a pretty good job of summing up the challenges I’m trying to address.
