Author Archive

A couple of exciting things to link to from last week:

  • Docker – O’Reilly released an awesome Introduction to Docker video. Total runtime is less than 2 hours. Definitely a recommended watch if you’re looking to get familiar with Docker containers. Free to those who have an O’Reilly Safari subscription.
  • Azure VM Images – Microsoft announced a host of Azure Virtual Machine Images for Visual Studio. A great way to get exposure to the other Visual Studio SKUs or to run newer versions of VS without having to deal with the headaches of an install.
  • App Development – Awesome article on how independent mobile app development works and can be quite lucrative.
  • Feature Toggles – Good course from Pluralsight on implementing feature toggles in .NET. Something architects should be aware of, but the process and patterns are not widely documented; a minimal sketch of the basic idea follows this list.

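Since the process and patterns aren’t widely documented, here’s a minimal sketch of the basic idea: a flag read from configuration that gates a code path. The FeatureToggles class, the Feature.NewCheckoutFlow key and the appSettings convention are all hypothetical, for illustration only.

using System;
using System.Configuration;

// Minimal feature toggle sketch (hypothetical names, for illustration only).
// Assumes toggles live in appSettings, e.g.:
//   <add key="Feature.NewCheckoutFlow" value="true" />
public static class FeatureToggles
{
    public static bool IsEnabled(string featureName)
    {
        // Default to "off" when the key is missing or unparsable
        string value = ConfigurationManager.AppSettings["Feature." + featureName];
        bool enabled;
        return bool.TryParse(value, out enabled) && enabled;
    }
}

class CheckoutService
{
    public void Checkout()
    {
        if (FeatureToggles.IsEnabled("NewCheckoutFlow"))
        {
            // New code path, dark-launched behind the toggle
        }
        else
        {
            // Existing, proven code path
        }
    }
}

The real patterns get more involved (toggle lifetimes, cleanup, per-user targeting), which is exactly why a dedicated course is worth a look.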

The new year brought with it the chance to reflect on technologies that I see making a splash in the coming year. I’m enthralled by big data and analytics, but I’m not a data scientist; likewise, I only see so much value in the wearables themselves, although they’ll certainly feed the big data beast. My list of technologies is strongly influenced by my background in software and DevOps, without being a list of language or tool features.

  • Blockchains and Ethereum. Marc Andreessen’s piece on Bitcoin is inspiring. The guy who invented the web browser told Stanford students in a lecture that if he were hacking today, he’d be working on applications of blockchains (the cryptographic technology that underpins Bitcoin). Ethereum has some potential as a platform, leveraging the decentralized nature of the blockchain and building on it with programmable contracts (limitless possibilities, hence the excitement), expanding beyond the currently narrow cryptocurrency focus of Bitcoin.
  • Cognitive Computing and Watson. When you see Watson win at Jeopardy (and its predecessor Deep Blue beat chess grandmasters), big data and predictive analytics look so passé. Cognitive computing and AI are where the big boys are putting their money, with Apple, Microsoft and Amazon in the game via the technologies behind Siri, Cortana and Echo, respectively. IBM clearly has the best hand in this deal; the question now is how they’ll play it. Will they be able to parlay their initial Watson developer and API offering into a fully public, commercial, pay-as-you-go service, or will they build a walled garden like they did with their cloud offering and get overtaken by competitors? Take a look at the details on the probability that computerization will lead to job losses in the next two decades and it’s hard to imagine encouraging your kids to become an airline pilot or an accountant. However, creating, maintaining and training the machines on the corpus of knowledge necessary to perform these tasks will be big business.
  • Containerization and Docker. Amazon EC2 supports Docker containers as of November 2014. Microsoft Azure supports Docker too. This technology seems to have skipped all the ups and downs of the hype cycle and gone straight to productivity. Time will tell if that’s true or not, but Docker offers something for everyone: more efficiency (versus virtual machines) for the data center manager, a well-defined sandbox for system admins and a packaging story made for the DevOps cookbooks.
  • Microservices and ASP.NET vNext. Martin Fowler and the folks at ThoughtWorks have been driving the concept of microservices. Although Martin seems a bit conflicted himself about how this aligns with his first law of distributed objects, I’m a believer. If you look at the SOA projects we worked on 10 years ago, this is a natural progression and probably should have been the jumping-off point, as opposed to heavy-handed governance. What really appeals to me is the product-based “you build it, you run it” nature of microservices. It works great in places like Amazon and Netflix, but it’s hard to know if/how that translates to large enterprises. Technologies like Node have always been naturally amenable to a microservice-based approach; it’s good to see the managed-memory enterprise platforms getting on board as well, for example the lightweight, deploy-what-you-want .NET hosting container available as part of ASP.NET vNext.
  • ALM Service Bus and Tasktop. While not as sexy as the other technologies in my list, Tasktop seems to have homed in on a much-needed and lucrative corner of the enterprise software space: integration of enterprise ALM tools like Atlassian Jira, BMC Remedy, HP QC, Microsoft TFS and others. If Tasktop can deliver on this promise, they’ll certainly find takers. I have a project coming up that’s using Tasktop; it will be interesting to see how expectations and reality align.


I recently had the opportunity to look into and make use of the Microsoft System.Security.SecureString class. This class is one of those dark corners of the .NET Framework that you don’t think about on a day-to-day basis but are really glad that it’s there when your security auditor starts asking questions about how PII data such as social security numbers are protected while resident in memory. The SecureString class takes care of this problem, helping you avoid a situation where unencrypted sensitive String data is left lingering around on the .NET heap. However, since this class does reference unmanaged memory buffers, its use is not entirely intuitive. I’ve attempted to demystify things with the explanation, drawing and code snippets in this post.

The diagram below shows that, in the case of System.String, what you get is an unencrypted string located in managed memory. Due to the immutability of String objects and the nondeterministic nature of the .NET Garbage Collector, the need for one string may result in multiple string objects scattered across managed memory, waiting to be compromised.
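To make that concrete, here’s a quick illustrative snippet (the sample value is made up): each operation below allocates a brand new String object, and none of the intermediate copies can be deterministically wiped.

using System;

class StringCopiesExample
{
    static void Main()
    {
        string ssn = "123-45-6789";                         // original plaintext
        string stripped = ssn.Replace("-", "");             // copy #2
        string masked = "***-**-" + stripped.Substring(5);  // copy #3 (plus temporaries)
        Console.WriteLine(masked);
        // All of these copies linger on the managed heap until the
        // nondeterministic Garbage Collector happens to reclaim them,
        // and even then the bytes are not guaranteed to be overwritten.
    }
}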

In the case of a SecureString, you don’t have an unencrypted String in managed memory. Instead, you get a DPAPI-encrypted array of characters in unmanaged memory. And, since SecureString implements the IDisposable interface, it’s easy to deterministically destroy the string’s secure contents. There are a limited number of .NET Framework 4.0 classes that accept SecureStrings as parameters, including the cryptographic service provider (CSP), X.509 certificate classes and several other security-related classes. But what if you want to create your own classes that accept and deal with secure strings? How do you deal with the SecureString from managed .NET code, and how do you ensure that you don’t defeat the purpose of the SecureString by leaving intermediate strings unsecured in memory buffers?
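To start with the built-in case, here’s a minimal sketch of handing a SecureString straight to a framework class: the X509Certificate2 constructor has an overload that takes a SecureString for the private key password, so the password never has to surface as a System.String. The certificate path and password characters below are placeholders.

using System;
using System.Security;
using System.Security.Cryptography.X509Certificates;

class SecureStringToFrameworkClass
{
    static void Main()
    {
        using (SecureString pfxPassword = new SecureString())
        {
            // In real code the characters would come from key presses,
            // as in the console sample further down; a literal is used
            // here only to keep the sketch short.
            foreach (char c in "placeholder")
                pfxPassword.AppendChar(c);
            pfxPassword.MakeReadOnly();

            // This constructor overload accepts a SecureString, so the
            // private key password never exists as a managed String.
            X509Certificate2 cert =
                new X509Certificate2(@"C:\certs\example.pfx", pfxPassword);
            Console.WriteLine(cert.Subject);
        }
    }
}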

The simple console application below demonstrates how a SecureString can be properly used and disposed, with the SecureString contents made available to managed code and the intermediate memory zeroed out when no longer needed.

using System;
using System.Security;
using System.Runtime.InteropServices;

namespace SecureStringExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Wrapping the SecureString with using causes it to be properly  
            // disposed, leaving no sensitive data in memory
            using (SecureString SecString = new SecureString())
            {
                Console.Write("Please enter your password: ");
                while (true)
                {
                    ConsoleKeyInfo CKI = Console.ReadKey(true);
                    if (CKI.Key == ConsoleKey.Enter) break;

                    // Use the AppendChar() method to add characters
                    // to the SecureString 
                    SecString.AppendChar(CKI.KeyChar);
                    Console.Write("*");
                }
                // Make the SecureString read only
                SecString.MakeReadOnly();
                Console.WriteLine();

                // Display password by marshalling it from unmanaged memory  
                DisplaySecureString(SecString);
                Console.ReadKey();
            } 
        }

        // Example demonstrating what needs to be done to get SecureString value to
        // managed code. This method uses unsafe code; project must be compiled
        // with /unsafe flag in the C# compiler 
        private unsafe static void DisplaySecureString(SecureString SecString)
        {
            IntPtr StringPointer = Marshal.SecureStringToBSTR(SecString);
            try
            {
                // Read the decrypted string from the unmanaged memory buffer
                String NonSecureString = Marshal.PtrToStringBSTR(StringPointer);
                Console.WriteLine(NonSecureString);
            }
            finally
            {
                // Zero and free the unmanaged memory buffer containing the 
                // decrypted SecureString
                Marshal.ZeroFreeBSTR(StringPointer);
                if (!SecString.IsReadOnly())
                   SecString.Clear();
            }
        } 
    }
}

This example should be useful to you in working SecureString into your own application. Like any other security measure, there’s a cost to the additional security. In the case of the SecureString, there’s overhead to adding characters to the SecureString as well as marshaling data out of unmanaged memory.  The final reference example I’ll provide is from Microsoft’s SecureString implementation, specifically the code to initialize the secure string. From this code, you can clearly see the check for platform dependencies, buffer allocation, pointer creation and the ProtectMemory() call which invokes the Win32 native RtlEncryptMemory function.

[HandleProcessCorruptedStateExceptions, SecurityCritical]
private unsafe void InitializeSecureString(char* value, int length)
{
    this.CheckSupportedOnCurrentPlatform();
    this.AllocateBuffer(length);
    this.m_length = length;
    byte* pointer = null;
    RuntimeHelpers.PrepareConstrainedRegions();
    try
    {
        this.m_buffer.AcquirePointer(ref pointer);
        Buffer.memcpyimpl((byte*) value, pointer, length * 2);
    }
    catch (Exception)
    {
        this.ProtectMemory();
        throw;
    }
    finally
    {
        if (pointer != null)
        {
            this.m_buffer.ReleasePointer();
        }
    }
    this.ProtectMemory();
}


I’ve been sitting on my offsite backup upgrade for a long while now and finally decided to pull the trigger this week. I’ve used MozyHome for many years, but the Mozy rate hike 6 months back agitated me. Combine that with the fact that, for more money, I’m not even getting the amount of backup I used to get, and it was clearly time to move on, even though I’m nowhere near the 18 billion gigabytes of storage Mozy claims I’m using.

I looked at some side-by-side reviews of home backup products and found that Gigaom had the most useful reviews. Their original review, done in 2009, compared the two top contenders at that point in time: MozyHome and Carbonite. I’ve included the link more for completeness at this point, since I wasn’t really interested in these two players. Gigaom’s review of upstart providers Backblaze and Crashplan was much more interesting and convinced me to go with Crashplan as my new backup provider (bye, bye Mozy). I’ve always been interested in Crashplan’s unique peer-to-peer backup option. With their unlimited offsite backup now being extremely price-competitive and with an optional family plan, Crashplan has all the features I’m looking for.

For local backups, Apple TimeMachine to an external drive has always worked extremely well for me. However, Scott Hanselman’s recent podcast on Network Attached Storage (NAS) has left me wanting a Synology NAS device. Check out the NAS product features on Synology’s site and the incredible reviews of their products on Amazon.com. Some of the killer features that caught my eye include:

  • Hybrid RAID and easy storage expansion
  • Backup to Amazon S3
  • Built In FTP and WebDAV
  • Surveillance and IP Camera Recording (How Logical Is That?)
  • Apple TimeMachine Support
  • Mobile Device Support
  • Ability to Function as an iTunes Server

This simple YouTube video “Be Your Own Cloud” sums up pretty well some of the challenges I’m trying to address.


One of the things I was really eager to do was help one of our clients manage the archival and history of projects within their TFS repository. Historically, VSS volume sizes have gotten out of control over time, resulting in commensurately poor performance. Obviously, a SQL Server backing database offers lots of advantages over the Jet database engine, but even SQL Server performance will degrade over time as the history in long-running projects piles up.

I was hoping that TFS 2008 had built-in functionality to manage project archiving and history management. Not only does TFS 2008 not have such a function, but the co-mingling of data (all the projects on a server write to the same database) means that it’s nearly impossible to break out what data belongs to what project and apply different types of information lifecycle management rules, such as modifying the type of storage used, applying specific backup criteria to different projects, or taking a project completely offline so that it no longer impacts the performance of the TFS database but can still be retained for historical purposes.

The good news is that, if you’re willing to make the move, TFS 2010 has functionality to explicitly address the requirement for TFS archiving and history management. TFS 2010 Team Project Collections allow you to organize similar projects into collections and, most importantly for our needs, allocate a different set of hardware resources for each team project collection. The benefit of this setup and applicability to the intent of this blog post should be immediately obvious. The downside of this approach is that you can’t work (link work items, branch & merge, etc.) across project collections. An annotated version of a diagram from the MSDN Team Project Collections documentation can be found below.
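Diagram aside, the collection boundary also shows up directly in the TFS 2010 client API: each collection gets its own URL and its own TfsTeamProjectCollection connection, which is exactly why hardware and lifecycle policies can be split per collection while work item links and branches can’t span them. A quick sketch (the server and collection names are placeholders):

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class CollectionConnectionExample
{
    static void Main()
    {
        // Each team project collection is addressed by its own URL
        // (server and collection names are placeholders).
        Uri activeCollection = new Uri("http://tfsserver:8080/tfs/ActiveProjects");

        using (TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(activeCollection))
        {
            tpc.EnsureAuthenticated();

            // Services such as version control are scoped to this collection
            VersionControlServer vcs = tpc.GetService<VersionControlServer>();
            Console.WriteLine("Connected to: " + tpc.Name);
        }
        // An archived collection (e.g. http://tfsserver:8080/tfs/ArchivedProjects)
        // would be a completely separate connection; there is no
        // cross-collection linking, branching or merging.
    }
}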
