I recently had the opportunity to look into and make use of the Microsoft System.Security.SecureString class. This class is one of those dark corners of the .NET Framework that you don’t think about on a day-to-day basis but are really glad is there when your security auditor starts asking questions about how PII such as Social Security numbers is protected while resident in memory. The SecureString class takes care of this problem, helping you avoid a situation where unencrypted, sensitive String data is left lingering on the .NET heap. However, since this class references unmanaged memory buffers, its use is not entirely intuitive. I’ve attempted to demystify things with the explanation, drawing, and code snippets in this post.
The diagram below shows that, in the case of System.String, what you get is an unencrypted string located in managed memory. Due to the immutability of String objects and the nondeterministic nature of the .NET Garbage Collector, the need for one string may result in multiple string objects scattered across managed memory, waiting to be compromised.
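To make the problem concrete, here’s a minimal sketch (the variable names and the textBox control are illustrative) of how routine string handling multiplies plaintext copies on the managed heap:

// Each operation below allocates a brand-new String on the managed heap;
// none of the intermediate copies can be explicitly erased
string ssn = textBox.Text;                        // copy #1
string trimmed = ssn.Trim();                      // copy #2
string masked = "***-**-" + trimmed.Substring(7); // copies #3 and #4

// All of these copies linger until the Garbage Collector reclaims them,
// on its own schedule, and even then the bytes are not zeroed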
In the case of a SecureString, you don’t have an insecure String in managed memory. Instead, you get a DPAPI-encrypted array of characters in unmanaged memory. And, since SecureString implements the IDisposable interface, it’s easy to deterministically destroy the string’s sensitive contents. A limited number of .NET 4.0 Framework classes accept SecureStrings as parameters, including the cryptographic service provider (CSP) classes, the X.509 certificate classes, and several other security-related classes. But what if you want to create your own classes that accept and deal with secure strings? How do you work with a SecureString from managed .NET code, and how do you ensure that you don’t defeat its purpose by leaving intermediate strings unencrypted in memory buffers?
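Before answering those questions, here’s a quick look at one of the framework classes mentioned above. X509Certificate2 can accept the private key password as a SecureString, so the plaintext never has to surface as a managed String; in the sketch below, the certificate path and the GetPasswordAsSecureString() helper are hypothetical:

using System.Security;
using System.Security.Cryptography.X509Certificates;

// Hypothetical helper that builds the SecureString one character at a
// time, for example from masked console input
SecureString pfxPassword = GetPasswordAsSecureString();

// The certificate class uses the password without the caller ever
// holding it as a managed String
X509Certificate2 cert = new X509Certificate2(@"C:\certs\signing.pfx", pfxPassword);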
The simple console application below demonstrates how a SecureString can be properly used and disposed, with the SecureString contents being made available to managed code and the intermediate memory zeroed out when no longer needed.
using System;
using System.Runtime.InteropServices;
using System.Security;

class Program
{
    static void Main(string[] args)
    {
        // Wrapping the SecureString with using causes it to be properly
        // disposed, leaving no sensitive data in memory
        using (SecureString SecString = new SecureString())
        {
            Console.Write("Please enter your password: ");
            while (true)
            {
                ConsoleKeyInfo CKI = Console.ReadKey(true);
                if (CKI.Key == ConsoleKey.Enter) break;
                // Use the AppendChar() method to add characters
                // to the SecureString
                SecString.AppendChar(CKI.KeyChar);
            }
            // Make the SecureString read only
            SecString.MakeReadOnly();
            // Display password by marshalling it from unmanaged memory
            DisplaySecureString(SecString);
        }
    }

    // Example demonstrating what needs to be done to get SecureString value to
    // managed code. This method uses unsafe code; project must be compiled
    // with /unsafe flag in the C# compiler
    private unsafe static void DisplaySecureString(SecureString SecString)
    {
        // Decrypt the SecureString into an unmanaged BSTR buffer
        IntPtr StringPointer = Marshal.SecureStringToBSTR(SecString);
        // Read the decrypted string from the unmanaged memory buffer
        String NonSecureString = Marshal.PtrToStringBSTR(StringPointer);
        Console.WriteLine(NonSecureString);
        // Zero and free the unmanaged memory buffer containing the
        // decrypted SecureString
        Marshal.ZeroFreeBSTR(StringPointer);
    }
}
This example should be useful in working SecureString into your own applications. Like any other security measure, the additional security comes at a cost: in the case of SecureString, there’s overhead in adding characters to the SecureString as well as in marshaling data out of unmanaged memory. It’s also instructive to look at Microsoft’s SecureString implementation, specifically the code that initializes the secure string. There you can clearly see the check for platform dependencies, the buffer allocation, the pointer creation, and the ProtectMemory() call, which invokes the native Win32 RtlEncryptMemory function.
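As a rough illustration of that last step, here is a minimal, hypothetical P/Invoke sketch that reaches the same native RtlEncryptMemory routine that ProtectMemory() relies on. The routine has no import library and is exported from Advapi32.dll under the name SystemFunction040; the wrapper class and error handling below are my own invention, not the framework’s code:

using System;
using System.Runtime.InteropServices;

internal static class MemoryProtector
{
    // RtlEncryptMemory requires the buffer length to be a multiple of
    // RTL_ENCRYPT_MEMORY_SIZE (8 bytes)
    internal const uint RTL_ENCRYPT_MEMORY_SIZE = 8;

    // The function has no import library; it is exported from
    // Advapi32.dll under the name SystemFunction040
    [DllImport("advapi32.dll", EntryPoint = "SystemFunction040")]
    private static extern int RtlEncryptMemory(IntPtr memory, uint length, uint optionFlags);

    internal static void Protect(IntPtr buffer, uint length)
    {
        // An optionFlags value of 0 means the buffer can only be
        // decrypted from within the same process
        int status = RtlEncryptMemory(buffer, length, 0);
        if (status != 0) // 0 == STATUS_SUCCESS
        {
            throw new InvalidOperationException(
                "RtlEncryptMemory failed with NTSTATUS " + status);
        }
    }
}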
I’ve been sitting on my offsite backup upgrade for a long while now and finally decided to pull the trigger this week. I’ve used MozyHome for many years, but the Mozy rate hike six months back agitated me. Combine that with the fact that, for more money, I’m not even getting the amount of backup I used to get, and it was clearly time to move on, even though I’m nowhere near the 18 billion gigabytes of storage Mozy claims I’m using.
I looked at some side-by-side reviews of home backup products and found that Gigaom had the most useful ones. Their original review, done in 2009, compared the two top contenders at that point in time: MozyHome and Carbonite. I’ve included the link more for completeness, since I wasn’t really interested in these two players. Gigaom’s review of upstart providers Backblaze and Crashplan was much more interesting and convinced me to go with Crashplan as my new backup provider (bye-bye, Mozy). I’ve always been interested in Crashplan’s unique peer-to-peer backup option. With their unlimited offsite backup now being extremely price competitive, and with an optional family plan, Crashplan has all the features I’m looking for.
One of the things I was really eager to do was help one of our clients manage the archival and history of projects within their TFS repository. Historically, VSS volume sizes have gotten out of control over time, resulting in commensurately poor performance. Obviously, a SQL Server backing database offers lots of advantages over the Jet database engine, but even SQL Server performance will degrade over time as the history volume in long-running projects builds up.
I was hoping that TFS 2008 had built-in functionality to manage project archiving and history. Not only does TFS 2008 lack such a function, but the co-mingling of data (all the projects on a server write to the same database) means that it’s nearly impossible to break out which data belongs to which project and apply different information lifecycle management rules, such as changing the type of storage used, applying different backup criteria to different projects, or taking a project completely offline so that it no longer impacts the performance of the TFS database but is still retained for historical purposes.
The good news is that, if you’re willing to make the move, TFS 2010 has functionality that explicitly addresses the requirement for TFS archiving and history management. TFS 2010 Team Project Collections allow you to organize similar projects into collections and, most importantly for our needs, allocate a different set of hardware resources to each team project collection. The benefit of this setup, and its applicability to the intent of this blog post, should be immediately obvious. The downside of this approach is that you can’t work (link work items, branch & merge, etc.) across project collections. An annotated version of a diagram from the MSDN Team Project Collections documentation can be found below.
I’ve included below my Amazon.com review of the book “Making It Big In Software: Get the Job, Work the Org, Become Great”. I diligently read this book from cover to cover and just couldn’t seem to like it. It became pretty monotonous after a while to go through what felt like a very academic handling of what could have been a very interesting topic. This is in stark contrast to the other book I’m reading now, “Delivering Happiness” by Zappos CEO Tony Hsieh, which is a pragmatic blow-by-blow tale of how someone actually made it big by leveraging technology. My review:
I really wanted to like Sam Lightstone’s book “Making It Big In Software” and read it cover-to-cover, at times forcing myself to read on. There are some good points in the book, which at its best represents a blend between the interviewing style of “Founders at Work” and the pragmatic advice of “Career Warfare”. Unfortunately, the book is at its best far too infrequently to make it a recommended read.
Aside from the lack of any original advice or insights beyond what is fairly common knowledge to folks who have spent a couple of years in the software industry, there are several other reasons I probably won’t be referring back to this book very frequently:
The questions were pretty much the same for every interview. That’s great for statistical comparability but really didn’t do much to draw out the stories from the interviewees. At one point, I found myself thumbing to the end of each interview to find out if the “Do you think graduate degrees are professionally valuable?” question was going to be asked again.
An earlier reviewer pointed out the value in the use of personas to illustrate examples. Done correctly, I agree that this is a very powerful technique. However, the software development antics of Moe, Larry, and Curly in this book seemed less like personas and more like an attempt to compensate for the lack of more illustrative examples.
Lots of borrowed material, much of it from the standard software journeyman’s body of knowledge and some of it from popular authors such as Stephen Covey, who seems to be a personal favorite of the author’s.
A chapter on compensation with salary ranges? C’mon, really? Aside from immediately dating the book, this is information that clearly could have been put out on a website and updated periodically so that the reference doesn’t get immediately stale.
This book may be of slightly more value (3 stars) to someone new to the field of software. I hope I’m not being unduly harsh, but I find it hard to see how folks who have been around the industry for 5 – 10 years can rate this book 4 or 5 stars.
These tool discussions are also recurring themes on all of the major discussion forums. Every so often, one of these questions hits StackOverflow and everyone chimes in with their favorite current tools. Invariably, for the .NET tool lists, there are some tools that always show up, enjoying near-universal advocacy in the .NET developer community. These include tools like Reflector and Fiddler on the free side and ANTS Profiler and ReSharper on the commercial side.
For this blog post, I’ve decided to go with 5 tools you’re not likely to find on many of these lists. While some of these tools are .NET-specific, others are just solid development tools likely to be great additions to any .NET team’s toolbox, with the added benefit that they work across multiple technologies.
Badboy. Likely the biggest sleeper on my list. Badboy is an extremely easy-to-learn web application testing tool. Check out the online documentation to understand its features, then use it to guide your learning. Chances are you’ll have most of the basic and intermediate scripting tasks mastered within the first 30 minutes of using the tool. Compare the cost of a Badboy license ($45 / individual or $30 / each for a 10-pack) with the cost of your existing web application testing tool. Chances are you’d be saving hundreds, if not thousands, of dollars per license. If you need to scale beyond Badboy’s simple threading / load testing capabilities, Badboy scripts can be exported in a format consumable by Apache JMeter for more heavy-duty controller/generator-style load testing. Also, the Wave Test Manager server, from the makers of Badboy, allows you to upload and share Badboy scripts across a project, schedule execution of the scripts, and access the test reports on a central server.
LightSpeed ORM. When the discussion of Object Relational Mappers (ORMs) comes up, NHibernate and the Entity Framework are almost always at the forefront of the conversation. LLBLGen gets added to the list as well if commercial ORMs are on the table. Rarely, if ever, is the LightSpeed ORM from the Mindscape team down under brought up. It should be. If an awesome Visual Studio modeling experience and a second-generation LINQ provider don’t convince you, maybe the Rails-esque data migration facilities will. Still not convinced? Check out the custom LINQPad provider and the LINQ-to-SQL-to-LightSpeed drag-and-drop conversion. If there are new features you’d like to see, or if you need bug fixes, Ivan and the team at Mindscape are all ears and provide near-legendary turnaround times.
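To give a feel for the programming model, here’s a rough, hypothetical sketch of LINQ querying through LightSpeed’s unit-of-work pattern. The entity, connection-string name, and query property below are invented for illustration, and the exact API surface may differ from what Mindscape ships:

using System;
using System.Linq;
using Mindscape.LightSpeed; // assumed namespace for the core LightSpeed types

// Context configured against a connection string named "Default";
// ModelUnitOfWork stands in for a designer-generated unit of work class
var context = new LightSpeedContext<ModelUnitOfWork>("Default");

using (ModelUnitOfWork unitOfWork = context.CreateUnitOfWork())
{
    // The LINQ provider translates this query to SQL; "Customers" is
    // assumed to be a typed query property on the generated unit of work
    var overdue = unitOfWork.Customers
                            .Where(c => c.Balance > 0 && c.DueDate < DateTime.Now)
                            .OrderBy(c => c.DueDate)
                            .ToList();
}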
Silverlight Spy. Let’s recap just in case you missed the news: Silverlight is hot!!! It’s a pretty significant change from either the MVC or WebForms approach most .NET web developers are used to, and it takes a while to wrap your mind around. Silverlight Spy does for Silverlight what Reflector did for the .NET Framework: it pulls back the covers so that you can inspect and understand. Silverlight Spy provides insight into the XAP package, isolated storage information, performance data, an accessibility view, and so much more. The message from Microsoft over the last 6 months has been: learn Silverlight. That task is made so much easier with Silverlight Spy at your side.
DTM Data Generator. Microsoft finally got around to including a data generator in some versions of Visual Studio. If you restrict yourself to SQL Server and are willing to put up with slow data generation, it might even be a good fit for you. RedGate’s SQL Data Generator, which I’ve written about before, is much more efficient at loading data, as long as you stick with SQL Server. If you’re looking for a data generation tool that meets your needs irrespective of the underlying database, DTM’s Data Generator is the tool for you. It supports SQL Server, Oracle, MySQL, DB2, Sybase, and any database you can access through OLE DB or a DSN. It handles inserts of most major datatypes, including BLOB generation, and offers a variety of rules comparable to RedGate’s product, including custom rules. The enterprise version can be executed from the command line in silent mode, making it perfect for generating data in preparation for the execution of an automated test suite.
Performance Analysis of Logs (PAL). This tool just doesn’t get enough love from the .NET development community. Oft maligned as the “poor man’s SCOM”, PAL can be a real timesaver and/or lifesaver. It’s so simple: capture the PAL-specified counters for the platform being monitored (most major Microsoft products, such as Windows Server, IIS, MOSS, SQL Server, BizTalk, Exchange, and AD, are supported), import the counters, and let PAL do its thing. Its “thing” is producing a detailed report showing how the counters looked across the duration of the capture and when they exceeded thresholds. PAL also provides explanations for each of the counters and details the implications of exceeding the thresholds. More useful information for a better price you will not find.