Adopting cloud technologies with both feet on the ground


If a Google search is any guide (and it usually is), 2016 is turning out to be ‘the year of the cloud’ (or perhaps the real year of the cloud). Here’s one prominent example.

While this represents an exciting opportunity for those of us who have focused our career objectives on SaaS, IaaS, PaaS and DevOps, there’s also the risk of misunderstanding what these technologies mean and how they should be adopted by your organization.

One of the things I’ve noticed is a tendency – among both vendors who’re excited about their products and colleagues who’re excited about new methods – to speak of powerful offerings such as Cortana Data Analytics and other thoroughly 21st-century creations without first concentrating on the hard work of dissecting current on-premises network, data and computational usage.

This can lead to the equivalent of inventing advanced space flight technologies without first developing airplanes – or, in other words, trying to run before you’ve even walked.

To offer a concrete example (because I’m all about specifics), I’ve witnessed businesses attempting to adopt the most advanced cloud-based technologies such as Amazon EMR, while skipping other, more mundane, ‘keeping the lights on’ platforms which could also benefit from being hybridized or moved entirely to the cloud.

This has led me to adopt a basic cloud adoption outline:

1.) Have you considered the impact on your network?

This is the most basic consideration, but it’s often overlooked. Users and processes that consume cloud services will need robust bandwidth, or perhaps even dedicated connections such as Microsoft’s ExpressRoute or Amazon’s Direct Connect, to enjoy a consistent experience.
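Before committing to a dedicated connection, it helps to estimate your aggregate demand. Here’s a back-of-the-envelope sketch – every figure in it is a hypothetical planning assumption, not a measurement:

```python
# Rough estimate of WAN bandwidth needed for cloud-bound traffic.
# All figures below are hypothetical planning assumptions.

def required_bandwidth_mbps(users, mbps_per_user, concurrency=0.6, headroom=1.3):
    """Estimate peak bandwidth demand for cloud services.

    users         -- number of cloud-service users
    mbps_per_user -- average per-user demand while active (Mbps)
    concurrency   -- assumed fraction of users active at peak
    headroom      -- safety multiplier for bursts
    """
    return users * mbps_per_user * concurrency * headroom

# Example: 200 users averaging 0.5 Mbps each
print(required_bandwidth_mbps(200, 0.5))  # 78.0 Mbps
```

If the result lands near (or above) your current Internet circuit’s capacity, that’s a strong hint a dedicated connection belongs in the plan.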

2.) Have you inventoried and analyzed your current data center portfolio to see what can be hybridized or moved completely to the cloud?

If, like most companies, you’re using Active Directory hosted on on-premises servers, they’re probably aging and in need of upgrading (at both the hardware and functional levels). Maybe it’s time to consider a cloud supplement such as Azure AD or AWS Directory Service instead of performing yet another data center project.

The same can no doubt be said of your on-premises database investment. The point, of course, is to start with the basics and then work your way to the exotic.

3.) Have you determined the licensing cost of upgrading your on-premises assets when compared to the usage costs of cloud services?

Years of experience and habit have trained us to upgrade hardware and software as needed, after receiving management buy-in (and usually after running up against significant performance issues with strained and aging platforms).

But in recent years, there’ve been changes to the licensing model of on-premises systems that may make cloud alternatives more financially attractive at scale.

For example, Microsoft’s per-core licensing may significantly increase the costs associated with a bare metal server upgrade for SQL Server. The better your server, the higher your license costs may turn out to be.
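To make that comparison concrete, here’s a simple break-even sketch. The prices are illustrative placeholders, not real Microsoft or AWS quotes – substitute your own:

```python
# Compare up-front per-core licensing against pay-as-you-go cloud cost.
# Both prices are illustrative assumptions -- substitute real quotes.

PER_CORE_LICENSE = 7000   # hypothetical license cost per core
CLOUD_MONTHLY = 1200      # hypothetical monthly cost of a comparable cloud DB

def on_prem_license_cost(cores):
    # Per-core licensing: the bigger the server, the higher the bill.
    return cores * PER_CORE_LICENSE

def months_until_cloud_costs_more(cores):
    # Whole months before cumulative cloud spend exceeds the license outlay.
    return on_prem_license_cost(cores) // CLOUD_MONTHLY

print(on_prem_license_cost(16))           # 112000
print(months_until_cloud_costs_more(16))  # 93
```

With these (made-up) numbers, a 16-core box doesn’t beat the cloud alternative on license cost alone for nearly eight years – and that’s before hardware, power and data center overhead enter the picture.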

Cloud technologies offer organizations an exciting opportunity to, as Amazon says, reinvent the way computing power is used. To truly make the most of this opportunity, however, a lot of homework is necessary.

Amazon Web Services: Building an Exchange Test Environment


Amazon provides a cloud computing platform named Amazon Web Services (AWS).

Although AWS is primarily targeted towards enterprise-level customers such as Expedia, who use it to host their extensive web server farms and cloud-based data centers, there’s also an affordable option: Elastic Compute Cloud (EC2). Amazon also provides rapidly deployable test-drive environments for several Microsoft server products, including Exchange. No doubt these are useful for a variety of testing scenarios, but unfortunately they don’t give you an opportunity to build and configure an entire environment from scratch. For that, you’ll need EC2.

Using EC2, it’s possible to build a modest-sized Exchange test environment at a reasonable cost (as of this writing, a four-server environment costs approximately $40 to $50 per month to keep online; note that pricing is flexible and depends on virtual machine specs and uptime, among other factors).

You’ll need an Amazon account to get started (yes, the same account you use for shopping).

Once you’ve logged into the EC2 dashboard, you can create a virtual machine instance:

Create Instance

You’ll be presented with a list of Amazon Machine Images including Linux and Windows images.

Choose an Amazon Machine Image

By scrolling down, you’ll see a list of Windows Server options. For building an Exchange server environment, I’ve typically used Windows Server 2008 Base 64-bit.

Windows Server 2008 Base

The best option for a low-cost testing platform is the t2.micro instance type, which is included in the free tier. Although the servers in this tier are indeed free, there are associated costs for bandwidth usage, CPU time and so on. See the AWS EC2 pricing guide for full information.

Choose an instance type-review and launch

In this example, we’re accepting the default machine configuration (i.e., no expansion of the standard RAM and hard drive space options) so we can choose the “Review and Launch” button at the bottom of the page to proceed.


Recently, Amazon introduced solid-state drives (SSDs) as an option for their virtual instances. In my experience, SSDs are an excellent choice for creating reasonably performant machines (especially in the free tier, where the machines are not particularly robust – 30 GB of hard drive space and low RAM configs).

Boot from General Purpose SSD

Amazon Web Services instances are secured with public key data. Part of the process of creating a machine is making a key pair (or selecting an existing one).


choose a key pair

Once you’ve downloaded a key pair, the instance will launch.

Initiating Launch
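The console walk-through above can also be scripted with boto3, the AWS SDK for Python. A minimal sketch – the AMI ID and key pair name are placeholders I’ve made up, and the actual launch call is commented out since it needs real AWS credentials:

```python
# Script the console steps above: Windows AMI, free-tier t2.micro,
# general-purpose SSD root volume. The AMI ID is a placeholder --
# look up the current Windows Server AMI in your region.

def launch_params(ami_id, key_name, instance_type="t2.micro", disk_gb=30):
    """Build the keyword arguments for ec2.run_instances, mirroring
    the choices made in the console walk-through."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "KeyName": key_name,
        "MinCount": 1,
        "MaxCount": 1,
        "BlockDeviceMappings": [{
            "DeviceName": "/dev/sda1",
            "Ebs": {"VolumeSize": disk_gb, "VolumeType": "gp2"},  # SSD-backed
        }],
    }

params = launch_params("ami-xxxxxxxx", "exchange-lab-key")
print(params["InstanceType"])  # t2.micro

# To actually launch (requires configured AWS credentials):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**params)
```

Scripting the launch this way makes it easy to tear the lab down at month’s end and rebuild it identically later.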

There are several other tasks required to build your Exchange test environment using AWS. The biggies include:

1.)    Downloading trial copies of Exchange 2010 or Exchange 2013

2.)    Creating a domain controller and configuring its DNS to use the DC’s external IP as the lookup host (about which, see below)

3.)    Configuring the AWS security group your machines are part of to allow the public IP addresses of your instances inbound access to all member servers. This is a critical step for creating an Active Directory and Exchange environment since, by default, EC2 instances use DHCP for IP addressing. Their public addresses, however, are fixed, so these can be used to work around this limitation, as per the image shown below, which diagrams the topology for a simple, two-node DAG cluster Exchange set-up:

AWS domain topology
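The security group change in step 3 can be sketched in boto3 as well. The IP addresses below are hypothetical examples (from the reserved documentation range), and the apply call is commented out since it needs credentials and a real group ID:

```python
# Allow each instance's fixed public IP inbound access to all member
# servers, as described in step 3. Addresses are hypothetical examples.

def ingress_rules(public_ips):
    """Build security-group ingress permissions granting all traffic
    from each instance's public IP (fine for an isolated lab)."""
    return [{
        "IpProtocol": "-1",  # all protocols and ports -- lab use only
        "IpRanges": [{"CidrIp": f"{ip}/32"}],
    } for ip in public_ips]

rules = ingress_rules(["203.0.113.10", "203.0.113.11"])
print(len(rules))  # 2

# To apply (requires credentials and your actual security group ID):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-xxxxxxxx", IpPermissions=rules)
```

Opening all ports between members is acceptable for a throwaway sandbox like this one, but in anything resembling production you’d restrict the rules to the specific AD and Exchange ports.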

There’s quite a bit more to say, but this post shows the basics – at least as I’ve experienced and worked them out. With AWS it’s possible to make – for a fairly low cost – a sandbox for safely practicing on a working Exchange environment.