If you’re like me, an IT professional of ‘a certain age’ (and, come to think of it, even if you’re younger but toiling in an enterprise still struggling with legacy practices), you know what it’s like to work in a siloed IT environment.
I’m sure you know what I mean by “siloed”: the database team works separately from the SharePoint team, who speak imperfectly with the various dev teams, and so on, and so on.
This approach to enterprise IT – which fosters an emphasis on individual technical prowess over solutions, and a tendency towards isolation from the concerns and pain points of end-users and business units – is losing whatever charms it once held as cloud technologies and methodologies become standard practice.
Here’s a concrete example…
For many companies, messaging, in the form of Exchange Online, is the entry point to SaaS as represented by Office 365. Typically, the goal is to reduce server footprint, licensing costs and operational complexity by moving the email function to the cloud.
And just as typically, the messaging person, long accustomed to fulfilling that role more or less in isolation from other IT roles (with interaction, as needed, with teams who need messaging services), expects to continue along that track.
But the movement of this workload to the cloud makes that nearly impossible.
Cloud services, such as Office 365, operate on a scale not achievable for most enterprises and take advantage of computing fabrics (in the case of Office 365, the Microsoft Graph) that turn discrete technologies – such as SharePoint, messaging and cloud storage – into aspects of a unified collaboration framework.
This represents a powerful change to the IT function which alters the demands placed on IT professionals:
Solutions: a focus on solutions over pure technical prowess
Flexibility: a willingness to cross technology boundaries, following the data flow throughout your cloud platform
Communication: assuming an ‘evangelist’ role in your organization, promoting workflow modernization via cloud services
You find solutions by listening, seeking to mate technology to an organization’s needs instead of trying to bend people and their work processes to the constraints of a technology. In the cloud era, failure to do this leads to the use of ‘shadow’ and ‘credit card’ IT as teams work around central IT obstacles by adopting cloud technologies independently of company strategy.
You achieve flexibility by leaving your silo (dev, operations, messaging, database, etc.) and developing a broad, cross-functional body of expertise built on an understanding of the platform as a whole – thinking of the service in utility terms.
You develop an effective communication strategy by understanding that a key part of your responsibility, during this moment of transition from exclusively on-premises methods to hybrid or all-in cloud adoption, is to explain the benefits and provide guidance.
These skills have always been important, but in the cloud era, they have achieved a critical importance not seen for quite some time. As an IT professional, your success will be measured more and more by your strength in these areas, even above your (surely solid) technical chops.
I wish more of my friends and colleagues in the Information Technology field would share their stories.
There’s a vast, hidden treasury of insight locked away in our heads – and not about technology alone but also, how organizations use and adapt to technology (or don’t).
This recently came to mind (and inspired this post) as I reviewed the last few years of my career over a few glasses of wine. During this brief time, my entire point of view about the purpose and future of IT has dramatically changed. I’ve travelled the path from cloud skeptic to cloud enthusiast. What transported me from one pole to the other?
That’s the story I’m going to tell.
A Sense of Dread
My career in Information Technology – which started well over a decade ago – was practically an accident. After leaving college, I worked in banking in a very entry-level position. It was a tedious job that involved the manual reconciliation of account data (i.e., did deposits match withdrawals? …and other minutiae). Hour after hour of eyeballing columns of information, searching for inconsistencies, inspired a sense of ennui.
Wasn’t there a better way? Wasn’t this a perfect job for software? Surely there was an algorithm that could accomplish this. I’d worked extensively with computational methods in college, solving statistical problems using the resources available in the computer lab so I knew there were powerful alternatives to this drudgery.
I presented my ideas to management who, with one notable exception, politely thanked me and promptly returned to their 1950s mental cocoon. Until, that is, the FDIC came along.
Mechanization Takes Command
Without going into deep detail I’ll say that when the bank was audited it received a failing grade for the lack of investment in Information Technology (among other sins). Suddenly, there was a mandate to modernize the organization’s minimal IT infrastructure. A VP with whom I was friendly pointed towards me and said: ‘that’s the guy who will make it happen’. As a professor of mine often said, ‘repetition is the key to learning’ – my mantra about the need for IT, combined with a government directive and the sponsorship of a mentor had changed my career, almost overnight.
Welcome to the Present – and Future
This ushered in an exciting time: network cables were laid, a data center was built, a client-server infrastructure was stood up, and methods were created to import data from offsite mainframes into on-premises servers for real-time analysis by financial personnel – all coordinated by me. It was a whirlwind of activity that completely transformed the way the bank operated. And yes, the account reconciliation process – tailor made for automation – left human hands and became the work of algorithms.
The Age of Consultancy
Eventually, there were ‘no more worlds to conquer’ at the bank and I found myself growing restless – a not uncommon condition for people in our field. A friend suggested I interview with a consultancy start-up he’d recently joined – a firm composed of young hotheads looking to dive into the world of client-server development and older infrastructure veterans weary of the politics and mission silos of corporate IT. I was impressed by this group of visionaries and made the leap.
This started the next phase of my career, defined by a sort of creative chaos as I was sent from one assignment to another with only the vaguest idea of what I was supposed to be doing. One moment, it was writing Transact-SQL code; the next, it was acting as a sysadmin for a massive farm of Solaris servers.
Despite the uncertainty, I learned three valuable lessons from this time:
To be open minded and technology agnostic
To cultivate a spirit of constant learning
To think of myself as a technologist first and not as the champion of a particular company’s stack
These lessons would serve me well as the next chapter began.
The Importance of Deep Knowledge
By now, I was comfortably operating as an IT generalist, working under the umbrella of the consulting firm, whose business was growing at a rapid pace. An encounter with a seasoned professional, however, would shake my confidence in my future prospects and reorient my thinking towards deeper topics.
While engaged on a lengthy project, one with a heavy emphasis on Tru64 Unix, I had the pleasure of working with a man whose knowledge of that platform was profound. He took me under his wing, stressing one important message: ‘it’s good to have a wide range but you must possess deep knowledge in at least one area to be a serious professional. Pick something you love and make it a part of you. If you do that, and it’s critical to business, you’ll always excel.’
I knew what I needed to do: I would become a messaging expert.
You’ve Got Mail
This turned out to be precisely the right decision, as Microsoft Exchange – once a ‘toy’ product – was coming into its own as a robust messaging platform. Integration with Active Directory, and the publication of an API that allowed programmatic extension of the platform, broadened the knowledge required to truly be considered a subject matter expert. With the introduction of versions 2007 and above, Exchange graduated to enterprise class, and the foundation for SaaS versions of the product was laid.
Messaging is the SaaS Gateway for Many Firms
Having established myself as a messaging SME focused on MS Exchange, it was only a matter of time before Office 365 – the mature successor to what was once known as the Business Productivity Online Suite (BPOS) – entered my life. My first encounter with BPOS left me cold: I was firmly rooted in the world of data centers you could touch, bare metal and virtual machines you owned, and the illusion of control.
Of course, along with that supposed control there came a host of challenges that often wrecked weekends and ruined sleep: server malfunctions, Active Directory issues, VMware host or VDI problems, network communication challenges, firewall configuration mysteries, and on and on.
Despite this nearly constant churn of drama – even in well-designed and reasonably well-behaved infrastructures – I was deaf to the potential of (then nascent) cloud technologies.
But all that was about to change.
I accepted a position with a firm that had gone all-in with AWS and Office 365: AWS on the PaaS and newly created DevOps side of the house, and Office 365 on the SaaS/back-office side (and, of course, the nearly ubiquitous Salesforce SaaS was heavily in use). Office 365 was adopted, it was hoped, as a way to eliminate the expense and infrastructural complexity of on-premises Exchange; the theory was that less knowledge would be required to manage these cloud technologies. This turned out to be wrong, but what was discovered along the way was the scalable power, flexibility and velocity made possible by leveraging the public cloud.
My discovery was that by letting go of an attachment to legacy practices – of a fixation on ‘owning’ the infrastructure – I could explore the use of computing power as a utility and change my career direction from being part of a cost center, often beset by crises, to crafting solutions and actually being the business.
Through Office 365, I reoriented my thinking away from isolated areas (i.e., the messaging, SharePoint, or IM silos as separate areas of expertise) and towards SaaS as a collaboration tool set that enabled the organization to become nimble. Through AWS (and a little later, Azure) I learned to rethink my relationship to server assets, moving from the ‘pets’ model to the ‘cattle’ model.
This has reinvigorated my career and opened an exciting new chapter.
So much so, that I’ve become an unabashed enthusiast and ‘evangelist’ for cloud technologies.
If a Google search is any guide (and it usually is) 2016 is turning out to be ‘the year of the cloud’ (or perhaps the real year of the cloud). Here’s one prominent example.
While this represents an exciting opportunity for those of us who have focused our career objectives on SaaS, IaaS, PaaS and DevOps, there’s also the risk of misunderstanding what these technologies mean and how they should be adopted by your organization.
One of the things I’ve noticed is a tendency – among both vendors who’re excited about their products and colleagues who’re excited about new methods – to speak of powerful offerings such as Cortana Data Analytics and other thoroughly 21st-century creations without first concentrating on the hard work of dissecting current on-premises network, data and computational usage.
This can lead to the equivalent of inventing advanced space flight technologies without first developing airplanes – of, in other words, trying to run before you’ve even walked.
To offer a concrete example (because I’m all about specifics), I’ve witnessed businesses attempting to adopt the most advanced cloud-based technologies such as Amazon EMR, while skipping other, more mundane, ‘keeping the lights on’ platforms which could also benefit from being hybridized or moved entirely to the cloud.
This has led me to adopt a basic cloud adoption outline:
1.) Have you considered the impact on your network?
This is at the most basic level but is often overlooked. Users and processes that consume cloud services will need robust bandwidth or perhaps even dedicated connections such as Microsoft’s ExpressRoute or Amazon’s DirectConnect to enjoy a consistent experience.
2.) Have you inventoried and analyzed your current data center portfolio to see what can be hybridized or moved completely to the cloud?
If, like most companies, you’re using Active Directory hosted by on-premises servers, they’re probably aging and in need of upgrading (at both hardware and functional level). Maybe it’s time to consider a cloud supplement such as Azure AD or Amazon’s Directory Service instead of performing yet another data center project.
The same can no doubt be said of your on-premises database investment. The point, of course, is to start with the basics and then work your way to the exotic.
3.) Have you determined the licensing cost of upgrading your on-premises assets when compared to the usage costs of cloud services?
Years of experience and habit have trained us to upgrade hardware and software as needed, after receiving management buy-in (and usually after running against significant performance issues with strained and aging platforms).
But in recent years, there’ve been changes to the licensing model of on-premises systems that may make cloud alternatives more financially attractive at scale.
For example, Microsoft’s per-core licensing may significantly increase the costs associated with a bare metal server upgrade for SQL Server. The better your server, the higher your license costs may turn out to be.
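To make this concrete, here’s a rough sketch of how per-core licensing scales with server size. The per-core price is a placeholder, not an official Microsoft figure – substitute your own volume-licensing numbers – but the minimum-of-four-cores-per-socket rule matches SQL Server’s per-core model.

```python
# Rough illustration of how per-core licensing scales with hardware.
# PRICE_PER_CORE is a hypothetical figure, not an official price.

PRICE_PER_CORE = 3717  # placeholder per-core license price in USD

def sql_license_cost(sockets: int, cores_per_socket: int) -> int:
    """SQL Server per-core licensing: every physical core is licensed,
    with a minimum of 4 core licenses per socket."""
    licensed_cores = sockets * max(cores_per_socket, 4)
    return licensed_cores * PRICE_PER_CORE

# Upgrading from 2 x 4-core sockets to 2 x 16-core sockets
# quadruples the licensing bill before any hardware is purchased:
small = sql_license_cost(sockets=2, cores_per_socket=4)    # 8 licensed cores
large = sql_license_cost(sockets=2, cores_per_socket=16)   # 32 licensed cores
print(small, large, large // small)
```

Running the numbers this way, before the upgrade is approved, is exactly the kind of homework that makes a cloud alternative easy to compare against.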
Cloud technologies offer organizations an exciting opportunity to, as Amazon says, reinvent the way computing power is used. To truly make the most of this opportunity however, a lot of homework is necessary.
Although AWS is primarily targeted towards enterprise-level customers such as Expedia, who use it to host extensive web server farms and cloud-based data centers, its core compute service – Elastic Compute Cloud (EC2) – is affordable for individuals too. Amazon also provides rapidly deployable test-drive environments for several Microsoft server products, including Exchange. These are useful for a variety of testing scenarios but unfortunately don’t give you an opportunity to build and configure an entire environment from scratch. For that, you’ll need EC2.
Using EC2, it’s possible to build a modest-sized Exchange test environment at a reasonable cost (as of this writing, a four-server environment costs approximately $40 to $50 per month to keep online; pricing varies with virtual machine specs and uptime, among other factors).
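A quick back-of-the-envelope sketch shows how that monthly figure comes together, and why shutting the lab down when you’re not using it pays off. The hourly rate below is a placeholder – check the AWS EC2 pricing page for current on-demand rates in your region.

```python
# Back-of-the-envelope estimate of the monthly cost of a small EC2 lab.
# HOURLY_RATE is a placeholder, not a quoted AWS price.

HOURLY_RATE = 0.016      # hypothetical per-instance on-demand rate (USD/hour)
HOURS_PER_MONTH = 730    # average hours in a month

def monthly_cost(servers: int, hours_online: float,
                 rate: float = HOURLY_RATE) -> float:
    """Total monthly compute cost for a lab of identical instances."""
    return servers * hours_online * rate

always_on = monthly_cost(4, HOURS_PER_MONTH)  # four servers, 24/7
evenings = monthly_cost(4, 4 * 30)            # four servers, 4 hours/day
print(round(always_on, 2), round(evenings, 2))
```

The same arithmetic, with real rates plus bandwidth and storage charges, is how you sanity-check a lab budget before committing to it.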
You’ll need an Amazon account to get started (yes, the same account you use for shopping).
Once you’ve logged into the EC2 dashboard, you can create a virtual machine instance:
You’ll be presented with a list of Amazon Machine Images including Linux and Windows images.
By scrolling down, you’ll see a list of Windows Server options. For building an Exchange server environment, I’ve typically used the Windows Server 2008 Base 64-bit image.
The best option for making a low-cost testing platform is the t2.micro machine type which is included in the free tier. Although the servers in this tier are indeed free, there are associated costs for bandwidth usage, CPU time and so on. See the AWS EC2 pricing guide for full information.
In this example, we’re accepting the default machine configuration (i.e., no expansion of the standard RAM and hard drive space options) so we can choose the “Review and Launch” button at the bottom of the page to proceed.
Recently, Amazon introduced solid state drives (SSD) as an option for their virtual instances. In my experience, the SSD drives are an excellent choice for creating reasonably performing machines (especially in the free tier where the machines are not particularly robust – 30 GB of hard drive space and low RAM configs).
Amazon Web Services instances are secured with public key cryptography. Part of the process of creating a machine is creating a key pair (or selecting an existing one).
Once you’ve downloaded the key pair, the instance will launch.
There are several other tasks required to build your Exchange test environment using AWS. The biggies include:
1.) Creating a domain controller and configuring its DNS to use the DC’s external IP as the lookup host (about which, see below)
2.) Configuring the AWS security group your machines belong to so that the public IP addresses of your instances have inbound access to all member servers. This is a critical step for building an Active Directory and Exchange environment since, by default, EC2 instances use DHCP for IP addressing. Their public addresses, however, are fixed, and these can be used to work around this limitation, as per the image shown below, which diagrams the topology for a simple, two-node DAG cluster Exchange set-up:
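The security-group step above can be sketched with the AWS CLI. The security group ID and the IP addresses here are placeholders – substitute your own group ID and each instance’s actual public IP – and this is one way to do it, not the only one (the console works just as well).

```shell
# Sketch: allow all inbound traffic from each member server's fixed
# public IP, working around the DHCP-assigned private addresses.
# sg-0123456789 and the 203.0.113.x addresses are placeholders.

for ip in 203.0.113.10 203.0.113.11 203.0.113.12 203.0.113.13; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789 \
    --protocol all \
    --cidr "${ip}/32"
done
```

Opening all protocols between members is acceptable for a disposable lab; a production design would restrict this to the specific AD and Exchange ports.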
There’s quite a bit more to say, but this post shows the basics – at least as I’ve experienced and worked them out. With AWS, it’s possible to make – for a fairly low cost – a sandbox for safely practicing on a working Exchange environment.