This is a great question, and one that many of us in IT tend to minimize when adopting new and more efficient technologies. Sometimes the simple cost savings calculations distract us:
“If I can put twenty VMs on one server and save $60K in overall cost of ownership compared to my previous environment, then I’m going to take that $60K.”
The above statement from an otherwise knowledgeable IT professional misses some key points. Here’s another example that might ring a bell from the past:
In 1995, it cost roughly $30K to buy a tower server from Compaq, HP, or IBM. We would put these servers in computer rooms and data centers and treat them like gold. After all, if you can only afford one or two of them, you’d better treat them very carefully. Then by the year 2000, servers and server prices had changed dramatically, and you could buy four 1U servers for $6K each instead of one for $30K.
“Wow, now I have eight servers instead of two — just think how much more I can do! I can cut my costs, or better yet, put even more applications in my data center for the same hardware cost.”
Do you see a common response here? Sure, technologies improve and IT vendors bring you more capabilities for less than you paid a few years back, but beyond that? What I see is an initial reaction to overlook “best practices” in favor of building more stuff, more applications… just plain more. Not that “more” is necessarily wrong; however, best practices for successful projects, and for avoiding additional costs and risks down the road, suggest that a little temperance is in order.
New thinking: use new technology improvements to address the areas that get neglected when you’re stuck with a tight budget.
Instead of simply doing more of the same, what if you paused and considered addressing all the things that currently get left out because of a limited budget or poor prioritization?
An ownership strategy
- It may not be popular, but much of the cost and effort that goes into a successful strategic implementation is the not-fun part. See my “Who Owns Your Cloud” post for thoughts on “ownership” as it applies to IT.
Test & Development
- Don’t be afraid to point out that we often don’t do all the testing and change management we could, because it’s too expensive to have enough equipment, people, or capacity.
- The current technology improvement curve of virtualization and cloud really lends itself to finally implementing usable, testable DR (disaster recovery) procedures.
These may not be fun or sexy, but it’s funny how, when something goes really wrong, it’s the “not sexy” stuff like procedures, planning, testing, and backups that really comes in handy, which brings me to the heart of this blog.
The adage of “garbage in, garbage out” still holds true
It’s interesting to note that moving an application into the cloud is effectively an outsourcing activity. Whether you’re outsourcing an application or a helpdesk, the following still holds true:
- If it’s broken when you’re managing it, outsourcing it won’t fix it. In fact, it will most often exacerbate the problem and increase your costs and risks. Any vendor that helps you push through an outsourcing effort without comprehensive due diligence on the current solution, combined with process and ownership knowledge, is leading you down the road to ruin. There isn’t an easy way. You just have to do the hard work of understanding what you own before you can hand it to someone else.
- You need to really understand why you’re outsourcing. It’s strange how often that question stumps people. I’ll often hear answers like “it’s cheaper.” Really? And how do you know it’s cheaper? If you don’t fully understand what it costs to own it internally, and haven’t captured the risk of failure, you can’t know whether it will be cheaper. Just as I’ve said about moving to the cloud: before you outsource, you need a larger strategy, and you need to determine that an outsourced solution will help you achieve the goals of that strategy.
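The comparison above can be made concrete with simple arithmetic. Here is a minimal sketch, using entirely made-up numbers, of comparing internal ownership cost against an outsourced quote once you risk-adjust for failure — the point being that without the risk term, “it’s cheaper” is a guess:

```python
# All figures below are hypothetical, for illustration only -- substitute your own.
internal_annual_cost = 120_000      # hardware, power, licenses, staff time
outsourced_annual_cost = 95_000     # vendor's quoted annual price

# Risk-adjusted cost of failure: annual probability of a serious outage
# times the estimated business impact of that outage.
internal_outage_prob, internal_outage_impact = 0.10, 200_000
vendor_outage_prob, vendor_outage_impact = 0.05, 300_000  # less control can mean larger impact

internal_total = internal_annual_cost + internal_outage_prob * internal_outage_impact
vendor_total = outsourced_annual_cost + vendor_outage_prob * vendor_outage_impact

print(f"Internal, risk-adjusted:   ${internal_total:,.0f}/yr")
print(f"Outsourced, risk-adjusted: ${vendor_total:,.0f}/yr")
```

With these assumed numbers the vendor still wins, but notice how sensitive the answer is to the probability and impact estimates — which is exactly why you have to capture them before deciding.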
Factoring “built for failure” into your application architecture requirements
This is where the rubber meets the road. Are you going to continue your current DR strategy of “don’t ask, don’t tell”*, or are you going to capture business requirements and communicate risk vs. opportunity to your business? Moving to the cloud may cost more than you were paying before, but if the incremental cost also buys you some or all of the following benefits, then it should be considered:
- Geographic diversity, which provides resiliency against regional disasters
- Distribution of applications or data to improve the user (customer) experience
- Rapid deployment and scale options
- Relief from the overhead of managing more data center space and hardware, etc.
The recent AWS failures starting on April 21st, 2011 highlight a few of these issues. I know the folks at Amazon will do everything they can to avoid similar problems in the future, but regardless of the true culpability for this event, it doesn’t change the fact that as the application owner it’s ultimately your responsibility to mitigate risks as well as costs. Like most of you, I believe AWS is a terrific service, but it is a single solution. No matter how effective a provider is at segregating environments, any monolithic solution carries the risk that one problem could affect the whole environment. You must consider more than your distribution within an environment like AWS; you must also consider distribution across competing internal or external services.
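The value of distributing across environments comes down to probability. A minimal sketch, with an assumed (and illustrative) outage rate, showing why two environments with genuinely independent failure modes are far more resilient than one:

```python
# Assumed, illustrative number: suppose any single environment (one region,
# or one provider) has a 2% chance of a serious outage in a given year.
p_single = 0.02

# Running in only one environment, you're fully down whenever it is.
p_down_one = p_single

# Running in two environments with *independent* failure modes (a second
# region, or a competing internal/external service), you're only fully
# down when both fail at once.
p_down_two = p_single ** 2

print(f"One environment:             {p_down_one:.2%} annual chance of full outage")
print(f"Two independent environments: {p_down_two:.4%} annual chance of full outage")
```

The catch is the word “independent”: two deployments inside one monolithic environment often share control planes, tooling, or staff, so their failures correlate and the multiplication above overstates the benefit — which is the argument for distributing across competing services, not just within one.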
Adopting a new strategy for delivering IT is an excellent opportunity to rethink everything about how that service is owned. It might just be time to overrule “don’t ask, don’t tell” and force your cohorts and business stakeholders to accept the realities of delivering a high-performance service that fits your business requirements.
In other words, if you’ve just bought a new home and you move all the junk you’ve collected in your old garage to the new garage, it’s still going to be junk. It might have more scale (a 2-car garage to a new 3-car garage), but it’s just better-scaling junk. If you don’t know the usage characteristics of your application, and you haven’t architected it to leverage its newfound location in the cloud, you’re vulnerable to being caught with unexpected costs and risks — with the same pile of junk.
*A little background on my “Don’t ask don’t tell” reference.
My experience tells me that very few organizations always do everything they know they should to protect the ongoing operations of their IT solutions. It might be something as small as a contract that hasn’t been reviewed, or as serious as a DR plan that was never fully tested. The bottom line is that most of us either avoid the “boring” stuff or point to upper management’s heads-buried-in-the-sand attitude to justify ignoring it. The move to cloud is, in many cases, a once-in-a-lifetime opportunity for those of us in IT to clean out the garage and actually move into the new garage the way we know we should.