Last week at the HP Discover conference in Las Vegas, I visited the HP Anywhere booth to learn about the HP Anywhere mobile development platform. While at the booth, I was particularly intrigued by the Poken devices that a few folks had. But more on that in a minute.
HP Anywhere is a cross between an enterprise app store and a portal for accessing enterprise applications. It allows enterprise mobile developers to create apps that internal users consume on a variety of devices. The concept of an enterprise app store is interesting but not new; the value lies in combining the app store with a portal to enterprise applications. For an established enterprise with a number of legacy applications, making those applications available to a growing mobile workforce is not a trivial matter. At first blush, it appears that HP Anywhere could provide the interface to mobile-enable legacy enterprise apps while maintaining the mobile experience.
Across the show floor, I noticed a few folks with a curious device hanging from their badges.
Beyond the cute, playful-hand shape is a platform that could be very interesting. At HP Discover, however, the Poken device was mainly used to store and exchange business card information via an NFC connection. That aspect is cute, but definitely not new or particularly interesting. The functionality arrived more than 10 years ago with the Palm “beam” capability, where two users could beam business card info back and forth via infrared. Years later, Bump provided similar capabilities via a mobile app.
If we look beyond just exchanging contact info, the Poken device demonstrates an interesting platform for collecting and sharing data. The company targets the exhibitor market, but just imagine if enterprises could share information in a similar way with clients, prospects or providers. Beyond contact information, the data could include technical documents, data sheets or website URLs…maybe more.
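To make the idea concrete, here is a minimal sketch of the kind of payload such a device could carry. The vCard format itself is standard; the names, fields and URL handling are hypothetical illustrations, not Poken's actual protocol.

```python
# A minimal sketch of an exchangeable badge payload: a standard vCard that
# carries document and data-sheet links alongside the contact info.
# All names and URLs below are made-up examples.

def build_vcard(name: str, org: str, email: str, urls: list) -> bytes:
    """Build a simple vCard 3.0 payload with optional extra URL fields."""
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{name}",
        f"ORG:{org}",
        f"EMAIL:{email}",
    ]
    # Beyond contact info: attach data sheets, docs or site links as URLs.
    lines += [f"URL:{url}" for url in urls]
    lines.append("END:VCARD")
    return "\r\n".join(lines).encode("utf-8")

payload = build_vcard(
    "Jane Doe", "Example Corp", "jane@example.com",
    ["https://example.com/datasheets/widget.pdf"],
)
print(payload.decode())
```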
IT is dead. Long live IT! 2013 is turning into a watershed year for CIOs, as the traditional CIO role no longer describes the CIO of today. Business expectations, IT leadership styles and technology solutions are all driving an evolutionary change for CIOs and the IT organizations they lead. The changes did not happen overnight; business shifts, shadow IT and new technology were all contributors.
Now that the industry is reaching a tipping point where the new CIO model becomes the new standard, look for a number of changes moving forward:
The CIO will lead the charge in transforming the relationship between IT and the rest of the business. In a related post, I talk about the importance of the Three-Legged Race between the business, the CIO and IT. As the relationship transforms, business leaders start to view IT as a source of business value rather than just a support organization. The new CIO works with peers toward revenue growth, not just expense containment.
Similar to changes in the CIO role, the IT function changes as well. The IT focus shifts from technology to business innovation. IT strategy directly aligns with and is driven by the overall business strategy. If an IT activity is not directly supporting the business strategy, why is IT doing it? Day-to-day operations become table stakes while innovation and revenue growth take priority. In essence, the IT organization becomes a business organization rather than a technology organization. IT’s customer is no longer the internal user; it is the business’s customer.
The CIO and his or her lieutenants all speak a common language: business. The days of talking technology are behind us. To support this change, the CIO creates a strong culture within IT that breeds business focus and career growth. Outside of IT, the CIO leads the charge for true business engagement with fellow executives and their organizations.
The purpose of technology changes: it becomes an enabler, nothing more. The new model moves away from religious technology debates. CIOs shift their technology focus to business value for the entire company, not just the IT organization. The new-model CIO embraces new and innovative technology as a differentiator rather than waiting for peers to adopt it first. Common stumbling blocks like security, while still very important, are no longer used as a wet blanket to smother opportunities like cloud computing.
The new IT model presents a massive opportunity for the CIO and the business, and it looks very different from today’s CIO and IT organization. Moving from technology to business, and turning data into information, creates the new business currency. The new CIO becomes a core differentiator between competing businesses. In essence, under the new model, the difference between businesses is influenced by the CIO and IT organization. The business looks to the CIO and IT organization for the information behind core business decisions, and the conversations take a decidedly business-focused turn.
These changes represent a significant shift from the prior model, and even from the current model used by many organizations. None of these changes will be easy or quick; 2013 simply presents a tipping point for the new model. Reaching critical mass between the business, IT and technology creates the catalyst for change. The new model provides a strong opportunity for the CIO to shine. Now is the time for CIOs in the old model to turn the corner and adopt the new one.
Last week, I attended HP’s Converged Cloud Tech Day in Puerto Rico. Colleagues attended from across North America and Latin America. The purpose of the event was to 1) take a deep dive into HP’s cloud offerings and 2) visit HP’s Aguadilla location, which houses manufacturing and an HP Labs presence. What makes the story interesting is that HP is at once a hardware manufacturer, a software provider and a provider of cloud services. Overall, I was very impressed by what HP is doing…but read on for the reasons why…and the surprises.
HP Puerto Rico
HP, like many other technology companies, has a significant presence in Puerto Rico. Martin Castillo, HP’s Caribbean Region Country Manager, provided an overview for the group that left many in awe. HP exports a whopping $11.5b from Puerto Rico, roughly 10% of HP’s global revenue. In the Caribbean, HP holds more than 70% of the server market. Surprisingly, much of the influence to use HP cloud services in Puerto Rico comes from APAC and EMEA, not North America. To that end, 90% of HP’s Caribbean customers are already starting the first stage of moving to private clouds. Like others, HP is seeing customers move from traditional data centers to private clouds, to managed clouds, to public clouds.
Moving to the Cloud
Not surprisingly, HP is going through a transition, presenting the company from a solutions perspective rather than a product perspective. Shane Pearson, HP’s VP of Portfolio & Product Management, explained: “At the end of the day, it’s all about applications and workloads. Everyone sees the importance of cloud, but everyone is trying to figure out how to leverage it.” The projected 2015 markets are: traditional $1.4b, private cloud $47b, managed cloud $55b and public cloud $30b, for a cloud total of $132b. In addition, HP confirmed hybrid cloud as the approach of choice.
While customers are still focused on cost savings as the primary motivation to move to cloud, the tide is shifting toward business process improvement. Put another way, cloud is allowing users to do things they could not do before. I was pleased to hear HP acknowledge that it’s hard to take advantage of cloud without leveraging automation. Automation and orchestration are essential to cloud deployments.
HP CloudSystem Matrix
HP’s Nigel Cook was up next to talk about HP’s CloudSystem Matrix. Essentially, HP is (and has been) providing cloud services across the gamut of potential needs. Internally, HP uses OpenStack as the foundation for its cloud service offering. CloudSystem Matrix, in turn, provides a cohesive solution to manage across both internal and external cloud services. To the earlier point about automation, HP is focusing on automation and self-service as part of its cloud offering. Having a solution that helps customers manage the complexity that hybrid clouds present could prove interesting. Admittedly, I have not kicked the tires of CloudSystem Matrix yet, but on the surface, it is very impressive.
During the visit to Aguadilla, we joined a Halo session with HP’s Christian Verstraete to discuss architecture. Christian and team have built an impressive cloud functional reference architecture. As impressive as it is, one challenge is how the everyday IT organization can best leverage such a comprehensive model. It’s quite a bit to chew on. Very large enterprises can consume the level of detail contained within the model; others will need a way to consume it in chunks. Christian goes into much greater depth in a series of entries on HP’s Cloud Source Blog.
HP Labs: Data Center in a Box
One treat on the trip was the visit to HP Labs. If you ever get the opportunity to visit HP Labs, it’s well worth the time to see what innovative solutions the folks there are cooking up. HP demonstrated the results from their Thermal Zone Mapping (TZM) tool (US Patent 8,249,841) along with CFD modeling tools and monitoring used to determine airflow and cooling efficiency. While I’ve seen many different modeling tools, HP’s TZM was pretty impressive.
In addition to the TZM, HP shared a new prototype that I call Data Center in a Box. The solution is an encapsulated system of one to eight fully enclosed racks. The only requirements are power and chilled water. The PUE numbers were impressive, but didn’t take every metric into account (i.e., the cost of chilled water). Regardless, I thought the solution was pretty interesting. The HP folks kept mentioning that they planned to target the solution at Small-Medium Business (SMB) clients. While that may have been interesting to the SMB market a few years ago, today that market is moving more to services (i.e., cloud services). That doesn’t mean the solution is DOA. I do think it could be marketed as a modular approach to data center build-outs with a smaller increment than container solutions. Today, the solution is still just a prototype and not commercially available. It will be interesting to see where HP ultimately takes this.
I was quite impressed by HP’s perspective on how customers can…and should leverage cloud. I felt they have a healthy perspective on the market, customer engagement and opportunity. However, I was left with one question: Why are HP’s cloud solutions not more visible? Arguably, I am smack in the middle of the ‘cloud stream’ of information. Sure, I am aware that HP has a cloud offering. However, when folks talk about different cloud solutions, HP is noticeably absent. From what I learned last week, this needs to change.
HP’s CloudSystem Matrix is definitely worth a look regardless of the state of your cloud strategy. And for data center providers and service providers, keep an eye out for their Data Center in a Box…or whatever they ultimately call it.
For over 30 years, we have successfully run IT organizations. And things (generally) have worked well for IT organizations and the businesses they serve. Best practices were identified, shared and enhanced. Technology evolved and became increasingly more sophisticated. We built the backbone that business runs on today.
However, the traditional way of running IT is dead and a change is needed before we face extinction. That may seem a bit dramatic, but IT needs to make a significant change to remain relevant. There are three components that need to change: the Chief Information Officer (or IT executive), the individuals within the IT organization and the business that they serve.
Minimizing Information Technology
In 2004, Nicholas Carr created quite a dustup with his book Does IT Matter?, suggesting a change in the value of IT organizations. In essence, he was correct…based on the direction IT is heading today. As IT focuses more on minimizing expenses and less on value creation, the intrinsic value the IT organization brings to the business lessens. IT becomes a fundamental support organization and little more. That balance needs to change.
Value Creation is King
At this point, operations and core support are table stakes. IT needs to identify ways to create value for the organization. By value creation, I mean creating revenue in ways not previously possible. In his book The Digital Edge: Exploiting Information and Technology for Business Advantage, Mark McDonald delves further into this point. Today, executive recruiters are seeking CIOs focused on value creation rather than just table stakes.
The CIO needs to lead the change. No other organization within a business has the potential to lead this kind of change. More than just about any other group, the IT organization has both breadth and depth across the business. IT Transformation takes place when the IT organization is aligned toward value creation while still providing operational support. IT Transformation is challenging, but not a pipe dream. Progressive CIOs from Boston Scientific, General Motors, Chevron and UPS are already heading in that direction.
Changing the way the CIO and IT organization function covers only two of the three components. The business also needs to change the way it perceives value from IT. I have heard many references to IT organizations along these lines:
- “(IT is) where big projects go to die”
- “The IT police”
- “The ‘no’ organization”
- “Those nice people you call when you can’t connect to the network”
IT can be much more than this. Why isn’t that the case? Explaining would require a history lesson on IT’s evolution over the past 30 years. So, how do we make the change?
The Three-Legged Race
In order for the paradigm to change and IT Transformation to truly take hold, all three components need to work in unison. The CIO, the IT organization and the business all need to change for the evolution to take place. That seems like a pretty tall order. Who takes the first step to effect the change? The CIO. The others, while valuable, do not have the connections and relationships of the CIO. Challenge the status quo and evolve the paradigm. The rewards are too great to miss.
In 2013, the cloud industry undergoes a maturing of sorts. Cloud, as a concept, moves from the lab to the default for many. As part of that maturing, a number of changes will start to take place in 2013, both in the industry and within the IT organization itself.
- Rise of the Cloud Verticals: Today, the cloud marketplace offers a smorgasbord of general-purpose solutions. In a fledgling industry, providers needed to focus on solutions that served a wide range of client requirements. Now, with critical mass in some specific verticals, expect to see industry-specific cloud-based solutions. These solutions may include a suite of services or an ecosystem geared to specific industries.
- Widespread Planning of IaaS Migrations: Now that cloud has moved beyond the lab, organizations will include IaaS solutions in their roadmap planning. SaaS will continue to play a role; however, the rise of hybrid cloud solutions will drive IaaS into IT roadmaps in earnest.
- CIOs Look to Cloud to Catapult IT Transformation: The role of the CIO and IT organization is evolving as quickly as the underlying technology methodologies. The evolutionary shift in IT’s role in the business will (in turn) cause a re-evaluation of the solutions used. Expect to see IT organizations leverage cloud as one of the most significant opportunities to fuel this early transformation.
- Mobile Increases Intensity of Cloud Adoption: As the prominence of mobile use increases, look for cloud adoption rates to increase correspondingly. Traditional IT methodologies provide a cumbersome solution for many mobile requirements. The move to cloud provides an elegant solution to a rather complex problem.
- Cloud Innovation Shifts from New Solutions to Integration & Consolidation: Cloud’s shine starts to fade as focus moves to reality. Organizations are less interested in solving a singular point problem. Look for a move to solutions that solve multiple issues and integrate with other solutions. Expect consolidation among point-solution providers as they pursue opportunities to offer more robust solutions.
Between natural disasters like Hurricanes Sandy and Irene and man-made disasters like the recent data center outages, disasters happen. The question isn’t whether they will happen. The question is: What can be done to avoid the next one? Cloud computing provides a significant advantage in avoiding disaster. However, simply leveraging cloud-based services is not enough. First, a tiered approach to leveraging cloud-based services is needed. Second, a new architectural paradigm is needed. Third, organizations need to consider the holistic range of issues they will contend with.
Technology Clouds Help Natural Clouds
If used correctly, cloud computing can significantly limit or completely avoid outages. Cloud offers a physical abstraction layer and allows applications to be located outside of disaster zones, where services, staff and recovery efforts do not conflict.
- Leverage commercial data centers and Infrastructure as a Service (IaaS). Commercial data centers are designed to be more robust and resilient. Prior to a disaster, IaaS provides the ability to move applications to alternative facilities out of harm’s way.
- Leverage core application and platform services. This may come in the form of PaaS or SaaS. These service providers often architect solutions that are able to withstand single data center outages. That is not true in every case, but by leveraging this in addition to other changes, the risks are mitigated.
In all cases, it is important to ‘trust but verify’ when evaluating providers. Neither tier provides a silver bullet. The key is: Take a multi-faceted approach that architects services with the assumption of failure.
Changes in Application Resiliency
Historically, application resiliency relied heavily on redundant infrastructure. Judging from the responses to Amazon’s recent outages, users still make this assumption. The paradigm needs to change: applications need to take more responsibility for resiliency. By doing so, applications ensure service availability in times of infrastructure failure.
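As a concrete illustration of the new paradigm, here is a minimal sketch of an application taking responsibility for its own resiliency by retrying and failing over across regions, rather than trusting the infrastructure underneath. The endpoints and retry policy are hypothetical, not drawn from any particular provider.

```python
# A minimal sketch of application-level resiliency: assume any single
# endpoint can fail, and fail over across regions in the application itself.
# All endpoints below are hypothetical examples.

import time
import urllib.request

REGION_ENDPOINTS = [
    "https://us-east.api.example.com/orders",  # primary region
    "https://us-west.api.example.com/orders",  # first failover
    "https://eu-west.api.example.com/orders",  # last resort
]

def fetch_with_failover(retries_per_region: int = 2) -> bytes:
    """Try each region in turn, backing off between attempts."""
    last_error = None
    for endpoint in REGION_ENDPOINTS:
        for attempt in range(retries_per_region):
            try:
                with urllib.request.urlopen(endpoint, timeout=5) as resp:
                    return resp.read()
            except OSError as err:  # timeouts, refused connections, DNS failures
                last_error = err
                time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError(f"all regions failed; last error: {last_error}")
```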
In a recent blog post, I discussed how cloud computing relates to greenfield and legacy applications. Legacy applications are a challenge to move into cloud-based services. They can (and eventually should) be moved into cloud, but it will require a bit of work to take advantage of what cloud offers.
Greenfield applications, on the other hand, present a unique opportunity to take full advantage of cloud-based services…if built correctly. With Hurricane Sandy, we saw greenfield applications still using the old paradigm of relying heavily on redundant infrastructure, and the consequence was significant application outages due to infrastructure failures. By contrast, greenfield applications built on the new paradigm (e.g., Netflix) experienced no downtime due to Sandy. Netflix not only avoided disaster, but saw a 20% increase in streaming viewers.
Moving Beyond Technology
Leveraging cloud-based services requires more than a technology change. Organizational impact, process changes and governance are just a few of the things to consider. Organizations need to consider the changes to access, skill sets and roles. Is staff in other regions able to assist if local staff is impacted by the disaster? Fundamental processes, from change management to application design, will change too. At what point are services preemptively moved to avoid disaster? And how do governance models change if the core players are unreachable due to the disaster? Without considering these changes, the risks increase exponentially.
So, where do you get started? First, determine where you are today; all good maps start with a “You Are Here” label. Consider how to best leverage cloud services and build a plan. Take into account your disaster recovery and business continuity planning. Then put the plan in motion. Test your disaster scenarios to improve your ability to withstand outages. Hopefully, by the time the next disaster hits (and it will), you will be in a better place to weather the storm.
The IT industry is in a state of significant flux. Paradigms are changing, and so are the underlying technologies. Along with these changes comes a change in the way we think about solutions. Over time, IT organizations have amassed a phenomenal number of solutions, vendors, complex configurations and experience. Continuing to support that ever-expanding model is starting to show cracks. Trying to sustain this approach is just not possible…nor should it be. It is time for a change. Consolidation, integration, efficiency and value creation are the current focal points. Those focal points drive a significant shift in how we function as IT organizations and providers.
Changes in Buying Habits
In order to truly understand the value of an ecosystem, one first needs to understand the change in buying habits. IT organizations are making a significant shift from buying point solutions to buying ecosystems. In some ways, this is nothing new. IT organizations have bought into the solutions from major providers for decades. The change is in the composition of the ecosystem. Instead of buying into an ecosystem from a single provider, buyers are looking for comprehensive ecosystems that span multiple providers. This lowers the risk for the buyer and creates a broader offering while providing an integrated solution.
Creating the Cloud Supply Chain
Cloud computing is a great illustration of the importance of building a supply chain within the ecosystem. Think about it. The applications, services and solutions that an IT organization provides to users are not single-purpose, non-integrated solutions. At least they shouldn’t be. Good applications and services are integrated with other offerings. When buyers choose a component, that component needs to connect to another component. In addition, alternatives are needed, as one solution does not fit all. In many ways, this is no different from a traditional manufacturing supply chain. The change is to apply those fundamentals to the cloud ecosystem.
In concert with the supply chain, each component needs solid integration with the next. Today, many point solutions require the buyer to figure out how to integrate solutions. This often becomes a barrier to adoption and introduces risk into the process. One could go crazy coming up with the permutations of different solutions that connect. However, if each solution considered the top 3-4 commonly connected components, the integration requirements become more manageable. And they are left to the folks that understand the solutions best…the providers.
As cloud-based ecosystems start to mature, the natural progression is to develop cloud verticals: essentially, ecosystems with components for a specific vertical or industry. In the healthcare vertical, an ecosystem might include a choice of EHR solutions, billing systems, claims systems and a patient portal. For SMB or mid-tier businesses, it might be an accounting system, email, file storage and a website. Remember that the ecosystem is not just a brokerage that sells the solutions as a package; it is a comprehensive solution that is already integrated.
Bottom Line: Buyers are moving to buying ecosystems, especially with cloud services. The value of your solution comes from the value of your ecosystem.
In the past week, The New York Times (and Greenpeace previously) called attention to the inefficiency of data centers. These stories shine a light on a serious issue, but they are a bit misguided and do not tell the whole story. Generally speaking, are data centers inefficient? Absolutely! Read on to fully understand the significance of the situation, the reasons why they are inefficient and the opportunities that lie ahead.
Data centers are large consumers of power. According to a 2007 U.S. EPA report, data centers in 2006 accounted for a full 1.5% of U.S. energy consumption, a number expected to double to 3% by 2011. At the time, 38% was attributable to the nation’s largest data centers. However, these numbers do not represent the entire footprint of data centers: smaller facilities, closets and lab spaces were not included in the study. From experience, these represent a significant aggregate footprint.
Organizations like The New York Times and Greenpeace have called out the issues around inefficiency in data centers. Good points are made, but the focus is misdirected and backfiring. The companies called out in their reports are operating some of the most efficient data centers on the planet. So, what’s the problem? The vast majority of data centers operated by everyone else.
In my post The Future Data Center Is… Part II, I break down the importance of understanding the differences between SMB, Mid-Tier, Enterprise and Very Large Enterprise data centers. There is a very significant difference between the tiers in their requirements and their ability to run an efficient data center. My good friend and fellow Data Center Pulse board member Mark Thiele provided additional color in his SwitchScribe post Measuring the Size of a Data Center – Yes, it Matters. Unfortunately, the majority of articles focus on the Very Large Enterprise data centers. These facilities have very specific requirements and do not represent the vast majority of data centers in use today.
Sadly, the SMB, mid-tier and (to a lesser degree) enterprise data centers are some of the most inefficient operations on the planet. And that doesn’t account for the closets and rooms that house IT equipment. Is there something to learn from the larger enterprises? Absolutely. Should (or can) the rest of the data center operators mimic them? No. The very large enterprises are getting more efficient every day; the SMB, mid-tier and enterprise data centers are simply not able to keep up.
Different Purposes of Data Centers
In my post A Workload is not a Workload, is not a Workload, I delve into the differences that drive data center architecture and operation. The fundamental premise behind the story is that the larger providers have two things that differentiate them from the rest.
- Monolithic Applications: Organizations like eBay, Zynga, Google and Facebook run very specialized applications. These apps (and the infrastructure they run on) are highly tuned. In most other organizations, the workload is very mixed and is challenging (if not impossible) to tune effectively.
- Scale: The same companies run their monolithic applications at web-scale, which is much larger than typical enterprise applications. The very nature of the scale requires and allows specialization in the tuning of the application and related infrastructure.
These two factors lead to a much different situation than exists in the typical enterprise, mid-tier or SMB environment.
Beyond the technical specifics, the organizational complexities cannot go unmentioned. The above-mentioned companies have teams of people with specific jobs dedicated to supporting the data center and its operation. This dedication allows specialization that drives further efficiency in the facility and operations. The staff understands the PUE of the data center and how each specific tweak impacts the number. They often run their facilities as close to maximum efficiency and capacity as possible. Why? They clearly understand the business impact.
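For reference, the arithmetic behind PUE is simple. Below is a minimal sketch using The Green Grid’s definitions of PUE and DCiE; the sample readings are made-up illustrations.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# DCiE is its reciprocal, expressed as a percentage. Both are Green Grid
# metrics; the readings below are fabricated for illustration.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """A PUE of 1.0 means every watt entering the facility reaches IT gear."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency, the reciprocal of PUE."""
    return 100.0 * it_equipment_kw / total_facility_kw

# A facility drawing 1,500 kW overall to deliver 1,000 kW of IT load:
print(pue(1500, 1000))   # 1.5 -> 0.5 W of cooling/power overhead per IT watt
print(dcie(1500, 1000))  # ~66.7% of incoming power reaches IT equipment
```

Shaving even a tenth of a point off a large facility’s PUE translates directly into power costs, which is why these teams track every tweak.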
The typical enterprise, mid-tier or SMB presents a much different situation. In this case, responsibility for the data center is often just one of many requirements in a person’s job description, on top of everything else. These organizations simply don’t have the scale to justify specialization of data center operations.
There is no question that data centers are huge energy consumers, and that will not change. The opportunity is to run more efficient facilities that leverage renewable energy sources. Some larger (and newer) facilities are being located near renewable power sources. Yahoo, VMware and others recently built data centers in Wenatchee, WA near the Grand Coulee dam’s hydroelectric power source. In other cases, wind and solar farms are being built near larger data centers.
It should be noted that these decisions do not always make good business sense. Renewable energy sources are not available at reasonable costs everywhere. And moving data centers near renewable power sources is not always feasible either. Staffing, backbone network connectivity and a host of other factors influence the decision. Regardless of the interest in social and environmental responsibility, a data center is an expensive business asset that requires analysis of many factors.
Other issues present challenges for data centers, including costs, knowledge, legacy applications, governance and security. Data centers are complex ecosystems that require attention, understanding and specialized management. Over time, data centers are getting more complex…not simpler. There needs to be an appreciation and acceptance of these issues.
In summary, the very large data center operators are running some of the most efficient facilities and operations. The data center (and broader IT) industry needs to learn from their examples. However, with articles from the NYT and Greenpeace calling out their flaws, there is growing hesitation among these operators to share what they’re doing for fear of bad PR. Can you blame them? But leaders like Dean Nelson (VP at eBay and fellow Data Center Pulse board member) are still fighting the trend in order to benefit the industry. The data center and IT industry needs an environment where ideas and experiences can be freely shared without concern of misguided criticism in the mass media.
Data centers are huge consumers of energy. Their demand is increasing and not expected to shrink. So, what can be done to address the issues? There are several immediate changes that are needed.
- Strategic Differentiation: Organizations need to take a hard look at their own focus. Is the company in the data center business? Is the company willing to invest in the data center to truly run it efficiently? Is the data center a strategic differentiator? For the vast majority of companies currently running data centers, the clear answer will be no.
- Efficient Facilities: The very large enterprises are already running efficient facilities and driving hard toward greater efficiency. Let’s encourage them to continue to do so! Those not able to run at this level of efficiency need to stop running their own facilities. SMB, mid-tier and some enterprise organizations need to develop plans to eliminate their own data centers and leverage more efficient ones. Consider colocation, hosted infrastructure and other options.
- Efficiency Programs: Several power utilities have offered programs with incentives to offset the costs of implementing energy-efficient solutions. The problem is that these programs are not consistent across utilities, and some utilities do not offer them at all. The industry needs a consistent program similar to the National Data Center Power Reduction Incentive Program proposed by Data Center Pulse in 2009.
- Virtualization: Virtualization is not new; it is a very mature technology. However, adoption rates are still very anemic. Depending on which analyst organization you believe, the numbers average roughly 30-40%. This isn’t the whole story though. From experience, organizations sit at opposite ends of the spectrum: either heavily virtualized or virtualizing only a few servers. The excess, unused capacity is simply wasteful (see the sketch after this list for a feel of the numbers). There are a number of reasons for this, including cost, legacy applications, knowledge, fear, and an already overwhelming plate of issues to address.
- Cloud Computing: The cloud services market is maturing very quickly. Even highly regulated industries with compliance requirements are leveraging cloud computing for their most sensitive applications. Cloud-based services are a good solution to leverage where possible.
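To give a feel for the waste mentioned in the virtualization bullet above, here is a minimal back-of-the-envelope sketch. The server counts, consolidation ratio and power draw are hypothetical assumptions, not measured data.

```python
# Back-of-the-envelope: energy wasted by lightly used, unvirtualized servers.
# All figures below are made-up assumptions for illustration.

SERVERS = 100            # physical servers, each hosting one light workload
IDLE_DRAW_W = 250        # watts a server burns even when mostly idle
CONSOLIDATION_RATIO = 8  # workloads per virtualized host, a modest target

HOURS_PER_YEAR = 24 * 365

# Before: every underutilized server burns idle power around the clock.
before_kwh = SERVERS * IDLE_DRAW_W * HOURS_PER_YEAR / 1000

# After: the same workloads packed onto far fewer hosts. (Loaded hosts draw
# more than idle ones, so this overstates savings somewhat; it is directional.)
hosts_after = -(-SERVERS // CONSOLIDATION_RATIO)  # ceiling division -> 13
after_kwh = hosts_after * IDLE_DRAW_W * HOURS_PER_YEAR / 1000

print(f"before: {before_kwh:,.0f} kWh/yr, after: {after_kwh:,.0f} kWh/yr")
# ~219,000 vs ~28,470 kWh/yr: most of the idle draw simply disappears
```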
In summary, let’s commend those that are trying hard to help our industry move forward. Let’s bring the focus to the real issues preventing the efficient operation of data centers. There are a number of viable and immediate solutions available today to help the larger contingent of organizations.
- Glanz, James. “Power, Pollution and the Internet.” The New York Times. 22 Sep 2012.
- Glanz, James. “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle.” The New York Times. 23 Sep 2012.
- Fehrenbacher, Katie. “NYT’s data center power reports like taking a time machine back to 2006.” GigaOM. 24 Sep 2012.
- Weinman, Joe. “The Power of IT (it’s not all in energy consumption).” GigaOM. 26 Sep 2012.
- “How Clean is Your Cloud?” Greenpeace. 17 Apr 2012.
- “National Data Center Power Reduction Incentive Program.” Data Center Pulse. 6 May 2009.
- “The Green Grid Power Efficiency Metrics: PUE & DCiE.” The Green Grid. 23 Oct 2007.
- “Report to Congress on Server and Data Center Energy Efficiency.” U.S. Environmental Protection Agency. 2 Aug 2007.
- Thiele, Mark. “Measuring the Size of a Data Center – Yes, it Matters.” SwitchScribe. 30 Jan 2012.
Several years in, there is still quite a bit of confusion around the value of cloud computing. What is it? How can I use it? What value will it provide? There are several perspectives on how to approach cloud computing value; indeed, the question elicits several possible responses. This missive specifically targets how applications map against a cloud value matrix. From the application perspective, scale, along with an application’s history (legacy versus greenfield), governs the direction of value.
As scale increases, so does the potential value from cloud computing. That is not to say that traditional methods are not valuable; it has more to do with the direction and velocity of an application’s scale. Greenfield applications provide a different perspective from legacy applications. Rewriting legacy applications simply to use cloud brings questionable value. There may be extenuating circumstances to consider, but those are not common.
Legacy vs. Greenfield (x-axis)
The x-axis represents the spectrum of applications from legacy to greenfield. Greenfield applications may include either brand-new applications or rewritten legacy applications. Core, off-the-shelf applications may fall into either category. The current maturity of the cloud marketplace suggests that any new or greenfield application should consider cloud computing, including both PaaS and SaaS approaches.
The first step is to map the portfolio of applications against the grid. Each application type and scale is represented in relation to the others. This is a good exercise to 1) identify the complete portfolio of applications, 2) understand the current state and lifecycle and 3) develop a roadmap for application lifecycles. The roadmap can then become the playbook to support a cloud strategy.
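As a rough illustration, the mapping exercise can be as simple as scoring each application on the two axes and bucketing it into a quadrant. A minimal sketch, with a hypothetical portfolio and thresholds:

```python
# A minimal sketch of mapping an application portfolio onto the matrix.
# Axes follow the post (x: legacy-to-greenfield, y: scale); the portfolio,
# scores and 0.5 thresholds are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class App:
    name: str
    greenfield: float  # x-axis: 0.0 = pure legacy, 1.0 = pure greenfield
    scale: float       # y-axis: 0.0 = small/departmental, 1.0 = web-scale

def quadrant(app: App) -> str:
    horizontal = "greenfield" if app.greenfield >= 0.5 else "legacy"
    vertical = "high-scale" if app.scale >= 0.5 else "low-scale"
    return f"{vertical} {horizontal}"

portfolio = [
    App("mainframe billing", greenfield=0.1, scale=0.6),
    App("rewritten customer portal", greenfield=0.9, scale=0.3),
    App("public web storefront", greenfield=0.8, scale=0.9),
]

for app in portfolio:
    print(f"{app.name}: {quadrant(app)}")
```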
The value cloud computing brings increases as application requirements move toward the upper-left quadrant. In most cases, applications will move horizontally to the right rather than vertically upward. The clear exception is web-scale applications. Most of those start in the lower-right quadrant and move vertically upward.
The matrix is intended to be a general guideline to characterize the majority, but not all applications and situations. In one example, legacy applications may be encapsulated to support cloud-based services as an alternative to rewriting.
Last week’s Salesforce Dreamforce event had to be the largest conference I have seen at San Francisco’s Moscone Center. It covered Moscone North, South and West plus several hotels. And if that was not enough, Howard Street was turned into a lawn area complete with concert stage, outdoor lounge area and exhibits. Dreamforce presented a great opportunity to learn more about the Salesforce community…and a number of missed opportunities.
Walking the expo floor, one thing became clear very quickly: Salesforce was the largest exhibitor. Taking up 25-30% of the expo floor, the Salesforce area maintained focal points around sales, marketing and service. Surrounding the Salesforce area were partners in its ecosystem, some built on the Force.com platform and others on their own platforms. There were solutions for all types of needs. Unfortunately, the different subject matter was intertwined throughout the floor (sales next to service next to marketing). Salesforce is a broad platform; if you were interested in one specific aspect of Salesforce-based solutions, it was hard to find the related offerings. Interestingly, consulting firms held some of the largest booths outside of Salesforce.
Moscone West held the Developer Zone with less structured community areas for folks with similar interests to gather. Multiple presentations were taking place in the Developer Zone non-stop. In addition to the Unconference area, there was plenty of space for folks with common interests to gather around tables complete with power and Wi-Fi.
The 750+ sessions provided a wide range of presentations, from how-tos to case studies. In addition, there was a good mix of detailed and high-level sessions, depending on your particular interest.
Dreamforce is a good example of the maturity of Salesforce’s ecosystem. However, the prominence of consulting firms adds some contrast to that statement. Just walking around the expo floor, one could get the impression that there is a solution to every problem imaginable. Not true; several of the basics are still woefully absent. Many of the solutions are excellent point solutions that address specific pain points.
Unfortunately, two aspects are missing: integration and accessibility. Earlier this year, I wrote about the importance of onramps. At the expo, I randomly sampled several folks walking the show floor to get their thoughts. The theme was consistent: great solutions, but each person was looking for an integrated solution, and it was not clear how to get from their current state to a future state leveraging the innovative solutions. The prominence of consulting firms could serve as both a solution and further validation: consulting firms provide a good short-term answer to the integration and onramp problem. However, both issues need to be baked into the ecosystem’s solutions to sustain the ecosystem long-term.
Are conferences like Salesforce’s Dreamforce valuable to attend? In a nutshell…yes! If you knew very little about Salesforce before last week, Dreamforce presented a great opportunity to get an overview, dig into specific details and network with peers. If you are already an established customer, there is plenty of innovation still coming from the ecosystem.