Last week at the HP Discover conference in Las Vegas, I visited the HP Anywhere booth to learn about the HP Anywhere mobile development platform. While at the booth, I was particularly intrigued by the Poken devices that a few folks had. But more on that in a minute.
HP Anywhere provides a cross between an enterprise app store and a portal for accessing enterprise applications. It allows enterprise mobile developers to create apps that are consumed by internal users on a variety of devices. The concept of an enterprise app store is interesting but not new; the combination of an app store with a portal to enterprise applications is where I think the value lies. For an established enterprise with a number of legacy applications, making them available to a growing mobile workforce is not a trivial matter. At first blush, it appears that HP Anywhere could provide the interface to mobile-enable enterprise legacy apps while maintaining the mobile experience.
Across the show floor, I noticed a few folks carrying this curious device hanging from their badges.
Beyond the cute shape of a playful hand is a platform that could be very interesting. At HP Discover, however, the Poken device was mainly used to store and exchange business card information via an NFC connection. That aspect is fun, but definitely not new. The functionality arrived more than 10 years ago with the Palm “beam” capability, which let two users beam their business card info back and forth via infrared. Years later, Bump provided similar capabilities via a mobile app.
If we look beyond just exchanging contact info, the Poken device demonstrates an interesting platform for mobile exchange to collect or share data. The company targets the exhibitor market, but just imagine if enterprises could share information in a similar way with clients, prospects or providers. In addition to just exchanging contact information, data could include technical documents, data sheets or website URLs…maybe more.
Last week, I attended HP’s Converged Cloud Tech Day in Puerto Rico. Colleagues attended from North, Latin and South America. The purpose of the event was to 1) take a deep dive into HP’s cloud offerings and 2) visit HP’s Aguadilla location, which houses manufacturing and an HP Labs presence. What makes the story interesting is that HP is a hardware manufacturer, a software provider and a provider of cloud services. Overall, I was very impressed by what HP is doing…but read on for the reasons why…and the surprises.
HP Puerto Rico
HP, like many other technology companies, has a significant presence in Puerto Rico. Martin Castillo, HP’s Caribbean Region Country Manager, provided an overview for the group that left many in awe. HP exports a whopping $11.5b from Puerto Rico, roughly 10% of HP’s global revenue. In the Caribbean, HP holds more than 70% of the server market. Surprisingly, much of the influence to use HP cloud services in Puerto Rico comes from APAC and EMEA, not North America. To that end, 90% of HP’s Caribbean customers are already starting the first stage of moving to private clouds. Like others, HP is seeing customers move from traditional data centers to private clouds, to managed clouds, to public clouds.
Moving to the Cloud
Not surprisingly, HP is going through a transition, presenting the company from a solutions perspective rather than a product perspective. Shane Pearson, HP’s VP of Portfolio & Product Management, explained that “At the end of the day, it’s all about applications and workloads. Everyone sees the importance of cloud, but everyone is trying to figure out how to leverage it.” By 2015, the projected markets are: traditional $1.4b, private cloud $47b, managed cloud $55b and public cloud $30b, for a cloud total of $132b. In addition, HP confirmed that the hybrid cloud approach is the approach of choice.
While customers are still focused on cost savings as the primary motivation to move to the cloud, the tide is shifting toward business process improvement. Put another way, cloud is allowing users to do things they could not do before. I was pleased to hear HP acknowledge that it’s hard to take advantage of cloud if you don’t leverage automation. Automation and orchestration are essential to cloud deployments.
HP CloudSystem Matrix
HP’s Nigel Cook was up next to talk about HP’s CloudSystem Matrix. Essentially, HP is (and has been) providing cloud services across the gamut of potential needs. Internally, HP is using OpenStack as the foundation for their cloud service offering, but CloudSystem Matrix provides a cohesive solution to manage across both internal and external cloud services. To the earlier point about automation, HP is focusing on automation and self-service as part of their cloud offering. Having a solution that helps customers manage the complexity that hybrid clouds present could prove interesting. Admittedly, I have not kicked the tires of CloudSystem Matrix yet, but on the surface, it is very impressive.
During the visit to Aguadilla, we joined a Halo session with HP’s Christian Verstraete to discuss architecture. Christian and team have built an impressive cloud functional reference architecture. As impressive as it is, one challenge is how to best leverage such a comprehensive model in the everyday IT organization. It’s quite a bit to bite off. Very large enterprises can consume the level of detail contained within the model; others will need a way to consume it in chunks. Christian goes into much greater depth in a series of blog entries on HP’s Cloud Source Blog.
HP Labs: Data Center in a Box
One treat on the trip was the visit to HP Labs. If you ever get the opportunity to visit HP Labs, it’s well worth the time to see what innovative solutions the folks there are cooking up. HP demonstrated the results from their Thermal Zone Mapping (TZM) tool (US Patent 8,249,841), along with CFD modeling tools and monitoring, to determine details around airflow and cooling efficiency. While I’ve seen many different modeling tools, HP’s TZM was pretty impressive.
In addition to the TZM, HP shared a new prototype that I called Data Center in a Box. The solution is an encapsulated system of one to eight fully enclosed racks. The only requirements are power and chilled water. The PUE numbers were impressive, but didn’t take into account every metric (e.g., the cost of chilled water). Regardless, I thought the solution was pretty interesting. The HP folks kept mentioning that they planned to target the solution at Small-Medium Business (SMB) clients. While that may have been interesting to the SMB market a few years ago, today that market is moving more toward services (i.e., cloud services). That doesn’t mean the solution is DOA. I do think it could be marketed as a modular approach to data center build-outs that provides a smaller increment than container solutions. Today, the solution is still just a prototype and not commercially available. It will be interesting to see where HP ultimately takes this.
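For reference, the PUE metric mentioned above is simply total facility power divided by IT equipment power. The numbers below are illustrative only, not HP’s actual figures:

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt goes to IT gear; real facilities are higher.
# The sample values below are illustrative, not HP's measured numbers.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# A conventional data center might draw 1.9 kW overall per 1.0 kW of IT load;
# an enclosed-rack design fed by chilled water could plausibly come in lower.
print(pue(190.0, 100.0))  # 1.9
print(pue(120.0, 100.0))  # 1.2
```

This is also why the “didn’t take into account every metric” caveat matters: if the energy cost of producing the chilled water is excluded from the numerator, the reported PUE looks better than the true facility efficiency.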
I was quite impressed by HP’s perspective on how customers can…and should leverage cloud. I felt they have a healthy perspective on the market, customer engagement and opportunity. However, I was left with one question: Why are HP’s cloud solutions not more visible? Arguably, I am smack in the middle of the ‘cloud stream’ of information. Sure, I am aware that HP has a cloud offering. However, when folks talk about different cloud solutions, HP is noticeably absent. From what I learned last week, this needs to change.
HP’s CloudSystem Matrix is definitely worth a look regardless of the state of your cloud strategy. And for data center providers and service providers, keep an eye out for their Data Center in a Box…or whatever they ultimately call it.
Last week’s Salesforce Dreamforce event had to be the largest conference I have seen at San Francisco’s Moscone Center. It covered Moscone North, South and West plus several hotels. And if that was not enough, Howard Street was turned into a lawn area complete with concert stage, outdoor lounge area and exhibits. Dreamforce presented a great opportunity to learn more about the Salesforce community…and a number of missed opportunities.
Walking the expo floor, one thing becomes clear very quickly: Salesforce is the largest exhibitor. Taking up 25-30% of the expo floor, the Salesforce area maintained focal points around sales, marketing and service. Surrounding the Salesforce area were partners in its ecosystem, some based on the Force.com platform and others with their own platforms. There were solutions for all types of needs. Unfortunately, the different subject matter was intertwined throughout the floor (sales next to service next to marketing). Salesforce is a broad platform, and if you were interested in a specific aspect of Salesforce-based solutions, it was hard to find the related offerings. Interestingly, consulting firms held some of the largest booths outside of Salesforce.
Moscone West held the Developer Zone with less structured community areas for folks with similar interests to gather. Multiple presentations were taking place in the Developer Zone non-stop. In addition to the Unconference area, there was plenty of space for folks with common interests to gather around tables complete with power and Wi-Fi.
The 750+ sessions provided a wide range of presentations, from how-tos to case studies. In addition, there was a good mix of detailed to high-level sessions depending on your particular interest.
Dreamforce is a good example of the maturity of Salesforce’s ecosystem. However, the prominence of consulting firms adds a bit of contrast to that statement. Just walking the expo floor, one could get the impression that there is a solution for every problem imaginable. Not true; several of the basics are still woefully absent. Many of the solutions are excellent point solutions that address specific pain points.
Unfortunately, two aspects are missing: integration and accessibility. Earlier this year, I wrote about the importance of onramps. At the expo, I randomly sampled several folks walking the show floor to get their thoughts. The theme was consistent: great solutions, but each person was looking for an integrated solution, and it was not clear how to get from their current state to a future state leveraging the innovative solutions. The prominence of consulting firms could serve as both a solution and further validation. Consulting firms provide a good short-term answer to the integration and onramp problem. However, both issues need to be baked into the ecosystem’s solutions to sustain the ecosystem long-term.
Are conferences like Salesforce’s Dreamforce valuable to attend? In a nutshell…yes! If you knew very little about Salesforce before last week, Dreamforce presented a great opportunity to get an overview of the possibilities, dig further into specific details and network with peers. If you were already an established customer, there is plenty of innovation still coming from the ecosystem.
I just penned a guest post for the Parallels Enterprise blog. It discusses the impact BYOD and CoIT have had in enabling Apple in the enterprise market. You can view the post here:
Gaining visibility into application performance is key. Application Performance Management (APM) solutions are not new; they provide insight into the tiers within an application stack. But with the entry of cloud-based computing in the past couple of years, the APM world has gotten a bit more complex.
APM is mature enough to consider cloud-based providers in the application stack. In the classic model, an application has three layers in the stack: 1) the database layer, 2) the application layer and 3) the web layer. Depending on the complexity of the application, it may have five or more layers in the mix. Today, a cloud service provider may serve one or more of these layers.
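To make that layering concrete, here is a minimal sketch (the provider names are invented for illustration) of a three-tier stack in which some layers are served from the cloud:

```python
# Hypothetical model of a classic three-tier application stack, where any
# layer may be served by a cloud provider rather than hosted on-premises.
stack = [
    {"layer": "web",         "host": "cloud:cdn-provider"},
    {"layer": "application", "host": "on-prem"},
    {"layer": "database",    "host": "cloud:dbaas-provider"},
]

# An APM tool now has to trace requests across both in-house and
# cloud-hosted layers to see the whole picture.
cloud_layers = [s["layer"] for s in stack if s["host"].startswith("cloud:")]
print(cloud_layers)  # ['web', 'database']
```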
Several solutions exist that support cloud-based APM. New Relic, OPNET, and CA are just a few examples. At the Under the Radar conference, Tracelytics presented their approach to APM. Tracelytics was started two years ago by a small team of three to address a growing problem they observed in research from Brown University. I met with Spiros Eliopoulos, Co-Founder and CTO, to discuss how Tracelytics’ approach differs from the competition.
So, what’s different? Bottom line: it has to do with the flexibility of the solution. As the application stack gets increasingly complex, so does its management. The number of providers and shared resources is growing exponentially. According to Spiros, their solution “looks at each layer individually, then ties together the different layers to provide a complete view.” Tracelytics allows APM visibility through “drilldown performance across layers.” Their clever approach uses heat maps to visually find problem spots. Managing APM within layers and up and down the entire stack is key to providing clear visibility so problem areas can be corrected quickly.
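As a rough illustration of the idea, not Tracelytics’ actual implementation, here is a sketch that aggregates hypothetical per-layer trace timings and ranks layers by where the time is actually spent, which is the same story a heat map tells visually:

```python
from statistics import mean

# Hypothetical per-request trace timings (milliseconds), broken out by layer.
traces = [
    {"web": 12, "application": 48, "database": 210},
    {"web": 15, "application": 52, "database": 35},
    {"web": 11, "application": 47, "database": 240},
]

# Aggregate each layer individually, then tie the layers together into a
# single ranked view of the stack.
by_layer = {layer: mean(t[layer] for t in traces) for layer in traces[0]}
for layer, ms in sorted(by_layer.items(), key=lambda kv: -kv[1]):
    print(f"{layer}: {ms:.0f} ms")
```

In this made-up sample the database layer dominates, so that is where the drilldown would start.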
Many providers struggle with pricing strategies in today’s cloud and virtualized world. In the traditional computing world, it was easy to license solutions. Tracelytics’ approach continues the flexibility by focusing on tracing volume rather than hosts or layers. The entire stack of an application is considered one application. So, whether you have one application reporting 10x per hour or 10 applications reporting once per hour, the cost is the same. This is true regardless of the number of layers within the application stack. Nice!
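To illustrate the volume-based pricing model, here is a quick sketch; the per-trace rate is entirely made up and is not Tracelytics’ actual pricing:

```python
# Illustrative only: a made-up per-trace rate to show volume-based pricing.
RATE_PER_TRACE = 0.001  # dollars per trace; hypothetical

def monthly_cost(traces_per_hour: int, apps: int = 1, hours: int = 24 * 30) -> float:
    # Cost depends only on total trace volume, not on hosts or layers.
    return traces_per_hour * apps * hours * RATE_PER_TRACE

# One app reporting 10x per hour costs the same as ten apps reporting 1x per hour.
print(monthly_cost(10, apps=1) == monthly_cost(1, apps=10))  # True
```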
Moving workloads into cloud computing environments is on everyone’s task list. As one evaluates the choices between public and private cloud, the sizing of an environment quickly comes into view. How large or small should an environment be? Once you get started, how do you “rightsize” your cloud environment? As cloud-based environments start to grow, sizing them correctly will ensure that performance and financial objectives are kept in check.
Last week at the Under The Radar conference, I had a chance to meet with one company that addresses this need: Cloudyn. I met with Sharon Wagner, Cloudyn’s Founder and CEO. Cloudyn’s approach is to evaluate cloud details and provide a set of recommendations. But that is just the start. The solution ingests a number of variables via provider APIs, from cost information to performance characteristics, and can do so automatically even if negotiated pricing is in play with public cloud providers. The engine ingests cost elements from both public and private clouds. According to Sharon, the SaaS-based solution uses “a predefined algorithm that the user can modify to produce actionable recommendations. The recommendations provide specific details on the action to take and why”. Understanding the reason behind a recommendation puts users in a better position to make informed decisions. Armed with this information, users can size cloud environments more accurately and manage costs. Cloudyn takes it a step further by tying business metrics to technical metrics to derive measures like ‘cost per transaction’.
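The “cost per transaction” idea is simple arithmetic once cost data and business data sit side by side. A quick sketch with hypothetical numbers (not Cloudyn’s algorithm):

```python
# Hypothetical figures tying a business metric (transactions) to cloud spend.
monthly_cloud_cost = 12_000.00   # dollars, across all providers; made up
monthly_transactions = 2_000_000  # made up

cost_per_transaction = monthly_cloud_cost / monthly_transactions
print(f"${cost_per_transaction:.4f} per transaction")  # $0.0060 per transaction
```

The value of the metric is in the trend: if cost per transaction creeps up while traffic is flat, the environment is likely oversized or mispriced.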
Taking it in a different direction, users can feed the recommended actions into the orchestration layer of their cloud. While this step may be a bit too automated for some, those with a clear understanding of their workloads who are capable of setting boundaries might enjoy this valuable perk.
Ok, so you’re selling technology products, solutions or services. You’re looking for the largest buyers and typically look to the enterprise market. You develop the strategy and get to work. You set up a sales team, check. You set up a channel and partner program, check. Then you start leveraging the relationships, check. But how do you cover the consumer angle? Huh? Yes: using consumers as a sort of ‘Trojan Horse’ into the enterprise space.
In just the past few years, we’ve seen an uptick in the impact of Consumerization of IT (CoIT) in the enterprise space. The movement shifts the power pendulum away from IT and toward users. BYOD is making an impact on the movement too. For more info on BYOD vs. CoIT:
In the case of Apple, they’ve attempted entry into the enterprise market a few times. Each time, they were unsuccessful in creating a beachhead and establishing momentum. In the past two years, however, their attempt to enter the enterprise has largely succeeded. According to Apple’s latest quarterly earnings call, “94% of the Fortune 500 and 75% of the Global 500 are testing or deploying iPads”. Others are also in the testing phase (see link below). And that doesn’t take into account the number of devices already in play via the consumer angle. So, is Apple changing its strategy to enter the enterprise environment? Regardless of the specific answer, they are progressing. The move gives Apple an interesting beachhead into the enterprise space…whether they intended it or not.
Interestingly, if consumers are used to using a given technology, they’re more supportive of using it in their professional life too. And that is a good thing for IT organizations from an adoption standpoint. The question is how providers can help enable this process. Apple is a good use-case of a different approach.
The point is: if you’re a provider looking to establish a beachhead, there are options to sell into enterprises beyond the traditional approaches. Consumers are one way…but the approach doesn’t fit every company’s solution. If your solution does fit, it might be an interesting model to consider. And this doesn’t cover the other targets open to most providers. But more on that later…
CIO Magazine – Is Apple changing Its Enterprise Tune?
Yesterday (June 2, 2010), AT&T announced changes to its data service pricing. The full press release can be found at:
The big news is that AT&T is doing away with its “unlimited” data plans. Many folks will question whether this is a good thing or not. However, the first step is to get an idea of your data usage. That usage will depend on the device you use and how you use it. For most, usage will peak at first while getting to know the device, then taper off. It is also possible that usage will continue to grow as you find new ways to use your device.
I took a sampling of two devices I use regularly: an iPhone and an iPad. I’ve used the iPhone regularly for the past year, so my usage has somewhat normalized. Over the past 6 months, I’ve averaged 243MB/month in data usage on the iPhone. The peak month was 442MB and the lowest was 154MB. The usage is trending upward.
I have only had the iPad for two days now. As such, there is still quite a bit of “trying things out” happening right now. If I take the data usage over the past two days and extrapolate it over 30 days, I come up with 500MB/month. Even if I double that value, it is still only 1GB/month.
AT&T’s two new data plans top out at 200MB and 2GB per month, respectively. Using my case as that of a typical-to-heavy user of bandwidth, the 2GB data plan should provide more than adequate coverage for both devices.
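The back-of-the-envelope math behind that conclusion can be checked quickly. Note that the two-day iPad figure below is inferred from the 500MB/month extrapolation above, not a separately measured value:

```python
# Recreating the post's back-of-the-envelope math.
ipad_two_day_mb = 500 / 30 * 2            # ~33 MB over two days, inferred
ipad_monthly_mb = ipad_two_day_mb / 2 * 30  # extrapolate to 30 days -> ~500 MB
iphone_peak_mb = 442                       # worst iPhone month from the post

# Even doubling the iPad estimate and adding the iPhone's peak month,
# combined usage stays well under the 2GB (2048 MB) plan cap.
worst_case_mb = ipad_monthly_mb * 2 + iphone_peak_mb
print(worst_case_mb < 2048)  # True
```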
As we leverage the cloud more over time, I would expect these numbers to grow. And if you’re starting to use new applications like video streaming from Netflix on the iPad, then the numbers will grow even further.
There is an alternative: leverage AT&T’s Wi-Fi when you can. Access to AT&T’s Wi-Fi hotspots is included with the data plans. It’s both faster than the cellular network and unlimited under AT&T’s new plans. You can also leverage free Wi-Fi in a variety of locations, in addition to the Wi-Fi you may have at home or work.
Bottom Line: AT&T states that the new plans should help lower the data bills for 98% of their customers. Based on my use case scenarios, I would tend to agree.
Rackspace, a popular hosting provider in the cloud, suffered a significant outage on June 29, 2009. Apparently, a power interruption caused their Dallas (DFW-Grapevine) data center to go offline. Rackspace has posted a copy of the incident report here:
As a consequence, Rackspace expects to issue service credits to customers in the range of $2.5m-$3.5m. In response, Rackspace filed a Form 8-K with the SEC:
The Rackspace outage is bound to bring questions about the stability of services in the cloud. But should it? The outage that Rackspace (and their customers) experienced could have happened to any data center owner. So, why is Rackspace being held to a different standard?
Whenever a provider fails to deliver a service, it can affect a business that relies on that service. Just as a traditional IT organization would not rely on a single data center, neither should we rely on a single source for the services we leverage in the cloud.
When working in the cloud, a change to the traditional method of redundancy is warranted. Cloud providers could potentially provide geo-diversity for customers, but the customer should really consider how to provide redundancy across providers. That way, if a failure happens with one provider, a second provider is there to pick up the demand.
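A minimal sketch of that cross-provider failover idea follows; the provider names and fetch logic are hypothetical stand-ins for real provider SDK calls:

```python
# Minimal failover sketch: try each cloud provider in order and fall through
# to the next on failure. Provider names and fetch logic are hypothetical.
def fetch_with_failover(providers, request):
    errors = {}
    for provider in providers:
        try:
            return provider["fetch"](request)
        except Exception as exc:  # in practice, catch provider-specific errors
            errors[provider["name"]] = exc
    raise RuntimeError(f"all providers failed: {errors}")

# Simulate a primary provider that is down and a healthy secondary.
primary = {"name": "provider-a",
           "fetch": lambda req: (_ for _ in ()).throw(IOError("outage"))}
secondary = {"name": "provider-b",
             "fetch": lambda req: "ok from provider-b"}

print(fetch_with_failover([primary, secondary], "/index.html"))
```

Real deployments also need the data layer replicated across providers ahead of time; the request-routing part shown here is the easy half of the problem.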
In some ways, this potentially eliminates the value of an SLA (Service Level Agreement). I will discuss more on SLA value in a future blog post.
This redundancy does come at a cost (cloud-based or traditional model). A risk assessment and cost-benefit analysis should be performed to better understand the options and the path to take.