Hyperconverged – Hyper Market Acceleration

Before I jump into the wild world of IT hyperconverged infrastructure, let’s quickly remind ourselves of the benefits ESG has seen from these types of deployments:

Top Five Benefits Realized by Deploying Integrated Computing Platforms

The above research includes additional deployment models beyond hyperconvergence, but the benefits remain largely the same (Source: ESG Research Brief, Integrated Computing Platform Trends, August 2014). IT simply wants an easy-to-deploy solution that is predictable and simple to manage. And just as we observed through ESG research presented in this infographic, the market was lighting up with hyperconverged solutions, and a further spotlight was placed on the market with the announcement of VMware EVO solutions at VMworld 2014.

Here is a quick (and likely incomplete) list of vendors that are busy positioning themselves in the hyperconverged market:

HP: Yes, not an original participant in the VMware EVO announcement, but HP now has an EVO offering set for GA in 2015; add its StoreVirtual solution (perhaps the most widely deployed) to this mix as well.

Maxta: First went to market early in 2014 as a storage solution, but has since pivoted its messaging directly at the hyperconverged market.

Nimble Storage: Here is another storage vendor that pivoted some of its go-to-market strategy and has teamed up with Cisco UCS to deliver SmartStack.

Nimboxx: I’ll predict these guys are about to get more attention than they have to date. They are KVM-based and call attention to the cost of many of these other, VMware-only solutions.

Nutanix: Interesting things happening with its OEM agreement with Dell, but still not 100% clear how it will snap into Dell’s breadth of solutions.

Scale Computing: Another KVM-based solution, but a super simple UI and focus on the mid-market makes these folks worth watching.

SimpliVity: One of the pioneering vendors that re-engineered the storage architecture and delivered a complete solution, but is facing increased market pressure.

VMware: EVO solutions with Dell, EMC, Fujitsu, Inspur, NetOne, and Supermicro. These solutions are going to create further competition and attention in the market once they all GA later in 2014 or early in 2015.

Why does this all matter? Convergence, whether it be full hyperconvergence or better engineering between infrastructure components delivered in a pre-configured, turnkey manner, is here to stay. Traditional and emerging IT vendors are quickly going to have to determine how to stand out from the pack and light up their go-to-market campaigns and sales initiatives. Some vendors in this general market (VCE, for example) are focused on the large enterprise and are staffed with professionals who can carry an enterprise application conversation, while others (Maxta, for example) are still balancing storage capabilities with hyperconvergence messaging.

The next 6 months matter! Messaging and marketing have to stand out from the adjacent IT vendor participants, and candidly, these vendors need to find ways to shorten sales cycles and get their go-to-market partners involved and incented so they can help transact in this new consumption model.

Posted in Cloud Computing, Private Cloud Infrastructure

Time to Embrace or Terminate National Cybersecurity Awareness Month (NCSAM)

Most people know that October is National Breast Cancer Awareness Month. Far fewer people know that October is also American Archives Month, National Book Month, and Pastors Appreciation Month. 

Oh yeah. October is also National Cybersecurity Awareness Month and unfortunately, few security professionals or industry leaders either know about it or pay much attention to this designation. 

Now, dissing National Cybersecurity Awareness Month isn’t a universal problem. In fact, it’s sort of a big deal in Washington DC where the month actually begins with a Presidential proclamation. In his proclamation issued on September 30, President Obama declared, “I call upon the people of the United States to recognize the importance of cybersecurity and to observe this month with activities, events, and training that will enhance our national security and resilience.”

The Presidential proclamation is usually followed by a DHS-led event attended by Washington-based industry groups, federal sales teams, lobbyists, and various government cybersecurity wonks. I actually attended the National Cybersecurity Awareness Month kickoff back in 2009. At this event, Janet Napolitano, the Secretary of DHS, announced that the agency would be adding 1,000 cybersecurity professionals to its staff by 2012. Napolitano said:  “This new hiring authority will enable DHS to recruit the best cyber-analysts, developers and engineers in the world to serve their country by leading the nation's defenses against cyber-threats.” 

I remember leaving Washington with a sense of pride about National Cybersecurity Awareness Month and Secretary Napolitano’s bold statement. In 2009 and 2010, I tried to monitor DHS’s progress on this hiring commitment but in spite of my efforts, I never found another published word about how DHS was progressing in its cybersecurity hiring effort. Given the cybersecurity skills shortage, bureaucratic federal hiring procedures, and low federal salaries, I doubt whether DHS fulfilled the Secretary’s promise—but then again, I’ll never know. 

Aside from this personal experience, there are a few other reasons why I’ve become so cynical about National Cybersecurity Awareness Month:

  • Most cybersecurity technology comes from Silicon Valley, not the Beltway, but unfortunately, National Cybersecurity Awareness Month is pretty much a non-entity on the Peninsula. Don’t believe me? Check out the websites of leading cybersecurity technology firms like Check Point, Cisco, FireEye, Fortinet, HP, IBM, McAfee, RSA, Symantec, or Trend Micro. These 10 companies account for billions of dollars in infosec revenue, but you’d never know about NCSAM based upon the marketing rhetoric on their sites. Heck, NCSAM was even absent from Washington insiders like Booz Allen, Leidos, Lockheed-Martin, and Raytheon when I checked their websites at the beginning of the month. How can NCSAM be successful if industry leaders aren’t interested enough to participate?
  • The “Stop, Think, Connect” message isn’t enough. NCSAM has featured this message (or similar messages) for years. I understand that we need a foundation of basic infosec hygiene, but given the alarming attacks at Home Depot, JP Morgan Chase, and Target, elementary cybersecurity education is no longer enough. We need wide-ranging programs to educate business leaders, federal/state/local legislators, and critical infrastructure providers. Yes, consumers need to have the right knowledge to protect themselves, but we need to educate the folks who are responsible for protecting all of us.
  • Few leaders are stepping up. When October comes around, an impressive group of breast cancer survivors make sure to pepper the media with interviews, campaigns, and live appearances to get the message to the masses. In my many years in cybersecurity, I’ve yet to see a similar PR effort around cybersecurity awareness. Special Assistant to the President and Cybersecurity Coordinator, Michael Daniel, should be making the rounds to CNN, Fox News, Good Morning America, etc. Where is he? Beats me. Come to think of it, can anyone point to a person who represents NCSAM or cybersecurity in general? 

To be clear, I am not criticizing the worthwhile programs and organizations that actually promote cybersecurity education and deliver value. That said, these efforts would be just as meaningful if they were done independently of a half-hearted awareness month that few pay attention to.

So here’s where I stand on NCSAM: Before next October 1st, Washington supporters like the National Cyber Security Alliance need to enlist grassroots participation (and money) from the infosec industry and work with ISC2, SANS, ISACA, and others to get security professional organizations more engaged. At the same time, we need our elected officials to increase funding for cybersecurity programs and take these programs to their constituents. Finally, let’s try and get some international participation since there are no borders on the Internet. 

Absent these changes, I suggest we stop pretending that National Cyber Security Awareness Month matters and let other, more committed groups enjoy their month in the spotlight.

Posted in Information and Risk Management, Security and Privacy

How to Protect an EVO RAIL (video series)

VMware’s EVO RAIL is an architecture for a hyper-converged, software-defined data center in a single appliance form-factor … to be delivered by various hardware partners.  But how do you protect that all-in-one solution?

For the next several weeks, ESG will be releasing a seven-part series of ESG Capsules (2-minute video segments) in which I’ll talk more about some of the protection possibilities and caveats in an EVO world:

part 1 – Introductory ideas for protecting EVO RAIL (below)

part 2 – Solution Spotlight : VMware

part 3 – Solution Spotlight : EMC 

part 4 – Solution Spotlight : Dell

part 5 – Solution Spotlight : HP

part 6 – BC/DR possibilities

part 7 – Channel considerations

Here’s part 1 on ideas for protecting an EVO RAIL. Check back here for updated hyperlinks … or follow @JBuff on Twitter to see more of this series.

Thanks for watching

Posted in Data Protection, Information and Risk Management, IT Infrastructure, Private Cloud Infrastructure

Informatica and the Challenge of Data Unification

Informatica is clearly a leader in data integration. In fact, a case could be made for Informatica being the leader in data integration. Since superlatives are not typically part of my lexicon, this represents something of an accomplishment on Informatica’s part. Informatica has been around for just over 20 years and is now driving over $1 billion in revenue. Informatica is unique because it’s the only large leading vendor in the data integration space that is a pure-play in integration. This means that Informatica’s future is inextricably tied to how enterprises leverage data. This is a good thing.

When you look at IT, you find that everything is data driven. Solutions and tools differ only by what data they align with and how they put this data to use. The reason we can say this with confidence is that every event is the result of one or more changes in state. As a result, whether or not we choose to formally recognize these changes in state from a data standpoint, they are responsible for initiating IT activities. For a comprehensive discussion of this topic, see ESG’s market summary report on Decision Analytics: Building the Foundation for Predictive Intelligence and Beyond.

For the majority of the last 20 years, enterprises have been entrenched in developing at least one system of record (SoR) to manage their data. Specialization gave rise to multiple SoRs, which drove data warehousing (DWH), master data management (MDM), data integration (DI), data quality (DQ), and enterprise application integration (EAI) needs. Informatica caught this wave and delivered products to address all of these needs.

Now that the web and, more recently, mobility have come of age, there is a transition taking place in application design. The focus is shifting from SoR to systems of engagement (SoE). This is a significant shift that involves interactions that are multi-channel, contextual, potentially socially aware, data dependent, and often performed in real time. SoE interactions will also have a distinct bi-directional M2M orientation, meaning that they may follow a variety of interaction patterns including request/reply, pub/sub, and sense/respond. What sets Informatica apart is that it provides explicit support for real-time application integration across all of these interaction patterns. This is because Informatica brings together data integration, event-driven architecture, data streaming, event processing, and decisioning. The foundation for this is an ultra-low-latency messaging transport: Informatica’s Ultra Messaging (UM) platform. With performance within optimized environments down in the 50-100 ns range, UM is clearly high performance. When you then layer on PowerCenter connectivity, Vibe data streaming (VDS), CEP for real-time data analysis, and RulePoint for decisioning, you have a comprehensive and high-performance solution to SoE data unification needs. I’m choosing to use the word unification purposely because Informatica’s combination of capabilities goes well beyond what we think of when we say data integration. Data unification is a combination of data integration (streaming, aggregation, transformation, and enrichment), analytics, and decisioning set within a real-time framework for processing and management. Although Informatica is being actively pursued by Dell, IBM, Oracle, SAP, TIBCO, and a host of smaller vendors, Informatica currently trumps them on functionality and vision.
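The request/reply, pub/sub, and sense/respond interaction patterns mentioned above can be sketched in a few lines of generic Python. To be clear, this is purely illustrative and has nothing to do with Informatica’s actual APIs; the `Broker` class and function names are invented for this sketch.

```python
from collections import defaultdict

class Broker:
    """Toy message broker illustrating the pub/sub pattern:
    publishers and subscribers are decoupled by topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

def request_reply(service, payload):
    """Request/reply: the caller blocks on a direct answer."""
    return service(payload)

def sense_respond(stream, predicate, action):
    """Sense/respond: a condition on streaming data triggers an action."""
    return [action(x) for x in stream if predicate(x)]

broker = Broker()
received = []
broker.subscribe("trades", received.append)
broker.publish("trades", {"symbol": "ACME", "qty": 100})

print(received)                          # [{'symbol': 'ACME', 'qty': 100}]
print(request_reply(lambda p: p * 2, 21))  # 42
print(sense_respond([5, 90, 12, 95], lambda v: v > 80, lambda v: f"alert:{v}"))
```

The point of the unification argument is that a real platform must support all three patterns over the same data, rather than treating batch integration and event streams as separate worlds.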

Informatica’s thorough treatment of data unification ideally positions it to address the next generation of use cases for the Internet of Things (IoT). With an estimated 50 billion devices by 2020, and nearly all of these devices producing and/or consuming data, the future will be far more data-driven, calling for even more capabilities focused on data routing, aggregation, transformation, integration, machine learning, analysis, and unification. There will also be a need for a logical and physical data-specific abstraction layer to manage how data is aggregated, transported, consolidated, and distributed. Although new standards, conventions, terminology, and architectures are needed to move IoT forward, the data-centricity of IoT activities puts Informatica at the center of a significant opportunity. While Informatica is being fairly tight-lipped about its immediate IoT plans, the direction of the portfolio over the last several years provides a very good foundation for becoming a leader in the data unification needs associated with IoT.

Posted in Application Development & Deployment, Cloud Computing, Data Management & Analytics, Enterprise Software

Proofpoint Report Exposes Details about Cybercrime Division-of-Labor and Malware Architecture

One of the more vapid cybersecurity cliché statements goes something like this: “Hacking is no longer about alienated teenagers spending countless hours in the basement on their PCs. Rather, it is now the domain of organized crime and nation states.” While this is certainly true, it is also blatantly obvious: a meaningless platitude with no details about why it is true, how these criminals operate differently than teenagers, or what the implications are.

If you want to understand these issues, I strongly suggest that you read a new threat report, Analysis of a Cybercrime Infrastructure, published this week by Proofpoint. The report follows the tactics and techniques used by a Russian organized crime group as it launched an attack on US- and European-based users aimed at stealing online banking credentials.

Reader warning: this report is a tad on the geeky side, using technical terminology like browser plug-ins, droppers, microshells, and static/dynamic injections. Nevertheless, I suggest that readers move beyond these technical points and plough through the report. Even without digesting all of the technical depth, the report can still give readers a conceptual feel for the strategies and tactics used by the bad guys.

With this in mind, here are a few of my biggest takeaways from the report:

  1. It takes a village to commit a cybercrime. Like the team of crooks recruited to rob a casino in the movie Ocean’s Eleven, organized crime is all about specialization and division of labor. Everyone knows this, but few people can talk about the actual details of who does what. This report does a great job of exploring these kinds of nuances in the cybercrime market. For example, the Russian hacking group at the center of this report purchased lists of administrator passwords from others in order to compromise sites using the WordPress open source content management system. While this group used its own homegrown traffic distribution service (TDS) to direct victims to exploit servers, the report mentions that other cybercriminals provide SaaS offerings for TDS. Finally, the highlighted Russian hacking group didn’t stop at stealing banking credentials; it also leveraged its network of compromised PCs to develop a cybercrime proxy service it then leased to other hackers. So hackers are making money coming and going.
  2. Hackers look for the path of least resistance. In order to attain a high rate of success, cybercriminals determine which of several exploits to use based upon a profile of a victim’s PC. In other words, my PC may be compromised through a Java exploit while the person sitting next to me may get pwned using an IE vulnerability. The bad guys aren’t wasting time with one-off attacks but rather are sizing up each victim, finding weaknesses, and then storming through one of several open doors.
  3. Attacks are designed to stay one step ahead of the law. It’s common wisdom that hackers test their malware against all the popular AV software to avoid detection. In this case, the Russian hackers went beyond checking the detection rates of the malicious payload by making sure to steer clear of IP addresses and URLs that might pop up on reputation lists. The bad guys also instrumented their code with “lookout” capabilities. When any AV software starts to detect their exploit, the tool notifies the group immediately. So each time Kaspersky, McAfee, Sophos, Symantec, and Trend Micro catch up, the bad guys figure out a way to disappear again. 
  4. Ease of use is part of the process. Yes, hackers are highly skilled, but they don’t have to be technical savants who can whistle into pay phones at 2,600 hertz. The report displays a multitude of administrator screens that would make sense to any reasonably competent system administrator. In some cases, hacking groups also use ease-of-use administration/operations as a way to differentiate their services from the competition. This also helps cybercrime groups delegate tasks to junior administrators and thus free up talented hackers for more high-value projects.
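The “path of least resistance” profiling in point 2 boils down to a lookup over victim attributes. The sketch below is purely conceptual; the exploit names, version checks, and profile fields are all invented for illustration, not taken from the Proofpoint report.

```python
# Conceptual illustration only: how an exploit kit might pick the first
# exploit whose precondition a victim's software profile satisfies.
# All names and version thresholds here are fictitious.
def pick_exploit(profile):
    """Return the first exploit matching the victim's profile, else None."""
    exploit_kit = [
        ("java_exploit",  lambda p: p.get("java", "").startswith("1.6")),
        ("ie_exploit",    lambda p: p.get("browser") == "IE" and p.get("version", 99) < 9),
        ("flash_exploit", lambda p: "flash" in p),
    ]
    for name, matches in exploit_kit:
        if matches(profile):
            return name
    return None  # no open door: move on to the next victim

print(pick_exploit({"java": "1.6.0_31", "browser": "Firefox"}))  # java_exploit
print(pick_exploit({"browser": "IE", "version": 8}))             # ie_exploit
print(pick_exploit({"os": "mac"}))                               # None
```

Defenders can read this the other way around: every patched plug-in removes one branch from the attacker’s decision tree.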

To mix metaphors, the Proofpoint report takes the reader “behind the curtain” to understand “how the sausage is made.” Given this, it is a worthwhile – and frightening – read for all cybersecurity participants. On a final note, the Proofpoint report provides a detailed case study of what we white hats are up against. We need to get our act together and prepare our defenses for Russian professional organized crime syndicates like the one described in this report. Alas, too many organizations still treat the cybersecurity battle as if they were still facing alienated teenagers in basements.  

Posted in End-User Computing, Information and Risk Management, Security and Privacy

Oracle Open World…& the new FS1 SAN (with video)

This year’s Oracle Open World (OOW) was – as ever – huge along just about every measurable dimension. And while the weather is seemingly always lovely (except at SFO, where “flow control” seems to have been the order of the day all September), the city is effectively unavailable to regular tourists unless they are prepared to pay the stupendous rates that a sold-out city can charge.

Stealing a page from Microsoft when it “got” the Internet (what seems like an eternity ago!), Oracle spent its time at OOW confirming that its flirtation with this cloud thing is a full-blown romance! Of course there were a ton of specific product announcements (a very beguiling new SAN product – the FS1 – being of course what caught my eye! More on that below). But this event was also about the occluded front that often accompanies clouds: that occlusion being the change in role for Larry Ellison and the emergence of the Safra-Mark show (lest there be one more “Hurd-ing Katz” jibe…). The change was managed effortlessly, with Larry revelling in his “lead techy” role. What were the key takeaways? My colleague Nik Rouda and I already commented in our joint blog about Oracle OpenWorld, but here’s a bit more depth in one of our ESG on Location video reports…

While I was of course fascinated by the big picture stuff, my myopia always sets in for the storage stuff. The highlight for this year was the long-awaited arrival of a new flagship enterprise SAN product from Oracle – which, as I mentioned above, is called the FS1. FS stands for “Flash Storage” (or was it “FlagShip”!?), which is how it was designed and built; although its ability to use that flash storage (for performance, of course) in any percentage mix with HDDs (for bulk, inexpensive capacity), plus its plethora of functions, means that it could just as easily be called “Flexible SAN”. That flexibility is borne not just of all those standard operational features one has come to expect these days (snaps, thin provisioning, replication, HA, etc.) but is helped by the data/business-focused abilities Oracle has added: not just sub-LUN auto-tiering, but extended QoS abilities and secure system partitioning. The overall package looks like it could be attractive to any enterprise user…but of course Oracle sweetens the attraction for its broader-use customers via close integration – and added features – with its own “red stack” products.

The ZS3 has made considerable strides for Oracle in the (mainly) file/NAS world, and this new FS1 has the right stuff to do the same for Oracle’s market share in the (mainly) block/SAN arena.  The storage market is fascinating right now – both in and of itself, and also when viewed against the larger industry backdrop of such things as convergence, big data, and clouds; all of which, we now know, Oracle is in love with!   

Posted in Cloud Computing, Data Management & Analytics, Enterprise Software, IT Infrastructure, Storage

HP – Parsed and Future

So, after a furor of news, we can all settle down now in the knowledge that there will be two HPs. I so wanted one to be called Hewlett and the other Packard! Maybe with a lower-case “i” in front of each name for a contemporary nod and wink to the founders. By the way, if ever you are having trouble remembering which HP is which, they did at least make that easy for us: the ink is in the Inc.

Frankly, I really don't have a lot to add to all the financial excitement: spin-outs seem to be the name of the game right now (think IBM and eBay, re: PCs/servers and PayPal, respectively). But, hey, when a company splits and still has two “siblings” each north of $50B in revenue, one feels one should mark the occasion. So, farewell, HP; long live HPs. And I don't say that just to be cute: HP is one of a handful of companies where – outside of the day-to-day fisticuffs of sales – even its competitors root for it…it is part of the fabric of IT and indeed of the US.

But what does this split – when it actually happens – mean for the area I focus on…storage systems? In the short (now) to medium (say 2016/7) term, I really can't see that it is going to mean that much. Of course there have been other recent, well-publicized rumors swirling around HP (of the 3-letter EMC variety!) but for the sake of this, I am assuming they are just that…rumors. At face value, the HP storage business – which has actually been doing pretty well compared to its big competitors of late – remains just a part of the business: of course, it is a key element in the Converged Infrastructure that HP has been driving [towards] for some time now, but then again it already was. Now, all the blurb around the logic for the split talks about increased focus, nimbleness, investment, and so on, but I have not seen any major lack of focus or nimbleness (indeed, quite the opposite) in the HP Storage ranks of late…and if investment resources were tight (is there anywhere they are not felt to be so!?), it is hard to foresee any significant immediate effect when roadmaps in this business take many years to manifest as GA products. I'm not negative on the change…but I simply don't see a great deal of upside or downside as far as the storage unit and its customers/prospects go. Basically, if you like the existing HP Storage story, then you should feel at least as happy as you were already to deal with it. And if you happen to prefer some other vendor right now, then I wouldn't hold off any decisions expecting dramatic new choices anytime soon.

Like many, I really do wish HP(s) well. Some things are perplexing, to be sure: quite how splitting the company into two leads to extra layoffs (as HP also announced) I fail to grasp…although I assume it is simply an admission that there was [at least seen to be] more to cut in the first place. Aside from the internal organizational streamlining and the financial analysis of the split, the fact remains that HP – however many operating companies or divisions there are – still simply has to execute. In ESG's last storage trends research, one of the questions posed was this: “In general, what would you consider to be the most important criteria to your organization when it comes to selecting a storage vendor/solution?” The number one response (each respondent could check five criteria) was “Total cost of ownership” for 65% of respondents, followed by “Service and support” at 53%. You have to look a long way down the criteria list to get to things like “Existing relationship with vendor” (22%) and “size/financial stability of vendor” (just 15%). In other words, product, value, and service matter a lot….the business card and scale of the vendor much less so. A split HP is no real guarantee of more future success in the storage arena (where it is/was trucking along pretty well), whereas executing against its existing strategy and product roadmap is.                       

Posted in IT Infrastructure, Storage

Leading Enterprise Organizations Have Established a Dedicated Network Security Group

When an enterprise organization wanted to buy network security equipment a few years ago, there was a pretty clear division of labor.  The security team defined the requirements and the networking team purchased and operated equipment.  In other words, the lines were divided.  The security team could describe what was needed but didn’t dare tell the networking team what to buy or get involved with day-to-day care and feeding related to “networking” matters.

This “us-and-them” mentality appears to be legacy behavior.  According to ESG research on network security trends, 47% of enterprise organizations now claim that they have a dedicated group in charge of all aspects of network security.  Additionally, network security is done cooperatively by networking and security teams at 26% of organizations today but these firms insist that they are in the process of creating a dedicated network security group to supplant their current division of labor. 

As part of its data analysis, ESG built a scoring system it used to segment enterprise organizations into three groups (based upon their infosec skills, resources, and practices):  Advanced organizations (approximately 20% of the total survey population), progressing organizations (approximately 60% of the survey population), and basic organizations (approximately 20% of the survey population). 

When viewed through this segmentation model, the results are telling:  64% of advanced organizations have a dedicated network security group, 50% of progressing organizations have a dedicated network security group, and 36% of basic organizations have a dedicated network security group.  Based upon this information, ESG concludes that there is a strong correlation between cybersecurity best practices, infosec maturity, and organizations with a dedicated network security group.
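The strength of that correlation is easy to see with some quick arithmetic on the figures above (the 20/60/20 segment weights are the approximate sizes ESG cites; the rollup math is mine):

```python
# Share of each segment with a dedicated network security group (ESG data).
dedicated_pct = {"advanced": 64, "progressing": 50, "basic": 36}

# Approximate segment sizes from the ESG scoring model (roughly 20/60/20).
weights = {"advanced": 0.20, "progressing": 0.60, "basic": 0.20}

# Advanced organizations are nearly twice as likely as basic ones
# to have a dedicated network security group.
ratio = dedicated_pct["advanced"] / dedicated_pct["basic"]
print(round(ratio, 2))  # 1.78

# Rolling the segments back up lands near the 47% overall figure;
# the small gap comes from the "approximately" in the segment sizes.
overall = sum(weights[s] * dedicated_pct[s] for s in dedicated_pct)
print(round(overall, 1))  # 50.0
```

In other words, the more mature the organization, the more likely it is to have already made this organizational move.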

This organizational change makes sense for CISOs and IT organizations but as it gains strength it will impact enterprise information security behavior and the market at large in several ways:

  • Network security will integrate with other infosec components.  In the past, firewalls, IDS/IPSs, and network gateways were grounded in the networking domain.  Now that these systems belong to a network security group, they are being integrated with other cybersecurity technologies like endpoint security and security analytics.  The goal?  Weave network security into an enterprise-class infosec technology architecture. 
  • Large organizations are balancing network performance and security.  In the past, network security controls almost always ran in passive mode by monitoring/alerting but not blocking suspicious packets.  This strategy was instituted to guard against false positives disrupting critical network traffic but there seems to be a change in the air.  Many organizations are now automating network security remediation efforts in order to decrease the network attack surface, prevent attacks, and quarantine compromised assets.  Given the financial impact of security breaches, automated remediation will only increase – especially as network security technology gains tighter integration with global threat intelligence. 
  • The network security market opens up.  When the security team’s role was limited to defining requirements, it was easy for the organizations to purchase network security equipment from the same people that sold switches and routers.  Independent network security groups are breaking this historical bond as they look for best-of-breed security efficacy and strong integration with other security technologies across the enterprise.  This doesn’t mean that Cisco and Juniper are out of the game but it does mean that their relationships with networking buyers may carry less weight in future purchasing decisions.  Yet another reason why Cisco purchased Sourcefire. 
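The shift from passive monitoring to automated remediation described above amounts to a confidence-weighted policy decision. Here is a minimal sketch of that logic; the alert fields, threat-intel check, and action names are hypothetical stand-ins, not any vendor’s actual API.

```python
# A minimal sketch of automated network-security remediation policy.
# Block or quarantine only when confidence is high enough to outweigh
# the risk of a false positive disrupting legitimate traffic.
def remediate(alert, threat_intel, actions):
    """Append a (action, target) decision for one alert."""
    if alert["src_ip"] in threat_intel:
        # Corroborated by a threat-intelligence feed: block at the edge.
        actions.append(("block", alert["src_ip"]))
    elif alert["severity"] >= 8:
        # High-severity internal finding: quarantine the compromised asset.
        actions.append(("quarantine", alert["host"]))
    else:
        # Otherwise fall back to classic passive mode: alert, don't act.
        actions.append(("alert_only", alert["host"]))
    return actions

log = []
remediate({"src_ip": "203.0.113.9", "host": "pc-1", "severity": 5}, {"203.0.113.9"}, log)
remediate({"src_ip": "198.51.100.2", "host": "pc-2", "severity": 9}, set(), log)
print(log)  # [('block', '203.0.113.9'), ('quarantine', 'pc-2')]
```

Tighter integration with global threat intelligence effectively moves more alerts into the first, highest-confidence branch.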

The ESG data suggests that network security is moving away from the gear that transports bits and closer to the technologies that protect the bits.  In my humble opinion, that’s a good thing.  As this transition gains strength, it should truly open up the market to network security vendors with more holistic infosec architectural strategies.  Good news for security firms like Check Point, FireEye, Fortinet, McAfee, and Palo Alto Networks.  HP and IBM should also experience a network security renaissance, driven by their network security, security analytics, and professional/managed services offerings.  

Posted in Information and Risk Management, IT Infrastructure, Networking, Security and Privacy

Data Protection Appliances are better than PBBAs

Too many folks categorize every blinky-light box that can be part of a data protection solution as a “Purpose-Built Backup Appliance” (PBBA). But the market isn't just a bunch of apples with an orange or two mixed in; data protection appliances (DPAs) can be apples, oranges, bananas, or cherries — and if you lump them all together, all you have is a fruit salad.

So, let's reset the term to understand the market:

  • “Backup” alone isn't enough — so name the all-encompassing category for what it should be delivering: “Data Protection”
  • And there isn't just one kind of appliance, there are at least four:
    • (real) Backup Appliances
    • Storage / Deduplication Appliances
    • Cloud-Gateway Appliances
    • Failover Appliances

Check out this video to see how I look at Data Protection Appliances:

As always, thanks for watching


Posted in Data Protection, Information and Risk Management

Could VeeamON be the next MMS?

This week marks the first VeeamON conference, “Availability for the Modern Data Center,” in Las Vegas.

As I listened to the side conversations and such, I was reminded of the special-ness of Microsoft Management Summit (MMS). Not MMS 2010+, when Microsoft started shoe-horning everything in the Server & Tools line-up into the event, before eventually killing it (and TechEd after it) … but MMS 1995-2005, which was as much about “community” as it was “technology.”

Veeam has very smartly done something that other data protection vendors several times larger have failed to do — create a community of avid influencers and advocates made up of Microsoft MVPs, VMware vExperts, and an army of well-intentioned backup folks who are passionate about telling people how Veeam saved their jobs by reliably and quickly recovering a VM. Many larger companies have tried to programmatize that kind of community initiative, and most haven't seen success on any scale. But Veeam has … so a conference is the next logical step.

The question will be whether Veeam can convert the cyber-community that advocates its products year-round and parties with Veeam at TechEd/VMworld into an equally engaged in-person audience. Can Veeam maintain or build on that community vibe at an in-person event? If it can, and then builds anticipation for VeeamON 2015, then lightning will have struck and VeeamON could be for many what was revered about MMS.

There is a notable difference with VeeamON over MMS, though. Veeam is adamantly 100% channel, to the degree that it doesn’t even maintain a direct sales team. So, VeeamON is as much for partners (vendor, channel, and cloud) as it is for customers — which is different from the much more enterprise vibe of MMS in its latter years. Another difference is the accessibility of Veeam execs walking throughout the venue and striking up personal conversations throughout the day — something that again shows the strength of the “community” of the Green Army. With Veeam aspiring to be the next $1B player in IT, there are more parallels one could draw with MS System Center, which also became a $1B business during MMS's heyday. Veeam is doing it without a juggernaut behind it, though its partnerships with MS, VMware, NetApp, Cisco, HP, ExaGrid, and others don’t hurt.

The event itself is as much style (at the Vegas Cosmopolitan) as it is substance (deep technical breakouts) — so the rest remains to be seen. Congrats to Veeam on what is looking to be a great start to what could be a powerful event in IT availability, through data protection.

Posted in Data Protection, Information and Risk Management