To a newborn every joke is new, so I specialize in old jokes, retelling them again and again.

Floorshrink diaries

TLDR nr. 1: “It’s Difficult to Make Predictions, Especially About the Future”

March 25, 2019 - Floorshrink

I am not quite sure whether the observation above comes from Niels Bohr or Yogi Berra, but I wholeheartedly believe in it. Some companies make predictions for a living, and sometimes they get them wrong. A few months ago Dave Cappuccio of Gartner made a prediction: “By 2025, 80% of enterprises will have shut down their traditional data center, versus 10% today.” While I am convinced that for many workloads and many companies moving to the public cloud is the only reasonable option, I think this prediction is wrong. This article attempts to summarize why.

What customers actually want

If you ask CEOs what they need from their IT department, they are likely to answer with some of these demands:

  • Increased speed to achieve economic return on their investments
  • The ability to respond quickly to any market change
  • Increased productivity of any group in their firm

If you ask the same question of the CIOs of these firms, you are likely to get answers like these:

  • Increased productivity of their developers
  • To run any workload at an optimal location (own DC, public cloud, edge)
  • A two-way street between their on prem DCs and the cloud (the ability to switch from one provider to another or back to their own DC without major cost or time implications)
  • IT infrastructure elasticity: optimal utilization of their existing assets
  • Consuming new functionality without the risk and hassle of upgrading

Summarizing these wishes, as Wikibon puts it: customers want a cloud experience where their data lives. This is not necessarily a public cloud provider, though. One thing is for sure: most of the prevailing myths about the risks of the public cloud (e.g. data security, reliability) are simply false; the top-tier cloud players spend more energy and resources on cyber security and disaster recovery capabilities than even the largest enterprise clients. Simply put, they can afford it and will soon be the best in these areas. If you add elasticity, scalability and the chance to avoid upfront CAPEX, you get an impressive combination that is compelling to any CEO. This has created tremendous “cloud first” pressure on enterprise CIOs.

Smoking a pipe while playing the flute

Customers want to make a safe bet while fearing being milked through vendor lock-in. Betting on a single vendor can be good, since you spend less on integrating your infrastructure components. “Nobody gets fired for buying IBM,” as the adage from the 1960s goes. On the other hand, when a well-established RDBMS vendor started twisting the arm of its customers by raising the support fees for its products, other vendors could ramp up their budding RDBMS businesses on the resentment against the incumbent.

Cloud providers also want to lock you in: e.g. they let you ingest your data for free, but extracting the same data may cost you your proverbial shirt. As a result, most enterprise customers are likely to end up with three potentially mismatched ecosystems on their hands: their own data center and two public clouds, while avoiding the differentiating features of any single vendor.
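
To see why the “free in, expensive out” asymmetry matters, here is a back-of-the-envelope sketch. The rates below are illustrative assumptions, not any provider’s actual price list; egress pricing varies by provider, region and volume tier.

```python
# Illustrative only: these per-GB rates are assumptions for the sake of
# the arithmetic, not a real provider's price list.
INGRESS_PER_GB = 0.00   # ingesting data is typically free
EGRESS_PER_GB = 0.09    # a commonly quoted order of magnitude for egress

def migration_cost_usd(terabytes: float) -> float:
    """Rough cost of pulling a dataset back out of a public cloud."""
    gigabytes = terabytes * 1024
    return gigabytes * EGRESS_PER_GB

# Moving 100 TB in: free. Moving the same 100 TB back out: ~9,200 USD
# at these rates - before counting the re-integration work itself.
print(f"100 TB egress: ${migration_cost_usd(100):,.0f}")
```

At real enterprise data volumes (petabytes), this asymmetry alone can turn “switching providers” into a budget line item, which is exactly the lock-in effect described above.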

Where the market is moving

The following list is far from complete, but it gives an overview of what is happening in the public cloud arena:

  • Getting closer to the customer’s data: Azure Stack, AWS Outposts and GKE On-Prem are all acknowledgements of the fact that – while customers love the cloud experience – they stick to their on prem DC investments (mostly due to the stickiness of the application layer), and integrating the two realms – making cloud bursting real – is a daunting task. These new offerings are likely to make this integration easier. The AWS–VMware cooperation tackles the same problem from the other direction, providing the existing VMware experience in a public cloud. The giants keep investing in this area, e.g. Microsoft’s acquisition of Avere.
  • Fighting latency and legal issues: several years ago I pinged a host in the Amsterdam DC of Azure from Johannesburg; the 400 ms round trip was a showstopper for most UI-intensive apps. It is no surprise that MSFT placed its first African Azure DCs in Johannesburg and Cape Town. On the legal side, the following quote is from a friend of mine: "the European Union is determined to remove obstacles to move data from one member to another, as evidenced by a recent regulation requiring member states to remove data residency barriers (framework for the free flow of non-personal data in the European Union, Regulation (EU) 2018/1807). Additionally, the latest European Banking Authority recommendation has some soft statements for preference for data location in EEA and no requirements to limit data location to national boundaries."
  • Moving to containers and microservices: this is probably the most important change, with bigger implications than what virtual machines brought 10+ years ago. The caveat is that microservices require rebuilding your entire application stack. The promise – a déjà vu of the JVM some twenty years earlier – is write once, run anywhere. This time it may work.
  • Custom chipsets for specific workloads – a bit controversial, since general-purpose chips have ruled the industry for the last 15 years – the largest players buy chips in quantities that allow them to request custom designs. Some of these new chips will target specific workloads (e.g. artificial intelligence, where the battle between GPUs and CPUs is already on); some might just be slightly more efficient at a generic task or consume a bit less energy, but at AWS/Azure scale this can make a difference.
  • Consolidation: if you miss the boat in development, you may catch up with an acquisition; this is what IBM did by buying Red Hat. The Dell–EMC merger a few years ago falls into a similar bucket: staying competitive together. The recent acquisition of Mellanox by NVIDIA is also cloud-related: it will help NVIDIA realize its plan to become a top HPC provider, competing head to head with Intel for AI/ML-specific workloads. Amazon’s close partnership with VMware – albeit not a takeover – also shows clear signs of consolidation.
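
The latency point above is easy to reproduce yourself. A minimal sketch: measure the TCP connect round trip to a remote host (the host name below is a placeholder, and the 400 ms figure is the value quoted in the Johannesburg-to-Amsterdam anecdote, not a fresh measurement), then see what such a link does to a chatty UI.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Measure one TCP connect round trip to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the handshake alone is enough to estimate the RTT
    return (time.perf_counter() - start) * 1000

# e.g. tcp_rtt_ms("some-azure-host.example.com")  # placeholder host name

# Why 400 ms is a showstopper: a UI screen making 10 sequential backend
# calls over such a link spends 4 full seconds on latency alone,
# before any server-side work has happened.
rtt = 400.0  # ms, the round trip from the anecdote above
print(f"10 chatty calls: {10 * rtt / 1000:.1f} s of pure waiting")
```

This is why DC proximity matters so much for interactive workloads, and why the new in-region DCs change the calculus.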

What a local datacenter provider can do to stay competitive

The good news: the hyperscale cloud providers – Amazon, Microsoft, Google, IBM and Alibaba – are not likely to establish a major DC footprint in Hungary soon; we are simply too small for that. Beyond this, many Hungarian SMB customers do not need worldwide coverage (e.g. servers in Hong Kong with disaster recovery in California); they may not want a CDN (Content Delivery Network) or chipsets built for AI, but they do need to comply with local and EU data regulations, and they like the proximity of the DC to their business. It seems that the Hungarian local regulation is stricter than the equivalent EU rules. Side note: even the otherwise tough GDPR does not place any restriction on the storage of non-personal data (within the EU boundaries).

The bad news: colocation, server hostels, and leased physical servers and VMs do not cut it; local DC providers have to offer more if they plan to survive. They also need to keep in mind that the big boys will eventually go beyond the current DC services: they will offer specialization, layers of additional security, management and monitoring services, and 3rd-party integrated SW functionality (or their own productivity suites, using their directory services and storage) that will be difficult to copy. Some players (e.g. Rackspace) have already adopted a dual strategy: they still offer OpenStack (their own stuff) while providing services on top of all the major public cloud providers. I gave it a try and listed what local DC providers could do to stay relevant against the giants over the next ten years.

  1. The immediate homework: automated provisioning, patching and monitoring of their server offerings, installing common workloads from a canned image (RDBMS, noSQL, application and web servers, frameworks)
  2. Giving more: load balancing, alerting, log analysis, security scans, archiving, telco neutrality (same latency from all major local telco networks)
  3. Going beyond VMs: containers and microservices will become commonplace within a few years; local offerings have to be compatible with them, i.e. they will have to provide the services listed in points 1 and 2 for containers as well, not just for VMs.
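
The monitoring and alerting homework in points 1 and 2 can be surprisingly small to start. A minimal sketch, assuming a hypothetical inventory of endpoints (the host names below are placeholders, not real infrastructure):

```python
import socket

# Hypothetical inventory of services a local DC provider might watch;
# hosts and ports are placeholders for illustration only.
SERVICES = [
    ("db-primary.example.local", 5432),   # RDBMS from a canned image
    ("web-frontend.example.local", 443),  # web server
]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refusals and timeouts
        return False

def check_all(services):
    """Yield (host, port, up) for every monitored endpoint."""
    for host, port in services:
        yield host, port, is_reachable(host, port)

# In a real setup this loop would feed an alerting pipeline (point 2)
# instead of printing to stdout.
for host, port, up in check_all(SERVICES):
    print(f"{host}:{port} -> {'UP' if up else 'DOWN (alert!)'}")
```

The same probe loop applies unchanged whether the endpoint is a leased VM or a container behind a load balancer, which is exactly why points 1 and 2 carry over to point 3.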

In a nutshell: copy most of the things the big boys do, at a local scale and potentially with some local flavor.

Where we will be two years from now

The warning in the title applies: these are my bets, and I will be happy to evaluate them later on:

  • The top-tier public cloud providers will widen their lead over the bulk of the industry in their core strengths: scalability, elasticity, worldwide footprint and added services.
  • Their market share will force new applications to be compatible with them out of the box, potentially sold in their marketplaces first.
  • As my favorite fridge magnet from the Computer History Museum puts it: “Software makes hardware happen.” Put more elaborately: software-defined networks will soon rule the DCs (servers with fixed IP addresses maintained in a spreadsheet should go away asap), and HW manufacturers will “mimic” cloud providers: hyperconverged infrastructure (e.g. the offerings of the Dell/EMC–VMware duo) will become a compelling choice for any enterprise or DC provider revamping its data centers.
  • The hurdle that will slow down cloud supremacy is the stickiness of the application layer. Every large enterprise has accumulated a portfolio of applications over the years, and the sheer effort of porting these apps to the cloud kills the business case (not to mention the long-term lease agreements on the DC space).
  • The cloud “operating model” will become dominant: regional players, telcos and enterprises alike will adopt it.
  • This one is from Wikibon: “The long-term industry trend will not be to move all data to the public cloud, but to move the cloud experience to the data.”

The final word

While Mark Twain was on a worldwide speaking tour, local newspapers reported on his serious health issues; one of them even published his obituary. Twain’s answer seems appropriate for rebutting the Gartner prediction on the demise of the on prem DC: “Reports of my death are greatly exaggerated.” For sure, it will require a lot of change and investment from the DC guys, but this game is not quite over yet.

