
VMworld 2017: VxRack SDDC 2.0 and VMware Cloud Foundation 2.2


[UPDATED 8/29/2017 6:40am PT – minor, but important typo corrections]

VxRail’s “bigger sibling”, VxRack SDDC, also just got better and stronger.

In a simple sentence: VxRack SDDC is for customers who have standardized on VMware, are ready for network transformation with NSX, and want the physical network brought into the system and lifecycle management (LCM) scope.

When I say “bigger sibling” – I don’t mean scale.  We have VxRail customers with hundreds upon hundreds of appliances in a single datacenter.   HCI Appliances can scale as big as you want.   “Bigger” in this context means that it’s for customers ready for NSX, and ready to look at the network as part of their system. 

Yes, there is a correlation with scale and customers thinking about network as “in scope”, but it’s not causal.

VxRack SDDC is built on top of VMware Cloud Foundation.  It is the “consume” choice for VCF, where vanilla VCF is the “DIY” choice (read here to understand what I’m talking about).   VxRack SDDC doesn’t stop with VCF, but it has VCF at its heart.   So what’s new in VxRack SDDC 2.0?

1) VMware Cloud Foundation 2.2.  This means a lot.

a) vSphere 6.5U1

b) vSAN 6.6.1

c) NSX 6.3.3

d) Support for automated install of Horizon 7.2, as well as support for customers to install vRA 7.3 and vROps 6.6.1 and later.

e) A series of important architectural changes, including: a new HMS service inside SDDC Manager; shrinking the SDDC Manager element to 2 simple VMs; and backup/restore of SDDC management components.

I want to reiterate how important the simpler concepts of a management domain and tenant domain are, as well as moving hardware management services out of the networking switch.   This means that scaling up VxRack SDDC configs just got a lot simpler.

It’s also a critical step for the next major releases of both VMware Cloud Foundation and VxRack SDDC.   More on this later – and I’ll need to tiptoe around roadmap.

Beyond the VMware Cloud Foundation goodies, there are more hardware options (up to 40 total node types):

[image]

There are some updates to the networking domain also:

[image]

VxRack SDDC 2.0 will be orderable… NOW.  In fact, it’s orderable as of 8/10.   GA for net new customers will be 9/20 as we gear up the factory and Release Certification Matrix for the new system build, and upgrades for existing VxRack SDDC customers will be in Q3.

VxRack SDDC is ready, selling, and being deployed at customers around the world.


We have customers running in 6 of 7 continents (darn Antarctica) – and they have gone in smoothly.   We’re working hard to apply all the learnings of VxRail support and sustaining as the business ramps.

...Now what’s most interesting to me is what we have in store for VxRack SDDC NEXT.

  • We have about two orders of magnitude more VxRail customers than VxRack SDDC.   This is somewhat natural, for two reasons: 1) not every customer is ready to transform their network; 2) VxRack SDDC, as an HCI Rack-Scale System, starts much larger than VxRail, which is an HCI Appliance.   Imagine if we could bring the hardware support of VxRail and VxRack SDDC together as one.  That would mean that customers could start with VxRail, and move to VxRack SDDC when they are ready.
  • Customers clearly need integrated hardware lifecycle management, integrated ESRS, and integrated hardware/software stack views.   VxRail Manager does this for VxRail.  Imagine if we could do the same thing with VxRack SDDC – maybe even with a common software model.
  • The customer needs for integrated data protection, cloud storage, and filesystem capabilities are no different for VxRack SDDC customers than they are for VxRail customers.  Imagine if we could make common sets of capabilities.  After all – the needs for data replication at scale are MORE important for larger deployments, not less.  We already have RecoverPoint for VMs (integrated with VxRail) customers.
  • While vSAN covers many use cases today, there are times when having the choice of additional storage stacks for “side car” use cases that need something specific would be powerful.  We have many customers who have large 2-tier ScaleIO deployments, and want the ability to share ScaleIO across vSphere clusters and get the extreme horizontal pooled scaling/IOps that they get with ScaleIO.  We have many customers who, while they see the path forward being 90% vSAN for their VMs, have some workloads that need specific data services.  Imagine if we could make that an “and” choice – without breaking the incredible simplification they get with VxRack SDDC and VCF.
  • There’s clearly an opportunity to use more powerful open networking choices in VxRack SDDC.  Imagine if we could bring 50/100GbE at the same price points we can in the system today, with NSX integration like you have never seen.  And imagine if we could integrate it with our datacenter fabric systems into a single lifecycle for the entire datacenter.  Cool.
  • There’s clearly an opportunity to refine the VxRack supply chain (for both VxRack SDDC and VxRack FLEX) in the same way we have with VxRail.   As the new Dell EMC, we’ve cranked on this topic for VxRail for a year, and it’s improved speed and velocity by an order of magnitude.   Furthermore, if we can tap into the power of the Dell EMC PowerEdge 14G platforms, imagine what that would do.
  • We have made VxRack SDDC a fully supported option for the Enterprise Hybrid Cloud (which builds on the VMware Validated Design program) and early access for Native Hybrid Cloud deployments.   Interestingly, we’re standardizing on VxRack SDDC for our VMware and Pivotal Ready System offers – which take the base “cloud foundation” of VxRack SDDC and VMware Cloud Foundation and aim to make the vRealize suite and Pivotal stacks (both PCF and Kubo – more on this tomorrow) simpler to deploy and lifecycle.   While today those need to be two things – based on use case support and lifecycle questions… that’s not intrinsic.   As VMware, Pivotal and Dell EMC increasingly align here – there’s the power to do something really cool.  Radical simplification.

That gets your mind spinning.   It’s not the idle musing of your neighborhood Virtual Geek – but rather the things that the teams are working on furiously, and you can expect to come to a VxRack SDDC at a theater near you in the near future.

It’s the same team at Dell EMC that works on VxRail and VxRack SDDC – and their mission in life is simple.  Be the BEST HCI Appliance and Rack Scale systems for customers who have standardized on VMware.   In total lockstep, in total alignment.

People may wonder about the fit for VxBlock and VxRack FLEX.

Dell EMC must, of course, have the most facemeltingly awesome CI/HCI choices for customers who prefer a “horizontal” approach to infrastructure (VMware and non-VMware) – with all the openness and composability that means – and that’s what VxBlock and VxRack FLEX are for.   For more on our thinking on this, please read this post.  VxRack FLEX and VxBlock compete and win in the market where people favor flexibility and stack choice over the simplest vertical stack integration (VxRail, VxRack SDDC).

For the VMware faithful, the VxRail and VxRack SDDC team are a machine, aligned with VMware and partnered with the VCF team, ultimately aligned with you.


VMworld 2017: Making DIY HCI easier with vSAN Ready Nodes


While clearly I have a point of view (perhaps naturally a bias) towards customers choosing the “consume” path and choosing VxRail and VxRack if they want Hyper-Converged Infrastructure, I don’t want ANYONE to misunderstand and think that Dell EMC doesn’t lead the “DIY” route.  We do.

Consider:

  • YES, a vSAN Ready Node is, in essence, a server.  It’s a server on a VMware HCL.   vSAN Ready Nodes don’t differentiate even nearly as much as VxRail.   That said, not all vSAN Ready Nodes are created equal.
  • YES, the easiest “easy button” (and on this VMware and Dell EMC agree) for a customer who has standardized on vSphere and is ready for vSAN is VxRail.   BUT, there are huge numbers of vSAN Ready Node customers who chose the “DIY” route.

More vSAN Ready Node customers choose Dell EMC PowerEdge than any other vSAN Ready Node – by a long shot.


Why?  Well:

  • DIY always gets access to tech faster – though of course, the customer does testing/validation at the system level.   That means that Dell PowerEdge 14G is supported now.  Support for 14G comes for VxRail in November.  If you need the latest hardware NOW (and can’t wait a couple months – even though you will likely spend that time doing stack-level testing), then vSAN Ready Nodes.
  • Dell EMC Ready Node configurations are always accurate – some vendors don’t go beyond the base HCL test.  For example, vSAN 6.0 Ready Nodes will not ship with 512e HDDs, while vSAN 6.5/6.6 Ready Nodes can ship with either 512e or 512n – so customers always get known-good configs.
  • More configurability than the vSAN Ready Node competition – 3 models (640, 740, 740xd) available now, plus 6 more launching in September.  Massive configure-to-order flexibility across CPU, memory, drive sizes and networking.   Interestingly, even more configuration support exists in VxRail – but if you’re comparing vSAN Ready Nodes, there are FAR more variations with Dell EMC.
  • BOSS boot (bigger and more robust than SD cards, and allows for local log file capture) or redundant 16GB SD cards
  • Dell EMC OpenManage™ Integration for VMware vCenter brings all of OpenManage’s hardware management right into vCenter, so there’s no need to switch management apps
  • ProSupport™ (vSAN-trained, contextual hardware support for an SDS environment) and ProDeploy™ (installation and configuration of the hardware and the vSAN software)

VMworld 2017: Thoughts on VMware Cloud on AWS


I love that finally the VMware Cloud on AWS is now out in the wild, and customers can try, buy, evaluate.   This may surprise some people – after all, from Dell EMC eyes, isn’t it bad?

Not these Dell EMC eyes.  I think it’s great.

The Dell Technologies cloud strategy can be summarized in a couple of strategic principles:

  • We think that Cloud is really an operating model, not a place.
  • We think that there is a clear market need for both on-premises and off-premises clouds.   Hybrid is the answer.   This means that technologies which “connect” on and off-premises models are very important.   NSX is an example.   vRealize is an example.  Data Protection is an example.
  • We think that there will be multiple cloud stacks – all optimizing for slightly different things and delivering multiple different services.   This means that technologies that “span” different clouds are important.  Pivotal Cloud Foundry is an example.  Pivotal Container Services is an example.  Boomi is an example.

Simple, and obvious right?   Operating model, not a place.  Hybrid.  Multi.

The biggest challenge I see at customers is that they are stuck – intractably – trying to navigate choices.   Confused as all get out.   Listening to whoever says “I have the answer”.  It’s a great time to be a consultant :-)

VMware on AWS is something powerful in a time of confusion – a simplification.

Customers using the VMware software stack on premises now have a choice where they can directly extend that software defined compute, storage, network and run it on AWS hardware, in AWS datacenters.  

I was deeply skeptical at first – not understanding why a customer would want an IaaS on an IaaS – but that’s not what VMware Cloud on AWS is.

The VMware Cloud on AWS is the SDC (vSphere), SDS (vSAN), and SDN (NSX) stacks from VMware running on AWS bare metal – provisioned and supported by VMware, and something they can buy directly from VMware.

It’s familiar to customers – to the point where it looks like a linked vCenter datacenter in the web client.   VMware Cloud on AWS runs any workload you can run on premises – not just cloud native ones. 

  • Yes, I think it’s really cool that VMware Cloud on AWS helps customers use the public cloud in a way that is easy – not all apps can be rebuilt using cloud native principles (it’s not a technical barrier – sometimes there are other barriers).   It’s also a great way for VMware IP to be extended into the public cloud.
  • Yes, I think it’s really cool that the Dell EMC Data Protection Suite for VMware is already built in and ready to go with the VMware Cloud on AWS.
  • Yes, it’s not perfect – but it’s ready enough for day one.   There are evident features/functions needed on the roadmap (off the top of my head: flat overlay NSX networking/peering, the ability to span AWS AZs for more resiliency, etc.) but those will come over time.

But – what I REALLY like about it may not be what you think.

I like that it’s SIMPLE & CLEAR.   I like that it will SHINE A LIGHT on a topic of enormous confusion.

In one easy step, it clears up all the **ahem** fog there is about cloud being about straight up economics.  

I’m so tired of people making claims (without doing the math) about the economics of steady-state workloads, or workloads with high persistence:compute ratios, in public cloud.

Watch this.   I have a homework assignment for you dear reader! 

Look at the economics of VMware Cloud on AWS with pay-as-you-go, 1 year commit, and 3 year commit pricing.   Assume a solid, but normal consolidation ratio.

Then do a comparison with the relative economics of the DIRECT on premises peer with the same assumptions.   The direct peer is VxRack SDDC.

Do that comparison looking at VxRack SDDC – as 100% capital.  This is close to analogous to the 3 year VMware Cloud on AWS model, assuming a 3 year depreciation term.    Also do that comparison with VxRack SDDC via CloudFlex (which delivers a cloud economic model with built-in annual price declines, 100% opex monthly payments, and full right to scale down/return after 1 year).    This is close to analogous to the 1 year VMware Cloud on AWS model.   In both cases, make some assumptions about power/space/cooling – but otherwise it’s a 1:1 compare.
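If it helps to structure that homework, here’s a minimal sketch in Python. Every number in it is a hypothetical placeholder – not real VMware Cloud on AWS or VxRack SDDC pricing – so swap in your actual quotes:

```python
# Hypothetical TCO comparison sketch. Every number below is a placeholder,
# NOT real pricing -- plug in your own VMware Cloud on AWS / VxRack SDDC quotes.

VM_COUNT = 400               # steady-state VMs to run
CONSOLIDATION_RATIO = 15     # VMs per host (a "solid, but normal" ratio)
TERM_YEARS = 3               # compare over a 3-year horizon

hosts = -(-VM_COUNT // CONSOLIDATION_RATIO)  # ceiling division

# VMware Cloud on AWS (hypothetical per-host rates)
paygo_per_host_hour = 8.37           # placeholder pay-as-you-go rate
commit_3yr_per_host_year = 40_000    # placeholder effective annual rate, 3-yr commit

vmc_paygo = hosts * paygo_per_host_hour * 24 * 365 * TERM_YEARS
vmc_3yr = hosts * commit_3yr_per_host_year * TERM_YEARS

# On-premises VxRack SDDC as 100% capital, 3-year depreciation (hypothetical)
capex_per_node = 35_000           # placeholder hardware + software per node
facilities_per_node_year = 1_200  # placeholder power/space/cooling estimate

onprem = hosts * (capex_per_node + facilities_per_node_year * TERM_YEARS)

print(f"{hosts} hosts over {TERM_YEARS} years")
print(f"  VMC pay-as-you-go : ${vmc_paygo:,.0f}")
print(f"  VMC 3-yr commit   : ${vmc_3yr:,.0f}")
print(f"  VxRack (capex)    : ${onprem:,.0f}")
```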

PLEASE share your findings in the comments.  I’ll update the post in a couple weeks as the dust settles.

The ability to solve the Gordian Knot of customers struggling with the reasons for on/off-premises and capex/opex economic models by having a simple direct item for them to price compare will be very interesting!

Making Friends in Dallas


Since 1930, the official State motto of Texas has been “Friendship”.  This is an apt description of the largest GRC user group in the world, RSA Charge, being held in Dallas, Texas, October 17 – 19. In a previous blog, Steve Schlarman shared an overview and highlights of this year’s event.

 

One of the event tracks this year is “Inspiring Everyone to Own Risk”.  This track brings together risk management practitioners across various industries and geographies to discuss challenges and successes they have experienced managing risk using the RSA Archer Suite of solutions.  This track includes a representative sampling of subjects from each of the Enterprise and Operational Risk Management challenges, including: issues management; establishing and maintaining a risk taxonomy and risk register; self-assessments; engaging the lines of defense; third-party risk and performance management; and business continuity management.

 

We had a great pool of speaker submissions this year.  In some cases, like issues management and third-party risk management, we had so many submissions we turned them into panel discussions so that you can benefit from the collective knowledge of multiple experts in these fields.

 

Combined with tracks at RSA Charge focused on regulatory and corporate compliance and information security management, practitioners have an opportunity to learn about each of the most important topics facing Operational Risk Managers today, including how to transform technology risk into Business-Driven Security.  In addition, you will have the opportunity to share ideas and learn from your peers, thought leaders, and specialists in these areas as well as see demonstrations of the RSA Archer Suite.  

 

For those of you that haven’t looked at the complete Agenda, you will find it full of great sessions. We have over 200 submissions from customers and partners, representing over 70 companies from a wide range of industries and geographies, along with a great representation of government agencies.

 

Yes, hosting RSA Charge in the State of Friendship is very apropos.  You will create and renew friendships with attendees with similar challenges and governance perspectives; learn new and innovative risk management methods; and affirm your best practice approaches.

 

We are looking forward to seeing you in Dallas!  If you haven’t registered, do so today.

 

RSA Charge 2017, the premier event on RSA® Business-Driven Security™ solutions, unites an elite community of customers, partners and industry experts dedicated to tackling the most pressing issues across cybersecurity and business risk management. Through a powerful combination of keynote speeches, break-out sessions and hands-on demos, you’ll discover how to implement a Business-Driven Security strategy to help your organization thrive in an increasingly uncertain, high-risk world. Join us October 17 – 19 at the Hilton Anatole in Dallas, Texas.

We’re Ready. Are you Ready?


Organizations such as the Translational Genomics Research Institute (TGen), a leading biomedical research institute, are on the forefront of enabling a new generation of life-saving treatments. To help achieve its goal to successfully use genomics to prevent, diagnose and treat disease, the Phoenix-based non-profit research institute selected a Dell EMC Ready Solution that helped reduce the data analysis time from 7 days to 4 hours!

We are truly at the intersection of technology and human progress! And the Dell EMC Ready Solutions team is dedicated to working with our customers to make a measurable business impact.

High-performance computing (HPC), data analytics, software-defined infrastructure, business applications and hybrid cloud platforms – these are the technologies that are shaping the digital economy. Many organizations are actively looking to transform IT to gain a greater ability to capitalize on these trends and accelerate business outcomes.

At Dell EMC, we understand that some of our customers may prefer converged or hyper-converged systems, while others may prefer to build infrastructure that meets their specific needs. This is why we are ready with a portfolio of solutions across Ready Nodes, Ready Bundles, Ready Systems and Hybrid Cloud Platforms.

Turn-key Hybrid Cloud Platforms

Some of our largest customers have shown they want turn-key comprehensive hybrid cloud platforms with one-contact support, full life-cycle management to reduce customers’ operational risk and a roadmap of features that extend the value of the platform. These customers choose Enterprise Hybrid Cloud (EHC) and Native Hybrid Cloud (NHC).

EHC will be available globally on Dell EMC VxRack SDDC with VMware Cloud Foundation, giving customers a turn-key hybrid cloud experience on proven HCI with VMware’s software-defined data center stack. New functionality, such as the addition of Microsoft Azure as a public cloud end-point, offers our customers greater flexibility to choose among public clouds.

NHC built on VxRack SDDC is available today via an early access program. Dell EMC Native Hybrid Cloud offers developer-ready infrastructure for Pivotal Cloud Foundry, the world’s most powerful application developer platform for legacy and cloud-native apps. New NHC capabilities include new developer tools and highly available deployments on VxRail, delivering multi-site, multi-foundation and multi-availability zone configurations to support global-scale deployments.

New Cloud Ready Systems

For those looking to take more ownership of customizations and lifecycle management to meet their unique business requirements, we announced  new VMware Ready Systems. These new Ready Systems will be based on hyper-converged infrastructure (HCI) in configurations tested and validated for specific Cloud use cases or workloads.

VMware Ready System from Dell EMC combines VMware Cloud Foundation and vRealize Suite with pre-configured and tested configurations of VxRack SDDC and VxRail. Optional fixed price services will be offered for those customers who prefer to have Dell EMC handle the initial deployment.  Designed for customers and deployments of any size, they will enable IT to optimize the cloud platform to meet unique business requirements while starting with a proven foundation of hyper-converged infrastructure and the latest VMware technology.

Extending this further, we also plan to bring to market Ready Systems on VxRack SDDC to support the new Pivotal Container Service when they’re available. This will enable enterprises and service providers to deliver production-ready Kubernetes and will be integrated with Pivotal Cloud Foundry® (PCF) and VMware’s software-defined data center (SDDC) infrastructure.

Likewise, for customers who want to build and manage the lifecycle of their own HCI stacks, Dell EMC vSAN Ready Nodes are now available with the latest award-winning Dell EMC 14th Generation PowerEdge servers and VMware vSAN 6.6. Dell EMC vSAN Ready Nodes deliver the capital economic benefits of HCI for organizations that prefer to take on more operational aspects.

We’re excited to extend our Hybrid Cloud & Ready Solutions portfolio with an eye towards accelerating our customers’ business results – aimed at helping them win. At the races, drivers come into the pit for tire changes, fuel, and mirror adjustments. And a really good team can accomplish all that in a matter of seconds. We are looking forward to accelerating the adoption of Ready Solutions and delighting our customers as ONE efficient, F1-like team! Let’s all get Ready together to accelerate digital transformation!


The #1 Reason Ransomware Works


Ransomware can be a nightmare, but it doesn’t have to be.

Last May, some 200,000 PC owners worldwide suffered a costly wake-up call from the aptly named WannaCry cryptoworm that encrypted their data. The attackers demanded $300 in bitcoin payments to unlock the data. Hospitals, universities, automakers and logistics firms were among those reported hit, but the worst affected were small and medium-size businesses.

Why those businesses? Because their data wasn’t sufficiently protected with complete backups, the way larger enterprises’ data typically is. And that’s the whole reason ransomware works: Victims can’t afford to lose their data, so they pay up, even if payment is no guarantee they’ll get their data back.

With WannaCry, hundreds paid the ransom but got nothing in return. And if Dell EMC’s 2016 Global Data Protection Index is any indication, ransomware can still find lots of targets: 90 percent of the survey’s respondents are behind the curve in the maturity of their data protection strategies.

Dell EMC aims to help them change that with practical, cost-effective and greatly simplified data protection (DP) that works 24×7 across all kinds of data workloads, no matter where they reside — on-premises, in the cloud or some hybrid of the two.

Growing Needs for Comprehensive Data Protection

Organizations of all sizes need DP. While anti-malware and anti-virus safeguards are necessary to protect data from threat actors, they are not sufficient to guard against phishing attacks, which are how ransomware and advanced persistent threats can circumvent defense-in-depth strategies. Also, organizations need DP’s backup and recovery capabilities to provide business continuity in the case of system failures or disasters.

So why do the DP strategies and practices of so many organizations fall short? Here are some important reasons:

  • Data growth: Keeping up with the growth of both structured and unstructured data from various sources is one of IT’s biggest challenges affecting DP practices.
  • Complexity: Managing DP for complex IT environments — especially diverse platforms and applications — can be complicated, often requiring time-consuming management of many legacy point solutions already in place.
  • Inconsistency: Inconsistent backup processes can arise from trying to protect applications and data in varied places — from on-premises infrastructures to virtualized environments to private, public and hybrid clouds.
  • Cloud migrations: More and more enterprises are migrating workloads to the cloud, but they lack the tools to protect their data residing there, or they mistakenly believe their cloud provider will provide that protection.
  • Copy data sprawl: By some estimates, 60 percent of all data storage is a copy of some kind, whether for disaster recovery, development and testing, or application maintenance — just to name a few. And a typical database environment can have 5–12 copies for each production instance.
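To make the copy sprawl math concrete, here’s a minimal back-of-the-envelope sketch. The database sizes are hypothetical; the 5–12 copy range is the estimate quoted above:

```python
# Back-of-the-envelope copy sprawl, using the 5-12 copies-per-production-instance
# estimate above. The production database sizes are hypothetical.
production_dbs_tb = [2.0, 0.5, 1.5, 4.0]   # hypothetical production DB sizes (TB)
prod_total = sum(production_dbs_tb)

for copies in (5, 12):
    footprint = prod_total * (1 + copies)  # production data plus its copies
    print(f"{copies} copies/instance -> {footprint:.0f} TB total footprint "
          f"for {prod_total:.0f} TB of production data")
```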

As a result, IT groups, especially their DP specialists, can have trouble meeting backup windows and SLAs governing goals for recovery point objective (RPO) and recovery time objective (RTO). Regulatory compliance and audits can be issues, too.

Dell EMC Streamlines the Data Protection Process

To simplify and streamline the DP process, Dell EMC has introduced its updated Data Protection Software Enterprise Edition. This set of 11 powerful DP tools covers the entire DP continuum of availability, replication, snapshots, backup and archiving.

For more specific use cases — such as heavily virtualized environments, mission-critical apps and archive-only solutions — specific tool sets are available in four packages: Data Protection Software for Backup; Data Protection Software for VMware; Data Protection Software for Applications; and Data Protection Software for Archive.

Data Protection Software Enterprise Edition can protect everything from user laptops to the largest data centers. It enables long-term retention of data to private, public or hybrid clouds, too. With deduplication rates of up to 99 percent plus a tight integration with Dell EMC Data Domain infrastructure, the Dell EMC Data Protection software can help lower DP’s overall TCO dramatically by reducing:

  • Storage capacity requirements for backups by up to 97 percent
  • Network traffic by up to 99 percent
  • Backup times by 50 percent
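As a quick sanity check on what those “up to” percentages could mean in practice, here’s a minimal sketch with a hypothetical 100 TB backup workload – actual reduction rates depend entirely on your data:

```python
# What "up to 97% / 99%" reduction could mean for a hypothetical 100 TB
# logical backup workload. Real rates depend entirely on the data itself.
logical_tb = 100.0

stored_tb = logical_tb * (1 - 0.97)  # up to 97% less backup storage
sent_tb = logical_tb * (1 - 0.99)    # up to 99% less network traffic

print(f"Stored: {stored_tb:.0f} TB instead of {logical_tb:.0f} TB")
print(f"Sent over the wire: {sent_tb:.0f} TB instead of {logical_tb:.0f} TB")
```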

What’s more, the Dell EMC Data Protection Software Enterprise Edition delivers a consistent user experience for DP administrators across all its different tools to minimize learning and maximize productivity. At the same time, organizations can gain global data protection and copy oversight without compromising self-service workflows of their lines of business.

Dell EMC Data Domain Solutions Enhance the DP Benefits

Dell EMC also offers a broad portfolio of Data Domain storage infrastructure solutions for enterprises of all sizes. This includes the recently introduced all-in-one Integrated Data Protection Appliance and the software-defined Data Domain Virtual Edition.

The different Data Domain solutions can scale up to 150 PB of logical capacity managed by a single system. They can also achieve backup speeds of up to 68 TB/hour, making it possible to complete more backups in less time and provide faster, more reliable restores.
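Those throughput figures translate directly into backup windows. Here’s a minimal sketch of that arithmetic; the 200 TB data set is hypothetical, and 68 TB/hour is a best-case peak rate:

```python
# Backup-window arithmetic from the quoted 68 TB/hour peak rate.
# The 200 TB data set is hypothetical; peak rates assume ideal conditions.
data_tb = 200.0
peak_rate_tb_per_hour = 68.0

window_hours = data_tb / peak_rate_tb_per_hour
print(f"Best case: {window_hours:.1f} hours to protect {data_tb:.0f} TB")
```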

Check out Gartner’s 2017 Magic Quadrant for Data Center Backup and Recovery Solutions and see Dell EMC’s leading position among all DP offerings. Then get more information on the full line of Dell EMC Data Protection software and Data Domain options, to discover which DP solution can meet your organization’s requirements.


VMworld 2017: Continued Advances in Hybrid Clouds - Lessons Learnt.


4 years ago a great team of people responded to what we were starting to see over and over again: customers wanting to consume an on-premises cloud as part of a hybrid cloud.

Frequently, these customers had personal experience trying to instantiate their own “DIY” variant – picking hypervisors, picking Cloud Management Platforms (CMPs), and building their own standardized infrastructure stack.   Frequently they also learnt about just how hard it was to deploy, and how much harder it was to lifecycle the whole stack.

This brave set of individuals have been cloud builders now for 4 years.  

They gave birth to the Enterprise Hybrid Cloud – completely oriented around the VMware stack, and targeted at making it possible to instantiate and sustain an enterprise-worthy IaaS as a turnkey stack.

2 years ago that same team delivered the first Native Hybrid Cloud – targeting developers and infrastructure teams trying to build cloud native apps and build DevOps cultures completely oriented around the Pivotal Cloud Foundry stack in a turnkey PaaS instantiation.

We’re now on EHC 4.1.2 and NHC 1.4 – and there are few folks with the degree of experience, scars, and knowledge about what it takes.  

It’s one thing to put together a stack and write it up.   It’s another thing entirely to take a platform approach and deliver/sustain a brand promise to sustain the platform from release to release.

What have we learnt?  A lot.

  • Forget doing this on a set of “whatever you have” infrastructure.  It’s not about the infrastructure, but variation at the bottom is insanely difficult.   We quickly stopped offering EHC on “bring your own mess”.   You can build a cloud on your own of course – but good luck.
  • Doing it on traditional 3-tier infrastructure – even when it’s a VxBlock – is still almost as hard.   We have to abstract the storage using things like ViPR (which adds complexity), and the very manual, process-driven lifecycle management of the VxBlock (and remember, we do this more and better than anyone else) through the Release Certification Matrix is hard.   VMware and Dell EMC have MASSIVE customers on VxBlock, and we will continue to support them and evolve EHC – but we know that this cannot be sustained forever.
  • HCI is a huge simplifier.  As we introduced VxRail – we were able to make massive order-of-magnitude leaps in automation of the upper parts of the stack.  It’s not a coincidence that our EHC automation on VxRail is far beyond what we do on VxBlock.   It’s not a coincidence that EHC is now available on VxRack (both SDDC and FLEX).   It’s not a coincidence that NHC was born on VxRail and VxRack FLEX, and is now available in early access on VxRack SDDC.
  • When your offer is all about LCM – versioning of elements is critical.  It’s important for people who specialize in one part of the stack, and in the features/functions of said stack, to internalize this.   When you look at vRealize Automation, Operations, vCO, NSX, and vSphere all as linked – how you version the whole thing is really, really hard.  An example is that many EHC customers felt a long “pause” before we versioned from vRealize 6.5 to vRealize 7.x.   There were reasons.  There were periods where you couldn’t brownfield update workflows.  There were periods where NSX and vRealize versioning was out of sync.   Add into this that customers needed security fixes on another interval pace – and validation of the whole stack in those periods was very hard.   Huge progress was made, but also hard lessons learnt.
  • We have a lot to do to simplify the lifecycle of the upper parts of the stack.   There are big improvements in vRealize Automation 7.3 with out-of-the-box integration with Puppet Enterprise, and since vRealize Automation 7.2 there has been out-of-the-box integration with ServiceNow.  These are critical ecosystem partners in EHC, but also what our customers want generally.   That said – we have GOT to make big leaps forward together with VMware around full vRealize Suite LCM, and we have, and will.   Likewise, with NHC, we could use Pivotal Ops Manager to help with core LCM – but the concept of LCM has to extend to the whole ecosystem around the stack – Concourse, for example.

I want to be clear.  The Dell Technologies (VMware, Pivotal, Dell EMC) technologies here are best of breed.  We have more maturity about sustaining these stacks than anywhere else.  

We’re continuing down this path.   Our customers need turnkey cloud stacks more than ever.   But the EHC and NHC experience, as well as the joint development with Microsoft around Azure Stack, has shown us something else:

  1. We have to embrace a more “guided” DIY path, not only maniacally follow the turnkey path.  Not only do some customers want that path (“consume” at the virtual infra layer, and DIY at the cloud layer) but it creates more alignment and a faster vehicle for software releases.  This is what VMware Ready Systems and Pivotal Ready Systems are all about.   They assume a “consume” path for the underlying infrastructure using VxRack SDDC and then leverage VMware Validated Designs to assist customers in building their own cloud.   Furthermore, as native LCM improves (see below), these get closer and closer to the value of EHC/NHC.  
  2. We have to recognize that each of us needs to do our piece – and focus on how it all comes together.  VMware and Pivotal are the leaders in each of their domains – and have active efforts to refine the LCM of their own stacks.   I’m super pumped about what I see coming from the vRealize team, from the VCF team.   The efforts to construct VMware Cloud on AWS required (by definition) a huge effort around simplification of LCM of the stack.   Likewise, the learnings about KUBO and how to use BOSH for lifecycle management at Pivotal are priceless.

This is a big development – because clearly the work for LCM for each stack is best done in the team that is closest.  They get the feedback more directly from the customer.  They have a vested interest in nailing simplicity at their layer.

That all said – I want to be crystal clear. 

Today, if a customer wants a turnkey IaaS, PaaS – the answer is EHC/NHC.

Today, if a customer wants a turnkey cloud foundation on which they can build DIY clouds aligned with VMware Validated Designs, and the VMware and Pivotal LCM roadmaps – the answer is VMware Ready Systems and Pivotal Ready Systems built on VCF/VxRack SDDC.

So – what’s new at VMworld on this front?

  • Read here for the latest on Enterprise Hybrid Cloud
  • Read here for the latest on Native Hybrid Cloud
  • Read here for the latest on VMware Ready Systems and Pivotal Ready Systems.

VMworld 2017: Continued Advances in Hybrid Cloud - EHC


If you’re coming here first – I strongly recommend reading the “Lessons Learnt” post in this series, here.  It will help with context, and understanding your options and choices.

Ok, with that context – let’s talk about the Enterprise Hybrid Cloud (EHC) 4.1.2.

EHC is designed to be a full cloud stack – built, designed, sustained as one full entity.  It’s analogous to VxRail/VxRack relative to vSAN/VCF in the “DIY” and “Consume” picture.   Some customers want to build clouds.  Some want to consume a cloud.

EHC is a turnkey IaaS focused on enterprise use cases and workloads – and built completely around the VMware stack.


EHC 4.1.2 is critical, and a big achievement for the team and customers for a couple reasons. 

The first is increased overall alignment with VMware’s stack, versions, and strategic direction.

The second is a big leap forward in automation, particularly on VxRail.

Here’s the whole scoop:

  • VxRack SDDC support.   This is critical, because customers who are aligned with VMware view VMware Cloud Foundation as what it says in the name “the foundation for a cloud”.   Not supporting VCF/VxRack SDDC created some tricky mis-alignment.  
  • VMware Validated Design alignment.   We are maniacal about using the VVD program as the baseline – and this also helps align EHC with the future of VMware Ready Systems, which also build on the VVD program on top of VCF and VxRack SDDC.
  • Major software stack updates:
    • VMware vRealize Automation 7.3 (another critical VMware/Dell EMC alignment point) which includes the addition of Microsoft Azure as a public cloud end point
    • VMware vRealize Operations Manager 6.5
    • VMware NSX for vSphere 6.2.7
    • VMware vSphere ESXi 6.0.x
    • Dell EMC ViPR 3.6
  • Major platform Release Certification Matrix alignment
    • VxBlock 6.0.5 – 6.0.13
    • VxRail 4.0.301
    • Dell EMC Avamar 7.4
    • Dell EMC RecoverPoint CL/EX 5.0.0
    • Dell EMC RecoverPoint for VMs 5.0.1
    • Dell EMC VPLEX GeoSynchrony® 6.x
    • CloudLink 6.0
  • Huge updates for VxRail-based deployments – which is the easiest EHC path – period.
    • 4 site multi-site object model.   This is something that EHC customers demanded – the ability to have a full cloud model that spanned sites, so workflows, templates, all policy behavior was able to operate across multiple vRealize Suite instances, and be replicated.
    • Enhanced multi-site DR.  In EHC 4.1.1 on VxRail, we added RecoverPoint for VMs support.  In EHC 4.1.2 we have extended the multi-site policy engine and object model to DR on VxRail.
    • Total automation.  In 4.1.1 we automated the install.  In 4.1.2 we can automate the full stack LCM (including the full vRealize Suite) on VxRail deployments.   Ultimately we will “hand over” this LCM responsibility to vRealize as VMware keeps enhancing their native capability here, at which point it becomes available on VMware Ready Systems on VxRack SDDC.   For now though, this ultimate easy button is on EHC on VxRail.

That last one is worth a demo, because it’s that cool.

Ok – internalize what’s happening in that video.   Complete LCM of the full stack.  From VxRail itself, to NSX, to vRealize Automation/Business for Cloud/Operations, to RecoverPoint for VMs, to the Data Protection Suite for VMs.  All workflows maintained.  

That’s an amazing achievement.  

The automation framework (codenamed “Ozone”) was developed completely as a cloud native app, modularized with sets of microservices.   We cannot automate VxBlock deployments to the same degree (more complex, manual platform updates, ViPR/VPLEX considerations and more).   But it shows what is available now on VxRail with EHC.

It also shows where we will get VVDs and VMware Ready Systems directionally over time.  

It will take some time between us and VMware – but ultimately we know we need to make LCM easier more generally – and we will apply a lot of the learnings from EHC to accelerate VMware Ready Systems on VxRack SDDC, and keep partnering with the VMware vRealize team on full stack LCM.

Pretty cool.   Would love your thoughts!


VMworld 2017: Continued Advances in Hybrid Cloud - NHC


If you’re coming here first – I strongly recommend reading the “Lessons Learnt” post in this series, here.  It will help with context, and understanding your options and choices.

Ok, with that context – let’s talk about the Native Hybrid Cloud (NHC) 1.4.

NHC is designed to be a full cloud stack – built, designed, sustained as one full entity.  It’s analogous to VxRail/VxRack relative to vSAN/VCF in the “DIY” and “Consume” picture.   Some customers want to build clouds.  Some want to consume a cloud.

NHC is a turnkey PaaS focused on the most lean way to stand up a developer platform centered on Pivotal Cloud Foundry, and giving the infrastructure team the tools to join the developers on their DevOps cultural shift.

Its core principle matches a PCF core principle – “focus above the value line”.

[image]

All those things are NOT things that a developer really wants to spend time worrying about (though arguably as you move further up to the right, security, app management and marketplace are things they think about to some degree).

The goal of the NHC team is to take a platform approach to everything below the value line.


NHC 1.4 is built around PCF 1.11, and runs on VxRail, and always includes an Elastic Cloud Storage Object store – not only does every developer need an object store they can count on, but it’s also used for multi-AZ, multi-datacenter NHC behaviors. 

First, let’s talk about what’s new in PCF 1.11 – since this is at the core.

Now, NHC embraces the multi-availability zone approach in Pivotal Cloud Foundry (read more here: https://docs.pivotal.io/pivotalcf/1-11/customizing/understand-az.html)

Multiple HA deployment approaches are supported in NHC:

  • Single Site
    • Multi-AZ
    • Multi-Foundation
  • Multi-Site
    • Multi-AZ
    • Multi-Foundation

Note that multi-AZ requires an expansion NIC on VxRail, and that means you cannot do it on G-series nodes.   Furthermore, external vCenter is mandatory for HA on VxRail.   Note that system-level design and single support is filled with these sorts of critical gotchas.

This is an example of a single site HA multi-AZ configuration of NHC.

[image]

 

NOTE: For Virtual Geek readers wondering why VxRail and optionally VxRack FLEX for NHC rather than VxRack SDDC/VCF – this is the root of the technical answer.  

This comes up often at customers where the VMware team is laser focused on VCF (which is great) but doesn’t know about PCF… a couple quick comments:

  • We are doing an early adopter program around VxRack SDDC – this helps drive feedback into the VCF/VxRack SDDC roadmap
  • You can see why VxRail and not VxRack SDDC – VCF workload domain behaviors mean that today, configuring for a multi-AZ PCF deployment is tricky.   External vCenter is another example.

As we work on the VMware Ready System roadmap together with the VCF team – this is an opportunity for simplification.   If we can make configuration of VCF/VxRack SDDC optimized for PCF, and make tweaks to the VMware CPI and PCF Ops Manager Director for VMware it would be a big leap forward.


ECS is included in every NHC.

It’s used for several functions:

  • ECS serves as the Blob Store for PCF
  • It is the anchor for backup and restore of NHC Stack (beta)
  • It is the Object Store for Application Developers (Future)
  • It will be the target for backup for data services in the future.
  • The customer can utilize the ECS for other functions.
  • A minimum of 5 ECS nodes is included as part of the starter kit.

Now… Always interesting – where are we going next?

There’s a clear chance to simplify.

  1. If we could shift some VCF behaviors, VxRack SDDC would be an ideal deployment platform for PCF and KUBO.  It’s not yet, but both VMware and Dell EMC are working to get it there.
  2. Clearly, there’s an opportunity to make the object store an embedded, vs. bolted-on, option.  The ECS appliance form factor is ideal at scale (almost all ECS deployments are big – many hundreds of TB, generally more in PB ranges – and there the appliances are the obvious choice).   But at small scale, a software-only ECS would be great.
  3. Some of the automation we do really should ideally be contributions to Ops Manager Director, rather than NHC itself.
  4. We need to build tighter integration with Concourse for CI/CD that is aligned with Pivotal’s strategy.

We’re going to rally around Pivotal Ready Systems on VxRack SDDC as the vehicle for us to pull the roadmaps on 1, 2, 3 closer and closer together.

VMworld 2017: Pivotal Container Services (PKS)


IMO – this is one of the biggest announcements at VMworld this week.   It is a major shift to the Dell Technologies strategic perspective, and important for our customers.

It’s something we’ve been spending a lot of time on internally for a while – I’ve spent hours on end with Scott Yara and James Watters at Pivotal, and with Ray O’Farrell and Paul Fazzone at VMware, and it’s not just me, but others in Dell EMC too.

Let’s start from the start.  There are 4 abstractions customers need as they develop cloud native apps:


Each has a purpose – one that the others are not ideal for.

  • Kernel-Mode VM = an instantiation of an abstraction of a physical host.  Useful when you need to get low-level for one reason or another, including hardware abstraction.   It’s notable that almost all containers run on some form of kernel-mode virtualization.   Things like hardware abstraction, security abstraction, and software-defined networking are insanely valuable even when the sole purpose of the VM is to host containers.   Yes, in the 100% container scenario some functions like VM consolidation, HA, and resource scheduling have reduced value – but those other things = valuable.
  • Container/Cluster Manager = an instantiation of an abstraction of an OS, including a namespace and meta-filesystem.   Useful when you can’t use a structured PaaS, but need your own specific set of tools, functions, runtimes, micro-services – in other words, build your own DIY PaaS on top of the container/cluster management layer.
  • A structured PaaS = an instantiation of an abstraction of a given set of runtimes and given platform services.  Useful when you want a turnkey platform and it’s a fit for what you need.
  • Data Services and persistence = an instantiation of a given set of data services (like Dynamo, RDS, but also SQL, Redis, you name it) and object/blob storage.  Useful when you need to store/process stateful information.

The preference is always to use the highest order abstraction you can…  that means you need to do less, and can focus on things that have value.

Now – as Dell Technologies – we’re already doing great when it comes to 3 out of 4.

  • VMware has the best Kernel Mode abstraction platform on the planet.  Mature.  Widely deployed.  Feature rich.
  • Pivotal is the leader in structured PaaS abstractions with Pivotal Cloud Foundry.
  • Dell EMC has great object storage abstraction models for developers with Elastic Cloud Storage – and Pivotal Data Services is a rich set of data service abstractions of all types.

We all collectively participate in, contribute to, and use the container ecosystem – but we didn’t have a strong, opinionated point of view.

  • Dell EMC {Code} makes tons of contributions of volume controllers to Docker, Mesos, and Kubernetes.
  • Pivotal makes Docker a choice underneath Diego.
  • VMware integrates Photon platform elements and vSphere Integrated Containers (VIC) with the whole ecosystem.

But… historically, our point of view on the container/cluster manager abstraction ecosystem wasn’t clear.

I’ve heard some folks vehemently and passionately argue that container/cluster managers aren’t needed (“just use a structured PaaS!”).   I’ve heard others argue that developers don’t need Kernel Mode VMs.  I’ve heard others claim infrastructure doesn’t matter.

Hint – none of those are correct.  There is a lot of confusion… and that’s not a surprise – this is a VERY dynamic/chaotic ecosystem right now.

The container/cluster manager ecosystem seems to be starting to settle.

From my point of view, and it’s a broadly held point of view in Dell Technologies – Kubernetes is the likely “winner”.   It seems that it’s hit critical mass – and enough people are rallying around it – just like people have rallied around Cloud Foundry as the structured PaaS of choice.

The strategic pivot was to acknowledge that we needed to do our part, and that meant a couple things:

  1. Make our point of view clear – anyone inside the company who thinks there isn’t a need for container/cluster managers is not aligned with our strategy.   PCF is great – but it isn’t the answer for every need of a developer, or of a customer building new cloud native apps.
  2. Continue to be open to the container/cluster ecosystem, but have a strong opinion.  We do – Kubernetes.
  3. Find the way where we can best contribute and help.    The answer is around contributing, hardening, and generally making Kubernetes enterprise-ready in every way we can.

Google is the primary contributor to Kubernetes – and saw the great opportunity of us adding to the community – and thus Kubo was born.   Kubernetes on BOSH – the same very bottom-level part that underpins Pivotal Cloud Foundry.   BOSH performs a function analogous to the Borg layer within Google itself – a low-level lifecycle management and physical resource scheduling function.

This is an area where we have a lot to give.  We can make core Kubernetes services like etcd easier to deploy, to lifecycle.  We can build integrations with the VMware Cloud Foundation stack that make Kubernetes stronger and better – and we announced that material resources at VMware working on Photon are shifting to focus there.   We can take the concepts of Developer-Ready Infrastructure and NSX/PCF integration and extend them to the Kubernetes ecosystem.   Dell EMC can integrate ECS more tightly.   We can build and integrate object snapshot/versioning schedulers and managers as a “backup” analog for cloud native apps.

Personally, the team I lead can work to make VxRack SDDC and VxRail the best way, hands down, to deploy this stack on premises.

Today – Pivotal, VMware, and Google announced Pivotal Container Services – which has the acronym PKS, with the K signifying Kubernetes.   Pivotal Container Services is the enterprise distribution of Kubo.

Here it is in a simple picture I’ve sketched up:

[image]

There are some key observations here:

  • We now give developers all four abstractions – and leaders in each.
  • While we continue to be open – we’re now strongly aligned and opinionated on the best way to do this – something the whole of Dell Technologies can and will rally around:
    • Pivotal Cloud Foundry is the most mature, most deployed structured PaaS.
    • Pivotal Container Service is the best Kubernetes based container/cluster manager for the enterprise.
    • VMware stack is the most mature kernel mode virtualization stack on which to build in the enterprise, and it’s one you already know.
    • Dell EMC VxRack SDDC (built on VMware Cloud Foundation and extended with additional services like ECS) and VxRail (for customers who want to start really small) will be the best turnkey HCI optimized for the full stack.
  • We know that not every customer will want/need it all.  Some will be OK with just PCF.  Others will be OK with just PKS.  We will not connect the licensing model – ergo, while PKS shares an architectural underpinning with PCF in BOSH, a customer doesn’t need PCF to use PKS.

BTW – this is one of the elements behind the Pivotal Ready System announcement I talked about here.

What you have today is a cool technical announcement, a sign that Pivotal, VMware, Dell EMC are embracing Kubernetes.

But – as a leader in the company, I also see something else.

We now have a Cloud Native/Digital Transformation stack where there is a SINGLE target we are furiously running towards now as VMware, Pivotal, and Dell EMC – no mis-alignment, no differences in PoV. 

Today you see the hand of the leadership of the new Dell Technologies company (Michael Dell) and the CEOs of the strategically aligned companies (VMware – Pat Gelsinger & Pivotal – Rob Mee) all saying: “we’re going THIS way, come with us”.

Lots of exciting work going on here – and I’m pumped to finally get to talk about it!

Unite and Thrive: Introducing OpenManage Integration 4.1 for VMware vCenter


With the recent launch of our new generation of PowerEdge servers, we’ve provided customers with a bedrock on which to build their modern data center. These are also the first generation of servers created in the merger of Dell, EMC, and VMware, realizing the true potential of customer-focused design.

One of the biggest customer challenges is the need to adapt and grow their infrastructure as their workloads change. They also need smarter management and automation tools to help keep the process simple as their data center gets more complex.

OpenManage Integration for VMware vCenter (OMIVV) addresses this need by bringing simplified management of the entire server infrastructure — physical and virtual — into the vCenter console.

By providing unique cluster-level hardware views directly within vCenter, the OMIVV plug-in simplifies the process for scaling and applying updates to multiple Dell EMC hosts in a single workflow.

OMIVV includes these essential capabilities:

  • Monitoring and alerting: Get extensive details for inventory, monitoring and alerting, and recommended vCenter actions based on Dell EMC hardware alerts
  • Deployment and provisioning: Perform “Zero-Touch” bare-metal hypervisor deployment without PXE — leveraging iDRAC with Lifecycle Controller
  • Firmware updates: Deploy server BIOS and firmware updates from vCenter
  • Hardware management: Access and renew all hardware warranties online via the OMIVV console

An optional OpenManage vROps Management Pack is included with the licensing of OpenManage Integration for VMware vCenter. This management pack brings hardware objects, health and alerts into vRealize as objects and dashboards. It includes these great features:

  • Monitoring and alerting: Adds hardware health metrics, performance data, and events from managed PowerEdge servers / chassis into vROps
  • High-level reporting: Provides detailed reports and views of server and chassis information across the OMIVV v4.1 managed environment
  • Hardware contextual mapping: Maps physical relationships with vROps objects for simplified troubleshooting

OpenManage Integration for VMware vCenter and PowerEdge servers are essential elements of the modern data center. Plus, our intelligent management tools not only allow your infrastructure to thrive, they keep things simple and secure as you scale.

Dell EMC also delivers OpenManage Power Center – the first server management solution in the industry to provide virtual (VM) specific power mapping. It enables you to determine the best placement of workloads and VMs based on power consumption within the servers and VMs themselves. Also, OMPC 4.0 enables proportional power consumption for chargeback, down to the individual VM. This means owners of applications are only charged for what they are actually using.
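To illustrate the idea of proportional, per-VM power chargeback, here’s a minimal conceptual sketch. The allocate-by-CPU-utilization model and all numbers are assumptions for illustration – this is not OMPC’s actual algorithm:

```python
# Illustrative proportional power chargeback: split a host's measured draw
# across its VMs by relative CPU utilization. Conceptual sketch only --
# NOT OpenManage Power Center's actual algorithm.
host_power_watts = 450.0   # hypothetical measured host draw
vm_cpu_util = {"vm-web": 0.40, "vm-db": 0.35, "vm-batch": 0.05}

total_util = sum(vm_cpu_util.values())
for vm, util in vm_cpu_util.items():
    share_watts = host_power_watts * util / total_util
    print(f"{vm}: {share_watts:.0f} W charged back")
```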

Dell EMC’s OpenManage systems management portfolio delivers practical innovation focused on simplifying and automating the entire IT lifecycle. For more information on the full portfolio, visit dell.com/en-us/work/learn/enterprise-systems-management.


VMworld 2017: Continued Advances in Hybrid Cloud DIY Choices


If you’re coming here first – I strongly recommend reading the “Lessons Learnt” post in this series, here.  It will help with context, and understanding your options and choices.   Also – I strongly recommend reading this “DIY” and “Consume” blog post here – it’s very useful.

Ok, with that context – let’s talk about something new for us at Dell EMC – a renewed effort around supporting Hybrid Cloud DIY choices.

For Dell EMC – for the last 4 years, we’ve been maniacally focused on full turnkey cloud stacks (EHC/NHC).  I want you to internalize what I mean by that.   The Dell EMC team has taken on the mission of design/engineering, testing/validation, packaging, full lifecycle support and single support – FOR EVERYTHING.   This is analogous to what we do at the virtual infrastructure layer with CI (VxBlock) and HCI (VxRail and VxRack).

This is a material challenge.   We’ve made a ton of progress in EHC (as you can see here) and NHC (as you can see here).

That said – invariably, these cloud stacks have a couple fundamental challenges that result in them not being the right answer for every customer for these reasons:

  • Market need for a DIY approach.  Some customers want a turnkey, sustained-system “consume” approach at the HCI infrastructure layer (VxRack, VxRail) – but want to pursue a guided DIY approach for the IaaS/PaaS/CaaS layer of the stack.    This can be for very real reasons (the constraints of a turnkey cloud platform are not a fit, for example).
  • Component lag.  By definition – this fully engineered, sustained, supported-as-one approach of the Hybrid Cloud platforms lags releases of the component ingredients like vRealize or NSX.   This is also true of the DIY vs. Consume choices at the virtual infrastructure layers (HCI) – but here the lag is more material.   This isn’t because people aren’t working hard – it’s because the moving pieces are more complex, and interoperability needs work.
  • We’ve made it harder than we needed to.  It’s notable that with public cloud IaaS/PaaS/CaaS models, you can’t make a lot of choices – but they work well.   Conversely, when customers started asking for on-premises clouds, it started with a TON of variation.   We’ve learnt hard-fought lessons about the price of even small variations.   It’s not a coincidence that our turnkey systems are focused on HCI going forward.   It’s not a coincidence that we support DR-as-a-service in EHC only using RecoverPoint for VMs – and not SRDF and other forms of replication.   Variation = failure when it comes to simplicity and lifecycle management.   You need the barest minimum of variation (there is a point where you don’t have enough flexibility to be compelling) – and no more.
  • “Native” Lifecycle Management is improving.   Recently, there have been growing “Apollo moon mission” efforts to make LCM of the stacks easier.   This is true for IaaS (vRealize + VMware Cloud Foundation), PaaS (PCF + PCF Ops Manager and Ops Director), and CaaS (Pivotal Container Services, aka PKS) – each has gotten a lot more focus in its respective company.   Historically, the focus of VMware, Pivotal and Dell EMC (at least from where I sit) has been on innovation in each layer, rather than on how the layers work together and are lifecycled together.   In the Converged Platforms and Solutions team (where I work), our whole life has been this simplification purpose, so it’s familiar to us at every level of the stack.   But the lack of stack LCM automation meant that DIY approaches historically involved a lot of expertise – either at the customer, or via professional services from the vendor and partner ecosystem.   This is changing.

This means we can now:

  1. Create a new option – DIY Hybrid Cloud on top of turnkey “consumed” HCI VxRack SDDC systems – these are VMware Ready Systems and Pivotal Ready Systems.
  2. Continue to offer EHC and NHC for customers who want a “consumed” model for Hybrid Cloud.
  3. Work the roadmaps for VMware Ready Systems, Pivotal Ready Systems as a rally point for several key pieces: VMware Cloud Foundation, VMware’s efforts to simplify the LCM of vRealize, and Pivotal’s efforts to simplify the LCM of PCF and Pivotal Container Services.  


VMware Ready Systems are available now – their first incarnation is VxRack SDDC using VMware Cloud Foundation and the VMware Validated Design for vRealize.   We actually use the VVD program as the “base building element” for the full-blown EHC – so this is a natural step.

Pivotal Ready Systems will take some time to complete, but will be ready soon.

Like all things in Dell EMC Ready Solutions – Ready is how we make things easier for the DIY customer.   Think “Redbook”.   Think Cisco Validated Design.   Think VMware Validated Design.   These are all programs and real work to de-risk customers who are taking a DIY path.

  • Ready Nodes are more than just a server + software packaged around a given workload…  it’s READY.  Tested, validated, documented – and setup for easy acquisition.
  • Ready Bundles are more than just server + network + storage + software packaged around a given workload…  it’s READY.   Tested, validated, documented – and setup for easy acquisition.
  • Ready Systems are more than just CI/HCI + software packaged around a given workload.  It’s READY.  Tested, validated, documented – and setup for easy acquisition.

Now, it’s important to understand who is responsible for what in Ready Systems – the picture looks like this:

[Figure: the split of responsibilities across Dell EMC, VMware and Pivotal in Ready Systems]

This means there’s a very nice degree of clarity.   As is always the case with CI/HCI – the full lifecycle is the Dell EMC responsibility.   We own the VxRack SDDC and VxRail experience, with single support – for that layer.  Our job is to nail the easy, simple consumption of this layer – taking the complexity and opex away from the customer.

VMware is responsible for how the customers deploy and lifecycle the vRealize Suite using the VVD program, coupled with VMware PSO and partner capabilities.

Pivotal is responsible for how customers deploy and lifecycle Pivotal Cloud Foundry and Pivotal Container Services.   The lifecycle for this, using PCF Ops Manager and Ops Director, is improving.

Is there a “single throat to choke” like EHC/NHC?  No.  That’s what makes this a DIY path to Hybrid Cloud vs a consume path – logically it is this path:

[Figure: the logical DIY path to Hybrid Cloud]

Over time – there’s an opportunity for great simplification in this new option.   You can see total Pivotal/VMware/Dell EMC alignment in VMware and Pivotal Ready Systems – RIGHT NOW.

Clearly – each party leads the efforts in their domain.  We have alignment on the full technology stack.   Unlike EHC/NHC – it has what it does in the name.   VMware.  Pivotal.   Unlike EHC/NHC – the “who does what” is clear.   Each party does their piece.   We work together to bring it together for the customer.

It will get better over time as we align work and roadmaps around this rally point.

  • Imagine that VxRail and VxRack SDDC become closer – sharing hardware variability, and a common base low-level LCM element.   At that point – we can align everything on VxRack SDDC – because there would be an easy path for customers to start with VxRail, and then move to VxRack SDDC as they wanted the on-premises foundation for a cloud.
  • Imagine if VMware cracks the code of LCM for the full vRealize Suite and it was integrated with VMware Cloud Foundation and VxRack SDDC.  That would be a huge simplification.  
  • Imagine that VMware Cloud Foundation targeted PCF and PKS as first-order use cases for the workload-domain behavior, and aligned with things PCF needs naturally, like multi-AZ handling (to understand this, read this post here).

If we were to do these things, we ultimately simplify this aligned technology stack – and close the gap with EHC/NHC.   At that point – they become one thing – with the simple choice of do you want multi-vendor or single vendor as your preferred acquisition and support model.  

Are we working on these things and more?  Of course :-)

Stay tuned to this space for more about VMware Ready Systems and Pivotal Ready Systems.

I’m very curious for your thoughts – the perspectives of our customers, our partners, and our VMware/Pivotal/DellEMC employees are very valuable to me. 

Do you see what we’re doing here?  

Do you get the delta in the value proposition of a Ready System relative to a Hybrid Cloud Platform?

The Business Case for Hyper-Converged Infrastructure


 

Earlier this year, Wikibon proposed a theory that “the optimum hybrid cloud strategy” is based on an integrated hyper-converged infrastructure (HCI) as opposed to the traditional enterprise architecture built on separate servers and separate traditional SAN.

To test and validate their premise, Wikibon experts performed a three-year cost comparison of a traditional white box server/storage array versus a hyper-converged system. The hyper-converged system they selected was Dell EMC’s VxRack FLEX, which is based on ScaleIO software-defined storage. Through their research, they analyzed and compared costs across four main categories: initial investment, time-to-value, cost over time and ease of use.

While you can read all the details on the approach and methodology for comparing these two instances in the full report here, I’d rather jump right into the juicy stuff: the results!

Initial Investment

The initial investment for a traditional SAN solution was $734K versus $405K for a hyper-converged system. To give you the full picture, there are several reasons why HCI solutions are less expensive to purchase up front. Not only are the equipment costs lower (storage arrays are replaced with DAS, and specialized Fibre Channel networking and HBAs are no longer required), but most customers only buy what they need for the short term rather than over-purchasing infrastructure to support unknown future requirements, which is common when the infrastructure is intended to last for multiple years. That’s the power of HCI – it enables you to start small and grow incrementally as your demand for resources increases. But the savings and cost discrepancies didn’t stop there.

Time-to-Value

Practitioners who took part in deploying the solutions onsite found that it took 33 days to configure, install and test the traditional white box/storage array, as opposed to a mere 6 days for the hyper-converged infrastructure. Time-to-value for hyper-converged was then calculated to be 6x faster. This is one of the key benefits of an engineered system: IT no longer bears the burden of architecting the solution, procuring separate hardware and software from multiple sources, and installing and testing the configuration before migrating workloads to production. Instead, the system is delivered pre-integrated, pre-validated and ready for deployment.

Cost Over Time

In addition to the upfront costs, the three-year comparison showed maintenance and operational expenses raising the total cost for the traditional storage array to $1.3 million, while maintenance and operations costs for hyper-converged came to $865K over the same period. The main driver behind the savings is that HCI solutions greatly simplify IT operations in a number of ways: HCI provides a more efficient environment for those responsible for managing and provisioning storage; the ease of provisioning storage enables application developers to do more; and automation, streamlined provisioning, and higher reliability save IT staff time on activities such as data migrations and capacity planning. There are also substantial savings from lower operating expenses such as power, cooling, and data center space. Other staggering numbers from the study included the following:

  • Traditional solutions are 47% more expensive than a hyper-converged solution
  • Storage costs for the traditional approach are 100% higher
  • Overall, a hyper-converged solution is $400K less expensive (the quick arithmetic after this list shows where the deltas come from)
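
For what it’s worth, the headline deltas fall out of simple arithmetic on the figures quoted above – a rough check, not the report’s methodology, and the report’s own rounding may differ:

```c
/* Back-of-the-envelope check on the Wikibon figures quoted above. */
#include <stdio.h>

int main(void)
{
    printf("upfront delta:    $%.0fK\n", 734.0 - 405.0);   /* ~$329K */
    printf("three-year delta: $%.0fK\n", 1300.0 - 865.0);  /* ~$435K, i.e. the
                                                              "$400K less expensive" headline */
    return 0;
}
```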

Ease of Use 

In this last category, all practitioners interviewed except one agreed that the hyper-converged solution was easier to upgrade than the traditional architecture, easy to manage, and that it performed well. This makes sense given that HCI is typically managed from a central console that controls both compute and storage and automates many traditionally manual functions, helping to reduce the number of tools that have to be learned and used. HCI systems also have full lifecycle management – meaning they are fully tested, maintained, and supported as one system over their entire lifespan. As a result of these efficiencies, people costs for HCI are much lower than for the traditional approach. In fact, over a three-year time span, people costs are 600% higher for traditional SANs.

So, when looking at the TCO breakdown, it looks like hyper-converged is the way to go over a traditional storage approach. HCI offers lower equipment and operational costs, lower latency and faster bandwidth infrastructure, as well as a “single hand to shake and a single throat to choke” for vendor support and service needs.

Dell EMC’s VxRack FLEX is a fully tested, pre-configured, hyper-converged system that meets all of these criteria while offering extreme scalability (start small and grow to 1,000+ nodes). It truly provides a turnkey experience that enables you to deploy faster, improve agility and simplify IT. You can download the report here to read about all the findings from the research or you can skip right on over to the Dell EMC VxRack website to start exploring your options.


With VMware Embedded, OEMs Have Even More Options


What motivates you to get up every day? For me, the answer is pretty simple, I’m inspired by the work of our customers and partners. I’m talking innovative solutions that have the power to radically transform lives and improve business productivity – whether it’s an MRI machine in a hospital, a way for farmers to measure and improve crop growth, a smart building that is responsive to occupant needs or an industrial-strength PC helping to run a factory line more smoothly. At the end of the day, it’s all about technology simplifying things, improving lives and making business more efficient.

In fact, the whole focus of the Dell EMC OEM division is to help our customers and partners bring their innovative solutions to market as quickly as possible. That’s precisely why Dell EMC OEM is the first (and only) tier-one technology vendor to offer VMware Embedded.

VMware Embedded – a Way to Expand Into the Virtual Space

VMware Embedded is a pre-configured, turnkey virtualization solution that offers a new way for OEMs to increase revenue and expand into the virtual space. In a nutshell, this offering, with a resellable licensing model, enables OEMs to sell more products, more efficiently. Additionally, customers have the option to purchase VMware Embedded with Dell EMC hardware, such as PowerEdge servers, or as a standalone product to streamline their supply chain.

Why Virtualization Matters

We have all seen the trend of businesses tapping into virtualization to gain longer solution life cycles, take advantage of cloud agility, reduce downtime and improve security. As a result, virtualization has become a key priority for a majority of enterprise solutions, built by OEMs and ISVs.

Now with VMware Embedded, customers have the option to run it as a virtual appliance, install it on a physical appliance or use it in Dell EMC OEM’s datacenter as a managed service offering. This maximizes efficiency across the lifecycle of the OEM’s solution, ultimately benefiting the end customer.

Why VMware Is Great for OEMs

As an OEM, you can deliver VMware updates as a value-add service – and at a release cadence that matches your timelines – while serving as the single point of contact for support. To help decrease costs of goods sold and speed time-to-revenue, Dell EMC OEM will work with you to validate designs, align go-to-market strategies and create roadmaps. OEMs can also choose from a wide range of licensing and pricing models, including OEM sublicensing and global distribution rights, without multiple contracts.

For me, this is the main benefit of VMware Embedded – it enables our OEMs to provide quality support of VMware across all deployment models, offering advantages to customers in multiple markets, including manufacturing, video surveillance, communications, gaming, financial services, energy, healthcare, storage and networking.

But don’t take my word for it – this is what Darren Waldrep, vice president of strategic alliances at Datatrend, a Dell EMC OEM Preferred Partner, had to say. “Dell EMC and VMware’s embedded offering is a competitively priced solution that we are excited to offer our customers. VMware Embedded creates a much easier route to market for Dell EMC OEM partners and integrators, like ourselves.” Waldrep specifically highlighted Dell EMC’s and VMware’s “best of breed technologies” and our commitment to truly enabling the channel to deliver best pricing and experience for the end customer.

As we move deeper into the era of digital transformation, the need for speed will be imperative – no matter the industry. Understanding your customers’ unique needs and helping them adapt to a constantly changing market is what will allow you, as an OEM, to thrive.

Check out the datasheet or visit Dell EMC OEM in the Dell EMC booth #400 at VMworld, Aug. 27-31 in Las Vegas. We hope to see you there!


PowerEdge at VMworld 2017: What Happens in Vegas Rules Your Data Center


It’s almost time for VMworld 2017. Less than a year ago Dell, EMC and VMware all combined to form the largest privately held technology company in the world. Recently, Dell EMC launched the 14th generation of PowerEdge servers, or the FIRST generation of Dell EMC PowerEdge servers.

The shared vision of Dell EMC and VMware is realized in PowerEdge servers; what we call the bedrock of the modern data center. Whether it’s traditional applications and architectures, Hyper-Converged or Cloud, PowerEdge is the singular platform that forms your entire data center foundation.

At VMworld 2017 we’ll be showcasing our full PowerEdge portfolio and you’ll see how PowerEdge is at the heart of data center infrastructure.

There are a number of exciting sessions highlighting the integration of Dell EMC and VMware.

State of the Union: Everything Multi-Cloud, Converged, Hyper-Converged and More!

You have questions? Chad Sakac has answers! The technology landscape is continuously evolving, making it a challenge for IT leaders to keep up. Chad will relate Dell Technologies’ perspective on multi-cloud, converged and hyper-converged, data analytics, and more. (Session: UEM3332PUS)

Deliver Real Business Results through IT Modernization

As businesses embark on new digital transformation initiatives, we’re seeing them simultaneously transform their core IT infrastructure. You will learn how Dell EMC and VMware together are driving advancements in converged systems, servers, storage, networking and data protection to help you realize greater operational efficiencies, accelerate innovation and deliver better business results. (Session: STO3333BUS)

Modern Data Center Transformation: Dell EMC IT Case Study

In this session, learn how Dell EMC IT is leveraging VxRail/Rack, ScaleIO and more to modernize its infrastructure with flash storage, converged and hyper-converged systems and software defined infrastructure, where all components are delivered through software. (Session: PBO1964BU)

Workforce Transformation: Balancing tomorrow’s Trends with Today’s Needs

The way we work today is changing dramatically, and organizations that empower their employees with the latest technologies gain a strategic advantage. In this session, you will learn how to modernize your workforce with innovative new devices and disruptive technologies designed to align with the new ways that people want to work and be productive. (Session: UEM3332PUS)

Modernizing IT with Server-Centric Architecture Powered by VMware vSAN and VxRail Hyper-Converged Infrastructure Appliance

With server-centric IT, data centers can offer consumable services that match the public cloud yet offer better security, cost efficiency, and performance. This session will discuss the benefits of x86-based server-centric IT and the convergence of technologies and industry trends that now enable it. The session will also explain how customers can realize server-centric IT with VMware vSAN and the Dell EMC VxRail appliance family, along with detailed analysis demonstrating these claims. (Session: STO1742BU)

The Software-Defined Storage Revolution Is Here: Discover Your Options

During the past decade, web-scale companies have shown how to operate data centers with ruthless efficiency by utilizing software-defined storage (SDS). Now, enterprises and service providers are joining the SDS revolution to achieve radical reductions in storage lifecycle costs (TCO). Learn how Dell Technologies’ SDS portfolio is revolutionizing the data center as we highlight customer success stories of VMware vSAN and Dell EMC ScaleIO. (Session: STO1216BU)

Come spend some time with Dell EMC experts to see how we can work together to help you achieve your business and IT initiatives. And be sure to follow us at @DellEMCServers for daily updates, on-site videos, and more.

 



FPGAs – Use Cases in the Data Center


I’m on the road quite a bit and get the opportunity to engage many customers on a range of topics and problems.  These discussions provide direct feedback that helps the Server team focus on customer-oriented problems and potential challenges vs. creating technology looking for a home. In earlier blogs, I mentioned how the performance CAGR was not keeping up just as new problems were emerging.

Previously, we believed the impact of Moore’s Law on FPGAs (Field Programmable Gate Arrays) would be more profound than ever – before, it seemed FPGAs were never quite big enough, couldn’t run fast enough and were difficult to program.  Technology moves quickly, and those attributes of FPGAs have changed a lot – they are certainly big enough now, clock rates are up, you can even get an embedded ARM core, and lastly the programming has improved a lot.  OpenCL has made it easier and more portable – NOTE: I said easier, NOT easy – but for the right problem the results make it worthwhile.

Let me do some context setting on where FPGAs work best – this is not an absolute, but rather some high-level guidance.  If we take a step back, it’s clear that we’ve been operating in a world of Compute Intensive problems – meaning problems where you move the data to the compute because you are going to crunch on it for a result.  Generally, this has been a lot of structured data, convergence algorithms and complex math, and general purpose x86 has been awesome at these problems. Sometimes we also throw GPUs at the problem – especially in life sciences.

But there is a law of opposites. The opposite of Compute Intensive is Data Intensive.  Data Intensive means simple, unstructured data that is only used for simple operations.  In this case, we want the compute and simple operators to move as close to the data as possible.  For example, if you’re trying to count the number of blue balls in a bucket, that’s a pretty simple operation that’s data intensive – you’re not trying to compute the next digit of π.  Computing the average size of each ball in the bucket would be more compute intensive.

The law of opposites for general purpose compute is optimized compute… that one is easy.  Plot compute-intensive vs. data-intensive on one axis and general purpose vs. optimized compute on the other, and you get four quadrants showing where the various technologies fit best.

But why are CPUs not great for everything, and why are we talking about FPGAs today?  Well, CPUs are built around the memory-cache hierarchy – data moves from DRAM to cache to registers before the CPU can operate on it – and with a general purpose CPU, simple math requires just as much data movement as complex math.  In this new world of big unstructured data, that memory-cache hierarchy can get in the way.

Consider the classic linked-list pointer-chasing problem: when a general purpose CPU traverses a linked list, nearly every head/tail pointer fetch misses the cache due to the data’s unstructured layout, and so the CPU does a cache line fill – generally 8 datums.  But only the head/tail pointer was needed, which means 7/8ths of the memory bus bandwidth was wasted on unnecessary accesses – potentially blocking another CPU core from getting a datum it needed. Therein lies a big problem for general purpose CPUs in some of the new problems we face today.
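
A minimal C illustration of the effect (the 64-byte cache line and 8-byte pointer sizes assume a typical x86 server):

```c
/* Walking a linked list defeats the cache: each hop pulls in a full
 * 64-byte cache line but uses only the 8-byte next pointer, wasting
 * ~7/8ths of the memory bandwidth spent on that line. */
#include <stddef.h>

struct node {
    struct node *next;   /* the 8 bytes actually needed per hop      */
    char payload[56];    /* rest of the 64-byte line, untouched here */
};

size_t list_length(const struct node *head)
{
    size_t n = 0;
    while (head) {           /* nearly every hop is a cache miss: */
        head = head->next;   /* the next node can live anywhere,  */
        n++;                 /* so the prefetcher cannot keep up  */
    }
    return n;
}
```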

 

Now, let’s focus on some real world examples:

As mentioned earlier, programming is now simpler (simpler – NOT easy).  Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, DSPs, and FPGAs; kernels are written in a C-based language, driven by host APIs for C and C++.   OpenCL provides a standard interface for parallel computing using task- and data-based parallelism.  A quick example follows.
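
To give the flavor, here is a minimal data-parallel kernel sketch of my own (names are illustrative) – each work-item converts one RGBA pixel to grayscale, independently of all the others:

```c
/* OpenCL C kernel sketch: one work-item per pixel, fully data-parallel. */
__kernel void grayscale(__global const uchar4 *in,   /* RGBA input pixels */
                        __global uchar *out,         /* gray output       */
                        const int n)
{
    int i = get_global_id(0);          /* unique index of this work-item */
    if (i < n) {
        uchar4 p = in[i];
        /* integer luma approximation (BT.601 weights scaled by 256) */
        out[i] = (uchar)((77 * p.x + 150 * p.y + 29 * p.z) >> 8);
    }
}
```

The same kernel source can be compiled for a CPU, a GPU, or (through the vendor’s offline compiler) an FPGA – which is what makes the portability claim real.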

Now, I’ll walk you through two examples that we’ve worked on in the Server Solutions Group to prove out FPGA technology and make sure our server platforms intercept the technology when it’s ready.

Problem #1:  “Drowning in pictures, save me…..”

Say you’re a picture gallery site, social media service, etc. that wants end users to upload full-size images from their megapixel smartphones so they can enjoy them on a wide range of devices and screen sizes – how do you solve this problem?  The typical approach is scale-out compute, resizing as needed for the end device. However, as discussed above, this isn’t a great fit for general purpose compute: it scales at a higher cost, and you must manage the scale-out.  Other options are batch processing and saving static images at every size you need – a blowout storage problem.  Or force the end user’s device to resize, but then you must send down the entire image – blowing out your network and delivering a poor customer experience.

To avoid all of the above, we decided to do real-time resize offload on the FPGA.  For large images we saw around a 70x speedup, and about 20x on small images.  We consolidated 20-70 servers into 1 – saving power and cost while increasing performance – an easy TCO story. So now the CPU handles the request for resized images and their delivery, while the FPGA processes the images.
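
A condensed sketch of what that host-side flow can look like with the standard OpenCL 1.2 C API – the file and kernel names are illustrative, error handling and resource releases are trimmed, and the exact bitstream-loading step is vendor-specific:

```c
/* Host-side offload flow (OpenCL 1.2 C API), heavily condensed. */
#include <CL/cl.h>

void fpga_resize(const unsigned char *bitstream, size_t bits_len,
                 const unsigned char *src, size_t src_bytes,
                 unsigned char *dst, size_t dst_bytes)
{
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_ACCELERATOR, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* On an FPGA the program object comes from an offline-compiled
     * binary (the bitstream), not from runtime source compilation. */
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &dev, &bits_len,
                                                &bitstream, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "resize", NULL);  /* illustrative name */

    cl_mem in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  src_bytes, NULL, NULL);
    cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, dst_bytes, NULL, NULL);
    clEnqueueWriteBuffer(q, in, CL_TRUE, 0, src_bytes, src, 0, NULL, NULL);

    clSetKernelArg(k, 0, sizeof(cl_mem), &in);
    clSetKernelArg(k, 1, sizeof(cl_mem), &out);

    size_t gws = dst_bytes;   /* one work-item per output byte */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gws, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, out, CL_TRUE, 0, dst_bytes, dst, 0, NULL, NULL);
}
```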

Problem #2: “I have all these images, and I’d like to sort them by feature”

Digital content is everywhere, and we’re moving from text search to image search. Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness. Edge detection is also used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision.  In this example, we simply wanted to see what we could accomplish on a CPU and an FPGA.  We started on the CPU with OpenCL and quickly discovered that the performance was not up to par… less than 1 FPS (frame per second).  The compiler was struggling, so we manually unrolled the code to swamp every core (all 32 of them) and got up to 110 FPS.  But at 85% CPU load across 32 cores, you could barely move the mouse.

The next step was the same OpenCL code (different #defines), targeted at an FPGA.  With the FPGA and the parallel nature of the problem, we could hit 108 FPS – and in the FPGA offload case the CPU was ONLY 1% loaded, so we had a server with compute cycles left to do something useful.  To experiment, we went back to the CPU, forced a 1% CPU load limit, and found we could not even get 1 FPS.   The point being: in this new world of different compute architectures and emerging problems, “it depends” will come up a lot.
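
Our exact code isn’t reproduced here, but an edge-detection kernel in OpenCL C looks roughly like this 3x3 Sobel sketch (one work-item per pixel; names illustrative):

```c
/* 3x3 Sobel edge detection sketch: each work-item computes one output
 * pixel, which is why the problem maps so well onto FPGA parallelism. */
__kernel void sobel(__global const uchar *in, __global uchar *out,
                    const int w, const int h)
{
    int x = get_global_id(0), y = get_global_id(1);
    if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;

    /* horizontal and vertical gradients from the 3x3 neighborhood */
    int gx =   -in[(y-1)*w + x-1]                       + in[(y-1)*w + x+1]
             - 2*in[ y   *w + x-1]                     + 2*in[ y   *w + x+1]
               - in[(y+1)*w + x-1]                       + in[(y+1)*w + x+1];
    int gy =   -in[(y-1)*w + x-1] - 2*in[(y-1)*w + x]    - in[(y-1)*w + x+1]
               + in[(y+1)*w + x-1] + 2*in[(y+1)*w + x]   + in[(y+1)*w + x+1];

    int mag = (gx < 0 ? -gx : gx) + (gy < 0 ? -gy : gy);  /* L1 gradient */
    out[y*w + x] = (uchar)(mag > 255 ? 255 : mag);
}
```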

Future Problems

In the future, emerging workloads and use cases will continue to drive the need for new and different compute.  Every company will become a data compute company and must optimize for these new uses. If not, they are open to disruption by those who embrace change more aggressively.  FPGAs can be a part of this journey when applied to the right problem. Machine learning inference is a great example; network protocol acceleration/inspection, image processing as shown, and other workloads can likewise benefit from the reprogrammable nature of FPGAs.

Summary

So, FPGAs can be really useful and can help solve real-world problems.  Ultimately, we are heading down a path of more heterogeneous computing, where you will hear “it depends” more than you might like.  But, as my Dad says, “use the right tool for the right job.”  If you have questions about how to use FPGAs in your solutions, contact your Dell EMC account rep.  Maybe we can help you too.

(The data in this blog was made possible by the awesome FPGA team in the Server Solutions Group CTO Office – Duk Kim, Nelson Mak, Krishna Ramaswamy)

 


PowerEdge and VMware: HCI Innovation for the New Data Center


The data center is evolving at a rapid pace. As architectures adapt and grow in complexity to take advantage of new applications and technologies, new challenges always creep in.

“How do I adapt to unknown demands?”

“How do I manage all of this complexity?”

“Will my storage performance keep up?”

A natural progression towards a more server-centric architecture, along with the increasing adoption of Hyper-Converged Infrastructure (HCI), is addressing these core challenges. To further drive this adoption, Dell EMC has designed HCI-focused tools that integrate into your existing data center and keep it moving forward.

Our HCI architectures easily scale with the addition of nodes, while compute and storage virtualization offers more resiliency in case of a failure. For management simplicity, the OpenManage vCenter plug-in integrates server management into your vCenter console.

The rise of flash significantly changes storage performance. I/O challenges are no longer solved by increasing the number of spindles; now you solve them with flash. For example, a traditional HDD provides about 400 4K IOPS, while a PCIe NVMe SSD provides up to 750,000 4K IOPS.
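
The spindle math behind that claim is stark, using the round numbers above:

```c
/* How many spindles it takes to match one NVMe device, per the quoted
 * figures -- the reason "add more disks" stopped scaling. */
#include <stdio.h>

int main(void)
{
    int hdd_iops  = 400;      /* quoted 4K IOPS for one HDD      */
    int nvme_iops = 750000;   /* quoted 4K IOPS for one NVMe SSD */
    printf("HDDs to match one NVMe SSD: %d\n", nvme_iops / hdd_iops);
    return 0;   /* prints 1875 */
}
```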

With the launch of the new generation of PowerEdge servers and vSAN Ready Nodes, Dell EMC and VMware combine to create a scalable infrastructure that delivers intelligent automation and a secure foundation with no compromises.

With VMware vSAN you get the industry-leading software powering HCI solutions, and with Dell EMC PowerEdge you get the world’s best-selling server. This combination of VMware vSAN with PowerEdge provides up to 12X more IOPS in a vSAN cluster and 98% less latency, as well as a fully integrated management solution with vSAN Ready Nodes.

vSAN Ready Nodes from Dell EMC are pre-configured and easily orderable through templates optimized by profiles and use cases. A new factory installation process utilizes Boot Optimized Server Storage (BOSS) cards, installing the hypervisor on a robust bootable device without sacrificing drives from your vSAN storage capacity.

The Dell EMC vSAN Ready Nodes are pre-configured and validated building blocks that offer the most flexibility while reducing deployment risks, improving storage efficiency, and allowing you to scale at the speed your business desires.

By choosing a Dell EMC PowerEdge vSAN Ready Node with OEM licensing, and our latest vSAN Pro Deployment services, you get a single point of contact for sales and support for the entire solution.

For more information on Dell EMC vSAN Ready Nodes, visit: dell.com/en-us/work/shop/povw/virtual-san-ready-nodes


Transformational Data Protection for Virtualized Mission Critical Applications


There is a wave of change sweeping across IT infrastructure. With the shift to cloud and SDDC, data protection must transform.

The journey to transformation begins with modernizing the current environment, which focuses on delivering IT with fewer people, less time, and less money.  The key is simplification.

Things really begin to change when backup stops being a separate product, managed by a separate team, run in its own silo. Instead, data protection becomes a feature – an integrated part of the environment. In this new world, applications are deployed already protected. Backups and recoveries should be faster and more streamlined, with a minimal hardware footprint and a lower impact on the production environment.

What does a transformed environment look like?  Data Protection becomes automatic.

The goals here are to speed up backups and recoveries, reduce duplication of costly backup tasks, and empower the application and database owners directly responsible for the data with self-service protection – with guardrails. And to accomplish this under IT control, with central visibility and oversight of backups by IT administrators, while also solving the difficult problem of protecting large, virtualized, high-change-rate mission critical apps.

Express Data Path to Protect Virtualized Mission-Critical Applications

With the increasing prevalence of very large (15 TB+), fast-changing, virtualized mission-critical databases, traditional data protection approaches struggle to keep up.  Application and data owners require speed and control to meet stringent Service Level Objectives (SLOs) for mission critical applications. By decoupling backup software from the data path, administrators can back up directly to protection storage, gaining up to 5x faster backups compared to traditional backup solutions.

With the updated Dell EMC Data Protection Suite for Applications, application owners are empowered to use native application interfaces to perform backups from the VMware hypervisor directly to Dell EMC Data Domain.  Additionally, admins gain discovery, automation and provisioning for the entire data center from storage, applications and VMs.  Impact on application servers during backup windows is significantly reduced, as little or no data flows through the application server.

Empower Administrators With Self-Service and Automation With IT Governance and Control

Modern data protection designed for self-service requires intelligent, consolidated oversight of data and service levels across the business to ensure protection SLO compliance is met.  Data Protection Suite for Applications provides self-service data protection capabilities with global oversight to maximize efficiency, streamline operations and ensure consistent service level compliance.  With the ability to discover physical and virtual SQL and Oracle databases, as well as VMware copies, administrators get faster time to value via automated VM discovery and provisioning, and – by sending data directly to Data Domain – eliminate duplicate work streams and the creation of unnecessary storage silos.

Data Protection Suite for Applications non-disruptively discovers copies across the enterprise and automates protection SLO compliance.  Existing data copies are non-disruptively discovered to gain consolidated oversight of what already exists in the environment.  This also allows admins to maintain self-service by enabling storage administrators and DBAs to continue creating copies from their native interfaces instead of inserting a solution into the data path, and fully within the governance, SLOs and oversight of their IT data protection regime.

Empowering application owners with the ability to use native tools within their applications provides the control they desire, including the ability to set, monitor and enforce SLOs.  And, by enabling cooperation and coordination between protection administrators and data owners, administrative overhead is reduced.

Summary

Architected for the modern and software driven data center, Dell EMC data protection provides automation across the entire data protection stack, delivers simpler scalability and faster performance, and protection for a broader scope of VMware workloads, including workloads in the cloud and mission-critical I/O intensive applications.


Dell EMC Delivers Superior VMware Data Protection Designed For the Modern, Software-Defined Data Center


It’s Day One of VMworld and Dell EMC is excited to discuss our powerful data protection solutions for VMware environments.  VMware has been a driving force behind the transition to the modern, increasingly software-defined data center (SDDC) and the movement to the cloud. Dell EMC ensures businesses are able to protect their data throughout this transition by enabling simplified management, easier scalability, multi-faceted support for the cloud, and comprehensive protection for a wide gamut of virtualized workloads, including mission-critical, IO intensive applications and databases. This is accomplished through our native vSphere integration and software defined architecture, which enables extensive automation and high performance.

Protecting VM workloads is complex and presents challenges that are not easily met by data protection solutions built for legacy SAN based architectures. These challenges include: VM sprawl as a result of the growth of virtualization and the ease of spinning up new VMs, increasingly stringent protection requirements due to government regulations and more critical applications moving to VMware, and shrinking backup windows.

Other data protection solutions are not architected for the SDDC and cannot adequately deal with these challenges. They cannot scale efficiently as the number of VMs goes up, can only protect a subset of applications, have inflexible networking requirements leading to complex and costly architectures, and offer limited automation resulting in increased operational costs.

Dell EMC Data Protection simplifies the difficult process of protecting data in fast growing VMware environments. Our solutions provide a more modern and simpler to manage and scale software defined architecture for VMware protection with:

  • Automation across the entire VMware protection stack (VM backup policy, data movers/proxies and backup storage) with simpler scaling to more VMs without media server sprawl – Less than 5 minutes to deploy and configure a virtual proxy
  • Best-in-class performance and data efficiency with less capacity and bandwidth required – a 72x deduplication rate and 98% reduction in network usage (see the quick arithmetic after this list)
  • Transformational management functionality that is designed to enable self-service to vAdmins, including native integration with vSphere UI and vRA, with oversight by backup/infrastructure admins.
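
How a deduplication ratio turns into those reduction percentages is simple arithmetic – a quick sketch using the 72x figure quoted above:

```c
/* Mapping a dedup ratio to capacity/bandwidth reduction. */
#include <stdio.h>

int main(void)
{
    double dedup = 72.0;                 /* "72x deduplication"      */
    double stored = 100.0 / dedup;       /* % of logical data stored */
    printf("stored:    %.1f%%\n", stored);          /* ~1.4%  */
    printf("reduction: %.1f%%\n", 100.0 - stored);  /* ~98.6% */
    return 0;
}
```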

Dell EMC Data Protection also provides comprehensive protection for your VMware environment today and tomorrow. Our solutions provide protection for virtualized mission critical IO intensive applications, support for all phases of your journey to the cloud, and are optimized for converged infrastructures:

  • Hypervisor direct backup and restore for IO intensive applications and databases for superior performance – 5x faster backups (New)
    • SLO driven protection built to enable self-service with consolidated oversight and automation
  • Support for all phases of cloud:
    • Long term retention to the cloud
    • Disaster recovery in the cloud (AWS)
    • Protect workloads in the cloud, including VMware Cloud workloads on AWS (New)
  • Optimized data protection for Dell EMC VxRail Appliance (hyper converged architecture) and pre-configured, easy to deploy, comprehensive data protection with the Integrated Data Protection Appliance

Combine all this with our flexible consumption model that lets businesses pay for their data protection in a way that makes the most financial sense, and our customers can rest assured that their VMware environment is protected with a low-TCO solution (Protection Storage, Protection Software, as well as an Integrated Appliance) that is ideal for today and designed to help them modernize and transform to a software defined data center.


The #1 Reason Ransomware Works


Ransomware can be a nightmare, but it doesn’t have to be.

Last May, some 200,000 PC owners worldwide suffered a costly wake-up call from the aptly named WannaCry cryptoworm that encrypted their data. The attackers demanded $300 in bitcoin payments to unlock the data. Hospitals, universities, automakers and logistics firms were among those reported hit, but the worst affected were small and medium-size businesses.

Why those businesses? Because their data wasn’t sufficiently protected with complete backups, as larger enterprises’ data typically is. And that’s the whole reason ransomware works: victims can’t afford to lose their data, so they pay up, even if payment is no guarantee they’ll get their data back.

With WannaCry, hundreds paid the ransom but got nothing in return. And if Dell EMC’s 2016 Global Data Protection Index is any indication, ransomware can still find lots of targets: 90 percent of the survey’s respondents are behind the curve in the maturity of their data protection strategies.

Dell EMC aims to help them change that with practical, cost-effective and greatly simplified data protection (DP) that works 24×7 across all kinds of data workloads, no matter where they reside – on-premises, in the cloud or some hybrid of the two.

Growing Needs for Comprehensive Data Protection

Organizations of all sizes need DP. While anti-malware and anti-virus tools are necessary to protect data from threat actors, they are not sufficient to guard against phishing attacks, which are how ransomware and advanced persistent threats circumvent defense-in-depth strategies. Organizations also need DP’s backup and recovery capabilities to provide business continuity in the case of system failures or disasters.

So why do the DP strategies and practices of so many organizations fall short? Here are some important reasons:

  • Data growth: Keeping up with the growth of both structured and unstructured data from various sources is one of IT’s biggest challenges affecting DP practices.
  • Complexity: Managing DP for complex IT environments — especially diverse platforms and applications — can be complicated, often requiring time-consuming management of many legacy point solutions already in place.
  • Inconsistency: Inconsistent backup processes can arise from trying to protect applications and data in varied places — from on-premises infrastructures to virtualized environments to private, public and hybrid clouds.
  • Cloud migrations: More and more enterprises are migrating workloads to the cloud, but they lack the tools to protect their data residing there, or they mistakenly believe their cloud provider will provide that protection.
  • Copy data sprawl: By some estimates, 60 percent of all data storage is a copy of some kind, whether for disaster recovery, development and testing, or application maintenance – just to name a few. And a typical database environment can have 5–12 copies for each production instance (a quick worked example follows this list).
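
A worked example of how quickly that sprawl adds up, with illustrative inputs inside the quoted 5–12 range:

```c
/* Copy-sprawl arithmetic: copies quickly dwarf production data. */
#include <stdio.h>

int main(void)
{
    double prod_tb = 10.0;   /* hypothetical production database        */
    int copies = 8;          /* within the quoted 5-12 per instance     */
    printf("%.0f TB of copies for %.0f TB of production data\n",
           prod_tb * copies, prod_tb);
    return 0;   /* 80 TB of copies for 10 TB of production */
}
```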

As a result, IT groups, especially their DP specialists, can have trouble meeting backup windows and SLAs governing goals for recovery point objective (RPO) and recovery time objective (RTO). Regulatory compliance and audits can be issues, too.

Dell EMC Streamlines the Data Protection Process

To simplify and streamline the DP process, Dell EMC has introduced its updated Data Protection Software Enterprise Edition. This set of 11 powerful DP tools covers the entire DP continuum of availability, replication, snapshots, backup and archiving.

For more specific use cases — such as heavily virtualized environments, mission-critical apps and archive-only solutions — specific tool sets are available in four packages: Data Protection Software for Backup; Data Protection Software for VMware; Data Protection Software for Applications; and Data Protection Software for Archive.

Data Protection Software Enterprise Edition can protect everything from user laptops to the largest data centers. It enables long-term retention of data to private, public or hybrid clouds, too. With deduplication rates of up to 99 percent plus a tight integration with Dell EMC Data Domain infrastructure, the Dell EMC Data Protection software can help lower DP’s overall TCO dramatically by reducing:

  • Storage capacity requirements for backups by up to 97 percent
  • Network traffic by up to 99 percent
  • Backup times by 50 percent

What’s more, the Dell EMC Data Protection Software Enterprise Edition delivers a consistent user experience for DP administrators across all its different tools to minimize learning and maximize productivity. At the same time, organizations can gain global data protection and copy oversight without compromising self-service workflows of their lines of business.

Dell EMC Data Domain Solutions Enhance the DP Benefits

Dell EMC also offers a broad portfolio of Data Domain storage infrastructure solutions for enterprises of all sizes. This includes the recently introduced all-in-one Integrated Data Protection Appliance and the software-defined Data Domain Virtual Edition.

The different Data Domain solutions can scale up to 150 PB of logical capacity managed by a single system. They can also achieve backup speeds of up to 68 TB/hour, making it possible to complete more backups in less time and provide faster, more reliable restores.
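
That throughput figure translates directly into backup-window math – a rough check with a hypothetical data set size:

```c
/* Backup-window estimate from the quoted peak throughput. */
#include <stdio.h>

int main(void)
{
    double data_tb = 500.0;    /* hypothetical protected data set */
    double rate_tb_hr = 68.0;  /* quoted peak backup speed        */
    printf("full backup: %.1f hours\n", data_tb / rate_tb_hr);
    return 0;   /* ~7.4 hours -- fits an overnight window */
}
```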

Check out Gartner’s 2017 Magic Quadrant for Data Center Backup and Recovery Solutions and see Dell EMC’s leading position among all DP offerings. Then get more information on the full line of Dell EMC Data Protection software and Data Domain options, to discover which DP solution can meet your organization’s requirements.

