Channel: Blog | Dell

The Business Case for Hyper-Converged Infrastructure


Earlier this year, Wikibon proposed a theory that “the optimum hybrid cloud strategy” is based on an integrated hyper-converged infrastructure (HCI) as opposed to the traditional enterprise architecture built on separate servers and separate traditional SAN.

To test and validate their premise, Wikibon experts performed a three-year comparison of a traditional white box/storage array versus a hyper-converged system. The hyper-converged system they selected was Dell EMC’s VxRack FLEX, which is based on ScaleIO software-defined storage. Through their research, they were able to analyze and compare costs across four main categories: initial investment, time-to-value, cost over time and ease of use.

While you can read all the details on the approach and methodology for comparing these two instances in the full report here, I’d rather jump right into the juicy stuff: the results!

Initial Investment

The initial investment for a traditional SAN solution was $734K versus $405K for a hyper-converged system. To give you the full picture, there are several reasons why HCI solutions are less expensive to purchase up front. Not only are the equipment costs lower (storage arrays are replaced with DAS, and specialized Fibre Channel networking and HBAs are no longer required), but most customers only buy what they need for the short term rather than over-purchasing infrastructure to support unknown future requirements, which is common when the infrastructure is intended to last for multiple years. That’s the power of HCI – it enables you to start small and grow incrementally as your demand for resources increases. But the savings and cost discrepancies didn’t stop there.

Time-to-Value

Practitioners who took part in deploying the solutions onsite found that it took 33 days to configure, install and test the traditional white box/storage array as opposed to the hyper-converged infrastructure which took a mere 6 days. The time-to-value for hyper-converged was then calculated to be 6x faster. This is one of the key benefits of an engineered system. IT no longer bears the burden of architecting the solution, procuring separate hardware and software from multiple sources, and installing and testing the configuration before migrating workloads to production. Instead, the system is delivered pre-integrated, pre-validated and ready for deployment.

Cost Over Time

In addition to the upfront costs, the three-year comparison showed maintenance and operational expenses increasing the total cost for a traditional storage array to $1.3 million, while maintenance and operations costs for hyper-converged came out to a total of $865K over three years. The main driver behind the savings is that HCI solutions greatly simplify IT operations in a number of ways: HCI provides a more efficient environment for those responsible for managing and provisioning storage; the ease of provisioning storage enables application developers to do more; and automation, streamlined provisioning, and higher reliability save IT staff time on activities such as data migrations and capacity planning. There are also substantial savings from lower operating expenses such as power, cooling, and data center space. Other staggering numbers from the study included the following:

  • Traditional solutions are 47% more expensive than a hyper-converged solution
  • Storage costs for the traditional approach are 100% higher
  • Overall, a hyper-converged solution is $400K less expensive (a quick arithmetic check on these totals follows below)
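As a quick sanity check on those figures, here is a small, purely illustrative C sketch that reproduces the arithmetic from the three-year totals quoted above; the report’s published 47% and $400K figures presumably round from its exact numbers, and the rough calculation below lands in the same ballpark.

```c
#include <stdio.h>

int main(void) {
    /* Three-year totals quoted above (USD) */
    double traditional = 1300e3;  /* white box servers + traditional SAN */
    double hci         = 865e3;   /* VxRack FLEX hyper-converged system  */

    double savings = traditional - hci;                   /* absolute delta   */
    double premium = (traditional / hci - 1.0) * 100.0;   /* % more expensive */

    printf("Three-year savings with HCI: about $%.0fK\n", savings / 1e3);    /* ~ $435K */
    printf("Traditional premium: about %.0f%% more expensive\n", premium);   /* ~ 50%   */
    return 0;
}
```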

Ease of Use 

In this last category, all practitioners interviewed, except one, agreed that the hyper-converged solution was easy to upgrade (more so than the traditional architecture), easy to manage and that it performed well. This makes sense given that HCI is typically managed from a central console that controls both compute and storage and automates many traditional manual functions, helping to reduce the number of tools that have to be learned and used. HCI systems also have full lifecycle management – meaning they are fully tested, maintained, and supported as one system over the entire lifespan. As a result of these efficiencies, people costs for HCI are much lower than the traditional approach. In fact, over a three year time span, people costs are 600% higher for traditional SANs.

So, when looking at the TCO breakdown, hyper-converged looks like the way to go over a traditional storage approach. HCI offers lower equipment and operational costs, a lower-latency and higher-bandwidth infrastructure, as well as a “single hand to shake and a single throat to choke” for vendor support and service needs.

Dell EMC’s VxRack FLEX is a fully tested, pre-configured, hyper-converged system that meets all of these criteria while offering extreme scalability (start small and grow to 1,000+ nodes). It truly provides a turnkey experience that enables you to deploy faster, improve agility and simplify IT. You can download the report here to read about all the findings from the research or you can skip right on over to the Dell EMC VxRack website to start exploring your options.

ENCLOSURE:https://blog.dellemc.com/uploads/2017/08/Desk-Laptop-Tea-Briefcase-1000x500.jpg


With VMware Embedded, OEMs Have Even More Options


What motivates you to get up every day? For me, the answer is pretty simple: I’m inspired by the work of our customers and partners. I’m talking innovative solutions that have the power to radically transform lives and improve business productivity – whether it’s an MRI machine in a hospital, a way for farmers to measure and improve crop growth, a smart building that is responsive to occupant needs or an industrial-strength PC helping to run a factory line more smoothly. At the end of the day, it’s all about technology simplifying things, improving lives and making business more efficient.

In fact, the whole focus of the Dell EMC OEM division is to help our customers and partners bring their innovative solutions to market as quickly as possible. That’s precisely why Dell EMC OEM is the first (and only) tier-one technology vendor to offer VMware Embedded.

VMware Embedded – a Way to Expand Into the Virtual Space

VMware Embedded is a pre-configured, turnkey virtualization solution that offers a new way for OEMs to increase revenue and expand into the virtual space. In a nutshell, this offering, with a resellable licensing model, enables OEMs to sell more products, more efficiently. Additionally, customers have the option to purchase VMware Embedded with Dell EMC hardware, such as PowerEdge servers, or as a standalone product to streamline their supply chain.

Why Virtualization Matters

We have all seen the trend of businesses tapping into virtualization to gain longer solution life cycles, take advantage of cloud agility, reduce downtime and improve security. As a result, virtualization has become a key priority for a majority of enterprise solutions built by OEMs and ISVs.

Now with VMware Embedded, customers have the option to run it as a virtual appliance, install it on a physical appliance or use it in Dell EMC OEM’s datacenter as a managed service offering. This maximizes efficiency and extends the lifecycle of the OEM’s solution, ultimately benefiting the end customer.

Why VMware Is Great for OEMs

As an OEM, you can deliver VMware updates as a value-add service – and at a release cadence that matches your timelines – while serving as the single point of contact for support. To help decrease costs of goods sold and speed time-to-revenue, Dell EMC OEM will work with you to validate designs, align go-to-market strategies and create roadmaps. OEMs can also choose from a wide range of licensing and pricing models, including OEM sublicensing and global distribution rights, without multiple contracts.

For me, this is the main benefit of VMware Embedded – it enables our OEMs to provide quality support of VMware across all deployment models, offering advantages to customers in multiple markets, including manufacturing, video surveillance, communications, gaming, financial services, energy, healthcare, storage and networking.

But don’t take my word for it – this is what Darren Waldrep, vice president of strategic alliances at Datatrend, a Dell EMC OEM Preferred Partner, had to say. “Dell EMC and VMware’s embedded offering is a competitively priced solution that we are excited to offer our customers. VMware Embedded creates a much easier route to market for Dell EMC OEM partners and integrators, like ourselves.” Waldrep specifically highlighted Dell EMC’s and VMware’s “best of breed technologies” and our commitment to truly enabling the channel to deliver best pricing and experience for the end customer.

As we move deeper into the era of digital transformation, the need for speed will be imperative – no matter the industry. Understanding the unique needs of our customers and helping them to adapt to the constantly changing market is what will allow you as an OEM to thrive.

Check out the datasheet or visit Dell EMC OEM in the Dell EMC booth #400 at VMworld, Aug. 27-31 in Las Vegas. We hope to see you there!

ENCLOSURE:https://blog.dellemc.com/uploads/2017/08/City-Night-1000x500.jpg

PowerEdge at VMworld 2017: What Happens in Vegas Rules Your Data Center


It’s almost time for VMworld 2017. Less than a year ago Dell, EMC and VMware all combined to form the largest privately held technology company in the world. Recently, Dell EMC launched the 14th generation of PowerEdge servers, or the FIRST generation of Dell EMC PowerEdge servers.

The shared vision of Dell EMC and VMware is realized in PowerEdge servers – what we call the bedrock of the modern data center. Whether it’s traditional applications and architectures, Hyper-Converged or Cloud, PowerEdge is the singular platform that forms your entire data center foundation.

At VMworld 2017 we’ll be showcasing our full PowerEdge portfolio and you’ll see how PowerEdge is at the heart of data center infrastructure.

There are a number of exciting sessions highlighting the integration of Dell EMC and VMware.

State of the Union: Everything Multi-Cloud, Converged, Hyper-Converged and More!

You have questions? Chad Sakac has answers! The technology landscape is continuously evolving, making it a challenge for IT leaders to keep up. Chad will relate Dell Technologies’ perspective on multi-cloud, converged and hyper-converged, data analytics, and more. (Session: UEM3332PUS)

Deliver Real Business Results through IT Modernization

As businesses embark on new digital transformation initiatives, we’re seeing them simultaneously transform their core IT infrastructure. You will learn how Dell EMC and VMware together are driving advancements in converged systems, servers, storage, networking and data protection to help you realize greater operational efficiencies, accelerate innovation and deliver better business results. (Session: STO3333BUS)

Modern Data Center Transformation: Dell EMC IT Case Study

In this session, learn how Dell EMC IT is leveraging VxRail/Rack, ScaleIO and more to modernize its infrastructure with flash storage, converged and hyper-converged systems and software defined infrastructure, where all components are delivered through software. (Session: PBO1964BU)

Workforce Transformation: Balancing Tomorrow’s Trends with Today’s Needs

The way we work today is changing dramatically, and organizations that empower their employees with the latest technologies gain a strategic advantage. In this session, you will learn how to modernize your workforce with innovative new devices and disruptive technologies designed to align with the new ways that people want to work and be productive. (Session: UEM3332PUS)

Modernizing IT with Server-Centric Architecture Powered by VMware vSAN and VxRail Hyper-Converged Infrastructure Appliance

With server-centric IT, data centers can offer consumable services that match the public cloud yet offer better security, cost efficiency, and performance. This session will discuss the benefits of x86-based server-centric IT and the convergence of technologies and industry trends that now enable it. The session will also explain how customers can realize server-centric IT with VMware vSAN and the Dell EMC VxRail appliance family, along with detailed analysis demonstrating these claims. (Session: STO1742BU)

The Software-Defined Storage Revolution Is Here: Discover Your Options

During the past decade, web-scale companies have shown how to operate data centers with ruthless efficiency by utilizing software-defined storage (SDS). Now, enterprises and service providers are joining the SDS revolution to achieve radical reductions in storage lifecycle costs (TCO). Learn how Dell Technologies’ SDS portfolio is revolutionizing the data center as we highlight customer success stories of VMware vSAN and Dell EMC ScaleIO. (Session: STO1216BU)

Come spend some time with Dell EMC experts to see how we can work together to help you achieve your business and IT initiatives. And be sure to follow us at @DellEMCServers for daily updates, on-site videos, and more.

 

ENCLOSURE:https://blog.dellemc.com/uploads/2017/08/Circular-Staircase-1000x500.jpg

FPGAs – Use Cases in the Data Center


I’m on the road quite a bit and get the opportunity to engage many customers on a range of topics and problems. These discussions provide direct feedback that helps the Server team focus on customer-oriented problems and potential challenges rather than creating technology looking for a home. In earlier blogs, I mentioned how the performance CAGR was not keeping up at the same time that new problems were emerging.

Previously, we believed the impact of Moore’s Law on FPGAs (Field-Programmable Gate Arrays) would be more profound than ever – before, it seemed FPGAs were never quite big enough, couldn’t run fast enough and were difficult to program. Technology moves quickly and those attributes of FPGAs have changed a lot – they are certainly big enough now, clock rates are up, you can even get an embedded ARM core, and lastly the programming has improved a lot. OpenCL has made it easier and more portable – NOTE: I said easier, NOT easy – but the results for the right problem make it worthwhile.

Let me do some context setting on where FPGAs work best – this is not an absolute but rather some high-level guidance.  If we take a step back, it’s clear that we’ve been operating in a world of Compute Intensive problems – meaning, problems and data that you can move to the compute because you are going to crunch on it for a result.  Generally, this has been a lot of structured data, convergence algorithms and complex math, and general purpose x86 has been awesome at these problems. Also, sometimes we throw GPUs at the problem – especially in life science problems.

But, there is a law of opposites. The opposite of Compute Intensive is Data Intensive. Data Intensive means simple, unstructured data used only for simple operations. In this case, we want the compute and simple operators to move as close to the data as possible. For example, if you’re trying to count the number of blue balls in a bucket, that’s a pretty simple, data-intensive operation – you’re not trying to compute the next digit of π. Computing the average size of each ball in the bucket would be more compute intensive.

The law of opposites for general-purpose compute is optimized compute…that one is easy. Plotting these on X-Y coordinates gives an approximate four-quadrant view of where the various technologies fit best.

But why are CPUs not great for everything, and why are we talking about FPGAs today? Well, CPUs are built around a memory-cache hierarchy that moves data from DRAM to cache to registers before the CPU can operate on it – and it takes just as much data movement to do complex math as simple math on a general-purpose CPU. In this new world of big, unstructured data, that memory-cache hierarchy can get in the way.

Consider the linked-list pointer-chasing problem: in a general-purpose CPU, every time you traverse the list and fetch a head/tail pointer, the data’s unstructured layout means a cache miss, so the CPU performs a cache-line fill – generally eight data words. But only the head/tail pointer was needed, which means 7/8ths of the memory bus bandwidth was wasted on unnecessary accesses – potentially blocking another CPU core from getting data it needed. Therein lies a big problem for general-purpose CPUs in some of the new problems we face today.
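To make that cache-line arithmetic concrete, here is a minimal C sketch (illustrative only, not Dell EMC code) of the traversal pattern described above, with each node laid out so the pointer occupies one eighth of a typical 64-byte cache line.

```c
#include <stddef.h>

/* Each node is scattered in memory, so following 'next' typically misses in
 * cache; the CPU then fills a whole 64-byte line even though only the 8-byte
 * pointer is needed, wasting 7/8ths of the fetched bandwidth. */
struct node {
    struct node *next;   /* the only field the traversal touches           */
    long payload[7];     /* rest of the cache line, fetched but never used */
};

long count_nodes(const struct node *head) {
    long n = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        n++;             /* roughly one cache miss (and line fill) per hop */
    return n;
}
```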

 

Now, let’s focus on some real-world examples:

As mentioned earlier, programming is now simpler (simpler – NOT easy). Open Computing Language (OpenCL) is a C-based framework (with C++ bindings) for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, DSPs, and FPGAs. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism. A quick example of the flow is shown below.
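As a stand-in for that flow, here is a minimal host-plus-kernel sketch in plain C against the standard OpenCL API (error handling omitted; the trivial “scale” kernel and buffer names are illustrative assumptions, not code from the Dell EMC work).

```c
#include <stdio.h>
#include <CL/cl.h>

/* OpenCL kernel source: C-like code that the runtime compiles for the
 * selected device (CPU, GPU, DSP or FPGA). */
static const char *src =
    "__kernel void scale(__global const float *in, __global float *out) {"
    "    size_t i = get_global_id(0);"
    "    out[i] = in[i] * 2.0f;"
    "}";

int main(void) {
    float in[1024], out[1024];
    for (int i = 0; i < 1024; i++) in[i] = (float)i;

    /* Typical flow: pick a device, build the kernel, move data to device
     * buffers, launch across a global work size, read the results back. */
    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    cl_mem bin  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 sizeof in, in, NULL);
    cl_mem bout = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof out, NULL, NULL);

    clSetKernelArg(k, 0, sizeof bin,  &bin);
    clSetKernelArg(k, 1, sizeof bout, &bout);

    size_t global = 1024;   /* one work-item per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bout, CL_TRUE, 0, sizeof out, out, 0, NULL, NULL);

    printf("out[10] = %f\n", out[10]);   /* expect 20.0 */
    return 0;
}
```

In practice, targeting an FPGA typically means compiling the kernel ahead of time with the vendor’s offline OpenCL compiler rather than at runtime, but the host-side flow stays essentially the same.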

Now, I’ll walk you through two examples that we’ve worked on in the Server Solutions Group to prove out FPGA technology and make sure our server platforms intercept the technology when it’s ready.

Problem #1:  “Drowning in pictures, save me…..”

Say you’re a picture gallery site, a social media service or something similar, and you want end users to upload full-size images from their megapixel smartphones so the images can be enjoyed on a wide range of devices and screen sizes – how do you solve this problem? The typical approach is to use scale-out compute and resize as needed for the end device. However, as shown above, that’s not a great fit for general-purpose compute: it scales at a higher cost and you must manage the scale-out. Another option is batch processing – saving static images of all the sizes you need – which becomes a blowout storage problem. Or you can force the end-user device to resize, but then you must send down the entire image – blowing out your network and delivering a poor customer experience.

To avoid any of the above options, we decided to do real-time offload resizing on the FPGA. For large images we saw around a 70x speedup, and about a 20x speedup on small images. We consolidated 20-70 servers into one and saved power and cost while increasing performance – easy TCO. So now the CPU handles the requests for resized images and their delivery, but uses an FPGA to process the images. Below is a high-level pictorial.
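The actual resize pipeline behind those numbers isn’t shown in the post, but a hedged sketch of the kind of OpenCL kernel involved – a simple nearest-neighbor downscale of a grayscale image, one work-item per output pixel – illustrates why the per-pixel work maps so naturally onto a parallel device such as an FPGA.

```c
/* Illustrative only: nearest-neighbor downscale of a grayscale image.
 * Each work-item computes one output pixel independently, which is what
 * makes the operation easy to pipeline and parallelize in hardware. */
__kernel void resize_nn(__global const uchar *src, __global uchar *dst,
                        const int src_w, const int src_h,
                        const int dst_w, const int dst_h)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x >= dst_w || y >= dst_h) return;

    int sx = x * src_w / dst_w;        /* nearest source pixel */
    int sy = y * src_h / dst_h;
    dst[y * dst_w + x] = src[sy * src_w + sx];
}
```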

Problem #2: “I have all these images, and I’d like to sort them by feature”

Digital content is everywhere, and we’re moving from text search to image search. Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness. Edge detection is also used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision. In this example, we simply wanted to see what we could accomplish on a CPU and an FPGA. We started on the CPU with OpenCL and quickly discovered that the performance was not up to par – less than 1 FPS (frame per second). The compiler was struggling, so we manually unrolled the code to saturate every core (all 32 of them) and got up to 110 FPS. But at 85% CPU load across 32 cores you could barely move the mouse.

The next step was to take the same OpenCL code (with different #defines) and target an FPGA. With the FPGA and the parallel nature of the problem, we could hit 108 FPS. In the FPGA offload case the CPU was only 1% loaded, so we had a server with compute cycles left to do something useful. To experiment, we went back to the CPU and forced a 1% CPU load limit and found we could not even get 1 FPS. The point being that in this new world of different compute architectures and emerging problems, “it depends” will come up a lot. Below is the data showing the various results I described.
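The benchmark code itself isn’t reproduced in the post, but as an illustration, a minimal Sobel edge-detection kernel of the sort that can be built for either a CPU or an FPGA from the same OpenCL source might look like the following (grayscale input, gradient magnitude output).

```c
/* Illustrative Sobel edge-detection kernel: one work-item per interior pixel,
 * computing horizontal and vertical gradients and a cheap magnitude. */
__kernel void sobel(__global const uchar *in, __global uchar *out,
                    const int w, const int h)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;

    int gx = -in[(y - 1) * w + (x - 1)] + in[(y - 1) * w + (x + 1)]
             - 2 * in[y * w + (x - 1)]  + 2 * in[y * w + (x + 1)]
             - in[(y + 1) * w + (x - 1)] + in[(y + 1) * w + (x + 1)];
    int gy = -in[(y - 1) * w + (x - 1)] - 2 * in[(y - 1) * w + x] - in[(y - 1) * w + (x + 1)]
             + in[(y + 1) * w + (x - 1)] + 2 * in[(y + 1) * w + x] + in[(y + 1) * w + (x + 1)];

    int mag = (gx < 0 ? -gx : gx) + (gy < 0 ? -gy : gy);   /* |gx| + |gy|     */
    out[y * w + x] = (uchar)(mag > 255 ? 255 : mag);        /* clamp to 8 bits */
}
```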

Future Problems

In the future, emerging workloads and use cases (below) will continue to drive the need for new and different compute. Every company will become a data compute company and must optimize for these new uses; if not, they are open to disruption by those who embrace change more aggressively. FPGAs can be a part of this journey when applied to the right problem. Machine learning inference is a great example; network protocol acceleration/inspection, image processing as shown above, and other workloads can also benefit from the reprogrammable nature of FPGAs.

Summary

So, FPGAs can be really useful and can help solve real-world problems. Ultimately, we are heading down a path of more heterogeneous computing where you will hear “it depends” more than you might like. But, as my Dad says, “use the right tool for the right job.” If you have questions about how to use FPGAs in your solutions, contact your Dell EMC account rep. Maybe we can help you too.

(The data in this blog was made possible by the awesome FPGA team in the Server Solutions Group CTO Office – Duk Kim, Nelson Mak, Krishna Ramaswamy)

 

ENCLOSURE:https://blog.dellemc.com/uploads/2017/08/Server-Room-Neon-LED-1000x500.jpg

Dell EMC News at VMworld 2017


The summer season is winding down here in the States, but the season for IT trade shows is in full swing. VMworld 2017 kicks off Monday in, you guessed it, beautiful Las Vegas. VMworld is one of the industry’s largest events for cloud infrastructure and digital workspace technology professionals, and Dell EMC is there alongside VMware, representing the power of Dell Technologies. Watch this space to stay up to date on all the Dell EMC news from the show!

Stay tuned!

 

Quick Links  

VMworld Agenda

VMworld Blog

Social

@DellEMC

@DellEMCNews

@DellEMCStorage

@DellEMC_CI

ENCLOSURE:https://blog.dellemc.com/uploads/2017/08/Trade-show-Blur-1000x500.jpg

Is That IaaS or Just Really Good Virtualization?


I was recently pulled into a Twitter conversation that spun out of a blog post Paul Galjan wrote concerning Microsoft Azure Stack IaaS and its use cases, in which he correctly pointed out that Azure Stack is not just a VM dispenser. The question was posed as to what exactly constitutes IaaS vs. virtualization (a VM dispenser) and where each fits into the hierarchy of “cloud”.

Let’s start out by clearly defining our terms:

Virtualization in its simplest form is leveraging software to abstract the physical hardware from the guest operating system(s) that run on top of it. Whether we are using VMware, XenServer, Hyper-V, or another hypervisor, from a conceptual standpoint they serve the same function.

In Enterprise use cases, virtualization in and of itself is only part of the solution. Typically, there are significant management and orchestration tools built around virtualized environments. Great examples of these are VMware’s vRealize Suite and Microsoft’s System Center, which allow IT organizations to manage and automate their virtualization environments. But does a hypervisor combined with a robust set of tools an IaaS offering make?

IaaS. Let’s now take a look at what comprises Infrastructure as a Service.  IaaS is one of the service models for cloud computing, and it is just that – a service model.  NIST lays it out as follows:

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).

You may be looking at that definition and asking yourself, “So, based on that, virtualization with a robust set of management tools is capable of delivering just that, right?” Well, sort of, but not really. We must take into account that the service model definition must also jibe with the broadly accepted “Essential Characteristics” of cloud computing.

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

Essentially all of these core tenets of “cloud” are capabilities that must be built out on top of existing virtualization and management techniques.  IaaS (as well as PaaS for that matter) caters to DevOps workflows enabling Agile code development, testing and packaging, along with release and configuration management, without regard for the underlying infrastructure components. This is clearly not the same as virtualization or even “advanced virtualization” (a term I hear thrown around every so often).

So then how does that relate back to the conversation concerning Azure Stack IaaS and VM dispensers? For that we need to look at how it plays out in the data center.

Building a virtualization environment has really become table stakes as a core IT function in the Enterprise space. There are plenty of “DIY” environments in the wild, and increasingly over the past few years we’ve seen platforms in the form of converged and hyper-converged offerings that streamline virtualization efforts into a turnkey appliance for hosting highly efficient virtualized environments – Dell EMC’s VxBlock, VxRack, and VxRail being leaders in this market. That said, they are not “IaaS” in a box – they are in essence, and by no means do I intend this as derogatory, “VM dispensers,” albeit highly advanced ones.

For customers that want to move beyond “advanced virtualization” techniques and truly embrace cloud computing, there are likewise several possibilities. DIY projects are certainly plausible, though they have an extremely high rate of failure – setting up, managing, and maintaining home-grown cloud environments isn’t easy, and the skill sets required aren’t cheap or inconsequential. There are also offerings such as Dell EMC’s Enterprise Hybrid Cloud, which builds on top of our converged and hyper-converged platforms to deliver the core capabilities that make cloud, cloud. For customers that are fundamentally concerned with delivering IaaS services on-premises from a standardized platform managed and supported as a single unit, Enterprise Hybrid Cloud is a robust and capable cloud.

So, what about something like the Dell EMC Cloud for Microsoft Azure Stack?  In this case, it’s all about intended use case. First let’s understand that the core use cases for Azure Stack are PaaS related – the end goal is to deliver Azure PaaS services in an on-prem/hybrid fashion. That said, Azure Stack also has the capability to deliver Azure consistent IaaS services as well.  If your organization has a requirement or desire to deliver Azure based IaaS on-prem with a consistent “infrastructure as code” code base for deployment and management – Stack’s your huckleberry.  What it is not is a replacement for your existing virtualization solutions.  As an example, even in an all-Microsoft scenario there are capabilities and features that a Hyper-V/System Center-based solution can provide in terms of resiliency and performance that Azure Stack doesn’t provide.

In short – virtualization, even really, really good virtualization, isn’t IaaS, and it’s not even cloud. It’s a mechanism for IT consolidation and efficiency. IaaS, on the other hand, builds on top of virtualization technologies and is focused on streamlining DevOps processes for rapid delivery of software and, by proxy, business results.

In scenarios where delivery of IaaS in a hybrid, Azure-consistent fashion is the requirement, Azure Stack is an incredibly transformative IaaS offering.  If Azure consistency is not a requirement, there are other potential solutions in the forms of either virtualization or on-prem based IaaS offerings that may well be a better fit for your organization.

ENCLOSURE:https://blog.dellemc.com/uploads/2017/08/Virtualization.jpg

Earn your Dell EMC VMware Co-Skilled Digital Badge


In 2016, Dell EMC began offering Digital Badges as a way for our global learners to validate their knowledge on Data Center Modernization, IT Transformation, and more. Similarly, VMware adopted Digital Badging to recognize talent through secure, verifiable digital credentials that represent knowledge, skill, achievement, and contribution.

Now, Dell EMC is pleased to partner on VMware’s first Co-Skilled digital badge:

Dell EMC VMware Co-Skilled – VxRail 2017.

This digital badge validates that those who earn it know how to simplify IT operations and extend their VMware environments through a fully optimized hyper-converged Dell EMC VxRail Appliance.

To achieve this Co-Skilled credential, technical professionals are required to have successfully completed Dell EMC VxRail Appliance training and earned VMware Certified Professional 6 – Data Center Virtualization (VCP6-DCV) or VMware Certified Professional 5 – Data Center Virtualization (VCP5-DCV) certification.

This is a great opportunity for you and your colleagues to demonstrate your expanding knowledge and skillset and increase your value to your organization as a trusted advisor.

Earn your Dell EMC VMware Co-Skilled – VxRail 2017 digital badge today.

What is a Digital Badge?

More reliable and secure than a paper-based certificate, this no-cost offer enables you to manage, share and verify your competencies digitally. Additional information about VMware Digital Badges can be found in this VMware blog post.

Where can I find Dell EMC VxRail Appliance training?

Dell EMC Education Services offers courses that will help you gain the skills you need to manage and monitor a VxRail system. Find VxRail courses here.

Dell EMC Education Services at VMworld

The Dell EMC and VMware co-skilled badge will be launched on Aug 28, 2017 at VMworld. If you are going to the event in Las Vegas or in Barcelona, you can join us for a technical overview of the VxRail Appliance and how a hyper-converged infrastructure and VMware vSAN help IT organizations transform their data centers. We will explore system architecture, components, use cases, features, management options and more.

Education Services will be offering certification exams at VMworld and has some attractive offers on exams taken at the event (50% discount) and taken after the event (25% discount).

Stay tuned for more…

Education Services is planning to introduce co-skilled digital badges for VxBlock and more. Stay tuned for details as they become available.


Design Thinking: Future-proof Yourself from AI


It’s all over for us humans. It may not have been “The Matrix”[1], but the machines look like they are finally poised to take our jobs. Machines powered by artificial intelligence and machine learning process data faster, aren’t hindered by stupid human biases, don’t waste time with gossip on social media and don’t demand raises or more days off.

 

Figure 1:  Is Artificial Intelligence Putting Humans Out of Work?

While there is a high probability that machine learning and artificial intelligence will play an important role in whatever job you hold in the future, there is one way to “future-proof” your career…embrace the power of design thinking.

I have written about design thinking before (see the blog “Can Design Thinking Unleash Organizational Innovation?”), but I want to use this blog to provide more specifics about what it is about design thinking that can help you to harness the power of machine learning…instead of machine learning (and The Matrix) harnessing you.

The Power of Design Thinking

Design thinking is defined as human-centric design that builds upon a deep understanding of our users (e.g., their tendencies, propensities, inclinations, behaviors) to generate ideas, build prototypes, share what you’ve made, embrace the art of failure (i.e., fail fast but learn faster) and eventually put your innovative solution out into the world. And fortunately for us humans (who really excel at human-centric things), there is a tight correlation between design thinking and machine learning (see Figure 2).

Figure 2: Integrating Machine Learning and Design Thinking

In fact, integrating design thinking and machine learning can give you “super powers” that future-proof whatever career you decide to pursue. To meld these two disciplines together, one must:

  1. Understand where and how machine learning can impact your business initiatives. While you won’t need to write machine learning algorithms (though I wouldn’t be surprised given the progress in “Tableau-izing” machine learning), business leaders do need to learn how to “Think like a data scientist” in order to understand how machine learning can optimize key operational processes, reduce security and regulatory risks, and uncover new monetization opportunities. See the blog “What tomorrow’s business leaders need to know about Machine Learning?” for more about machine learning.
  2. Understand how design thinking techniques, concepts and tools can create a more compelling and empathetic user experience with “delightful” user engagement through superior insights into your customers’ usage objectives, operating environment and impediments to success.

Let’s jump into the specifics about what business leaders need to know about integrating design thinking and machine learning in order to provide lifetime job security (my career adviser bill will be in the mail)!

Warning:  University of San Francisco MBA students, expect this material on your next test!!

Note: Many of the graphics in this blog come from the most excellent book “Art of Opportunity Toolbox”. The book “The Art of Opportunity: How to Build Growth and Ventures Through Strategic Innovation and Visual Thinking” is a must read for anyone serious about learning more about design thinking.

Step 1:  Empathize and Analyze

The objective of the “Empathize and Analyze” step is to really, and I mean really, understand your users, and to build a sense of empathy for the challenges and constraints that get in their way: Who is my user? What matters to this person? What are they trying to accomplish? What are their impediments to success? What frustrates them today? This step captures what the user is trying to accomplish (i.e., tasks, roles, responsibilities and expectations) versus what they are doing. Walk in your users’ shoes by shadowing them, and where possible, actually become a user of the product or service.

A useful tool for the “Empathize and Analyze” step is the Persona. A Persona is a template for capturing key user operational and usage requirements including the job to be done, barriers to consumption and hurdles to user satisfaction (see Figure 3).

Figure 3:  Persona Template

The Persona template in Figure 3 is one that we use in our Vision Workshop engagements to capture the decisions that the key business stakeholders are trying to make – and the associated pain points – in support of the organization’s key business initiatives.

How does the “Empathize and Analyze” step apply to Design Thinking and Machine Learning?

  • Design Thinking – understand and capture the user’s task objectives, operational requirements and impediments to success; learn as much as possible about the users for whom you are designing.
  • Machine Learning – Capture and prioritize the user’s key decisions; capture the variables and metrics that might be better predictors of those decisions.

Step 2:  Define and Synthesize

The “Define and Synthesize” step starts to assemble an initial Point of View (POV) regarding the user’s needs: What capabilities are the user going to need? In what type of environment will the user be working? What is likely to impede the execution of their job?  What is likely to hinder adoption?

Sharing your POV and getting feedback from key constituencies is critical to ensuring that you have properly defined and synthesized the requirements and potential impediments.  Use the Opportunity Report to document the story, gather feedback and input from your constituencies, and refine your thinking regarding the solution (see Figure 4).

Figure 4:  Opportunity Report

How does the “Define and Synthesize” step apply to Design Thinking and Machine Learning?

  • Design Thinking – define, document and validate your understanding of the user’s task objectives, operational requirements and potential impediments. Don’t be afraid of being wrong.
  • Machine Learning – synthesize your understanding of the decisions (e.g., latency, granularity, frequency, governance, sequencing) in order to flesh out the potential variables and metrics, and assess potential analytic algorithms and approaches.

Step 3:  Ideate and uh… Ideate

The “Ideate and Ideate” step is all about… ideate! This is a chance to gather all impacted parties, stakeholders and other key constituents and leverage facilitation techniques to brainstorm as many creative solutions to the users’ needs and impediments as possible.  Exploit group brainstorming techniques to ideate, validate and prioritize the usage and operational requirements, document those requirements in a Strategy Report and identify supporting operational and performance metrics (see Figure 5).

Figure 5:  Strategy Map

It is useful to make use of storyboards to refine your thinking in Step 3. A storyboard is a graphic rendition and sequencing of the usage of the solution in the form of illustrations displayed as a story (see Figure 6).

Figure 6:  User Experience Storyboard Example http://web.mit.edu/2.744/www/Project/Assignments/userExperienceDesign.html

Storyboards are an effective and efficient way to communicate a potential experience and approach for your users to review and provide feedback. Storyboarding can provide invaluable insights into usage behaviors and potential impediments without writing any code (said as if coding is something evil)!

While it would be nice to be an accomplished sketcher, even rough sketches can be an invaluable – and fast – way to gather feedback on the user experience and product design (see Figure 7).

Figure 7:  Power of Sketching

How does the “Ideate and Ideate” step apply to Design Thinking and Machine Learning?

  • Design Thinking – brainstorm as many potential solutions as possible. Diverge in your brainstorming (“all ideas are worthy of consideration”) before you converge (prioritize the best ideas based upon potential business and customer value and implementation feasibility).
  • Machine Learning – start piloting potential analytic models and algorithms with small sample data sets to see what types of insights and relationships are buried in the data. Capture and refine the hypotheses that you want to test (a small illustrative sketch follows this list).
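As a rough illustration of what piloting an analytic model on a small sample data set might look like, here is a minimal sketch using scikit-learn on synthetic data. The candidate variables, the “repair vs. replace” style decision and the model choice are assumptions for demonstration only, not a prescribed approach.

```python
# Minimal pilot: test whether a handful of candidate variables predict a key decision.
# Synthetic data stands in for a small sample extract; everything here is illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 200  # deliberately small sample, as in an early pilot

# Hypothetical candidate predictors for a "repair vs. replace" style decision
age_years   = rng.uniform(0, 15, n)
usage_hours = rng.uniform(0, 5000, n)
fault_count = rng.poisson(2, n)
X = np.column_stack([age_years, usage_hours, fault_count])

# Synthetic label: older, heavily used, fault-prone assets tend to get replaced
y = (0.2 * age_years + 0.001 * usage_hours + fault_count + rng.normal(0, 1, n) > 7).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# A quick look at which candidate variables carry signal
model.fit(X, y)
for name, importance in zip(["age_years", "usage_hours", "fault_count"], model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```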

Step 4:  Prototype and Tune

The “Prototype and Tune” step starts to build the product and supporting analytics. Start to model your ideas so that you can validate usage and navigational effectiveness, and identify the metrics against which that effectiveness will be measured.  Wireframes and mockups are useful tools that can be used to validate product usage and navigation effectiveness (see Figure 8).

Figure 8:  Interactive Mockups

How does the “Prototype and Tune” step apply to Design Thinking and Machine Learning?

  • Design Thinking – create one or more interactive mockups with which your key constituents can “play”. Study users’ interactions with the mockups to see what works and where they struggle.  Identify what additional design guides and/or analytics insights could be provided to improve the user experience.
  • Machine Learning – Identify where analytic insights or recommendations are needed – and what additional data can be captured – as the users “play” with the mockups. Explore opportunities to deliver real-time actionable insights to help “guide” the user experience.  Fail fast, but learn faster!  Embrace the “Art of Failure.”

Step 5:  Test and Validate

The “Test and Validate” step seeks to operationalize both the design and the analytics. But step 5 is also the start of the continuous improvement process from the user experience and analytic model tuning perspectives. Instrumenting or tagging the product or solution becomes critical so that one can constantly monitor its usage: What features get used the most? What paths are the most common? Are there usage patterns that indicate that users are confused? Are there usage paths from which users “eject” and never return?

Step 5 is also where product usage and decision effectiveness metrics can be used to monitor and ultimately improve the user experience. Web Analytics packages (like Google Analytics in Figure 9) provide an excellent example of the type of metrics that one could capture in order to monitor the usage of the product or solution.

Figure 9:  Google Web Analytics

Web analytics metrics like New Visits, Bounce Rate and Time on Site are very relevant if one is trying to measure and improve the usage and navigational effectiveness of the product or solution.
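To make the measurement side concrete, here is a small, self-contained sketch that computes two of the metrics mentioned above – bounce rate and average time on site – from a handful of made-up session records. It illustrates the arithmetic only and is not a Google Analytics integration.

```python
# Illustrative session records: (pages_viewed, seconds_on_site). Data is made up.
sessions = [
    (1, 12),   # single-page visit -> counts as a bounce
    (5, 340),
    (2, 95),
    (1, 8),
    (7, 610),
]

total = len(sessions)
bounces = sum(1 for pages, _ in sessions if pages == 1)
bounce_rate = bounces / total
avg_time_on_site = sum(seconds for _, seconds in sessions) / total

print(f"Bounce rate: {bounce_rate:.0%}")                         # 40% for this sample
print(f"Average time on site: {avg_time_on_site:.0f} seconds")   # 213 seconds for this sample
```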

How does the “Test and Validate” step apply to Design Thinking and Machine Learning?

  • Design Thinking – monitor usage and navigational metrics to determine the effectiveness of the product or solution. Create a continuous improvement environment where usage and performance feedback can be acted upon quickly to continuously improve the product’s design.
  • Machine Learning – exploit the role of “Recommendations” to improve or guide the user experience. Leverage the “wisdom of crowds” to continuously fine-tune and re-tune the supporting analytic models’ predictive and prescriptive effectiveness.

Design Thinking + Machine Learning = Game Changing Potential

I know that I am probably preaching to the choir here, but I am advising my students and my own kids about the power of integrating Design Thinking and Machine Learning. As an example, my son Max is creating the “Strong by Science” brand by integrating the disciplines of Kinesiology and Data Analytics. Heck, he’s even written his first book on the topic (which is probably one more book than he actually read during his entire high school career). Check out “Applied Principles of Optimal Power Development.”

But it’s also not too late for us old codgers to embrace the power of integrating design thinking with machine learning. If you don’t, well, then enjoy being a human battery powering “The Matrix”[1]…

 

[1] Sentient machines created “The Matrix” to subdue the human population and use the humans as a source of energy.

The post Design Thinking: Future-proof Yourself from AI appeared first on InFocus Blog | Dell EMC Services.

VMworld 2017: A Meta thought re the nature of DIY vs. Consume.


As I got ready for this week at VMworld – something hit me.

Virtual Geek readers know what that means.   A long, meandering stream of consciousness blog post.  Interesting for some, mind-numbingly strange for others.

For those that don’t like the sound of that, STOP.  For those that do, grab a bag of popcorn, a glass of wine/beer/water (your choice), make a mental commitment to give me 30 minutes, and read on!

My subconscious is always working furiously looking for patterns, and then building analogies and stories that tell the story of the pattern, and make it real.  

I’m lucky to be in a role where I see a LOT.   A LOT of customers.  A LOT of partners.   A LOT of engineering teams and roadmaps.   It’s all fodder for my pattern recognition engine :-)

I’ve also realized there is a bit of a theme this VMworld – at least for the stuff I’m working on, and the customers I see – a “meta” that ties things together.

It’s the forces of “DIY” vs. “Consumption” approaches to IT.   

I’ve talked about it as a “build to buy” continuum – but I hate to say it – there’s a flaw in the picture, an error in my earlier taxonomy. 

The error is that there’s a “builder” and a “consumer” approach – at every layer in the stack.

Let me start with a story.

In early August, I got a call from the UK Dell EMC and VMware teams to jump on a call with a customer.   They were on the verge of making a big “DIY” vs. “Consume” decision.  It would be for a radical simplification of their 3 data centers, and represented 126 total VxRail appliances.   Every customer is precious, and for this customer this was a massive decision.   However, the solution was about a 30% premium vs. picking Dell EMC PowerEdge servers from the VMware vSAN Ready node HCL – and the customer was struggling with the decision.

This is pretty common.  It’s the DIY vs. Consume trade-off.   It’s also something humans that tend to be “builders” or humans focused on their piece of the stack (examples I tend to see this with a lot: VMware vSAN specialists, Dell EMC PowerEdge specialists) scratch their heads over.

We jump on the call.   I ask the customer how their testing with vSAN and PowerEdge had been going.  Answer: NOT well.  For 6 months they had been working to test, validate, work through various small issues, tweaking and tuning.   I asked them how their VxRail testing had been going.  Answer: well so far.   I asked them what their priorities were.  Answer: speed, outcome, and – since this was for their core 2 datacenters in a stretched configuration plus a 3rd DR site – a single throat to choke, one accountable party.   I asked them about the people doing the DIY testing – were they stupid, their retreads?  Or were they some of the smartest people at the customer and in the local VMware and Dell EMC teams?  Answer: some of their best.    BTW – this is almost always the case.   The obvious question – putting aside all the value-added elements like built-in DR (every VxRail includes $35K worth of Data Protection), cloud storage, and integrated lifecycle management – was what value they assign to 6 months of work by their smartest people, resulting in still no outcome.  Silence, as they thought about it for the first time.

This is the crux of the DIY vs. Consume choice.  

Some people simply refuse to put a value on that integration and testing work.

Most vSphere customers have become accustomed to taking the 6 month period between a major release (6.5) and the subsequent u1 release (6.5u1) and using that time to have some of their smartest people select hardware, build test clusters, integrate with their Data Protection, DR, and other automation tooling.   The people doing the work hate it, but secretly kind of love it – because they are DIY people at heart.

LOOK, PERSONALLY, I GET IT.

Like most people in IT – I played with a lot of Lego growing up.   I love learning, tinkering.   My idea of fun on Christmas morning is getting a box of hardware and building my own home systems and labs.   I end up on the receiving end of emails like this:


For people like me – there’s an inherent resistance to the idea of no longer getting to tinker.   But – there’s a freedom in passing that burden on to others so we can pursue places of new learning, new value (hint, hint, it’s up the stack silly!)

It’s funny – the people who love to tinker say “but we’re doing it now, and we have the skill”.  Yes, but the question is not “can you do it?”, but “should you do it?”.

That particular story ended up with them moving forward with VxRail. 

They know that the buck stops with the VxRail team – not only to help them get going fast, but to do it over the life of their project and their 6-year budget period, which will include multiple updates of the software and hardware stacks.   At each point for the full duration, they won’t be responsible for all the testing, integration work and support – we will.

Now did they make that choice because vSAN is bad?  Or PowerEdge is bad?  NO.  

Did they make that choice because the vSAN Ready Node program – which is no more than a refinement of the VMware vSphere HCL program to certify servers – didn’t work?   In fact, I think Dell EMC and VMware have the best vSAN Ready Nodes – and the market is speaking, with us doing FAR more VxRail AND vSAN Ready Nodes than the competition.   That’s not me guessing.  I know.

Nope – it’s none of those things that made the choice for them clear.  They just got to mental clarity: what was killing them was all the stuff beyond the basics.  

A lot of customers happily choose the DIY route.   In fact, about 2/3 of the VMware vSAN customers choose the DIY route, and about 1/3 choose the turnkey system route. 

When you choose DIY – you’re wasting your time comparing the ingredients to the Consume choices – because often they are the SAME.

In the case of VxRail vs a vSAN Ready Node – there are multiple things that are not the same:

  • There’s the value of integrated data protection ($35K of value per appliance).  
  • There’s the value of integrated hardware/software reporting/dial-home. 
  • There’s the value of integrated cloud storage and NAS ($10K of value per appliance). 
  • There’s the value of the 200+ steps that VxRail automates every time you add/change/remove/update nodes – any step in which you can screw yourself up. 

All that said, the main difference in DIY vs. Consume is a shift in where you place value.   When you choose Consume vs. DIY at any level of the stack, you’re choosing simplicity over flexibility, and you’re choosing to elevate and shift skills higher up in the stack.

… And remember – as I needed to remind that customer – it’s not a “one-time” thing.   When you choose the DIY route, you’re actively choosing to keep configuring, testing, validating, integrating, and supporting said integration FOREVER.

We’ve quantified this value for VxRail – it’s around a 30% improvement vs. DIY.   Read this:


At the strategic level, the most senior leaders of VMware and Dell EMC agree that we must support all the ways customers will deploy – vSAN, vSAN Ready Nodes and VCF for DIY customers, or VxRail/VxRack SDDC for customers who want to consume.

But we also agree – there is NO EASIER WAY than the simple “Consume” path of VxRail and VxRack SDDC.   BTW – for Dell EMC readers, there’s a short training video on this here internally.

Today – the realization that hit me – this is true at EVERY LEVEL of the stack, and there’s a pattern in this week’s announcements from Dell EMC at VMworld that reflects this.   Here’s a generalized visualization:


Here’s a mapping of how it applies using the VMware and Dell EMC offers at every level:


Note that while I’ve put labels beside these that are (in the spirit of the week) VMware and Dell EMC – it’s notable that you can put all sorts of other examples on there.   Azure Stack is an example of a “Consume your IaaS/PaaS”.   Clearly SaaS examples like SFDC and others – while we may be powering them behind the scenes – our participation is invisible to you as the customer.

Always remember – I do not place a value judgment on this; both paths are valid, and the market has a lot of both.

BUT – I have an opinion.  Every customer should carefully consider the burdens/optimization or “prioritization” at every level.  In my experience – customers who are not “moving resources/time/money” up the stack are wasting resources/time/money.

This is a valid path for the ultimate DIY customer – pick your infrastructure by hand, build/test/validate/lifecycle the whole stack.  Use VMware Validated Designs (VVDs) to reduce the risk as much as you can.   Pick your own PaaS, CaaS – and enjoy putting it all together, and then maintaining it.  Forever.   Whether you call that person a “purist” or a “masochist” is a matter of opinion :-)


For the customer who chooses simplicity, skills transformation, and opex reduction at the virtual pools of SDS/SDC (in the case of VxRail) and SDN (in the case of VxRack SDDC and VMware Cloud on AWS), but still wants flexibility, focus on existing skills and optionality, they would follow this path:


They would pick VxRail, VxRack, VxBlock or VMware Cloud on AWS – which from that point on isn’t their problem.  They would then focus on either using VVDs to reduce risk, or the VMware Ready System offers from Dell EMC and VMware – which narrow the scope even further to VxRack SDDC only, using the vRealize VVD and some further automation.

There are a ton of valid variations of this.   Some customer choose to hand over the keys at the IaaS layer (that’s what EHC exists for). 


I want to make this crazy real for you.   If you want to understand just SOME (!!) of the decisions you need to make if you DIY – decisions that the Consume path takes care of for you – you HAVE to read this blog post.


Ok – are you back?   Now, think of the work that goes into not only the design, but then the automated install and upgrade of that whole stack, and ultimately the single point of support.    It’s a huge amount of work that a customer can do (guided by a VVD) or have someone else do (EHC).

Some customers choose to hand over the keys at the PaaS level (that’s what NHC is for).   Some choose to skip all the layers and just use SaaS.  Some choose to skip some layers and just consume Public Cloud PaaS/CaaS/IaaS.   Of course – the common path is a blend of many of these.

There’s one set of paths that are NOT allowed.


You cannot build a “consume path” on TOP of a “build path” – so once you cross from purple to blue – from then on up, it’s your problem, your stack, and you own the lifecycle management, integration/test, and support from that point on.

There’s also ONE BIG IMPORTANT thing to consider as we take the “big picture” view here, and also think over time.  

The whole ecosystem (vendors, customers, partners, you name it) recognizes this pattern.   Over time, we are all shifting to more and more purple.   We’re all working to make the lifecycle management, integration/test of the layers get simpler, get more integrated.   Want examples?

  • Look at how VUM in vSAN 6.6.1 can get firmware updates and apply them.   Simpler!  Who owns the problem?  It’s still you, but you can see how it’s making things easier for the DIY customer.
  • You can see what VMware is doing around lifecycle management for VMware Cloud Foundation eventually extending up to vRealize.  If it gets to a certain point, and someone takes on the full burden of lifecycle management, integration/test and support – well then the need for an EHC is reduced, and the VMware Ready System (currently a VVD on VxRack SDDC) changes color and becomes purple.    Our GOAL, our STRATEGY is to take VxRack SDDC – and ultimately make it turnkey all the way up to the IaaS as VMware works on lifecycle management of their IaaS stack.  It’s not there yet, but that’s the goal.
  • The effort around PCF + Concourse, and Kubo (more on this later this week) are the basis of Pivotal Ready Systems (those stacks on VxRack SDDC).   As we keep refining that, improving lifecycle management, ultimately it could cover the Native Hybrid Cloud use cases.  Our GOAL, our STRATEGY is to take VxRack SDDC – and ultimately make it turnkey all the way up to the PaaS/CaaS layer as VMware and Pivotal work on PCF and Kubo.  It’s not there yet, but that’s the goal.

But each of those examples is a journey, not an event.   It’s the never-ending shift of value further and further up the stack.

Perhaps that’s the most important “meta” of them all – for us tinkerers – what we need to embrace is that as stuff gets more turnkey, more “purple” if we don’t let go and hand over that responsibility to others, it means we’re holding ourselves back.

VMworld 2017: VxRail, the best HCI Appliance for VMware gets even better.


In October 2015, VMware and EMC (this was pre Dell/EMC merger) sat down and said “we have got to build the absolute best HCI Appliance for VMware – bar none”.   It was the moment where the VxRail program was born.

In February 2016, the first release of VxRail happened.  Here was that milestone: VxRail – An incredible product at an incredible time.

February 2017 showed that a year later, we were on an incredible trajectory – the formula was working: Happy Birthday VxRail – What a “year one”!

The plan worked.  The Dell EMC merger has been a huge accelerant as we can leverage the technology and strength of PowerEdge as an HCI platform, along with global reach.

Everyone talks a good game, but numbers (and customers) speak.   VxRail is the fastest-growing product in the fastest-growing segment of on-premises infrastructure.   VxRail grew 243% in Q2, and did it on BIG numbers.   We exited 2016 at a $400M run rate.   This is going to be a billion-dollar business before we blink.

Every week I also get an update on availability stats, customer happiness and ship times – and at this point, VxRail is amongst the best in the Dell EMC portfolio – a high watermark.   We can always get better – but the stats are great.

A huge “thank you” to the VMware/Dell EMC team behind VxRail – thank you, you folks are awesome!  

And… an even bigger “thank you” to our customers and partners who have put their faith in us.  We know we can always get better, but work with passion every day to not let you down!

The root of VxRail’s success is simple.   VxRail is designed to be unabashedly 100% VMware aligned.

Along with VxRack SDDC – VxRail is THE best HCI for customers who have standardized on VMware.

  • Part of this has been total roadmap alignment.  The vSAN roadmap is part of the vSphere roadmap.   Both are an integral part of the VxRail roadmap.
  • Part of this has been “one team, one purpose”.    We merged the VMware and Dell EMC teams together with a maniacal focus. Put simply – “be the easiest easy button for customers standardized on VMware”.
  • Part of this has been the incredible strength of vSAN for VMware customers.  vSAN 6.2 crossed a critical capability threshold (more on this later).
  • Part of this has been an intense focus on the full system from a design, engineering, and support standpoint.   Midway through 2017 we were firing on all cylinders re: config/quoting/supply chain.   Time for “order to complete and running” got down to weeks.   Together with VMware we instituted dedicated joint support personnel.   We went through a full Field Change Order (FCO).   This is an industrial process to ensure all customers get a critical fix (upgrading to 4.0.1.200 or later).

The team is laser focused.   Be the “easy button” for customers who have standardized on VMware.  

Yes, there are some customers who are ready for network transformation with NSX.  For them, VMware Cloud Foundation and VxRack SDDC are the answer.  

There are about 1000x more customers that are not quite there yet – but are ready to make their next server refresh their last SAN refresh.  They are among the 500,000+ VMware customers ready to transform their storage by embracing SDS and move to hyper-converged infrastructure appliances.  

For them, VxRail has nailed the formula.


So – what’s the detail on the news today?

Answer: The best HCI Appliance for VMware gets even better.   VxRail is bigger, and more awesome than ever.

Now that we’ve gotten to a volume cadence of VxRail, VMware and Dell EMC have aligned VxRail releases on u1 milestones going forward. 

This means that yes, there will always be a little time-lag between a vSphere/vSAN release and the corresponding VxRail release.

If you want the latest software, and take the burden of testing/validation yourself – go for it. 

Conversely, if you want the easy button – go VxRail.   We have hundreds of people that do this all day, every day.  

Frankly, I’ve found that customers actually don’t lose time, they gain it.  Customers tend to deploy on u1 milestones anyway – but spend months testing, validating, integrating with their systems management, backup, data protection.  

When they go the HCI Appliance/System route – they save time/money – they don’t need to have test clusters, and they don’t waste their brightest brains on low-level tasks that frankly they shouldn’t do.


People are figuring out where they sit – starting to really internalize – do they REALLY want to DIY HCI (vSphere/vSAN + HCL Server hardware – and own all the glue and maintenance and many other things needed for real use) or do they want a turnkey HCI system.   There are very, VERY real differences.   There’s a post here that provides the stark contrast between DIY and a system approach – and I note that the observations are applicable at a couple levels – infrastructure, the IaaS, and also at the PaaS/CaaS layers.

Think of the following: we now have 2 major software release trains in flight – and customers will be on both the 4.0 and 4.5 release trains – for a multi-year period.


With 5 different core platform variations.  There is a critical level of “minimal configuration variability” needed for an offer to fly, and we’re there on VxRail (and we will get VxRack SDDC to the same point in the coming months).


And of course, unlike software-only DIY approaches, we have to work through global availability, supply chain, sparing/depots, heck, local-language support with the combined VxRail support team from VMware/Dell EMC.


Think about that.   The difference between the DIY world and the Appliance/System world is pretty stark. 

If you DIY – you know whose responsibility it is to make sure that all works together, is supported together – all around the world?  Yours.  

If you go the HCI appliance/system approach – you know whose responsibility that is?  Ours.

If you think DIY somehow helps your business – go for it.   vSAN 6.6 helps DIY customers in many ways with VUM integration with firmware, the ability to blink lights on hardware.   That’s the tip of the iceberg.   

Look deep in your heart.  Does DIY really help you?    REALLY?    If you’re wising up to the fact that you have more important things to do – go the appliance/system route.   I’ve been spending a lot of time on this – and we’ve quantified the advantage.  It’s north of a 30% TCO advantage for the customer.   If you want to reflect on whether DIY or Consumption is right for you, read this.

There’s a lot more in VxRail 4.5 beyond synchronization with vSphere 6.5u1 – most of which are not an option in the DIY universe because they have to do with system-level manageability.   Let’s take a look:

  • Manageability at scale (critical as the customer base gets bigger and customer deployments get bigger):
    • RESTful API for lifecycle management (a hypothetical sketch of what this can look like follows this list)
    • Dell OpenManage Essentials support
    • Batch mode cluster expansion
    • Migrate from internal to Enterprise ESRS gateway
    • VxRail Manager support for web proxy
  • File Services – and you can see us doing some interesting things here….
    • Unity Virtual Storage Appliance
    • Isilon SD Edge
  • Start even smaller: SATA SSD capacity drives for a lower all-flash entry point
  • Simpler open networking ordering: new Connectrix D-Series (Dell S4048-ON) switch
  • More networking connectivity options: Intel X710 (2-port 10GbE and 4-port 10GbE)
  • And… since an appliance always looks at the hardware and software as one… LCM: BIOS and HBA330 firmware and driver updates
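To give a feel for what driving lifecycle operations through a REST API can look like, here is a deliberately generic Python sketch. The endpoint paths, payload fields, host name and credentials are hypothetical placeholders for illustration and are not the documented VxRail Manager API.

```python
# Hypothetical example only: illustrates the general shape of automating a node
# expansion over a REST API. Endpoints, fields and credential handling are
# placeholders, not the actual VxRail Manager API.
import requests

VXRAIL_MANAGER = "https://vxrail-manager.example.local"  # placeholder address
AUTH = ("admin", "changeme")                             # use real credential handling in practice

new_nodes = [
    {"serial_number": "EXAMPLE-SN-001", "hostname": "esxi-07.example.local"},
    {"serial_number": "EXAMPLE-SN-002", "hostname": "esxi-08.example.local"},
]

# Kick off a batch expansion job (hypothetical endpoint and payload)
resp = requests.post(
    f"{VXRAIL_MANAGER}/rest/v1/cluster/expansion",
    json={"nodes": new_nodes},
    auth=AUTH,
    verify=False,   # lab-style shortcut; validate certificates in production
    timeout=30,
)
resp.raise_for_status()
job_id = resp.json().get("request_id")

# Poll the job status (hypothetical endpoint)
status = requests.get(
    f"{VXRAIL_MANAGER}/rest/v1/requests/{job_id}", auth=AUTH, verify=False, timeout=30
).json()
print("Expansion job state:", status.get("state"))
```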

For EVERY HCI – the SDS layer is one of the defining elements.  It defines not only VxRail, but VxRail competitors.  Now, I want to be very, very clear – the SDS is not the only element (the management plane and the paths to IaaS and PaaS/CaaS stacks are also super important).   This is an important fact that sometimes the SDS teams themselves forget.  But let there be no confusion – when it comes to HCI, the SDS is central to:

    • resilience – if something isn’t resilient, it’s flat out dangerous.
    • performance – if something is resilient and therefore safe… but doesn’t perform well, it’s still useless.
    • data services – if something is resilient and performant, it’s safe, and useful – but if it doesn’t support core data services, it’s not competitive and compelling in important use cases.

So – the fact that VxRail is powered by vSAN is a super-power, particularly as vSAN gets stronger and stronger.   With vSAN 6.2 and vSphere 6.0u3 we crossed a critical threshold of resilience, performance and data services, and both vSAN and VxRail deployments accelerated enormously.   At that point (mid-year 2017) the use cases that could not be supported with vSAN shrunk to a rounding error.

With vSAN 6.6 (and specifically vSphere 6.5u1 and vSAN 6.6.1) – that rounding error of use cases that aren’t a fit is shrinking to near zero.


There is a ton packed in there.  Yes, all the usual stuff about performance, resilience, and data services – but native data-at-rest encryption is fundamental, and things like stretched clusters with simultaneous local protection are really cool.

I’d highly recommend reading up on vSAN 6.6 here:

    https://cormachogan.com/2017/04/11/whats-new-vsan-6-6/

    http://www.yellow-bricks.com/2017/04/11/whats-new-vsan-6-6/

    https://cormachogan.com/2017/08/03/nice-new-features-vsan-6-6-1/

    http://www.yellow-bricks.com/2017/07/28/vsphere-6-5-u1-comes-vsan-6-6-1/

    Small things matter.  Speaking as the “where the buck stops” role at Dell EMC for all converged platforms (the “buck stops” on VxRail and VxRack SDDC at a great team led by Gil Shneorson who reports to me), there is a small change that is in effect a huge change.   It’s not a feature per se, but will help a lot of customers.   Prior to vSAN 6.6 vSAN required multicast networking to be configured.   Now, unicast is just fine.   Why is this important?   The largest root cause of deployment issues with VxRail was network configuration.   We built in checks, network prep tools – but in the end, the customer needed to configure their network switches for multicast.   No more – a huge simplification!

    The response to VxRail as a product has been great – and the response to our utility economic model has been a big accelerant.  It’s simple.   Pay by appliance, by month.  Price drops every year – up to 30% per year.  Return any/all appliances as your needs change after 1 year, with no penalty.  How awesome is that.


    So, what’s next for VxRail?  A lot.   Here’s what we’re working on next as a joint VMware/Dell EMC team:

    • NVMe.
    • Next-gen PCIe.
    • NVDIMM support.
    • New high-speed networking options.
    • Continued improvements on manageability.
    • Taking the file services we support TODAY (Isilon, Unity) and making them more integrated.
    • HCI isn’t really about HCI – it’s about building blocks for simpler clouds – VxRail plays a critical role in our Cloud offerings from VMware, Pivotal and Dell EMC – and we are way out ahead of our competition here today, but we know we have a long, LONG way to go to be everything we can be.  On. The. Case.
    • Next-generation Data Protection for VMware – there’s a post with amazing new capabilities available now, and also hinting at the direction here.
    • … and something I think that is going to be VERY important.   A simple, easy path for customers to start with VxRail, and move into VxRack SDDC as they get ready to transform their network and adopt NSX.

    Most importantly, I want to end this post the way I started.  A huge “thank you” to the VMware/Dell EMC team behind VxRail – thank you, you folks are awesome!   And… an even bigger “thank you” to our customers and partners who have put their faith in us.  We know we can always get better, but work with passion every day to not let you down!

    Are you a VxRail customer?  How is it going?  What can we do better?

    PowerEdge and VMware: HCI Innovation for the New Data Center


    The data center is evolving at a rapid pace. As architectures adapt and grow in complexity to take advantage of new applications and technologies, new challenges always creep in.

    “How do I adapt to unknown demands?”

    “How do I manage all of this complexity?”

    “Will my storage performance keep up?”

    A natural progression towards a more server-centric architecture, along with the increasing adoption of Hyper-Converged Infrastructure (HCI) is addressing these core challenges. To further drive this adoption, Dell EMC has designed HCI-focused tools that integrate into your existing data center and keep it moving forward.

    Our HCI architectures easily scale with the addition of nodes, while compute and storage virtualization offers more resiliency in case of a failure. For management simplicity, the OpenManage vCenter plug-in integrates server management into your vCenter console.

The rise of flash significantly changes storage performance. I/O challenges are no longer solved with an increase in the number of spindles; now you solve them with flash. For example, a traditional HDD provides 400 4K IOPS, while a PCIe NVMe SSD provides 750,000 4K IOPS.
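The scale of that gap is easier to appreciate with a quick back-of-the-envelope calculation using the figures quoted above:

```python
# Rough spindle-equivalence math using the IOPS figures quoted above.
hdd_iops = 400        # 4K IOPS from a traditional HDD
nvme_iops = 750_000   # 4K IOPS from a PCIe NVMe SSD

spindles_to_match_one_nvme = nvme_iops / hdd_iops
print(f"HDDs needed to match one NVMe SSD on 4K IOPS: {spindles_to_match_one_nvme:,.0f}")
# -> roughly 1,875 spindles to match the IOPS of a single drive
```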

    With the launch of the new generation of PowerEdge servers and vSAN Ready Nodes, Dell EMC and VMware combine to create a scalable infrastructure that delivers intelligent automation and a secure foundation with no compromises.

With VMware vSAN you get the industry-leading software powering HCI solutions, and with Dell EMC PowerEdge you get the world’s best-selling server. This combination of VMware vSAN with PowerEdge provides up to 12X more IOPS in a vSAN cluster and 98% less latency, as well as a fully integrated management solution with vSAN Ready Nodes.

vSAN Ready Nodes from Dell EMC are pre-configured and easily orderable through templates optimized by profiles and use cases. A new factory installation process utilizes Boot Optimized Storage Solution (BOSS) cards, installing the hypervisor on a robust bootable device without having to sacrifice drives from your vSAN storage capacity.

    The Dell EMC vSAN Ready Nodes are pre-configured and validated building blocks that offer the most flexibility while reducing deployment risks, improving storage efficiency, and allowing you to scale at the speed your business desires.

    By choosing a Dell EMC PowerEdge vSAN Ready Node with OEM licensing, and our latest vSAN Pro Deployment services, you get a single point of contact for sales and support for the entire solution.

    For more information on Dell EMC vSAN Ready Nodes, visit: dell.com/en-us/work/shop/povw/virtual-san-ready-nodes



    Transformational Data Protection for Virtualized Mission Critical Applications


    There is a wave of change sweeping across IT infrastructure. With the shift to cloud and SDDC, data protection must transform.

    The journey to transformation begins with modernizing the current environment, which focuses on delivering IT with fewer people, less time, and less money.  The key is simplification.

Things really begin to change when backup stops being a separate product, managed by a separate team, run in its own silo. Instead, data protection becomes a feature – an integrated part of the environment. In this new world, applications are deployed already protected. Backups and recoveries should be faster and streamlined, with a minimal hardware footprint and a lower impact on the production environment.

    What does a transformed environment look like?  Data Protection becomes automatic.

The goal here is to speed up backups and recoveries, reduce duplication of costly backup tasks, and empower the application and database owners who are directly responsible for the data with self-service data protection – with guardrails. And to accomplish this within IT control, with central visibility and oversight of backups by IT administrators, while also solving the difficult data protection problem of large, virtualized, high-change-rate mission-critical apps.

    Express Data Path to Protect Virtualized Mission-Critical Applications

With the increasing prevalence of very large (15 TB+), fast-changing, virtualized mission-critical databases, traditional data protection approaches struggle to keep up.  Application and data owners require speed and control to meet stringent Service Level Objectives (SLOs) for mission-critical applications. By decoupling backup software from the data path, administrators can back up directly to protection storage, gaining up to 5x faster backups compared to traditional backup solutions.

    With the updated Dell EMC Data Protection Suite for Applications, application owners are empowered to use native application interfaces to perform backups from the VMware hypervisor directly to Dell EMC Data Domain.  Additionally, admins gain discovery, automation and provisioning for the entire data center from storage, applications and VMs.  Impact on application servers during backup windows is significantly reduced, as little or no data flows through the application server.

    Empower Administrators With Self-Service and Automation With IT Governance and Control

Modern data protection designed for self-service requires intelligent, consolidated oversight of data and service levels across the business to ensure protection SLO compliance is met.  Data Protection Suite for Applications provides self-service data protection capabilities with global oversight to maximize efficiency, streamline operations and ensure consistent service level compliance.  With the ability to discover physical and virtual SQL and Oracle databases, as well as VMware copies, administrators receive faster time to value via automated VM discovery and provisioning, as well as the elimination of duplicate work streams and of unnecessary storage silos, since data is sent directly to Data Domain.

    Data Protection Suite for Applications non-disruptively discovers copies across the enterprise and automates protection SLO compliance.  Existing data copies are non-disruptively discovered to gain consolidated oversight of what already exists in the environment.  This also allows admins to maintain self-service by enabling storage administrators and DBAs to continue creating copies from their native interfaces instead of inserting a solution into the data path, and fully within the governance, SLOs and oversight of their IT data protection regime.

Empowering application owners with the ability to use native tools within their applications provides the control they desire, including the ability to set, monitor and enforce SLOs.  And, by enabling cooperation and coordination between protection administrators and data owners, administrative overhead is reduced.

    Summary

    Architected for the modern and software driven data center, Dell EMC data protection provides automation across the entire data protection stack, delivers simpler scalability and faster performance, and protection for a broader scope of VMware workloads, including workloads in the cloud and mission-critical I/O intensive applications.


    Dell EMC Delivers Superior VMware Data Protection Designed For the Modern, Software-Defined Data Center


    It’s Day One of VMworld and Dell EMC is excited to discuss our powerful data protection solutions for VMware environments.  VMware has been a driving force behind the transition to the modern, increasingly software-defined data center (SDDC) and the movement to the cloud. Dell EMC ensures businesses are able to protect their data throughout this transition by enabling simplified management, easier scalability, multi-faceted support for the cloud, and comprehensive protection for a wide gamut of virtualized workloads, including mission-critical, IO intensive applications and databases. This is accomplished through our native vSphere integration and software defined architecture, which enables extensive automation and high performance.

    Protecting VM workloads is complex and presents challenges that are not easily met by data protection solutions built for legacy SAN based architectures. These challenges include: VM sprawl as a result of the growth of virtualization and the ease of spinning up new VMs, increasingly stringent protection requirements due to government regulations and more critical applications moving to VMware, and shrinking backup windows.

Other data protection solutions are not architected for the SDDC and cannot adequately deal with these challenges. They cannot scale efficiently as the number of VMs goes up, can only protect a subset of applications, have inflexible networking requirements leading to complex and costly architectures, and offer limited automation resulting in increased operational costs.

    Dell EMC Data Protection simplifies the difficult process of protecting data in fast growing VMware environments. Our solutions provide a more modern and simpler to manage and scale software defined architecture for VMware protection with:

    • Automation across the entire VMware protection stack (VM backup policy, data movers/proxies and backup storage) with simpler scaling to more VMs without media server sprawl – Less than 5 minutes to deploy and configure a virtual proxy
    • Best in class performance and data efficiency with less capacity and bandwidth required – 72x deduplication rate and 98% reduction in network usage
    • Transformational management functionality that is designed to enable self-service to vAdmins, including native integration with vSphere UI and vRA, with oversight by backup/infrastructure admins.

    Dell EMC Data Protection also provides comprehensive protection for your VMware environment today and tomorrow. Our solutions provide protection for virtualized mission critical IO intensive applications, support for all phases of your journey to the cloud, and are optimized for converged infrastructures:

    • Hypervisor direct backup and restore for IO intensive applications and databases for superior performance – 5x faster backups (New)
      • SLO driven protection built to enable self-service with consolidated oversight and automation
    • Support for all phases of cloud:
      • Long term retention to the cloud
      • Disaster recovery in the cloud (AWS)
      • Protect workloads in the cloud, including VMware Cloud workloads on AWS (New)
    • Optimized data protection for Dell EMC VxRail Appliance (hyper converged architecture) and pre-configured, easy to deploy, comprehensive data protection with the Integrated Data Protection Appliance

Combine all this with our flexible consumption model that lets businesses pay for their data protection in a way that makes the most financial sense, and our customers can rest assured that their VMware environment is protected with a low-TCO solution (Protection Storage, Protection Software, as well as an Integrated Appliance) that is ideal for today and designed to help them modernize and transform to a software-defined data center.


    Figuring out Your IT Transformation Path When One Size Doesn’t Fit All


    The decision to modernize an IT environment is not solely related to technology – it has a significant impact to people, processes, and ultimately the business. As we speak to different customers, we’re often finding that the right path to IT transformation varies greatly by organization, with many at different stages, using different approaches and with varying long-term goals and objectives – which is precisely why there is no one size fits all approach to IT modernization.

    We recognize our customers are in different states of IT readiness and priorities, which is why we offer a broad portfolio of options spanning complete turnkey hybrid cloud platforms to individual hyper-converged infrastructure (HCI) appliances and nodes. Turnkey hybrid cloud platforms, based on Dell EMC VxRail Appliances, VxRack Systems or VxBlock systems running VMware software, allow us to deliver accelerated deployments, simplified lifecycle management, and streamlined support. If your strategy is focused on modernizing your infrastructure, our fully integrated HCI is the best way to go. We strive to simplify the complex task of transformation and ultimately enable your organization to deliver value to the business faster.

    With that in mind, our latest enhancements to the Dell EMC portfolio are designed to help our customers advance their priorities with an approach that makes sense for their business. Both the Dell EMC Enterprise Hybrid Cloud (EHC) and Dell EMC Native Hybrid Cloud (NHC) software stacks run across a common base of Dell EMC hyper-converged infrastructure. These turnkey hybrid cloud platforms both provide faster time to value and reduced operational risk for customers looking to deliver cloud-like services on-premises.

The latest release of Dell EMC EHC updates the majority of Dell EMC and VMware core software components, in addition to making EHC available now on VxRack SDDC. EHC, deployed on either VxRack Systems or VxRail Appliances, is the best way to stand up an on-premises hybrid cloud based on a tightly integrated VMware software stack. With this release, we’ve not only expanded multi-site capabilities across the Dell EMC EHC portfolio; we’ve added application and VM-level disaster recovery and automated the upgrade process.

With customized services from Dell EMC, we can help your organization assess and classify your applications and where they’re best suited to run – in the public cloud, the private cloud or on modernized infrastructure.  With a thorough assessment you can determine which apps to retire, which to move to HCI and which should be refactored using agile, continuous delivery, and container-based architectures.

For those seeking to embrace digital business models using cloud-native applications, the Dell EMC NHC platform built on VxRail is simply the best way to deploy the Pivotal Cloud Foundry platform-as-a-service (PaaS) environment on-premises. You can run your applications on-premises and extend them seamlessly into the public cloud. Our latest update to this platform makes it even easier to manage the platform and applications on a global scale with built-in high availability.

     

    We also want to make it simpler for a broader swath of organizations to deploy hybrid cloud capabilities for those seeking a “do-it-yourself” approach. To that end, we’re now working to add VMware Ready Systems from Dell EMC to the portfolio by the end of this year. These Ready Systems address the needs of customers wanting the simplicity of deploying a hybrid cloud strategy on HCI while taking on more of the deployment, configuration and lifecycle maintenance responsibility themselves.

    The VMware Ready System from Dell EMC combines VMware Cloud Foundation and vRealize Suite with pre-configured and tested configurations of VxRack SDDC and VxRail. Optional fixed price services will be offered for those customers who prefer to have Dell EMC handle the initial deployment. Designed for customers and deployments of any size, they offer quick, easy and low-risk deployments with a lower capital investment than with the turnkey lifecycle and single support model of EHC and NHC.

Organizations of all sizes are already benefiting from their investments in our broad portfolio of HCI offerings. Those IT organizations have already discovered that deployments are not only simpler but also reduce install and upgrade admin costs by as much as 16X.

IT organizations can opt to start small and scale over time as they move applications to a more modernized infrastructure; VxRail Appliances have introduced advanced tools that automate a wide variety of management tasks for even greater efficiencies. But if they prefer a rack-scale HCI system built on the most highly integrated VMware software stack, pre-loaded with VMware Cloud Foundation software, VxRack SDDC is the way to go.

Public clouds are often seen as the big easy button, and, for the right applications and use cases, they can be. But like most things in life, platform choices are never that simple. In fact, a new survey published by 451 Research on behalf of VMware shows that 41 percent of 150 IT decision-makers are running their own private clouds at lower unit costs than a public cloud service. And while costs play a role, location matters when it comes to control and security for some applications and use cases.  IDC’s study, The Power of Hybrid Cloud, found that over a 5-year period, a good rule of thumb is that applications with predictable and relatively consistent usage tend to be more cost effective on private clouds, while those with variable or unknown usage tend to be more cost effective on the public cloud.

    That doesn’t mean that public clouds don’t have a significant role to play in the enterprise. But it does indicate that private clouds running on-premises are here to stay and that arguably a public cloud will simply become a natural extension of a private cloud running in your local data center. In the meantime, wherever your organization is in your quest to transform delivery of your IT services, no one is better prepared than Dell EMC to support your organization both now and well into the future.


    XtremIO X2 – the Fast Path to a Modern VDI Infrastructure


VMworld 2017 – VMware’s premier thought leadership and education destination for technology professionals – kicks off this week in Las Vegas. Back in May at Dell EMC World, we proudly introduced XtremIO X2 – the second generation of Dell EMC’s XtremIO All-Flash storage system – to help customers modernize their datacenters and transform IT.

Just in time for VMworld, we are excited to announce the general availability of Dell EMC XtremIO X2.  XtremIO X2 is the next generation of Dell EMC’s purpose-built All-Flash array and provides even more performance, agility, and simplicity. For IT departments deploying a VMware infrastructure, XtremIO X2 automates, modernizes and simplifies management and monitoring with its VMware vSphere plug-in and VMware vRealize Orchestration for VDI (Virtual Desktop Infrastructure) environments.

    PERFORMANCE

XtremIO X2 delivers extreme performance with consistent sub-millisecond latency as low as 200 microseconds while hosting up to 4,000 virtual desktops. X2 delivers 80% better response times than the previous generation, and this performance is unaffected by boot storms, antivirus scans, suspend/resume operations, application peak demands, user activity, or other demands on the storage array’s resources. With XtremIO, VM cloning for VMware is a control plane operation, making it extremely fast by leveraging the hypervisor’s copy offload capabilities like VAAI Copy Offload or X-COPY. In fact, a single XtremIO X2 X-Brick can host up to 4,000 virtual desktops[i] — that’s 1,000 desktops per RU, resulting in up to 33% lower cost per desktop than the previous generation XtremIO.[ii] The graph below demonstrates 4,000 knowledge workers.  You can also watch a video of this here.

    LoginVSI benchmark demonstrating 4,000 active Knowledge Workers

    DATA STORAGE EFFICIENCY

XtremIO’s always-on, real-time inline data reduction technology, including deduplication and compression, dramatically reduces the amount of data storage capacity required to support VDI environments. Data deduplication is always inline without affecting performance or consuming capacity.  Thousands of virtual desktops can be deployed with only a few terabytes of flash. Customer data has shown dedupe rates of 10:1 or higher for full clones and 3:1 for linked clones (see below).

What’s also exciting about our X2 announcement at VMworld is that we’ve proven it delivers 25% higher data reduction and 2X more copies per cluster for iCDM, while at the same time delivering 4X better rack density than the previous generation XtremIO platform.

    XtremIO X2 used capacity for 4,000 VMs (full clones)

    USER EXPERIENCE

With its massive I/O performance, virtual desktops running on XtremIO X2 respond instantly and consistently faster than those on physical desktops. This extreme performance ensures that every user is able to get their work done quickly and efficiently.  No more receiving calls about slow response times to mission-critical applications. In fact, XtremIO X2 is 25% faster at booting virtual desktops, as shown in the results below from identical tests run on both XtremIO X2 and the previous generation of XtremIO.[iii]

    INCREASED SCALE

With XtremIO X2, you can now scale up as well as scale out more flexibly. You can start with as few as 18 SSDs with 7TB of capacity. Then, as your VDI environment grows, you can simply add more SSDs (up to 138TB) to host more desktops on a single X-Brick. Once a single X-Brick reaches maximum desktop hosting capacity, you can then scale out non-disruptively up to a total of 8 X-Bricks. Leveraging the embedded storage efficiency capabilities, X2 can scale to support up to 5.5PB of effective capacity. This new X2 capability dramatically improves and simplifies VDI deployments, making online expansion completely transparent to both your IT applications and users, while also eliminating all storage planning complexity.

    XtremIO X2 Scale Up and Scale Out Up to 5.5PB Effective Capacity
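A quick worked example shows how those scaling figures fit together; the roughly 5:1 ratio below is simply what the quoted raw and effective capacities imply, not an official data reduction guarantee.

```python
# Back-of-the-envelope scaling math from the figures quoted above.
max_raw_per_xbrick_tb = 138     # maximum SSD capacity per X-Brick (TB)
max_xbricks = 8                 # scale-out limit
effective_capacity_pb = 5.5     # quoted maximum effective capacity (PB)

raw_capacity_tb = max_raw_per_xbrick_tb * max_xbricks          # 1,104 TB (~1.1 PB) raw
implied_reduction = (effective_capacity_pb * 1000) / raw_capacity_tb

print(f"Maximum raw capacity: {raw_capacity_tb:,} TB")
print(f"Implied overall data reduction: ~{implied_reduction:.1f}:1")  # roughly 5:1
```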

    X2 has also been designed for higher density to pack more All-Flash capacity in a footprint that is 3X denser than the previous generation, a key improvement for many of our customers:

“As a service provider, the more capacity we can fit into a smaller footprint, the better it is for our business,” said Brady Snodgrass, Storage Architect at First National Technology Solutions. “XtremIO X2’s significantly improved density — with the capability of packing up to 72 SSDs into each X-Brick, nearly 3x the previous model — is a big advantage.”

    SIMPLE STORAGE MANAGEMENT

    Finally, XtremIO X2 offers a new HTML 5 management interface making it even simpler for storage administrators to quickly provision and view overall system status. With X2 there are no applications to install and management is done via a common web browser.  Key system metrics are front and center and displayed in an easy-to-read graphical dashboard.  Overall system health, performance and capacity metrics and graphs are right at your fingertips for quick checkups while graphical views of the array hardware and interfaces provide simple visuals for overall system performance.

    XtremIO X2 HTML 5 Dashboard

XtremIO X2 management integration with VMware vSphere is just as exciting news for VMworld attendees. Dell EMC AppSync integrates with VMware to easily and natively protect and restore datastores, virtual machines or files within VMs.   VSI integration with AppSync allows iCDM to be managed directly within vCenter, while point-in-time recovery is possible using RecoverPoint and VMware SRM.  A great demonstration of XtremIO X2 with VMware vSphere and Site Recovery Manager integration can be found here.

    If you’re already an XtremIO customer, hopefully I’ve piqued your interest in some of the new capabilities in performance, agility and ease-of use.  If you’re not already an XtremIO customer, but run a heavy VMware infrastructure, it’s worth your while checking out how XtremIO X2 can deliver peak results to your organization.  Check out a really great video or head on over to DellEMC.com to learn more about the new XtremIO X2.

    ____________________________________________________________

    [i] Based on Dell EMC internal testing using the LoginVSI VDI workload generator (knowledge worker profile) benchmark test, July 2017.

    [ii] Based on Dell EMC internal analysis, July 2017. Actual cost will vary.

    [iii] Based on Dell EMC internal testing, July 2017. Actual performance will vary.

     


    Realize Transformation with Dell EMC at VMworld US 2017


    Technology innovation is advancing at an exponential rate, powering a new era of digital business. Most IT leaders tell us their transformation initiatives are still emerging or evolving—and not yet fully realized. Many can’t easily shift resources to innovation fast enough due to dated IT infrastructure, putting their organization at risk of falling behind competitors.

    Dell EMC and VMware are working closely together to deliver impactful solutions that modernize infrastructure and spur these IT Transformation initiatives. Through our unique relationship, we are driving new levels of cross-portfolio integration and support for our customers. Our tightly aligned strategies and joint engineering result in innovations that deliver greater value for mutual customers while simplifying and accelerating IT service delivery.

    Dell EMC experts will be on hand at VMworld in Las Vegas this week to discuss our close collaboration on solutions from the desktop to the datacenter. These enhancements help customers fuel IT Transformation initiatives to achieve digital transformation goals. Here are a few highlights from the enterprise space.

    What’s New?

    Earlier today we announced new and enhanced products and solutions integrations across our broad portfolio of solutions that support VMware. This includes hyper-converged infrastructure, hybrid cloud platforms, Ready Solutions, data protection, all-flash storage arrays and cloud-integrated storage.

    • Extensive advancements across Dell EMC’s hyper-converged infrastructure (HCI) and hybrid cloud platforms will simplify and speed IT Transformation initiatives by modernizing infrastructure for VMware environments and data centers at any scale. These advances include:
      • Dell EMC VxRail Appliances, available in September 2017 with the latest VxRail 4.5 software, will offer simple automation and lifecycle management for the latest VMware technologies.
      • Dell EMC VxRack SDDC is now powered by the latest VMware technologies, including VMware Cloud Foundation, which makes it up to 80% more efficient at scale.
      • Dell EMC Enterprise Hybrid Cloud continues to extend its capabilities to offer greater agility, more flexibility and choice while simplifying the lifecycle management of the platform.
      • Dell EMC Native Hybrid Cloud built on VxRack SDDC is available today via an early access program.
      • VMware Ready Systems from Dell EMC provide a simple, standard approach to building hybrid clouds.
    • Dell EMC Data Protection Suite for Applications provides 5X faster backups than traditional methods, directly from VMware vSphere®, so mutual customers can address the challenges of protecting large, fast-changing, mission-critical applications and virtualized databases.
    • Dell EMC Data Protection is also the preferred VMware partner for providing backup and recovery for VMware Cloud workloads running on Amazon Web Services. VMware Cloud on AWS customers will also have the option to easily add Dell EMC data protection through a few simple clicks when ordering new on-demand services.
    • Dell EMC vSAN Ready Nodes, part of our flexible Ready Solutions family and featuring the 14th generation of Dell EMC PowerEdge servers, are pre-configured, validated building blocks that offer the most flexibility for customers who prefer to build and manage infrastructure themselves. This represents the first step in transitioning the entire Dell EMC hyper-converged portfolio to 14th generation PowerEdge servers by the end of 2017.
    • XtremIO X2, Dell EMC’s purpose-built All-Flash array, supports 40% more VDI users per X-Brick at up to 30% lower $/desktop (a quick back-of-the-envelope sketch follows below). It is designed to provide streamlined automation and modernization while simplifying management and monitoring.
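
    To put those two XtremIO X2 VDI percentages in perspective, here is a minimal sketch of how “40% more users” and “up to 30% lower $/desktop” combine. The baseline desktop count and cost are placeholder values, not figures from the announcement.

    # Illustrative $/desktop sketch; baseline values are placeholders only.
    baseline_desktops = 1000            # hypothetical desktops on a previous-gen X-Brick
    baseline_cost_per_desktop = 100.0   # hypothetical $/desktop on the previous gen

    x2_desktops = baseline_desktops * 1.40                    # 40% more VDI users per X-Brick
    x2_cost_per_desktop = baseline_cost_per_desktop * 0.70    # up to 30% lower $/desktop

    print(f"Previous gen: {baseline_desktops} desktops at "
          f"${baseline_cost_per_desktop:.0f}/desktop")
    print(f"XtremIO X2:   {x2_desktops:.0f} desktops at "
          f"${x2_cost_per_desktop:.0f}/desktop")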

    We’re just getting started – there is more news to come tomorrow!

    Not to Be Missed in Las Vegas

    This year’s general sessions feature VMware CEO Pat Gelsinger, COO Sanjay Poonen, and CTO Ray O’Farrell showcasing new technologies and customer use cases. On Tuesday, August 29, special guest speaker Michael Dell will join Pat on stage to share his insights.

    If you’re attending VMworld in Las Vegas this week, be sure to visit the Dell EMC booth #400 for dozens of theater sessions, a meet-the-experts bar, and conversation stations. Additionally, Dell Technologies is featured in 31 sessions covering topics ranging from data protection and converged and hyper-converged infrastructure to workload-validated Ready Solutions and empowering the digital workspace. And don’t miss the expert-led and self-paced labs in the VMworld Hands-on-Lab area, where you’ll find a Dell EMC lab on Getting Started with VxRail among hundreds of other labs and certifications.

    For more information on all of the activities mentioned above – and MORE – visit our event page here as well as the Everything VMware at Dell EMC community.

    Coming Soon…VMworld EMEA

    About 10 days from now, we’ll do it all again in Barcelona from September 11–14. Hope to see you there for even more on the trends that matter most to your business and IT.

