
Dell EMC Hybrid Cloud Advances to be Main Attraction at VMworld 2017


In a perfect digitally transformed world, every application workload would instantly know where best to deploy itself. No one on the IT staff would ever have to spend time optimizing the performance of those workloads or worrying about where they are running at any given moment. As great as that all sounds, every IT professional attending the VMworld 2017 conference this month knows all too well that application workloads are not going to manage themselves any time soon. What IT professionals can count on is the fact that application workloads, and the IT infrastructure on which they depend, are becoming markedly more intelligent with each passing day.

IT professionals today routinely get asked to manage workloads that are anywhere from a few decades old to literally born of the cloud in the last few minutes. Some of those workloads need to run on a public cloud. But the clear majority of those workloads will continue to be deployed in traditional on-premises data center environments now and well into the future. The real challenge going forward will be managing those workloads as they migrate across the hybrid cloud computing environments that will soon be the de facto standard across the enterprise.

At the Dell EMC booth #400, we will be showcasing new additions to our Enterprise Hybrid Cloud and Native Hybrid Cloud platforms, as well as updates on our HCI portfolio. In addition to the Dell EMC experts staffing the booth, there will be no shortage of vBrownBag TechTalks and Hands-on Labs!

Of course, the inimitable Chad Sakac, president of the Dell EMC Converged Platforms and Solutions Division (CPSD), will provide his unique perspective on the ‘State of the Union’ within Dell EMC, with a special focus on multi-cloud, converged and hyper-converged platforms, during a breakout session on August 30 from 8:30 – 9:30 a.m. That will be preceded by an ‘Ask Chad’ session at the Dell EMC booth on Monday, August 28 from 2:30 – 3:30 p.m., during which customers and partners will be invited to ask Chad questions.

In addition, you should make time to visit the HCI Room to see why VxRail systems are the fastest growing product line in our history.

Finally, don’t miss a chance to hear from your peers during our CONVERGED meetup on Monday, August 28 from 3-5:30 p.m. at StripSteak in Mandalay Bay. The highlight of this session will be a panel of VxRail and EHC/NHC customers moderated by CPSD’s head of marketing, Bob Wambach, Chad Sakac and VMware’s head of Dell Technologies solutions, Iain Mulholland.

Naturally, no conference experience is complete without the opportunity to take home some cool swag. In the HCI Room you’ll discover a new HCI Hero interactive video game along with some awesome VxRail swag. In the VxRail escape room you’ll find The Great Xscape – a physical experience that challenges participants to complete three puzzles in less than 10 minutes to escape the room. The group with the best time of the day will win Dell laptops! For those that arrive early enough, we also encourage you to participate in the 9th annual v0dgeball tourney on Sunday, August 27 at 3:00 p.m. to benefit the Wounded Warriors charity.

The unifying theme of all these events is VMware, which now sets the bar for what hybrid cloud computing in the enterprise should really be. Personally, I’m looking forward to seeing how the latest version of the VMware vRealize automation framework injects a massive amount of intelligence into the management of hybrid clouds. That capability makes public clouds hosting instances of VMware an almost seamless extension of any on-premises IT environment.

 

We are also excited to show you how the latest generation of NVMe-based VxRack systems is redefining the economics of hybrid cloud computing. Contrary to popular opinion, it’s now much more cost effective to run certain classes of long-running workloads in an on-premises IT environment. In fact, a recent Cloud Price Index report published by 451 Research notes that anything involving more than 400 virtual machines is less expensive to deploy on-premises. Obviously, public clouds will continue to play an important role in IT. But we believe the future of enterprise IT will revolve around hybrid clouds built on top of robust, pre-integrated IT infrastructure – such as our Dell EMC Blocks, Racks and Appliances – designed from the ground up to achieve unparalleled levels of scale.

If you can’t make it out to Las Vegas this month, be sure to try to attend the VMworld 2017 conference in Barcelona next month. Whether we are in the desert or on the shores of the Mediterranean, we know there really is no substitute for a hands-on experience when it comes to seeing and believing in the future of hybrid cloud computing.

 



Isaac Asimov: The 4th Law of Robotics


I’m sure that many of you nerds have, like me, read the book “I, Robot.” It’s the seminal book written by Isaac Asimov (actually it was a series of books, but I only read the one) that explores the moral and ethical challenges posed by a world dominated by robots.

But I read that book like 50 years ago, so the movie “I, Robot” with Will Smith is actually more relevant to me today. The movie does a nice job of discussing the ethical and moral challenges associated with a society where robots play such a dominant and crucial role in everyday life. Both the book and the movie revolve around the “Three Laws of Robotics,” which are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

It’s like the “3 Commandments” of being a robot; adhere to these three laws and everything will be just fine. Unfortunately, that turned out not to be true (if 10 commandments cannot effectively govern humans, how do we expect just 3 to govern robots?).

There is a scene in the movie where Detective Spooner (played by Will Smith) explains to Doctor Calvin (who is responsible for giving robots human-like behaviors) why he distrusts and hates robots. He describes an incident in which his police car crashed into another car and both cars were thrown into a cold and deep river – certain death for all occupants. However, a robot jumps into the water and decides to save Detective Spooner over a 10-year-old girl (Sarah) who was in the other car. Here is the dialogue between Detective Spooner and Doctor Calvin about the robot’s decision to save Detective Spooner instead of the girl:

Doctor Calvin: “The robot’s brain is a difference engine[1]. It’s reading vital signs, and it must have calculated that…”

Spooner: “It did…I was the logical choice to save. It calculated that I had 45% chance of survival. Sarah had only an 11% chance. She was somebody’s baby. 11% is more than enough. A human being would have known that.”
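
The robot’s choice reads like a one-line optimization. As a purely illustrative sketch (the 45% and 11% figures come from the dialogue above; the function and names are hypothetical, not anything from the film or from Asimov), the “difference engine” logic amounts to this:

```python
# Toy model of the movie robot's decision rule: save whoever has the
# higher estimated probability of survival. The probabilities come from
# the dialogue above; everything else here is illustrative.

def choose_rescue(candidates):
    """Return the candidate with the highest estimated survival probability."""
    return max(candidates, key=lambda c: c["survival_probability"])

candidates = [
    {"name": "Spooner", "survival_probability": 0.45},
    {"name": "Sarah", "survival_probability": 0.11},
]

print(choose_rescue(candidates)["name"])  # -> Spooner
```

What the rule cannot express is Spooner’s objection that “11% is more than enough” – the empathy that weighs who Sarah is, not just her odds.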

I had a recent conversation via LinkedIn (see, not all social media conversations are full of fake news) with Fabio Ciucci, the Founder and CEO of Anfy srl, located in Lucca, Tuscany, Italy, about artificial intelligence and questions of ethics. Fabio challenged me with the following scenario:

“Suppose in the world of autonomous cars, two kids suddenly run in front of an autonomous car with a single passenger, and the autonomous car (robot) is forced into a life-and-death decision or choice as to who to kill and who to spare (kids versus driver).”

What decision does the autonomous (robot) car make? It seems Isaac Asimov didn’t envision needing a law to govern robots in these sorts of life-and-death situations, where the choice isn’t the life of the robot versus the life of a human, but rather between the lives of multiple humans!

A number of surveys have been conducted to understand what to do in a situation where the autonomous car has to make a life-and-death decision between saving the driver versus sparing pedestrians. From the article “Will your driverless car be willing to kill you to save the lives of others?” we get the following:

“In one survey, 76% of people agreed that a driverless car should sacrifice its passenger rather than plow into and kill 10 pedestrians. They agreed, too, that it was moral for autonomous vehicles to be programmed in this way: it minimized deaths the cars caused. And the view held even when people were asked to imagine themselves or a family member travelling in the car.”

While 76% is certainly not an overwhelming majority, there does seem to be a basis for creating a 4th Law of Robotics to govern these sorts of situations. But hold on: while in theory 76% favored saving the pedestrians over the driver, the sentiment changes when it involves YOU!

“When people were asked whether they would buy a car controlled by such a moral algorithm, their enthusiasm cooled. Those surveyed said they would much rather purchase a car programmed to protect themselves instead of pedestrians. In other words, driverless cars that occasionally sacrificed their drivers for the greater good were a fine idea, but only for other people.”

Seems that Mercedes has already made a decision about who to kill and who to spare. In the article “Why Mercedes’ Decision To Let Its Self-Driving Cars Kill Pedestrians Is Probably The Right Thing To Do”, Mercedes is programming its cars to save the driver and kill the pedestrians or another driver in these no-time-to-hesitate, life-and-death decisions. Riddle me this, Batman: will how the autonomous car is “programmed” to react in these life-or-death situations impact your decision to buy a particular brand of autonomous car?

Another study published in the journal “Science” (The social dilemma of autonomous vehicles) highlighted the ethical dilemmas self-driving car manufacturers are faced with, and what people believed would be the correct course of action: kill or be killed. About 2,000 people were polled, and the majority believed that autonomous cars should always make the decision that causes the fewest fatalities. On the other hand, most people also said they would only buy one if it meant their safety was a priority.

4th Law of Robotics

Historically, the human/machine relationship was a master/slave relationship; we told the machine what to do and it did it. But today with artificial intelligence and machine learning, machines are becoming our equals in a growing number of tasks.

I understand that overall, autonomous vehicles are going to save lives…many lives. But there will be situations where these machines are going to be forced to make life-and-death decisions about which humans to save and which humans to kill. But where is the human empathy that understands that every situation is different? Human empathy must be engaged to make these types of morally challenging life-and-death decisions. I’m not sure that even a 4th Law of Robotics is going to suffice.

 

[1] A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. The name derives from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients.


A Security Decision – Build or Buy


We are sometimes asked to compare our threat detection and response solutions to those custom assembled by security experts using various open source products. With a wide array of quality point solutions available, it’s natural to consider whether a combination of best-of-breed open source solutions can be a better option for a particular organization, rather than an integrated commercial solution.

To start with, RSA is a big fan of open source software and open source threat intelligence, participating in the security sharing process. This collaborative tradition is strong in the security space, as we all battle the same adversaries to protect our organizations, and to keep the internet as safe as possible for everyone.

In practical terms, this is a classic “build vs. buy” choice, and boils down to an organization’s preferences, available skills, and risk tolerance. While strong solutions are possible with either choice, the differences are important to understand.

  • Preferences: Some organizations are very comfortable with open source software. They’ve typically built up skills specific to the open source software model, particularly community- and self-support, along with a full understanding of the various licenses including GPL. Other organizations are more comfortable with commercial software, or even actively prefer that approach. For these organizations, the availability of support, predictable upgrades, and lifecycle guarantees offsets potential license savings. Many have explicit rules about this in their governance, risk, and compliance (GRC) playbooks.
  • Available Skills: The availability of deep security and integration skills – and the ability to retain them – is an important factor in choosing between custom integration and a commercial platform. If your organization’s skill set is strong and stable, you may feel comfortable integrating different technology for logs, packets, endpoint, and netflow, and possibly separate analysis and remediation tools. Remember that this is not a one-time event, but a continuous process of maintaining integration and adding capabilities as they become available. In the case of a commercial threat detection and response platform, the integration is managed by the vendor. This frees up resources to focus on the threat hunting activity. Furthermore, in the case of RSA offerings, the threat hunting activity can be easily split between analysts of differing skills, making everyone much more productive. Lastly, interoperability with various SIEMs, IPSs, firewalls, etc., is maintained by the vendors so customers don’t need to worry about it.
  • Risk Tolerance: For organizations that integrate security strategy with business strategy, IT risk is an important category. Breaches have a potentially huge negative impact, and are appropriately weighted in most risk programs. For the open source version, there are additional risks that must be evaluated. Among these are the continued availability of high-level skills required to manage and maintain the solution. You’ll also want to consider the stability of projects underlying the components used, and the availability of suitable alternative components – as well as the effort required to replace and integrate that component. For a commercial platform, the stability and maturity of the vendor, both from technology and business perspectives, defines the risk in adopting it. Commercial support systems lower the risk of a catastrophic outage, as do support SLAs and the existence of professional services, including incident response support.

So the choice is ultimately dependent on the organization making the decision. If done really well, a custom-integrated solution can be effective. However, with that choice you have to possess (and retain) the skills to do it. In addition, you make yourself dependent on multiple projects/vendors, increasing the risk that one may cease to maintain a solution, or fail altogether.

Our approach is to integrate across our RSA offerings so customers don’t need to worry about that part, and to interoperate with any component a customer chooses to use in place. A common example is a customer adding RSA threat detection and response components to its existing SIEM solution. In this instance the analysis and detection takes place in the RSA framework, so you still get all the benefits of integration.

One good piece of advice for anyone considering a threat detection and response solution – really for any IT decision – is to look out five years into the future, and consider changes that may impact your organization. Certainly internal considerations, such as maintenance of employee skills and organizational risk tolerance, will be important. It’s also critical to evaluate the probability that technology partners will continue to support your activities at a predictable and professional level. Remember that security is a process, not an event. When you choose something as critically important as a threat detection and response solution, you need to treat it as an ongoing commitment. It’s important to choose wisely.

 

Learn more about our threat detection and response capabilities in RSA NetWitness® Suite, as well as our participation in the security sharing process through RSA® Live and Live Connect, RSA® Link and RSA® Research threat intelligence sharing.


With VMware Embedded, OEMs Have Even More Options


What motivates you to get up every day? For me, the answer is pretty simple: I’m inspired by the work of our customers and partners. I’m talking about innovative solutions that have the power to radically transform lives and improve business productivity – whether it’s an MRI machine in a hospital, a way for farmers to measure and improve crop growth, a smart building that is responsive to occupant needs or an industrial-strength PC helping to run a factory line more smoothly. At the end of the day, it’s all about technology simplifying things, improving lives and making business more efficient.

In fact, the whole focus of the Dell EMC OEM division is to help our customers and partners bring their innovative solutions to market as quickly as possible. That’s precisely why Dell EMC OEM is the first (and only) tier-one technology vendor to offer VMware Embedded.

VMware Embedded – a Way to Expand Into the Virtual Space

VMware Embedded is a pre-configured, turnkey virtualization solution that offers a new way for OEMs to increase revenue and expand into the virtual space. In a nutshell, this offering, with a resellable licensing model, enables OEMs to sell more products, more efficiently. Additionally, customers have the option to purchase VMware Embedded with Dell EMC hardware, such as PowerEdge servers, or as a standalone product to streamline their supply chain.

Why Virtualization Matters

We have all seen the trend of businesses tapping into virtualization to gain longer solution life cycles, take advantage of cloud agility, reduce downtime and improve security. As a result, virtualization has become a key priority for the majority of enterprise solutions built by OEMs and ISVs.

Now with VMware Embedded, customers have the option to run it as a virtual appliance, install it on a physical appliance or use it in Dell EMC OEM’s datacenter as a managed service offering. This maximizes efficiency and lifecycle value across the OEM’s solution, ultimately benefiting the end customer.

Why VMware Is Great for OEMs

As an OEM, you can deliver VMware updates as a value-add service – and at a release cadence that matches your timelines – while serving as the single point of contact for support. To help decrease costs of goods sold and speed time-to-revenue, Dell EMC OEM will work with you to validate designs, align go-to-market strategies and create roadmaps. OEMs can also choose from a wide range of licensing and pricing models, including OEM sublicensing and global distribution rights, without multiple contracts.

For me, this is the main benefit of VMware Embedded – it enables our OEMs to provide quality support of VMware across all deployment models, offering advantages to customers in multiple markets, including manufacturing, video surveillance, communications, gaming, financial services, energy, healthcare, storage and networking.

But don’t take my word for it – this is what Darren Waldrep, vice president of strategic alliances at Datatrend, a Dell EMC OEM Preferred Partner, had to say. “Dell EMC and VMware’s embedded offering is a competitively priced solution that we are excited to offer our customers. VMware Embedded creates a much easier route to market for Dell EMC OEM partners and integrators, like ourselves.” Waldrep specifically highlighted Dell EMC’s and VMware’s “best of breed technologies” and our commitment to truly enabling the channel to deliver best pricing and experience for the end customer.

As we move deeper into the era of digital transformation, the need for speed will be imperative – no matter the industry. Understanding the unique needs of our customers and helping them to adapt to the constantly changing market is what will allow you as an OEM to thrive.

Check out the datasheet or visit Dell EMC OEM in the Dell EMC booth #400 at VMworld, Aug. 27-31 in Las Vegas. We hope to see you there!


The 6 Myths About Servers That Almost Everyone Believes


“How long do we use our servers?”

“Until they die.”

That was a real conversation I had with another system admin during my first week at a new job in 1998. Google “how long should you go before refreshing servers” and you will find that the topic is still debated. But, why?

When you buy something that isn’t disposable, you own it for its useful life. Your goal is to get the most out of it before you have to replace it. A car is a great example. You buy it and plan to drive it for as long as possible. It costs a lot upfront and you need to recoup that cost, right? But, how long before an old car becomes too expensive to maintain compared to buying a new one? Repair cost is the key variable for most people to consider. But, if you commute an hour each way to work, you might factor in the increased fuel efficiency of a new car. But if you don’t drive much, a couple of extra miles per gallon is pretty meaningless. On the other hand, to the owner of a taxi company in New York City, a few extra miles per gallon could save millions.

Twenty years ago, most companies thought about technology as a cost to minimize and tried to squeeze every last ounce out of it. Today, organizations of all shapes, sizes, and structures are transforming and rely heavily on technology. For a business to transform, it must embrace doing things differently. An easy place to start is by evaluating your company’s server refresh guidelines. Some companies have learned to maximize their server investment and opt for faster refreshes. But, as the IDC report Accelerate Business Agility with Faster Server Refresh Cycles shows, for many companies, old habits are hard to break. Let’s take a look at the myths leading to an average server refresh cycle of 5.8 years when it’s clearly beneficial to refresh more often.

Myth 1: To Get the Most Value From a Server, Use It as Long as Possible

Not true. It actually costs a company more to keep existing servers than to refresh them. A lot more. Companies that refresh servers every 3 years have operating costs 59% lower than companies that refresh their servers every 6 years.

Myth 2: The Cost to Acquire and Deploy a New Server Is More Than Just Keeping the Old One

Wrong. The cost of operating a server in years 4-6 increases to 10 times the initial server acquisition cost. That’s because the increasing costs are not linear; they jump significantly as a server ages past 3 years. In fact, by year 6, a server requires 181% more staff time and imposes productivity costs 447% higher than in year 1.

Myth 3: Upgrading Servers Is a Cash-Flow Drain

It’s actually the opposite. Even when you factor in the cost of acquisition and the cost of deploying new servers, a company that refreshes twice every 6 years instead of just once will have a 33% lower net cash flow. And if servers are a significant investment for your business (e.g. 300 servers), that could translate to savings of up to $14.6 million. The efficiencies of new servers and the benefits of consolidation (e.g. replacing 300 servers with 247 servers) make these savings possible.

Myth 4: It Takes Too Long to Realize the Benefits of Refreshed Servers

Incorrect. Remember, the cost of a server starts to increase rapidly in year 4. But if you replaced it instead, you would not incur those costs. The cumulative benefits of user productivity time savings, IT staff time savings, and IT infrastructure cost savings pay for that new server in less than a year. To be more precise, payback occurs in about 9 months.
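
The payback arithmetic itself is simple. Here is an illustrative sketch – the dollar figures are hypothetical placeholders chosen for round numbers; only the roughly nine-month payback conclusion comes from the IDC report cited above:

```python
# Illustrative payback calculation for a server refresh. The dollar
# amounts are hypothetical; only the ~9-month payback conclusion comes
# from the IDC report referenced above.

def payback_months(acquisition_cost, monthly_benefit):
    """Months until cumulative savings cover the up-front cost."""
    return acquisition_cost / monthly_benefit

# Example: a $9,000 server whose combined productivity, staff-time, and
# infrastructure savings total $1,000 per month pays for itself in ~9 months.
print(payback_months(9_000, 1_000))  # -> 9.0
```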

Myth 5: Newer Servers Have No Impact on Increasing Revenue

False. But this concept can be a little difficult to understand without hearing from companies that have increased their revenue. IDC highlights two different companies in its report: a logistics company and an educational services company. In both cases, the greater agility, capacity, and shorter time to delivery with the new servers helped them win additional business. IDC calculated the additional revenue per server at $123,438.

Myth 6: Buying Servers Is a Capital Expense

Not anymore. There are traditional leasing options as well as new innovative payment options such as Pay As You Grow and Provision and Pay from Dell Financial Services.

Don’t let these myths continue to keep your business from moving forward. The new generation of PowerEdge servers is more scalable and agile, performs better, is more power efficient, and can help you consolidate more than the previous generation of servers released about 3 years ago. And if you embrace a 3-year server refresh cycle instead of a 6-year one, you can take advantage of these innovations and say goodbye to higher costs.


Dell EMC Recognized by Microsoft as Its #1 Deployment Partner for Windows 10


Windows 10 is on track to be the fastest-growing Windows operating system in history. I’ve heard it referred to as “the best of Windows 7 and 8,” “the most secure Windows,” and the “best PC operating system ever.” That’s pretty impressive.

You would think everyone would want that, especially big and successful companies looking for ways to empower their employees to move faster than the competition. But numbers showed that adoption of Windows 10 had been slow in enterprise accounts. This might be because many customers only recently completed their Windows 7 migration or were discouraged by the usability of Windows 8.

To help companies past this hurdle, Microsoft and Dell EMC set out specifically to work with enterprise accounts and help them understand the benefits of using Windows 10. We also wanted to help clients understand the migration path from Windows 7 to Windows 10 (which is much easier, I might add, than moving from Windows XP to Windows 7 — especially around application compatibility).

Dell EMC Services led this effort by performing customer workshops, proofs of concept and security briefings to provide a roadmap for both migration and ongoing update planning. You see, Windows 10 creates an environment where quality and feature updates are pushed to PCs multiple times a year, allowing your teams to stay current on new features.

This approach and hard work was acknowledged by Microsoft this month when they recognized Dell EMC as its #1 Enterprise Deployment Partner for Windows 10!

“Dell has been a fantastic Windows 10 advocate this year, driving more Windows 10 Proof of Concept and Pilot projects through the Windows Accelerate Program than any other partner. While this by itself is a significant accomplishment, the Dell team has also worked to enable even more Windows 10 customer deployments through the launch and sale of great new Windows 10 devices, and even a new PC-as-a-Service offer that hundreds of other partners can use,” said Stuart Cutler, Global Director of Windows Product Marketing.

“Additionally, Dell has pushed the envelope around Windows-as-a-Service efficiencies, application compatibility testing, and new Windows Enterprise E5 security service offers.” He said, “Dell brings a lot of great talent and creativity to the Windows 10 ecosystem. Their energy and innovations help not just Dell and Microsoft, but many other partners around the world, too.”

Erica Lambert, Dell EMC Vice President, Global Channel Services Sales accepts the Microsoft Partner Network Windows & Devices Partner of the Year award from Ron Huddleston, Microsoft Corporate Vice President, One Commercial Partner Organization

We believe in Windows 10 and our new modern devices as the foundation for workforce transformation, and we have world-class expertise and resources to help our customers around the world.

Want proof? I’m proud to say we won a total of four Partner of the Year awards for our work with customers on Windows 10 deployments. In addition to the prestigious Windows and Devices Partner of the Year, US Windows Marketing named us the Enterprise Deployment Partner of the Year, US EPG named us Windows 10 Consumption Partner of the Year, and the global Consulting & SI Partner team named us the Windows Deployment Partner of the Year.

The Dell EMC US Support & Deployment team celebrates being recognized by Microsoft’s US Windows Marketing team as the Enterprise Deployment Partner of the Year.

We started by helping our customers with Windows 10 migrations, and now, at the request of our customers, we’re taking it to the next level with two new managed services offerings: Windows-as-a-Service (WaaS) to manage the continuous updates, and enhanced security through our Windows Defender Advanced Threat Protection program.

So, if you are looking for help to get to Windows 10, contact your Dell EMC representative and let’s work together to bring these expert deployment practices to work for you. Let us be your Deployment Partner of the Year too!


Our Services Journey and Our Promise


 Announcing the Next Step in our Enterprise Support & Deployment Services Journey

There is really nothing better than hearing firsthand about the successes and challenges our customers face and talking about how, together, we can help them accelerate their IT transformation.

When we announced at Dell EMC World in May that we would be aligning our services portfolio under the existing Dell brands of ProSupport and ProDeploy, I heard a clear message from customers all over the world.  “Your team provides us excellent service. You have great experts who help us solve issues and technologies that help us avoid problems. How will these changes impact your service programs and my organization?”

With that in mind, we knew we had our work cut out for us when we began the services transformation journey to integrate our portfolios.  Beginning today, we will be offering the ProSupport Enterprise Suite on Dell EMC products and solutions.

What does this mean to our customers?

  • Legacy EMC customers will be able to receive the same level of support they receive today under the unified ProSupport brand portfolio. For example, if you traditionally purchased the Premium support option on your legacy EMC products, you will now purchase ProSupport for Enterprise with the 4-hour mission critical option.
  • By leveraging existing processes, the same technical experts, and proactive technologies such as Secure Remote Services (ESRS) and MyService360, legacy EMC customers will continue to experience the same best-in-class service.
  • For the first time ever, legacy EMC customers will now have the option to select ProSupport Plus for Enterprise. This new level of support provides proactive and predictive support for mission critical systems.
  • For our legacy Dell customers, one impact you will notice is that we are rebranding the existing Technical Account Manager (TAM) to now be called a Technology Service Manager (TSM). There will be no change to the level of service that you are receiving from your TSM.

And there is nothing we take more seriously than our customers’ experience while helping them navigate their IT transformations from core to edge to cloud. Our global customer satisfaction score of 95% for ProSupport Plus for Enterprise shows we’re already on the right track.

Having said all this, I can tell you this is just the first step in our journey. In our next phase we will be working hard to integrate our contact centers, consolidate our support contract management, and unify our deployment services under a single brand for a single experience…but most of all, our promise is that we will never stop listening to our customers.

Our services transformation has one primary objective – to better enable customers to realize theirs. We do that by providing a seamless and consistent services engagement across all Dell EMC products, while continuing to evolve our best-in-class service features, processes and execution.

I look forward to connecting with customers during every step of our journey to help us adjust and optimize our plans to meet the evolving needs of customers and their unique IT transformations.


Forging New IT Skills With Cloud Foundry


One of the external technology inflections we’ve been actively pursuing in ERAA/Office of the CTO has been “Cloud Foundry” – a container-based architecture running apps in any programming language over a variety of cloud service providers, supporting the full application development lifecycle, from initial development through all testing stages to deployment. Yes, this is what going Cloud Native means!

We believe this is one of the next-generation platforms coming online today. We believed in it so much that we engaged with the Cloud Foundry Foundation at the Platinum level (along with Pivotal and VMware) three years ago. Our Global CTO John Roese serves as the Cloud Foundry Foundation Board Chairman. Take a look at John’s video to hear more about the rapid growth of Cloud Foundry and its industry impact.

 

Cloud Foundry Summit Silicon Valley in July of this year brought together developers, customers, and corporate members for an exciting week of sessions, trainings, speakers, and announcements. We also had the chance to talk with John and Abby Kearns, Executive Director of the Cloud Foundry Foundation, following the Summit to hear firsthand about the significant accomplishments made during the Summit – including the Launch of the Cloud Foundry Certified Developer Program!

 

Dell EMC is proud to be one of the leading corporate sponsors to add these courses to our education portfolio, providing knowledge and skills that are required for future technology success across the ecosystem.

The Cloud Foundry courses now available are:

Free Community Courses

New to Cloud Foundry? Take a look at these foundational community courses, including:

  • Cloud Foundry for Beginners: From Zero to Hero
  • Microservices on Cloud Foundry: Going Cloud Native
  • Operating a Platform: BOSH and Everything Else

Free MOOC: Introduction to Cloud Foundry and Cloud Native Software Architecture

What you’ll learn in this introductory course:

  • An introduction to cloud-native software architecture, as well as the Cloud Foundry platform and its components, distributions, and what it means to be Cloud Foundry certified.
  • How to build runtime and framework support with buildpacks.
  • Learn “The Twelve-Factor App” design patterns for resiliency and scalability.
  • Get the steps needed to make your app “cloud-native”.
  • Understand how the components of Cloud Foundry combine to provide a cloud-native platform.
  • Study techniques and examples for locating problems in distributed systems.

You can also obtain a Verified Certificate from edX (a massive open online course provider) for $99 after completing this course.

Cloud Foundry for Developers

  • This self-paced eLearning course provides developers with a comprehensive, hands-on introduction to the Cloud Foundry platform; students will use CF to build, deploy and manage a cloud-native microservice solution.
  • After this course, you’ll be well prepared for the Cloud Foundry Certified Developer Exam!
  • And there’s a discount if you do the bundled registration for the course and the exam.

Check out these courses and take advantage of the educational opportunities within the Cloud Foundry Community – be an early practitioner in an emerging technology that will shape future cloud services, whether you are a seasoned IT professional or a student looking for the next competitive skills advantage.

One more thing – what’s a technology blog without a quote from Einstein? This one seems appropriate for today’s discussion:

 Education is what remains after one has forgotten what one learned in school.

Interested in learning more about our global ERAA programs or have ‘horizon’ ideas you’d like to share? Drop me an email, and let’s talk!

 



Is That IaaS or Just Really Good Virtualization?


I was recently pulled into a Twitter conversation that spawned from a blog post Paul Galjan wrote concerning Microsoft Azure Stack IaaS and its use cases, in which he correctly pointed out that Azure Stack is not just a VM dispenser. The question was posed as to what exactly constitutes IaaS vs. virtualization (a VM dispenser), and where each fits into the hierarchy of “cloud.”

Let’s start out by clearly defining our terms:

Virtualization in its simplest form is leveraging software to abstract the physical hardware from the guest operating system(s) that run on top of it. Whether we are using VMware, XenServer, Hyper-V, or another hypervisor, from a conceptual standpoint they serve the same function.

In Enterprise use cases, virtualization in and of itself is only part of the solution. Typically, there are significant management and orchestration tools built around virtualized environments. Great examples of these are VMware’s vRealize Suite and Microsoft’s System Center, which allow IT organizations to manage and automate their virtualization environments. But does a hypervisor combined with a robust set of tools an IaaS offering make?

IaaS. Let’s now take a look at what comprises Infrastructure as a Service.  IaaS is one of the service models for cloud computing, and it is just that – a service model.  NIST lays it out as follows:

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).

You may be looking at that definition and asking yourself, “So, based on that, virtualization with a robust set of management tools is capable of delivering just that, right?” Well, sort of, but not really. We must take into account that the service model definition must also jibe with the broadly accepted “Essential Characteristics” of cloud computing.

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

Essentially all of these core tenets of “cloud” are capabilities that must be built out on top of existing virtualization and management techniques.  IaaS (as well as PaaS for that matter) caters to DevOps workflows enabling Agile code development, testing and packaging, along with release and configuration management, without regard for the underlying infrastructure components. This is clearly not the same as virtualization or even “advanced virtualization” (a term I hear thrown around every so often).

So then how does that relate back to the conversation concerning Azure Stack IaaS and VM dispensers? For that we need to look at how it plays out in the data center.

Building a virtualization environment has really become table stakes as a core IT function in the Enterprise space. There are plenty of “DIY” environments in the wild, and increasingly over the past few years we’ve seen platforms in the form of converged and hyper-converged offerings that streamline virtualization efforts into a turnkey appliance for hosting highly efficient virtualized environments – Dell EMC’s VxBlock, VxRack, and VxRail being leaders in this market. That said, they are not “IaaS” in a box – they are in essence, and by no means do I intend this as derogatory, “VM dispensers,” albeit highly advanced ones.

For customers that want to move beyond “advanced virtualization” techniques and truly embrace cloud computing, there are likewise several possibilities. DIY projects are certainly plausible, though they have an extremely high rate of failure – setting up, managing, and maintaining home-grown cloud environments isn’t easy, and the skill sets required aren’t cheap or inconsequential. There are offerings such as Dell EMC’s Enterprise Hybrid Cloud which build on top of our converged and hyper-converged platform offerings, building out the core capabilities that make cloud, cloud. For customers that are fundamentally concerned with delivering IaaS services on-premises from a standardized platform managed and supported as a single unit, Enterprise Hybrid Cloud is a robust and capable cloud.

So, what about something like the Dell EMC Cloud for Microsoft Azure Stack? In this case, it’s all about the intended use case. First, let’s understand that the core use cases for Azure Stack are PaaS related – the end goal is to deliver Azure PaaS services in an on-prem/hybrid fashion. That said, Azure Stack also has the capability to deliver Azure-consistent IaaS services as well. If your organization has a requirement or desire to deliver Azure-based IaaS on-prem with a consistent “infrastructure as code” code base for deployment and management – Stack’s your huckleberry. What it is not is a replacement for your existing virtualization solutions. As an example, even in an all-Microsoft scenario there are capabilities and features that a Hyper-V/System Center-based solution can provide in terms of resiliency and performance that Azure Stack doesn’t.

In short – virtualization, even really, really good virtualization, isn’t IaaS, and it’s not even cloud. It’s a mechanism for IT consolidation and efficiency. IaaS, on the other hand, builds on top of virtualization technologies and is focused on streamlining DevOps processes for rapid delivery of software and, by proxy, business results.

In scenarios where delivery of IaaS in a hybrid, Azure-consistent fashion is the requirement, Azure Stack is an incredibly transformative IaaS offering.  If Azure consistency is not a requirement, there are other potential solutions in the forms of either virtualization or on-prem based IaaS offerings that may well be a better fit for your organization.


Going (Cloud) Native


Interoperability in computing environments is a repeating problem affecting multiple parts of an organization. When systems cannot communicate with one another, the result is slowdowns in every way, from technology performance (as data has to be converted from one system to another) to human processes (such as teams spending time in yet another meeting trying to devise workarounds for data translation challenges). Interoperability must emerge, because common formats to send and store data are a necessity. This isn’t a new topic or concept, but this time, it’s being applied to modern infrastructure.

It’s time to address this issue in DevOps. We want to do for storage and containers today what Sun’s Network File System (NFS) protocol did for client/server storage in the 80s: enable performance, simplicity, and cross-vendor compatibility.

Happily, we’re in a position to participate in the process and the conversation around ensuring different pieces of the ecosystem fit together. The Cloud Native Computing Foundation, a division of the Linux Foundation, aims to help organizations build, deploy, and run critical applications in any cloud. The CNCF has under its umbrella other projects aligned with our goals, such as Kubernetes and Prometheus.

Today we recognize industry collaboration moving forward with major container and storage platforms. This initiative is called CSI, short for Container Storage Interface, and we are proud to have just introduced a first example implementation. We’re seeing immediate progress, and we expect CSI to be introduced as a CNCF project later this year, officially bringing storage into Cloud Native environments. While there are other organizations and initiatives in this space, such as OpenSDS, we are fully supportive of CSI and are proud to be a part of the community of customers, container orchestrators and other vendors standing behind this initiative. Along with this, we see REX-Ray as a prospective CNCF project, bringing support for storage for cloud native workloads.

Why It’s Necessary

 In nearly every historical case, interoperability standards have emerged: common formats we use to send and store data. Specifically in the storage industry, we’ve even seen this time and again from SCSI, NFS, SMB, CIFS, and VVOLs. Since I’ve always been a fan of Sun, I’ll use NFS as my example. In 1984 Sun developed the NFS protocol, a client/server system that permitted (and still permits!) users to access files across a network and to treat those files as if they resided in a local file directory. NFS successfully achieved its goals of enabling performance, simplicity, and cross-vendor compatibility.

NFS was specifically designed with the goal of eliminating the distinction between a local and remote file. To a user, after the appropriate setup is performed, a file on a remote computer can be used as if it were on a hard disk on the user’s local machine.

There’s plenty of architectural history to consider in how Sun achieved this, but the bottom line is: If you are a server, and you speak NFS, you don’t care who or what brand of server you’re talking to. You can be confident that the data is stored correctly.

Thirty years later, we technologists are still working to improve the options for data interoperability and storage. Today, we are creating cloud-computing environments that are built to ensure applications take full advantage of cloud resources anywhere. In these cases, the same logic applies as the NFS and server example above: If you have an application, and it needs fast storage, you shouldn’t have to care about what is available and how to configure it.

Storage and networking continue to be puzzle pieces that don’t always readily snap into place. We’re still trying to figure out if different pieces all work together in a fragmented technology ecosystem – but now it’s in the realm of containers rather than hard disks and OS infrastructure.

Ensuring Interoperability Is a Historical Imperative

As I wrote earlier, and to quote Game of Thrones, “winter is coming.” Whatever tenuous balance we’ve had in this space is being up-ended. Right now it’s the Wild West, where there are 15 ways to do any one thing. As an industry, cloud computing is moving into an era of consolidation where we combine efforts. If we want customers to adopt container technology, we need to make the technology consumable. Either we need to package it in a soup-to-nuts offering, or we must establish some common ground for how various components interoperate. Development and Ops teams would then have the opportunity to argue about more productive topics, such as whether the pitchless intentional walk makes baseball less of a time commitment or whether it sullies the ritual of the game.

The obvious reason for us to drive interoperability by bringing storage to cloud native is that it helps the user community. After all, with a known technology in hand, we make it easier for technical people to do their work, in much the same way that NFS provided an opportunity for servers to communicate with storage. But here the container provider is the server, and the storage provider is the same storage we’ve been talking about.

The importance of ensuring interoperability among components isn’t something I advocate on behalf of {code}; it’s a historical imperative. Everyone has to play here; we need to speak a common language in both human terms and technology protocols. We need to create a common interface between container runtimes and storage providers. And we need to help make these tools more consumable to the end user.
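
To make “a common interface between container runtimes and storage providers” concrete, here is a deliberately simplified sketch. It is not the actual CSI specification (CSI is defined as a set of gRPC services); the class and method names below are hypothetical, and the point is only to show how a shared contract lets an orchestrator stay ignorant of vendor specifics:

```python
# A deliberately simplified illustration of a shared storage contract.
# This is NOT the real CSI spec (CSI is defined as gRPC services); it only
# shows why a common interface decouples orchestrators from storage vendors.
from abc import ABC, abstractmethod


class StoragePlugin(ABC):
    """Contract every storage provider implements."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        """Provision a volume and return its ID."""

    @abstractmethod
    def publish_volume(self, volume_id: str, node: str, target_path: str) -> None:
        """Make the volume available to a workload on a node."""


class ExampleArrayPlugin(StoragePlugin):
    """A hypothetical vendor implementation of the shared contract."""

    def create_volume(self, name, size_gb):
        return f"vol-{name}-{size_gb}g"

    def publish_volume(self, volume_id, node, target_path):
        print(f"attached {volume_id} to {node} at {target_path}")


def orchestrator_workflow(plugin: StoragePlugin):
    # The orchestrator only knows the contract, never the vendor specifics.
    vol = plugin.create_volume("db-data", 100)
    plugin.publish_volume(vol, node="worker-1", target_path="/var/lib/db")


orchestrator_workflow(ExampleArrayPlugin())
```

The value, as with NFS, is that every plugin speaking the same contract is interchangeable from the orchestrator’s point of view.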

The end goal for storage and applications is to offer simplicity and cross-vendor compatibility where storage consumption is common and functions are a pluggable component in Cloud Native environments. We feel that this is the first step in that direction.


The Business Case for Hyper-Converged Infrastructure


 

Earlier this year, Wikibon proposed a theory that “the optimum hybrid cloud strategy” is based on an integrated hyper-converged infrastructure (HCI) as opposed to the traditional enterprise architecture built on separate servers and separate traditional SAN.

To test and validate their premise, Wikibon experts performed a three-year comparison of a traditional white box/storage array environment vs. a hyper-converged system. The hyper-converged system they selected was Dell EMC’s VxRack FLEX, which is based on ScaleIO software-defined storage. Through their research, they were able to analyze and compare costs across four main categories: initial investment, time-to-value, cost over time and ease of use.

While you can read all the details on the approach and methodology for comparing these two instances in the full report here, I’d rather jump right into the juicy stuff: the results!

Initial Investment

The initial investment for a traditional SAN solution was $734K versus $405K for a hyper-converged system. To give you the full picture, there are several reasons why HCI solutions are less expensive to purchase up front. Not only are the equipment costs lower (storage arrays are replaced with DAS, and specialized Fibre Channel networking and HBAs are no longer required), but most customers only buy what they need for the short term rather than over-purchasing infrastructure to support unknown future requirements, which is common when the infrastructure is intended to last for multiple years. That’s the power of HCI – it enables you to start small and grow incrementally as your demand for resources increases. But the savings and cost discrepancies didn’t stop there.

Time-to-Value

Practitioners who took part in deploying the solutions onsite found that it took 33 days to configure, install and test the traditional white box/storage array as opposed to the hyper-converged infrastructure which took a mere 6 days. The time-to-value for hyper-converged was then calculated to be 6x faster. This is one of the key benefits of an engineered system. IT no longer bears the burden of architecting the solution, procuring separate hardware and software from multiple sources, and installing and testing the configuration before migrating workloads to production. Instead, the system is delivered pre-integrated, pre-validated and ready for deployment.

Cost Over Time

In addition to the upfront costs, the three year comparison then showed maintenance and operational expenses increasing the total cost for a traditional storage array to $1.3 million while maintenance and operations costs for hyper-converged came out to a total of $865K over three years. The main driver behind the savings is that HCI solutions greatly simplify IT operations in a number of ways: HCI provides a more efficient environment for those responsible for managing and provisioning storage; the ease of provisioning storage enables application developers to do more; and automation, streamlined provisioning, and higher reliability saves IT staff time on activities such as data migrations and capacity planning. There are also substantial savings from lower operating expenses such as power, cooling, and data center space. Other staggering numbers from the study included the following:

  • Traditional solutions are 47% more expensive than a hyper-converged solution
  • Storage costs for the traditional approach are 100% higher
  • Overall, a hyper-converged solution is $400K less expensive

Ease of Use 

In this last category, all practitioners interviewed, except one, agreed that the hyper-converged solution was easy to upgrade (more so than the traditional architecture), easy to manage and that it performed well. This makes sense given that HCI is typically managed from a central console that controls both compute and storage and automates many traditionally manual functions, helping to reduce the number of tools that have to be learned and used. HCI systems also have full lifecycle management – meaning they are fully tested, maintained, and supported as one system over the entire lifespan. As a result of these efficiencies, people costs for HCI are much lower than with the traditional approach. In fact, over a three-year time span, people costs are 600% higher for traditional SANs.

So, when looking at the TCO breakdown, it looks like hyper-converged is the way to go over a traditional storage approach. HCI offers lower equipment and operational costs, lower latency and faster bandwidth infrastructure, as well as a “single hand to shake and a single throat to choke” for vendor support and service needs.
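
As a quick sanity check, the report’s headline figures quoted above can be compared directly. A minimal sketch follows; the inputs are the numbers from the sections above (in thousands of dollars), and small differences from the rounded bullet points are attributable to rounding in the published summary:

```python
# Arithmetic on the Wikibon figures quoted above (in thousands of dollars).
# Minor differences from the rounded summary bullets are expected.
traditional = {"initial": 734, "three_year_total": 1300, "deploy_days": 33}
hci = {"initial": 405, "three_year_total": 865, "deploy_days": 6}

upfront_savings = traditional["initial"] - hci["initial"]                        # 329 -> ~$329K up front
three_year_savings = traditional["three_year_total"] - hci["three_year_total"]   # 435 -> roughly the ~$400K cited
speedup = traditional["deploy_days"] / hci["deploy_days"]                        # 5.5 -> reported as ~6x

print(f"Up-front savings: ${upfront_savings}K")
print(f"Three-year savings: ${three_year_savings}K")
print(f"Deployment speed-up: {speedup:.1f}x")
```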

Dell EMC’s VxRack FLEX is a fully tested, pre-configured, hyper-converged system that meets all of these criteria while offering extreme scalability (start small and grow to 1,000+ nodes). It truly provides a turnkey experience that enables you to deploy faster, improve agility and simplify IT. You can download the report here to read about all the findings from the research or you can skip right on over to the Dell EMC VxRack website to start exploring your options.


With VMware Embedded, OEMs Have Even More Options

$
0
0
EMC logo

What motivates you to get up every day? For me, the answer is pretty simple, I’m inspired by the work of our customers and partners. I’m talking innovative solutions that have the power to radically transform lives and improve business productivity – whether it’s an MRI machine in a hospital, a way for farmers to measure and improve crop growth, a smart building that is responsive to occupant needs or an industrial-strength PC helping to run a factory line more smoothly. At the end of the day, it’s all about technology simplifying things, improving lives and making business more efficient.

In fact, the whole focus of the Dell EMC OEM division is to help our customers and partners bring their innovative solutions to market as quickly as possible. That’s precisely why Dell EMC OEM is the first (and only) tier-one technology vendor to offer VMware Embedded.

VMware Embedded – a Way to Expand Into the Virtual Space

VMware Embedded is a pre-configured, turnkey virtualization solution that offers a new way for OEMs to increase revenue and expand into the virtual space. In a nutshell, this offering, with a resellable licensing model, enables OEMs to sell more products, more efficiently. Additionally, customers have the option to purchase VMware Embedded with Dell EMC hardware, such as PowerEdge servers, or as a standalone product to streamline their supply chain.

Why Virtualization Matters

We have all seen the trend of businesses tapping into virtualization to gain longer solution life cycles, take advantage of cloud agility, reduce downtime and improve security. As a result, virtualization has become a key priority for a majority of enterprise solutions, built by OEMs and ISVs.

Now with VMware Embedded, customers have the option to run it as a virtual appliance, install on a physical appliance or use in Dell EMC OEM’s datacenter as a managed service offering. This maximizes efficiency and lifecycle across the OEM’s solution, ultimately benefiting the end customer.

Why VMware Is Great for OEMs

As an OEM, you can deliver VMware updates as a value-add service – and at a release cadence that matches your timelines – while serving as the single point of contact for support. To help decrease cost of goods sold and speed time-to-revenue, Dell EMC OEM will work with you to validate designs, align go-to-market strategies and create roadmaps. OEMs can also choose from a wide range of licensing and pricing models, including OEM sublicensing and global distribution rights, without multiple contracts.

For me, this is the main benefit of VMware Embedded – it enables our OEMs to provide quality support of VMware across all deployment models, offering advantages to customers in multiple markets, including manufacturing, video surveillance, communications, gaming, financial services, energy, healthcare, storage and networking.

But don’t take my word for it – this is what Darren Waldrep, vice president of strategic alliances at Datatrend, a Dell EMC OEM Preferred Partner, had to say: “Dell EMC and VMware’s embedded offering is a competitively priced solution that we are excited to offer our customers. VMware Embedded creates a much easier route to market for Dell EMC OEM partners and integrators, like ourselves.” Waldrep specifically highlighted Dell EMC’s and VMware’s “best of breed technologies” and our commitment to truly enabling the channel to deliver the best pricing and experience for the end customer.

As we move deeper into the era of digital transformation, speed will be imperative – no matter the industry. Understanding the unique needs of customers and helping them adapt to a constantly changing market is what will allow you as an OEM to thrive.

Check out the datasheet or visit Dell EMC OEM in the Dell EMC booth #400 at VMworld, Aug. 27-31 in Las Vegas. We hope to see you there!

PowerEdge at VMworld 2017: What Happens in Vegas Rules Your Data Center

It’s almost time for VMworld 2017. Less than a year ago, Dell, EMC and VMware combined to form the largest privately held technology company in the world. Recently, Dell EMC launched the 14th generation of PowerEdge servers – or, put another way, the FIRST generation of Dell EMC PowerEdge servers.

The shared vision of Dell EMC and VMware is realized in PowerEdge servers; what we call the bedrock of the modern data center. Whether it’s traditional applications and architectures, Hyper-Converged or Cloud, PowerEdge is the singular platform that forms your entire data center foundation.

At VMworld 2017 we’ll be showcasing our full PowerEdge portfolio and you’ll see how PowerEdge is at the heart of data center infrastructure.

There are a number of exciting sessions highlighting the integration of Dell EMC and VMware.

State of the Union: Everything Multi-Cloud, Converged, Hyper-Converged and More!

You have questions? Chad Sakac has answers! The technology landscape is continuously evolving, making it a challenge for IT leaders to keep up. Chad will share Dell Technologies’ perspective on multi-cloud, converged and hyper-converged infrastructure, data analytics, and more. (Session: UEM3332PUS)

Deliver Real Business Results through IT Modernization

As businesses embark on new digital transformation initiatives, we’re seeing them simultaneously transform their core IT infrastructure. You will learn how Dell EMC and VMware together are driving advancements in converged systems, servers, storage, networking and data protection to help you realize greater operational efficiencies, accelerate innovation and deliver better business results. (Session: STO3333BUS)

Modern Data Center Transformation: Dell EMC IT Case Study

In this session, learn how Dell EMC IT is leveraging VxRail/Rack, ScaleIO and more to modernize its infrastructure with flash storage, converged and hyper-converged systems and software defined infrastructure, where all components are delivered through software. (Session: PBO1964BU)

Workforce Transformation: Balancing Tomorrow’s Trends with Today’s Needs

The way we work today is changing dramatically, and organizations that empower their employees with the latest technologies gain a strategic advantage. In this session, you will learn how to modernize your workforce with innovative new devices and disruptive technologies designed to align with the new ways that people want to work and be productive. (Session: UEM3332PUS)

Modernizing IT with Server-Centric Architecture Powered by VMware vSAN and VxRail Hyper-Converged Infrastructure Appliance

With server-centric IT, data centers can offer consumable services that match the public cloud yet offer better security, cost efficiency, and performance. This session will discuss the benefits of x86-based server-centric IT and the convergence of technologies and industry trends that now enable it. The session will also explain how customers can realize server-centric IT with VMware vSAN and the Dell EMC VxRail appliance family, along with detailed analysis demonstrating these claims. (Session: STO1742BU)

The Software-Defined Storage Revolution Is Here: Discover Your Options

During the past decade, web-scale companies have shown how to operate data centers with ruthless efficiency by utilizing software-defined storage (SDS). Now, enterprises and service providers are joining the SDS revolution to achieve radical reductions in storage lifecycle costs (TCO). Learn how Dell Technologies’ SDS portfolio is revolutionizing the data center as we highlight customer success stories of VMware vSAN and Dell EMC ScaleIO. (Session: STO1216BU)

Come spend some time with Dell EMC experts to see how we can work together to help you achieve your business and IT initiatives. And be sure to follow us at @DellEMCServers for daily updates, on-site videos, and more.

 

Moving Your Applications to the Cloud: A Multi-Level Strategy

There’s more than one way to transform your organization’s legacy applications to achieve the agility, speed and customer service level that today’s IT operations require. In fact, Dell IT has unveiled a range of options to begin moving our legacy applications to the cloud.

We have created three IT services to give our business developers a choice of how much or how little they want to upgrade their apps to add cloud capabilities via Platform-as-a-Service, Infrastructure-as-a-Service and VM-as-a-Service.

Before I describe the features of our new service offerings, let me provide some background on how we came up with our strategy.

A step-by-step approach

First of all, let me point out that cloud is not a destination but a strategy for transforming people, processes and technology. The service offerings we created are how we are modernizing our applications as part of our IT transformation.

IT Transformation Stages - Applications

Like most organizations grappling with digital transformation, we have thousands of legacy applications still tied to traditional infrastructure. Although some 70 percent of our infrastructure is virtualized, that doesn’t mean our applications are in the cloud.

As part of our cloud strategy, we are striving to transform those applications and leverage modern data center technologies such as software-defined storage, Flash, networking and security; automation; and self-service capabilities. This includes both on-premises and off-premises services. But transforming all of our applications at once isn’t a practical or even an efficient approach.

Instead, we’ve decided to link our cloud transformation efforts to our ongoing end-of-service-life initiative. Typically, we refresh an app when some piece of our infrastructure stack has reached the end of its service life—meaning an old hardware platform or an old software platform is due to be replaced.

As we replace aging platforms, we are investing in next-generation infrastructure featuring all-flash storage, scale-out design and software-defined components. To that end, we are now reviewing each app slated for refresh to determine the best way to move it to the cloud.

Transforming an app to the cloud can be an involved process, especially if it must be rewritten to comply with the cloud native framework. In recognition of the fact that not all app developers are ready to take on that change, we decided to create a range of cloud-enabling service offerings for them to choose from.

Infrastructure as a Service

The first option for transforming legacy apps is Infrastructure as a Service, which provides a sort of wholesale option for application developers who want to do everything themselves and not worry about the hardware. IT gives users compute, storage and network services, and they create the templates and services they need to build their own applications.

This approach gives app developers complete control of how their app is being built without having to maintain their own infrastructure. We are taking care of the hardware and they control their app development environment with little reliance on IT.  We are essentially giving them the ability to do what they want to do.
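
To make “self-service infrastructure” concrete, here is a minimal sketch of what a developer’s request against an IaaS catalog could look like. The endpoint URL, token and payload fields are hypothetical placeholders for illustration only, not an actual Dell IT or vRealize API.

```python
# Minimal sketch of a self-service IaaS request.
# The endpoint, token, and payload fields are hypothetical placeholders,
# not a real Dell IT or vRealize API.
import json
import urllib.request

IAAS_API = "https://iaas.example.internal/api/v1/machines"  # hypothetical endpoint
TOKEN = "REPLACE_WITH_SERVICE_TOKEN"                         # hypothetical auth token

def request_vm(name: str, cpus: int, memory_gb: int, disk_gb: int) -> dict:
    """Ask the IaaS layer for compute, storage and network; return its response."""
    payload = {
        "name": name,
        "cpu": cpus,
        "memoryGB": memory_gb,
        "disks": [{"sizeGB": disk_gb}],
        "network": "default",  # the platform owns the physical details
    }
    req = urllib.request.Request(
        IAAS_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # The developer specifies what they need; IT's IaaS layer decides where it runs.
    print(request_vm("app-dev-01", cpus=2, memory_gb=8, disk_gb=50))
```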

VM as a Service (or IaaS+)

Our next service offering gives developers a middle ground between PaaS and IaaS. With VM as a Service (or IaaS+), we provide users access to blueprints and templates through a service catalog from which they select the services they want. We have chosen to use the VMware Validated Design – a pre-defined reference architecture for building and deploying a software-defined data center – to accelerate this service.

With this service, we are cloud enabling the existing app without rewriting it. VMaaS decouples the app from the actual physical infrastructure and moves it to a more cloud-enabled platform—a modern data center infrastructure which includes templates and automation.

With VMaaS, we offer users different web, app and database combinations. Think of it as something of a mix-and-match menu: You want web servers? How many and what type? We offer red, green and blue. You want an app server? We have yellow, orange and purple. You want databases? Fine, we have black, brown and white.

Once the developer has placed their order, we knit them together and say, “Here’s your framework for installing your application.”

VMaaS not only offers app developers the ability to provision infrastructure but also gives them the tools and templates to spin up or spin down their environments as they need to. It provides cradle-to-grave management of their virtual infrastructure.
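
One way to picture that mix-and-match menu is a catalog of pre-approved building blocks that get knitted into a single blueprint. The tier names and sizes below are illustrative stand-ins, not the actual entries in our service catalog – a rough sketch in Python:

```python
# Illustrative sketch of a VMaaS-style catalog; the tier options are stand-ins,
# not the actual blueprint names in Dell IT's service catalog.
CATALOG = {
    "web": {"red": {"cpu": 2, "memory_gb": 4},
            "green": {"cpu": 4, "memory_gb": 8},
            "blue": {"cpu": 8, "memory_gb": 16}},
    "app": {"yellow": {"cpu": 4, "memory_gb": 8},
            "orange": {"cpu": 8, "memory_gb": 16},
            "purple": {"cpu": 16, "memory_gb": 32}},
    "db":  {"black": {"cpu": 8, "memory_gb": 32},
            "brown": {"cpu": 16, "memory_gb": 64},
            "white": {"cpu": 32, "memory_gb": 128}},
}

def build_blueprint(web: str, web_count: int, app: str, db: str) -> dict:
    """Knit the selected catalog items into one deployable blueprint."""
    return {
        "webServers": [CATALOG["web"][web]] * web_count,
        "appServer": CATALOG["app"][app],
        "database": CATALOG["db"][db],
    }

# "Here's your framework for installing your application."
blueprint = build_blueprint(web="green", web_count=3, app="orange", db="black")
print(blueprint)
```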

Platform as a Service

The third option for transforming legacy apps is Platform-as-a-Service, the cloud-native option in which the application is written to take full advantage of cloud environments by eliminating tight coupling between the application and the underlying infrastructure. This service uses Pivotal Cloud Foundry (PCF), a cloud platform for deploying modern cloud-native apps.

This service is for developers who want the true cloud experience—those who want to write a new app or rewrite their existing app in a way that conforms to cloud-native design standards. The result is a lighter-weight app that can readily move on and off premises.
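
To give a flavor of what “conforms to cloud-native design standards” means in practice, here is a toy sketch of a stateless app that takes all of its configuration from the environment – the twelve-factor pattern that platforms such as Pivotal Cloud Foundry expect (PCF, for example, hands an app its listening port via the PORT environment variable). The app itself is illustrative, not part of any Dell IT service:

```python
# Toy twelve-factor-style app: stateless, config from the environment,
# binds to the port the platform assigns (Cloud Foundry passes it as PORT).
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

GREETING = os.environ.get("GREETING", "hello from a cloud-native app")  # config via env
PORT = int(os.environ.get("PORT", "8080"))                              # platform-assigned

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = GREETING.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # No local state, no hard-coded endpoints: the instance can be scaled,
    # restarted, or moved on/off premises without code changes.
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```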

PaaS is a sophisticated platform where much of the process is already done for you. Some developers may find it too restrictive or too advanced. That’s where our other service options come in. By linking our cloud-enablement effort to our end-of-service-life processes and offering various levels of app transformation services, we have begun to gradually move our apps to the cloud. And we are doing it while giving our business developers the choice of how they make that move.

Modernizing our applications is just one important piece of our ongoing IT and Digital transformation, which also requires modernizing our infrastructure and adopting people and process changes to transform our IT operations.

Check out Paul DiVittorio’s session, Modern Data Center Transformation: Dell IT Case Study (ST01964BU), August 31, 12:00-1:00 p.m., at VMworld 2017, August 27-31, 2017, at the Mandalay Bay Hotel and Conference Center in Las Vegas.

Author information

Paul Divittorio
Director, Cloud Infrastructure, EMC IT

The post Moving Your Applications to the Cloud: A Multi-Level Strategy appeared first on Dell IT Blog.

The 6 Myths About Servers That Almost Everyone Believes

“How long do we use our servers?”

“Until they die.”

That was a real conversation I had with another system admin during my first week at a new job in 1998. Google “how long should you go before refreshing servers” and you will find that the topic is still debated. But, why?

When you buy something that isn’t disposable, you own it for its useful life. Your goal is to get the most out of it before you have to replace it. A car is a great example. You buy it and plan to drive it for as long as possible. It costs a lot upfront and you need to recoup that cost, right? But, how long before an old car becomes too expensive to maintain compared to buying a new one? Repair cost is the key variable for most people to consider. But, if you commute an hour each way to work, you might factor in the increased fuel efficiency of a new car. But if you don’t drive much, a couple of extra miles per gallon is pretty meaningless. On the other hand, to the owner of a taxi company in New York City, a few extra miles per gallon could save millions.

Twenty years ago, most companies thought of technology as a cost to minimize and tried to squeeze every last ounce out of it. Today, organizations of all shapes, sizes, and structures are transforming and rely heavily on technology. For a business to transform, it must embrace doing things differently. An easy place to start is by evaluating your company’s server refresh guidelines. Some companies have learned to maximize their server investment and opt for faster refreshes. But, as the IDC report Accelerate Business Agility with Faster Server Refresh Cycles shows, for many companies, old habits are hard to break. Let’s take a look at the myths behind an average server refresh cycle of 5.8 years when it’s clearly beneficial to refresh more often.

Myth 1: To Get the Most Value From a Server, Use It as Long as Possible

Not true. It actually costs a company more to keep existing servers than to refresh them. A lot more. Companies that refresh servers every 3 years have operating costs 59% lower than companies that refresh their servers every 6 years.

Myth 2: The Cost to Acquire and Deploy a New Server Is More Than Just Keeping the Old One

Wrong. The cost of operating a server in years 4-6 increases to 10 times the initial server acquisition cost. The reason is that the cost increases are not linear; they jump significantly as a server ages past 3 years. In fact, by year 6, a server requires 181% more staff time and imposes productivity costs 447% higher than in year 1.

Myth 3: Upgrading Servers Is a Cash-Flow Drain

It’s actually the opposite. Even when you factor in the cost of acquiring and deploying new servers, a company that refreshes twice every 6 years instead of just once will see a 33% lower net cash outlay. And if servers are a significant investment for your business (e.g. 300 servers), that could translate to savings of up to $14.6 million. The efficiencies of new servers and the benefits of consolidation (e.g. replacing 300 servers with 247 servers) make these savings possible.
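
A back-of-the-envelope sketch shows the shape of that comparison. The dollar figures below are invented placeholders chosen only to illustrate the structure of the math; the 33% and $14.6 million figures above come from the IDC report, not from this sketch:

```python
# Back-of-the-envelope comparison of one 6-year server lifecycle versus two
# 3-year lifecycles. All dollar amounts are illustrative placeholders, not
# figures from the IDC report; only the structure of the comparison matters.
ACQUIRE = 7_000  # assumed purchase + deployment cost per server
OPEX_BY_AGE = [1_000, 1_200, 1_500, 4_000, 6_000, 9_000]  # assumed yearly operating
                                                           # cost, years 1-6 (costs
                                                           # jump sharply after year 3)

def six_year_keep() -> int:
    """Buy once, run the same server for 6 years."""
    return ACQUIRE + sum(OPEX_BY_AGE)

def three_year_refresh() -> int:
    """Buy twice, but only ever pay years 1-3 operating costs."""
    return 2 * ACQUIRE + 2 * sum(OPEX_BY_AGE[:3])

old, new = six_year_keep(), three_year_refresh()
print(f"keep 6 years: ${old:,}  refresh at 3 years: ${new:,}  "
      f"savings: {100 * (old - new) / old:.0f}%")
```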

Myth 4: It Takes Too Long to Realize the Benefits of Refreshed Servers

Incorrect. Remember, the cost of a server starts to increase rapidly in year 4. But if you replace it instead, you avoid those costs. The cumulative benefits of user productivity time savings, IT staff time savings, and IT infrastructure cost savings pay for that new server in less than a year. To be more precise, payback occurs in about 9 months.

Myth 5: Newer Servers Have No Impact on Increasing Revenue

False. But, this concept can be a little difficult to understand without hearing from the companies who increased their revenue. IDC highlights two different companies in their report: a logistics company and an educational services company. In both cases, the greater agility, capacity, and shorter time to delivery with the new servers helped them win additional business. IDC calculated the additional revenue per server at $123,438.

Myth 6: Buying Servers Is a Capital Expense

Not anymore. There are traditional leasing options as well as new innovative payment options such as Pay As You Grow and Provision and Pay from Dell Financial Services.

Don’t let these myths keep your business from moving forward. The new generation of PowerEdge servers is more scalable and agile, performs better, is more power-efficient, and can help you consolidate more than the previous generation released about 3 years ago. And if you embrace a 3-year server refresh cycle instead of a 6-year one, you can take advantage of these innovations and say goodbye to higher costs.

Dell EMC Recognized by Microsoft as Its #1 Deployment Partner for Windows 10

Windows 10 is on track to be the fastest-growing Windows operating system in history. I’ve heard it referred to as “the best of Windows 7 and 8,” “the most secure Windows,” and the “best PC operating system ever.” That’s pretty impressive.

You would think everyone would want that, especially big and successful companies looking for ways to empower their employees to move faster than the competition. But numbers showed that adoption of Windows 10 had been slow in enterprise accounts. This might be because many customers only recently completed their Windows 7 migration or were discouraged by the usability of Windows 8.

To help companies past this hurdle, Microsoft and Dell EMC set out specifically to work with enterprise accounts and help them understand the benefits of using Windows 10. We also wanted to help clients understand the migration path from Windows 7 to Windows 10 (which is much easier, I might add, than moving from Windows XP to Windows 7 — especially around application compatibility).

Dell EMC Services led this effort by performing customer workshops, proofs of concept and security briefings to provide a roadmap for both migration and ongoing update planning. You see, Windows 10 creates an environment where quality and feature updates are pushed to PCs multiple times a year, allowing your teams to stay current on new features.

This approach and hard work were acknowledged by Microsoft this month when it recognized Dell EMC as its #1 Enterprise Deployment Partner for Windows 10!

“Dell has been a fantastic Windows 10 advocate this year, driving more Windows 10 Proof of Concept and Pilot projects through the Windows Accelerate Program than any other partner. While this by itself is a significant accomplishment, the Dell team has also worked to enable even more Windows 10 customer deployments through the launch and sale of great new Windows 10 devices, and even a new PC-as-a-Service offer that hundreds of other partners can use,” said Stuart Cutler, Global Director of Windows Product Marketing.

“Additionally, Dell has pushed the envelope around Windows-as-a-Service efficiencies, application compatibility testing, and new Windows Enterprise E5 security service offers.” He said, “Dell brings a lot of great talent and creativity to the Windows 10 ecosystem. Their energy and innovations help not just Dell and Microsoft, but many other partners around the world, too.”

Erica Lambert, Dell EMC Vice President, Global Channel Services Sales accepts the Microsoft Partner Network Windows & Devices Partner of the Year award from Ron Huddleston, Microsoft Corporate Vice President, One Commercial Partner Organization

We believe in Windows 10 and our new modern devices as the foundation for workforce transformation and we have world class expertise and resources to help our customers around the world.

Want proof? I’m proud to say we won a total of four Partner of the Year awards for our work with customers on Windows 10 deployments. In addition to the prestigious Windows and Devices Partner of the Year, US Windows Marketing named us the Enterprise Deployment Partner of the Year, US EPG named us Windows 10 Consumption Partner of the Year, and the global Consulting & SI Partner team named us the Windows Deployment Partner of the Year.

The Dell EMC US Support & Deployment team celebrating being recognized by Microsoft’s US Windows Marketing team as the Enterprise Deployment Partner of the Year.

We started by helping our customers with Windows 10 migrations, and now, at their request, we’re taking it to the next level with two new managed services offerings: Windows-as-a-Service (WaaS) to manage the continuous updates, and enhanced security through our Windows Defender Advanced Threat Protection program.

So, if you are looking for help to get to Windows 10, contact your Dell EMC representative and let’s work together to bring these expert deployment practices to work for you. Let us be your Deployment Partner of the Year too!

Our Services Journey and Our Promise

 Announcing the Next Step in our Enterprise Support & Deployment Services Journey

There is really nothing better than hearing firsthand about the successes and challenges our customers face and talking about how, together, we can help them accelerate their IT transformation.

When we announced at Dell EMC World in May that we would be aligning our services portfolio under the existing Dell brands of ProSupport and ProDeploy, I heard a clear message from customers all over the world.  “Your team provides us excellent service. You have great experts who help us solve issues and technologies that help us avoid problems. How will these changes impact your service programs and my organization?”

With that in mind, we knew we had our work cut out for us when we began the services transformation journey to integrate our portfolios.  Beginning today, we will be offering the ProSupport Enterprise Suite on Dell EMC products and solutions.

What does this mean to our customers?

  • Legacy EMC customers will be able to receive the same level of support they receive today under the unified ProSupport brand portfolio. For example, if you traditionally purchased the Premium support option on your legacy EMC products, you will now purchase ProSupport for Enterprise with the 4-hour mission critical option.
  • By leveraging existing processes, the same technical experts, and proactive technologies such as Secure Remote Services (ESRS) and MyService360, legacy EMC customers will continue to experience the same best-in-class service.
  • For the first time ever, legacy EMC customers will now have the option to select ProSupport Plus for Enterprise. This new level of support provides proactive and predictive support for mission critical systems.
  • For our legacy Dell customers, one impact you will notice is that we are rebranding the existing Technical Account Manager (TAM) to now be called a Technology Service Manager (TSM). There will be no change to the level of service that you are receiving from your TSM.

And there is nothing we take more seriously than our customers’ experience while helping them navigate their IT transformations from core to edge to cloud. Our global customer satisfaction score of 95% for ProSupport Plus for Enterprise shows we’re already on the right track.

Having said all this, I can tell you this is just the first step in our journey. In our next phase we will be working hard to integrate our contact centers, consolidate our support contract management, and unify our deployment services under a single brand for a single experience…but most of all, our promise is that we will never stop listening to our customers.

Our services transformation has one primary objective – to better enable customers to realize theirs. We do that by providing a seamless and consistent services engagement across all Dell EMC products, while continuing to evolve our best-in-class service features, processes and execution.

I look forward to connecting with customers during every step of our journey to help us adjust and optimize our plans to meet the evolving needs of customers and their unique IT transformations.

Forging New IT Skills With Cloud Foundry

One of the external technology inflections we’ve been actively pursuing in ERAA/Office of the CTO has been “Cloud Foundry” – a container-based architecture running apps in any programming language over a variety of cloud service providers, supporting the full application development lifecycle, from initial development through all testing stages to deployment. Yes, this is what going Cloud Native means!

We believe this is one of the next-generation platforms coming online today. We believed in it so much that we engaged with the Cloud Foundry Foundation at the Platinum level (along with Pivotal and VMware) three years ago. Our Global CTO John Roese serves as the Cloud Foundry Foundation Board Chairman. Take a look at John’s video to hear more about the rapid growth of Cloud Foundry and its industry impact.

 

Cloud Foundry Summit Silicon Valley in July of this year brought together developers, customers, and corporate members for an exciting week of sessions, trainings, speakers, and announcements. We also had the chance to talk with John and Abby Kearns, Executive Director of the Cloud Foundry Foundation, following the Summit to hear firsthand about the significant accomplishments made during the Summit – including the Launch of the Cloud Foundry Certified Developer Program!

 

Dell EMC is proud to be one of the leading corporate sponsors to add these courses to our education portfolio, providing knowledge and skills that are required for future technology success across the ecosystem.

The Cloud Foundry courses now available are:

Free Community Courses

New to Cloud Foundry? Take a look at these foundational community courses, including:

  • Cloud Foundry for Beginners: From Zero to Hero
  • Microservices on Cloud Foundry: Going Cloud Native
  • Operating a Platform: BOSH and Everything Else

Free MOOC: Introduction to Cloud Foundry and Cloud Native Software Architecture

What you’ll learn in this introductory course:

  • An introduction to cloud-native software architecture, as well as the Cloud Foundry platform and its components, distributions, and what it means to be Cloud Foundry certified.
  • How to build runtime and framework support with buildpacks.
  • Learn “The Twelve Factor app” design patterns for resiliency and scalability.
  • Get the steps needed to make your app “cloud-native”.
  • Understand how the components of Cloud Foundry combine to provide a cloud-native platform.
  • Study techniques and examples for locating problems in distributed systems.

After completing this course, you can also obtain a Verified Certificate from edX (a massive open online course provider) for $99.

Cloud Foundry for Developers

  • This self-paced eLearning course provides developers with a comprehensive, hands-on introduction to the Cloud Foundry platform; students will use CF to build, deploy and manage a cloud-native microservice solution.
  • After this course, you’ll be well prepared for the Cloud Foundry Certified Developer Exam!
  • And there’s a discount if you do the bundled registration for the course and the exam.

Check out these courses and take advantage of the educational opportunities within the Cloud Foundry Community – be an early practitioner in an emerging technology that will shape future cloud services, whether you are a seasoned IT professional or a student looking for the next competitive skills advantage.

One more thing – what’s a technology blog without a quote from Einstein? This one seems appropriate for today’s discussion:

 Education is what remains after one has forgotten what one learned in school.

Interested in learning more about our global ERAA programs or have ‘horizon’ ideas you’d like to share? Drop me an email, and let’s talk!

 

Is That IaaS or Just Really Good Virtualization?

I was recently pulled into a Twitter conversation spawned by a blog post Paul Galjan wrote concerning Microsoft Azure Stack IaaS and its use cases, in which he correctly pointed out that Azure Stack is not just a VM dispenser. The question was posed as to what exactly constitutes IaaS vs. virtualization (a VM dispenser) and where each fits into the hierarchy of “cloud.”

Let’s start out by clearly defining our terms:

Virtualization in its simplest form is leveraging software to abstract the physical hardware from the guest operating system(s) that run on top of it. Whether we are using VMware, XenServer, Hyper-V, or another hypervisor, from a conceptual standpoint they all serve the same function.

In Enterprise use cases, virtualization in and of itself is only part of the solution. Typically, there are significant management and orchestration tools built around virtualized environments. Great examples of these are VMware’s vRealize Suite and Microsoft’s System Center, which allow IT organizations to manage and automate their virtualization environments. But does a hypervisor combined with a robust set of tools an IaaS offering make?

IaaS. Let’s now take a look at what comprises Infrastructure as a Service.  IaaS is one of the service models for cloud computing, and it is just that – a service model.  NIST lays it out as follows:

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).

You may be looking at that definition and asking yourself, “So, based on that, virtualization with a robust set of management tools is capable of delivering just that, right?” Well, sort of, but not really. We must take into account that the service model definition must also jibe with the broadly accepted “Essential Characteristics” of cloud computing:

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

Essentially all of these core tenets of “cloud” are capabilities that must be built out on top of existing virtualization and management techniques.  IaaS (as well as PaaS for that matter) caters to DevOps workflows enabling Agile code development, testing and packaging, along with release and configuration management, without regard for the underlying infrastructure components. This is clearly not the same as virtualization or even “advanced virtualization” (a term I hear thrown around every so often).
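
One way to see the gap: the hypervisor gives you a “create a VM” primitive, while IaaS wraps that primitive with the self-service, pooling and metering behaviors listed above. A deliberately simplified sketch, with every class and limit hypothetical:

```python
# Simplified sketch of what an IaaS layer adds on top of a bare "create VM"
# primitive. All names and limits here are hypothetical illustrations, not a
# product API.
import datetime

def hypervisor_create_vm(name: str, cpus: int, memory_gb: int) -> dict:
    """Stand-in for what virtualization alone gives you: a VM, created on request."""
    return {"name": name, "cpus": cpus, "memory_gb": memory_gb}

class IaaSService:
    """Adds on-demand self-service, resource pooling, and measured service."""

    def __init__(self, pool_cpus: int):
        self.pool_cpus = pool_cpus  # resource pooling: shared capacity
        self.usage_log = []         # measured service: metering records

    def request_vm(self, tenant: str, name: str, cpus: int, memory_gb: int) -> dict:
        if cpus > self.pool_cpus:
            raise RuntimeError("capacity exhausted")      # elasticity still has limits
        self.pool_cpus -= cpus
        vm = hypervisor_create_vm(name, cpus, memory_gb)  # same primitive underneath
        self.usage_log.append((tenant, name, cpus, datetime.datetime.now()))
        return vm

svc = IaaSService(pool_cpus=64)
print(svc.request_vm("team-a", "build-agent-01", cpus=4, memory_gb=16))
```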

So then how does that relate back to the conversation concerning Azure Stack IaaS and VM dispensers? For that we need to look at how it plays out in the data center.

Building a virtualization environment has really become table stakes as a core IT function in the enterprise space. There are plenty of “DIY” environments in the wild, and increasingly over the past few years we’ve seen platforms in the form of converged and hyper-converged offerings that streamline virtualization efforts into a turnkey appliance for hosting highly efficient virtualized environments – Dell EMC’s VxBlock, VxRack, and VxRail being leaders in this market. That said, they are not “IaaS” in a box – they are in essence, and by no means do I intend this as derogatory, “VM dispensers,” albeit highly advanced ones.

For customers that want to move beyond “advanced virtualization” techniques and truly embrace cloud computing, there are likewise several possibilities. DIY projects are certainly plausible, though they have an extremely high rate of failure – setting up, managing, and maintaining homegrown cloud environments isn’t easy, and the skill sets required aren’t cheap or inconsequential. There are offerings such as Dell EMC’s Enterprise Hybrid Cloud, which builds on top of our converged and hyper-converged platform offerings, adding the core capabilities that make cloud, cloud. For customers fundamentally concerned with delivering IaaS services on-premises from a standardized platform managed and supported as a single unit, Enterprise Hybrid Cloud is a robust and capable cloud.

So, what about something like the Dell EMC Cloud for Microsoft Azure Stack?  In this case, it’s all about intended use case. First let’s understand that the core use cases for Azure Stack are PaaS related – the end goal is to deliver Azure PaaS services in an on-prem/hybrid fashion. That said, Azure Stack also has the capability to deliver Azure consistent IaaS services as well.  If your organization has a requirement or desire to deliver Azure based IaaS on-prem with a consistent “infrastructure as code” code base for deployment and management – Stack’s your huckleberry.  What it is not is a replacement for your existing virtualization solutions.  As an example, even in an all-Microsoft scenario there are capabilities and features that a Hyper-V/System Center-based solution can provide in terms of resiliency and performance that Azure Stack doesn’t provide.

In short – virtualization, even really, really, good virtualization, isn’t IaaS and it’s not even cloud.  It’s a mechanism for IT consolidation and efficiency. IaaS on the other hand builds on top of virtualization technologies and is focused on streamlining DevOps processes for rapid delivery of software and business results by proxy.

In scenarios where delivery of IaaS in a hybrid, Azure-consistent fashion is the requirement, Azure Stack is an incredibly transformative IaaS offering.  If Azure consistency is not a requirement, there are other potential solutions in the forms of either virtualization or on-prem based IaaS offerings that may well be a better fit for your organization.

Going (Cloud) Native

Interoperability in computing environments is a repeating problem affecting multiple parts of an organization. When systems cannot communicate with one another, the result is slowdowns in every way, from technology performance (as data has to be converted from one system to another) to human processes (such as teams spending time in yet another meeting trying to devise workarounds for data translation challenges). Interoperability must emerge, because common formats to send and store data are a necessity. This isn’t a new topic or concept, but this time, it’s being applied to modern infrastructure.

It’s time to address this issue in DevOps. We want to do for storage and containers today what Sun’s network file system protocol (NFS) did for client/server storage in the 80s: enable performance, simplicity, and cross-vendor compatibility.

Happily, we’re in a position to participate in the process and the conversation around ensuring different pieces of the ecosystem fit together. The Cloud Native Computing Foundation, a division of the Linux Foundation, aims to help organizations build, deploy, and run critical applications in any cloud. The CNCF has under its umbrella other projects aligned with our goals, such as Kubernetes and Prometheus.

Today we recognize industry collaboration moving forward with major container and storage platforms. This initiative is called CSI, short for Container Storage Interface, and we are proud to have just introduced a first example implementation. We’re seeing immediate progress, as we expect CSI to be introduced as a CNCF project later this year, officially bringing storage into cloud-native environments. While there are other organizations and initiatives in this space, such as OpenSDS, we are fully supportive of CSI and are proud to be a part of the community of customers, container orchestrators and other vendors standing behind this initiative. Along with this, we see REX-Ray as a prospective CNCF project, bringing storage support to cloud-native workloads.

Why It’s Necessary

 In nearly every historical case, interoperability standards have emerged: common formats we use to send and store data. Specifically in the storage industry, we’ve even seen this time and again from SCSI, NFS, SMB, CIFS, and VVOLs. Since I’ve always been a fan of Sun, I’ll use NFS as my example. In 1984 Sun developed the NFS protocol, a client/server system that permitted (and still permits!) users to access files across a network and to treat those files as if they resided in a local file directory. NFS successfully achieved its goals of enabling performance, simplicity, and cross-vendor compatibility.

NFS was specifically designed with the goal of eliminating the distinction between a local and remote file. To a user, after the appropriate setup is performed, a file on a remote computer can be used as if it were on a hard disk on the user’s local machine.
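
That transparency is easy to show: once an NFS export is mounted (the mount point below is assumed purely for illustration), application code touches it with exactly the same calls it would use for a local directory.

```python
# Once an NFS export is mounted (e.g. mount -t nfs server:/export /mnt/shared,
# a path assumed here for illustration), application code can't tell it apart
# from local disk.
from pathlib import Path

data_dir = Path("/mnt/shared/reports")  # could be local disk or an NFS mount
data_dir.mkdir(parents=True, exist_ok=True)

report = data_dir / "q3.txt"
report.write_text("same open/read/write calls either way\n")
print(report.read_text())
```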

There’s plenty of architectural history to consider in how Sun achieved this, but the bottom line is: If you are a server, and you speak NFS, you don’t care who or what brand of server you’re talking to. You can be confident that the data is stored correctly.

Thirty years later, we technologists are still working to improve the options for data interoperability and storage. Today, we are creating cloud-computing environments that are built to ensure applications take full advantage of cloud resources anywhere. In these cases, the same logic applies as the NFS and server example above: If you have an application, and it needs fast storage, you shouldn’t have to care about what is available and how to configure it.

Storage and networking continue to be puzzle pieces that don’t always readily snap into place. We’re still trying to figure out if different pieces all work together in a fragmented technology ecosystem – but now it’s in the realm of containers rather than hard disks and OS infrastructure.

Ensuring Interoperability Is a Historical Imperative

As I wrote earlier, and to quote Game of Thrones, “winter is coming.” Whatever tenuous balance we’ve had in this space is being upended. Right now it’s the Wild West, where there are 15 ways to do any one thing. As an industry, cloud computing is moving into an era of consolidation where we combine efforts. If we want customers to adopt container technology, we need to make the technology consumable. Either we need to package it in a soup-to-nuts offering, or we must establish some common ground for how the various components interoperate. Development and Ops teams will then have the opportunity to argue about more productive topics, such as whether the pitchless intentional walk makes baseball less of a time commitment or whether it sullies the ritual of the game.

The obvious reason for us to drive interoperability by bringing storage to cloud native is that it helps the user community. After all, with a known technology in hand, we make it easier for technical people to do their work, in much the same way that NFS provided an opportunity for servers to communicate with storage. But here the container provider is the server, and the storage provider is the same storage we’ve been talking about.

The importance of ensuring interoperability among components isn’t something I advocate on behalf of {code}; it’s a historical imperative. Everyone has to play here; we need to speak a common language in both human terms and technology protocols. We need to create a common interface between container runtimes and storage providers. And we need to help make these tools more consumable to the end user.
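
As a purely conceptual illustration – the real CSI specification is a gRPC protocol and its actual interface differs – a common contract between container runtimes and storage providers boils down to a small set of operations that every vendor implements:

```python
# Conceptual sketch of a pluggable volume-driver contract, for illustration only.
# The actual Container Storage Interface (CSI) is defined as a gRPC protocol;
# the method names and semantics here are simplified stand-ins.
from abc import ABC, abstractmethod

class VolumeDriver(ABC):
    """What a container runtime needs from any storage provider."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        """Provision a volume and return an ID the runtime can reference."""

    @abstractmethod
    def attach(self, volume_id: str, node: str) -> None:
        """Make the volume reachable from the node that will run the container."""

    @abstractmethod
    def mount(self, volume_id: str, target_path: str) -> None:
        """Expose the volume inside the container at target_path."""

    @abstractmethod
    def delete_volume(self, volume_id: str) -> None:
        """Release the volume when the workload no longer needs it."""

# A runtime written against VolumeDriver doesn't care whether the plugin behind
# it talks to ScaleIO, vSAN, or a public cloud block store.
```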

The end goal for storage and applications is to offer simplicity and cross-vendor compatibility where storage consumption is common and functions are a pluggable component in Cloud Native environments. We feel that this is the first step in that direction.
