Channel: Blog | Dell

Adaptive IAM: On the Front Lines of Cyber Security


Like most technologies, Identity and Access Management (IAM) has been challenged by new business and IT trends that are causing serious disruptions in how we approach information security.  The exponential growth of digital identities, coupled with the increasing use of software as a service and mobile and cloud platforms, has made the traditional perimeter all but disappear.  As a result, legacy IAM tools that have been a security mainstay for decades are simply failing to keep up.

So how exactly do we protect the borderless enterprise?  As the saying goes, “Nothing Endures but Change” and to help navigate the current threat landscape, IAM solutions need to adapt as fast as the rapidly-changing threat scenarios.  Identities are at the front lines of the everyday battle for cyber security and IAM systems must become the front line of defense.

Next-Generation IAM

We’ve been talking a lot this year about the notion of an “anti-fragile” security system – the idea that security solutions must become stronger and smarter with each attack or disorder.  These solutions must be adaptable and intelligent to make detecting and responding to both current and future attacks a much quicker process.

In a recently released technology brief called “Adaptive IAM: Defending the Borderless Enterprise” <link>, we examine this concept for IAM.  The brief discusses how IAM must be reinvented to be more intelligent and adaptable in order to stay relevant in today’s hyper-extended IT environments.

Instead of guarding stationary perimeters, Adaptive IAM patrols a dynamic “situational perimeter” to help enforce security whenever and wherever users interact with corporate data and resources. With the rise of Advanced Threats and multi-vector attacks, gone are the days where trust can be established by a single successful log-on; trust must be continually verified and re-checked with each interaction between user and protected resource.

Adaptive IAM Principles

Adaptive IAM rests on four guiding principles:

  1. Identity is established via a rich user profile that helps spot significant deviations from “normal” behavior, which can often signal security problems.
  2. Identity and access controls must be risk-based to verify users while adjusting access controls based on the risk levels of each transaction/activity.
  3. Real-time analytics must be used to assess risk, creating the intelligence needed to distinguish good behavior from bad. This will require Big Data analytics to analyze vast amounts of data, assess risk, detect problems and interrupt users attempting unsafe activities.
  4. Consumer-level convenience must be the norm by making identity controls and analytics invisible to corporate end users.  Users are only disrupted if unacceptable activities or levels of risk are detected.
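The four principles above can be illustrated with a toy risk-scoring sketch. This is not RSA's actual engine; every field name, weight and threshold below is a hypothetical example of how deviations from a stored user profile might map to adaptive access decisions:

```python
# Illustrative sketch only: compare a login attempt against a stored
# "normal behavior" profile and map the resulting risk score to an
# adaptive control. All fields and thresholds are hypothetical.

def risk_score(profile, attempt):
    """Return a 0-100 risk score for a login attempt against a user profile."""
    score = 0
    if attempt["device_id"] not in profile["known_devices"]:
        score += 40  # unfamiliar device is a strong deviation signal
    if attempt["country"] != profile["usual_country"]:
        score += 30
    # Logins far outside the user's usual working hours add moderate risk.
    start, end = profile["active_hours"]
    if not (start <= attempt["hour"] <= end):
        score += 20
    return min(score, 100)

def access_decision(score, low=30, high=70):
    """Map a risk score to an adaptive control: allow, step-up, or deny."""
    if score < low:
        return "allow"      # invisible to the user (principle 4)
    if score < high:
        return "challenge"  # step-up authentication
    return "deny"

profile = {"known_devices": {"laptop-1"}, "usual_country": "US",
           "active_hours": (8, 18)}

# Familiar device, usual country, working hours: no friction.
print(access_decision(risk_score(profile, {"device_id": "laptop-1",
                                           "country": "US", "hour": 10})))
# New device from another country at 3 a.m.: high risk, deny.
print(access_decision(risk_score(profile, {"device_id": "phone-9",
                                           "country": "RO", "hour": 3})))
```

The point of the sketch is principle 4: the scoring is invisible, and the user only sees friction when the accumulated deviation crosses a threshold.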

Journey to Adaptive IAM

Going from the current state of IAM to this next generation will certainly be a journey – not only for customers, but for the vendor community as well.  We need to pave a smooth migration path for our customers, and while no one is 100% of the way there yet, advances are being made toward this IAM ideal.  Our recent launch of RSA Authentication Manager 8 was a big first step, and we’ve been hard at work evolving other parts of the RSA Identity and Access Management portfolio.  Today we announced several updates and critical integrations that can help drive the journey for our customers:

  • Rich User Profile:  RSA’s market-leading risk-based engine, delivered in the recently launched RSA Authentication Manager 8 software as well as RSA Adaptive Authentication software, is designed to transparently absorb information from a variety of device, user and environmental factors to determine normal user behavior. To make even more secure authentication and authorization decisions, the latest version of RSA Adaptive Directory 6.1 software is designed to allow organizations to aggregate and centrally manage identity information across both on-premise identity data stores as well as cloud applications to create rich user profiles.
  • Real-time Analytics to Assess Risk and Integrate with Risk-based Access Controls: Deeper integration between RSA Access Manager 6.2 software and RSA Adaptive Authentication software, as well as with RSA Authentication Manager 8 software, can help customers blend risk analytics (to determine deviations from the norm in the user’s profile) with stronger authentication and access controls.

IAM solutions need to adapt as fast as the rapidly changing threat scenarios.  This is security’s “new normal” and we must evolve.  By creating an IAM solution that embodies the anti-fragile concept – one that is adaptable and dynamic – we can create ‘situational perimeters’ around the borderless enterprise and arm ourselves for the front lines of this cyber security battle.

 

Update your feed preferences

A Hacktivist, Phisherman and Average Joe Walk into a Bar…


By Limor S. Kessem, Cybercrime and Online Fraud Communications Specialist, RSA

Although the title of this blog may call to mind the first line of quite a number of old jokes, it appears that hacktivists, phishers and the everyday Internet user have enough in common to raise concerns of financial fraud, especially in light of the recent hacktivist-conceived operation dubbed #OpUSA.

While it is true that most cyber-attacks orchestrated by hacktivists focus on DDoS onslaughts targeting authority-type entities and banks, all too many times they add a sting to the operation and hack into immense databases containing private user information.

Hacktivism:  Disruption or Corruption?
Critics say that in their quest for notoriety, media attention and making their points, hacktivists tend to cross the line when they publicly release untold amounts of data, providing links to the trove and facilitating its free-for-all download.

Some hacktivists will call out every target on their list and post their threats publicly and well in advance, while those targeted will prepare to fend off the attack and advise users as needed. But at the end of the day, who plays the role of the defenseless meek? Not the targeted entities who are expecting the blow, but rather, very much like other wars—innocent bystanders and ‘average Joes.’


#OpUSA-Themed Tweets
Do the lines between idealistic motives and money get blurred for hacktivists?

Out Go The Hacktivists, In Come the Phisherman

In one of the largest hacks perpetrated in the name of hacktivist ideals, the end result, beyond the damaged brand reputation of a multinational corporation, was a public leak of account information belonging to nearly 25 million Sony Entertainment users. That was about a third of a previous leak of over 70 million accounts, also inflicted by hackers operating in the name of an opinion they formed and acted upon.

Taking the Sony case as just one example (hacktivist cases such as these have been increasingly plaguing the Internet), it is clear that the one party that did not expect the hack, other than Sony of course, was the millions of ordinary users whose data was offered up freely thereafter. Those same users were also the ones who did not have advisors, lawyers and information security experts to help them recover from the actual and potential damages of the hack and its possible effects on their identities and personal finances.

For fraudsters, large-scale hacks are like candy. Hacktivists will set up publicly available download links so that anyone can see the exposed databases, their hunting trophy, and end their part there. But as soon as the links are public, phishers and fraudsters (the vultures, if you will) will access and download the data before it is taken down by the hosting authorities. By that time, the real damage to these average Joes is nearly done.

Phish-N’-Lists
Large hacks yielding a database replete with email addresses, not to mention payment cards or other financial data, are attractive loot for phishers to come for and discuss in underground communities. Instead of having to do their own hacking, collecting and stealing, they can enjoy the spoils and bank on the “freshly” dumped data, compliments of zealous hacktivists, paving a shortcut to the fraud scenarios that make a phisher’s daily bread:

  • Monetizing gaming account credentials by selling them to other gamers
  • Enjoying a list of valid email addresses to target with phishing spam
  • Leading potential victims to phishing and malware sites and getting paid per install
  • Harvesting financial information that can be sold to fraudsters and CC shops
  • Using leaked and stolen data for fraud and identity theft
  • Checking what other accounts that user has, because as recent research shows, 61% of accounts are set up with reused passwords.
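The password-reuse statistic in the last item is exactly why leaked lists are so valuable: a credential stolen from one service often opens accounts elsewhere. As a defender-side illustration (with hypothetical data and salt, not any real dump), screening candidate passwords against a known leak might look like:

```python
# Illustrative sketch: screen passwords against a known leaked dump.
# Storing only salted hashes of the leaked passwords avoids keeping the
# plaintext dump around. The salt and sample passwords are hypothetical.

import hashlib

SALT = b"example-salt"  # hypothetical; a real system would use its own secret

def fingerprint(password: str) -> str:
    return hashlib.sha256(SALT + password.encode()).hexdigest()

# Built once from a published dump (here: toy examples).
LEAKED = {fingerprint(p) for p in ["123456", "password", "letmein"]}

def is_reused_from_leak(password: str) -> bool:
    """True if the candidate password appears in the leaked set."""
    return fingerprint(password) in LEAKED

print(is_reused_from_leak("letmein"))       # True: block it or force a reset
print(is_reused_from_leak("xk3!Vq9-rand"))  # False
```

A check like this at signup or login lets a service force a reset before the reused credential is exploited.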

It’s easy to see how an attack that stems from idealistic motivations, targeting very large entities and supposedly conceived in order to protect people’s rights to information, ends up serving the fraudsters and flooding the Internet with confidential data.

With the variety of actors that gain access to information publicly posted online, hacktivists end up inadvertently damaging the very people whose interests they claim to represent.

Conclusion
The number of phishing attacks recorded monthly is known to vary, fluctuating upwards and downwards, and there is limited capability to forecast a trend so dependent on fraudster resources.

Although totals are often tricky to predict, some seasonal trends do repeat every year, and perhaps, without realizing, a rise in phishing is to be expected after large database hacks that release millions of account addresses into the cybercrime wild.

Phishing attacks in April 2013 have so far shown only a moderate increase over the previous month, likely linked to tax season-themed attacks, but as OpUSA is executed and news of hacked accounts washes through Pastebin and the Internet, we may just see a more significant rise before the quarter is out.

Limor Kessem is one of the top Cyber Intelligence experts at RSA, The Security Division of EMC. She is the driving force behind the cutting-edge RSA FraudAction Research Lab blog, Speaking of Security. Outside of work you can find Limor dancing salsa, reading science fiction or tweeting security items on her Twitter feed @iCyberFighter.


vSphere HA – VM Monitoring sensitivity



Last week there was a question on VMTN about VM Monitoring sensitivity. I could have sworn I had done an article on that exact topic, but I couldn’t find it, so I figured I would write a new one with a table explaining the sensitivity levels to which you can configure VM Monitoring.

The question was based on a false positive response of VM Monitoring: in this case the virtual machine was frozen due to the consolidation of snapshots, and VM Monitoring responded by restarting the virtual machine. As you can imagine, the admin wasn’t too impressed, as it caused downtime for his virtual machine. He wanted to know how to prevent this from happening. The answer was simple: change the sensitivity, as it is set to “high” by default.

As shown in the table, high sensitivity means that VM Monitoring responds to a missing VMware Tools heartbeat within 30 seconds. However, before VM Monitoring restarts the VM, it checks whether there was any storage or networking I/O in the last 120 seconds (advanced setting: das.iostatsInterval). If the answer to both is no, the VM is restarted. So if you feel VM Monitoring is too aggressive, change it accordingly!

Sensitivity  Failure Interval  Max Failures  Max Failures Time Window
Low          120 seconds       3             7 days
Medium       60 seconds        3             24 hours
High         30 seconds        3             1 hour
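The restart decision described above can be sketched as a small model. This is an illustration of the documented behavior, not the actual HA code; the presets mirror the table, and the I/O check mirrors das.iostatsInterval:

```python
# Toy model of the VM Monitoring restart decision: a missing VMware Tools
# heartbeat only triggers a restart if there was also no storage or network
# I/O during the das.iostatsInterval window (120 seconds by default).
# Sensitivity presets mirror the table; illustration only.

SENSITIVITY = {
    "low":    {"failure_interval": 120, "max_failures": 3, "window_hours": 7 * 24},
    "medium": {"failure_interval": 60,  "max_failures": 3, "window_hours": 24},
    "high":   {"failure_interval": 30,  "max_failures": 3, "window_hours": 1},
}

def should_restart(seconds_without_heartbeat, seconds_since_last_io,
                   sensitivity="high", iostats_interval=120):
    preset = SENSITIVITY[sensitivity]
    heartbeat_lost = seconds_without_heartbeat >= preset["failure_interval"]
    # A VM still doing storage/network I/O is treated as alive
    # even when the heartbeat is missing.
    io_idle = seconds_since_last_io >= iostats_interval
    return heartbeat_lost and io_idle

# Snapshot consolidation scenario: no heartbeat for 45s, but disk I/O 30s
# ago. "high" would normally react at 30s, yet the recent I/O blocks the
# false positive restart.
print(should_restart(45, 30, "high"))   # False
print(should_restart(200, 500, "low"))  # True
```

Raising the failure interval (or lowering the sensitivity preset) widens the window before a restart is even considered, which is exactly the fix for the false positive described above.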

Do note that you can also change the above settings individually in the UI, as seen in the screenshot below. For instance, you could manually increase the failure interval to 240 seconds. How you should configure it is something I cannot answer; it should be based on what you feel is an acceptable response time to a failure, and on where the sweet spot lies to avoid false positives. A lot to think about indeed when introducing VM Monitoring.

"vSphere HA – VM Monitoring sensitivity" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Software Security Training for All


Fifteen years ago, a common representation of the hacker was a computer science college student hacking systems from his or her dorm room. Nowadays hackers operate on a different scale; they are more often affiliated with criminal organizations or nation states than with colleges or universities.

The only thing today’s cyber attackers have in common with college students from 15 years ago can be summarized in two words: SOFTWARE VULNERABILITY. Most recent attacks involve the exploitation of a zero-day software vulnerability, almost certainly introduced by software engineers who were themselves computer science students several years ago. Sadly, software security is not a significant part of most software engineering curricula, leaving developers to learn defensive coding techniques on their own, or their employers to invest in expensive security engineering training.

Early on, SAFECode members acknowledged that all successful software security initiatives have been built on the foundation of a comprehensive security training program, and in 2009 published a report entitled “Security Engineering Training – A Framework for Corporate Training Programs on the Principles of Secure Software Development”. This became a useful resource to help software security leaders define a training program, but it did not do much to address the knowledge gap in software security across the software development ecosystem.

 

This week, SAFECode is going a step further and releasing online security engineering courses to the public, based on internal training materials used by SAFECode members. The first six courses of this program were donated by Adobe (thank you, Brad!) and then reviewed and enhanced by experts from the other SAFECode member companies. These first courses touch on topics as diverse as cross-site request forgery, access control and injection 101. Please go and check out these courses at https://training.safecode.org.

Who is the target audience for these courses?

These courses are for software developers who do not want the code they create to become the target of a cyber attacker’s spear phishing attack. They are also for anybody developing a software security curriculum, whether in a technology company, an IT department, a college or a university, who is looking for relevant content. At EMC we are integrating these courses into our existing software security curriculum.

Will SAFECode publish additional courses?

The field of software security is much broader than the six topics covered by these initial courses, and we are already in the process of reviewing more.

With these courses now available, are software vulnerabilities a thing of the past?

There is no silver bullet for software assurance: neither a magic tool nor a set of training courses. Software assurance can only be delivered through a comprehensive process (see “Fundamental Practices for Secure Software Development”). A knowledgeable developer community is an absolute prerequisite to the successful roll-out of such a process. SAFECode member companies hope that by releasing these courses we will contribute to improving the collective knowledge of security engineering among the developer community and create more fertile ground for the broader adoption of secure software development practices, which has been the charter of SAFECode since its inception.

The post Software Security Training for All appeared first on Product Security Blog.


EMC ViPR; My take



When I started writing this article I knew people were going to say that I am biased considering I work for VMware (EMC owns a part of VMware), but so be it. It is not like that has ever stopped me from posting about potential competitors, so it will not stop me now either. After seeing all the heated debates on Twitter between the various storage vendors, I figured it wouldn’t hurt to provide my perspective. I am looking at this from a VMware infrastructure point of view and with my customer hat on. Considering my huge interest in Software Defined Storage solutions, this should be my cup of tea. So here you go: my take on EMC ViPR. Note that I have not actually played with the product yet (like most people providing public feedback), so this is purely about the concept of ViPR.

First of all, when I wrote about Software Defined Storage, one of the key requirements I mentioned was the ability to leverage existing legacy storage infrastructures. The primary reason is that I don’t expect customers to deprecate their legacy storage all at once, if they do at all. Keep that in mind when reading the rest of the article.

Let me briefly summarize what EMC introduced last week. EMC introduced a brand new product called ViPR. ViPR is a Software Defined Storage product; at least, this is how EMC labels it. Those who have read my articles on SDS know the “abstract / pool / automate” motto by now, and that is indeed what ViPR offers:

  • It allows you to abstract the control path from the actual underlying hardware, enabling management of different storage devices through a common interface
  • It enables grouping of different types of storage into a single virtual storage pool; based on policies/profiles, the right type of storage can be consumed
  • It offers a single API for managing various devices; in other words, a lower barrier to automation. On top of that, when it comes to integration it allows you, for instance, to use a single VASA (vSphere APIs for Storage Awareness) provider instead of the many needed in a multi-vendor environment

So what does that look like?

What surprised me is that ViPR not only works with EMC arrays of all kinds but will also work with third-party storage solutions. For now only NetApp support has been announced, but I can see that being extended, and I know EMC is aiming to do so. You can also manage your fabric using ViPR; do note that this is currently limited to just a couple of vendors, but how cool is that? When I did vSphere implementations, the one thing I never liked doing was setting up the FC zones; ViPR makes that a lot easier, and I can also see how this will be very useful in environments where workloads move around clusters. (Chad has a great article with awesome demos here.) So what does this all mean? Let me give an example from a VMware point of view:

Your infrastructure has three different storage systems. Each of these systems has various data services and different storage tiers. Without ViPR, when you need to add new datastores or introduce a new storage system, you will need to add new VASA providers, create LUNs, present them, potentially label them, and figure out how automation works, as API implementations typically differ. Yes, a lot of work. But what if you had a system sitting between you and your physical systems that takes on some of these burdens? That is indeed where ViPR comes into play: a single VASA provider on vSphere, a single API, a single UI and self-service.
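The single-API idea can be illustrated with a toy abstraction layer. The class and method names below are invented for illustration and are not the actual ViPR API; the point is that the consumer asks for capacity by policy (tier) while the pool routes the request to whichever array matches:

```python
# Hypothetical sketch of the "abstract / pool / automate" idea: one object
# fronts several arrays, and a provisioning request is routed by policy
# rather than by vendor-specific calls. All names are invented.

class Array:
    def __init__(self, name, tier, free_gb):
        self.name, self.tier, self.free_gb = name, tier, free_gb

    def provision(self, size_gb):
        self.free_gb -= size_gb
        return f"{self.name}:lun-{size_gb}gb"

class VirtualStoragePool:
    """Abstract several heterogeneous arrays behind one interface."""
    def __init__(self, arrays):
        self.arrays = arrays

    def provision(self, size_gb, tier):
        # Pick any array that matches the policy (tier) and has capacity.
        for array in self.arrays:
            if array.tier == tier and array.free_gb >= size_gb:
                return array.provision(size_gb)
        raise RuntimeError(f"no capacity for {size_gb} GB in tier {tier!r}")

pool = VirtualStoragePool([
    Array("vmax-01", tier="gold", free_gb=500),
    Array("netapp-01", tier="silver", free_gb=2000),
])

# The consumer asks for capacity by policy, not by array vendor.
print(pool.provision(100, tier="gold"))    # vmax-01:lun-100gb
print(pool.provision(300, tier="silver"))  # netapp-01:lun-300gb
```

The caller never touches a vendor API; adding a third array type changes the pool's inventory, not the consumer's code, which is the operational win the paragraph above describes.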

So what is all the drama about then, I can hear some of you think, as it sounds pretty compelling. To be honest, I don’t know. Maybe it was the messaging used by EMC, or maybe the competition in the Software Defined space thought the world was crowded enough already? Maybe it is just the way of the storage industry today; considering all the heated debates witnessed over the last couple of years, that is a perfectly viable option. Or maybe the problem is that ViPR enables a Software Defined Storage strategy without necessarily introducing new storage, meaning that where some pitch a full new stack, in this case the current solution is kept and a man-in-the-middle solution is introduced.

Don’t get me wrong, I am not saying that ViPR is THE solution for everyone. But it definitely bridges a gap and enables you to realise your SDS strategy. (Yes I know, there are other vendors who offer something similar.) ViPR can help those who have an existing storage solution to abstract / pool / automate. Indeed, not everyone can afford to swap out their full storage infrastructure for a new so-called Software Defined Storage device, and that is where ViPR will come in handy. On top of that, some of you have, and probably always will have, a multi-vendor strategy; again, this is where ViPR can help simplify your operations. The nice thing is that ViPR is an open platform; according to Chad, source code and examples of all critical elements will be published so that anyone can ensure their storage system works with ViPR.

I would like to see ViPR integrate with host-local caching solutions; it would be nice to be able to accelerate specific datastores (read caching / write-back / write-through) using a single interface and policy, meaning as part of the policy ViPR surfaces to vCenter. The same applies to host-side replication solutions, by the way. I would also be interested to see how ViPR will integrate with solutions like Virtual Volumes (VVOLs) when they are released… but I guess time will tell.

I am about to start playing with ViPR in my lab so this is all based on what I have read and heard about ViPR (I like this series by Greg Schultz on ViPR). My understanding, and opinion, might change over time and if so I will be the first to admit and edit this article accordingly.

I wonder how those of you who are on the customer side look at ViPR, and I want to invite you to leave a comment.

"EMC ViPR; My take" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Safeguarding Patient Information During Crisis


By Angel Grant, Senior Manager, Authentication and Anti-Fraud Solutions, RSA

In light of recent events, I’ve reflected on how valuable electronic health records (EHR) and health information exchange (HIE) participation can be in a time of crisis, providing immediate access to critical life-saving data on impacted victims.  EHRs not only allow first responders to quickly access victims’ healthcare information, but also allow for more accurate ambulatory, ER and clinical decision making in life-or-death situations.

Accompanying the increased business efficiency and convenience delivered by EHRs, organizations must also remain concerned about privacy, secure access, fraud and the growing cost of security breaches. However, too often amid the chaos we tend to forget how important it is to secure electronic health information during these types of incidents, to mitigate the potential risk of theft and non-compliance with relevant regulatory requirements. Healthcare (and law enforcement) organizations need to ensure that all first responders, staff members and volunteers who have access to patient information are educated on and in compliance with their security and privacy policies, so that information is not inappropriately leaked to the media or, even worse, used by fraudsters looking to capitalize on a tragedy.

The Healthcare Information Security Today survey, sponsored by RSA, highlights what healthcare organizations are taking into consideration to comply with the HIPAA Omnibus Rule.  The survey shows that most organizations’ top security priorities are preventing and detecting breaches, improving regulatory compliance and improving security training.  It also reveals that among the biggest perceived security threats for healthcare organizations are the growing use of mobile devices and business associates (BAs) taking inadequate security precautions; only 32% of survey respondents expressed confidence in the security controls of their BAs, and as you can see on the HHS “wall of shame”, a majority of breaches were caused by lost or stolen devices or misplaced laptops.

Yet surprisingly, implementing multi-factor authentication is not one of the top five priorities for technology investments this year. Only 16% are currently using some type of one-time password with two-factor authentication, and over 89% are using just a user name and password to guard against inappropriate access to EHRs.

[Chart omitted] Source: Healthcare Information Security Today

The survey also shows that 27% of organizations already offer a personal health record (PHR) portal and 35% have something in the works. The growth in adoption of consumer PHR portals drives the need for traditional authentication to make way for more dynamic, risk-based authentication.  The financial and online retail verticals have had to rely on such advanced authentication for consumer bases of many millions of users.  The time has come for the healthcare industry to adopt these notions as well and deploy an adaptive, intelligent framework that can morph as the threats do.  Transparent risk-based authentication allows for instant but secure access to records in both patient and physician portals, which is necessary in emergency situations.  For example, someone accessing a patient record in an ER-type situation needs to reach data quickly and should not be interrupted in the login workflow.  However, if someone is accessing clinical trial information remotely via a mobile device, you may want to require additional or stronger authentication.  The level of authentication should be aligned to the level of risk. Integrating risk-based authentication with access management and identity federation helps organizations establish this balance, because the data in a healthcare environment ranges in risk and value (from credit card data for billing, to PHI, to appointment schedules) and multiple people across multiple functions and entitlements are accessing it.
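The "level of authentication aligned to the level of risk" idea above can be sketched as a small policy table. The contexts, resource labels and rules below are hypothetical examples, not any product's actual policy engine:

```python
# Illustrative only: map an access context (ER workstation vs. remote
# mobile access to clinical trial data) to an authentication requirement.
# All resource names and rules are hypothetical.

RULES = [
    # (resource, remote access?) -> required authentication
    {"resource": "clinical_trial", "remote": True,  "auth": "otp + password"},
    {"resource": "patient_record", "remote": False, "auth": "password"},
]

def required_auth(resource, remote, emergency=False):
    # In an ER-type emergency, favor instant access backed by transparent
    # risk scoring rather than an interactive challenge.
    if emergency:
        return "transparent risk-based"
    for rule in RULES:
        if rule["resource"] == resource and rule["remote"] == remote:
            return rule["auth"]
    return "step-up challenge"  # default to stronger auth when unsure

print(required_auth("patient_record", remote=False, emergency=True))
# transparent risk-based
print(required_auth("clinical_trial", remote=True))
# otp + password
```

The emergency branch captures the trade-off the paragraph describes: in a life-or-death situation the control stays invisible, while high-value data accessed remotely gets a stronger challenge.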

[Chart omitted] Source: Healthcare Information Security Today

During a time of crisis, organizations do not need to be more vulnerable to medical identity theft and fraud.  Advanced security solutions provide the opportunity to balance risk, cost and convenience across all aspects of the healthcare ecosystem, mitigating threats while taking advantage of the benefits of easier information sharing.

Bottom line: this means improved patient care safety, streamlined business processes, physician productivity, cost efficiencies and, most importantly, saved lives.

Angel Grant is a Senior Manager for RSA’s Authentication and Anti-Fraud solutions. She is responsible for a variety of initiatives which protect organizations against fraud and identity theft.  She has more than 20 years of experience in the security and financial services industries.


2013: A Mobile Datacenter Odyssey


At EMC World last week, Avnet Technology Solutions introduced the Avnet Mobile Data Center Solution for EMC VSPEX.

Click here or on the picture below to access my latest video which provides a bit more about this rolling datacenter-in-a-box environment and features Stefan Voss, Business Development Manager from EMC.


What is the Avnet Mobile Data Center Solution for EMC VSPEX?

Exclusively available through Avnet’s U.S. and Canadian partner community, this mobile data center solution leverages VSPEX Proven Infrastructures to create private clouds. Channel partners’ enterprise customers will benefit by being able to deploy data centers that have been ‘hardened’ to operate in harsh environments, supporting business continuity, data center moves, disaster recovery, large-scale special events, and remote field locations.

It was named one of the Top 3 hottest products at EMC World this year by Channel Reseller News / CRN (link) and includes System Center, SharePoint, Metalogix, and many more partners.

Find more information here and here



Groove Theory of GRC – Postulate #1: Musicality or Performance?


Welcome to my second in a series of blogs based on what I term “The Groove Theory of GRC.”   As you may or may not know (or infer from this series), I have been a musician for much of my life.  Starting in grade school playing in the school band, I have enjoyed the gift of making music over many years.  While I am no longer a “gigging” musician, I still pick up my craft and noodle at home often.   One aspect of making music that I have enjoyed is the debate between musicality and performance.  Is a great musician guaranteed to be a great performer?  Are all great musical performers talented musicians?

Miles Davis is an easy example of this.  On one hand, you have an intense musical genius that fueled scores of jazz standards and inspired countless musicians across the globe.  On the other hand, you have an individual who later in his career performed quite literally with his back to the audience facing the other musicians and at times seemed oblivious that an audience was even present (Check out this video of his classic song Tutu).   Unfortunately I never got to see Miles Davis in person so I can’t weigh in on the feeling of being physically at one of his performances.  I am sure the power of the musicality was overwhelming but the performance may have left some feeling disconnected from the artist.   My point is that in some cases, you can have one without the other – great musicality without a grand performance or engaging entertainment without a deep, complex musical experience.

How does this fit into my “Groove Theory of GRC”?

Postulate #1:  Optimizing Business Performance is the end goal; Visibility and Accountability is the method.

The end goal of any GRC program should be Performance Optimization.  If GRC were a concert, the performance matters.  I am not talking about lasers and smoke machines; I am talking about the substantive effect one feels at the end of a great performance, whether it is music, theatre or a sporting event.  Management and the Board of Directors need to make decisions that are more certain to result in desired outcomes, thus optimizing the performance of the business.  The GRC program should set this as the fundamental objective and impact the organization positively.  But great musical performances don’t just happen.  All the lasers and smoke machines in the world cannot make up for a truly awful band.  A talented set of musicians who know their own roles, are dedicated to their craft and communicate with each other can bring a musicality that transcends the individual members of the band.  This is the magic that makes the performance great.  The strength of the Performance comes through the Visibility and Accountability the band members have with each other, the music and the audience.

To make it simple using my analogy, you have to have Musicality AND Performance to completely capture an audience.  Artists such as Michael Jackson, Prince, Frank Sinatra and many others have epitomized this unique blend of talent, personality and commitment.  GRC needs both Performance Optimization as a goal with Visibility and Accountability enabling the performance.  The program must be absolutely concerned about the positive impact to its audience AND based on a collaborative, connected ecosystem of contributors.

What are your organization’s end goals for GRC?  How do your GRC musicians connect, share and keep the audience engaged and entertained?  Do you feel your organization is bringing both performance (focus on business optimization) and musicality (visibility and accountability) to the concert hall?


Non-malware Penetration Techniques of an Advanced Attacker – Podcast #246


The level and sophistication of advanced threats is a constantly moving target, pitting smart and patient attackers against security teams that oftentimes can't possibly know what to look for when an attacker employs specialized techniques and tools designed to cloak their movements. What happens when an attacker doesn't have to rely on malware to infiltrate their target, or when an attacker is able to successfully blend in like a legitimate insider? In this edition of the Speaking of Security Podcast, Tom Chmielarski, Practice Lead in RSA's Advanced Cyber Defense Services, shares some of the attack techniques he's seen used in real breach cases, along with best practices used in the detection and defense of these advanced attacks.


The ATM: Convenience for Consumers….and Fraudsters?


By Amy Blackshaw, Principal Product Marketing Manager, RSA Identity Protection & Verification

ATMs (otherwise known as Cash Points, Money Machines, Cashlines or sometimes even Holes in the Wall) are a staple of modern life. To the everyday consumer, they are a convenient way to access our bank accounts, even when the branch is closed.  (I remember standing in line at the bank as a child on Saturday mornings with my father so that he could withdraw the funds our family needed for the week – talk about advance planning!)  ATMs enable us to get our cash on demand, for those of us who still use cash, and have come a long way since the first machines in the 1960s, which dispensed a set amount of funds and returned the bank card at a later date.

Convenient to consumers, yes – but to fraudsters, ATMs are a way to get their hands on currency that isn't theirs, and unlike an online transaction, an ATM withdrawal can be harder to trace.  As the cash-out point for many scams, fraud schemes and cyber-attacks, the ATM has seen its fair share of unfriendly withdrawals.

Underground Card Marketplace (Source: RSA Anti-Fraud Command Center)


Fraudsters will typically purchase cards and PINs in the underground or recreate plastic cards using data stolen with card skimmers (Krebs on Security has some great information on ATM skimmers).  They will then recruit mules, the feet on the street, who take a cut of every withdrawal they make from ATMs with the stolen data.  Mule recruitment is pretty easy, as there are plenty of people looking for quick cash, especially when the unemployment rate is high.


There is an entire ecosystem of criminals who specialize in one or more areas of the carder market.  Mules are recruited by mule herders, who provide forged plastic cards from forgers, who bought credit card credentials from traders, who bought the compromised credentials from a fraudster who specializes in hacking into payment systems or in social engineering schemes such as phishing.  Each criminal makes money at some point in the chain and feeds back into the underground economy with their specialty.  Kevin Poulsen's Kingpin describes one hacker's (Max Butler's) plan to rule the black market in stolen credit cards before his crime ring was taken down by the FBI in 2007.

Source: WIRED


Last week the US Department of Justice published an indictment of a cybercriminal gang who used the ATM as the cash-out point for a massive global heist, ultimately draining $45M worldwide.  The attackers used "sophisticated intrusion techniques" to hack into the information systems of payment processors and global financial institutions, steal prepaid debit card information and modify withdrawal limits.  The hacked prepaid debit card numbers and PINs were distributed to fraudsters in 26 countries, who encoded magnetic stripe cards with the compromised card data and withdrew cash from ATMs on a massive scale across the globe.

It is important to note that the prepaid cards used in this attack are typically pre-loaded with a limited amount of funds and are not associated with a specific user account.  These cards lack the transaction history and individual behavior patterns that most organizations leverage to monitor fraud.  This is one of the reasons these criminals targeted prepaid cards: they understand the payment ecosystem and exploit its areas of weakness. For example, if a mule went from ATM to ATM with a stolen genuine debit card associated with an account, a transaction monitoring system could have flagged that activity as fraud.  However, with a prepaid card there is no account association, transaction history or behavioral history.

This latest heist is a reminder that old, tried-and-true attacks will continue to succeed without the correct cross-channel, risk-based, intelligent security in place.  Yes, processors need to better protect themselves from breaches and understand the threats their networks face – before an attack occurs, not only after the fact.  But banks also need to better understand the transactions that occur at the ATM, online and via mobile banking to monitor risk and look for anomalous behavior across all channels. For example, if there is an anomaly in withdrawal amount or a large velocity of ATM activity over a short period of time, a risk-based authentication system should flag the activity as high risk and queue it for further investigation.  (It remains to be seen how the rollout of chip-and-PIN cards based on the EMV protocol will affect card fraud in the US, where roughly 80% of all ATM fraud occurs, but that is a discussion for another day.)
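A velocity rule of the kind described above can be sketched in a few lines. This is a hypothetical illustration, not how any particular risk engine is implemented; the card ID, window size and threshold are all made-up example values:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Sliding-window velocity check: flag a card once it exceeds a threshold
# number of ATM withdrawals inside the window. Arbitrary example values.
WINDOW = timedelta(minutes=30)
MAX_WITHDRAWALS = 3

class VelocityMonitor:
    def __init__(self):
        self.history = defaultdict(list)  # card_id -> withdrawal timestamps

    def record(self, card_id, ts):
        """Record a withdrawal; return True if the card is now high risk."""
        self.history[card_id].append(ts)
        # Drop events that have aged out of the sliding window.
        self.history[card_id] = [t for t in self.history[card_id] if ts - t <= WINDOW]
        return len(self.history[card_id]) > MAX_WITHDRAWALS

monitor = VelocityMonitor()
base = datetime(2013, 5, 13, 12, 0)
# Five withdrawals on the same card, one minute apart.
flags = [monitor.record("card-123", base + timedelta(minutes=i)) for i in range(5)]
# The fourth and fifth withdrawals exceed the threshold and get flagged.
```

A real risk engine would of course weigh many more signals (amount, geolocation, device, cross-channel history) rather than a single counter.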

RSA Adaptive Authentication ATM Module enables organizations to analyze transactions in the ATM channel using Risk Based Authentication and cross channel fraud detection.  Fraudsters will continue to use the ATM channel to get their hands on cash, and we will continue to stay on top of the attack vectors in this space to provide intelligent controls to protect the end user.

Amy Blackshaw is a Principal Product Marketing Manager within RSA’s Identity and Data Protection Group. In her role, Amy is responsible for the go-to-market strategy for the RSA Adaptive Authentication solution which provides protection against advanced threats in the enterprise and online. Prior to joining RSA, Amy worked in the Energy Industry bringing secure technology solutions for sustainable energy businesses. Amy holds her undergraduate degree from the University of Massachusetts, Amherst, her MBA from Simmons College, and is a CISSP.


To Cybercriminals, The Size of a Company No Longer Matters


Gone are the days when it was thought that the size of a company matters to cybercriminals.  The latest PwC Information Security Breaches Survey 2013 shows a significant rise in the number of small businesses attacked by an unauthorized outsider in the last year – up 22%.  Interestingly, large organizations went up by only 5%.  The cybercriminal has moved on to stealing intellectual property and corporate secrets, as that's where the real money is, and small companies become easy targets because many do not have the resources or budgets to fully protect their information.

It’s time to understand the differences between corporate secrets and custodial data.

Secrets refer to information that the enterprise creates and wishes to keep under wraps. They tend to be messily and abstractly described in Word documents, embedded in presentations, and enshrined in application-specific formats like CAD. Secrets that have intrinsic value to the firm are  almost always specific to the enterprise’s business context — where an interested party could cause long-term competitive harm if this information is obtained. Keeping proprietary knowledge away from competitors is essential to maintaining market advantage.

Typically, companies in knowledge-intensive industries such as aerospace and defense, electronics, and consulting generate large amounts of confidential intellectual property that present barriers to entry for competitors. Unlike with toxic data spills, failures to protect secrets are almost never made public.

By contrast, legislation, regulation, and contracts compel enterprises to protect custodial data. Mandates that oblige enterprises to be good custodians include contractual obligations like the Payment Card Industry Data Security Standard (PCI-DSS) and data breach and privacy laws. Custodial data has little intrinsic value in and of itself, but when it is obtained by an unauthorized party, misused, lost or stolen, it changes state. Data that is ordinarily benign transforms into something harmful.

When custodial data is spilled, it becomes "toxic" and poisons the enterprise's air in terms of press headlines, fines, and customer complaints. Outsiders, such as organized criminals, value custodial data because they can make money with it. Custodial data also accrues indirect value to the enterprise based on the costs of fines, lawsuits, and adverse publicity. Examples of custodial data include customer personally identifiable information (PII) attributes like name, address, email, and phone number; government identifiers like passport numbers; payment card details like credit card numbers and expiry dates; and medical records. Many well-known companies have graced the front pages of major newspapers with toxic data spills.

Interestingly, enterprises in highly knowledge-intensive industries like manufacturing, information services, professional, scientific and technical services, and transportation derive between 70% and 80% of their information portfolio value from secrets, while healthcare firms and governmental entities are nearly the opposite: most of the value of their information assets lies in custodial data.

Data security incidents related to accidental losses and mistakes are common but cause little quantifiable damage. By contrast, employee theft of sensitive information is costlier on a per-incident basis than any single incident caused by accidents.

Unfortunately, compliance drives security spending for all companies, and smaller ones have a difficult choice to make.  "Compliance" in all its forms has helped CISOs buy more gear, but it has distracted IT security from its traditional focus: keeping company secrets secure. All companies, large and small, really need to do a better job of understanding the value of their corporate secrets.

Read my next blog for some recommendations on achieving the right balance.


Five Common Corporate Pitfalls in Cyber Security Management


By Mike McGrew, Advisory Practice Consultant, RSA Advanced Cyber Defense Services

This blog discusses five of the high level missteps common to organizations that have experienced needlessly prolonged negative effects of cyber security incidents.

1) No security team

A fair percentage of the clients I have provided incident response services to over the last 12 months were operating without any dedicated security oversight, meaning not a single person employed at the organization was solely dedicated to working on security issues. While this is common for small companies and startups, these clients had matured over the years to the point where they had hundreds or thousands of employees and even more computing devices on the network. What had not occurred, however, was investment in security commensurate with the growth of the company.

When a company consists of 10 people operating on a shoestring budget and an idea, it is realistically hard to justify spending money on anything that doesn't have a tangible ROI. When those companies grow, however, the potential losses in intellectual property or corporate reputation begin to justify expenditure on a comprehensive security program. Add potential regulatory compliance requirements, and most successful companies should have no problem demonstrating a true business need for security investment.

2) No budget for enterprise level security tools

These companies are slightly better off than organizations with no security team at all. What I typically observe at these clients is a dedicated though undersized staff that spends a lot of time trying to convince management of the necessity of enterprise security tools. At least, that's how they start out on the job. By the time I am called in to consult, I typically find that the IT managers accept as fact that executive leadership will not dedicate funds toward the purchase of enterprise security tools. Often these managers hope that the one silver lining of a breach will be that executive leadership finally sees the true value of implementing these tools.

3) No management support for an information security program

Both of the previously mentioned conditions can be summed up by this one condition. That being said, I have still occasionally seen organizations that are reasonably staffed and tooled, but end up not implementing security properly because of the perceived negative impact to the business. For example, take a company that has an intelligent web proxy up and running on the network. Since executive management does not champion network security, creating exceptions to the policy is relatively easy. Before long, that company will have entire pockets of personnel whose web traffic bypasses the proxy. If a company has adequate security in place, but lacks management support, users will often find a way to bypass that security.

4) Over-reliance on tools; under-reliance on skills training

At these organizations, what I have found to be the common denominator is that tools and security staff are both implemented, but the weak link in the chain is the capability of the personnel that are hired to deal with incidents. Consider a case where a critical client system was compromised via targeted email attack. Two users clicked on a URL in similar LinkedIn phishing emails, starting the chain of infection that ultimately led to an attempted payroll theft months after the initial infection. Multiple opportunities existed for this client to detect and remove the threat from the network prior to the attacker trying to steal money; original emails were still present in the gateway storage, both compromised systems were beaconing to a known bad IP address, both hosts had AV alerts that fed into a central server, both users created help desk tickets as a result of their computers acting strangely, and this exact attack had been sufficiently blogged about for a security analyst to gather information and perform discovery in their own network. On the surface, this organization appeared ready to be able to efficiently handle any network security issues that came up. The reality, however, was that though there was an extensive trail of evidence that could have easily been queried and analyzed, there were no truly qualified personnel on staff that could put the pieces of the puzzle together.

5) Sysadmins assigned to remediate AV alerts, end up running scan tools that don’t wipe out the threat

I understand the motivation of the sysadmin who sees an AV alert and responds by running eradication tools like Malwarebytes. More often than not I find that in targeted attacks, at best these tools only kill the portion of the malware that was causing the AV alerts. For the motivated but untrained sysadmin, no more AV alerts means no more compromise, situation resolved. Incomplete remediation is a dangerous situation, since the possibility now exists that the host is still compromised but no longer alerting anybody about it. In a corporate environment, AV alerts should be treated as a notification to rebuild the system in any case where a thorough forensic examination cannot rule out persistent compromise.

 Mike McGrew is an Advisory Practice Consultant within RSA’s Incident Response practice. Mike provides network and host-based incident response services for intrusions involving sophisticated adversaries that target intellectual property and other critically sensitive data. Mike has been a CISSP for over 10 years and was previously a Navy cryptologist supporting the National Security Agency (NSA).


Don’t Fear the Hangover – Network Detection of Hangover Malware Samples


By Alex Cox, Senior Researcher, RSA FirstWatch team

Today, Norman and Shadowserver released a paper revealing a large attack infrastructure and detailing an ongoing campaign running as far back as September 2010.  This campaign, reportedly run out of India, used spear-phishing attacks and multiple strains of malware to breach targets of interest and extract data.

The details of this case can be researched in the following paper:

http://blogs.norman.com/2013/security-research/the-hangover-report

Due to our industry ties, the RSA FirstWatch team was able to obtain an advance copy of the paper, and in doing so we were able to collect over 700 of the malware samples referenced in the report for analysis.

This analysis, focused almost exclusively on network behavior, allowed us to detail effective ways of detecting this malware on the network in real-time.

As a general rule, the RSA Security Analytics / RSA NetWitness approach to network analysis for these types of threats has always been a three-part process which is circular in nature:

  1. Identify expected network behavior
  2. Examine outliers
  3. Link intelligence
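As a toy illustration of steps 1 and 2 (the traffic mix below is invented for the example), profiling how often each User-Agent string appears on the network and surfacing the rare ones is one simple way to separate expected behavior from outliers:

```python
from collections import Counter

# Invented traffic mix: two common browser agents plus a handful of
# sessions using Hangover-style identifying strings.
observed = (
    ["Mozilla/5.0 (Windows NT 6.1)"] * 480
    + ["Mozilla/4.0 (compatible; MSIE 8.0)"] * 210
    + ["EMSCBVDFRT"] * 3
    + ["wininetget/0.1"] * 2
)

counts = Counter(observed)
total = sum(counts.values())

# Treat anything seen in under 1% of sessions as an outlier to examine.
outliers = sorted(ua for ua, n in counts.items() if n / total < 0.01)
```

The outliers then feed step 3: each rare value gets linked against threat intelligence before an analyst decides whether it is malware or merely an unusual application.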

Detection of Identifying User-Agents

In many APT malware cases, a non-standard user agent is observed as part of the command and control communication sequence and this case is no different. There are several case-related user-agent strings detailed in the paper:

EMSCBVDFRT
EMSFRTCBVD
FMBVDFRESCT
DSMBVCTFRE
MBESCVDFRT
MBVDFRESCT
TCBFRVDEMS
DEMOMAKE
DEMO
UPHTTP
sendFile

Additionally, the following user-agent strings are also present:

wininetget/0.1
file
test
vbusers
folderwin
smaal
simple
nento
bugmaal

When these user-agent strings are turned into a Security Analytics application rule they would look like the rule below and would allow a quick pivot on hangover-related malware traffic:

Client = emscbvdfrt,emsfrtcbvd,fmbvdfresct,dsmbvctfre,mbescvdfrt,mbvdfresct,tcbfrvdems,demomake,demo,uphttp,sendFile,wininetget/0.1,file,test,vbusers,folderwin,smaal,simple,nento,bugmaal

This particular pivot, where we identify meta elements that we don’t expect to exist in our environment, is a very common way of detecting both malware and unwanted applications on the network.
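Outside of Security Analytics, the same pivot can be approximated against ordinary proxy logs. The sketch below assumes a hypothetical log format of "client-ip user-agent"; the matching logic, not the log format, is the point:

```python
# User-agent strings taken from the report, lowercased for matching.
HANGOVER_AGENTS = {
    "emscbvdfrt", "emsfrtcbvd", "fmbvdfresct", "dsmbvctfre",
    "mbescvdfrt", "mbvdfresct", "tcbfrvdems", "demomake", "demo",
    "uphttp", "sendfile", "wininetget/0.1", "file", "test", "vbusers",
    "folderwin", "smaal", "simple", "nento", "bugmaal",
}

def suspicious_clients(log_lines):
    """Yield (ip, user_agent) for lines whose agent is in the Hangover set."""
    for line in log_lines:
        ip, _, agent = line.partition(" ")
        if agent.strip().lower() in HANGOVER_AGENTS:
            yield ip, agent.strip()

logs = [
    "10.0.0.5 Mozilla/5.0 (Windows NT 6.1; WOW64)",
    "10.0.0.9 EMSCBVDFRT",
    "10.0.0.7 wininetget/0.1",
]
hits = list(suspicious_clients(logs))
```

Note that generic entries like "file" and "test" will produce false positives on their own; in practice you would corroborate each hit with destination and payload context.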

Identifying Information in Query Parameters

While not as clear cut as identification of unique user-agents, many malware samples, especially Remote Access Trojans (RATs) used by APT attackers, commonly transmit identifying information as part of command and control check-in traffic.

In this case, we see similar behavior in which the computer name of the analysis environment “RemotePC” as well as the logged in user “admin” is identified in plaintext during the C2 check-in of many of the identified samples:

[Screenshot: plaintext computer name and username visible in the C2 check-in query string]
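A rough way to hunt for this behavior in your own traffic (an illustrative sketch, not the FirstWatch tooling) is to flag outbound URLs whose query-parameter values carry the local computer name or username in plaintext. The parameter names "cn" and "un" below are invented for the example:

```python
from urllib.parse import urlparse, parse_qs

def leaks_identity(url, hostname, username):
    """True if any query-parameter value contains the hostname or username."""
    params = parse_qs(urlparse(url).query)
    values = " ".join(v for vals in params.values() for v in vals).lower()
    return hostname.lower() in values or username.lower() in values

# Mimics the observed check-in: computer name "RemotePC", user "admin".
url = "http://example-c2.test/up.php?cn=RemotePC&un=admin&ver=1"
flagged = leaks_identity(url, hostname="RemotePC", username="admin")
```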

Identifying C2 domains

Lastly, establishing domain intelligence by using malware analysis and existing known compromise, plus online research, passive DNS and other methods, we are able to build a large feed of domains which identify suspect traffic.

In this case, RSA FirstWatch added specific domain intelligence related to the hangover intrusion set on 4/30/13.    Historic hits to these domains can be located with the following custom drill:

threat.category = research && threat.desc = apt-domain-a-cow_star, apt-domain-a-hanove, apt-domain-a-trojan.apt.snowtime, apt-domain-a-backdoor.apt.anke, apt-domain-a-backdoor.apt.vbupload, apt-domain-a-dragoneyemini_smackdown, apt-domain-a-smackdown, apt-domain-a-hanove2, apt-domain-a-appinbot, apt-domain-a-hanovelarge

These three detection methodologies can be applied to this and future incidents for proactive detection of advanced threats.
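Applied outside of Security Analytics, the same feed-matching idea looks roughly like the sketch below. The feed entries are placeholders, not the actual FirstWatch feed, and real matching would also normalize IDNs, wildcards and so on:

```python
# Placeholder feed entries standing in for a domain intelligence feed.
C2_FEED = {"badc2-example.ru", "another-c2.test"}

def is_known_c2(hostname):
    """True if hostname equals, or is a subdomain of, any feed entry."""
    hostname = hostname.lower().rstrip(".")
    parts = hostname.split(".")
    # Generate every suffix: www.badc2-example.ru -> badc2-example.ru -> ru
    suffixes = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(suffixes & C2_FEED)

checks = [is_known_c2("www.badc2-example.ru"), is_known_c2("example.com")]
```

Matching on suffixes rather than exact strings is what lets a single feed entry catch the attacker's throwaway subdomains.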

Special thanks to the researchers at FireEye and Dell Secureworks for their assistance in malware analysis and classification tasks.

Happy Hunting!

Alex Cox, MSIA, CISSP, GPEN, GSEC is a Senior Consultant and Security Researcher with RSA FirstWatch team responsible for advanced threat intelligence research. Alex has worked more than a decade in IT with a background in desktop architecture, emerging threat research, network forensics and behavioral malware analysis.


Groove Theory of GRC – Postulate #2: Duet, Trio, Quartet, Orchestra


The initial inspiration of my “Groove Theory of GRC” was Rocco Prestia, the bass player for the funk band Tower of Power.  His definition, or lack thereof, of the term groove started my thought process on how very important things can exist without exact scientific explanation.   In my last blog, I talked about combining Musicality and Performance to create a special musical experience and how GRC should strive for this powerful combination through Visibility and Accountability to result in Performance Optimization.  Now I want to explore the complexities of any musical endeavor.  While solo performances can be captivating, a full orchestra performing in perfect concert together is one of the highest forms of human collaboration and expression.  So on to postulate #2.

Postulate #2:  The more pieces of the business involved; the more complex the challenge but the greater the value.

Across the spectrum of GRC activities, multiple pieces of the business need to pick up their instruments and build to the crescendo of a well-oiled organization.  This may be a flowery way of putting it to fit my running analogy, so let's get down to brass tacks:  everybody needs to play nice in the sandbox.  Not as dramatic, but that is the bottom line.  Organizations that build walls, foster politically motivated cultures, enable kingdom-building and all of the bad behavior we saw on the playground in kindergarten will struggle to make the right decisions and will eventually face a serious business breakdown.

GRC is one of those avenues to break down the barriers between parts of the business.  If an organization can rally around a significant regulatory compliance challenge (as many companies faced with Sarbanes Oxley) or unite to respond to a major calamity (as organizations experienced during recent natural disasters), then the organization should be able to  band together to operationalize risk and compliance processes.   Domains of the business such as Information Technology, Finance, Audit, Legal, Compliance and others are necessary to build the right fabric across the organization.  A common strategy, with defined objectives and executive buy-in, will go a long way.

Each domain, or department, will at times seek to build its own GRC approach.  This is completely understandable, as each domain has its own drivers and needs.  Information Technology may utilize GRC to improve IT service responsiveness, reduce security risks and maintain compliance with data protection standards.  Finance may focus GRC on financial reporting processes, look to reduce capital, market or liquidity risk and maintain compliance with accounting practices.  G, R and C mean different things to different operational elements.  However, the organization can begin to bring those together into a more concerted, complementary approach through an enterprise strategy.

Back to my Groove Theory:  Most organizations will start with a string quartet or jazz trio or folk singing duo.  The goal is then to bring more and more instruments into the ensemble until a full orchestra is making music together from the same song sheet.   Obviously that singular score, if its parts are written with harmony and based on solid music theory, can enable the movements, counter-melodies and dynamics that make for a beautiful symphony.   It is at this point where the organization transitions from singular players into a larger, more complex performance.   The result:  Opus # 9 in GRC sharp.

 

* I had to include a link to this video showing Tower of Power from 1973 to 2011.  A band as tight and funky as they come, even after 38 years of creating music.  Now THAT is the type of sustainable collaboration we all hope we could foster in our organizations.


Mandiant Malware? Not Exactly.


By Alex Cox, Senior Researcher, RSA FirstWatch team

The RSA FirstWatch team uses a number of techniques to detect emergent threats and trends.  Much of the output of the analysis process becomes input for the RSA FirstWatch feeds and for new rules to detect botnet variants, malicious user-agent strings, and suspicious queries that are strong indicators of compromise.  One unique executable that was downloaded caught our eye today:

[Screenshot: the downloaded "_load.exe" file]

This file, named "_load.exe", appeared to have been downloaded as part of a large Zbot/Tepfir infection package.  Here is the screenshot, from RSA Security Analytics, summarizing the alert types seen post-infection in the sandbox:

[Screenshot: RSA Security Analytics time graph of post-infection alert types]

But what really got our attention was the falsified manufacturer’s name and author’s name.  In CFF Explorer, we see this:

[Screenshot: falsified manufacturer and author names shown in CFF Explorer]

Of course, given the way the file was downloaded, we knew this wasn't a legitimate Mandiant binary, but a piece of malware with planted metadata using Mandiant's name.  According to VirusTotal, it had first been seen 15 hours earlier, and only ESET identified the file as a malicious downloader.  You can see the VT report here:

https://www.virustotal.com/en/file/2714253ae4686360b45acd3fb2658966b6f61957a0b42d93cccad4a098b0a9da/analysis/

 

Digging Further

With a bit of further digging, we see that this sample was a secondary download of an initial sample hash of 1aee6a5859ecb9b43cc752244be5bec6.  This hash has been observed in the past multiple times with a filename of:

FedEx Shipment Notification.PDF.exe

This file was first observed on May 5, 2013, also with a fairly low antivirus detection rate at the time (5 out of 46), but it is fairly well detected now:

https://www.virustotal.com/en/file/eab3ee7c0c843dec8f6c41193465c6ff93ae914606520bb1a1dfd1e26a8862f0/analysis/

The submitted filename makes it highly likely that this sample was distributed via a "Shipment Notification" mass spam campaign.  This infection vector has been highly effective over the past few years for spreading cybercrime malware and has garnered the attention of FedEx, which has a page warning its customers about this type of attack: http://www.fedex.com/dm/fraud/virusalert.html

Interestingly, both samples appear to be “downloader” malware, which only serve to download other malware on an infected machine.  These types of Trojans are commonly used in Pay-Per-Install campaigns, where criminals pay the owner of an existing botnet to have their infected machines push a piece of malware belonging to the buyer.  This approach significantly simplifies the process of building a new botnet for the buyer.

Further malware analysis reveals that the observed second-stage malware has no built-in persistence mechanism, meaning that a simple reboot clears the malware from memory.  This is somewhat unusual, but may indicate a “single-use” methodology for subsequent infection.  At this time, the RSA FirstWatch team has not observed a third-stage download occur with this sample.

 

Network Artifacts

Sample 1 – 1aee6a5859ecb9b43cc752244be5bec6 has been observed connecting to the following location for C2, a server known to be malicious: hxxp://asdacbxn34.us//area/la.php

and the following locations for second-stage downloads: a religious institution's website, which appears to have been compromised, and another site known to host malware:

hxxp://www.***.uk/_load.exe

hxxp://178.208.82.164/_load.exe

Passive DNS analysis indicates that the following domains have also resided on the C2 IP, all of which are known to be malicious domains:

mesalk.ru

houselle.ru  

davalki-tut.ru

nationalconstruction.ru


Sample 2 – bcadffb2117751fb89a4bb8768681030 – “Mandiant Malware”

This sample, downloaded as noted above from:

hxxp://www.***.uk/_load.exe

hxxp://178.208.82.164/_load.exe

connects to the following IP address, known to be associated with cybercrime, to check for additional malware to download:

94.23.234.36

This IP has mapped to the following known malicious domain names:

lamodaesbarata.es

ovh66m.exclust.com

ks307892.kimsufi.com

tusvestidos.com

Detection in RSA NetWitness Security Analytics:

These particular malware connections can be located in an RSA Security Analytics infrastructure with a number of simple pivots on known infrastructure:

alias.host = asdacbxn34.us, mesalk.ru, houselle.ru, davalki-tut.ru, nationalconstruction.ru, 178.208.82.164, lamodaesbarata.es, ovh66m.exclust.com, ks307892.kimsufi.com, tusvestidos.com

and

ip.dst = 178.208.82.164,94.23.234.36

Generically, suspicious behavior involving executable downloads such as these can be detected by creatively combining observed extension metadata with known filetypes.  In this case:

Extension = exe && filetype != windows executable
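At the byte level, the same mismatch check can be sketched as follows. A Windows executable begins with the two-byte "MZ" magic, so an ".exe" name without it, or an MZ payload hiding behind another extension, is worth a second look. This is a simplified illustration of the idea, not the Security Analytics parser:

```python
def mismatched_executable(filename, payload):
    """Flag .exe names lacking the MZ header, and MZ payloads hiding
    behind a non-executable extension."""
    is_exe_name = filename.lower().endswith(".exe")
    is_pe_bytes = payload[:2] == b"MZ"
    return is_exe_name != is_pe_bytes

checks = [
    mismatched_executable("invoice.pdf", b"MZ\x90\x00"),  # PE hiding as a PDF
    mismatched_executable("_load.exe", b"MZ\x90\x00"),    # name and bytes agree
]
```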

 

Summary

In this particular case, we see a common cybercrime attack methodology (mass spam, a social engineering hook and a downloader Trojan) crossing over into APT space, likely due to all of the recent press coverage of Mandiant and other APT-related investigations.  This is further evidence of the constant evolution of online attacks based on current events.

Happy Hunting!

 

Alex Cox, MSIA, CISSP, GPEN, GSEC is a Senior Consultant and Security Researcher with RSA FirstWatch team responsible for advanced threat intelligence research. Alex has worked more than a decade in IT with a background in desktop architecture, emerging threat research, network forensics and behavioral malware analysis.


VMware Virsto: A Very Smart Volume Manager For VMs

I think many IT professionals realize that many application performance issues eventually boil down to storage and physical I/O. That was true before server virtualization, and it's certainly true now. Storage array vendors do what they can. Operating system, hypervisor and database vendors do what they can as well. But between the two, there's the potential for a smart layer of storage software that does what the others can't.

As an example: way back when Solaris was popular, Veritas' VxVM and VxFS were almost ubiquitous. Both products offered an important value-added layer that neither the host OS nor the storage array did well. As a result, the Veritas products became almost a de-facto standard in larger Sun environments.

While I was exploring Virsto (recently acquired by VMware), I was struck by the similarities to what Veritas did years ago. It's clearly a smarter storage abstraction layer than either the hypervisor or the array can provide on their own. The real question: now that Virsto is owned by VMware, will Virsto technology end up being a de-facto standard for many VMware environments?

What Brought This About

Like many of you, I try to follow what all the various storage startups are...

Is flash the saviour of Software Defined Storage?



I have this search column open on Twitter with the term "software defined storage". One thing that kept popping up in the last couple of days was a tweet from various IBM people about how SDS will change flash. Or let me quote the tweet:

What does software-defined storage mean for the future of #flash?

It is part of a Twitter chat scheduled for today, initiated by IBM. It might just be me misreading the tweets, or the IBM folks look at SDS and flash in a completely different way than I do. Yes, SDS is a nice buzzword these days. I guess with the billion-dollar investment in flash IBM has announced, they are going all-in with regards to marketing. If you ask me, they should have flipped it, and the tweet should have read: "What does flash mean for the future of software-defined storage?" Or, to make it sound even more like marketing: is flash the saviour of Software Defined Storage?

Flash is a disruptive technology, and it is changing the way we architect our datacenters. Not only has it allowed many storage vendors to introduce additional tiers of storage, it has also allowed them to add an additional layer of caching in their storage devices. Some vendors have even created all-flash storage systems offering thousands of IOps (some will claim millions); with those devices, performance issues are a thing of the past. On top of that, host-local flash is the enabler of scale-out virtual storage appliances. Without flash, those types of solutions would not be possible, at least not with decent performance.
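The tiering idea can be sketched in a few lines of Python. This is a toy model of frequency-based tiering, not any vendor's actual algorithm: blocks that are accessed often enough earn one of the limited flash slots, and everything else stays on spinning disk.

```python
# Toy frequency-based tiering sketch (illustrative only): hot blocks are
# promoted to a capacity-limited flash tier, cold blocks stay on disk.
from collections import Counter

PROMOTE_THRESHOLD = 3  # accesses before a block is considered "hot"

def place_blocks(access_log, flash_slots):
    counts = Counter(access_log)
    # Hottest blocks first, capped by the available flash capacity.
    hot = [b for b, n in counts.most_common() if n >= PROMOTE_THRESHOLD]
    flash = set(hot[:flash_slots])
    disk = set(counts) - flash
    return flash, disk

log = ["a", "a", "a", "a", "c", "c", "c", "b", "b"]
flash, disk = place_blocks(log, flash_slots=2)
print(sorted(flash), sorted(disk))  # ['a', 'c'] ['b']
```

The point of the sketch is the trade-off the paragraph describes: flash capacity is scarce, so a tiering engine spends it only on the working set that earns it.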

Over the past couple of years, host-side flash has also become more common, especially since several companies jumped into the huge gap that existed and started offering caching solutions for virtualized infrastructures. These solutions allow companies that cannot move to hybrid or all-flash solutions to increase the performance of their virtual infrastructure without changing their storage platform. Basically, what these solutions do is make a distinction between “data at rest” and “data in motion”. Data in motion should reside in cache, if configured properly, and data at rest should reside on your array. These solutions will once again change the way we architect our datacenters. They provide a significant performance increase, removing many of the performance constraints linked to traditional storage systems; your storage system can once again focus on what it is good at: storing data / capacity / resiliency.
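The “data in motion” vs. “data at rest” split is essentially what any host-side read cache does. A minimal Python sketch, using a plain LRU policy as a stand-in for whatever a real caching product implements:

```python
from collections import OrderedDict

class HostReadCache:
    """Toy LRU read cache: hot ("in motion") blocks are served from
    host-side flash; cold ("at rest") blocks fall through to the array."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block_id, array):
        if block_id in self.cache:           # served from flash
            self.hits += 1
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        self.misses += 1                     # fall through to the array
        data = array[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:  # evict the coldest block
            self.cache.popitem(last=False)
        return data

array = {b: f"data-{b}" for b in range(10)}
cache = HostReadCache(capacity_blocks=3)
for b in [1, 2, 1, 3, 1, 2]:                # a small "hot" working set
    cache.read(b, array)
print(cache.hits, cache.misses)             # 3 3
```

Even this toy shows the effect the post describes: repeated reads of the working set never touch the array, which is left to do what it is good at.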

I think I have answered the question, but for those who have difficulties reading between the lines: how does flash change the future of software defined storage? Flash is the enabler of many new storage devices and solutions, be it a virtual storage appliance in a converged stack, an all-flash array, or a host-side I/O accelerator. Through flash, new opportunities arise: new options for virtualizing existing (I/O-intensive) workloads. With it, many new storage solutions were developed from the ground up. These are storage solutions that run on standard x86 hardware, integrate tightly with the various platforms, and offer things like end-to-end QoS capabilities and a multitude of data services. These solutions can change your datacenter strategy and be a part of your software defined storage strategy as you take the next step in optimizing your operational efficiency.

Although flash is not a must for a software defined storage strategy, I would say that it is here to stay and that it is a driving force behind many software defined storage solutions!

"Is flash the saviour of Software Defined Storage?" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Hadoop On VMware -- Another Workload Conquered?

If you're like me, you've been promoting the idea of server virtualization for many, many years. You're also probably familiar with the standard pushback: what about performance? I can clearly remember going through before-and-after charts over and over -- again and again -- for workload after workload: databases, Exchange, web servers, etc. You had to convince people one painful step at a time. Here comes a new workload that more IT organizations are stepping up to: Hadoop in all its forms. While most IT professionals can see the many, many benefits of virtualizing Hadoop environments, they frequently encounter stubborn resistance from people who "just know" doing so will unacceptably impact performance. Well, they're wrong. And there's hard data to prove it. Big Data At VMware While perhaps not as glamorous as other parts of the VMware portfolio, there's a focused team at VMware working hard to bring the core VMware value proposition to big data environments. They've already brought you HVE (Hadoop Virtualization Extensions) as well as Project Serengeti. Today, another accomplishment: both VMware and Cloudera announced that they've done extensive joint qualification and performance characterization. That's all well and good, but what is *really* interesting is the performance white...

The Fragmented Picture of Mobile Security


I was in Munich last week, speaking at the European Identity and Cloud Conference in a panel on standards for mobile security. It was a very good session, not least because of the colleagues who joined me on the panel. John Sabo spoke about the work he’s doing in privacy frameworks. Tony Nadalin spoke about his work in identity management and cloud. Sven Gossel discussed his work in crypto interfaces and mobile environments. There were lots of good questions from our moderator, Fulup ar Foll, as well as great comments and questions from the audience.

Our panel was only one of a number of discussions of mobile security at the conference. In his Thursday keynote, Dr. Kai Rannenberg spoke to the need for hardware roots of trust. Pamela Dingle presented the authentication and authorization token models in OpenID. Craig Burton introduced a UK-based company that has automated the creation of open APIs consumable by mobile devices. There were sessions on VDI and mobile security, SSO and mobile security, and trust frameworks and mobile security: lots of information, across many important and interesting topics.

But in looking back on the various discussions of mobile security at the conference, what strikes me most of all is the fragmented nature of the discussion. Perhaps there were, among the keynotes and sessions I missed, some that gave a more complete picture of where we stand in terms of mobile security. But that was not something I heard or could derive across sessions. Moreover, despite the breadth of discussion, there were nonetheless a number of topics related to mobile security that I didn’t see at the conference, perhaps most strikingly the critical role that analytics, both in risk-based identity management and in threat response, should play in mobile security.

In fairness to the EIC conference, understanding of mobile security seems fragmented across the industry. Developing a comprehensive and comprehensible view of mobile security should be a concern for all of us engaged in the practice of cybersecurity.


Number of vSphere HA heartbeat datastores less than 2 error, while having more?



Last week on Twitter someone mentioned he had received the error that he had fewer than two vSphere HA heartbeat datastores configured. I wrote an article about this error a while back, so I asked him whether he had two or more. He did, so the next thing to do was to “reconfigure for HA” in the hope of clearing the message.

The number of vSphere HA heartbeat datastores for this host is 1 which is less than required 2

Unfortunately, after reconfiguring for HA the error was still there, so the next suggestion was to look at the “heartbeat datastore” section of the HA settings. For whatever reason, HA was configured to “Select only from my preferred datastores” while no datastores were selected, just like in the screenshot below. HA does not override this setting, so when it is configured like this NO heartbeat datastores are used, resulting in this error within vCenter. Luckily the fix is easy: just set it to “Select any of the cluster datastores”.

[Screenshot: “the number of heartbeat datastores for host is 1”]
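The behaviour described above can be modelled in a few lines of Python. The policy names below mirror the vSphere API's ClusterDasConfigInfo.hBDatastoreCandidatePolicy values, but the selection logic is a simplified standalone sketch, not VMware's actual implementation:

```python
# Standalone model of how vSphere HA chooses heartbeat datastore candidates
# under its three policies. Policy names mirror the vSphere API enum; the
# selection logic itself is a simplified sketch.

REQUIRED = 2  # HA wants at least two heartbeat datastores per host

def heartbeat_candidates(policy, cluster_datastores, preferred):
    if policy == "allFeasibleDs":    # "Select any of the cluster datastores"
        return set(cluster_datastores)
    if policy == "userSelectedDs":   # "Select only from my preferred datastores"
        return set(preferred)        # HA will not look beyond this list
    if policy == "allFeasibleDsWithUserPreference":
        chosen = set(preferred)      # preferred first, top up from the rest
        for ds in sorted(set(cluster_datastores) - chosen):
            if len(chosen) >= REQUIRED:
                break
            chosen.add(ds)
        return chosen
    raise ValueError(f"unknown policy: {policy}")

cluster_ds = {"ds01", "ds02", "ds03"}
# The misconfiguration from this post: preferred-only policy, nothing selected.
broken = heartbeat_candidates("userSelectedDs", cluster_ds, preferred=set())
fixed = heartbeat_candidates("allFeasibleDs", cluster_ds, preferred=set())
print(len(broken), len(fixed))  # 0 3
```

With the preferred-only policy and an empty preferred list, the candidate set is empty, which is exactly why vCenter raised the “less than required 2” warning; switching the policy makes every cluster datastore eligible again.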

"Number of vSphere HA heartbeat datastores less than 2 error, while having more?" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.
