Data Availability – Silverstring
https://www.silverstring.com

An Inside Look at IBM FlashCore Module 4 and Anomaly Detection
https://www.silverstring.com/blog/an-inside-look-at-ibm-flashcore-module-4-and-anomaly-detection/ – 12 July 2024

At Silverstring, we've been exploring how different technologies impact anomaly detection in modern IT systems, and one interesting piece of hardware we've been looking at is IBM's FlashCore Module 4 (FCM 4). While it's easy to get lost in the tech jargon, we wanted to break down how FCM 4 functions and what role it can play in keeping systems resilient.

What is IBM FlashCore Module 4?

IBM FlashCore Module 4 is a hardware-based solution that adds speed and efficiency to storage systems, particularly those using IBM FlashSystem arrays. It’s built on NVMe (Non-Volatile Memory Express) technology, which is essentially a fast lane for data transfer. IBM has also integrated features like compression and encryption at the hardware level. This isn’t just a software layer on top of the storage; it’s embedded directly into the physical components.

How Does It Support Anomaly Detection?

Here’s where things get interesting. One of the challenges many businesses face is detecting anomalies in real time, whether that’s a spike in traffic, unusual patterns in data access, or potential security breaches. From what we’ve observed, the FCM 4 can help with this because it operates directly within the hardware, allowing for real-time monitoring of huge datasets. When something goes off-script in your I/O patterns, for example, FCM 4 can flag this immediately. This brings up a natural comparison with traditional, software-based detection systems, which often rely on backend analytics to identify anomalies.

Hardware vs. Software-Based Detection: What’s the Difference?

The key advantage of FCM 4's hardware-based detection is its speed. It monitors data in real time at the storage level, so there's no waiting for external processes to analyse what's happening. This gives IT teams an immediate head start in identifying and reacting to issues (such as an encryption event in progress) before they can do serious damage.
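
This post doesn't cover how FCM 4 implements its detection internally, but the underlying idea is easy to illustrate: data being bulk-encrypted looks statistically random, so a sudden, sustained jump in the entropy of incoming writes is a strong warning sign. The sketch below is a simplified, software-level illustration of that principle only; the threshold, window size and sample data are illustrative assumptions, not FCM 4 parameters.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Bits per byte: close to 8.0 for encrypted/random data, much lower for typical files."""
    if not block:
        return 0.0
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())

def flag_encryption_like_writes(blocks, threshold=7.5, window=32):
    """Yield positions where a sliding window of recent writes looks uniformly random.

    threshold and window are illustrative values, not FCM 4 parameters.
    """
    recent = []
    for i, block in enumerate(blocks):
        recent.append(shannon_entropy(block))
        recent = recent[-window:]
        if len(recent) == window and sum(recent) / window > threshold:
            yield i, sum(recent) / window

# Simulated I/O stream: ordinary, compressible writes followed by "encrypted" ones.
writes = [b"customer record 42, balance 100.00 " * 30] * 40 + [os.urandom(1024) for _ in range(40)]
for idx, avg in flag_encryption_like_writes(writes):
    print(f"possible encryption event: window ending at write {idx}, mean entropy {avg:.2f} bits/byte")
    break  # the first alert is enough for the demo
```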

However, software-based detection tools have their own strengths. From our own work with these solutions, especially those driven by AI and machine learning, they typically offer more flexibility. They don’t just react to threats as they happen; they can scan data proactively and even identify dormant threats, like ransomware, before an encryption event begins. This gives businesses a chance to prevent a disaster before it strikes, rather than just responding quickly when it does.

Why Both Layers Are Important

It is our opinion that relying solely on one form of detection, whether hardware or software, can leave gaps in your defence. FCM 4 is ideal for rapid, real-time anomaly detection, but combining it with proactive, software-based tools adds an extra layer of protection. While FCM 4 will detect an encryption event in progress, software-based tools can prevent it from happening in the first place by identifying the threat earlier.

Is FCM 4 the Right Fit?

So, based on our findings, FCM 4 is great for businesses that need immediate, fast anomaly detection where performance is critical. But to truly secure your infrastructure, it’s worth layering this with software-based tools that can offer proactive threat detection and long-term insights. In the end, having both forms of detection provides a more complete, resilient approach to handling anomalies and cyber threats.

Should You Be Backing Up Containers?
https://www.silverstring.com/blog/should-you-be-backing-up-containers/ – 28 June 2024

At first glance, containers seem like temporary instances that don’t require backup. Containers are designed to be lightweight, short-lived environments for running applications or microservices. This might lead you to assume that since they can be quickly recreated, backup isn’t necessary. However, the reality is a bit more complex and depends on how you’re managing both your application and its data.

The Nature of Containers

Containers (such as those created using Docker or OpenShift) are often short-lived. They can be spun up or down based on demand, which means you don’t need to back up the container itself. If a container fails, it can easily be restarted from the base image, making backing up the actual running container redundant. However, as container technology has evolved, so too have the ways we use containers, especially in relation to persistent data.

Persistent Data in Containers

When containers were first introduced, they were mainly used for stateless applications, meaning they didn't store any data that needed to persist beyond the life of the container. But today, containers are often used for stateful applications, including databases, and this changes the equation entirely. If you store data in a file system or volume attached to a container, that data needs to be protected, just as it would if it were in a traditional virtual machine (VM) or physical server.

The Challenge of Backing Up Container Data

Backing up container data introduces challenges that differ from traditional methods. For example, in a VM, data is tied directly to the machine and is easily identified by a name or other metadata. But containers use GUID-style identifiers, making it more difficult to track and back up data consistently. This is especially true when applications scale up or down dynamically. One day your application might have 10 containers, and the next only 5, each with different volumes.
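
As a rough illustration of how to keep track of that data despite the churn of container identifiers, the sketch below uses the official Kubernetes Python client to inventory the persistent volume claims behind an application, so that a backup policy can be attached to something stable. The "app" label and the application name are assumptions for the example, not part of any particular backup product.

```python
from kubernetes import client, config

def pvcs_for_app(app_label: str):
    """List the persistent volume claims behind one application, by label rather than by
    the GUID-style identifiers of whichever containers happen to be running today."""
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    pvcs = v1.list_persistent_volume_claim_for_all_namespaces(
        label_selector=f"app={app_label}"
    )
    # Stable, human-meaningful identifiers that a backup policy can target
    return [
        (p.metadata.namespace, p.metadata.name, (p.spec.resources.requests or {}).get("storage"))
        for p in pvcs.items
    ]

if __name__ == "__main__":
    for ns, name, size in pvcs_for_app("orders-db"):  # "orders-db" is a made-up example app
        print(f"{ns}/{name}: {size} needs a backup policy")
```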

What’s the Right Approach?

The key to backing up container data lies in understanding your application's data model and knowing how best to protect it. Should you back up at the application level or at the file system level? Do you need to protect the entire data set or just specific pieces? The answer will depend on your particular use case, but one thing is certain: while you don't need to back up the containers themselves, you absolutely need a strategy for protecting the data associated with them.

Conclusion

Containers may have been designed to be short-lived, but the data they handle can be crucial to your business. As more companies rely on stateful applications in containerised environments, having a clear strategy for protecting that data is essential. Understanding the nuances of container backup will ensure you’re not caught off guard when something goes wrong.

In a world of ever-evolving technology, one thing remains constant: your data matters. Make sure you're backing up what counts. If you would like guidance and advice on protecting your container environment, reach out to us using the link below.

10 More Ways to Licence Spectrum Protect
https://www.silverstring.com/blog/10-more-ways-to-licence-spectrum-protect/ – 9 December 2019

Three years ago, my colleague posted a superb article on the 10+ ways to licence IBM Spectrum Protect. The blog was very popular but it’s time for an update. A lot can and has changed in three years. Before getting stuck into this post, we recommend reading the original blog to familiarise yourself with the basics.

Gravy Train

We talk to many users of IBM Spectrum Protect about what they are getting in return for their annual support maintenance fees. It's not so much the absolute costs, more the perceived value, or total return on investment. To use a phone analogy, it's like paying for the latest iPhone but only using the SMS and call features. The incidence of technology debt is pandemic in the backup and recovery field, but is it the manufacturer's fault? Backup is often seen as a cost of doing business and is notoriously hard to manage and upgrade. Migration projects take time and, just like when decorating your house, there is always one "room" left to do.

The house analogy might be more appropriate than the phone one. Phones are usually disposed of, with care for the environment we hope, after every upgrade cycle. Your data probably is not; it represents the valuables in your home. Spectrum Protect is a quality product and ranks highly for giving users peace of mind. For many, though, it's not as modern a "house" as they would like.

Money to Burn

Some companies get tired of their cluttered houses and throw their lot in with a second, or even third, storage company. Before you know it, not only is the garage full of stuff, the yard is littered and you have more items in local storage. It can’t be the best solution.

Back to what’s changed since our 2017 blog post.

Later that same year, IBM announced the general availability of Spectrum Protect Plus. Though perceived by some as a new product, we believe it's an upgrade to help you modernise Spectrum Protect and gain greater value from your investment. Spectrum Protect now offers a much simpler administrative experience for users, as well as covering more of the "cloud-native" infrastructure starting to penetrate the enterprise. The new software adopts the "agentless" model used by many backup companies targeting the VMware protection market, whilst allowing for very efficient long-term data retention, for which Spectrum Protect is lauded.

This presents the Spectrum Protect user with the opportunity to modernise their “house” from the inside out, whilst eliminating losses caused by unnecessary use of “garages, yards and third-party storage boxes”. Second homes are great, but you wouldn’t keep your valuables in them. Far better to get a new kitchen or bathroom, than buy a second home.

On the Money

Referencing the great advice given in the original blog, which licence model is best for bringing in Spectrum Protect Plus to modernise your data protection system?

Spectrum Protect Plus (SPP) is licenced on a per-VM basis. However, if you have a capacity-based licence model, you can offset some of your capacity allowance to bring in the new technology. The conversion is one terabyte (1 TB) of back-end capacity to ten (10) virtual machines. If you subsequently copy the snapshots into Spectrum Protect, say for long-term retention, you don't pay again for the use of capacity in that repository. This is not true if you use a third-party product for your snapshots and copy that data into Spectrum Protect. It makes commercial sense to replace any third-party software, such as Veeam, with Protect Plus. So, by modernising your "house" from the inside and reclaiming the cost of your "second home", you consolidate and simplify your protection estate.
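
A quick worked example of that conversion, with illustrative numbers rather than a quote:

```python
# 1 TB of back-end capacity entitlement converts to 10 Spectrum Protect Plus VMs (as above).
TB_PER_10_VMS = 1.0

def capacity_to_offset(vms_to_protect: int, owned_capacity_tb: float):
    """How much of an existing back-end capacity licence a given VM count would consume."""
    needed_tb = (vms_to_protect / 10) * TB_PER_10_VMS
    remaining_tb = owned_capacity_tb - needed_tb
    return needed_tb, remaining_tb

needed, remaining = capacity_to_offset(vms_to_protect=250, owned_capacity_tb=100)
print(f"250 VMs under SPP would consume {needed:.0f} TB of entitlement, leaving {remaining:.0f} TB")
# -> 250 VMs under SPP would consume 25 TB of entitlement, leaving 75 TB
```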

The Bottom Line

If you are familiar with cloud-billing models and are of a mind to preserve cash, you can switch to a pay-for-what-you-use subscription model. This has the added benefit of avoiding any of those intrusive vendor licence audits. This "no surprises" model is much more flexible than the old IBM PVU or legacy capex options. It is especially suited to companies moving data between the core, the cloud and the edge.

When combined with management platforms such as Predatar, customers can more easily track usage and allocation of licences, down to the business unit, application or even individual node.

Just as with bank accounts and utility bills, customer loyalty is often rewarded with higher prices. With so many ways to consume the IBM software, it makes sense to consider your options.

If you want to stop losing money, do get in touch to find out more about our IBM Spectrum Protect Licence Workshops.

Who said storage is boring?
https://www.silverstring.com/blog/who-said-storage-is-boring/ – 30 October 2019

Silverstring's new Managing Director, Rick Norgate, reflects on his first data storage conference, at IBM Tech U in Prague.

So I have just got back from two days at an IBM storage event in Prague and I wanted to share my initial impressions with you. For those of you who do not know me, I have recently joined Silverstring as Managing Director. My background has always been in technology but with a focus on cloud delivered payroll products. Silverstring’s specialism of data protection, recovery and resilience is a new field for me. So right now I am on a fascinating learning curve while I figure out my Hypervisors from my vSnaps from my Kubernetes.

But enough about me, let's get back to the IBM event. The reason I was at this event is because IBM is one of our key strategic partners. Its storage solutions and backup applications are a key underpinning of our Backup-as-a-Service model for many of our larger enterprise customers. Throughout my entire career I have always viewed IBM as this huge beast that is complex, traditional and slow to react. Leaving for the airport, I was joking with my wife about how exciting a two-day conference on IBM storage was going to be, especially since I was used to the fast-moving world of Payroll (PS: Payroll is anything but fast moving).

Day 1 opened with a one-hour keynote to a packed conference hall of over 800 people. The keynote focused on a number of new announcements that centred around two main concepts: storage being seen as a solution rather than just hardware, and the role that storage solutions play in data security and resilience. Day 2 then built on this with a deeper dive into these topics and an overview of how Red Hat supports this.

What do we want? Solutions! When do we want them? Now!

The key message from IBM is that no one wants to talk storage hardware anymore, because hardware alone has no benefit to the customer and the industry should be much more customer focused. This is something that absolutely resonates with me: I am a huge fan of customer-centricity and strongly believe the best products and services in any industry are those that are designed with customer needs at the centre. IBM's position is that it is the combination of hardware and software, brought together as customer-centric solutions, that matters, and it is to this end that IBM are pushing big on using storage for more than just storing your data.

One of the big innovations that came from the show was AI and machine learning being baked into their storage solutions. This will allow customers to put the data they back up to work and use the new tools to drive big data analytics and insights. The benefit is that it allows businesses to understand their data better and use the intelligence gained to make informed business decisions. By doing this, IBM are turning the conversation with customers away from backup and storage hardware and into one that has very obvious benefits for consumers.

Please mind the air gap!

The other side of the story that IBM told was around data protection and security. Again, IBM want to position storage as much more than just hardware, and they see storage as a key defence against cyber crime. With cyber crime becoming more and more frequent, affecting everyone from governments to corporations to individuals, the need to protect data is more important than it has ever been. Storage solutions (such as the cloud and modern tape) can provide an air gap, ensuring that backup data remains secure in the event of any network infection.

In Conclusion

So what were my impressions? The thing that struck me the most was that my perception of IBM as a traditional and slow business was blown away. The presentations I saw were energetic, full of confidence and delivered in a way that shows the team are proud of what they have built. Most importantly of all, the solutions were presented with the customer in mind; no technical jargon was used, and the IBM team talked about benefits rather than features. Yes, of course data protection, recovery and resilience is a complex business, but that does not mean we have to talk about it in complicated terms. Talking about customer benefits and putting customer needs at the heart of any offering is so important in today's market; it's time our industry caught up.

The Tyranny of Choice
https://www.silverstring.com/blog/the-tyranny-of-choice/ – 19 September 2019

If everything comes in 57 varieties, making decisions is hard work. With so many data protection vendors, focus on good outcomes, not technology.

When it comes to backup and disaster recovery, there is certainly no shortage of options to choose from. Twenty years ago, the market for backup software was carved up between Veritas, EMC and IBM. Today, there seem to be hundreds of vendors.

Start-up investors typically target opportunities with the largest addressable market share. For the last ten years in data protection this has meant the virtual server market, specifically VMware. Standards and easy-access APIs have created a plethora of copycat backup vendors, all struggling to form a position in the minds of buyers. You would assume this gold rush created value, choice and low prices, but in fact confusion, complexity and runaway costs have been the result. Why is this?

Enterprise data protection isn't easy. To be more precise, getting a consistently good outcome isn't easy. A good outcome is always being confident of recovering whole systems, not just individual VMs, in a timely manner, with all the data in place. For many customers with mixed operating systems and platforms this requires a lot of planning, daily administration and regular testing. Systems administrators are fully aware of the administrative grind required, which makes them susceptible to the headache-busting promises of "snake oil" sales people.

When good enough is not good enough

At Silverstring, we work with clients who are as obsessive about their data as we are and who share our philosophy that good enough data protection is not good enough. We add value by solving the hard stuff, even if it is not the largest addressable market share. Let’s give you an example.

IBM won the battle versus Sun, HP and others in the RISC/UNIX wars, but it won a market that has shrunk by a factor of 10. That said, just like the mainframe, many companies run their most business-critical applications on IBM Power architecture. So, when it comes to good outcomes, the time spent on protecting these systems should be disproportionately high. In our experience, investment of money and time follows the high-volume or mass market, rather than the tail-risk of the few revenue-bearing systems which lie outside of the standard deviation "bell curve". Risk is not evenly distributed, or predictable, and neither should be your investment in data protection.

It’s the recovery stupid

To recover a business process in the event of a disaster requires the coordinated recovery of potentially multiple inter-connected servers. If these servers are not homogeneous this can be a tricky task, requiring significant skill and coordination. Silverstring is working to eliminate the fear and unknown risks that come with recovering complex, mission-critical systems. How are we doing this?

Firstly, we are developing a common orchestration platform to recover multiple, heterogeneous but connected servers, from multiple backup solutions.

Secondly, we have built solutions which leverage the economics of cloud to allow you to self-provision and self-test recovery scenarios from any location, at any time.

Thirdly, since hunting perfection in data protection can be an arduous journey, we offer a fully managed data availability service, backed by our unique, Sleep Easy Guarantee™.
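
To make the ordering problem behind that first point concrete: recovering inter-connected servers means bringing them back in dependency order, so that databases are up before the applications that need them. The toy sketch below is a generic illustration of that idea (with invented server names), not Silverstring's orchestration platform.

```python
# Toy illustration of dependency-ordered recovery using Python's standard library.
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical estate: each server lists the servers that must be recovered before it.
depends_on = {
    "ad-dns":       [],
    "san-fabric":   [],
    "db-cluster":   ["san-fabric", "ad-dns"],
    "app-server-1": ["db-cluster"],
    "app-server-2": ["db-cluster"],
    "web-frontend": ["app-server-1", "app-server-2"],
}

recovery_order = list(TopologicalSorter(depends_on).static_order())
print(" -> ".join(recovery_order))
# e.g. ad-dns -> san-fabric -> db-cluster -> app-server-1 -> app-server-2 -> web-frontend
```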

Next step?

Our flagship solution, Alchemis Protect, provides a backup and recovery target platform from which recoveries can occur with minimal manual effort, remotely and on a pay-as-you-use basis. This makes recovery testing on a frequent basis less time-consuming and far more cost-effective.

To find out more about our new offering for IBM Power System users, please check out this Silverstring web page –  Power DRaaS

Building Data Confidence for Cloud-Native Applications
https://www.silverstring.com/blog/building-data-confidence-for-cloud-native-applications/ – 4 August 2019

Old monolithic storage is giving way to robust and agile, open-source-inspired, software-defined technology. Why?

Old Storage meets Cloud

The growth of cloud-native applications is heralding a new dawn for the data storage sector and re-igniting interest in software-defined technology.

Storage purchasing practice has not changed much in twenty years, and the big players then are still the big players today. There are a few exceptions where specialists have ridden trends to build decent positions in the market (think Veeam for virtual data protection or Pure for high-speed flash storage), but the top three remain the same: DellEMC, Netapp and HPE.

Disruption is coming, but not from any one commercial entity. No, only a fundamental change to the computing model will upset the form book. The storage and backup solutions originally designed for first monolithic and then virtual applications won't fit the requirements of cloud-native applications. I'm not talking about "lift-n-shift" cloud using VMware to migrate existing workloads; I'm talking about the data management needs of newly written applications for containers, which allow multiple applications to share a single OS kernel.

Wall Street darlings still?

It’s difficult to track the growth of open source software because revenues are not easy to follow on Wall Street. Compare that to the corporate tech stocks like Netapp which supply data storage to enterprise datacenters. On Friday 2nd August, Netapp announced revenues 17% down on the previous year which saw its share price fall by 22%. Similarly, Pure Storage’s stock is at 50% of its highs for the year. The disruption which first hit the tech giants like HP and IBM is now hurting the specialist players.

This disruption will accelerate because the open source movement is getting much more powerful. IBM's purchase of Red Hat for $34 billion is testament to that and a good signal of further penetration of container technology into the enterprise.

Enter Containers

Despite the hype around container orchestration software like Kubernetes, corporate enterprises have yet to fully embrace the challenge of re-engineering for containers. Containers, though more efficient than virtual machines and more agile, have been considered too fragile for serious IT operations staff. Developers love the speed and simplicity of containers; the process is so easy they can afford to focus on writing great code and nothing else. The ephemeral nature of containers meant that data could be easily lost, which meant no IT admin could sanction their use in steady-state production workloads. This situation is changing fast.

Last year, a Kubernetes release went GA with persistent volumes, which give data a place to live even when containers spin down. These volumes can reside on the usual protocols: block, file and object. Another major development, at the start of 2019, was the Kubernetes 1.13 release, which went GA with the Container Storage Interface (CSI). This was the starting pistol for vendors to pile into developing drivers for CSI, now that they have a stable development and support interface.

For even greater confidence, production workloads require robust data backup and recovery systems. Most major backup applications from vendors like Veritas, Dell, IBM and Commvault don't natively support containers. Though still in beta, Kubernetes has released APIs for its volume snapshot feature. Looking back, when VMware provided VAPI it unleashed a wave of innovation and enabled powerhouse commercial entities such as Veeam. The imperative of digital transformation is driving cloud investment and open source development. According to Grand View Research, container adoption is expected to grow at 26% CAGR between 2019 and 2025. Several vendors, including IBM, are working on supporting containers in their backup/recovery software later in 2019.
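
For a feel of what those volume snapshot APIs look like from code, here is a minimal sketch using the official Kubernetes Python client. The API group/version, the VolumeSnapshotClass name and the namespace/PVC names are assumptions that vary by cluster and snapshotter release, so treat this as illustrative only.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# VolumeSnapshot is a custom resource provided by the external-snapshotter CRDs.
# "v1beta1" and "csi-snapclass" are assumptions; newer clusters use snapshot.storage.k8s.io/v1.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "orders-db-snap", "namespace": "prod"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",
        "source": {"persistentVolumeClaimName": "orders-db-data"},
    },
}

api.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1beta1",
    namespace="prod",
    plural="volumesnapshots",
    body=snapshot,
)
print("snapshot requested for PVC orders-db-data")
```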

Future Storage

As applications built on container technology move from testing into production and full-scale operations, they will need a stable bedrock of enterprise-class data storage and backup infrastructure in place. Developers won't accept the constraints of traditional storage administration; they will expect rapid provisioning, self-service and ease of use. IT admins will still be needed to manage the overall storage stack, but they will have to get out of the developers' way for general use. To deliver storage in a uniform and dynamic manner across on-premises and public clouds, with less intervention by storage administrators, requires a software-defined approach.

To finish, it's not all about where your data resides. In all likelihood, it will become more portable as applications are written for containers. What's important is that if you need to retain your data, then you need the security of persistent volumes and a way of protecting the data that isn't hampered by the ephemeral nature of the platform.

Industry stalwart's latest foray into hybrid cloud data protection
https://www.silverstring.com/blog/industry-stalwarts-latest-foray-into-hybrid-cloud-data-protection/ – 13 February 2019

IBM Spectrum Protect v8.1.7 and IBM Spectrum Protect Plus v10.1.3, release date 22 February 2019.

Announced on 12th February at IBM Think 2019 is a new release of both Spectrum Protect and Spectrum Protect Plus. Both products feature enhancements that show IBM has been listening to its users.

Spectrum Protect Plus v10.1.3

Rapid development of IBM's new data protection for VMware and Hyper-V continues. As well as VM snapshot backups with instant restore, SPP already allows backup and restore of SQL and Oracle databases on physical or virtual machines. V10.1.3 adds backup and restore of Exchange (with item-level recovery) as well as MongoDB.

The SPP snapshot repositories could already be replicated to give site-loss protection, but v10.1.3 adds high availability for the SPP server that manages backup and recovery. This is a major improvement over the current version and should see the product become more widely accepted as a result.

Optimised Offload for long-retention snapshots and backups

Offload of backups to Spectrum Protect is the method that Spectrum Protect Plus uses to store longer-retention copies of data on cheaper, slower storage. The previous version used SP for VE to send a copy to Spectrum Protect, but this was essentially a parallel full backup and restore process that only supported vSphere. Now the offload is to S3 object storage and is block-level incremental. This supports both vSphere and Hyper-V environments and can be to Spectrum Protect container storage pools via a new S3 connector, or to cloud object storage (IBM COS, Amazon S3, etc.).

Spectrum Protect v8.1.7

As well as a host of client and agent updates to support new versions of Windows/Exchange/SQL/Oracle and various security enhancements, the Spectrum Protect server now has enhanced diagnostics and tape drive support. The main new feature that users have been waiting keenly for is:

Retention Sets (codename: OneProtect)

Previously, to retain client backups from the same source for different periods (e.g. daily – 30 days, monthly – 12 months, yearly – 7 years), it was necessary to configure extra client/TDP instances and perform extra backups to different Spectrum Protect nodes on extra schedules. The initial configuration of this is time-consuming, and the resultant duplication of backup data meant extra storage, extra capacity licence cost and extra database overhead.

Now, with retention sets, the blocks/files from existing daily backups can be marked up in the database for longer retention, and your long-retention requirements are satisfied from a single ingest. Client and server-side configuration is greatly simplified, and massive savings in processing time, network bandwidth, server storage, capacity licence and support effort can be expected. The creation of retention sets can be automated, and it will run as a scheduled server process, requiring no tape mounts or duplication of data.
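
A back-of-envelope comparison, using made-up figures, of what that single-ingest approach avoids duplicating:

```python
# Illustrative only: one client's daily backup after data reduction.
daily_ingest_tb = 2.0

# Old approach: extra node instances and extra backups per retention tier
old_model_tb = (30 + 12 + 7) * daily_ingest_tb     # 30 dailies + 12 monthlies + 7 yearlies

# Retention sets: a single ingest; monthly and yearly points are existing blocks simply
# marked for longer retention in the database (ignoring change rate for simplicity)
retention_sets_tb = 30 * daily_ingest_tb

print(f"separate backups per tier: ~{old_model_tb:.0f} TB")
print(f"retention sets:            ~{retention_sets_tb:.0f} TB")
```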

It’s very encouraging to see that IBM are quickly addressing some of the features that users are looking for, and hopefully this is indicative that future releases will ramp up support and make the product suite more appealing for enterprise users. Speak to Silverstring to understand how both new releases can benefit you and to learn about our Alchemis Protect Managed Service.

10+ Ways to License Spectrum Protect (TSM)
https://www.silverstring.com/blog/10-ways-to-license-spectrum-protect-tsm/ – 27 January 2017

We’ve talked about licensing on the blog before, but that was way back in 2013, so it’s about time we revisited it. Why are there 10+ ways to license Spectrum Protect and what are the benefits and pitfalls of each?

Spectrum Protect has evolved a long way from its TSM roots: it has been rebranded Spectrum Protect and brought into the wider Spectrum family of software-defined offerings from IBM.

So let’s list the different ways to license Spectrum Protect:

  • Processor Value Unit (PVU)
  • Processor Value Unit (PVU) Sub Capacity
  • Capacity Bundle (backend)
  • Capacity Bundle (frontend)
  • Spectrum Suite
  • ASL Capacity backend
  • ASL Capacity frontend
  • Enterprise License Agreement (ELA)
  • Spectrum Protect Archive
  • Spectrum Protect for ProtecTIER
  • Spectrum Protect Device License

There are simply too many to cover in detail here. The important thing is that each has positives and negatives dependent upon the characteristics of the environment in which they are deployed.

Clearing the complexity

Let’s try to demystify these and give some insight into them and when you might use them.

From experience I hear people complaining about Spectrum Protect as being complicated and too big a challenge to master. It’s not and it doesn’t have to be.

The best way to approach licensing is to start by asking a few questions:

Do I store a lot of data? Do I have a small number of servers or a large number? And one very important one: am I using the features of Spectrum Protect that make life easier?

It boils down to this: a ratio between servers and capacity. If you have lots of data and only a few servers, you need to license the servers. If you have a small amount of data and a large number of servers, you need to license the capacity.

That then steers you in the direction of one model or another. It’s a steer, but not the final destination.
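
That steer can be made explicit with a trivial comparison. The prices and the PVU rating per core below are placeholders, not IBM list prices (check the IBM PVU table for your processors); the point is the ratio, not the absolute numbers.

```python
def cheaper_model(servers: int, cores_per_server: int, primary_pool_tb: float,
                  pvu_per_core: int = 70, price_per_pvu: float = 5.0,
                  price_per_tb: float = 350.0) -> str:
    """Compare illustrative costs of per-server (PVU) licensing vs back-end capacity licensing."""
    pvu_cost = servers * cores_per_server * pvu_per_core * price_per_pvu
    capacity_cost = primary_pool_tb * price_per_tb
    return "PVU" if pvu_cost < capacity_cost else "capacity"

# Few servers, lots of data -> per-server (PVU) licensing tends to win
print(cheaper_model(servers=4, cores_per_server=16, primary_pool_tb=400))   # PVU
# Many servers, little data -> capacity licensing tends to win
print(cheaper_model(servers=200, cores_per_server=16, primary_pool_tb=50))  # capacity
```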

Customers tell me that one of the biggest issues is compliance. IBM are auditing more and more, and if you are using PVUs, tracking, control and governance are not simple. I've lost count of the number of customers who upgrade their server estate and forget about buying more software to cover the new, bigger, faster processors. Customers deploy VMware clusters and get caught out by spiralling PVU counts. Customers forget to install ILMT, and if you don't know what ILMT is, it's painful.

For some, even if PVUs are cheaper, they still convert to a capacity model to do away with the management overhead.

When you are on a capacity model you need to think in a different way. It's easy, it's great, but now you have a new set of concerns. You no longer care about software deployment; you are entitled to install and use every Spectrum Protect tool, as detailed here. You can install wherever and whenever you want, without the need to track servers or how many cores of what type are in every server. And no more ILMT. The only thing you care about is that all-important number of TB in the primary pool: simple.

Oh and data growth!

You can manage this effectively. Things to think about are:

  • Am I using dedupe?
  • Have I switched on compression?
  • Have I cleaned house and removed all the stuff I don’t need?
  • Can I put more data on disk?
  • With access to all the TDPs, have I set up incrementals on VMware?

As a business partner who has deployed container pools and tracked the results via our management and automation solution, Predatar, we see an average of 4:1 data reduction. It is possible to get even better reductions, but each account is different and we want to set expectations correctly. The key point here is that Spectrum Protect is licensed on capacity after data reduction. This can flip the PVU vs back-end capacity debate firmly in favour of a capacity model.
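
A quick worked example of why that matters, using the 4:1 average quoted above (your reduction ratio will vary):

```python
# Back-end capacity licensing is measured after dedupe and compression.
front_end_tb = 200          # data as seen by the clients (illustrative figure)
reduction_ratio = 4         # 4:1 from container-pool dedupe + compression
licensed_backend_tb = front_end_tb / reduction_ratio
print(f"{front_end_tb} TB protected -> ~{licensed_backend_tb:.0f} TB of back-end capacity to licence")
# 200 TB protected -> ~50 TB of back-end capacity to licence
```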

It’s fair to say that I favour capacity models over PVU.

Now that I have nailed my colours to the mast, it's worth focusing on two of the most common capacity models: IBM's Suite for Unified Recovery (SUR) Backend Capacity and ASL.

The first thing to be clear about is that the software code is exactly the same. These are two different ways to access the same software.

What is the difference? The quick answer is that ASL is Opex and SUR is Capex. One is via IBM PA and one is via an IBM Business Partner.

Do you want to rent or to buy?

Buying is nice, we are all used to this, but buying IBM software isn't a one-off fee. Although you buy the software once, after the initial (normally) three-year period, every year IBM asks you to pay for support on the software. This enables you to keep up to date with the software and access support if you need it.

What if the cost of that software support was the same as the ASL rental? In my experience, the ASL rental can be cheaper. You avoid the upfront capex expenditure and get the same product for less than the annual IBM PA renewal.

Why is this the case? You are buying ASL at the Business Partner's price banding, not yours, and in the majority of cases the Business Partner has a better banding – because they're buying larger volumes for multiple customers.

ASL was designed to enable Business Partners to ‘aaS’ solutions. This means that you need to buy additional support offerings from the business partner. This doesn’t have to mean cloud or a managed service. My customers just need to use Predatar, or buy a number of Service Units.

Even with this element, I have written a good few business cases that show a customer can get a fully managed service to proactively manage their estate and all the ASL software, AND do so for less than the annual IBM PA renewal and projected new licence spend.

So why isn’t everybody doing this?

I really believe more should. As more organisations adopt a pay as you go, or utility mindset, it will become more popular.

One of the reasons I see people not going for ASL is a sense of being locked into a particular Business Partner. That is considered a risk, and one that outweighs the financial benefits. Another is that IBM sellers do not recommend it. The cynic in me notes that the revenue is associated with the Business Partner, not the end user, so it does not go towards individual sellers' targets. IBM sellers do get paid on it, but they don't all know that, or know how to claim it. This is one of the reasons we're running sessions with our IBM contacts to help them understand this better.

Adding up the licensing options

As I said at the start, every model has positives and negatives. Each option can be easily modelled with real numbers, and one size does not fit all. There is no need for guesswork.

What I always encourage my customers to do is review their options regularly. There are always ways to reduce software costs, the broader question is, are the benefits worth the cost of change?

If you want to review your Spectrum Protect licensing model, Silverstring have experts who can help you analyse your licensing and ensure you're getting the best possible value from your business recovery platform. Don't hesitate to email us at info@silverstring.com or call to organise your initial free licensing review, to see if you can save costs or get better value from your existing implementation.

For an updated version of this licensing blog, please visit our latest post on this topic. It can be found here.

Spectrum Protect 8.1 gets VMware tagging
https://www.silverstring.com/blog/spectrum-protect-8-1-gets-vmware-tagging/ – 23 December 2016

Server virtualisation has disrupted the traditional models of data protection by allowing new vendors to challenge the traditional dominance of IBM, EMC and Veritas. The role of data backup and recovery is increasingly being performed directly by the VMware administrator, using point solutions which have been designed around them for ease of use and performance.

IBM Spectrum Protect (formerly TSM) has had a product in this space, called Spectrum Protect for Virtual Environments, but it was a bit late to the party and was, until now we believe, playing catch up.

For some time, users of Spectrum Protect have been asking about the use of VMware tagging for backups.

With version 8.1 (released in December), this feature is now available. Spectrum Protect now has a range of tags for VMs to be included or excluded, allowing specific disks to be included or excluded and for application protection to be enabled as required.

Users of competing products point to the simplicity of setup as a reason to purchase separate backup tools, and tags have played a big part in this. With the new support for VMware tags in Spectrum Protect 8.1, it seems that IBM has hugely simplified the process of configuring and scheduling backups of VMware estates, and customers that previously discounted IBM for reasons of complexity, should take another look.

Check out our video below to see how simple it is to use tags for backing up VMs in Spectrum Protect 8.1.

Spectrum Protect – Using new technology to extend the lifespan of your storage assets
https://www.silverstring.com/blog/spectrum-protect-using-new-technology-to-extend-the-lifespan-of-your-storage-assets/ – 14 October 2016

In Spectrum Protect v7.1.3, a new type of storage pool called container pools was introduced. These pools are specifically designed for data deduplication, where data can be deduplicated either at source (client) or inline during the server ingest phase.

Since the GA of 7.1.7, Silverstring have upgraded two separate environments from 7.1.5 to 7.1.7. Both of these environments had data in both legacy deduplication pools and Container deduplication pools, so following the upgrade, we were able to run the conversion process to consolidate all of this data in one container pool.

In these two cases, the increase in available capacity was between 19% and 35%. Allowing for a typical year-on-year capacity growth of 10%, converting to container pools could see the life of a Spectrum Protect system extended by one to two years without any additional hardware expenditure, avoiding unpredicted spend on extra storage capacity.
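
As a rough sanity check of that claim (reading the 19–35% as extra usable headroom and assuming growth simply compounds at 10% a year):

```python
import math

def extra_years_of_headroom(capacity_increase: float, annual_growth: float = 0.10) -> float:
    # Years of growth the extra headroom absorbs: (1 + growth) ** years = (1 + increase)
    return math.log(1 + capacity_increase) / math.log(1 + annual_growth)

for increase in (0.19, 0.35):
    print(f"{increase:.0%} more usable capacity -> ~{extra_years_of_headroom(increase):.1f} extra years at 10% growth")
# 19% more usable capacity -> ~1.8 extra years at 10% growth
# 35% more usable capacity -> ~3.1 extra years at 10% growth
```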

This generation (7.1.7) of deduplication pool adds a layer of sophistication and efficiency by removing the following barriers to effective data reduction:

  • Excess capacity was required to ingest the original data
  • Lengthy processing cycles were required to redistribute the data chunks after duplicates were identified.

Previous generations were inhibited by the following limitations:

  • No procedure for moving legacy data into container pools (other than server to server replication)
  • No support for tape copy of container pools

Spectrum Protect 7.1.6 (June 2016) addresses the first of these issues. Using the CONVERT STGPOOL command, there is now a facility to move data out of a legacy deduplicated storage pool (a FILE pool) into a new container pool. This makes it possible to upgrade a current server instance to use the new storage pools without having to keep data in two different types of deduplicated pool, and without having to replicate data to a second instance. Spectrum Protect 7.1.7 (September 2016) addresses the second issue by allowing a deduplicated tape copy of a container pool!

This makes it much easier for Spectrum Protect users to realise the benefits of container storage pools and those benefits can be significant. As of version 7.1.5, Container Pools perform inline compression as well as inline deduplication. This enables them to achieve significantly improved data reduction when compared to File Pools.

Taken together, this means that Container Pools can use storage far more efficiently than File Pools, allowing users of Spectrum Protect to retain more data on that storage. This allows users to reduce costs and to increase the life of that storage.

Contact us NOW and save money on storage costs!
