Data Preservation – Silverstring
https://www.silverstring.com

An Inside Look at IBM FlashCore Module 4 and Anomaly Detection
https://www.silverstring.com/blog/an-inside-look-at-ibm-flashcore-module-4-and-anomaly-detection/
Fri, 12 Jul 2024

At Silverstring, we've been exploring how different technologies impact anomaly detection in modern IT systems, and one interesting piece of hardware we've been looking at is IBM's FlashCore Module 4 (FCM 4). While it's easy to get lost in the tech jargon, we wanted to break down how FCM 4 functions and what role it can play in keeping systems resilient.

What is IBM FlashCore Module 4?

IBM FlashCore Module 4 is a hardware-based solution that adds speed and efficiency to storage systems, particularly those using IBM FlashSystem arrays. It’s built on NVMe (Non-Volatile Memory Express) technology, which is essentially a fast lane for data transfer. IBM has also integrated features like compression and encryption at the hardware level. This isn’t just a software layer on top of the storage; it’s embedded directly into the physical components.

How Does It Support Anomaly Detection?

Here’s where things get interesting. One of the challenges many businesses face is detecting anomalies in real time, whether that’s a spike in traffic, unusual patterns in data access, or potential security breaches. From what we’ve observed, the FCM 4 can help with this because it operates directly within the hardware, allowing for real-time monitoring of huge datasets. When something goes off-script in your I/O patterns, for example, FCM 4 can flag this immediately. This brings up a natural comparison with traditional, software-based detection systems, which often rely on backend analytics to identify anomalies.
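
To make "going off-script in your I/O patterns" a little more concrete, here is a deliberately simplified sketch of the kind of check an anomaly detector might run against a stream of I/O metrics. It is not IBM's implementation – the window size, the z-score threshold and the use of achieved compression ratio as a signal are assumptions for illustration only.

```python
from collections import deque
from statistics import mean, pstdev

def make_detector(window=60, z_threshold=4.0, min_ratio=1.1):
    """Return a callable that flags suspicious I/O samples.

    Illustrative only: the window size, z-score threshold and the idea of
    watching the achieved compression ratio are assumptions for this sketch,
    not a description of IBM's on-module logic.
    """
    history = deque(maxlen=window)

    def check(write_iops, compression_ratio):
        alerts = []
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(write_iops - mu) / sigma > z_threshold:
                alerts.append("write-IOPS spike outside the normal range")
        # Freshly encrypted data barely compresses, so a sudden collapse in
        # the achieved compression ratio is a classic warning sign.
        if compression_ratio < min_ratio:
            alerts.append("incoming data has stopped compressing")
        history.append(write_iops)
        return alerts

    return check

detector = make_detector()
for i in range(60):                      # steady, compressible workload
    detector(1200 + (i % 5) * 20, 2.1)
print(detector(9500, 1.0))               # sudden burst of incompressible writes
```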

Hardware vs. Software-Based Detection: What’s the Difference?

The key advantage of FCM 4's hardware-based detection is its speed. It monitors data in real time at the storage level, so there's no waiting for external processes to analyse what's happening. This gives IT teams an immediate head start in identifying and reacting to issues (such as an encryption event in progress) before they can do serious damage.

However, software-based detection tools have their own strengths. In our own work with these solutions, especially those driven by AI and machine learning, we've found they typically offer more flexibility. They don't just react to threats as they happen; they can scan data proactively and even identify dormant threats, like ransomware, before an encryption event begins. This gives businesses a chance to prevent a disaster before it strikes, rather than just responding quickly when it does.

Why Both Layers Are Important

It is our opinion that relying solely on one form of detection, whether hardware or software, can leave gaps in your defence. FCM 4 is ideal for rapid, real-time anomaly detection, but combining it with proactive, software-based tools adds an extra layer of protection. While FCM 4 will detect an encryption event in progress, software-based tools can prevent it from happening in the first place by identifying the threat earlier.

Is FCM 4 the Right Fit?

So, based on our findings, FCM 4 is great for businesses that need immediate, fast anomaly detection where performance is critical. But to truly secure your infrastructure, it’s worth layering this with software-based tools that can offer proactive threat detection and long-term insights. In the end, having both forms of detection provides a more complete, resilient approach to handling anomalies and cyber threats.

Should You Be Backing Up Containers?
https://www.silverstring.com/blog/should-you-be-backing-up-containers/
Fri, 28 Jun 2024

At first glance, containers seem like temporary instances that don’t require backup. Containers are designed to be lightweight, short-lived environments for running applications or microservices. This might lead you to assume that since they can be quickly recreated, backup isn’t necessary. However, the reality is a bit more complex and depends on how you’re managing both your application and its data.

The Nature of Containers

Containers (such as those created using Docker or OpenShift) are often short-lived. They can be spun up or down based on demand, which means you don’t need to back up the container itself. If a container fails, it can easily be restarted from the base image, making backing up the actual running container redundant. However, as container technology has evolved, so too have the ways we use containers, especially in relation to persistent data.

Persistent Data in Containers

When containers were first introduced, they were mainly used for stateless applications, meaning they didn't store any data that needed to persist beyond the life of the container. But today, containers are often used for stateful applications, including databases, and this changes the equation entirely. If you store data in a file system or volume attached to a container, that data needs to be protected, just as it would if it were in a traditional virtual machine (VM) or physical server.

The Challenge of Backing Up Container Data

Backing up container data introduces challenges that differ from traditional methods. For example, in a VM, data is tied directly to the machine and is easily identified by a name or other metadata. But containers use GUID-style identifiers, making it more difficult to track and back up data consistently. This is especially true when applications scale up or down dynamically. One day your application might have 10 containers, and the next only 5, each with different volumes.
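
To illustrate one common approach at the file-system level, the sketch below wraps the well-known "throwaway container plus tar" pattern for archiving Docker named volumes. It shells out to the docker CLI, assumes the data can be read consistently without quiescing the application, and uses a hypothetical backup directory – treat it as a starting point, not a finished backup strategy.

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def backup_docker_volumes(backup_dir="/srv/backups"):
    """Archive every named Docker volume via a short-lived helper container.

    Sketch only: assumes the volumes can be read consistently without
    quiescing the application, and uses a hypothetical backup directory.
    """
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    volumes = subprocess.run(
        ["docker", "volume", "ls", "-q"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    for vol in volumes:
        # Mount the volume read-only into a throwaway container and tar it
        # out to the backup directory on the host.
        subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{vol}:/data:ro",
             "-v", f"{backup_dir}:/backup",
             "alpine",
             "tar", "czf", f"/backup/{vol}-{stamp}.tar.gz", "-C", "/data", "."],
            check=True,
        )

if __name__ == "__main__":
    backup_docker_volumes()
```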

What’s the Right Approach?

The key to backing up container data lies in understanding your application's data model and knowing how best to protect it. Should you back up at the application level or at the file system level? Do you need to protect the entire data set or just specific pieces? The answer will depend on your particular use case, but one thing is certain: while you don't need to back up the containers themselves, you absolutely need a strategy for protecting the data associated with them.

Conclusion

Containers may have been designed to be short-lived, but the data they handle can be crucial to your business. As more companies rely on stateful applications in containerised environments, having a clear strategy for protecting that data is essential. Understanding the nuances of container backup will ensure you’re not caught off guard when something goes wrong.

In a world of ever-evolving technology, one thing remains constant: your data matters. Make sure you're backing up what counts. If you would like guidance and advice on protecting your container environment, reach out to us using the link below.

Silverstring take on Cloud Object Storage
https://www.silverstring.com/blog/silverstring-take-on-cloud-object-storage/
Sun, 21 Aug 2016

At a roundtable event earlier this summer, a group of IBM business partners, including Silverstring's CEO, gathered to discuss the key trends and challenges facing their industries and their customers.

One of the key disruptors in every industry is data growing at an exponential pace. Questions of how you store, access, move and share that data are also becoming critical in an increasingly globalized world.

Facing unprecedented data growth, IT organizations are now tasked with finding ways to efficiently preserve, protect, analyze and maximize the value of their unstructured data as it grows to petabytes and beyond. IBM Cloud Object Storage is designed to handle unstructured data at web-scale with industry-leading flexibility, scale and simplicity.

After attending the event, Alistair Mackenzie summed up how Cloud Object Storage can help solve these problems and shape the future of storage with the following quote:

“Cloud Object Storage is a way companies can dynamically allocate data, move data, access information in a much more progressive dynamic way than was previously possible in the more traditional models of storage.”

Watch the full interview here:

For more information on Cloud Object Storage please get in contact and fill in the form below.

Flape and Floud…..Huh?
https://www.silverstring.com/blog/flape-and-floud-huh/
Fri, 15 Apr 2016

It's been a few weeks since we've dug around storage technologies on this blog, so it seems a reasonable point to discuss two of the strangest terms bandied about in the storage world over the past year or so.

So, just what are flape and floud? Although both rejoice in their own Urban Dictionary entries, these are not the flape and floud we’re looking to discuss here. Both are a cunning combination of flash and something else. Although the cost of flash storage has tumbled over the past couple of years, it’s still not super cheap, especially if you’re looking towards larger volumes, hence the search to combine flash with other, cheaper bulk storage options.

The fall in flash costs, along with flash's key USP of no moving parts, means that disk as a storage medium is moving into an awkward position. The total cost of ownership of disk looks increasingly challenging to justify, especially in situations where assets are going to be sweated well beyond the conventional three-year initial warranty period, perhaps to five or seven years. There is no trumpeting the benefits of 'flisc' in the future roadmap of storage!

The very slow rate of improvement in the performance of disk-based systems has been coupled with a steady climb in the capacity and access speed of tape, with LTO-7 announced at the end of 2015. The rise in capacity per cartridge means that each LTO-7 unit delivers 6TB of native capacity (up to 15TB compressed) – roughly ten times an LTO-4. In addition to this step up in capacity, the new standard has added more heads to read and write to tape simultaneously. Despite repeated announcements of the death of tape, it continues to go from strength to strength, to the point that we are looking at a brave new world of tape as data storage, becoming extremely effective when combined with flash storage, which is used as the indexing database to create…. Flape!!!! (pause for breath)

Flape might be particularly appropriate in situations where large media files, such as video, are stored. It's an ideal solution for a media library, where files might not need to be accessed for a long period, but it's important that systems can navigate to them swiftly when they're needed. It also has a great use case where large volumes of academic research data need to be retained but might not be referred to very often. With the increase in the number of heads available to read from tape, combined with high-speed identification of the metadata, once files are located they can be streamed off pretty much as quickly as a file could be read from disk; it's just the start of the read that takes slightly longer.
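
As a rough, back-of-an-envelope illustration of that last point, the sketch below compares total retrieval times from disk and from a flape-style tier. The time-to-first-byte figures and streaming rate are assumed round numbers for the example, not measured performance.

```python
def retrieval_minutes(size_gb, first_byte_s, stream_mb_s):
    """Total time to get a file back: wait for the first byte, then stream."""
    return (first_byte_s + (size_gb * 1024) / stream_mb_s) / 60

# Assumed round numbers, for illustration only.
DISK  = dict(first_byte_s=0.1,  stream_mb_s=300)   # online disk pool
FLAPE = dict(first_byte_s=90.0, stream_mb_s=300)   # flash index + tape mount/locate

for size_gb in (1, 100, 1000):
    print(f"{size_gb:>5} GB   disk {retrieval_minutes(size_gb, **DISK):6.1f} min"
          f"   flape {retrieval_minutes(size_gb, **FLAPE):6.1f} min")
# For a 1 TB media file, the ~90 s to locate the tape is lost in ~57 min of streaming.
```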

It is very unusual for any discussion about storage technologies to get this far through a post without mentioning the C-word, so let’s remedy that immediately: Floud is a glorious combination of flash for indexing and metadata, combined with cloud as the extensive data storage medium. This is an interesting proposition for people reviewing their backup solutions, especially if there is an Iron Mountain tape off-siting solution in place. Whilst tape off-siting is a useful way of securing data, it does rely on the physical movement of media to provide that life-saving data recovery solution that’s likely to be needed if a disaster recovery situation is ever invoked. Moving to a – ahem – floud solution means your recovery data is held, ready to be retrieved from the cloud, without the logistics and timescale associated with a recovery from cold, off-sited data, especially if this hybrid solution was combined with a DR as a Service (DRaaS) solution, such as the Silverstring Predatar DRaaS.

The cloud storage container pools introduced in 7.1.3 (IBM Spectrum Protect, formerly TSM), along with the inline compression introduced in 7.1.5, mean that floud solutions for your off-site backup images are now very much an option worthy of consideration. If you would like to discuss this further with me, then please fill in the form below.

IBM Cloud Object Storage – It IS big and it IS clever…and Safe.
https://www.silverstring.com/blog/cleversafe-object-storage-it-is-big-and-it-is-clever-and-safe/
Fri, 04 Mar 2016

In recent years the growth in unstructured data has brought about the creation of a new type of storage – Object Storage – which is much better suited to unstructured data than traditional block storage and filesystems. It's different from block storage in a number of ways that would take too long to explain here. If you've seen a picture of a cat on Facebook or listened to a tune on Spotify, you've retrieved an object from object storage.

IBM recently acquired Cleversafe, as it fits neatly into the software-defined storage suite at a performance level somewhere between Spectrum Scale (high performance) and Spectrum Archive (tape). Cleversafe is an object storage system available as hardware appliances or as software only, enabling it to be deployed in public cloud datacentres as well as customer datacentres (typically three or four sites in total) to form a hybrid storage cloud.

CLEVER

Here’s a diagram of how it works at a very high level:

Erasure coding is a very clever and much more space-efficient alternative to RAID and inter-site mirroring/replication for protecting data: usable data typically occupies 1/1.3 to 1/1.8 of the physical disk deployed (a raw-to-usable expansion factor of 1.3–1.8, versus 2x or more for mirroring). This of course translates to money savings on hardware, maintenance, power, cooling and so on.
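
To put those ratios in context, here is a quick comparison of the raw disk you would have to deploy to hold 1PB of usable data under copy-based protection versus erasure coding at the expansion factors quoted above. The 1PB figure and the replication overheads are illustrative assumptions.

```python
usable_tb = 1000  # an illustrative 1 PB of user data

# Raw-to-usable expansion factors; the replication figures are assumptions.
schemes = {
    "3-way replication":     3.0,
    "2-site mirrored RAID":  2.4,
    "erasure coding (1.8x)": 1.8,
    "erasure coding (1.3x)": 1.3,
}

for name, factor in schemes.items():
    print(f"{name:<24} needs {usable_tb * factor:6.0f} TB of raw disk")
```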

I've had a go on a demo Cleversafe environment, and the management UI really is simple to use; one instance can manage up to 3,000 devices – hundreds of petabytes of capacity through a single pane of glass (sigh).

Objects are created and accessed by users and VMs/applications through Accessor devices via a URL using Swift, S3 and Simple Object APIs.
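
Because the Accessor layer speaks standard object APIs, working with objects looks just like talking to any other S3-compatible endpoint. The snippet below uses boto3 with a made-up accessor URL, vault (bucket) name and credentials – all placeholders, not real values.

```python
import boto3

# Placeholder endpoint and credentials -- substitute your own accessor URL,
# vault (bucket) name and access keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://accessor.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

with open("cat42.jpg", "rb") as f:
    s3.put_object(Bucket="media-vault", Key="cats/cat42.jpg", Body=f)

obj = s3.get_object(Bucket="media-vault", Key="cats/cat42.jpg")
print(len(obj["Body"].read()), "bytes retrieved")
```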

Cleversafe is massively scalable, and there are already deployments of hundreds of petabytes out there in production today.

SAFE

Security

Objects are 'sliced', encrypted, encoded and then distributed over multiple 'Slicestor' storage devices, ideally across multiple sites. So to reconstruct an object, a hacker would potentially have to access several sites and several devices, know which slices constitute the object, and then decrypt those individual slices. There are no external encryption keys to manage, lose or have compromised.

Resilience

The erasure coding and slicing mean that it's only necessary to read a subset of the total slices per object to reconstruct it. How many depends on the level of resilience configured, but the upshot is that you can lose multiple 'Slicestor' storage nodes – or entire sites – without losing access to the data, and the 'missing' slices can be rebuilt from the survivors to restore resilience. Cleversafe object storage environments are constantly monitored for data integrity, to protect against disk failures and rebuild corrupt or missing data slices, and the rebuild processing actually becomes faster and more efficient as the system scales up.
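
The width and threshold below are hypothetical example values (real deployments vary), but they show how the arithmetic works: write an object as 12 slices, require any 8 of them to read it back, and you can lose 4 slices – whole Slicestors, or a site's worth – at an expansion factor of only 12/8 = 1.5.

```python
def ida_properties(width, threshold):
    """Basic properties of an erasure-coded (information dispersal) layout."""
    assert 0 < threshold <= width
    return {
        "slices written per object":      width,
        "slices needed to read it back":  threshold,
        "slice losses tolerated":         width - threshold,
        "raw-to-usable expansion factor": round(width / threshold, 2),
    }

# Hypothetical example configuration, not a recommendation.
for prop, value in ida_properties(width=12, threshold=8).items():
    print(f"{prop:<32} {value}")
```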

“Cleversafe systems can be designed with over 10 9’s of permanent reliability – 0.0000000029% of data loss in any given year. That’s 34,633,083,744.1 years mean time to data loss.”

(Source – IBM presentation)

That's 34 billion years! The Earth is 4.5 billion years old and the universe is about 14 billion years old.
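
The two figures in the quote hang together: invert the annual loss probability and you land in the same ballpark (the quoted 34.6-billion-year figure presumably comes from an unrounded probability).

```python
annual_loss_probability = 0.0000000029 / 100   # 2.9e-11 per year
mttdl_years = 1 / annual_loss_probability      # mean time to data loss
print(f"{mttdl_years:,.0f} years")             # ~34,482,758,621 years
```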

The data security and resilience are such that the CIA's investment arm has invested in Cleversafe, with a view to storing VERY sensitive data for government agencies.

http://www.computerworld.com/article/2512893/data-center/cloud-storage-vendor-cleversafe-gets-cia-funding.html

ECONOMICS

Cleversafe is not a storage panacea for all ills: it is not intended for high-performance applications and databases, and it works out expensive below around 500TB of storage. Beyond that, as you get into multiple petabytes, Cleversafe can work out 60% cheaper than Amazon AWS S3 cloud object storage and 80% cheaper than equivalent mirrored/replicated NAS storage.

USE CASES

We've established that Cleversafe is not for everyone, so here are some areas where IBM think Cleversafe will be the right choice.

  • Object storage deployments of 500TB and beyond.
  • Businesses with demanding data storage needs: Video, Imagery, Sensor Data, Archives…
  • Service providers requiring large scale reliable object storage.
  • Target for Backup/Archive Software – Future integration with Spectrum Protect
  • Businesses who are considering Public, Private, or Hybrid Cloud
  • Businesses who need to refresh large NAS environments

Data Archiving – The Future Is In The Long Term
https://www.silverstring.com/blog/data-archiving-the-future-is-in-the-long-term/
Fri, 24 Oct 2014

I was out with a friend last night, and we happened to bump into somebody that he hadn’t seen for over 20 years. A moment of delayed recognition passed across his face (I’ll call this ‘recall’) before they re-established their common ground and started to reminisce about when they were both (a lot!) younger.

Always having my storage hat on, it reminded me of the need for long-term data archiving.

Data archiving is a necessity

All users need to keep copies of data – often for regulatory reasons – for long periods of time. On the rare occasions they do need to interrogate that data, there's usually a short delay whilst they recall it before they carry on as if the gap hadn't happened.

But as with most things, there’s a cost associated with long-term archiving.

Consider a system where you need to take a full backup on a monthly basis, and keep each of those backups for 7 years. Over time, that’s going to amount to 84 full backups, and because of the relative costs of tape versus disk, these have tended to be pushed out to tape.

Although this makes it less straightforward to access the data, it’s usually been cost-prohibitive to store that many full backups on disk.

Fortunately, that’s no longer necessarily the case. In the example above where a customer is taking a full backup of a system every month, there’s going to be a large degree of commonality between those backups.

As a result, this makes long-term archiving an ideal candidate for data reduction techniques, either at the software level (TSM, for example) or at the hardware level (ProtecTIER deduplication, Storwize compression, etc.).
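
As a rough illustration of why that commonality matters, the sketch below estimates the footprint of the 84 monthly fulls from the example above, with and without data reduction. The 5TB system size, 3% monthly change rate and 2:1 compression are assumptions chosen for the example, not benchmarks.

```python
def archive_footprint_tb(full_tb, months, monthly_change, compression=1.0):
    """Rough capacity needed for `months` monthly full backups.

    With deduplication, each full after the first only adds the data that
    actually changed; compression then shrinks what is physically stored.
    """
    without_reduction = full_tb * months
    deduplicated = full_tb + full_tb * monthly_change * (months - 1)
    return without_reduction, deduplicated / compression

raw, reduced = archive_footprint_tb(full_tb=5, months=84,
                                    monthly_change=0.03, compression=2.0)
print(f"84 monthly fulls, no reduction:  {raw:6.1f} TB")      # 420.0 TB
print(f"with dedup and 2:1 compression:  {reduced:6.1f} TB")  # ~8.7 TB
```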

Costs don’t have to be restrictive

If you take this in tandem with storage commoditisation, it now means that storing long-term archives on disk should no longer be ruled out on cost grounds. As an added bonus, the data being stored on disk means that data retrieval should be that much quicker, with no need to recall tapes from offsite storage locations.

The final piece of the jigsaw comes in the form of the offsite storage of that data. Now that it’s taking up much less capacity due to the said data reduction, why not send that second copy off into the Cloud, using one of the recently launched Predatar solutions?

No one would argue that archiving is easy – it's often a challenge that companies don't want to face, and when they do, they often compromise because of a perceived lack of capacity.

But with the need for effective and efficient data archiving becoming ever more important, now’s the time to think again and investigate how to meet regulatory requirements and utilise advances in data protection at the same time.

FASP And Steelstore – Two Big Data Transfer Technologies For Putting Data In And Taking It Out Of The Cloud
https://www.silverstring.com/blog/fasp-and-steelstore-two-big-data-transfer-technologies-for-putting-data-in-and-taking-it-out-of-the-cloud/
Fri, 25 Jul 2014

There’s no two ways about it – Cloud computing is here to stay and Cloud storage is offered by all of the major cloud providers, which is great, right?

Genuinely, it is – for the most part.

But when we’re looking at high end usage, how do you confidently transfer multiple terabytes of data into and out of the Cloud, be it for backup purposes or for actual processing and manipulation?

IBM, Aspera and FASP

At the end of last year, IBM acquired Aspera, which, with its FASP solutions, claims to have overcome a major obstacle to transferring high volumes of data over long distances.

You see, generally speaking there are two set ways of doing this – via TCP or UDP.

TCP (Transmission Control Protocol) allows reliable data delivery under ideal conditions. However, when packet loss and higher latency occurs (something that’s common in long distance WAN), the combined mechanism for network congestion avoidance and in-order delivery of packets results in low utilisation of available bandwidth and ultimately, slow file transfer speeds. Packet loss is taken to mean ‘network congestion’, so transmission is drastically throttled back and dropped packets retransmitted.
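
The effect is easy to quantify with the classic Mathis approximation, which caps a single TCP stream at roughly MSS / (RTT · √p), where p is the packet-loss rate. The round-trip times and loss rates below are assumed example figures, but they show how quickly a long, slightly lossy path squanders a 1Gbit/s link.

```python
from math import sqrt

def tcp_ceiling_mbps(rtt_ms, loss_rate, mss_bytes=1460):
    """Mathis et al. approximation of a single TCP stream's throughput limit."""
    bytes_per_second = (mss_bytes / (rtt_ms / 1000)) * (1.22 / sqrt(loss_rate))
    return bytes_per_second * 8 / 1e6

# Assumed example paths, all on a nominal 1 Gbit/s link.
for label, rtt_ms, loss in [("metro campus",       5, 0.0001),
                            ("transatlantic",     80, 0.001),
                            ("intercontinental", 200, 0.005)]:
    print(f"{label:<16} ~{tcp_ceiling_mbps(rtt_ms, loss):7.1f} Mbit/s ceiling")
```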

The alternative, UDP-based solutions, have tried to speed up file transfer: UDP (User Datagram Protocol) dispenses with the reliability and congestion-control mechanisms that slow down TCP file transfer. But aside from driving networks (sometimes too) hard, these solutions can end up retransmitting up to 10x the required data. This means the actual transfer speed is still not great and bandwidth is wasted on unnecessary retransmission.

In-order delivery is important to many applications, but not to file transfer. Aspera's FASP solutions use UDP in the transport layer; however, they separate the reliability and rate-control mechanisms, gently backing off transmission as queuing in the network increases while maintaining a high level of bandwidth utilisation. What's more, they retransmit only the dropped packets, rather than stopping and starting to enforce in-order delivery.

As such, this promises better than 90% utilisation of bandwidth, compared to TCP-based transfers, which can drop below 20% on long-distance, high-latency links.

And the cost benefit of this is clear – use ALL the bandwidth you have bought instead of a fraction, or buy more to achieve the desired throughput?

FASP in TSM

In his recent inspirational speech at Silverstring’s summer barbecue, IBM’s Director of Product Management (Storage Software) Ian Smith indicated that he’d like to see this technology incorporated into TSM in the future. Efficient bandwidth usage and higher file transport speeds are obvious and necessary enablers to Cloud backup solutions.

Of course, increasing transfer efficiency in the network is one thing, so how about just transferring less?

Introducing Steelstore

Riverbed WAN optimisation products have been in widespread use for some time now. A new product that recently came to our attention is Steelstore. It has all of the deduplicating/compressing/encrypting goodness that you get from point-to-point Riverbed Steelhead/Granite and so on, and it's also a gateway to Cloud storage.

Deployed either as a hardware appliance or on a VM, the Steelstore appliance appears to a backup application as a large (up to a PB) disk accessed by CIFS / NFS. It could even be thought of as an offsite disk storage pool in the Cloud, with data directed to the device cached on local disk for quick recall of recent backup data.

The real magic happens as all data is deduplicated inline, encrypted (AES 256-bit and SSLv3) and replicated to the Cloud storage provider of your choice.

Steelstore appliances have built-in support for the APIs from Amazon S3, AT&T Synaptic Storage as a Service, Microsoft Azure and Rackspace Cloud Files, with baseline support for other instances of OpenStack (Swift) object storage and EMC Atmos. More cloud providers will be added over time based on demand.

Where possible, data for restore is recalled from the local disk cache and, if not, from the Cloud. When the site and appliance are 'lost', the data can be accessed by building/deploying another device and re-entering the encryption keys and Cloud access credentials (remember to keep these safe – offsite!).

Cloud storage and data transfer tend to be charged commodities, and the value of Steelstore appears to be in reducing consumption of these. Promotional material claims “Steelstore gateways reduce your WAN data transmission and cloud storage needs by 10-30 times on average.”

In a TSM context, this claim needs to be taken with a pinch of salt: TSM already uses an 'incremental forever' philosophy for unstructured data – and now for VM backups – so serious data reduction has already happened in the backup application before anything hits storage.

And the truth is, less sophisticated backup applications that throw lots of full backups into storage would probably see better deduplication ratios at the gateway.

Always interested to hear your thoughts, have you used Aspera’s FASP solutions? Steelstore? Get in touch and let us know.
