I was eagerly awaiting some new content database and RBS storage guidance that I had heard was coming from Microsoft. Of course, they managed to release the new info while I was out on vacation. On top of that, I had planned on transferring my blog to a new hosting provider and didn’t want to put out any new posts using the old system.
So yes, this post is a little late and I’m sure half of the planet has already blogged about this, but since storage and RBS are something that I’ve talked a lot about, I feel like I need to add my 2 cents to this one. What am I talking about? I’m glad you asked…
The Microsoft SharePoint Team blog has published some new guidance regarding supported content database sizes. They provide a summary of the changes and some nice background info in this blog post, where they did a VERY good job highlighting the changes and pointing you to the new supported limit statements in TechNet. So in the sections below, I want to mention a couple of things related to the new guidance.
New RBS Storage Clarification
According to the new guidance, the storage consumed in a particular BLOB store by a related content database must be included in the overall content database storage size when considering content database size supported limits. I have to come clean RIGHT NOW and say that I’ve been a proponent of using RBS to skirt content database size limits in order to facilitate a more flexible taxonomy for large scale content databases. Unfortunately, I heard this concept long ago and latched onto it as a solution for an issue that I regularly encountered. So I’ve been shouting this misinformation from the rooftops for a long time. Turns out that TechNet never explicitly said we could ignore RBS BLOB store storage requirements in the content database size limit. So for the record, please include BLOB store storage sizing in your content database size number. I’ll be passing this along in future speaking engagements as well.
So where does that put us? Are we hosed if we deployed RBS? In many cases, the answer is no. I found that we typically wanted to use RBS for large scale document archive site collections based on either the Document Center or Records Center site template. Since the new supported limits allow for larger content databases, in many cases we’re still within the limits. If you’re not inside the limits but your system is functioning normally, Microsoft has given us the option of opening a paid supportability ticket so that their support team can “certify” systems that are beyond the limits.
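To make the accounting concrete, here's a minimal sketch of the math the new guidance calls for: the number you compare against the supported limit is the content database size PLUS the externalized BLOBs, not the database file alone. The helper names and the idea of walking a local BLOB store directory are mine for illustration; a real deployment would get the database size from SQL Server and the BLOB store location from the RBS provider configuration.

```python
import os

# Hypothetical helper: walk a BLOB store directory and total the bytes on disk.
def blob_store_bytes(blob_store_path):
    total = 0
    for root, _dirs, files in os.walk(blob_store_path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def effective_content_db_bytes(db_file_bytes, blob_store_path):
    """Per the new guidance, the size measured against the supported limit
    is the database itself plus everything RBS has externalized."""
    return db_file_bytes + blob_store_bytes(blob_store_path)
```

The point of the sketch is simply that the two numbers get added; ignoring the BLOB store no longer keeps you inside the limit.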
Is RBS Still a Useful Technology?
Yep. But the new guidance ensures that it won’t be overused and abused. OK, so we’ve got one less use case. But RBS is still very beneficial in several solutions:
- Content Addressable Storage (CAS) Solutions – If your organization operates in a heavily regulated industry, it’s possible that you’re not allowed to delete documents. Write Once Read Many (WORM) mode CAS storage devices such as Hitachi HCAP and EMC Centera are excellent solutions for ensuring that binaries live forever. An RBS provider can take advantage of these CAS storage devices to ensure that binaries are never deleted.
- Digital Asset Management Solutions – If you need to deploy a DAM site collection that will be used to host 700MB training videos for your organization, then RBS may again be a good fit. Nobody wants to shove 700MB files into a content database. But with RBS enabled and our new 4TB storage limit in place, it’s possible to store over 5,000 of those 700MB training videos OUTSIDE of SQL Server (using RBS) instead of fewer than 300 with the old 200GB limit. Sure, it’s an edge case, but there are many other similar solutions that might involve CAD files, large graphic assets, or huge 1,000 page PDF reports that need to be archived.
- Compression, De-duplication and Encryption Solutions – Looking for an extra layer of binary security for your files? RBS can encrypt binary streams on the way to the BLOB store. How about saving on some storage cost by enabling compression? RBS can help you there too. Also, a really sophisticated RBS provider can employ a de-duplication engine in the BLOB store to ensure that a given document is stored only once on the file system.
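The capacity arithmetic in the DAM example above is easy to sanity-check. A quick back-of-the-envelope calculation (binary units, assuming a flat 700MB per video) confirms the video counts mentioned:

```python
# Back-of-the-envelope capacity math for the DAM example
# (binary units: 1 GB = 1024 MB, 1 TB = 1024 GB).
VIDEO_MB = 700

old_limit_mb = 200 * 1024          # old 200GB guidance -> 204,800 MB
new_limit_mb = 4 * 1024 * 1024     # new 4TB boundary   -> 4,194,304 MB

old_capacity = old_limit_mb // VIDEO_MB   # 292 videos
new_capacity = new_limit_mb // VIDEO_MB   # 5991 videos
```

So "over 5,000" versus "fewer than 300" holds up: roughly a 20x jump in how many of those videos fit under the supported limit.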
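For the de-duplication point above, the core mechanism can be sketched in a few lines: key each binary by a hash of its content so that identical documents resolve to a single stored copy. This toy Python class is purely illustrative (the names are mine, not any real RBS provider's API), and a real provider would of course work against the file system rather than an in-memory dict:

```python
import hashlib

# Toy sketch of content-addressed de-duplication: identical binaries
# hash to the same key, so each unique document is stored exactly once.
class DedupBlobStore:
    def __init__(self):
        self._blobs = {}   # content hash -> bytes (stored once)
        self._refs = {}    # content hash -> reference count

    def put(self, data):
        key = hashlib.sha256(data).hexdigest()
        if key not in self._blobs:
            self._blobs[key] = data
        self._refs[key] = self._refs.get(key, 0) + 1
        return key

    def get(self, key):
        return self._blobs[key]

    def unique_count(self):
        return len(self._blobs)
```

Upload the same 700MB video to ten document libraries and a store like this keeps one physical copy with ten references, which is where the storage savings come from.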
These are just a few reasons why RBS is still an important technology. But it’s important not to go too far into the other ditch and try to use RBS on every large scale solution. If the requirements don’t dictate that RBS is necessary, then it is important to leave it on the shelf to avoid the additional complexity of backup/restore and upgrade.
General Usage Content Databases
For most content databases, Microsoft still wants us to hang out under that 200GB number that we all know and love. Essentially, if you don’t need to push the boundary, then don’t! By staying under 200GB you ensure that you’re always supported and you probably won’t have to deal with any of the performance optimizations that Microsoft requires in order to support larger content databases. This is the 80% bracket that most solutions should fall into. Performance, usability, database maintenance, backup/restore, and upgrade will all benefit if you architect your content databases to stay under 200GB.
4TB is the new 200GB
Wow, 4TB. That’s a fun number. Makes you want to just go out and re-architect your whole SharePoint environment, doesn’t it? Um, please don’t. There are reasons to push a content database to 4TB, but there is also a mountain of additional requirements that need to be addressed before you can even think about a number like 4TB.
Managing Very Large Content Databases
Microsoft went the extra mile here. There are a few people who hold a wealth of knowledge regarding the planning, monitoring and maintenance of super gigantic content databases. Bill Baer is one of those guys. If you are entertaining the possibility of taking advantage of some of these new limits (or if you’re already there!), then you need to understand this whitepaper inside and out. All of the juicy “how to” goodness is in there.
No Limits for Document Archive?
Personally speaking, this is really cool. Professionally speaking, this scares the daylights out of me! I’m guessing that about 18 months from now someone is going to ring up KnowledgeLake and ask for that MCM guy they’ve got who’s really good with storage and performance optimization. It will take me about 30 seconds to look at their SQL Server and see that someone has jammed 5TB of content into a content database that is in no way optimized for it. People, if you want to go larger than even 1TB, you’d better have some ridiculously spec’d out storage! Can it be done? I think so. Should it be done? Depends on how big of a check you’re willing to write to make it happen.
I have to say that I’m really glad that Microsoft finally bit the bullet and gave us something we can work with. For those of us who are willing to take the time and money to properly architect a large scale storage solution, this guidance really opens up a lot of doors. But at the same time, I expect it to cause some issues as well. Inevitably, systems will be targeted at these new numbers by people who don’t want to take the time to read ALL of the supporting guidance that enables these new boundaries. Still, the guidance was sorely needed and at least now we have a stronger storage foundation to build upon.