Let me set the background by saying that I currently (until the end of the week anyway) work for a large tech company. We recently launched a reader app for iPad. On the backend we have a thin layer of PHP, and behind that a lot of processing via C# with Mono. I, along with my brother Jeff, wrote most of the backend (PHP and C#). The C# side is mainly a queuing system driven off of MongoDB.
Our queuing system is different from others in that it supports dependencies: a job doesn't complete until all of its child jobs have completed. For instance, one job might have four children that all have to finish before it can. This allows us to create jobs that are actually trees of items, all processing in parallel.
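To make the idea concrete, here's a minimal sketch of a dependency-aware queue item. The class and field names are hypothetical, not our actual schema; the point is just that a parent can only be marked complete once every child is done.

```python
class Job:
    """A queue item that may depend on child jobs (hypothetical sketch)."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.done = False

    def try_complete(self):
        """Mark this job done only if every child has already completed."""
        if all(child.done for child in self.children):
            self.done = True
        return self.done


# A parent with four children, as in the example above.
children = [Job(f"child-{i}") for i in range(4)]
parent = Job("parent", children)

assert parent.try_complete() is False  # children still pending
for c in children:
    c.try_complete()                   # leaves have no dependencies
assert parent.try_complete() is True   # now the parent can finish
```

Since children can themselves have children, the same rule applied recursively gives you the job trees described above, with all the leaves processing in parallel.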
On a small scale, things went fairly well. We built the entire system out, and tested and built onto it over the period of a few months. Then came time for production testing. The nice thing about this app was that most of it could be tested via fake users and batch processing. We loaded up a few hundred thousand fake users and went to town. What did we find?
Without a doubt, MongoDB was the biggest bottleneck. What we really needed was a ton of write throughput. What did we do? Shard, of course. The problem was that we needed even distribution on insert…which would give us near-perfect balance for insert/update throughput. From what we found, there's only one way to do this: give each queue item a randomly assigned "bucket" and shard based on that bucket value. In other words, for the most part, do your own sharding manually.
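The bucketing trick can be sketched like this (the bucket count, field names, and shard mapping below are made up for illustration, and the shard assignment is simulated rather than done by Mongo): every item gets a random bucket at insert time, and because the collection is sharded on that bucket field, writes spread evenly instead of hammering one hot shard.

```python
import random
from collections import Counter

NUM_BUCKETS = 1024  # arbitrary; pick far more buckets than you'll have shards


def make_queue_item(payload):
    # Assign a random bucket at insert time; sharding on "bucket" then
    # spreads inserts evenly across shards.
    return {"bucket": random.randrange(NUM_BUCKETS), "payload": payload}


def shard_for(item, num_shards=4):
    # Simulates ranged sharding on "bucket": each shard owns a
    # contiguous slice of bucket values.
    return item["bucket"] * num_shards // NUM_BUCKETS


items = [make_queue_item(i) for i in range(100_000)]
counts = Counter(shard_for(it) for it in items)
# Each of the 4 simulated shards receives roughly a quarter of the writes.
```

The downside, as noted above, is that the "shard key" now carries no meaning of its own: you've effectively reimplemented hash distribution by hand, on top of a database that was supposed to handle it for you.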
This was pretty disappointing. One of the main reasons for going with Mongo was that it's fast and scales easily. It really wasn't as painless as everyone led us to believe. If I could do it all over again, I'd say screw dependencies and put everything into Redis, but the dependencies required more advanced queries than any key-value system could handle. I'm also convinced a single MySQL instance could have easily handled what four MongoDB shards could barely keep up with…but at this point, that's just speculation.
So there's my advice: don't use MongoDB for evenly-distributed high-write applications. One of the biggest problems is that there is a global write lock on the database. Yes, the database…not the record, not the collection. You cannot write to MongoDB while another write is happening anywhere. Bad news bears.
On a more positive note, for everything BUT the queuing system (which we did get working GREAT after throwing enough servers at it, by the way) MongoDB has worked flawlessly. The schemaless design has cut development time in half AT LEAST, and replica sets really do work insanely well. After all’s said and done, I would use MongoDB again, but for read-mostly data. Anything that’s high-write, I’d go Redis (w/client key-hash sharding, like most memcached clients) or Riak (which I have zero experience in but sounds very promising).
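Client-side key-hash sharding, the memcached-style approach mentioned above, is simple enough to sketch (the server list is hypothetical): hash the key, take it modulo the number of servers, and send the command to that instance.

```python
import zlib

SERVERS = ["redis-1:6379", "redis-2:6379", "redis-3:6379"]  # hypothetical hosts


def server_for(key, servers=SERVERS):
    # Deterministic hash: the same key always maps to the same instance.
    # crc32 rather than Python's hash() so the mapping is stable across
    # processes, which is what lets independent clients agree on it.
    return servers[zlib.crc32(key.encode()) % len(servers)]


# Every client computes the same mapping, so reads find what writes stored.
assert server_for("queue:item:42") == server_for("queue:item:42")
```

The naive modulo scheme above remaps most keys when you add or remove a server; real clients typically use consistent hashing for that reason, but the basic idea is the same.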
TL;DR: MongoDB is awesome. I recommend it for most usages. We happened to pick one of the few things it's not good at and ended up wasting a lot of time trying to patch it together. This could have been avoided if we'd picked something built for high write throughput, or dropped our application's "queue dependency" requirements early on. I would like it if MongoDB advertised the global write lock a bit more prominently, because I felt cheated when one of their devs mentioned it in passing months after we'd started. I do have a few other projects in the pipeline and plan on using MongoDB for them.