
Martin Westin's Profile

GitHub User: eimermusic

Site: www.eimermusic.com

Comments by Martin Westin


You can also make use of routes.rb to do something similar to this, and to handle most of the subdomain setup work.

  # Subdomain handling via route constraint
  constraints(Domain) do
    # Regular routes that scope on subdomains go in here
  end
  # Routes to use if no tenant subdomain exists

In the above example, Domain is a class with a matches? method. More about using this hook (which runs very early in the request cycle) to set up the tenant can be found in the Rails guides, starting here:

We trigger all subdomain/tenant setup from this kind of route constraint.
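As a rough sketch, such a constraint class might look like this — the reserved-subdomain list and the exact checks are my assumptions, not from the comment above:

```ruby
# Hypothetical Domain constraint class. Rails calls matches?(request)
# and applies the constrained routes only when it returns true.
class Domain
  RESERVED = %w[www admin].freeze  # assumed reserved subdomains

  def self.matches?(request)
    sub = request.subdomain.to_s
    # A tenant route only matches a present, non-reserved subdomain.
    !sub.empty? && !RESERVED.include?(sub)
  end
end
```

Anything matching here can then kick off tenant setup before any controller runs.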


I don't think you can argue that something that switched DBs is less prone to error and attack than something that is query-based... I prefer it, personally, and have used it for many years.

See my comment above about testing, where I mention why I changed... TL;DR: the app grew, and many DBs became an overhead we could not afford.

I should also note that it was not a huge thing for me to add a default scope (in a module, btw) and include it in all models. Then it was just a matter of setting the right tenant association on existing data before migrating it all to a single DB. The testing was probably what I spent the longest on.
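The idea above can be sketched in plain Ruby (the real version would be an ActiveRecord default_scope, e.g. `default_scope { where(tenant_id: Tenant.current_id) }`, mixed into every model; the names here are made up for illustration):

```ruby
# Stand-in for a tenant default scope: every lookup is filtered
# to the currently selected tenant.
module TenantScoped
  class << self
    attr_accessor :current_tenant_id  # who the "current" tenant is
  end

  # Mimics a default scope on .all
  def all
    records.select { |r| r[:tenant_id] == TenantScoped.current_tenant_id }
  end
end

class Project
  extend TenantScoped

  def self.records
    @records ||= []
  end
end

Project.records << { tenant_id: 1, name: "alpha" }
Project.records << { tenant_id: 2, name: "beta" }

TenantScoped.current_tenant_id = 1
Project.all  # only tenant 1's records
```

The point is that the scoping lives in one shared module, so adding it to all models is a one-line include per model.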


I recently created Capybara and RSpec tests for an entire app specifically to verify that multi-tenancy does not leak data.

Basically I got it down to:
* Setting up a tenant for the tests to use. For me that's a subdomain, created in a before filter.
* Making sure there is fixture/factory data belonging to at least two tenants

Then you just check that models don't find data for the "other" domain, that views don't display that data, and so on. They are not the most fun tests to write, but it feels a lot safer knowing I am pretty sure I don't leak data.

I also went down the anal-retentive path of switching tenants mid-test and verifying that I only saw the correct data there too.
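The core of that check can be shown standalone — this is plain Ruby with invented tenant names and data, not the actual RSpec/Capybara suite, which would drive the app through each tenant's subdomain:

```ruby
# Fixture data belonging to at least two tenants, as described above.
NOTES = [
  { tenant: "acme",   body: "acme's private note" },
  { tenant: "globex", body: "globex's private note" },
]

# Stand-in for a tenant-scoped query.
def notes_for(tenant)
  NOTES.select { |n| n[:tenant] == tenant }
end

# Check each tenant in turn, which also covers switching tenants mid-test:
%w[acme globex].each do |tenant|
  found = notes_for(tenant)
  raise "no fixture data for #{tenant}"            if found.empty?
  raise "tenant data leaked into #{tenant}'s view" if found.any? { |n| n[:tenant] != tenant }
end
```

The two assertions matter equally: an empty result would make the leak check pass vacuously, so you verify both that the tenant sees its own data and that it sees nothing else.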

We have been using the one-DB-per-tenant concept for a few years, but since moving to MongoDB this is starting to create ridiculous overhead. I was a bit paranoid about switching to scoped tenancy, but it has, so far, been solid. (Mongo has pretty large overhead per database: our DB is >100GB on disk but only a few GBs exported and zipped.)


Funny. We switched from MySQL to MongoDB and Mongoid about a year ago for the same basic reason they moved the other way. :)

We had SQL migrations and "strongly typed schema" as a major source of friction in a system that's quickly evolving and iterating. I miss joins but there is really only a single query in our system where it is a significant problem.


I recommend Mongify if you want to create a branch of your app and move your data over from MySQL to MongoDB (and Mongoid). It saved my life when I migrated in that direction last year.



Any tricks for speeding up asset serving in development?

When playing with a 3.1 branch of our main Rails app at work, I experience terribly slow performance. I gather this is because Rails is serving every request for every asset (as seen in the log) instead of Unicorn doing it.

I ask since the performance I see is so bad I can't imagine any developer coping with it... hence I must be missing some crucial bit of information.

I guess the reason assets don't work like Compass (which generates static files in public) is so that I can make use of a full request object, session and all, in my assets or something?
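One setting worth trying (a guess at the situation, not a confirmed fix): in Rails 3.1 development mode, Sprockets serves every asset file as a separate request when debug mode is on; turning it off makes each manifest serve as a single concatenated file, cutting the request count dramatically at the cost of less readable line numbers when debugging JS/CSS:

```ruby
# config/environments/development.rb
config.assets.debug = false  # one request per manifest instead of one per file
```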