GitHub User: Timo614
DHH goes into Russian doll caching in a recent meetup talk he gave via teleconference. He does a pretty good job explaining how it works, but it would be interesting to see Ryan cover it as well.
Linode is less secure by default -- the box comes fairly insecure until you make some configuration changes. Ryan didn't go into it in the screencast (probably for the sake of time), but you should lock down root access on your VPS and only sudo up to it from another user. Changing your SSH port and turning off password authentication help too. Also, setting up a firewall (even just ufw) will lock it down further. You may also want to add some blacklisting software to handle repeated login attempts.
I'd follow those steps when setting up.
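As a rough sketch of those steps on an Ubuntu box (the "deploy" user and port 2222 are just made-up examples, adjust to taste):

```shell
# Sketch of the hardening steps above, run as root on a fresh box.
adduser deploy && usermod -aG sudo deploy   # sudo up from this user instead of root

# In /etc/ssh/sshd_config:
#   PermitRootLogin no          # lock down root SSH access
#   PasswordAuthentication no   # keys only
#   Port 2222                   # non-default SSH port
service ssh restart

ufw default deny incoming                   # basic firewall
ufw allow 2222/tcp                          # keep your (new) SSH port open
ufw enable
```

Just make sure your SSH key works for the new user before you close off root, or you'll lock yourself out.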
As for Heroku, it's a great solution that makes scaling very, very easy. The only problem I've had with it is that you tend to have to bend over backwards at times to get things working (asset pipeline issues, among other things), and you will bump into issues with some gems needing to be configured differently. It's also much pricier than a solution such as Linode.
It does have free plans, however, so you should really try it out and see if you like the workflow. You pretty much make a local git repository for it (or just add another remote to an existing one) and push to it. It handles bundling, getting the server up and running, etc. You can even set environment variables for the server directly from the command prompt using: heroku config:add KEY_NAME=value
Short answer: Heroku handles all that configuration for you since you don't get root access.
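The workflow looks roughly like this (the app name is a made-up example):

```shell
# Rough sketch of the Heroku workflow described above.
heroku create my-app              # creates the app and adds a "heroku" git remote
git push heroku master            # push; Heroku bundles and boots the server
heroku config:add KEY_NAME=value  # set an environment variable on the server
```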
To clarify what I meant by "in the same order" above:
It will always select the same users for tests in the same order (as you ramp up, that is -- so if you always run your tests at 20%, the same 20% of your site's users will be affected).
Ah, one last thing: this sort of implementation requires you to be testing only users. So you wouldn't be able to split-test a feature on your homepage across all visitors. That may not matter in most cases, since you can likely get a feel for whether it works from users, but if, for example, you wanted to test a new sign up form or sign up text to entice visitors, you'd be out of luck.
I like the custom implementation but it could use some work in regards to the percentage selection.
percentage ? user.id % 100 < percentage : true
It will always select the same users for tests in the same order.
So if you're running multiple tests, it's almost certain the same customers will be in each of them.
It'd be better to just randomly pick users (using the percentage as the selection rate) and then sticky-session them in some way (perhaps a cookie recording which feature values they're currently opted into) to keep them in the selected group. Then you could analyze the results from the tests independently of each other without them always intersecting.
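A rough sketch of that idea in plain Ruby (the class and cookie names are made up, not from the episode): instead of `user.id % 100`, roll the dice once per user per feature and remember the outcome. In Rails you'd persist the store in a signed cookie.

```ruby
# Sketch only: random per-feature assignment with a sticky store.
class FeatureSampler
  def initialize(store = {})
    @store = store  # stands in for something like cookies[:feature_opt_ins]
  end

  # True if this user is in the test group for the feature.
  # The first call rolls randomly at the given percentage; later calls
  # reuse the stored answer, so the user stays put for the whole test.
  def enabled?(feature, percentage)
    @store[feature] = (rand(100) < percentage) unless @store.key?(feature)
    @store[feature]
  end
end

sampler = FeatureSampler.new
first = sampler.enabled?(:new_checkout, 20)
sampler.enabled?(:new_checkout, 20) == first  # sticky: same answer every time
```

Because each feature gets its own independent roll, the 20% in one test is no longer the same 20% as in every other test.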
On top of that, with the current approach, if you tended to ship multiple faulty features, you'd always be subjecting the same users to them. If I were always part of the 1% that has features tested on it, I'd likely be annoyed if most of the rollouts were painful, and my perception of the site would be degraded. Making it more random prevents certain users from always being the guinea pigs of the lot.
Just a thought but definitely agree having a custom one may be better than having all the extra dependencies.
May be worth it to eventually build out a gem that handles some intense feature testing (alt controls to stabilize populations, quick reporting, feature value touched to determine whether the feature was touched or not by the user, multiple test features for a test, etc) but that's likely outside the scope of this.
Either way, keep up the good work man, enjoying learning Rails through these casts.
Someone correct me if I'm wrong, but I'd assume that since you have to fetch the migrations from the engine and run them on the consumer / main app, both instances would likely share the same data store (and thus this wouldn't work).
Not sure if there is a way to pass arbitrary data to the mounts to let them store data associated with a particular id (i.e. blogs 1, 2, 3, with the data associated with each). I assume you could somehow (although it sounds dirty) inspect the request and use that to determine which blog instance it is, then store / fetch its data by that means.
My gut tells me the best way to do this would rather be to:
1. Have one engine for the blog
2. Have that engine understand the idea of there being more than one blog ... so there would be a Blog model that has zero to many posts, with the posts keyed off the blog so they're associated with it
3. Have the routes somehow handle pointing to the correct blog -- whether you want that to be /blog/1-Name-of-Blog/ etc or something else is your call.
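Those three steps might look something like this inside the engine (a sketch only -- the model, engine, and route names are made up, not from the episode):

```ruby
# app/models/blog.rb -- step 2: one engine, many blogs
class Blog < ActiveRecord::Base
  has_many :posts          # a blog has zero to many posts
end

# app/models/post.rb
class Post < ActiveRecord::Base
  belongs_to :blog         # posts are keyed off blog_id
end

# config/routes.rb -- step 3: scope everything under a blog id
MyBlogEngine::Engine.routes.draw do
  resources :blogs, only: [:show], path: "blog" do
    resources :posts
  end
end
```

That would give you URLs like /blog/1/posts, and you could add a slug to get /blog/1-Name-of-Blog if you wanted.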