If you're using an HTML5-compatible browser, you might encounter a JavaScript validation message saying "Please enter an email address" for the username field around 8:33.
HTML5-compatible browsers interpret the type="email" attribute on the username input field and perform the associated email validation.
You can remove this validation by changing the username input field's type (for example, to type="text"). You can do the same thing in the views/devise/registrations/edit.html.erb file.
+1 for tests!
Never mind, I figured it out. I just added a helper function.
+1
Hi everyone,
First of all, many thanks Ryan for this amazing cast. I just wanted to know: is there a way this can be applied in a scenario where I would like to use my VPS to host multiple Rails applications?
For example, if my domain is www.myrailsapps.com and I have three apps that I want to deploy on it, they could be accessed at myrailsapps.com/app1, myrailsapps.com/app2, etc.
Anyone know if this can be accomplished with minor tweaks to this particular method?
Thanks again.
Can someone tell me how this helps? Isn't the controller, where the heavy lifting (DB calls, etc.) is done, still called on each request anyway?
Found the solution: add these to the model:

liquid_methods :name, :photo, :thumbnail, :panel_thumbnail

def thumbnail
  photo(:thumb)
end

def panel_thumbnail
  photo(:panel_thumb)
end
+1
Does anyone know how I can render a Paperclip attachment from the model? I was trying this, but it doesn't work.
I have access to the rest of the model, but get this error when it tries to render the image:
Liquid error: undefined method `to_liquid' for #<Paperclip::Attachment:0x1183e4e78>
Jasmine can have problems if you have a catch-all route at the end of your routes.rb file. To solve this, add the Jasmine route before the catch-all in routes.rb.
Hi,
I got the chart working under Twitter Bootstrap; I had to use a .js file rather than a CoffeeScript file for the code that creates the graph.
I just figured this out in the JavaScript debugger: for some reason the CoffeeScript version does not set the xkey or ykeys options; plain old JavaScript works even under Bootstrap.
I'm having issues with it too. Running under Bootstrap I get an error in the JavaScript console: "undefined is not an object (evaluating _ref.length)".
I'm using it with the test data, not even a dynamic dataset.
How did you get this 'repeat' command?
Is there a way to set the default sort direction of certain columns to descending?
For example, a Popularity column that uses the number of hits: right now the least popular item comes first when you click it once...
Thanks in advance!
It deployed correctly the first time, but after the second deploy it stopped working. I was getting problems with nginx and Unicorn.
The problem was related to permissions for the unicorn_blog service. In deploy.rb we were starting the app with normal permissions:
run "/etc/init.d/unicorn_#{application} #{command}"
After changing it to
sudo "/etc/init.d/unicorn_#{application} #{command}"
everything started to work.
I don't know why it is necessary; I followed all the instructions correctly.
Hi,
is it possible to have a final solution for the limit function issue?
For example, how can I get just the first 3 users from my users records?
Thanks for your help.
In subscriptions.js.coffee, you need to return false to prevent the submit button from submitting the form and going to the next page.
I'm using Rails 3.2 and I got this error: undefined method `[]' for nil:NilClass. Can you please explain why it happens?
I would like to export the filtered results as well. What are the few lines I'd need? I appreciate the help...
For me Product::to_csv did not successfully get called. Instead, a comma-separated list of objects was rendered, e.g.
#<MyObject:0x007fb05ee9eb00>,#<MyObject:0x007fb05ee9e4e8>
(which is the same output as when there is no def self.to_csv).
I was only able to use this design by changing
format.csv { send_data @products.to_csv }
to
format.csv { send_data Product.to_csv(@products) }
ruby-1.9.3-p125
Rails 3.2.8
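A self-contained sketch of the Product.to_csv(records) variant described above. Product here is a Struct stand-in (the real model's columns will differ); the point is that to_csv receives the possibly-filtered records explicitly instead of being called on the relation.

```ruby
require 'csv'

# Struct stand-in for the real ActiveRecord model.
Product = Struct.new(:name, :price) do
  # Class-level CSV export that takes the records explicitly, matching
  # the format.csv { send_data Product.to_csv(@products) } call above.
  def self.to_csv(records)
    CSV.generate do |csv|
      csv << members.map(&:to_s)          # header row from the columns
      records.each { |r| csv << r.to_a }  # one row per record
    end
  end
end

csv_data = Product.to_csv([Product.new("Widget", 9.99), Product.new("Gadget", 5)])
```

Because the records are passed in, the same method works for filtered result sets as well.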
Agreed: there's no need to use first if you use index_by. The hash values are instances, as expected.
Another option: use each_with_object.
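A quick plain-Ruby illustration of both approaches. Item is a hypothetical stand-in for the model; index_by itself comes from ActiveSupport, so its core-Ruby equivalent is shown instead.

```ruby
# Hypothetical stand-in for the ActiveRecord model.
Item = Struct.new(:code, :label)
items = [Item.new("a", "Apple"), Item.new("b", "Banana")]

# ActiveSupport: items.index_by(&:code). Core-Ruby equivalent:
by_code = items.map { |item| [item.code, item] }.to_h

# each_with_object builds the same instance-valued hash explicitly.
by_code_alt = items.each_with_object({}) { |item, hash| hash[item.code] = item }
```

Either way the hash values are the instances themselves, so no trailing .first is needed.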
We are trying to make the sample code work with post and user (instead of product and category), with user has_many :posts and a post belongs_to :user. However, we get the network error below (even when source: was changed to source: ["choice1", "choice2"] and data: { autocomplete_search: @users.map(&:name) } in the view file):
"NetworkError: 404 Not Found - http://localhost:3000/ajax/users?term=cha"
Any idea how to fix the error?
Anyone know how to nicely fadeIn the results? Thnx!
I find it better to put the global scope directly into the controller. That way everything outside it works the same way as before.
OK, so the purpose is to reuse cached fragments between production deploys. In Capistrano terms, store them in shared, rather than current. That makes sense.
It still seems absurd to jump through those hoops in order to test that caching works in development mode. Testing gets great attention in the Ruby community, so I'm surprised we don't have better coverage of this. I took a stab at it three years ago with my Banker plug-in, but haven't maintained it. Perhaps I should turn it into a gem and modernize it.
There's also an interesting talk on the same topic from RailsConf 2012: "Schemaless SQL: The Best of Both Worlds".
You can also make use of routes.rb to perform something similar to this and most subdomain setup work:

# Subdomain handling via a route constraint
constraints(Domain) do
  # Regular routes that scope on subdomains go in here
end
# Routes to use if no tenant subdomain exists

In the above example, Domain is a class with a method named matches?. More about using this hook (which runs very early in the request process) to set up the tenant can be found in the Rails guides, starting here:
http://guides.rubyonrails.org/routing.html#request-based-constraints
We trigger all subdomain/tenant setup from this kind of route constraint.
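A minimal sketch of such a constraint class, as described above. The class name matches the comment; the reserved list and the empty-string check are illustrative assumptions.

```ruby
# Route constraint: routes wrapped in constraints(Domain) only match
# when a non-reserved tenant subdomain is present.
class Domain
  RESERVED = %w[www admin]  # illustrative reserved-subdomain list

  # Rails passes the request object; request.subdomain is "" when absent.
  def self.matches?(request)
    sub = request.subdomain.to_s
    !sub.empty? && !RESERVED.include?(sub)
  end
end
```

In routes.rb this would then be used as constraints(Domain) do ... end, with the fallback routes for www/no-subdomain declared after the block.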
I figured it out. Stupidly, my yield was inside of my if condition!
+1
Once again very useful, Ryan. Thank you!
Does this work in concert with a Twitter Bootstrap modal? I'd like to have my steps displayed inside a popup modal rather than render a new page. Can it be hacked to respond to JS requests?
Yes, that's right. So it will be used for both the sandbox and production environments?
And another gem, closure_tree: https://github.com/mceachen/closure_tree
"Closure Tree is a mostly-API-compatible replacement for the ancestry, acts_as_tree and awesome_nested_set gems".
Does anyone have an idea how to make this work with reserved subdomains? I want to render a StaticPages controller if no subdomain is present, or if it's a reserved one.
Originally I scoped everything through a current_account method, as Ryan briefly mentioned in this episode. I only asked for the current_account if the subdomain was not reserved, or if there was no subdomain:

def current_account
  if request.subdomain.present? && !Account.reserved_subdomain?(request.subdomain)
    @account ||= Account.find_by_subdomain!(request.subdomain)
  end
end

This meant www or no subdomain could be routed to a controller of my choice. Using Ryan's approach, however, prevents the view from rendering when no subdomain is specified, or when it's reserved.
Any ideas how to get around this limitation? This is what I get, but the page is completely blank:
Started GET "/" for 127.0.0.1 at 2012-10-22 21:47:06 -0200
Processing by StaticPagesController#home as HTML
Completed 200 OK in 0ms (ActiveRecord: 0.0ms)
hiredis failed to build during bundle, so I updated Rails to 3.2.8 (current at this time). Problem solved.
But then Faye crashed with "undefined method `close_connection_after_writing'" any time it was messaged. The solution was to add

Faye::WebSocket.load_adapter('thin')

to private_pub.ru before the PrivatePub.load_config line. This answer came from issue #39, which claims this is fixed in the private_pub gem. That's probably true, but the source for this example project was built before that change.
@denispeplin mentions this above via the related Faye issue #128.
What changes are needed so that you can select the country and then select the city, instead of the state/region?
Also, this is exactly why people use client-side frameworks like Backbone, Ember, Spine, etc. Instead of having the server spend time rendering it, it just sends the JSON data to the client's browser, where the framework figures out how to render it.
Heh, I answered my own question. In my case, I grouped my events by start.to_date, and then took the keys and paginated that array. This allowed me to return the first page of so many dates, and then the next page, and it doesn't matter how many calendar days fall between them (so I don't get pages of empty data for gaps in the dates).
tl;dr: I figured it out. Thanks!
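The approach described above can be sketched in a few lines: group events by date, then paginate the array of dates rather than the events, so gaps between dates never produce empty pages. Event and the sample data are illustrative.

```ruby
require 'date'

# Illustrative stand-in for a calendar event record.
Event = Struct.new(:name, :start)

events = [
  Event.new("standup", DateTime.new(2012, 1, 1, 9)),
  Event.new("review",  DateTime.new(2012, 1, 1, 14)),
  Event.new("demo",    DateTime.new(2012, 3, 9, 10)),  # long gap before this one
  Event.new("retro",   DateTime.new(2012, 5, 2, 16)),
]

# Group by calendar date, then paginate the dates themselves.
grouped  = events.group_by { |e| e.start.to_date }
per_page = 2
date_pages = grouped.keys.sort.each_slice(per_page).to_a

# Page one covers the first two dates that actually have events,
# no matter how many empty calendar days sit between them.
page_one = date_pages.first.map { |date| [date, grouped[date]] }
```

With a real pagination library you would paginate the sorted keys array the same way and then look up each page's events from the grouped hash.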
I've been working on a multi-tenant app for about four years now. I didn't know it was called that, and I had seen so little written about it that I made it all up from scratch. And it's been ridiculously complex, especially with a myriad of roles per tenant and several tenant types. Although mine works better without subdomains, I think I can apply the default scope to my current methodology for enhanced security and protection against developer forgetfulness.
Thanks Ryan!
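The default-scope idea mentioned above can be shown in plain Ruby: every lookup is filtered by the current tenant, so a forgotten where() can't leak another tenant's rows. Tenant and Widget are illustrative names, and an Array stands in for the database.

```ruby
# Holds the tenant selected for the current request.
class Tenant
  class << self
    attr_accessor :current_id
  end
end

class Widget
  RECORDS = []  # in-memory stand-in for the widgets table
  attr_reader :tenant_id, :name

  def initialize(tenant_id, name)
    @tenant_id, @name = tenant_id, name
    RECORDS << self
  end

  # The default_scope analogue: every "query" is tenant-filtered.
  def self.all
    RECORDS.select { |w| w.tenant_id == Tenant.current_id }
  end
end
```

In Rails this corresponds to something like default_scope { where(tenant_id: Tenant.current_id) }, typically defined once in a shared module and included in every tenant-owned model.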
Because when it comes time to push to production, you would either have to update all your versions in each render call or manually clear your cache (something like Rails.cache.clear).
Using cache_digests you not only get auto-expiration of what should expire, but also get to keep all your other cached items.
It just makes deployment much simpler.
Rendering the ERB can be fairly intensive and time-consuming as well.
Using this method gets you down to sub-100ms page load times.
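The digest mechanism discussed above can be shown in miniature: the fragment cache key embeds an MD5 of the template source, so deploying an edited template produces a new key and the stale fragment is simply never read again, with no version bumping or Rails.cache.clear required. The fragment_key helper is illustrative, not the gem's actual API.

```ruby
require 'digest/md5'

# Illustrative sketch: a cache key that combines the record's cache key
# with a digest of the template source, so a template edit changes the key.
def fragment_key(record_cache_key, template_source)
  "views/#{record_cache_key}/#{Digest::MD5.hexdigest(template_source)}"
end

old_key = fragment_key("projects/1-20121001", "<p><%= project.name %></p>")
new_key = fragment_key("projects/1-20121001", "<h1><%= project.name %></h1>")
```

The old fragment stays in the store until it is evicted, but nothing looks it up again, which is why unrelated cached items survive a deploy untouched.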
I don't understand the purpose of this gem at all.
Templates don't (or, at least, shouldn't) change in production, so there are no changes to detect.
Templates do change in development; however, this gem requires extra code (explicit partial and collection arguments to render) and changes in developer behavior (frequent server restarts).
I've wondered for years why there isn't better (any?) support for testing caching. Wouldn't it be better to have that than to jump through these hoops to test caching in development mode?
I don't think you can argue that something that switches DBs is less prone to error and attack than something that is query-based... I prefer it, personally, and have used it for many years.
See my comment above about testing, where I mention why I changed... TL;DR: the app grew, and many DBs became an overhead we could not afford.
I should also note that it was not a huge thing for me to add a default scope (in a module, by the way) and include it in all models. Then it was just a matter of setting the right tenant association on existing data before migrating it all to a single DB. The testing was probably what I spent the longest on.
I recently created Capybara and RSpec tests for an entire app specifically to verify that multi-tenancy does not leak data.
Basically I got it down to:
* Setting up a tenant that the tests use. For me a subdomain which is created in a before filter.
* Making sure there is fixture/factory data belonging to at least two tenants
Then you just check to make sure that models don't find data for the "other" domain, views don't display that data, and so on. They are not the most fun tests to write, but it does feel a lot safer knowing I'm pretty sure I don't leak data.
I also went down the anal-retentive path of switching tenants mid-test and verifying that I only saw the correct data there too.
We have been using the one-DB-per-tenant concept for a few years, but since moving to MongoDB this is starting to create ridiculous overhead. I was a bit paranoid about switching to a scoped tenancy, but it has, so far, been solid. (Mongo has pretty large overhead for databases: our DB is >100GB on disk but only a few GBs exported and zipped.)
+1 for TESTS!!! I found this to be quite hard to test. The complexity grows when you have an authentication system on top of the subdomains.
In this blog post (http://railscraft.tumblr.com/post/21403448184/multi-tenanting-ruby-on-rails-applications-on-heroku) the author recommends row-based multitenancy over schema-based and gives the following pros and cons:
Summary of row-based pros & cons
Pros:
* simpler implementation
* ability to use existing Rails migration tools
* use of Postgres views/rules & Rails 3 scopes to enforce tenancy
* faster pg_dump; typical use-case optimization in the DB parser/planner for SELECT queries
* no need to jerry-rig IDs
* no monkey-patching of Rails AR (maybe just the connection)
Cons:
* might break down at 20 to 50M records in a single table?
* difficulty in partitioning?
Hey all, like @Mark says above, be sure to use reputation_value_for.
Also, the reputation system requires different names for all types of reputations (i.e. users and haikus can't share the same reputation name), so the code from the example would be more like:
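For reference, distinct reputation names can be declared roughly like this, per the gem's documented API; the :karma name is an illustrative choice, not from the original comment.

```ruby
# app/models/haiku.rb: haikus keep the :votes reputation.
has_reputation :votes, source: :user, aggregated_by: :sum

# app/models/user.rb: the user-level reputation needs its own name,
# aggregated from the votes on the user's haikus.
has_reputation :karma,
  source: { reputation: :votes, of: :haikus },
  aggregated_by: :sum
```

With distinct names, reputation_value_for(:votes) on a haiku and reputation_value_for(:karma) on a user no longer collide.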
I'm fairly new to all of this, but yes, I believe caching the template renders any ERB conditionals contained within the cached portion useless (except for the first time the cache is created). You can get around this using JS and HTML5 data attributes.
First, create your 'sensitive link' (in this case, we'll wrap it in a div named delete-btn) and use CSS to hide the link. Then assign a data attribute to the div.
Of course, this assumes the User model has an admin attribute.
Then it's as easy as fetching the admin attribute with JS and writing a quick if statement that checks whether the admin data attribute is true. If so, unhide the link.
Angelo
I'm wondering what the purpose of caching an individual task is. It seems as though caching the project view may be sufficient.
I understand the benefit of caching an individual task: when a project is re-rendered (after a task changes), most tasks will still be cached. But does rendering a task really take that long?
I think the answer to my question lies in how we avoid hitting the database when using fragment caching. I haven't fully thought it through, though.