So that's what 37signals calls Russian Doll Caching?
Yeah, I believe that's what they're doing.
Would there be any gotchas when doing this with sensitive links that are shown or hidden depending on whether a user is an admin?
The SvN (Signal v. Noise) blog discusses that. The short answer is you can't use conditionals whose output can change when the template itself doesn't. That impacts system design a bit.
You could include the things that can change in the cache key, like <% cache [post, user.admin?] do %>, but then you lose some of the automatic cache invalidation that cache digests provide when nesting cache fragments.
I'm fairly new to all of this, but yes, I believe caching the template renders any ERB conditionals contained within the cached portion useless (except the first time the cache is populated). You can get around this using JS and HTML5 data attributes:
First, create your 'sensitive link' (in this case, we'll wrap it in a div with the class delete-btn) and use CSS to hide the link. Then assign a data attribute to the div:
<div class="delete-btn" data-admin="<%= user.admin? %>">
Of course, this assumes the User model has an admin attribute.
Then, it's as easy as fetching the admin attribute using JS and writing a quick if statement that checks if the admin data attribute is true. If so, unhide the link.
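A minimal sketch of that setup (class names and the `post`/`user` helpers are assumptions, not from this thread). One gotcha worth noting: the data attribute itself has to live outside the cached fragment, otherwise the first viewer's admin flag gets baked into the cache.

```erb
<%# Outside the cached fragment, so it is evaluated on every request: %>
<body data-admin="<%= user.admin? %>">

  <% cache post do %>
    <%# Inside the cached fragment: the link ships hidden for everyone. %>
    <div class="delete-btn" style="display: none">
      <%= link_to "Delete", post_path(post), method: :delete %>
    </div>
  <% end %>

  <script>
    // Reveal admin-only links when the per-request flag says so.
    if (document.body.dataset.admin === "true") {
      document.querySelectorAll(".delete-btn").forEach(function (el) {
        el.style.display = "";
      });
    }
  </script>
</body>
```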
This seems highly insecure. What if I simply inspect your page in my browser and alter the HTML?
I suppose you would be validating credentials in controllers as well though...
On the other hand, what if it's sensitive data? Aren't there valid cases where the HTML should never be delivered to a subset of users?
I'm wondering what the purpose of caching an individual task is? It seems as though caching the project view may be sufficient.
I understand the benefit of caching an individual task is that when a project is re-rendered (after a task changes), most tasks will still be cached. But does rendering a task really take that long?
I think the answer to my question lies around how we avoid hitting the database when using fragment caching. I haven't fully thought it through, though.
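For reference, the nested ("Russian doll") arrangement being discussed looks roughly like this (partial and model names are assumed): when one task changes, only that task's fragment and the outer project shell are re-rendered; every other task is served straight from the cache.

```erb
<% cache project do %>
  <h1><%= project.name %></h1>
  <ul>
    <% project.tasks.each do |task| %>
      <% cache task do %>
        <li><%= render task %></li>
      <% end %>
    <% end %>
  </ul>
<% end %>
```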
Rendering the ERB can be fairly intensive and time-consuming as well.
Using this method gets you down to sub-100ms page load times.
Also, this is exactly why people use client-side frameworks like Backbone, Ember, Spine, etc. Instead of having the server spend the time rendering, it just sends the JSON data to the client's browser, where the framework figures out how to render it.
@cpuguy83 You're absolutely right, that's one big advantage of client-side MVCs.
By sticking with mostly Rails, you can avoid having to test/debug lots of JS, devising a strategy for sharing views between Rails and JS, logic duplication in Backbone models and Rails models, etc.
I think caching + Rails UJS + RJS (Rails JS templates) is a winning combo for many apps. That's why cache_digests is sweet :-)
I'm even starting to wonder if N+1 issues can be mitigated with fragment caching instead of always having the server do the includes in the action.
I don't understand the purpose of this gem at all.
Templates don't (or, at least, shouldn't) change in production, so there are no changes to detect.
Templates do change in development, however this gem requires extra code (explicit partial and collection arguments to render) and developer behavior changes (frequent server restarts).
I've wondered for years why there isn't better (any?) support for testing caching. Wouldn't it be better to have that than to jump through these hoops to test caching in development mode?
Because when it comes time to push to production, you would either have to update all your versions in each render call or manually clear your cache (something like Rails.cache.clear).
Using cache_digests you not only get auto-expiration of what should expire, but also get to keep all your other cached items as well.
It just makes deployment much simpler.
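As a rough sketch of why no manual version bumps are needed (my own simplification, not the gem's actual code): the template's source is hashed into the cache key, so editing a template yields a new key on deploy and the stale fragment is simply never read again.

```ruby
require "digest/md5"

# Simplified stand-in for what cache_digests adds to a fragment cache key:
# the record's cache_key plus an MD5 digest of the template source.
def fragment_key(record_key, template_source)
  "views/#{record_key}/#{Digest::MD5.hexdigest(template_source)}"
end

before = fragment_key("projects/1-20130201", "<%= project.name %>")
after  = fragment_key("projects/1-20130201", "<b><%= project.name %></b>")

before == after # => false: the edited template expires itself automatically
```

Unchanged templates keep producing the same digest, so every other cached fragment survives the deploy untouched.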
OK, so the purpose is to reuse cached fragments between production deploys. In Capistrano terms, store them in shared, rather than current. That makes sense.
It still seems absurd to jump through those hoops in order to test that caching works in development mode. Testing gets great attention in the Ruby community. I'm surprised we don't have better coverage over this. I took a stab at it three years ago with my Banker plug-in, but haven't maintained it. Perhaps I should turn it into a gem and modernize it.
But there are no hoops.
You install this gem and it does the work for you.
Can someone tell me how this helps? Isn't the controller being called on each request, where the heavy lifting (DB calls, etc.) is done anyway?
This is actually the problem caching is trying to solve. Instead of reading from your DB and rendering partials, Rails is smart enough to look for a cached fragment which, if it exists, will be used for the response.
Only if information critical to the request has changed will a new version be created.
Hope this helps.
cache_digests and fragment caching are one piece of the puzzle.
Rendering out templates is actually pretty heavy on the server as well. Using this method will get you closer (performance-wise) to using a full client side JS solution, which takes all the fun out of using Rails.
If your controller actions are heavy, then consider caching the underlying data as well.
In various places in my app I'm using:
1. Just fragment caching
2. Fragment caching + action caching
3. Model caching + fragment caching
I'm not too fond of action caching, but it does speed things up tremendously if you can use it.
How do you force the cache to expire, for example when the list of items is updated?
You aren't caching the lists themselves, just the individual objects.
When you set up your associations, you use the :touch => true option so the related object gets touched, thus automatically expiring its cache.
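A plain-Ruby sketch of what that buys you (class names assumed; in a real app you'd just declare `belongs_to :project, touch: true` on Task): saving a task bumps the parent's updated_at, which changes the parent's cache_key and so expires its cached fragment.

```ruby
# Plain-Ruby simulation of ActiveRecord's touch: true behavior.
class Project
  attr_accessor :updated_at

  def initialize(updated_at)
    @updated_at = updated_at
  end

  # Mirrors ActiveRecord's cache_key: it changes whenever updated_at does.
  def cache_key
    "projects/1-#{@updated_at.strftime("%Y%m%d%H%M%S")}"
  end
end

class Task
  def initialize(project)
    @project = project
  end

  # Saving the task "touches" the parent, as touch: true would.
  def save(now)
    @project.updated_at = now
  end
end

project = Project.new(Time.utc(2013, 2, 1))
old_key = project.cache_key
Task.new(project).save(Time.utc(2013, 2, 2))
project.cache_key == old_key # => false: the cached project fragment expires
```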
So how would you cache json? I think I am missing something obvious.
This is specifically for fragment caching.
JSON would need to get cached by directly calling Rails.cache.fetch in your model or controller.
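A minimal sketch of that idea, using a plain hash as a stand-in for the cache store (in Rails you'd call Rails.cache.fetch with the record's cache_key): the JSON is generated once per key, and the cached string is served after that.

```ruby
require "json"

CACHE = {}

# Stand-in for Rails.cache.fetch: run the block only on a cache miss.
def cache_fetch(key)
  CACHE.fetch(key) { CACHE[key] = yield }
end

post = { id: 1, title: "Hello" }
generated = 0

2.times do
  cache_fetch("posts/#{post[:id]}/json") do
    generated += 1
    post.to_json
  end
end

generated # => 1: the JSON was built only on the first call
```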
In order to formulate the proper cache key for some template so that we can check Memcached, does that not require 1+ DB calls (more for nested templates that use some db objects) to fetch the updated_at value on every request?
If so, doesn't this/these mandatory call(s) somewhat offset the performance benefits a bit?
If not, is it implied that the cache_digests gem creates static HTML files specific to every call and somehow tags them with specific ETags (though this method yields no benefit for users viewing the page the first time)?
No, the updated_at timestamp comes from the object you've already pulled; specifically, it's generated using the cache_key method on your object. Then an MD5 of your template is generated to see if it's changed, and that is added into the key stored in Memcached.
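To illustrate the first half of that (a simplified version of ActiveRecord's cache_key, not its exact implementation): the key is derived entirely from attributes already in memory, so no extra query is issued.

```ruby
# Simplified cache_key: model name, id, and the in-memory updated_at.
def cache_key(model, id, updated_at)
  "#{model}/#{id}-#{updated_at.utc.strftime("%Y%m%d%H%M%S")}"
end

cache_key("posts", 1, Time.utc(2013, 2, 1, 12, 0, 0))
# => "posts/1-20130201120000"
```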
It should be noted that this gem isn't aware of I18n. Because of this, you should always write something like
<% cache [I18n.locale, @post] do %>
<%= t(:post_created_at) %> <%= @post.created_at %>
<% end %>
in multilingual applications, or write a helper for this. And there are no plans to support I18n in the future.
Is there a way to use this for pages that contain mostly static content, that is only updated whenever the app is updated?
I have some static pages (tutorial, help, contact, about, etc.) that use embedded Ruby for some things, but should otherwise be cached.