
Recent Comments

Avatar

Agh, that could be it. I'm running 9.0.4; I'll update and confirm whether or not this is the problem. Thanks!

Avatar

OK, now I have read the documentation; it says:

# Generate JavaScript
rails generate backbone:install --javascript

Avatar

I suspect this has been resolved since your post is nearly two weeks old, but make sure to restart your app after adding bcrypt-ruby to your Gemfile and running bundle install. Not sure if that was the actual cause of your issue, but it resolved the same error on my end.

Avatar

Yep!!

It is a problem! When you look at the config/unicorn.log file, you will see that the release number is the wrong one, not the current one, so Unicorn cannot find its gems.

I would really really appreciate any help on that.

Avatar

You need to rsync using ssh and your keyfile, like this:
rsync -r <dir> -e "ssh -i <your pem file>" --rsync-path='sudo rsync' ubuntu@<your ip>:/var/chef

Avatar

Does loading the QC functions once with a migration work for dev environments? We have it set up so we drop and reload them around db:schema:load and load them after db:test:load.

We are working to add support to allow symbols in job params (e.g. hash keys or values) when enqueuing jobs in QC (by turning them into strings before encoding with OkJSON).
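
For anyone curious what that symbol-to-string conversion might look like, here is a rough standalone sketch. The helper name and approach are purely illustrative, not queue_classic's actual code:

```ruby
require "json"

# Illustrative helper (not queue_classic's API): deep-convert symbol keys
# and values so the params survive a JSON round-trip intact.
def deep_stringify(obj)
  case obj
  when Hash   then obj.each_with_object({}) { |(k, v), h| h[deep_stringify(k)] = deep_stringify(v) }
  when Array  then obj.map { |v| deep_stringify(v) }
  when Symbol then obj.to_s
  else obj
  end
end

params  = { action: :resize, sizes: [:small, :large] }
encoded = JSON.generate(deep_stringify(params))
```

After decoding, the worker sees string keys and values rather than symbols, which is what a plain JSON round-trip would produce anyway.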

We have a queue for sending emails, so we'll have to investigate using workers that aren't running the full environment for those to save memory. Great idea.

One thing to watch out for when using QC (or any background job manager) with Rails is jobs enqueued from after_create / after_save callbacks that use the new record. We've found that you must use after_commit instead to ensure that the record has been committed to the DB before the worker tries to find it by ID.
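
To make that race concrete, here is a toy, Rails-free simulation (every class here is invented for illustration): a job enqueued during after_create runs before the transaction commits and cannot see the row, while a lookup after commit succeeds.

```ruby
# Toy stand-in for a database where rows only become visible on commit.
class FakeDB
  def initialize
    @committed = {}
    @pending   = {}
  end

  def insert(id, row) = @pending[id] = row
  def commit!         = (@committed.merge!(@pending); @pending.clear)
  def find(id)        = @committed[id]  # a worker in another process sees committed data only
end

db   = FakeDB.new
jobs = []

# Simulating after_create: still inside the transaction.
db.insert(1, "new record")
jobs << 1
found_too_early = db.find(jobs.last)   # worker runs now: row not yet visible

db.commit!                             # this is when after_commit would fire
found_after_commit = db.find(jobs.last)
```

The first lookup returns nil, the second finds the record, which is exactly why after_commit is the safe place to enqueue.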

Avatar

Great screencast! I'm wondering how I would configure the deployment to support an app that offers users Basecamp-style subdomains.

Avatar

A note if you're using nginx compiled via Passenger: it won't compile in the gzip_static module. You need to pass it as a configure option. http://dennisreimann.de/blog/configuring-nginx-for-the-asset-pipeline/

Avatar

Thanks Ryan. I started typing out two issues I was having before I did a thorough check of tabs vs. spaces in my coffee file. A quick convert to my preferences fixed both. All, just be careful about this if you get totally unexpected, weird errors :).

Avatar

Hi Ryan,

Please do an episode on CRUD in DataTables and its export features.

Avatar

Thx Ryan, I very much enjoy all the PostgreSQL episodes!

Avatar

This screencast kind of convinced me that hstore is not worth the hassle :(. Integrating it doesn't look elegant, and Mongo or something else looks way better.

In any case, thank you Ryan for focusing on Postgres and for all your work.

Avatar

Did something happen to the source code repository on GitHub? I'm getting a 404 from the link... :^(

Avatar

There are definite feature, performance, and scaling tradeoffs versus a dedicated search engine like Solr or Sphinx. I'll compare to Solr, since that's where my expertise lies (I'm a cofounder of websolr).

Relative advantages for Postgres:

  • Reuse an existing service that you're already running instead of setting up and maintaining something new.
  • Much better search performance than SQL LIKE.

Relative advantages for Lucene/Solr:

  • Scale your indexing and search load separately from your regular database load.
  • More efficient indexing-time performance (microlithic segments vs. monolithic B-tree).
  • More flexible term analysis for things like accent normalizing, linguistic stemming, N-grams, markup removal.
  • Probably faster search performance for common terms or complicated queries.
  • Much better term relevancy ranking -- faster, and cheap to customize.
  • More flexible data model and better tolerance for data model changes.
  • Phrase search.

Other Postgres TODOs that Lucene/Solr handle just fine: http://www.sai.msu.su/~megera/wiki/FTS_Todo

Clearly I think a dedicated search engine is the better option here. But at least if you're using LIKE, then Postgres full-text search is a clear upgrade :)

Avatar

I plan to watch this video when I am at the point of coding my Rails project. However I am curious if someone who has watched the video can tell me which version of Rails & Thinking Sphinx are being used for this video. I am learning Rails 3.2.3. I currently have gem Thinking-Sphinx 2.0.10 in my Gemfile.

Avatar

Have you tried using the Mailgun or CloudMailIn add on?

Avatar

Can I use backbone-on-rails without the coffee-script gem?

Avatar

PostgreSQL specific features used include: SELECT FOR UPDATE NOWAIT and LISTEN/NOTIFY. Take a look at the PL/pgSQL function that is called to lock a job. https://github.com/ryandotsmith/queue_classic/blob/master/sql/ddl.sql

Avatar

Awesome update on this technique, Ryan. And thanks everyone for the additional gem suggestions (cocoon and awesome_nested_fields).

Avatar

Nice post Ryan as always.

I noticed that if I want to set properties with a multi-byte string key or a key string containing spaces, the following format did not work.

p.properties = {rating: "PG-13", runtime: 107}

I needed to write it like this in my environment.

p.properties = {"Highest Rating" => "PG-13", "時間" => "107"}
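
Since hstore stores both keys and values as text, one way to avoid surprises is to normalize the hash before assigning it. This is a hypothetical helper, not part of any gem:

```ruby
# Hypothetical helper: hstore stores text keys and text values, so coerce
# everything to strings before assigning to the hstore-backed attribute.
def to_hstore_hash(hash)
  hash.each_with_object({}) { |(k, v), out| out[k.to_s] = v.to_s }
end

# Mixed symbol / multi-byte string keys all come out as plain strings.
props = to_hstore_hash(rating: "PG-13", "時間" => 107)
```

With a helper like this, `p.properties = to_hstore_hash(...)` accepts symbols, spaces, and multi-byte keys uniformly.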

Avatar

I might be missing something obvious here, but I ran through this tutorial on my own app and then downloaded and set up the Blog After from this screencast. In both cases, when I do a search with more than one word... say for example "Lex Luthor"...
I get the following error. A one-word search is no problem, but words with spaces fail:

PG::Error: ERROR: syntax error in tsquery: "Lex Luthor"
LINE 1: ...articles" WHERE (to_tsvector('english', name) @@ 'Lex Lutho...
^
: SELECT "articles".* FROM "articles" WHERE (to_tsvector('english', name) @@ 'Lex Luthor' or to_tsvector('english', content) @@ 'Lex Luthor') ORDER BY ts_rank(to_tsvector(name), plainto_tsquery('Lex Luthor'))
desc LIMIT 3 OFFSET 0
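
The error happens because the raw user input on the right of @@ is parsed as tsquery syntax, and a space is not valid there. Wrapping the input in plainto_tsquery, which accepts plain text, avoids this. Here is an illustrative sketch (the method name and columns are hypothetical) of building such a condition:

```ruby
# Hypothetical sketch: build a search condition that uses plainto_tsquery so a
# multi-word query like "Lex Luthor" is parsed as plain text, not tsquery syntax.
def search_condition(columns)
  columns.map { |col|
    "to_tsvector('english', #{col}) @@ plainto_tsquery('english', :q)"
  }.join(" or ")
end

sql = search_condition(%w[name content])
```

In the model this could then be used as `where(search_condition(%w[name content]), q: params[:query])`, letting ActiveRecord bind the user's input safely.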

Avatar

So... after some tinkering I discovered a few things:
1. transformation_command returns an array.
2. the resize comes before the crop in the array, which means the image is resized and then cropped, which obviously won't work.

This very simple hack fixed the expected functionality:

ruby
def transformation_command
  if crop_command
    arr = super
    p "un modded #{arr}"
    arr[3] = crop_command
    p "modded #{arr}"
    [arr[2], arr[3], arr[0], arr[1], arr[4]]
  else
    super
  end
end

def crop_command
  target = @attachment.instance
  if target.cropping?
    "\"#{target.crop_w}x#{target.crop_h}+#{target.crop_x}+#{target.crop_y}\""
  end
end

The array varies depending on the transform, so this is indeed an unreliable HACK.
I'd fix it, but I think I'll explore a CarrierWave solution instead.

Avatar

If you use Kaminari instead of will_paginate, just replace total_entries with total_count and per_page(per_page) with per(per_page) in the ProductsDatatable class.
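
If you need the datatable to work with either gem, one option is to branch on the pagination API the scope actually provides. This adapter is purely illustrative, not from the episode:

```ruby
# Illustrative adapter: Kaminari relations respond to #per (and #total_count),
# while will_paginate relations respond to #per_page (and #total_entries).
# Pick whichever API the scope exposes.
def paginate(scope, page, per_page)
  if scope.respond_to?(:per)
    scope.page(page).per(per_page)        # Kaminari
  else
    scope.page(page).per_page(per_page)   # will_paginate
  end
end
```

The datatable code can then call `paginate(Product.order(...), page, per_page)` without caring which pagination gem is installed.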

Avatar

Check your database.yml file. Rake tries to connect to the DB.

Avatar

I don't see any PostgreSQL specific features here.

MySQL and any other RDBMS could be used to achieve the same thing, AFAICT.

Avatar

The best 9 dollars a month I have spent in my whole life...

Avatar

Is this possible when using collection.build, for example?
@board.users.adding_board_user = true
@scoreboard.users.build(username: val[:username])
@board.save

adding_board_user is the conditional validation that works when set like the following in the User class:
@user.adding_board_user = true

I just want to apply the same conditionals when saving a user from the board class.

Avatar

Make sure you are using the latest homebrew install of postgres (9.1.x) on Mac.

On Ubuntu/Debian you will probably need the postgresql-contrib package installed for extensions.

Avatar

THANK YOU! I wrote my college Software Engineering project using the old version of this video, and this new approach makes it so much simpler! Keep them coming.

Avatar

Ryan -- I'm a big fan of this site, as you know from our email exchanges. So, I offer some friendly feedback from a moderate level Rails developer.

This cast was a nightmare to undo. I had no idea about either the dependency on Devise or the initializers the install method creates.

As you already know, you have a huge middle (moderate level) dev community you could reach. I'd personally pay almost anything to have things cast at my level.

The Bell Curve of Rails development reaches people like me -- and you should tap the market that is most willing to pay.

I know you are hesitant to recommend best-practices as to not divide the Rails community. I certainly respect that.

But the fact remains that every industry has its best practices, and I wish you and your cohorts could standardize on some so that your casts evolve from those best practices. In my view, these casts are the means to an end, and we all want to reach the end.

Nonetheless, keep up the excellent work independent of my daft request.

Kind Regards, Hunt

Avatar

awesome_nested_fields is more flexible (you can put the code anywhere) and has JS callbacks and an API.

Avatar

While I admit that QC does not have all of the features that Delayed Job boasts, I would also point out that it does not have all of the shortcomings either. QC takes job locking quite seriously and has proven to be more robust in high-volume scenarios. For instance, take a look at the PL/pgSQL function that queue_classic uses to lock a job. All of the code in that function is tuned to ensure quick access to jobs while guaranteeing that no two workers can lock the same job. While the chance that Delayed Job will allow multiple workers to lock the same job is low, it will happen eventually, especially in production environments that churn through tens of millions of jobs per day.

Also, as Ryan Bates mentioned, queue_classic is not dependent on active_record. This means a whole lot when you want to write simple ruby programs that utilize a message queue.

Finally, we use queue_classic at Heroku because we have production systems that have daily message throughput in the tens of millions. We needed a queuing system that was reliable and easy to reason about; queue_classic satisfied both of these conditions.

Feel free to reach out if you have any more questions on how queue_classic might be able to help you out. @ryandotsmith or ryan@heroku.com

Avatar

If the response comes back faster than 500ms, won't this look a little funny? Testing this in my local environment, I've had PJAX load in the response before the div had finished fading.

My workaround for the moment is to hide immediately and then fade in, which still looks better than a simple replacement.

coffeescript
$('[data-pjax-container]').bind("start.pjax", ->
  $('[data-pjax-container]').hide 0
).bind "end.pjax", ->
  $('[data-pjax-container]').fadeIn 500

Avatar

OK, so here we only need to require the mailer classes and action_mailer; works great.

Avatar

Thanks for the coverage on all the great Postgres features, Ryan. I switched from MySQL a couple of years ago and haven't looked back. This video on hstore is especially timely for the project I'm working on.

Avatar

Resque needs Redis running all the time; that's the daemon in question. queue_classic only needs the database.

Avatar

Is there any good way to expose this authentication scheme to HTTP requests (or JSON)? I'm trying to write exposed APIs for an iOS application that needs to talk to a Rails app that I used this authentication scheme in.

Avatar

I wonder how easy it would be to use this with Apartment. Apartment currently supports DJ. Also, it would be cool to see an episode on Apartment. It'd fit right into the current Postgres theme.

Avatar

It can actually take anywhere from 24-48 hours for the IPN to go out. I assume this is why in the tutorial we request payment and then create a profile: to provide instant feedback on payment rather than make the person wait before they can start using services.

Try canceling the subscription and adding a method in your IPN handling.

I found this a bit useful:

http://ianpurton.com/adding-paypal-subscriptions-to-your-rails-application

Avatar

Really nice and handy episode (as usual!), keep up the good work.
Another queue system worth trying is Sidekiq; it's quite close to Resque but doesn't spawn a process per worker. It uses Celluloid under the hood, which would be worth an episode too!

Avatar

Hey Ryan,
Fantastic coverage of Postgres. I switched to Postgres a year ago and that is my preferred database now. For the RailsCasts community, there are two more excellent references that you may want to look into:

  1. Peepcode screencast on Postgres: a bit dated but very good.

https://peepcode.com/products/postgresql

  2. Tekpub's more recent screencasts, equally good.

http://tekpub.com/productions/pg

As far as using Postgres with Rails, Railscasts videos are the best.

Bharat

Avatar

Thanks Ryan, very useful.

I have a requirement for user-definable attributes and found the pointer to Richard Schneems' example was just the ticket for me.

Really enjoying all the PostgreSQL screencasts.

Avatar

I'm experiencing an issue with require "your_app". What should be required? The name of my application does not work.

Should it run with rake or "./bin/worker"? (Neither works.)

Avatar

You mentioned that the main advantage of queue_classic over something like resque would be that you don't need a daemonized process running all the time. I'm curious what you would use to poll the queue then? Wouldn't you daemonize the rake task you had running throughout the screencast? Or would you run that or the lightweight worker script from cron?

Avatar

The answer to your question is in the screencast (between 6:30 and 6:50) and in the QC documentation: simply write your own worker "binary" (weird to use that term for a script file...) that only loads what's needed.
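
As a rough, generic illustration of such a stripped-down worker, the loop below just drains a queue of callable jobs. None of this is queue_classic's actual API; a real QC worker locks jobs in Postgres rather than popping from an in-memory array:

```ruby
# Generic sketch of a lightweight worker loop: pull a job, run it, repeat.
# Invented for illustration; a real worker would also sleep between polls
# and trap signals for clean shutdown.
class TinyWorker
  def initialize(queue)
    @queue = queue
  end

  # Process jobs until the queue is empty, collecting each job's result.
  def run
    results = []
    while (job = @queue.shift)
      results << job.call
    end
    results
  end
end

jobs    = [-> { "email sent" }, -> { "report built" }]
results = TinyWorker.new(jobs).run
```

The point of the "binary" approach is that this script can require only the queue library and the job code, skipping the full Rails environment to save memory.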