I played around with this issue, and the only solution that worked for me was adding an explicit flag to skip the processing:

```ruby
# flag so the callback can be skipped (attr_accessor added for completeness)
attr_accessor :skip_image_processing

def enqueue_image
  ImageWorker.perform_async(id, key) if key.present? && !skip_image_processing
end

def perform(id, key)
  painting = Painting.find(id)
  painting.key = key # setting the key again
  painting.remote_image_url = painting.image.direct_fog_url(with_path: true)
  painting.skip_image_processing = true
  painting.save! # enqueue_image will get called again
  painting.update_column(:image_processed, true)
end
```

Any other, more elegant ideas?
Here are the necessary model methods for any DataMapper users:

```ruby
has n, :taggings
has n, :tags, :through => :taggings

def self.tagged_with(name)
  Tag.all(name: name).articles
end

def tag_list
  tags.map(&:name).join(", ")
end

def self.tag_counts
  DataMapper.repository.adapter.select('select tags.*, t.count from tags inner join (select taggings.tag_id, count(taggings.tag_id) as count from taggings group by taggings.tag_id) t where tags.id = t.tag_id')
end

def tag_list=(names)
  self.tags = names.split(',').map do |n|
    Tag.first_or_create(name: n.strip)
  end
end
```

Hope that helps.
For those of you having trouble using routes containing subdomains in combination with Nginx (v1.2.5), you have to modify Ryan's Nginx configuration file like so:
Change this:
To this:
Restart, and you should be good to go!
The `$http_host` variable seems to only return the request subdomain.

Just wanted to point out to everyone: if you happen to be using the google-api-client gem, Roo has a name conflict with 'Google' and breaks the API client.
Took a little while to track that down..
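In case it helps others, a minimal sketch of a proxy block that preserves the full host for subdomain routing (the server names, upstream name, and exact directives here are assumptions, not the commenter's actual config):

```nginx
server {
  listen 80;
  # match the bare domain and all subdomains
  server_name example.com *.example.com;

  location / {
    # pass the original Host header through to the Rails app,
    # rather than a hard-coded host, so subdomain routes work
    proxy_set_header Host $host;
    proxy_pass http://unicorn_server;
  }
}
```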
Thanks Ryan! Nice episode.
A small question from while I was watching: what plugin do you use to swap between different applications (Safari, Terminal, and so on)? Currently I use Quicksilver, but sometimes it conflicts with vi commands. Any clues?
I set this up exactly as the RailsCast suggested. When I'm creating an identity it populates the identity table with the login information, however, it doesn't connect this data to the user table until the user actually logs in.
According to the logs it looks for a null email then throws an error "/auth/failure?message=invalid_credentials".
I'm probably missing something very simple. The only change I made was replacing first_name and last_name with name.
```
Started POST "/auth/identity/register" for 127.0.0.1 at 2012-12-13 08:28:44 -0500
   (57.1ms)  BEGIN
  Identity Exists (55.7ms)  SELECT 1 AS one FROM `identities` WHERE `identities`.`email` = BINARY 'email@email.com' LIMIT 1
  SQL (56.5ms)  INSERT INTO `identities` (`created_at`, `email`, `first_name`, `last_name`, `password_digest`, `sha_password`, `updated_at`) VALUES ('2012-12-13 13:28:44', 'email@email.com', 'first', 'last', '$2a$10$o4axANSpONo.GsIRIqpWET5FG', NULL, '2012-12-13 13:28:44')
   (56.8ms)  COMMIT
  Identity Load (55.4ms)  SELECT `identities`.* FROM `identities` WHERE `identities`.`email` IS NULL LIMIT 1

Started GET "/auth/failure?message=invalid_credentials" for 127.0.0.1 at 2012-12-13 08:28:44 -0500
Processing by SessionsController#failure as HTML
  Parameters: {"message"=>"invalid_credentials"}
Redirected to http://localhost:3000/
Completed 302 Found in 1ms (ActiveRecord: 0.0ms)

Started GET "/assets/rails.png" for 127.0.0.1 at 2012-12-13 08:28:44 -0500
Served asset /rails.png - 304 Not Modified (0ms)
[2012-12-13 08:28:44] WARN  Could not determine content-length of response body. Set content-length of the response or set Response#chunked = true
```
Hi,
Was wondering if CarrierWave and Fog can be used to upload documents to box.com using a box API_KEY and a box AUTH_TOKEN.
Thanks
Never mind, I figured it out. The code should have been:

```coffeescript
jQuery ->
  Morris.Line({
    element: 'annual'
    data: [
      {y: '2012', a: 100}
      {y: '2011', a: 75}
      {y: '2010', a: 50}
      {y: '2009', a: 75}
      {y: '2008', a: 50}
      {y: '2007', a: 75}
      {y: '2006', a: 100}
    ]
    xkey: 'y'
    ykeys: ['a']
    labels: ['Series A']
  })
```
Thank You for this video!
Try adding:

```ruby
group :production do
  gem 'less-rails'
end
```

to your Gemfile.
It seems that the SSH agent forwarding issue in JRuby was fixed as of 1.7.0.pre2.
I did just a little bit of testing, but for me PhantomJS was about 20% slower than Capybara-Webkit.
You don't end up with the QT dependency though... So that might be worth its weight in gold to some.
Any ideas on how to use Ryan's scheduler with Ajax calls to update the month, instead of the page-reload links?
How would Postgres table inheritance play into this? Couldn't it simplify the database migrations?
http://www.postgresql.org/docs/9.2/static/ddl-inherit.html
In particular:
"ALTER TABLE will propagate any changes in column data definitions and check constraints down the inheritance hierarchy. Again, dropping columns that are depended on by other tables is only possible when using the CASCADE option. ALTER TABLE follows the same rules for duplicate column merging and rejection that apply during CREATE TABLE."
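For illustration, table inheritance looks roughly like this (table and column names are made up for the sketch):

```sql
-- hypothetical parent table
CREATE TABLE events (
  id   serial PRIMARY KEY,
  name text NOT NULL
);

-- child table inherits all columns of events and adds its own
CREATE TABLE order_events (
  order_id integer NOT NULL
) INHERITS (events);

-- per the quoted docs, this propagates to order_events as well
ALTER TABLE events ADD COLUMN occurred_at timestamp;
```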
You might also want to check out this gem for importing CSV:
https://github.com/tilo/smarter_csv
It addresses the issue of reading large CSV files (I tried it with more than a million rows), and it can process them in chunks, e.g. for creating Resque jobs (if you want to decouple the processing from the CSV-file reading).
It gives more control over how the CSV is imported: it can manipulate, rewrite, or replace column headers; it can read data in chunks for more efficient post-processing (e.g. with Resque); it can deal with non-standard record separators; and more.
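The chunked pattern is easy to sketch with just the stdlib CSV, too (a toy version for illustration only; smarter_csv's real API differs):

```ruby
require "csv"
require "stringio"

# Toy sketch of chunked CSV reading: collect rows into fixed-size chunks so
# each chunk can be handed to a background job instead of loading everything.
def each_csv_chunk(io, chunk_size: 2)
  chunk = []
  CSV.new(io, headers: true).each do |row|
    chunk << row.to_h
    if chunk.size == chunk_size
      yield chunk
      chunk = []
    end
  end
  yield chunk unless chunk.empty? # flush the final partial chunk
end

chunks = []
each_csv_chunk(StringIO.new("name,qty\na,1\nb,2\nc,3\n")) { |c| chunks << c }
# chunks => [[{"name"=>"a","qty"=>"1"}, {"name"=>"b","qty"=>"2"}], [{"name"=>"c","qty"=>"3"}]]
```

Each yielded chunk would be the payload of one Resque (or similar) job.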
Did this work for you, Andrew?
I tried this but am still running into the same error. Can't seem to find the solution anywhere
Hi Ryan, nice episode.
Is it possible to speed up APIs? In episode #348 you described the Rails::API gem and, if I got the concept right, many Rails features aren't loaded in order to have faster APIs. Is there some way to exclude libraries only for the controllers related to APIs?
Thanks in advance.
Hey, I released a gem for morris.js so you don't need to add the assets manually. You can find it on GitHub (https://github.com/beanieboi/morrisjs-rails).
Am I the only one who has encoding issues? Is there a method to force the encoding in Roo? Passing encoding options to CSV is possible, but what about Roo?
If you are getting a `gem not installed` error, then do check out the RailsCasts on zero-downtime deployment. You need to send a `USR2` signal and set `preload_app=true`. More details here at Stack Overflow.

Totally agreed. Writing authentication from scratch is so simple. The ugly side of Devise comes out when you start customising your app, and you spend more time fixing such issues than you would have had you written authentication from scratch.
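For reference, the Unicorn side of that zero-downtime setup is small; a sketch (the pid-file path is an assumption):

```ruby
# config/unicorn.rb
preload_app true

# then, from the shell, signal the running master to re-exec with the new code:
#   kill -USR2 `cat tmp/pids/unicorn.pid`
```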
I have added morris and raphael inside vendor/javascripts and referenced them in application.js as well. In my orders.js.coffee I put:

```coffeescript
jQuery ->
  Morris.Line
    element: 'annual'
    data: [
      {y: '2012', a: 100}
      {y: '2011', a: 75}
      {y: '2010', a: 50}
      {y: '2009', a: 75}
      {y: '2008', a: 50}
      {y: '2007', a: 75}
      {y: '2006', a: 100}
    ]
    xkey: 'y'
    ykeys: ['a']
    labels: ['Series A']
```

for testing purposes, and added it in the index. But the chart does not appear.
and :stripe_card_token for that matter.
Question: in the controller, I can't do `@subscription = Subscription.new(params[:subscription])` because plan_id is not accessible. How did you get around this?
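If this is Rails 3's mass-assignment protection kicking in, the usual fix is to whitelist the attributes on the model; a sketch (your exact attribute list may differ):

```ruby
class Subscription < ActiveRecord::Base
  belongs_to :plan
  # allow these through mass assignment (Rails 3 attr_accessible style)
  attr_accessible :plan_id, :stripe_card_token
end
```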
I use chef server (open source) and got rid of Capistrano for my deploys. Chef has a "deploy resource" http://wiki.opscode.com/display/chef/Deploy+Resource
A few things I really like:
1) It's a pull architecture, so you can cron machines to check back in and pull updates for production. But from the CLI (knife) you can use the search feature and run whatever command you want. So for a testing env I use it like cap (and essentially have a push deploy).
2) You can roll out upgrades to your application stack as well as your code. This is really nice for testing, because it has environments, so you can roll out new recipes to testing envs and then to production. Recipes etc. are also usually maintained in a git repo.
3) I really like that it can be used in so many places. I use it to set up my laptop (which runs Ubuntu), virtual machines for testing environments, and a physical hosted server. I have also started to play with it in the cloud a little. For the physical machines I run a bootstrap command to set up the machine; with VMs and cloud there are plugins that will also provision machines. If you write your recipes to scale and add monitoring... you can actually have your app scale on its own!
I have been playing with it for the last 6 months or so (on and off). I will say that Chef, like Rails, has a learning curve, but it gets easier... and it's very easy to see the power. Infrastructure as Code: gotta love it.
Try adding `gem 'faye'` to your Gemfile and then update your faye.ru accordingly.

Because it's faster:
find_or_initialize_by_id is a dynamic finder: it uses an SQL request plus a method_missing chain to return the object. Class hierarchy traversal (to hit the correct method_missing) is much, much slower than `|| new`.
Delayed Job effectively solves only the response-delay problem. If you plan to use ActiveRecord for the data import, expect poor performance: insertion of 25k records using AR takes about 10 minutes. The situation is a little better with https://github.com/zdennis/activerecord-import: insertion of 25k records takes about 3 minutes. The fastest way is to build and execute a raw SQL request: insertion of 25k records takes about 30 seconds.
Memory efficiency is about 1-3x the CSV size (slower: less memory; faster: more memory). If you plan to import CSVs frequently, you will face an inefficient garbage-collection problem; to free up the memory you need to kill the DJ worker. We are using the following scheme in production: spawn a DJ worker, import a couple of CSV files with raw queries, respawn the worker.
And one more thing: don't use Delayed Job with Rails directly! It will load the full Rails environment for each DJ worker, and that's a lot of memory for no reason.
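To make the raw-SQL route concrete, here is a hypothetical sketch of building one multi-row INSERT instead of 25k single-row statements (the quoting below is a toy; real code should use the DB adapter's own escaping):

```ruby
# Build a single multi-row INSERT statement from an array of rows.
# NOTE: toy quoting only (doubles single quotes); use your adapter's
# quoting methods for anything beyond an illustration.
def bulk_insert_sql(table, columns, rows)
  values = rows.map do |row|
    "(" + row.map { |v| "'" + v.to_s.gsub("'", "''") + "'" }.join(", ") + ")"
  end
  "INSERT INTO #{table} (#{columns.join(', ')}) VALUES #{values.join(', ')}"
end

sql = bulk_insert_sql("paintings", %w[name price], [["Mona", 100], ["O'Keeffe", 250]])
# => "INSERT INTO paintings (name, price) VALUES ('Mona', '100'), ('O''Keeffe', '250')"
```

One round trip to the database per batch is the main reason this is so much faster than per-record AR saves.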
I love your videos, however this video and even the controller walkthrough were extremely fast and jumped around too quickly. I just couldn't focus on what you were saying. It seemed like I was just trying to figure out what directory you were in the entire time and not paying attention to the code.
I must say that I have only been working with Rails for 8-10 months now.
Yeah, I didn't like the use (abuse?) of cookies there either. I would just create a relationship in the DB between users and announcements (AnnouncementsUser: user_id, announcement_id, read:boolean).
Then I'd have a method on the user object (current_user.unseen_announcements) that would return an array of all announcements that the user hasn't already marked as read. Then, the act of hiding the announcement would create a record in the DB that records the fact that that user has hidden the announcement.
Of course this isn't a solution for users without accounts.
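A sketch of what that could look like, assuming model names along the lines described above (untested, Rails-style pseudo-implementation):

```ruby
class AnnouncementsUser < ActiveRecord::Base
  belongs_to :user
  belongs_to :announcement
end

class User < ActiveRecord::Base
  has_many :announcements_users

  # announcements this user has not yet marked as read
  def unseen_announcements
    read_ids = announcements_users.where(read: true).map(&:announcement_id)
    read_ids.empty? ? Announcement.all : Announcement.where("id NOT IN (?)", read_ids)
  end
end
```

Hiding an announcement then just creates (or updates) the corresponding join row.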
I got the same problem... Can anyone help?
I built a Rails-based client portal for a client, and I initially used Roo to permit them to upload the spreadsheets full of data that the portal needed to parse. I later switched to RubyXL. I ran into severe constraints with both packages, on large spreadsheets (where "large" means more than 50,000 rows of data, in my case). Both packages performed poorly with large XLSX files, which stands to reason, given how those files are encoded. I had memory use issues and CPU spikes, severe at times. These issues were especially challenging on the production system (a Linode VPS).
My ultimate solution was to use the Gnumeric spreadsheet tool "ssconvert" to convert the XLSX files to CSV format; from there, the standard Ruby CSV module worked fine. The same Resque job that initially used Roo to parse the spreadsheets just runs "ssconvert" and produces CSVs in a temp directory, before continuing its work. This solution isn't always suitable, of course; if you need to preserve the formulae in the spreadsheets, CSV won't work. In my case, I just needed the data in the columns.
Roo and RubyXL are great for some purposes. In fact, I'd have preferred to use them, rather than my hack solution. But my wishes were thwarted by the sheer size of the spreadsheets I had to import. If you run into a problem like that, using "ssconvert" to convert the Excel spreadsheets to CSV might help.
cap deploy:cold seems to break on executing deploy:start - followed every step along the way. Puzzled on this one. Anybody else have this issue?
I keep getting weird errors in Rails 3 when using the render method. Also, I don't have access to the instance variable. I wrote up my issue on Stack Overflow, if any of you guys have the time to take a look at my code.
Is there an issue with using render in Rails 3 in js files?
Stackoverflow question
@Sebastian, it's probably too late for you, but for anyone else with this problem: it happened to me, and the reason was that I was using my user name for the account field as opposed to my AWS account number. I'm assuming that if the account number is wrong, a similar error will ensue.
I'm thinking about using HABTM instead of has_many because in my case I do have a catalog of names (contributors) linked to a "groups" table.
Each "group" may have many "contributors", and each "contributor" may belong to many "groups". These two tables are linked with a "memberships" table.
As I don't want to waste my time with views, models, controllers, routes, callbacks, etc. for the "memberships" table, I just want to be able to set many contributors belonging to my group on my group form with a select box. It is far quicker to use HABTM. (For validation, I just put a UNIQUE index on both ID columns of the memberships table (in my migration) to avoid creating the same membership multiple times. This is a protection against attackers, because a select box doesn't let you select the same item multiple times.)
I think HABTM still makes sense when you need to link one of your tables with a table (with a single column/attribute) that is just used as a catalog of values. I may be wrong, but it makes sense to me.
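The migration for that setup might look like this (table and column names assumed from the description; the unique index is the constraint mentioned above):

```ruby
class CreateContributorsGroups < ActiveRecord::Migration
  def change
    # HABTM join table: no primary key, no model
    create_table :contributors_groups, id: false do |t|
      t.integer :contributor_id, null: false
      t.integer :group_id, null: false
    end
    # one membership per (contributor, group) pair
    add_index :contributors_groups, [:contributor_id, :group_id], unique: true
  end
end
```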
As usual, this is a very handy example. However, I'm a bit confused. Why don't we need to use the "accept_nested_attributes_for" class method ?
More about Struct: Random Ruby Tricks: Struct.new by Steve Klabnik.
What do you think about using this solution mixed with Delayed Job for large CSVs?
Why would you use
over
I am using cloudmailin, and just started playing with mailgun.
Both are very easy to setup and use.
Thing I like about cloudmailin: its advanced feature allows storing email attachments directly on S3.
Thing I like about mailgun: its routes and actions.
I just removed the grouping and it works fine. Not sure what purpose the `group("order_id")` was serving; maybe I am missing something?
Implementing the roll your own method of this for a project of mine and I keep getting an error when I attempt to find all "draft" or "open" or "whatever" orders:
```
ERROR: column "orders.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT "orders".* FROM "orders" INNER JOIN "order_events" ON...
```
Any ideas?
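For what it's worth, Postgres (unlike MySQL) requires every selected column to appear in GROUP BY or inside an aggregate. Two common ways out, sketched with the table names from the error above (the join column is an assumption):

```sql
-- Option 1: group by the primary key; Postgres 9.1+ allows selecting the
-- other orders columns because they are functionally dependent on orders.id
SELECT orders.*
FROM orders
INNER JOIN order_events ON order_events.order_id = orders.id
GROUP BY orders.id;

-- Option 2: drop the grouping and deduplicate instead
SELECT DISTINCT orders.*
FROM orders
INNER JOIN order_events ON order_events.order_id = orders.id;
```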
Any luck with this?
Hi there, thanks for the RailsCast. I'm wondering if anyone has experience with using ActiveModel like in the #219 RailsCast, as it would be better if I could do this without the ActiveRecord database creation, as well as possibly moving the sessions into memcached instead. There doesn't seem to be a lot of info on this, and I'd be very appreciative of any help someone can offer!
Thank you, I had to do the same.
Hi there, thanks for the RailsCast. I'm wondering if anyone has experience using these in multi-step forms like that of the #217 RailsCast, as it would be better if I could do this without the ActiveRecord database creation, as well as possibly moving the sessions into memcached instead. There doesn't seem to be a lot of info on this, and I'd be very appreciative of any help someone can offer!
My latest revision of reading yaml'ed data in application.rb (1.9 notation):
One less mutation, and it allows specifying environment-dependent settings in both YAML files. It does still use mutation, but as far as I know merge! and symbolize_keys! are both thread-safe.
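For anyone curious, the general shape of that pattern looks like this (YAML contents inlined here for illustration; the real version reads the two files, and I use stdlib transform_keys in place of ActiveSupport's symbolize_keys!):

```ruby
require "yaml"

# Shared defaults plus environment-specific overrides, merged without
# mutating either source hash, then keys symbolized.
defaults = YAML.safe_load("host: example.com\nport: 80\n")
env_cfg  = YAML.safe_load("port: 3000\n")

settings = defaults.merge(env_cfg).transform_keys(&:to_sym)
# => { host: "example.com", port: 3000 }
```

Using non-destructive merge leaves both source hashes untouched, which is what makes the pattern easy to reason about under threading.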
I keep getting a `trinidad_init_service: not found` error after the gem installs. Has anyone encountered this issue?