
vickash's Profile

GitHub User: vickash

Site: www.vickash.com

Comments by vickash


As of Bootstrap 2.2.1, the plugins listen on document instead of body, so that modification shouldn't be necessary. You'll still need to make your other $(document).ready code run on page:load as well, though.


Probably because the carrierwave_direct gem (and Amazon S3 itself) doesn't support uploading multiple files in a single POST request. You need some kind of client-side scripting to split the files into individual POST requests and insert the other form fields (key, policy, signature, etc.) on the fly each time.

The final example still uses Carrierwave, though, just not for the upload. The jQuery plugin uploads the file straight to S3. Then, in the done callback, the URL of the uploaded file gets "calculated" and POSTed to the paintings#create action. S3 also returns the URL in a response header (as Location, I think), so you could probably get it from there if you wanted.
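For illustration, that "calculation" is just joining the bucket's endpoint with the key the upload form posted. Here's a minimal Ruby sketch of the idea (the helper name and URL scheme are my own assumptions; in the actual example this happens client-side in the jQuery callback):

```ruby
# Hypothetical helper: rebuild the uploaded file's URL from the bucket name
# and the key the direct-upload form sent to S3.
def uploaded_file_url(bucket, key)
  "https://#{bucket}.s3.amazonaws.com/#{key}"
end

uploaded_file_url('raw-uploads', 'uploads/abc123/painting.png')
# => "https://raw-uploads.s3.amazonaws.com/uploads/abc123/painting.png"
```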

In the paintings#create action, things start to look normal again. Either in the app or in a worker process, the record gets created and Carrierwave copies the raw file from the S3 bucket to the location your Carrierwave settings specify, then runs any image processing you have set up.

BTW, having a separate bucket to hold the raw uploads from the jQuery plugin is probably a good idea if you're not persisting the record before letting the user upload. Otherwise you can end up with files that never get associated with anything in the database.

Let Carrierwave copy and process each file that gets associated with a record. Carrierwave won't delete the raw upload file, but you can use the aws-sdk gem to delete it once Carrierwave finishes. Finally, set up a task to periodically clear out old files in the "raw upload bucket", and that should take care of cleanup.

For that last part, you might want to prepend a timestamp to the SecureRandom portion of the key, then use the with_prefix method in aws-sdk, so you can do something like raw_uploads_bucket.objects.with_prefix('20120926').delete_all to delete all the raw uploads from yesterday. Might be a good idea to check that all of those workers have finished running, too.
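A minimal sketch of that key scheme (the helper and bucket names are my own; the cleanup in the comment uses the aws-sdk v1 collection API and assumes your credentials are already configured):

```ruby
require 'securerandom'

# Hypothetical helper: prefix the random key with a UTC date stamp so raw
# uploads can later be deleted in bulk by date prefix.
def raw_upload_key(time = Time.now.utc)
  "#{time.strftime('%Y%m%d')}/#{SecureRandom.hex(16)}"
end

raw_upload_key(Time.utc(2012, 9, 27))
# => e.g. "20120927/9f86d081884c7d659a2feaa0c55ad015"

# Periodic cleanup sketch (assumed bucket name, aws-sdk v1):
#   bucket = AWS::S3.new.buckets['raw-uploads']
#   bucket.objects.with_prefix('20120926').delete_all
```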

If it works for your UX, though, persisting the record before letting the user upload makes cleanup a bit simpler.