Janko Marohnić
You can use the upload_options plugin to pass the :acl option to Shrine::Storage::S3#upload:
plugin :upload_options, store: ->(io, context) do
  if context[:version] && context[:version] != :original
    {acl: "public-read"}
  else
    {acl: "private"}
  end
end
Can you post the question to the Google group? Make sure to include all your Shrine-related code. Did you remember to load the direct_upload plugin with Shrine.plugin :direct_upload?
In the context of direct S3 uploads, if Shrine used only one S3 storage, an attacker could upload as many files as they wanted, because a user can "give up" on attaching an uploaded file by never clicking the "Save" button; this creates orphan files that are not easy to detect and automatically delete. By separating temporary and permanent S3 storage, attached and unattached files live in separate locations, which allows you to easily set a rule for clearing the unattached files, and it ensures that the permanent storage never contains orphan (unattached) files.
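For illustration, this is roughly how the two S3 storages are typically set up (the bucket name and credentials here are placeholders), with separate prefixes so that a cleanup rule can simply target everything under the temporary prefix:

require "shrine"
require "shrine/storage/s3"

s3_options = {
  bucket:            "my-bucket",
  region:            "us-east-1",
  access_key_id:     "...",
  secret_access_key: "...",
}

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options), # temporary storage
  store: Shrine::Storage::S3.new(prefix: "store", **s3_options), # permanent storage
}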
You're likely having a network problem, as the error says. The AWS SDK will actually retry networking errors 2 times before failing, so it seems to be something more permanent.
This is the list of exceptions that the AWS SDK will re-raise as Seahorse::Client::NetworkingError. You can find out which of those errors was actually raised by rescuing the NetworkingError and calling `#cause` on it if you're on Ruby 2.1+, or `#original_error` on lower versions.
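For example, something like this (the storage key and upload location are just illustrative) lets you get at the underlying error:

begin
  file = File.open("/path/to/some/file") # any IO object will do
  Shrine.storages[:store].upload(file, "some/location")
rescue Seahorse::Client::NetworkingError => error
  underlying = error.cause            # on Ruby 2.1+
  # underlying = error.original_error # on older Ruby versions
  puts underlying.inspect
end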
Additionally, you can initialize Shrine::Storage::S3 with `http_wire_trace: true` to see what requests the AWS SDK is making and what responses it is receiving, and see if you can spot the cause of the exception.
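For example (the other options stand in for your existing configuration):

Shrine::Storage::S3.new(
  http_wire_trace: true, # logs the raw HTTP traffic between the AWS SDK and S3
  bucket:            "my-bucket",
  region:            "us-east-1",
  access_key_id:     "...",
  secret_access_key: "...",
)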
If you look at the Shrine::Storage::S3 documentation, you'll notice that you can set :upload_options when initializing the storage. This hash automatically gets forwarded to Aws::S3::Object#put on each upload, and that method supports the :server_side_encryption parameter, as well as any other :sse_* parameters you need.
Shrine::Storage::S3.new(
  upload_options: {server_side_encryption: "AES256", ...},
  **options
)
You can choose how you generate the location in your uploader:
class ImageUploader < Shrine
  def generate_location(io, context)
    "images/#{super}"
  end
end

class DocumentUploader < Shrine
  def generate_location(io, context)
    "documents/#{super}"
  end
end
Feel free to post any general Shrine questions on the Shrine Google group ;)
See this Google group thread. The question was for cropping on promotion, but you can easily adapt it to have it performed on recache (probably just changing `process(:crop)` to `process(:recache)`).
This is the term in Shrine for when a cached file is re-uploaded to permanent storage (which happens after the record is saved). Btw, feel free to ask any questions that aren't related to this screencast on the Shrine Google group.
You can use the recache plugin, which adds another processing step between caching and promoting, triggered before the record is saved. So you can just download the cached file there, crop it, and upload it to cache storage again.
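As a rough sketch (assuming the crop coordinates arrive on the record via hypothetical crop_x/crop_y/crop_w/crop_h attributes, and that you're using MiniMagick), it could look something like this:

require "mini_magick"

class ImageUploader < Shrine
  plugin :processing
  plugin :recache

  # Runs before the record is saved, on the file already uploaded to cache.
  process(:recache) do |io, context|
    record   = context[:record] # crop_x/crop_y/crop_w/crop_h are hypothetical attributes
    original = io.download      # Tempfile with the cached upload

    image = MiniMagick::Image.new(original.path)
    image.crop "#{record.crop_w}x#{record.crop_h}+#{record.crop_x}+#{record.crop_y}"

    File.open(original.path, "rb") # the cropped file gets uploaded back to cache storage
  end
end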
The JavaScript looks exactly the same if you're using the versions plugin, because direct upload is just another way of caching the file (you're uploading it to temporary storage directly from the client rather than having the server upload it), and processing versions happens on promotion (when the cached file is moved to permanent storage).
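For reference, a typical versions setup looks something like this (the thumbnail dimensions and the use of the image_processing gem are just an example); none of it changes when you switch to direct uploads:

require "image_processing/mini_magick"

class ImageUploader < Shrine
  plugin :processing
  plugin :versions

  # Runs on promotion, when the cached file (whether it was uploaded through
  # the app or directly to S3) is moved to permanent storage.
  process(:store) do |io, context|
    original = io.download
    thumb    = ImageProcessing::MiniMagick.source(original).resize_to_limit!(300, 300)

    { original: original, thumb: thumb }
  end
end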
You can just call the `fileupload()` function on the new file field after it is added to the DOM.
After the file is uploaded, you can write the JSON data to the hidden attachment field (which has the same "name" attribute as the file field; it's mentioned in the "Quick Start" section of the Shrine README). Then when you submit the form, Shrine will attach the file from the JSON data. So the idea is that you can send either a new file (a multipart request via the file field) or an already uploaded file (JSON data via the hidden field) as the attachment attribute.
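In other words, the hidden field ends up holding the cached file's data, which the attachment attribute accepts just like a raw file (the model and attribute names here are illustrative):

# JSON data describing the file that was already uploaded to temporary storage
cached_file = '{"id":"df1c0a.jpg","storage":"cache","metadata":{"size":1024,"filename":"nature.jpg","mime_type":"image/jpeg"}}'

photo = Photo.new
photo.image = cached_file # Shrine attaches the already-uploaded (cached) file
photo.save                # on save it gets promoted to permanent storage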
The `/images/upload/cache/presign` endpoint is instantaneous; it doesn't make any HTTP requests or anything.
I think the best behaviour is to send the data for each file to the Rails app as soon as it's uploaded to S3.
That gives a great user experience, because even if the user terminates the upload at some point, the files that were uploaded before stay uploaded. That's why e.g. the Flickr upload interface really sucks, because it's all-or-nothing - if there is an error or you have to terminate a multiple file upload midway, nothing stays uploaded.
It is also the most performant approach, because the server can already start processing each file as soon as it's uploaded to S3, instead of holding off until all the files are sent at once.
The shrine-rails-example app already demonstrates this flow, so you can draw your inspiration from there.
Yes, you can just add the "multiple" HTML attribute to the file field, which enables it to accept multiple files:
<input type="file" name="file" multiple>
The files will still be uploaded in individual requests (there is no performance gain in sending multiple files in a single request anyway), but the uploads will happen in parallel. The shrine-rails-example repository demonstrates this flow.
Shrine has a distinction between temporary and permanent storage, and you can use any kind of storage for both. Direct S3 uploads just mean that you're using S3 for caching files. So if you're already permanently storing your files on S3 with CarrierWave, adding direct uploads is purely additive; you wouldn't need to re-upload any of your existing files (they are already permanently stored).
I listed here the main benefits of using direct S3 uploads. Even if you would ultimately like to store your files on the filesystem, with Shrine you can still upload files directly to S3, and they will then be downloaded to the filesystem on the server side. This reduces the load on your server, because it doesn't need to accept the file uploads themselves. That's probably a rarer use case, but I just wanted to illustrate what direct S3 uploads mean.
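Concretely, that setup would just mean using S3 as the temporary storage and FileSystem as the permanent one (the paths and credentials here are placeholders):

require "shrine"
require "shrine/storage/s3"
require "shrine/storage/file_system"

Shrine.storages = {
  # files are uploaded here directly from the browser
  cache: Shrine::Storage::S3.new(prefix: "cache", bucket: "my-bucket", region: "us-east-1",
                                 access_key_id: "...", secret_access_key: "..."),
  # on promotion they get downloaded to the server's filesystem
  store: Shrine::Storage::FileSystem.new("public", prefix: "uploads"),
}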
The main difference is that with CarrierWaveDirect you generate an HTML form with fields for the S3 request parameters while rendering the page, whereas with Shrine the JavaScript requests the S3 request parameters from your app dynamically (in JSON format) when a file is attached.
Among other things, this allows you to do multiple file uploads with Shrine, because an S3 presign can be requested for each selected file. With CarrierWaveDirect multiple uploads aren't really possible, because it can only generate HTML forms; it doesn't enable you to return the S3 request parameters in JSON format so that the JavaScript can make the AJAX S3 request(s) itself.
This is a bug related to the configuration of your aws-sdk credentials. I found an issue on the aws-sdk repo that might help. Otherwise it would be good to open an issue on the aws-sdk repo.
See Shrine for CarrierWave Users for a brief introduction. Other differences include proper direct upload and backgrounding support (see the motivational blog post for an explanation of the issues with CarrierWave). There are many more differences/advantages; I've started writing about them in more depth on my blog :)