Activity
Posted in Understand Scope Returns Discussion
Elaborating further on this theme of composable scopes, which is handy both for tests and for reporting, here's a version leaning on the predicate builder and a default parameter:
scope :future, ->(after = Time.current) { where.not(start_time: ..after) }
There’s also an argument for using a CurrentAttributes value for the time. Application code that depends directly on Time.current can fall into the trap of assuming it returns the same value throughout a single request’s execution, when of course it may not, leading to some really subtle intermittent bugs, especially at the cusp of midnight.
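A sketch of that, assuming ActiveSupport::CurrentAttributes (the attribute name is illustrative):

class Current < ActiveSupport::CurrentAttributes
  attribute :request_time
end

# Pin the value once per request, e.g. in ApplicationController:
# before_action { Current.request_time = Time.current }

# The scope then sees one consistent "now" for the whole request,
# falling back to Time.current outside the request cycle:
scope :future, ->(after = Current.request_time || Time.current) { where.not(start_time: ..after) }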
This seems like a prime candidate to become a Stimulus controller.
What’s the story with Shrine if an application is also using Action Text?
I usually anticipate having more than one such calculator, because pricing is very often a classic case for the Strategy pattern. This immediately informs the granularity: each strategy has a name, and the methods are the things that might vary from one nameable strategy to the next.
A typical scenario is a tax calculator. A given invoice has a specific tax treatment depending on the customer or the location. The calculator object is instantiated for the invoice and embodies all the expertise needed to answer tax-related questions: the rate per item, special rates, due dates, the list of available rates, and whatever other answers are required.
That is one responsibility: it represents the domain knowledge of a specific tax treatment.
One-method-per-class is a naive rule for SRP that takes a mechanical, not domain-centric, view of software, and I disregard any such advice. It's a slippery slope from there into service objects and other antipatterns.
Chris's objects in the screencast naturally gravitate to one method, not due to the SRP, but because he's also implementing the command pattern for which the fundamental method is usually #perform. Notwithstanding which, command pattern implementations often have a bunch of other methods because they may also participate in a framework with logging, transactions, progress reporting, undo etc.
NB: for calculators, I'd suggest at most one instantiation parameter, usually a domain entity, and no internal state except for that entity object and maybe some value caching, because responses from calculators like these should be consistent and nullipotent, or at least idempotent.
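To make that concrete, a sketch of the shape I mean (names and rates entirely illustrative, not real tax rules):

class TaxCalculator
  STANDARD_RATE = 0.20
  REDUCED_RATE = 0.05

  # One instantiation parameter: the domain entity
  def initialize(invoice)
    @invoice = invoice
  end

  def rate_for(item)
    item.essential? ? REDUCED_RATE : STANDARD_RATE
  end

  def available_rates
    @available_rates ||= [STANDARD_RATE, REDUCED_RATE] # value caching only
  end

  def due_date
    @invoice.issued_on + 30 # e.g. net-30 terms under this treatment
  end

  def total_tax
    @invoice.items.sum { |item| item.net_amount * rate_for(item) }
  end
end

A different tax treatment is then simply another strategy class with the same interface, selected per invoice.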
Warning: in a load-balanced production configuration, this setup can result in intermittently broken assets. The problem occurs when traffic arrives during a reload that includes fresh assets: if care isn't taken, a browser can request assets that aren't yet available from every backend.
The CDN may then cache a 404 for that asset, with horrible results: a typical symptom is missing CSS and JS for all subsequent visitors. Recovery from this breakage can be painfully slow, because cache invalidation in CloudFront is a) fiddly and b) takes minutes or hours to complete. You can fine-tune CloudFront not to cache 404s, but serving them to any user at all is simply unacceptable in production.
To avoid all this, you must ensure that fully precompiled assets are available from all CDN origins prior to any reload. There are several ways to achieve that, but I'll generally configure a sync to S3 in the before_restart hook; the CDN then uses the bucket as its origin. You could also use a CI/CD pipeline, blue/green deploys, a shared public/ folder, or carefully choreographed rolling reloads.
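For example, a sketch of the S3 sync, assuming Capistrano and the aws CLI on the host (task name and bucket are illustrative):

namespace :deploy do
  desc "Push freshly precompiled assets to the CDN origin bucket"
  task :sync_assets do
    on roles(:web) do
      within release_path do
        # --size-only skips unchanged, fingerprinted files
        execute :aws, "s3", "sync", "public/assets", "s3://my-assets-bucket/assets", "--size-only"
      end
    end
  end
end

before "deploy:restart", "deploy:sync_assets"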
Posted in User Onboarding Progress Bar Discussion
Isn't it seriously dangerous to be calling update inside a read method?
Posted in Liskov Substitution Principle Discussion
You'll need to throw an exception for penguins already located at the South Pole.
How about taking an OO approach, i.e. changing the type to DeletedComment, e.g. using STI? This eliminates the code smell of conditionals in a partial.
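A sketch of how that might look (class names illustrative):

class Comment < ApplicationRecord
end

class DeletedComment < Comment
end

# "Deleting" becomes a type change rather than a destroy:
comment.becomes!(DeletedComment).save!

# Rendering a mixed collection then picks the partial per type,
# with no conditionals in the views:
# <%= render @comments %>
# => comments/_comment.html.erb or deleted_comments/_deleted_comment.html.erb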
If the attribute is exposed via a method (as it would be in the common case for Rails models) then this is how many of us would do it:
my_array.sort_by { |item| item.attribute }
or as shorthand for the same:
my_array.sort_by(&:attribute) #=> [...sorted array...]
However, if you'd rather that objects knew how to sort themselves, therefore giving them the latitude to vary later, then you can push the sort method down into the object, thus:
class MyModel
  # Compare this instance with another by the attribute of interest
  def attribute_sort(other)
    attribute <=> other.attribute
  end
end
# and then ...
my_array.sort(&:attribute_sort) #=> [...sorted array...]
Although arguably more object-oriented, this may be slower if attribute is an expensively computed value, because sort re-computes the sorting key on every comparison, whereas sort_by computes it once per element.
Note that if your array is actually an ActiveRecord relation then you may be much better off using the order method to push the work into the database, especially if the result set is large:
my_relation.order(:attribute)
and see https://guides.rubyonrails.org/active_record_querying.html#ordering.
2. In general, persisted entity names should be nouns, not verbs. This is true even when the persisted entity models a process or action; in that case, use the noun that names the action, not a verb that describes it.
3. A sale is a commercial act that is documented by an invoice. The entity you're recording here, though, is an increment or decrement in credits. Those are two separate concepts and should be modelled accordingly. Your addition or subtraction of credits should be linked to the reason, not conflated with it. In other words you don't need the "Add_free_credits" model or indeed any subclass.
Note also: underscores in class names are going to be extremely confusing for developers and framework alike. Don't do that. Rails especially is going to get very, very confused by them.
The reason for each increase or decrease should be a separate and probably polymorphic belongs_to, linking to a model such as "Sale" or "Freebie" or "Usage" that explains the movement separately from the record of the movement itself. That way you don't need different types of movement.
This is single-entry book-keeping in a nutshell, by the way. So let's call each "transaction" an AccountEntry.
I also believe the balance computation has no business being in your user class. That should be in an Account object.
I haven't tested this even for syntax errors let alone function but I'd be looking for something like:
class AccountEntry < ApplicationRecord
  belongs_to :user
  belongs_to :reason, optional: true, polymorphic: true

  validates_numericality_of :change
end

Account = Struct.new(:user) do
  def balance
    user.account_entries.sum(:change)
  end
end

class User < ApplicationRecord
  has_many :account_entries
  has_many :sales

  def account
    Account.new(self)
  end
end

class Sale < ApplicationRecord
  belongs_to :user
  has_one :account_entry, required: true, as: :reason, dependent: :nullify

  before_create :build_associations

  validates_numericality_of :price, :credits, greater_than: 0

  private

  def build_associations
    account_entry || build_account_entry(user: user, change: credits)
  end
end
The idea being that you can then write:
current_user.sales.create!(price: 2000, credits: 20)
Balance: <%= current_user.account.balance %>
Posted in Liskov Substitution Principle Discussion
Hard to believe you said all that about birds and types and didn't mention ducks ;)
<table> <row-component></row-component> <row-component></row-component> </table>
The HTML parser doesn't allow unknown elements as table content, so it hoists them out ("foster parenting"), and what the browser actually builds is:
<row-component></row-component> <row-component></row-component> <table></table>
Your second mistake was thinking that the application mattered, when clearly it just serves as a vehicle for illustrating the SRP.
Your final mistake is in looking at the result and seeing Rails, when one of the major outcomes by the end of the episode is that the provisioning code is now decoupled from Rails.
Some people might misread what happened here and come away with the (unfortunately common) misunderstanding that SRP means "one method per class".
What Chris did is refactor a set of domain-specific behaviours from being collected in one class, to each being classes in their own right. The next step might be to compose them back together as necessary, without over-engineering the infrastructure that does so.
The example I personally like to give is extracting business rules implemented as "just code" into Rule classes, and then delivering new and useful outcomes by re-ordering rule objects or using subclasses and variants.
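A minimal sketch of that shape (all names and rules here are illustrative):

class Rule
  def applies?(order)
    true
  end

  def apply(order)
    order
  end
end

class BulkDiscountRule < Rule
  def applies?(order)
    order.quantity >= 100
  end

  def apply(order)
    order.with_discount(0.10)
  end
end

class LoyaltyDiscountRule < Rule
  def applies?(order)
    order.customer.loyal?
  end

  def apply(order)
    order.with_discount(0.05)
  end
end

# New outcomes come from how the rule objects are selected and ordered:
PRICING_RULES = [BulkDiscountRule.new, LoyaltyDiscountRule.new]

def price(order)
  PRICING_RULES.select { |rule| rule.applies?(order) }
               .reduce(order) { |o, rule| rule.apply(o) }
end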
For the Rails programmer this is hopefully all in contrast to using Concerns which are basically just a way to chop up fat models into separate files and don't lead to new and interesting runtime structures.
All of you are attempting to re-implement book-keeping from first principles, when what you need is Plutus: https://rubygems.org/gems/plutus
Polymorphic types don't fit the domain model you described.
And this might be much simpler than you think. It might just be a column on your Profile:
class Profile < ApplicationRecord
belongs_to :business_location, class_name: "Location"
has_many :locales
has_many :delivery_locations, through: :locales, source: :location # assuming Locale belongs_to :location
end
I'm not a fan of cron because it's fragile when worker nodes die. For years now I've been using a delayed_job that runs once a minute. Upon waking up, it has two tasks:
- Schedule for immediate execution any tasks whose time has come, and
- Reschedule itself for the start of the next minute.
This was originally built for timed workflow transitions, but task (1) also covers scheduling jobs that handle other one-shot events, such as sending notifications that are due but not yet marked as sent. This scheme also avoids having to perform cleanup when the schedule changes.
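In outline, assuming delayed_job's Delayed::Job.enqueue API (the job and model names are illustrative):

class MinuteScheduler
  def perform
    # 1. Kick off any one-shot work whose time has come, e.g. overdue notifications:
    Notification.due.unsent.find_each { |n| n.delay.deliver! }

    # 2. Reschedule myself for the start of the next minute:
    Delayed::Job.enqueue(self.class.new, run_at: 1.minute.from_now.beginning_of_minute)
  end
end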
How about ActiveStorage?
For things like this I use a separate controller with the user reference passed in query parameters as a Signed Global ID; e.g. in the mailer:
<%= link_to 'Unsubscribe', unsubscribe_url(u: @user.to_sgid_param(for: :unsubscribe_user)) %>
then in my UnsubscribeController actions, code like this:
user = SignedGlobalID.find(params[:u], for: :unsubscribe_user)
if user&.subscribed?
  user.update(subscribed: false)
  # etc
end
This controller's actions are not otherwise authenticated, i.e. there's no authorization via @user in before_actions and no checking for a valid session.
You should not read or modify any session data in the UnsubscribeController nor reveal any information whatsoever in its views. My views for this say "You were unsubscribed" on success, that's all. No names or addresses shown, nothing. I also prefer a plain layout. Here's why: you can trust the SGID to have come from you (otherwise the find method returns nil and you'll take them to an error page), but you can't trust that it was used by someone honest, because it was sent via email which is notoriously insecure. So don't leak information in the response.
About Signed Global IDs
Signed Global IDs (aka SGIDs) are little-known, but they come with Rails and are built into Active Record models. They use your configured secret key to build a tamper-resistant object reference that can be handed to third parties and later resolved back into your models. See https://github.com/rails/globalid for more.
I use them for password reset links too, since they support expiry times and purpose restriction (which you should always use). They do not support replay protection, however; if you need that, you'll need additional logic, so for password reset links I consume a single-use token as well.
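For example (the purpose name here is illustrative):

# Issue an expiring, purpose-scoped SGID for the emailed link:
sgid = user.to_sgid(expires_in: 2.hours, for: :password_reset)
# ... mail a link containing sgid.to_param ...

# Resolve it later; returns nil if tampered with, expired, or used for the wrong purpose:
user = GlobalID::Locator.locate_signed(params[:token], for: :password_reset)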
(I also use them for more arcane references to data objects by third-parties, e.g. in the metadata of Google Drive push notification channels, but that's a whole other story.)
Posted in What could be the best way to deal with addresses in terms of database structure and performance?
I have quite a strong opinion when it comes to data modelling. I'll present it here but keep in mind that others may have different views - I'm just one commentator!
To me it looks both overcomplicated and unnecessary. Database query performance should be the last thing on your mind when writing business applications: design for a good user experience first and optimise later. All that clicking sounds like a recipe for user frustration and broken input. A high-performing system that no-one uses, or that everyone dislikes, is not a good outcome.
Addresses are extremely human data structures. They map badly into relational structures. Any one location can have many different representations depending on the individual and the context and the timeframe. Some countries have very unusual ways of representing addresses, so any assumptions you make now about relational data structures are likely to be wrong in future. The task of maintaining and curating a database of all the states, cities and provinces of 20 countries sounds impossible. Even if you have a full-time master data management team, I can guarantee your dataset will be incorrect from day one onward.
I suggest using an address validation service instead to normalize and pinpoint addresses. They do the extremely complex and time-consuming job of geographical master data management, so that you don't have to. If you then need to search by region, use GIS tools designed for that purpose e.g. PostGIS.
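As one illustration of the kind of thing I mean, here's the geocoder gem (my example; one option among many):

# Gemfile: gem "geocoder"
result = Geocoder.search("10 Downing St, London").first
if result
  result.coordinates  # => [latitude, longitude]
  result.country_code # => "gb"
end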