Using model constants for project sanity
On one of our larger client projects (approx. 160 models and growing…) we have a specific model that we refer to quite a bit throughout our code. This model contains fewer than 10 records, but each of them sits on top of an insanely large and complex set of data. Each record represents one of the regions that our client does business in.
For example… we have Australia, United Kingdom, Canada, United States, and so forth. Each of these regional divisions has its own company code, and the codes are barely distinguishable from one another. They make sense to our client, but since we don’t interact with them on a regular basis, we constantly have to look them up again to make sure we’re dealing with the right record.
I wanted to share something that we did to make it easier for our team to work with these codes, which we should have thought of long ago.
Let’s take the following model, Division. We only have about 10 records in our database, but we have conditional code throughout the site that depends upon which division specific actions are being triggered within. Each division has various business logic that we have to maintain.
Prior to our change, we’d come across a lot of code like:
# For all divisions except Canada, invoices are sent via email
# In Canada, invoices are sent via XML to a 3rd-party service
def process_invoices_for(division)
  if division.code == 'XIUHR12'
    # trigger method to send invoices to 3rd party service
    # ...
  else
    # batch up invoices and send via email
    # ...
  end
end
An alternative that we’d also find ourselves using was:
if division.name == 'Canada'
Hell, I think I’ve even seen if division.id == 2
somewhere in the code before. To be fair to ourselves, we did inherit this project a few years ago. ;-)
Throughout the code base, you’ll find business rules like this. Our developers all agreed that this was far from friendly or efficient and, worst of all, it was extremely error-prone. There have been a few incidents where we read the code wrong or got the codes confused with one another. We were lacking a convention that we could all rely on.
So, we decided to implement the following change.
Model Constants
You might already use constants in your Ruby on Rails application. It’s not uncommon to add a few to config/environment.rb and call it a day, but you might also consider scoping them within your models, which makes them much easier to maintain as well.
In our scenario, we decided to add the following constants to our Division model.
class Division < ActiveRecord::Base
  AFRICA    = self.find_by_code('XYU238')
  ASIA      = self.find_by_code('XIUHR73')
  AUSTRALIA = self.find_by_code('XIUHR152')
  CANADA    = self.find_by_code('XIUHR12')
  USA       = self.find_by_code('XIUHR389')
  # etc..
end
What this will do is load up each of these constants with the corresponding object. It’s basically the equivalent of us doing:
if division == Division.find_by_code('XIUHR389')
But with this approach, we can stop worrying about the codes and use the division names that we use when talking with our client. Our client usually approaches us with, “In Australia, we need to do X, Y, Z differently than we do in the other divisions due to new government regulations.”
if division == Division::CANADA
  # ...
end

case division
when Division::AFRICA
  # ...
when Division::AUSTRALIA
  # ...
end
We are finding this to be much easier to read and maintain. When we’re dealing with a lot of complex business logic in the application, little changes like this can make a big difference.
If you have any alternative solutions, we’d love to hear them. Until then, we’ve been quite pleased with this approach. Perhaps you’ll find some value in it as well.
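One variation worth considering: because the find_by_code calls in the class body run when the class is first loaded, the constants hit the database at load time (which can bite during migrations or when a record changes). A sketch of a lazier alternative is below, written in plain Ruby with a stand-in Division class and hypothetical codes so the idea stands alone; it is not our production model.

```ruby
# Stand-in for the ActiveRecord model, purely to illustrate the idea:
# look records up lazily through memoized class methods instead of
# load-time constants.
class Division
  # Hypothetical code-to-name data standing in for the database table
  RECORDS = {
    'XIUHR12'  => 'Canada',
    'XIUHR389' => 'USA'
  }.freeze

  attr_reader :code, :name

  def initialize(code, name)
    @code = code
    @name = name
  end

  # Memoized lookup: the "query" runs on first use, not at class load,
  # and every later call returns the same cached object.
  def self.find_by_code(code)
    @cache ||= {}
    @cache[code] ||= new(code, RECORDS.fetch(code))
  end

  def self.canada
    find_by_code('XIUHR12')
  end

  def self.usa
    find_by_code('XIUHR389')
  end
end

puts Division.canada.name                    # => Canada
puts Division.canada.equal?(Division.canada) # => true (same cached object)
```

Comparisons like division == Division.canada then read just as clearly as the constant form, without the load-time lookups.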
Master/Slave Databases with Ruby on Rails
Not terribly long ago, I announced Active Delegate, a really lightweight plugin that I developed to allow models to talk to multiple databases for specific methods. The plugin worked great for really simple situations, like individual models, but when it came time to test with associations, it fell apart. I haven’t had a chance to work on any updates and knew that it was going to take more work to get it going.
Earlier this week, we helped one of our bigger clients launch their new web site1. For the deployment, we needed to send all writes to a master database and all reads to slaves (the initial deployment is talking to almost 10 slaves spread around the globe!). We needed something we could integrate quickly, so we decided to ditch Active Delegate for the time being and began looking at our options.
I spoke with Rick Olson2 and he pointed me to a new plugin that he hasn’t really released yet. So, I’m going to do him a favor and announce it for him. Of course… I got his permission first… ;-)
Announcing Masochism!
Masochism3 is a new plugin for Ruby on Rails that allows you to delegate all writes to a master database and reads to a slave database. The configuration process is just a few lines in your environment file and the plugin takes care of the rest.
Installing Masochism
With piston, you can import Masochism with:
$ cd vendor/plugins
$ piston import http://ar-code.svn.engineyard.com/plugins/masochism/
- To learn more about piston, read Every Second Counts with a Piston in your trunk
You can also install it the old-fashioned way:
$ ./script/plugin install -x http://ar-code.svn.engineyard.com/plugins/masochism/
Configuring Masochism
The first thing that you’ll need to do is add another database connection in config/database.yml for master_database. By default, Masochism expects you to have a production database, which will be the read-only/slave database. The master_database entry will hold the connection details for your (you guessed it…) master database.
# config/database.yml
production:
  database: masochism_slave_database
  adapter: postgresql
  host: slavedb1.hostname.tld
  ...

master_database:
  database: masochism_master_database
  adapter: postgresql
  host: masterdb.hostname.tld
  ...
The idea here is that replication will be handled elsewhere, and your application can reap the benefits of talking to the slave database for all of its read-only operations, letting the master database(s) spend their time writing data.
The next step is to set this up in your environment file. In our scenario, this was config/environments/production.rb
.
# Add this to config/environments/production.rb
config.after_initialize do
  ActiveReload::ConnectionProxy.setup!
end
Voila, you should be good to go now. As I mentioned, we’ve only been using this for the past week, and we’ve had to address a few problems that the initial version of the plugin didn’t handle. One of our developers, Andy Delcambre, just posted an article showing a problem we ran into using ActiveRecord observers with Masochism, which we’re sending over a patch for now.
As we continue to monitor how this solution works, we’ll report any findings on our blog. In the meantime, I’d be interested in knowing what you’re using to solve this problem. :-)
1 Contiki, a cool travel company we’re working with
2 Rick just moved to Portland… welcome to Stumptown!
Multiple Database Connections in Ruby on Rails
We have a client that already has some database replication going on in their deployment and needed most of their Ruby on Rails application to pull from slave servers, while the few writes would go to the master, which would then replicate out to the slaves.
So, I was able to quickly extend ActiveRecord with just two methods to achieve this. Anyhow, earlier today, someone in #caboose asked if there were any solutions to this, and it prompted me to finally package this up into a quick and dirty Rails plugin.
Introducing… Active Delegate!
To install, do the following:
cd vendor/plugins;
piston import http://svn.planetargon.org/rails/plugins/active_delegate
Next, you’ll need to create another database entry in your database.yml.
login: &login
  adapter: postgresql
  host: localhost
  port: 5432

development:
  database: rubyurl_development
  <<: *login

test:
  database: rubyurl_test
  <<: *login

production:
  database: rubyurl_servant
  <<: *login

# NOTICE THE NEXT ENTRY/KEY
master_database:
  database: rubyurl_master
  <<: *login
At this point, your Rails application won’t talk to the master_database, because nothing is being told to connect to it. So, the current solution with Active Delegate is to create an ActiveRecord model that will act as a connection handler.
# app/models/master_database.rb
class MasterDatabase < ActiveRecord::Base
  handles_connection_for :master_database # <-- this matches the key from our database.yml
end
Now, in the model(s) that we want to talk to this database, we’ll add the following.
# app/models/animal.rb
class Animal < ActiveRecord::Base
  delegates_connection_to :master_database, :on => [:create, :save, :destroy]
end
Now, when your application performs a create, save, or destroy, it’ll talk to the master database, and your find calls will retrieve data from your servant database.
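To make the routing idea concrete, here’s a plain-Ruby sketch of the behavior described above. The FakeConnection class and SQL strings are purely illustrative stand-ins, not the plugin’s actual internals:

```ruby
# Stand-in for a database connection; it just reports which
# connection handled a given statement.
class FakeConnection
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def execute(sql)
    # A real adapter would hit the database here
    "#{name}: #{sql}"
  end
end

MASTER  = FakeConnection.new('master')
SERVANT = FakeConnection.new('servant')

class Animal
  # Mirrors :on => [:create, :save, :destroy] in the plugin's DSL
  WRITE_METHODS = [:create, :save, :destroy].freeze

  # Writes are routed to the master connection; all other
  # operations fall through to the servant (read-only) connection.
  def connection_for(method)
    WRITE_METHODS.include?(method) ? MASTER : SERVANT
  end

  def save
    connection_for(:save).execute('INSERT INTO animals ...')
  end

  def find
    connection_for(:find).execute('SELECT * FROM animals')
  end
end

animal = Animal.new
puts animal.save  # => master: INSERT INTO animals ...
puts animal.find  # => servant: SELECT * FROM animals
```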
It’s late on a Friday afternoon and I felt compelled to toss this up for everyone. I think that this could be improved quite a bit, but it’s working great for the original problem that needed to be solved.
If you have feedback and/or bugs, please send us tickets.
Q&A: ActiveRecord Observers and You
Yesterday, I wrote a short post titled, Observers Big and Small, about using Observers in your Rails applications.
The following questions were raised in the comments.
When should I use an Observer?
Eric Allam asks…
“Why not just use ActiveRecord callback hooks instead of Observers? Are Observers more powerful or is it just a matter of preference?”
Eric, this is an excellent question. I’d say that a majority of the time, using the ActiveRecord callbacks in your models is going to work for your situation. However, there are times when you want the same methods to be called from the callbacks of several models. For example, let’s take a recent problem that we used an observer to solve.
Graeme is working on integrating Ferret into a project that we’re developing for a client. With Ferret, we can index, and later search through, content from several objects in a format that makes sense for our implementation goals. Each time an object is created or updated, we have to update our Ferret indexes to reflect the changes. The most obvious location to call our indexing methods is in each model’s callbacks, but this violates the DRY[1] principle. So, we created an Observer, which observes each of the models that need these methods to be called. In fact, as far as a model is concerned, the fact that we’re indexing some of its data is none of its business. We only want our models to be concerned with what they’re designed to be concerned about. We may opt to change our indexing solution in the future, and we’d just need to rethink that at the Observer level without changing anything about the business logic in our models.
This is the sort of scenario when using an Observer makes great sense in your application.
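To make that scenario concrete, here’s a stripped-down, plain-Ruby sketch of the DRY win: one observer holds the indexing call instead of repeating it in each model’s after_save callback. SearchIndex, index_document, and the model classes are hypothetical stand-ins, not our actual Ferret code:

```ruby
# Hypothetical stand-in for our indexing backend
class SearchIndex
  def self.documents
    @documents ||= []
  end

  def self.index_document(model)
    documents << model.class.name
  end
end

class IndexObserver
  def self.instance
    @instance ||= new
  end

  # The single place we'd touch if we swapped out the indexing
  # solution later; the models never hear about it.
  def after_save(model)
    SearchIndex.index_document(model)
  end
end

# In Rails, ActiveRecord fires after_save automatically;
# here we wire it up by hand so the sketch is self-contained.
module FiresAfterSave
  def save
    IndexObserver.instance.after_save(self)
  end
end

class Article
  include FiresAfterSave
end

class BlogPost
  include FiresAfterSave
end

Article.new.save
BlogPost.new.save
puts SearchIndex.documents.inspect  # => ["Article", "BlogPost"]
```

Neither model contains a single line about indexing; both still get indexed on save.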
Logging from an Observer
Adam R. asks…
“I’d also like the ability to use the logger from within an observer, but that’s another issue.”
I assume that you are referring to the logger method? I always forget to even use that method. I do know that the following works just fine in an Observer.
class IndexObserver < ActiveRecord::Observer
  observe Article, Editorial, BlogPost, ClassifiedAd

  def after_save(model)
    RAILS_DEFAULT_LOGGER.warn("Every single day. Every word you say. Every game you play. Every night you stay. I'll be watching you.")
    # execute something fun
  end
end
This will output to your log file without any problem.
This reminded me of when I used to want to log from Unit Tests.
(few minutes later)
Okay, I just attempted to use logger from an Observer and you’re right… it doesn’t currently work. There is a simple fix, though: just extend ActiveRecord::Observer to add a logger method like so, and require it in config/environment.rb (much like I did with unit tests).
# lib/observer_extensions.rb
class ActiveRecord::Observer
  def logger
    RAILS_DEFAULT_LOGGER
  end
end
This will give you a solution to that problem.
class FooObserver < ActiveRecord::Observer
  observe Foo

  def after_save(model)
    logger.warn("I wonder if the #{model.class} knows that I've been watching it all along?")
  end
end
Observers Spy for Us
Most often, I look at Observers as being the guys that I hire to spy on my models. I don’t want my models to know that they’re being spied on and I’d like to keep it that way. They don’t solve all of our problems and it’s easy to overuse them. However, I have found several cases that they made a lot of sense and most of those cases have been where we’ve had the same things occurring in our model’s callbacks.
If you have other questions related to Observers, feel free to let me know. If you’re already using Observers, perhaps you could post a comment and/or blog post response with an example of when and how you use Observers in your Rails applications.
1 Don’t Repeat Yourself
Observers Big and Small
My colleague, Gary, keeps a stack of Ruby and Rails books on his desk and was implementing an Observer into a client project. It appears that the Agile Web Development with Rails book is still encouraging people to do the following in order to load an Observer.
# app/models/flower_observer.rb
class FlowerObserver < ActiveRecord::Observer
  observe Flower

  def after_create(model)
    # model.do_something!
  end
end
# controller(s)
class FlowerController < ApplicationController
  observer :flower_observer
end
What is wrong with this approach?
Well, in order for your Observer to be used, the callbacks of the model(s) it is observing need to be triggered through a controller. If you end up writing any scheduled rake tasks, your observer will not be called. In my opinion, the controller shouldn’t know this much about the model. In fact, the model doesn’t even really know about its observer… so why should a controller?
This was actually changed a long time ago (I previously blogged about a different solution here) and the Rails docs for ActiveRecord::Observer are currently correct.
Observers in the Environment
If you open up a recent version of config/environment.rb, you’ll notice the following in the comments.
# Activate observers that should always be running
# config.active_record.observers = :cacher, :garbage_collector
Take a moment to go ahead and specify which observer(s) you’d like to load into your Rails environment.
config.active_record.observers = :flower_observer
Then you can remove your observer calls in all your controllers, because that’s not where you should be defining them.
Also, if you’re not using Observers yet, I’d really encourage you to consider reading up on them and giving them a try.
Where did my observer go?
I was playing around with an application from script/console and all of a sudden… my Observers just stopped working. It took me a while to figure out that when I would call Dispatcher.reset_application!, it wouldn’t reload config/environment.rb, where I have defined the observers.
I really don’t like that this is the current implementation for invoking Observers, and adding them to the controllers doesn’t play well with my do-it-in-script/console approach.
The current solution?
# app/models/foo_bar.rb
class FooBar < ActiveRecord::Base
end
FooBarObserver.instance
... there needs to be a better way!