Kerry Buckley

What’s the simplest thing that could possibly go wrong?

Archive for the ‘Rails’ Category

Memoised remote attribute readers for ActiveRecord


I was recently working with an ActiveRecord class that exposed some attributes retrieved from a remote API, rather than from the database. The rules for handling the remote attributes were as follows:

  • If the record is unsaved, return the local value of the attribute, even if it’s nil.
  • If the record is saved and we don’t have a local value, call the remote API and remember and return the value.
  • If the record is saved and we already have a local value, return that.

Here’s the original code (names changed to protect the innocent):

[ruby]
class MyModel < ActiveRecord::Base
  attr_writer :foo, :bar

  def foo
    (new_record? || @foo) ? @foo : remote_object.foo
  end

  def bar
    (new_record? || @bar) ? @bar : remote_object.bar
  end

  def remote_object
    @remote_object ||= RemoteService.remote_object
  end
end
[/ruby]

The remote_object method makes a call to the remote service, and memoises the returned object (which contains all the attributes we are interested in).
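
The `||=` idiom behind that memoisation is worth a second look: the remote call only happens once per instance, but because `||=` re-evaluates whenever the variable is nil or false, a remote call that legitimately returned nothing would be retried on every read. Here's a plain-Ruby sketch of the behaviour (the `Fetcher` class and the lambda standing in for the remote service are hypothetical, purely for illustration):

```ruby
# Hypothetical stand-in for an object that memoises a remote call with ||=
class Fetcher
  attr_reader :calls

  def initialize(service)
    @service = service
    @calls = 0
  end

  def remote_object
    # Memoised: the begin/end block only runs while @remote_object is nil or false
    @remote_object ||= begin
      @calls += 1
      @service.call
    end
  end
end

fetcher = Fetcher.new(lambda { :remote_value })
fetcher.remote_object
fetcher.remote_object
fetcher.calls # 1 -- the service was only hit once
```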

I didn’t really like the duplication in all these accessor methods – we had more than the two I’ve shown here – so decided to factor it out into a common remote_attr_reader class method. Originally I had the method take a block which returned the remote value, but that made the tests more complicated, so I ended up using convention over configuration and having the accessor for foo call a remote_foo method.

Here’s the new code in the model:

[ruby]
class MyModel < ActiveRecord::Base
  remote_attr_reader :foo, :bar

  def remote_foo
    remote_object.foo
  end

  def remote_bar
    remote_object.bar
  end

  def remote_object
    @remote_object ||= RemoteService.remote_object
  end
end
[/ruby]

Here's the RemoteAttrReader module that makes it possible:

[ruby]
module RemoteAttrReader
  def remote_attr_reader(*names)
    names.each do |name|
      attr_writer name
      define_method name do
        if new_record? || instance_variable_get("@#{name}")
          instance_variable_get("@#{name}")
        else
          send("remote_#{name}")
        end
      end
    end
  end
end
[/ruby]

To make the module available to all models, I added an initialiser containing this line:

[ruby]
ActiveRecord::Base.send :extend, RemoteAttrReader
[/ruby]

Here’s the spec for the module:

[ruby]
require File.dirname(__FILE__) + '/../spec_helper'

class RemoteAttrReaderTestClass
  extend RemoteAttrReader
  remote_attr_reader :foo

  def remote_foo
    "remote value"
  end
end

describe RemoteAttrReader do
  let(:model) { RemoteAttrReaderTestClass.new }

  describe "for an unsaved object" do
    before do
      model.stub(:new_record?).and_return true
    end

    describe "when the attribute is not set" do
      it "returns nil" do
        model.foo.should be_nil
      end
    end

    describe "when the attribute is set" do
      before do
        model.foo = "foo"
      end

      it "returns the attribute" do
        model.foo.should == "foo"
      end
    end
  end

  describe "for a saved object" do
    before do
      model.stub(:new_record?).and_return false
    end

    describe "when the attribute is set" do
      before do
        model.foo = "foo"
      end

      it "returns the attribute" do
        model.foo.should == "foo"
      end
    end

    describe "when the attribute is not set" do
      it "returns the result of calling remote_" do
        model.foo.should == "remote value"
      end
    end
  end
end
[/ruby]

To simplify testing of the model, I created a matcher, which I put into a file in spec/support:

[ruby]
class ExposeRemoteAttribute
  def initialize(attribute)
    @attribute = attribute
  end

  def matches?(model)
    @model = model
    return false unless model.send(@attribute).nil?
    model.send "#{@attribute}=", "foo"
    return false unless model.send(@attribute) == "foo"
    model.stub(:new_record?).and_return false
    return false unless model.send(@attribute) == "foo"
    model.send "#{@attribute}=", nil
    model.stub("remote_#{@attribute}").and_return "bar"
    model.send(@attribute) == "bar"
  end

  def failure_message_for_should
    "expected #{@model.class} to expose remote attribute #{@attribute}"
  end

  def failure_message_for_should_not
    "expected #{@model.class} not to expose remote attribute #{@attribute}"
  end

  def description
    "expose remote attribute #{@attribute}"
  end
end

def expose_remote_attribute(expected)
  ExposeRemoteAttribute.new expected
end
[/ruby]

Testing the model now becomes a simple case of testing the remote_ methods in isolation, and using the matcher to test the behaviour of the remote_attr_reader call(s).

[ruby]
require File.dirname(__FILE__) + '/../spec_helper'

describe MyModel do
  it { should expose_remote_attribute(:name) }
  it { should expose_remote_attribute(:origin_server) }
  it { should expose_remote_attribute(:delivery_domain) }

  describe "reading remote foo" do
    # test as a normal method
  end
end
[/ruby]


Written by Kerry

April 27th, 2010 at 11:29 am

Posted in Rails,Ruby

Managing gems in a Rails project


Over the years I’ve tried a number of approaches for managing gem dependencies in a Rails project. Here’s a quick round-up of what I’ve tried, and the pros and cons of each.

Just use what’s on the system

This is probably most people’s default approach when first starting with Rails. Just sudo gem install whatever you need, require the appropriate gems (either in environment.rb or in the class that uses them), and you’re away.

This mostly works OK for small projects where you’re the only developer, but you still need to make sure the right gems are installed on the machine you’re deploying the application to.

Worse, though, is what happens when you come back to the project after a while, various gems have been updated, and things mysteriously don’t work any more. Not only do you have to mess around getting the code to work with the latest gem versions, but you probably don’t even know exactly which versions it used to work with.

Freeze (unpack) gems

I think I first came across this technique in Err the Blog’s Vendor Everything post. The idea is to install copies of all your gems into the project’s vendor/gems directory, meaning that wherever the code is running, you can guarantee that it has the correct versions of all its dependencies.

This got much easier in Rails 2.1, which allowed you to specify all your gems using config.gem lines in environment.rb (you can also put gems only needed in specific environments in the appropriate file, eg you might only want to list things like rspec and cucumber in config/environments/test.rb). You can then run sudo rake gems:install to install any gems that aren’t on your system, and rake gems:unpack to freeze them into vendor/gems, and be sure that wherever you check out or deploy the code, you’ll be running the same versions of the gems. There’s even a gems:build task to deal with gems that have native code (but more on that later).
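
For reference, the declarations look something like this (the gem names, versions and source URL here are only illustrative):

[ruby]
# config/environment.rb (illustrative)
Rails::Initializer.run do |config|
  config.gem 'will_paginate', :version => '2.3.11'
  config.gem 'hpricot', :version => '0.6', :source => 'http://code.whytheluckystiff.net'
end
[/ruby]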

Subsequent versions of Rails have improved on the original rake tasks – dependencies are now handled much better, for example – but there are still a few problems. The main one is the handling of gems that are required by rake tasks in your project, rather than just from your application code.

When you call a rake task in your Rails project, this is more-or-less what happens (I may have got some of the details slightly wrong):

  1. The top-level Rakefile is loaded.
  2. This in turn requires config/boot.rb, but not config/environment.rb.
  3. It then requires some standard rake stuff, and finally tasks/rails (which is part of Rails – specifically railties). This finds and requires all the .rake files in your plugins and your project’s lib/rake directory.

The problems start when you have a task that depends on the Rails :environment task, and also requires a gem which is listed in environment.rb. Because the gem-loading magic only happens when the environment is loaded, the rake task will be blissfully unaware of your frozen gems, and will load them from the system instead.
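
To make that concrete, here's the shape of a task that triggers the problem (the task and gem are hypothetical):

[ruby]
# lib/tasks/export.rake (hypothetical example)
require 'fastercsv' # runs when the Rakefile loads -- before environment.rb,
                    # so it resolves against system gems, not frozen ones

namespace :export do
  desc 'Export users as CSV'
  task :users => :environment do # frozen gems are only activated here, too late
    puts FasterCSV.generate { |csv| csv << %w[id name] }
  end
end
[/ruby]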

If the system gem is newer than the frozen one, you get errors like this:

can't activate foo (= 1.2.3, runtime) for [], already activated foo-1.2.4 for []

If you work on two projects that use different versions of a gem like this, you end up having to uninstall and reinstall them as you switch from one to the other, which gets tedious fairly quickly.

Specify gems, but don’t freeze

You can get round the wrong-version problem to some extent by specifying version numbers in environment.rb as ‘>= x.y.z’ (or by not specifying them at all). If you’re doing that, though, there’s not really much benefit in unpacking the gems, and you may as well just use rake gems:install to make sure they’re on the system. Of course the downside of this approach is that you can’t be sure that everyone’s running the exact same versions of the gems. Worse still, you can’t be sure that what’s on your production box matches your development and test environments.
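
If you go this route, the config.gem entries use loose constraints instead of pinned versions (gem names here are only illustrative):

[ruby]
# Inside the Rails::Initializer.run block in config/environment.rb (illustrative)
config.gem 'will_paginate', :version => '>= 2.3.0'
config.gem 'haml' # no constraint at all: whatever is installed wins
[/ruby]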

GemInstaller

GemInstaller solves most of the problems with the built-in Rails gem management by running as a preinitializer, meaning it gets loaded before the other boot.rb gubbins.

GemInstaller uses the gems installed on the system rather than freezing them into the project, but because it gets to run first it ensures that the correct versions are used, even if there are newer versions installed. By default it checks your project’s gem list and installs anything that’s missing every time it runs (which is whenever you start a server, run the console, execute a rake task etc). You create a YAML file listing the gems you need (dependencies are handled automatically), and other options such as an HTTP proxy if necessary.

Of course on Unix-like systems, which is most of them (although I hear there are still people developing Rails projects on Windows), gems are generally installed as root. GemInstaller can get round this in two ways – either by setting the --sudo option and setting a rule in /etc/sudoers to allow the appropriate user(s) to run the gem commands as root without having to provide a password, or by using the built-in gem behaviour that falls back to installing in ~/.gem.

Personally I like to keep all my gems in one place, accessible to any user, so I went for the sudo approach. The only problem with this is that it uses sudo for all gem commands, rather than just install or update, which means it runs a sudo gem list every time your app starts up. Depending on the way you have Apache and Passenger set up this may mean granting sudo access to what should be a low-privileged user.

I ended up disabling the automatic updating of gems, and just warning when they’re missing instead. In fact later versions of GemInstaller don’t try to handle the update automatically anyway.

I created a separate script to do the update, which can be run manually, on a post-merge git hook, or as part of the Capistrano deployment task.

Because GemInstaller needs to go out to the network to fetch any new or updated gems, things get a bit more painful (as always) if you are unfortunate enough to be stuck behind a corporate HTTP proxy. Actually it’s easy enough to configure if you’re always behind a proxy, but it gets slightly trickier if your web access is sometimes proxied and sometimes direct. Nothing that can’t be solved of course.

Unfortunately you can still end up with version conflicts if a newer version of a gem is first activated as a dependency of one you’ve specified, and you then explicitly require an older version of it, but these can usually be resolved by shuffling the order of the gems in geminstaller.yml.

Bundler

Bundler is the newest kid on the gem management block, and looks to have solved pretty much all the problems faced by the other approaches. It’s based on the gem management approach from Merb, and can be used in any Ruby project (not just Rails).

Bundler works by unpacking gems into the project (I recommend using a directory other than the default vendor/gems to avoid confusing Rails – this can be configured by setting bundle_path and bin_path in the Gemfile), but the intention is that you only commit the .gem files in the cache directory to source control. Gems are then installed locally within the project, including any platform-specific native code as well as the commands in bin.

Because Bundler resolves all dependencies up-front, you only need to specify the gems you’re using explicitly, and let it handle the rest, which hopefully means an end to version conflicts at last.

Here’s an example Gemfile:

[ruby]
source 'http://gemcutter.org'
source 'http://gems.github.com'
bundle_path 'vendor/bundled_gems'
bin_path 'vendor/bundled_gems/bin'

gem 'rails', '2.3.4'
gem 'bundler', '0.6.0'

gem 'capistrano', '2.5.8'
gem 'capistrano-ext', '1.2.1'
gem 'cucumber', '0.4.3', :except => :production
# [more gems here]

disable_system_gems
[/ruby]

Note the two additional sources (rubyforge.org is configured by default), the path overrides, and the last line, which removes the system gems from the paths, avoiding any potential confusion.

I’ve put this in config/preinitializer.rb to update from the cached gems on startup (this doesn’t hit the network):

[ruby]
$stderr.puts 'Updating bundled gems...'
system 'gem bundle --cached'
require "#{RAILS_ROOT}/vendor/bundled_gems/environment"
[/ruby]

To avoid any startup delays after an upgrade, I also call system 'gem bundle --cached' from the after_update_code hook in the capfile.
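
In Capistrano 2 terms, that hook looks roughly like this (the task name and body are assumptions, sketched from memory):

[ruby]
# capfile / config/deploy.rb (sketch)
after 'deploy:update_code', 'gems:bundle'

namespace :gems do
  desc 'Install bundled gems from the checked-in cache (no network access)'
  task :bundle, :roles => :app do
    run "cd #{release_path} && gem bundle --cached"
  end
end
[/ruby]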

Finally, to make sure only the .gem files are checked in, add these lines to .gitignore (you’ll still need to explicitly git add the bundled_gems/cache directory):

vendor/bundled_gems
!vendor/bundled_gems/cache

[Update 3 November] Yehuda Katz just posted an article all about Bundler, including features coming in the imminent 0.7 release.


Written by Kerry

November 2nd, 2009 at 5:24 pm

Posted in Rails,Ruby

My (very!) small part in the Array#forty_two controversy


For those outside the Rails community who have no idea what I’m on about, some people got a bit upset about Rails 2.2 defining Array#second up to Array#tenth, so for example you can call foo.fifth instead of foo[4] (you can already call foo.first instead of foo[0]). One of the last changes to be committed before 2.2 was released was to slim the list down to just second, third, fourth and fifth, but adding Array#forty_two (the ultimate answer) instead.

[Screenshots: DHH’s tweet, the first commit, my reply, DHH’s reply, and the follow-up commit]

Written by Kerry

November 24th, 2008 at 12:52 pm

Rails 2.2 Envycast Review


I’ve been a fan of the RailsEnvy guys (Gregg Pollack and Jason Seifer) ever since their “Hi, I’m Ruby on Rails” spoof of the “I’m a Mac” ads, and have been listening to their podcast ever since. I even got a mention on it once. Well that’s not strictly true – I wasn’t actually mentioned, but a patch I’d contributed to Capistrano was, which is close enough.

Recently, Gregg and Jason have branched out into screencasts, but I hadn’t actually watched one because (understandably) they charge for them, and I was too tight to cough up the cash. £1200 for a new MacBook, no problem. A fiver for a screencast? What am I, made of money?

Anyhow, when I saw that they were looking for people to review their Ruby on Rails Envycast, covering the latest goodness in Rails 2.2, I jumped at the chance to get a free copy. A wonderful example of cognitive bias, given that I wouldn’t have agreed to write a review just to be paid $16.

What do I get for the money?

The basic $9 gets you the screencast and a set of code samples to go with it, or for $16 they’ll throw in Carlos Brando’s Ruby on Rails 2.2 PDF too. Alternatively the PDF is available on its own, also for $9.

Video

The video is available in Quicktime or Ogg formats at a resolution of 569×480, as well as a version optimised for iPhones and iPods. Total running time is just under 45 minutes, and incredibly the first 39½ of those go by before Jason makes any claims about Rails’s scalability.

I don’t know whether it’s unique, but the Envycast style of having the presenters chroma-keyed onto the Keynote presentation generally works very well. The visuals themselves are professional, although sometimes the ‘sparkle’ effect is a bit overused for my taste. The presentation style is much as you’d expect if you’ve listened to the podcasts, with plenty of cheesy humour to keep things interesting. I think having two people present in a conversational style is a big help.

The screencast is split into sections, each covering the new features for a different component (ActiveRecord, ActiveSupport, ActionPack, ActionController, Railties, internationalization and performance). The Quicktime version (not sure about the others) has bookmarks, making it easy to jump to a particular section. The whole thing is set against a variety of city skylines to liven the background up a little – by the way guys, that’s Tower Bridge, not London Bridge.

Each new feature is introduced with an example, generally contrasting the ‘old’ way of doing something with the equivalent in 2.2. There’s enough detail to get the idea of what’s changed, without dwelling too long on each one. One tiny gripe with the code snippets on screen: the pedant in me hates seeing curly quotes in code, because I know if I typed puts ‘foo’ into irb instead of puts 'foo', it wouldn’t work.

Code samples

The screencast comes with a set of code samples to illustrate all the features discussed in the screencast. These take the form of sample classes with Test::Unit test cases, along with rakefiles to run them. The sample directory contains a frozen installation of Rails 2.2, so all you need to do to run them is add the appropriate values to database.yml. I had trouble running them initially because they were inside a directory with a space in its name, but other than that it all worked nicely.

PDF

The PDF that comes with the $16 bundle is by Carlos Brando, well-known for his free Rails 2.1 book. It’s available in the original Portuguese, or translated to English by Carl Youngblood. The book weighs in at 118 pages, and as you would expect goes into more detail than the screencast. It claims to cover all the major changes in Rails 2.2 (I haven’t checked!), and contains clear descriptions with examples.

Conclusion

So is it worth it? On balance, I think the answer is yes, although I wonder whether they’d sell more at $5 rather than $9 – after all, I can buy (to pick an example at random) the entire Naked Gun trilogy on DVD for roughly the same amount, and Gregg and Jason aren’t that funny. The value is in collecting all the information in one place – you could trawl through the release notes and lighthouse tickets to get all the same information, but if you value your time at all, the screencast and PDF pay for themselves many times over.

Should you buy the PDF, the video or both? If you just want the hard facts, go for the PDF, but if you want to be entertained too (assuming you find the Rails Envy podcasts entertaining), get the video as well. The next episode, Scaling Ruby, is out now, and I might buy it just to see if Jason finally admits that Rails might actually be able to scale.


Written by Kerry

November 8th, 2008 at 9:04 pm

Posted in Rails,Ruby

Defending Ruby and Rails in the Enterprise


I consider myself fortunate that the previous two projects I worked on (the BT Web21C Portal and Mojo) were Rails-based (actually it wasn’t just luck in the former case, as I had a part in selecting the framework). I love the expressiveness and flexibility of the Ruby language, the power and relative simplicity of the Rails framework, and the all-round awesomeness of tools like RSpec and Capistrano, and I don’t particularly relish the thought of going back to Java (although I’m told that Spring is much nicer than last time I used it).

At our recent release planning session, I was assigned to a new project, which involves (among other things) exposing a CLI-based configuration interface as a web service. In our initial discussions, the four of us more-or-less agreed on a few initial decisions:

  • The interface should be RESTful. Fortunately the people developing the upstream system shared this opinion.
  • Rails is an ideal framework for RESTful web services.
  • Ruby seemed like a good fit for parsing the command responses too.
  • Asynchronous behaviour would be handled using queues (probably ActiveMQ).

Since we’d all been working on Rails projects when the new team was formed, we assumed that this wouldn’t be a particularly contentious route to go down, but unfortunately our director/architect/boss didn’t see things quite the same way. He had two main objections:

  • Rails may make sense for GUI applications, but why on earth would you use it for a service? All our other [SOAP] services are written in Java.
  • At some point the application will need to go into support, and we don’t have support/operations people with Ruby or Rails experience.

I think the first point’s easier to address, as I’d argue it’s based on a misunderstanding: Rails isn’t really anything to do with GUIs, but is a framework for creating MVC web applications. Virtually all the heavy lifting Rails takes care of is in the controller and model areas, with the creation of the actual visible GUI being left to the developer to take care of with the usual mix of HTML, CSS and Javascript. The only thing Rails adds is the ability to insert dynamic content using ERB – similar to the role of JSP in Java EE.

A RESTful web service is, to all intents and purposes, the same as a normal web application, but (potentially) without the HTML. All the power that Rails brings to web application development is also harnessed when creating RESTful services.

The second point represents a much more fundamental strategy choice. If the company makes the decision that all development is going to use Java (the language as well as the platform), then we inevitably lose the flexibility to choose what may appear (in a local context) to be the right tool for the job. Personally I think that would be a shortsighted and ill-informed decision: if that were the strategy, we’d presumably all still be developing in C, or COBOL, or Assembler. Or we’d have gone bust. But then I’m not an architect (incidentally, according to Peter Gillard-Moss, that’s reason number 10 why I don’t deserve to be fired), so what do I know?

However, if Ruby is considered an acceptable technology choice for “normal” web applications, we’ll still need people with appropriate skills to support those, so the problem doesn’t go away. I suspect even for a Java specialist, supporting a well-written Rails application with good test coverage is probably easier than supporting some of the spaghetti-coded Java I’ve seen.

Anyway, our arguments obviously weren’t totally unconvincing, because we were given a couple of weeks to show what we could produce before getting a final decision. That time runs out on Monday, so if I’m unnaturally grumpy after that it’ll be because we’ve been told to chuck all our work so far away and start from scratch in Java. Or possibly FORTRAN.

Update, 16 June Well we made our case, and we get to stick with Rails. Celebration all round!

Written by Kerry

June 12th, 2008 at 9:42 am

Posted in Enterprise,Rails,Ruby

“You have to declare the controller name in controller specs”


For ages I’ve been getting an intermittent problem with RSpec, where occasionally I’d see the following error on a model spec:

You have to declare the controller name in controller specs. For example:
describe "The ExampleController" do
controller_name "example" #invokes the ExampleController
end

The problem seemed to depend on which order the specs were run in, and for rake it could be avoided by removing --loadby mtime --reverse from spec.opts. It was a real pain with autotest though, and today (my original plan of “wait for RSpec 1.1 and hope it goes away” having failed) I finally got round to looking into it properly.

It seemed that the error was being triggered by the rather unpleasant code I wrote a while ago to simplify testing of model validation. Digging into the RSpec source to see what was happening, I found that that error message only gets returned when (as you’d expect) you don’t declare the controller name in a controller spec (specifically in an instance of Spec::Rails::Example::ControllerExampleGroup). The code that decides what type of example group to create lives in Spec::DSL::BehaviourFactory, and according to its specs, there are two methods it uses to figure out what type of spec it’s looking at:

[ruby]
it "should return a ModelExampleGroup when given :type => :model" do

it "should return a ModelExampleGroup when given :spec_path => '/blah/spec/models/'" do

it "should return a ModelExampleGroup when given :spec_path => '\\blah\\spec\\models\\' (windows format)" do

it "should favor the :type over the :spec_path" do

[/ruby]

I began to suspect that the problem was caused by the fact that my specify_attributes method wasn’t declared in a file in spec/models, so I thought I’d try specifying the type explicitly. So instead of this:

[ruby]
describe "#{label} with all attributes set" do
[/ruby]

I changed it to this:

[ruby]
describe "#{label} with all attributes set", :type => 'model' do
[/ruby]

Sure enough, it worked! Not sure whether anyone else is likely to see the same problem (unless they’re foolish enough to use my validation spec code), but hopefully if you do, a Google search will bring up this post and it might point you in the right direction.

Written by Kerry

December 18th, 2007 at 9:35 pm

Posted in Rails,rspec,Ruby

Rails Envy’s take on the werewolf question


This clip [MP3, 57s] from a Rails Envy podcast made me laugh. It’s referring to Charles Nutter’s recent musings on whether werewolf is killing the conference hackfest.

Incidentally, how often do you get the chance to Google for “nutter werewolf”?


Written by Kerry

November 26th, 2007 at 10:18 am

Weird Rails bug


I spent some time yesterday tracking down a bizarre bug which was causing some of our Selenium tests to fail. Watching the browser running the tests, I could see that occasionally a page would fail to render, with an “invalid argument” error and a stack trace. The line in question was an <%= end_form_tag %> in a layout. The strange thing was, it didn’t display the same behaviour when I ran the single failing test on its own, or when I viewed the page myself.

Or at least, I thought it didn’t. Because it seemed intermittent, I tried reloading the page a few times, and sure enough, the error appeared. Once. Then the page reloaded successfully six times, before failing again. This was completely repeatable – six times OK; one stack trace. Regular as clockwork.

Completely stumped, I thought I might as well at least replace the deprecated <%= start_form_tag %> … <%= end_form_tag %> with <% form_tag do %> … <% end %>, and lo and behold, that fixed it.

Unfortunately, I have no idea why. An imaginary prize to whoever can explain it!

Written by Kerry

November 14th, 2007 at 9:21 am

Posted in Rails

Rails, SOAP and REST


From the list of new features coming in Rails 2.0:

It’ll probably come as no surprise that Rails has picked a side in the SOAP vs REST debate. Unless you absolutely have to use SOAP for integration purposes, we strongly discourage you from doing so.

Written by Kerry

October 5th, 2007 at 8:24 am

Posted in Rails,Software

Correct use of the flash in Rails


[Update 9 May 2012]

This seems to work for testing flash.now in Rails 3:

[ruby]
it "puts an error in the flash" do
  post :create
  flash[:error].should == "Sorry, that's wrong."
end

it "does not persist the flash" do
  post :create
  flash.sweep
  flash[:error].should be_nil
end
[/ruby]

[Update 20 April 2010]

I recently had problems testing flash.now, and Google kept leading me back to this post. Unfortunately it doesn’t seem to work with the current version of Rails (I’m using 2.3.5 at the moment).

This post from Pluit Solutions gives an alternative approach which seems to work. I haven’t tried it with Rails 3 though.


I don’t know whether this has caught anyone else out, or whether we just didn’t read the documentation properly (it’s covered briefly on p153 of AWDwR), but I thought I’d mention it anyway.

Anyone who’s written a Rails app will know that the ‘flash’ is used to store error and status messages, usually on form submissions. Model validation failure messages automatically get copied into the flash, but you often want to do it manually too.

[ruby]
flash[:notice] = "User Details updated."
redirect_to edit_user_path(@user)
[/ruby]

The gotcha comes when you want to display a message and render a page, as opposed to redirecting – for example when errors are preventing a form from being submitted. This is how not to do it:

[ruby]
flash[:error] = "Password doesn't match confirmation." # WRONG!
render :action => 'change_password'
[/ruby]

The problem is that the flash is stored for the next request. Because we’re no longer doing a redirect, that means the message may appear wherever the user goes next, not just on the page that we just rendered. To avoid this, use flash.now, which is only used for the current request:

[ruby]
flash.now[:error] = "Password doesn't match confirmation."
render :action => 'change_password'
[/ruby]

The rule of thumb is to use flash if you’re redirecting, and flash.now if you’re rendering (either explicitly, or by dropping through to the default view for the action).
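
The lifecycle is easier to see in a toy model: values written through `[]=` survive exactly one sweep (i.e. they're still there on the next request), while values written through `now` are gone at the first sweep. This `MiniFlash` is an illustrative simplification of the idea, not Rails’ real `FlashHash`:

```ruby
# Toy model of flash vs flash.now semantics (not the real Rails FlashHash)
class MiniFlash
  # flash.now writes go through this proxy and are marked 'used' immediately
  NowProxy = Struct.new(:flash) do
    def []=(key, value)
      flash.write(key, value, true)
    end

    def [](key)
      flash[key]
    end
  end

  def initialize
    @hash = {}
    @used = {} # true means "delete at the next sweep"
  end

  def [](key)
    @hash[key]
  end

  def []=(key, value)
    write(key, value, false) # survives one sweep, i.e. the next request
  end

  def now
    NowProxy.new(self)
  end

  def write(key, value, used)
    @hash[key] = value
    @used[key] = used
  end

  # called at the end of every request
  def sweep
    @hash.delete_if { |key, _| @used[key] }
    @hash.each_key { |key| @used[key] = true }
  end
end

flash = MiniFlash.new
flash[:notice] = 'saved'    # set while handling a redirect
flash.sweep                 # end of request 1
flash[:notice]              # => "saved" (still visible on request 2)
flash.sweep                 # end of request 2
flash[:notice]              # => nil

flash.now[:error] = 'wrong' # set while rendering
flash[:error]               # => "wrong" (same request)
flash.sweep
flash[:error]               # => nil
```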

All very well, but whatever you put in flash.now is cleared out at the end of the request, so how do you test it? The answer (for RSpec, at least) lies in a comment on this RSpec feature request – basically just add the following to spec_helper.rb:

[ruby]
module ActionController
  module Flash
    class FlashHash
      def initialize
        @hash = {}
        @now_hash = {}
      end

      def [](key)
        @hash[key]
      end

      def []=(key, obj)
        @hash[key] = obj
      end

      def discard(k = nil)
        initialize
      end

      def now
        @now_hash
      end

      def update(hash)
        @hash.update(hash)
      end

      def sweep
        # do nothing
      end
    end
  end
end
[/ruby]

You can now do something like this:

[ruby]
describe "When a user tries to change his password with an invalid verification code" do

  it "should put an error message in the flash" do
    flash.now[:error].should == "Incorrect verification code or password."
  end

  it "should not persist the flash" do
    flash[:error].should be_nil
  end
end
[/ruby]


Written by Kerry

July 4th, 2007 at 2:37 pm

Posted in Rails