
DataMapper

One API for a variety of datastores


DataMapper comes with the ability to use the same API to talk to a multitude of different
datastores. There are adapters for the usual RDBMS suspects, NoSQL stores, various file formats
and even some popular webservices.
There's a (probably incomplete) list of available DataMapper adapters on the GitHub wiki, with new ones getting implemented regularly. A quick GitHub search should give you further hints on what's currently available.

Plays Well With Others


With DataMapper you define your mappings in your model. Your data-store can develop
independently of your models using Migrations.
To support data-stores which you don't have the ability to manage yourself, it's simply a matter of telling DataMapper where to look. This makes DataMapper a good choice when working with legacy databases:
class Post
  include DataMapper::Resource

  # set the storage name for the :legacy repository
  storage_names[:legacy] = 'tblPost'

  # use the datastore's 'pid' field for the id property.
  property :id, Serial, :field => :pid

  # use a property called 'uid' as the child key (the foreign key)
  belongs_to :user, :child_key => [ :uid ]
end

DataMapper only issues updates or creates for the properties it knows about. So it plays well
with others. You can use it in an Integration Database without worrying that your application
will be a bad actor causing trouble for all of your other processes.
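
As a rough sketch of what that means in practice (assuming a Post model that also declares a :title property), an update only touches the columns DataMapper manages and that have actually changed:

post = Post.get(42)
post.title = 'New title'
post.save
# DataMapper issues an UPDATE for the dirty, mapped properties only;
# columns it doesn't know about in the same table are left untouched.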
DataMapper has full support for Composite Primary Keys (CPK) built in. Specifying the properties that form the primary key is easy.
class LineItem
  include DataMapper::Resource

  property :order_id,    Integer, :key => true
  property :item_number, Integer, :key => true
end

If we know an order_id/item_number combination, we can easily retrieve the corresponding line item from the datastore.
order_id, item_number = 1, 1
LineItem.get(order_id, item_number)
# => #<LineItem @order_id=1 @item_number=1>

Less need for writing migrations


With DataMapper, you specify the datastore layout inside your Ruby models. This allows DataMapper to create the underlying datastore schema based on the models you defined. The #auto_migrate! and #auto_upgrade! methods can be used to generate a schema in the datastore that matches your model definitions.
While #auto_migrate! destructively drops and recreates tables to match your model definitions,
#auto_upgrade! supports upgrading your datastore to match your model definitions, without
actually destroying any already existing data.
There are still some limitations to the operations that #auto_upgrade! can perform. We're
working hard on making it smarter, but there will always be scenarios where an automatic
upgrade of your schema won't be possible. For example, there's no sane strategy for
automatically changing a column length constraint from VARCHAR(100) to VARCHAR(50).
DataMapper can't know what it should do when the data doesn't validate against the new
tightened constraints.
In situations where neither #auto_migrate! nor #auto_upgrade! quite cut it, you can still fall
back to the classic migrations feature provided by dm-migrations.
Here's some code that puts #auto_migrate! and #auto_upgrade! to use.
require 'rubygems'
require 'dm-core'
require 'dm-migrations'

DataMapper::Logger.new($stdout, :debug)
DataMapper.setup(:default, 'mysql://localhost/test')

class Person
  include DataMapper::Resource
  property :id,   Serial
  property :name, String, :required => true
end

DataMapper.auto_migrate!

# ~ (0.015754) SET sql_auto_is_null = 0
# ~ (0.000335) SET SESSION sql_mode = 'ANSI,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_UNSIGNED_SUBTRACTION,TRADITIONAL'
# ~ (0.283290) DROP TABLE IF EXISTS `people`
# ~ (0.029274) SHOW TABLES LIKE 'people'
# ~ (0.000103) SET sql_auto_is_null = 0
# ~ (0.000111) SET SESSION sql_mode = 'ANSI,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_UNSIGNED_SUBTRACTION,TRADITIONAL'
# ~ (0.000932) SHOW VARIABLES LIKE 'character_set_connection'
# ~ (0.000393) SHOW VARIABLES LIKE 'collation_connection'
# ~ (0.080191) CREATE TABLE `people` (`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `name` VARCHAR(50) NOT NULL, PRIMARY KEY(`id`)) ENGINE = InnoDB CHARACTER SET utf8 COLLATE utf8_general_ci
# => #<DataMapper::DescendantSet:0x101379a68 @descendants=[Person]>

class Person
  property :hobby, String
end

DataMapper.auto_upgrade!

# ~ (0.000612) SHOW TABLES LIKE 'people'
# ~ (0.000079) SET sql_auto_is_null = 0
# ~ (0.000081) SET SESSION sql_mode = 'ANSI,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_UNSIGNED_SUBTRACTION,TRADITIONAL'
# ~ (1.794475) SHOW COLUMNS FROM `people` LIKE 'id'
# ~ (0.001412) SHOW COLUMNS FROM `people` LIKE 'name'
# ~ (0.001121) SHOW COLUMNS FROM `people` LIKE 'hobby'
# ~ (0.153989) ALTER TABLE `people` ADD COLUMN `hobby` VARCHAR(50)
# => #<DataMapper::DescendantSet:0x101379a68 @descendants=[Person]>

Data integrity is important


DataMapper makes it easy to leverage native techniques for enforcing data integrity. The dm-constraints plugin provides support for establishing true foreign key constraints in databases that support that concept.
require 'rubygems'
require 'dm-core'
require 'dm-constraints'
require 'dm-migrations'

DataMapper::Logger.new($stdout, :debug)
DataMapper.setup(:default, 'mysql://localhost/test')

class Person
  include DataMapper::Resource
  property :id, Serial
  has n, :tasks, :constraint => :destroy
end

class Task
  include DataMapper::Resource
  property :id, Serial
  belongs_to :person
end

DataMapper.auto_migrate!

# ~ (0.000131) SET sql_auto_is_null = 0
# ~ (0.000141) SET SESSION sql_mode = 'ANSI,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_UNSIGNED_SUBTRACTION,TRADITIONAL'
# ~ (0.017995) SHOW TABLES LIKE 'people'
# ~ (0.000278) SHOW TABLES LIKE 'tasks'
# ~ (0.001435) DROP TABLE IF EXISTS `people`
# ~ (0.000226) SHOW TABLES LIKE 'people'
# ~ (0.000093) SET sql_auto_is_null = 0
# ~ (0.000087) SET SESSION sql_mode = 'ANSI,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_UNSIGNED_SUBTRACTION,TRADITIONAL'
# ~ (0.000334) SHOW VARIABLES LIKE 'character_set_connection'
# ~ (0.000278) SHOW VARIABLES LIKE 'collation_connection'
# ~ (0.187402) CREATE TABLE `people` (`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, PRIMARY KEY(`id`)) ENGINE = InnoDB CHARACTER SET utf8 COLLATE utf8_general_ci
# ~ (0.000309) DROP TABLE IF EXISTS `tasks`
# ~ (0.000313) SHOW TABLES LIKE 'tasks'
# ~ (0.200487) CREATE TABLE `tasks` (`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `person_id` INT(10) UNSIGNED NOT NULL, PRIMARY KEY(`id`)) ENGINE = InnoDB CHARACTER SET utf8 COLLATE utf8_general_ci
# ~ (0.146982) CREATE INDEX `index_tasks_person` ON `tasks` (`person_id`)
# ~ (0.002525) SELECT COUNT(*) FROM "information_schema"."table_constraints" WHERE "constraint_type" = 'FOREIGN KEY' AND "table_schema" = 'test' AND "table_name" = 'tasks' AND "constraint_name" = 'tasks_person_fk'
# ~ (0.230075) ALTER TABLE `tasks` ADD CONSTRAINT `tasks_person_fk` FOREIGN KEY (`person_id`) REFERENCES `people` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
# => #<DataMapper::DescendantSet:0x101379a68 @descendants=[Person, Task]>

Notice how the last statement adds a foreign key constraint to the schema definition.

Strategic Eager Loading


DataMapper will only issue the bare minimum of queries to your data-store that it needs to. For example, the following code will only issue 2 queries. Notice how we don't supply any extra :include information.
zoos = Zoo.all
zoos.each do |zoo|
  # on first iteration, DM loads up all of the exhibits for all of the
  # items in zoos in 1 query to the data-store.

  zoo.exhibits.each do |exhibit|
    # n+1 queries in other ORMs, not in DataMapper
    puts "Zoo: #{zoo.name}, Exhibit: #{exhibit.name}"
  end
end

The idea is that you aren't going to load a set of objects and use only an association in just one of them. This should hold up pretty well against a 99% rule. When you don't want it to work like this, just load the item you want in its own set. So DataMapper thinks ahead. We like to call it "performant by default". This feature single-handedly wipes out the "N+1 Query Problem".
DataMapper also waits until the very last second to actually issue the query to your data-store.
For example, zoos = Zoo.all won't run the query until you start iterating over zoos or call one
of the 'kicker' methods like #length. If you never do anything with the results of a query,
DataMapper won't incur the latency of talking to your data-store.
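
As a small illustration of this laziness (using the Zoo model from the example above):

zoos = Zoo.all                       # no query has been issued yet
open_zoos = zoos.all(:open => true)  # still no query, just a refined collection

open_zoos.each { |zoo| puts zoo.name }  # iterating (or calling a kicker like #length) finally runs the query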
Note that this currently doesn't work when you start to nest loops that access the associations more than one level deep. The following would not issue the optimal number of queries:
zoos = Zoo.all
zoos.each do |zoo|
  # on first iteration, DM loads up all of the exhibits for all of the
  # items in zoos in 1 query to the data-store.

  zoo.exhibits.each do |exhibit|
    # n+1 queries in other ORMs, not in DataMapper
    puts "Zoo: #{zoo.name}, Exhibit: #{exhibit.name}"

    exhibit.items.each do |item|
      # currently DM won't be smart about the queries it generates for
      # accessing the items in any particular exhibit
      puts "Item: #{item.name}"
    end
  end
end

However, there's work underway to remove that limitation. In the future, it will be possible to get
the same smart queries inside deeper nested iterations.
Depending on your specific needs, it might be possible to work around this limitation by using DataMapper's feature that allows you to query models by their associations, as described briefly in the chapter below.

Querying models by their associations


DataMapper allows you to create and search for any complex object graph simply by providing a
nested hash of conditions. The following example uses a typical Customer - Order domain model
to illustrate how nested conditions can be used to both create and query models by their
associations.
For a complete definition of the Customer - Order domain models have a look at the Finders
page.

# A hash specifying one customer with one order
#
# In general, possible keys are all property and relationship
# names that are available on the relationship's target model.
# Possible toplevel keys depend on the property and relationship
# names available in the model that receives the hash.
#
customer = {
  :name   => 'Dan Kubb',
  :orders => [
    {
      :reference   => 'TEST1234',
      :order_lines => [
        {
          :item => {
            :sku        => 'BLUEWIDGET1',
            :unit_price => 1.00,
          },
        },
      ],
    },
  ]
}

# Create the Customer with the nested options hash
Customer.create(customer)
# => [#<Customer @id=1 @name="Dan Kubb">]

# The same options to create can also be used to query for the same object
p Customer.all(customer)
# => [#<Customer @id=1 @name="Dan Kubb">]

QueryPaths can be used to construct joins in a very declarative manner.


Starting from a root model, you can call any relationship by its name. The returned object again
responds to all property and relationship names that are defined in the relationship's target model.
This means that you can walk the chain of available relationships, and then match against a
property at the end of that chain. The object returned by the last call to a property name also
responds to all the comparison operators that we saw above. This makes for some powerful join
construction!
Customer.all(Customer.orders.order_lines.item.sku.like => "%BLUE%")
# => [#<Customer @id=1 @name="Dan Kubb">]

You can even chain calls to all or first to continue refining your query or search within a
scope. See Finders for more information.
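
For instance, a brief sketch of such chaining, reusing the query path from above (the additional name condition is purely illustrative):

blue_customers = Customer.all(Customer.orders.order_lines.item.sku.like => '%BLUE%')

# refine the already-scoped collection with another #all, then kick it with #first
blue_customers.all(:name.like => 'Dan%').first
# => #<Customer @id=1 @name="Dan Kubb">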

Identity Map

One row in the database should equal one object reference. Pretty simple idea. Pretty profound
impact. If you run the following code in ActiveRecord you'll see all false results. Do the same
in DataMapper and it's true all the way down.
@parent = Tree.first(:conditions => { :name => 'bob' })

@parent.children.each do |child|
  puts @parent.object_id == child.parent.object_id
end

This makes DataMapper faster and allows it to allocate fewer resources to get things done.

Laziness Can Be A Virtue


Columns of potentially infinite length, like Text columns, are expensive in data-stores. They're
generally stored in a different place from the rest of your data. So instead of a fast sequential
read from your hard-drive, your data-store has to hop around all over the place to get what it
needs.
With DataMapper, these fields are treated like in-row associations by default, meaning they are
loaded if and only if you access them. If you want more control you can enable or disable this
feature for any column (not just text-fields) by passing a lazy option to your column mapping
with a value of true or false.
class Animal
  include DataMapper::Resource

  property :id,    Serial
  property :name,  String
  property :notes, Text    # lazy-loads by default
end

Plus, lazy-loading of Text properties happens automatically and intelligently when working with associations. The following only issues 2 queries to load up all of the notes fields on each animal:
animals = Animal.all
animals.each do |pet|
  pet.notes
end

Embracing Ruby
DataMapper loves Ruby and is therefore tested regularly against all major Ruby versions. Before
release, every gem is explicitly tested against MRI 1.8.7, 1.9.2, JRuby and Rubinius. We're
proud to say that almost all of our specs pass on all these different implementations.

Have a look at our CI server reports for detailed information about which gems pass or fail their specs on the various Ruby implementations. Note that these results always reflect the state of the latest code and not the state of the latest released gem. Our CI server runs tests for all permutations whenever someone commits to any of the tested repositories on GitHub.

All Ruby, All The Time


DataMapper goes further than most Ruby ORMs in letting you avoid writing raw query
fragments yourself. It provides more helpers and a unique hash-based conditions syntax to cover
more of the use-cases where issuing your own SQL would have been the only way to go.
For example, any finder option that is non-standard is considered a condition. So you can write Zoo.all(:name => 'Dallas') and DataMapper will look for zoos with the name 'Dallas'. It's just a little thing, but it's so much nicer than writing Zoo.find(:all, :conditions => [ 'name = ?', 'Dallas' ]), and it won't incur the Ruby overhead of Zoo.find_by_name('Dallas'), nor is it more difficult to understand once the number of parameters increases.
What if you need other comparisons though? Try these:
Zoo.first(:name => 'Galveston')

# 'gt' means greater-than. 'lt' is less-than.
Person.all(:age.gt => 30)

# 'gte' means greater-than-or-equal-to. 'lte' is also available
Person.all(:age.gte => 30)

Person.all(:name.not => 'bob')

# If the value of a pair is an Array, we do an IN-clause for you.
Person.all(:name.like => 'S%', :id => [ 1, 2, 3, 4, 5 ])

# Does a NOT IN () clause for you.
Person.all(:name.not => [ 'bob', 'rick', 'steve' ])

# Ordering
Person.all(:order => [ :age.desc ])
# .asc is the default

Open Development
DataMapper sports a very accessible code-base and a welcoming community. Outside
contributions and feedback are welcome and encouraged, especially constructive criticism. Go
ahead, fork DataMapper, we'd love to see what you come up with!

Getting started with DataMapper


If you think you might need some help, there's an active community supporting DataMapper
through the mailing list and the #datamapper IRC channel on irc.freenode.net.
So let's imagine we're setting up some models for a blogging app. We'll keep it nice and simple. The first thing to decide on is what models we want. Post is a given. So is Comment. But let's mix it up and do Category too.

Install an Adapter
First, you will need to install an adapter, which allows DataMapper to communicate with the database:

dm-sqlite-adapter
# Debian / Ubuntu
sudo apt-get install libsqlite3-dev
# RedHat / Fedora
sudo yum install sqlite-devel
# MacPorts
sudo port install sqlite3
# HomeBrew
sudo brew install sqlite
gem install dm-sqlite-adapter

dm-mysql-adapter
# Debian / Ubuntu
sudo apt-get install libmysqlclient-dev
# RedHat / Fedora
sudo yum install mysql-devel
# MacPorts
sudo port install mysql5
# HomeBrew
sudo brew install mysql
gem install dm-mysql-adapter

dm-postgres-adapter
# Debian / Ubuntu
sudo apt-get install libpq-dev
# RedHat / Fedora

sudo yum install postgresql-devel


# MacPorts
sudo port install postgresql91
# HomeBrew
sudo brew install postgresql
gem install dm-postgres-adapter

Install DataMapper
If you have RubyGems installed, open a Terminal and install a few things.
gem install data_mapper

This will install the following, most commonly used DataMapper gems.

dm-core
dm-aggregates
dm-constraints
dm-migrations
dm-transactions
dm-serializer
dm-timestamps
dm-validations
dm-types

Require it in your application


require 'rubygems'
require 'data_mapper' # requires all the gems listed above

Specify your database connection


You need to make sure to do this before you use your models, i.e. before you actually start
accessing the database.
# If you want the logs displayed you have to do this before the call to setup
DataMapper::Logger.new($stdout, :debug)

# An in-memory Sqlite3 connection:
DataMapper.setup(:default, 'sqlite::memory:')

# A Sqlite3 connection to a persistent database
DataMapper.setup(:default, 'sqlite:///path/to/project.db')

# A MySQL connection:
DataMapper.setup(:default, 'mysql://user:password@hostname/database')

# A Postgres connection:
DataMapper.setup(:default, 'postgres://user:password@hostname/database')

Note that currently you must set up a :default repository to work with DataMapper (and to be able to use additional differently named repositories). This might change in the future.
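
For example, a sketch of adding a second, differently named repository next to the required :default one (the :legacy name simply mirrors the legacy-database snippet earlier):

DataMapper.setup(:default, 'postgres://user:password@hostname/database')
DataMapper.setup(:legacy,  'mysql://user:password@hostname/old_database')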

Define your models


The Post model is going to need to be persistent, so we'll include DataMapper::Resource. The
convention with model names is to use the singular, not plural version...but that's just the
convention, you can do whatever you want.
class Post
  include DataMapper::Resource

  property :id,         Serial    # An auto-increment integer key
  property :title,      String    # A varchar type string, for short strings
  property :body,       Text      # A text block, for longer string data.
  property :created_at, DateTime  # A DateTime, for any date you might like.
end

class Comment
  include DataMapper::Resource

  property :id,        Serial
  property :posted_by, String
  property :email,     String
  property :url,       String
  property :body,      Text
end

class Category
  include DataMapper::Resource

  property :id,   Serial
  property :name, String
end

The above example is simplified, but you can also specify more options such as constraints for
your properties. DataMapper supports a lot of different property types natively, and more
through dm-types.
An important thing to note is that every model must have a key in order to be valid. If a model
has no key, there's no way to identify a resource and thus no way to update its persistent state
within the backend datastore. DataMapper will raise a DataMapper::IncompleteModelError
when trying to auto_migrate! a model that has no key declared.
Have a look at property to learn about the different ways of declaring keys for your models.

Associations
Ideally, these declarations should be done inside your class definition with the properties and
things, but for demonstration purposes, we will just re-open the classes.
One To Many

Posts can have comments, so we'll need to set up a simple one-to-many association between them:

class Post
  has n, :comments
end

class Comment
  belongs_to :post
end

Has and belongs to many

Categories can have many Posts and Posts can have many Categories, so we'll need a many-to-many relationship, commonly referred to as "has and belongs to many". We'll set up a quick model to wrap our join table between the two so that we can record a little bit of meta-data about when the post was categorized into a category.
class Categorization
  include DataMapper::Resource

  property :id,         Serial
  property :created_at, DateTime

  belongs_to :category
  belongs_to :post
end

# Now we re-open our Post and Categories classes to define associations
class Post
  has n, :categorizations
  has n, :categories, :through => :categorizations
end

class Category
  has n, :categorizations
  has n, :posts,          :through => :categorizations
end

Finalize Models
After declaring all of the models, you should finalize them:
DataMapper.finalize

This checks the models for validity and initializes all properties associated with relationships. If you use a web framework such as Merb or Rails, it is likely this will already be done for you. In case it isn't, be sure to call it at an appropriate time.
DataMapper allows the use of natural primary keys, composite primary keys and other complexities. Because of this, when a model is declared with a belongs_to relationship, the property to hold the foreign key cannot be initialized immediately. It can only be initialized when the parent model has also been declared. This is hard for DataMapper to determine, due to the dynamic nature of Ruby, so it is left up to developers to determine the appropriate time.
In general, you want to call finalize before your application starts accessing the models.

Set up your database tables


Relational Databases work with pre-defined tables. To be able to create the tables in the
underlying storage, you need to have dm-migrations loaded.
Note: If you've been following these instructions and did require 'data_mapper', you can safely skip the following require statement as it has already been done for you.

require 'dm-migrations'

Once dm-migrations is loaded, you can create the tables by issuing the following command:
DataMapper.auto_migrate!

This will issue the necessary CREATE statements (DROPing the table first, if it exists) to define
each storage according to their properties. After auto_migrate! has been run, the database
should be in a pristine state. All the tables will be empty and match the model definitions.
This wipes out existing data, so you could also do:
DataMapper.auto_upgrade!

This tries to make the schema match the model. It will CREATE new tables, and add columns to
existing tables. It won't change any existing columns though (say, to add a NOT NULL constraint)
and it doesn't drop any columns. Both these commands also can be used on an individual model
(e.g. Post.auto_migrate!)

Create your first resource


Using DataMapper to create a resource (a resource is an instance of a model) is simple:
# create makes the resource immediately
@post = Post.create(
  :title      => "My first DataMapper post",
  :body       => "A lot of text ...",
  :created_at => Time.now
)

# Or new gives you it back unsaved, for more operations
@post = Post.new(:title => ..., ...)
@post.save  # persist the resource

Both are equivalent. The first thing to notice is we didn't specify the auto-increment key. This is
because the data-store will provide that value for us, and should make sure it's unique, too. Also,
note that while the property is a DateTime, we can pass it a Time instance, and it will convert (or
typecast) the value for us, before it saves it to the data-store. Any properties which are not
specified in the hash will take their default values in the data-store.
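
A tiny sketch of that typecasting in action:

@post = Post.create(:title => 'Typecasting', :body => '...', :created_at => Time.now)
@post.created_at.class  # => DateTime, even though we passed in a Time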

Installation Issues
If you've followed the install instructions but run into problems, you can find some tips below.

Dependencies
First port of call if you're having issues with an installation is to make sure you have all the
dependencies installed. RubyGems should take care of this for you, but just in case, make sure
you have the following gems as well:

addressable
json_pure
RSpec - for running specs on DataMapper itself
YARD - for building documentation

Installing an adapter
You will also need to install the adapter for your platform:
gem install dm-mysql-adapter

The current database adapters are:

dm-mysql-adapter
dm-sqlite-adapter
dm-postgres-adapter
dm-oracle-adapter
dm-sqlserver-adapter

There are also many more database, and non-database, adapters. Have a look at the (probably
incomplete) list on the github wiki. Additionally, a quick github search might reveal some more.

Uninstalling all DataMapper gems


Should you ever have the need to uninstall DataMapper completely, Dan Kubb has prepared a bash command that does the trick. Have a look at this gist for a one-liner that gets rid of DataMapper completely.

Getting Help
If you still have issues, we suggest getting onto the mailing list or the IRC channel and asking
around. There are friendly people there to help you out.

Properties
A model's properties are not introspected from the fields in the data-store; in fact, the reverse happens. You declare the properties for a model inside its class definition, which is then used to generate the fields in the data-store.
This has a few advantages. First it means that a model's properties are documented in the model
itself, not a migration or XML file. If you've ever been annoyed at having to look in a schema
file to see the list of properties and types for a model, you'll find this particularly useful. There's
no need for a special annotate rake task either.
Second, it lets you limit access to properties using Ruby's access semantics. Properties can be
declared public, private or protected. They are public by default.
Finally, since DataMapper only cares about properties explicitly defined in your models,
DataMapper plays well with legacy data-stores and shares them easily with other applications.

Declaring Properties
Inside your class, call the property method for each property you want to add. The only two
required arguments are the name and type, everything else is optional.
class Post
  include DataMapper::Resource

  property :id,        Serial                     # primary serial key
  property :title,     String,  :required => true # Cannot be nil
  property :published, Boolean, :default => false # Default value for new records is false
end

Keys
Primary Keys

Primary keys are not automatically created for you, as they are with ActiveRecord. You MUST configure at least one key property on your data-store. More often than not, you'll want an auto-incrementing integer as a primary key, so DM has a shortcut:
property :id, Serial

Natural Keys

Anything can be a key. Just pass :key => true as an option during the property definition.
Most commonly, you'll see String as a natural key:
property :slug, String, :key => true  # any Type is available here

Natural Keys are protected against mass-assignment, so their setter= will need to be called individually if you're looking to set them.
Fair warning: Using Boolean, Discriminator, and the time-related types as keys may cause your DBA to hunt you down and "educate" you. DM will not be held responsible for any injuries or death that may result.
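
To make that concrete, here's a small sketch with a hypothetical Page model whose slug is a natural key:

class Page
  include DataMapper::Resource

  property :slug,  String, :key => true  # natural key
  property :title, String
end

page = Page.new(:title => 'About us')
page.slug = 'about-us'   # natural keys are set through their own setter, not mass-assignment
page.save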
Composite Keys

You can have more than one property in the primary key:
class Post
  include DataMapper::Resource

  property :old_id, Integer, :key => true
  property :new_id, Integer, :key => true
end

Setting default values


Defaults can be set via the :default key for a property. They can be static values, such as 12 or "Hello", but DataMapper also offers the ability to use a Proc to set the default value. The property is set to whatever the Proc returns, and the Proc is called the first time the property is used without having first been set a value. The Proc itself receives two arguments: the resource the property is being set on, and the property itself.
class Image
  include DataMapper::Resource

  property :id,     Serial
  property :path,   FilePath, :required => true
  property :md5sum, String,   :length => 32,
    :default => lambda { |r, p| Digest::MD5.hexdigest(r.path.read) if r.path }
end

When creating the resource, or the first time the md5sum property is accessed, it will be set to the
hex digest of the file referred to by path.
Fair Warning: A property default must not refer to the value of the property it is about to set, or
there will be an infinite loop.

Setting default options


If you find that you're setting the same default options over and over again, you can specify them
once and have them applied to all properties you add to your models.
# set all String properties to have a default length of 255
DataMapper::Property::String.length(255)

# set all Boolean properties to not allow nil (force true or false)
DataMapper::Property::Boolean.allow_nil(false)

# set all properties to be required by default
DataMapper::Property.required(true)

# turn off auto-validation for all properties by default
DataMapper::Property.auto_validation(false)

# set all mutator methods to be private by default
DataMapper::Property.writer(:private)

You can of course still override these defaults by specifying any option explicitly when defining
a specific property.
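
For example (a sketch building on the defaults set above), an individual property simply passes the option it wants to differ on:

class Comment
  include DataMapper::Resource

  property :id,   Serial
  property :body, Text, :required => false  # overrides the class-wide required(true) default
end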

Lazy Loading
Properties can be configured to be lazy loaded. A lazily loaded property is not requested from the data-store by default. Instead, it is only loaded when its accessor is called for the first time. This means you can stop default queries from being greedy, a particular problem with text fields. Text fields are lazily loaded by default, which you can override if you need to.
class Post
  include DataMapper::Resource

  property :id,    Serial
  property :title, String
  property :body,  Text                  # Is lazily loaded by default
  property :notes, Text, :lazy => false  # Isn't lazy, will load by default
end
Lazy Loading can also be done via contexts, which let you group lazily loaded properties
together, so that when one is fetched, all the associated ones will be as well, cutting down on
trips to the data-store.
class Post
  include DataMapper::Resource

  property :id,       Serial
  property :title,    String
  property :subtitle, String,  :lazy => [ :show ]
  property :body,     Text,    :lazy => [ :show ]
  property :views,    Integer, :lazy => [ :show ]
  property :summary,  Text
end

In this example, only the title (and the id, of course) will be loaded from the data-store on a Post.all. But as soon as the value for subtitle, body or views is accessed, all three will be loaded at once, since they're members of the :show group. The summary property, on the other hand, will only be fetched when it is asked for.

Available Types
DM-Core supports the following 'primitive' data-types.

Boolean
String (default length limit of 50 characters)
Text (defaults to lazy loading and length limit of 65535 characters)
Float
Integer
Decimal
DateTime, Date, Time
Object (marshalled)
Discriminator
Binary (inherits default length limit of 50 characters from String)

If you include DM-Types, the following data-types are supported:


You are encouraged to have a quick glance at the implementation of the various properties
below. It's really easy to create a DataMapper property that encapsulates data that is suitable for
use in high level application code while at the same time being able to be persisted in all kinds of
datastores.

APIKey
BCryptHash

CommaSeparatedList
Csv
Enum
EpochTime
FilePath
Flag
IPAddress
Json
ParanoidBoolean
ParanoidDateTime
Regexp
Slug
URI
UUID
Yaml

Limiting Access
Access for properties is defined using the same semantics as Ruby. Accessors are public by
default, but you can declare them as private or protected if you need to. You can set access using
the :accessor option. For demonstration, we'll reopen our Post class.
class Post
  property :title, String, :accessor => :private   # Both reader and writer are private
  property :body,  Text,   :accessor => :protected # Both reader and writer are protected
end

You also have more fine-grained control over how you declare access. You can, for example, have a public reader and private writer by using the :writer and :reader options. (Remember, the default is public.)
class Post
  property :title, String, :writer => :private   # Only writer is private
  property :tags,  String, :reader => :protected # Only reader is protected
end

Over-riding Accessors
When a property has declared accessors for getting and setting, its values are added to the model. Just like using attr_accessor, you can override these with your own custom accessors. It's a simple matter of adding an accessor after the property declaration. Reopening the Post class ...
class Post
  property :slug, String

  def slug=(new_slug)
    raise ArgumentError if new_slug != 'DataMapper is Awesome'
    super  # use original method instead of accessing @ivar directly
  end
end

Create, Save, Update and Destroy


This page describes the basic methods to use when creating, saving, updating and destroying resources with DataMapper. Some of DataMapper's concepts might be confusing to users coming from ActiveRecord, for example. For this reason, we start with a little background on the usage of bang vs. non-bang methods in DataMapper, followed by ways of manipulating the rules DataMapper abides by when it comes to raising exceptions in case some persistence operation goes wrong.

Bang(!) or no bang methods


This page is about creating, saving, updating and destroying resources with DataMapper. The
main methods to achieve these tasks are #create, #save, #update and #destroy. All of these
methods have bang method equivalents which operate in a slightly different manner.
DataMapper follows the general Ruby idiom when it comes to using bang or non-bang methods. A detailed explanation of this idiom can be found on David A. Black's weblog.
When you call a non-bang method like #save, DataMapper will invoke all callbacks defined for resources of the model. This means that it will have to load all affected resources into memory in order to be able to execute the callbacks on each one of them. This can be considered the safe version, without the bang(!). While it sometimes may not be the best way to achieve a particular goal (bad performance), it's as safe as it gets. In fact, if dm-validations are required and active, calling the non-bang version of any of these methods will make sure that all validations are run too.
Sometimes though, you either don't need the extra safety you get from dm-validations, or you don't want any callbacks to be invoked at all. In situations like this, you can use the bang(!) versions of the respective methods. You will probably find yourself using these unsafe methods when performing internal manipulation of resources, as opposed to, say, persisting attribute values entered by users (in which case you'd most likely use the safe versions). If you call #save! instead of #save, no callbacks and no validations will be run. DataMapper just assumes that you know what you're doing. This can also have a severe impact on the performance of some operations. If you're calling #save!, #update! or #destroy! on a (large) DataMapper::Collection, this will result in much better performance than calling the safe non-bang counterparts, because DataMapper won't load the collection into memory: it won't execute any resource-level callbacks or validations.

While the above examples mostly used #save and #save! to explain the different behavior, the
same rules apply for #create!, #save!, #update! and #destroy!. The safe non-bang methods
will always execute all callbacks and validations, and the unsafe bang(!) methods never will.
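
As a rough sketch of the difference on a collection (using the Zoo model from the examples below):

zoos = Zoo.all(:open => false)

zoos.update(:open => true)   # safe: loads the resources and runs callbacks/validations on each
zoos.update!(:open => true)  # unsafe: issues one mass update, skipping callbacks and validations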

Raising an exception when save fails


By default, DataMapper returns true or false for all operations manipulating the persisted state of a resource (#create, #save, #update and #destroy).
If you want it to raise exceptions instead, you can instruct DataMapper to do so either globally, on a per-model, or on a per-instance basis.
DataMapper::Model.raise_on_save_failure = true  # globally across all models
Zoo.raise_on_save_failure = true                # per-model
zoo.raise_on_save_failure = true                # per-instance

If DataMapper is told to raise_on_save_failure, it will raise the following when any save operation fails:
DataMapper::SaveFailureError: Zoo#save returned false, Zoo was not saved

You can then go ahead and rescue from this error.
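
For example, a minimal sketch of rescuing it (assuming raise_on_save_failure has been enabled as above):

begin
  zoo.save
rescue DataMapper::SaveFailureError => e
  puts e.message  # => "Zoo#save returned false, Zoo was not saved"
end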

The example Zoo


To illustrate the various methods used in manipulating records, we'll create, save, update and
destroy a record.
class Zoo
  include DataMapper::Resource

  property :id,          Serial
  property :name,        String
  property :description, Text
  property :inception,   DateTime
  property :open,        Boolean, :default => false
end

Create
If you want to create a new resource with some given attributes and then save it all in one go,
you can use the #create method.
zoo = Zoo.create(:name => 'The Glue Factory', :inception => Time.now)

If the creation was successful, #create will return the newly created DataMapper::Resource. If it failed, it will return a new resource that is initialized with the given attributes and possible default values declared for that resource, but that's not yet saved. To find out whether the creation was successful or not, you can call #saved? on the returned resource. It will return true if the resource was successfully persisted, or false otherwise.
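
For instance, a small sketch of checking the outcome with #saved?:

zoo = Zoo.create(:name => 'The Glue Factory')

if zoo.saved?
  # the zoo was persisted successfully
else
  # the zoo is only instantiated; inspect zoo.errors if dm-validations is loaded
end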
If you want to either find the first resource matching some given criteria or just create that
resource if it can't be found, you can use #first_or_create.
zoo = Zoo.first_or_create(:name => 'The Glue Factory')

This will first try to find a Zoo instance with the given name, and if it fails to do so, it will return
a newly created Zoo with that name.
If the criteria you want to use to query for the resource differ from the attributes you need for creating a new resource, you can pass the attributes for creating a new resource as the second parameter to #first_or_create, also in the form of a Hash.
zoo = Zoo.first_or_create({ :name => 'The Glue Factory' }, { :inception => Time.now })

This will search for a Zoo named 'The Glue Factory' and, if it can't find one, it will return a new Zoo instance with its name set to 'The Glue Factory' and its inception set to the value of Time.now at the time of execution. You can see that for creating a new resource, both hash arguments will be merged, so you don't need to specify the query criteria again in the second argument Hash that lists the attributes for creating a new resource. However, if you really need to create the new resource with different values from those used to query for it, the second Hash argument will overwrite the first one.
zoo = Zoo.first_or_create({ :name => 'The Glue Factory' }, {
  :name      => 'Brooklyn Zoo',
  :inception => Time.now
})

This will search for a Zoo named 'The Glue Factory' but if it fails to find one, it will return a Zoo
instance with its name set to 'Brooklyn Zoo' and its inception set to the value of Time.now at
execution time.

Save
We can also create a new instance of the model, update its properties and then save it to the data
store. The call to #save will return true if saving succeeds, or false in case something went
wrong.
zoo = Zoo.new
zoo.attributes = { :name => 'The Glue Factory', :inception => Time.now }
zoo.save

In this example we've updated the attributes using the #attributes= method, but there are
multiple ways of setting the values of a model's properties.
zoo = Zoo.new(:name => 'Awesome Town Zoo')                  # Pass in a hash to the new method
zoo.name = 'Dodgy Town Zoo'                                 # Set individual property
zoo.attributes = { :name => 'No Fun Zoo', :open => false }  # Set multiple properties at once

Just like #create has an accompanying #first_or_create method, #new has its
#first_or_new counterpart as well. The only difference with #first_or_new is that it returns a
new unsaved resource in case it couldn't find one for the given query criteria. Apart from that,
#first_or_new behaves just like #first_or_create and accepts the same parameters. For a
detailed explanation of the arguments these two methods accept, have a look at the explanation
of #first_or_create in the above section on Create.
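
A quick sketch of #first_or_new (Zoo model again; #new? is used here just to show that nothing has hit the data-store yet):

zoo = Zoo.first_or_new(:name => 'Shiny New Zoo')
zoo.new?  # => true if no matching zoo was found
zoo.save  # persist it explicitly when you're ready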
It is important to note that #save will save the complete loaded object graph when called. This
means that calling #save on a resource that has relationships of any kind (established via
belongs_to or has) will also save those related resources, if they are loaded at the time #save is
being called. Related resources are loaded if they've been accessed either for read or for write
purposes, prior to #save being called.
NOTE the following behavior of #save when dm-validations are in effect!
The symptom that people are seeing is that their records fail to save (i.e. #save returns false)
while calling #valid? returns true. This is caused when an object has a parent or child that fails
validation and thus refuses to save, thereby also blocking the object which #save was called on
from saving.

Update
You can also update a model's properties and save it with one method call. #update will return
true if the record saves and false if the save fails, exactly like the #save method.
zoo.update(:name => 'Funky Town Municipal Zoo')

One thing to note is that the #update method refuses to update a resource in case the resource
itself is #dirty? at this time.
zoo.name = 'Brooklyn Zoo'
zoo.update(:name => 'Funky Town Municipal Zoo')
# => DataMapper::UpdateConflictError: Zoo#update cannot be called on a dirty resource

You can also use #update to do mass updates on a model. In the previous examples we've used DataMapper::Resource#update to update a single resource. We can also use DataMapper::Model#update, which is available as a class method on our models. Calling it will update all instances of the model with the same values.
Zoo.update(:name => 'Funky Town Municipal Zoo')

This will set all Zoo instances' name property to 'Funky Town Municipal Zoo'. Internally it does
the equivalent of:
Zoo.all.update(:name => 'Funky Town Municipal Zoo')

This shows that actually, #update is also available on any DataMapper::Collection and
performs a mass update on that collection when being called. You typically retrieve a
DataMapper::Collection from either a call to SomeModel.all or a call to a relationship
accessor for any 1:n or m:n relationship.

Destroy
To destroy a record, you simply call its #destroy method. It will return true or false
depending if the record is successfully deleted or not. Here is an example of finding an existing
record then destroying it.
zoo = Zoo.get(5)
zoo.destroy  # => true

You can also use #destroy to do mass deletes on a model. In the previous examples we've used
DataMapper::Resource#destroy to destroy a single resource. We can also use
DataMapper::Model#destroy which is available as a class method on our models. Calling it
will remove all instances of that model from the repository.
Zoo.destroy

This will delete all Zoo instances from the repository. Internally it does the equivalent of:
Zoo.all.destroy

This shows that actually, #destroy is also available on any DataMapper::Collection and
performs a mass delete on that collection when being called. You typically retrieve a
DataMapper::Collection from either a call to SomeModel.all or a call to a relationship
accessor for any 1:n or m:n relationship.

Talking to your datastore directly


Sometimes you may find that you need to execute a non-query task directly against your
database. For example, performing bulk inserts might be such a situation.

The following snippet shows how to insert multiple records with only one statement on MySQL.
It may not work with other databases but it should give you an idea of how to execute non-query
statements against your own database of choice.
adapter = DataMapper.repository(:default).adapter

# Insert multiple records with one statement (MySQL)
adapter.execute("INSERT INTO zoos (id, name) VALUES (1, 'Lion'), (2, 'Elephant')")

# The interpolated array condition syntax works as well:
adapter.execute('INSERT INTO zoos (id, name) VALUES (?, ?), (?, ?)', 1, 'Lion', 2, 'Elephant')

Validations
DataMapper validations allow you to vet data prior to saving to a database. To make validations
available to your app you simply 'require "dm-validations"' in your application. With
DataMapper there are two different ways you can validate your classes' properties.

Manual Validation
Much like a certain other Ruby ORM we can call validation methods directly by passing them a
property name (or multiple property names) to validate against.
validates_length_of :name
validates_length_of :name, :description

These are the currently available manual validations. Please refer to the API docs for more
detailed information.

validates_absence_of
validates_acceptance_of
validates_with_block
validates_confirmation_of
validates_format_of
validates_length_of
validates_with_method
validates_numericality_of
validates_primitive_type_of
validates_presence_of
validates_uniqueness_of
validates_within

Auto-Validations

By adding triggers to your property definitions you can both define and validate your classes' properties all in one fell swoop.
Triggers that generate validator creation:
# implicitly creates a validates_presence_of
:required => true  # cannot be nil

# implicitly creates a (scoped) validates_uniqueness_of
# a symbol value (or an array of symbols) must denote
# one or more of the resource's properties and will
# be passed on as the :scope option to validates_uniqueness
:unique => true            # must be unique
:unique => :some_scope     # must be unique within some_scope
:unique => [:some, :scope] # must be unique within [:some, :scope]

# implicitly creates a validates_length_of
:length => 0..20  # must be between 0 and 20 characters in length
:length => 1..20  # must be between 1 and 20 characters in length

# implicitly creates a validates_format_of
:format => :email_address  # predefined regex
:format => :url            # predefined regex
:format => /\w+_\w+/
:format => lambda { |str| str }
:format => proc { |str| str }
:format => Proc.new { |str| str }

Here we see an example of a class with both a manual and auto-validation declared:
require 'dm-validations'

class Account
  include DataMapper::Resource

  property :name, String

  # good old fashioned manual validation
  validates_length_of :name, :max => 20

  property :content, Text, :length => 100..500
end

Validating
DataMapper validations, when included, alter the default save/create/update process for a model.
You may manually validate a resource using the valid? method, which will return true if the
resource is valid, and false if it is invalid.
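
A small sketch using the Account model from the example above:

account = Account.new(:name => 'x' * 30)

if account.valid?
  # all validations in the :default context passed
else
  account.errors.each { |e| puts e }  # here the :max => 20 length validation will have failed
end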

Working with Validation Errors

If your validators find errors in your model, they will populate the
Validate::ValidationErrors object that is available through each of your models via calls to
your model's errors method.
my_account = Account.new(:name => 'Jose')

if my_account.save
  # my_account is valid and has been saved
else
  my_account.errors.each do |e|
    puts e
  end
end

Error Messages
The error messages for validations provided by DataMapper are generally clear, and explain
exactly what has gone wrong. If they're not what you want though, they can be changed. This is
done via providing a :message in the options hash, for example:
validates_uniqueness_of :title, :scope => :section_id,
:message => "There's already a page of that title in this section"

This example also demonstrates the use of the :scope option to only check the property's uniqueness within a narrow scope. This object won't be valid if another object with the same section_id already has that title.
Something similar can be done for auto-validations, too, via setting :messages in the property
options.
property :email, String, :required => true, :unique => true,
  :format   => :email_address,
  :messages => {
    :presence  => "We need your email address.",
    :is_unique => "We already have that email.",
    :format    => "Doesn't look like an email address to me ..."
  }

To set an error message on an arbitrary field of the model, DataMapper provides the add
command.
@resource.errors.add(:title, "Doesn't mention DataMapper")

This is probably of most use in custom validations, so ...

Custom Validations
DataMapper provides a number of validations for various common situations such as checking
for the length or presence of strings, or that a number falls in a particular range. Often this is

enough, especially when validations are combined together to check a field for a number of
properties. For the situations where it isn't, DataMapper provides a couple of methods:
validates_with_block and validates_with_method. They're very similar in operation, with
one accepting a block as the argument and the other taking a symbol representing a method
name.
The method or block performs the validation tests and then should return true if the resource is valid or false if it is invalid. If the resource isn't valid, instead of just returning false, an array containing false and an error message, such as [ false, 'FAIL!' ], can be returned. This will add the message to the errors on the resource.
class WikiPage
  include DataMapper::Resource

  # properties ...

  validates_with_method :check_citations

  # checks that we've included at least 5 citations for our wikipage.
  def check_citations
    # in a 'real' example, the number of citations might be a property
    # set by a before :valid? hook.
    num = count_citations(self.body)
    if num > 4
      return true
    else
      [ false, "You must have at least #{5 - num} more citations for this article" ]
    end
  end
end

Instead of setting an error on the whole resource, you can set an error on an individual property by passing this as the first argument to validates_with_block or validates_with_method. To use the previous example, replace the validates_with_method call with:

validates_with_method :body, :method => :check_citations

This would result in the citations error message being added to the error messages for the body,
which might improve how it is presented to the user.

Conditional Validations
Validations don't always have to be run. For example, an issue tracking system designed for git
integration might require a commit identifier for the fix--but only for a ticket which is being set
to 'complete'. A new, open or invalid ticket, of course, doesn't necessarily have one. To cope
with this situation and others like it, DataMapper offers conditional validation, using the :if and
:unless clauses on a validation.

:if and :unless take as their value a symbol representing a method name or a Proc. The associated validation will run only if (or unless) the method or Proc returns something which evaluates to true. The chosen method should take no arguments, whilst the Proc will be called with a single argument, the resource being validated.
class Ticket
  include DataMapper::Resource

  property :id,          Serial
  property :title,       String, :required => true
  property :description, Text
  property :commit,      String
  property :status,      Enum[ :new, :open, :invalid, :complete ]

  validates_presence_of :commit, :if => lambda { |t| t.status == :complete }
end

The autovalidation that requires the title to be present will always run, but the
validates_presence_of on the commit hash will only run if the status is :complete. Another
example might be a change summary that is only required if the resource is already there--'initial
commit' is hardly an enlightening message.
validates_length_of :change_summary, :min => 10, :unless => :new?

Sometimes a simple on and off switch is not enough, and so ...

Contextual Validations
DataMapper Validations also provide a means of grouping your validations into contexts. This
enables you to run different sets of validations under different contexts. All validations are
performed in a context, even the auto-validations. This context is the :default context. Unless
you specify otherwise, any validations added will be added to the :default context and the
valid? method checks all the validations in this context.
One example might be differing standards for saving a draft version of an article, compared with
the full and ready to publish article. A published article has a title, a body of over 1000
characters, and a sidebar picture. A draft article just needs a title and some kind of body. The
length and the sidebar picture we can supply later. There's also a published property, which is
used as part of queries to select articles for public display.
To set a context on a validation, we use the :when option. It might also be desirable to set
:auto_validation => false on the properties concerned, especially if we're messing with
default validations.
class Article
  include DataMapper::Resource

  property :id,          Serial
  property :title,       String
  property :picture_url, String
  property :body,        Text
  property :published,   Boolean

  # validations
  validates_presence_of :title,       :when => [ :draft, :publish ]
  validates_presence_of :picture_url, :when => [ :publish ]
  validates_presence_of :body,        :when => [ :draft, :publish ]
  validates_length_of   :body,        :when => [ :publish ], :minimum => 1000
  validates_absence_of  :published,   :when => [ :draft ]
end

# and now some results
@article = Article.new

@article.valid?(:draft)
# => false. We have no title, for a start.

@article.valid_for_publish?
# => false. We have no title, amongst many other issues.
# valid_for_publish? is provided shorthand for valid?(:publish)

# now set some properties
@article.title = 'DataMapper is awesome because ...'
@article.body  = 'Well, where to begin ...'

@article.valid?(:draft)
# => true. We have a title, and a little body

@article.valid?(:publish)
# => false. Our body isn't long enough yet.

# save our article in the :draft context
@article.save(:draft)
# => true

# set some more properties
@article.picture_url = 'http://www.greatpictures.com/flower.jpg'
@article.body        = an_essay_about_why_datamapper_rocks

@article.valid?(:draft)
# => true. Nothing wrong still

@article.valid?(:publish)
# => true. We have everything we need for a full article to be published!

@article.published = true

@article.save(:draft)
# => false. We set the published to true, so we can't save this as a draft.
# As long as our drafting method always saves with the :draft context, we won't ever
# accidentally save a half finished draft that the public will see.

@article.save(:publish)
# => true
# we can save it just fine as a published article though.

That was a long example, but it shows how to set up validations in differing contexts and also
how to save in a particular context. One thing to be careful of when saving in a context is to
make sure that any database level constraints, such as a NOT NULL column definition in a
database, are checked in that context, or a data-store error may ensue.

Setting Properties Before Validation


It is sometimes necessary to set properties before a resource is saved or validated. Perhaps a
required property can have a default value set from other properties or derived from the
environment. To set these properties, a before :valid? hook should be used.
class Article
  include DataMapper::Resource

  property :id,        Serial
  property :title,     String, :required => true
  property :permalink, String, :required => true

  before :valid?, :set_permalink

  # our callback needs to accept the context used in the validation,
  # even if it ignores it, as #save calls #valid? with a context.
  def set_permalink(context = :default)
    self.permalink = title.gsub(/\s+/, '-')
  end
end

Be careful not to save your resource in these kinds of methods, or your application will spin off into an infinite loop, trying to save your object while saving your object.

Finding Records
The finder methods for DataMapper objects are defined in DataMapper::Repository. They include #get, #all, #first and #last.

Finder Methods
DataMapper has methods which allow you to grab a single record by key, the first match to a set
of conditions, or a collection of records matching conditions.
zoo  = Zoo.get(1)                     # get the zoo with primary key of 1.
zoo  = Zoo.get!(1)                    # Or get! if you want an ObjectNotFoundError on failure
zoo  = Zoo.get('DFW')                 # wow, support for natural primary keys
zoo  = Zoo.get('Metro', 'DFW')        # more wow, composite key look-up
zoo  = Zoo.first(:name => 'Metro')    # first matching record with the name 'Metro'
zoo  = Zoo.last(:name => 'Metro')     # last matching record with the name 'Metro'
zoos = Zoo.all                        # all zoos
zoos = Zoo.all(:open => true)         # all zoos that are open
zoos = Zoo.all(:opened_on => (s..e))  # all zoos that opened on a date in the date-range

Scopes and Chaining

Calls to #all can be chained together to further build a query to the data-store:

all_zoos      = Zoo.all
open_zoos     = all_zoos.all(:open => true)
big_open_zoos = open_zoos.all(:animal_count => 1000)

As a direct consequence, you can define scopes without any extra work in your model.
class Zoo
  # all the keys and property setup here

  def self.open
    all(:open => true)
  end

  def self.big
    all(:animal_count => 1000)
  end
end

big_open_zoos = Zoo.big.open

Scopes like this can even take arguments. Do anything in them, just make sure they return a Query
of some kind.
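For example, here is a minimal sketch of a scope that takes an argument; the :animal_count property and the bigger_than name are illustrative, and the only requirement is that the method returns a Query so it stays chainable:

class Zoo
  # keys and properties here

  def self.bigger_than(count)
    all(:animal_count.gte => count)
  end
end

huge_open_zoos = Zoo.bigger_than(1000).all(:open => true)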

Conditions
Rather than defining conditions using SQL fragments, we can actually specify conditions using a
hash.
The examples above are pretty simple, but you might be wondering how we can specify
conditions beyond equality without resorting to SQL. Well, thanks to some clever additions to
the Symbol class, it's easy!
exhibitions = Exhibition.all(:run_time.gt => 2, :run_time.lt => 5)
# => SQL conditions: 'run_time > 2 AND run_time < 5'

Valid symbol operators for the conditions are:


gt    # greater than
lt    # less than
gte   # greater than or equal
lte   # less than or equal
not   # not equal
eql   # equal
like  # like
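As a small illustration (the Zoo properties here are hypothetical), several of these operators can be combined in a single call:

zoos = Zoo.all(:name.like => '%Park%', :state.not => 'IL', :animal_count.gte => 100)
# => roughly: "name" LIKE '%Park%' AND "state" <> 'IL' AND "animal_count" >= 100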

Nested Conditions
DataMapper allows you to create and search for any complex object graph simply by providing a
nested hash of conditions.
Possible keys are all property and relationship names (as symbols or strings) that are established
in the model the current nesting level points to. The available top-level keys depend on the model
the conditions hash is passed to. We'll see below how to change the nesting level and thus the
model the property and relationship keys are scoped to.
For property name keys, possible values typically are simple objects like strings, numbers, dates
or booleans. Using properties as keys doesn't add another nesting level.
For relationship name keys, possible values are either a hash (if the relationship points to a single
resource) or an array of hashes (if the relationship points to many resources). Adding a
relationship name as key adds another nesting level scoped to the Model the relationship is
pointing to. Inside this new level, the available keys are the property and relationship names of
the model that the relationship points to. This is what we meant by "the Model the current
nesting level points to".
The following example shows a typical Customer - Order domain model and illustrates how
nested conditions can be used to both create and search for specific resources.
class Customer
  include DataMapper::Resource

  property :id,   Serial
  property :name, String, :required => true, :length => 1..100

  has n, :orders
  has n, :items, :through => :orders
end

class Order
  include DataMapper::Resource

  property :id,        Serial
  property :reference, String, :required => true, :length => 1..20

  belongs_to :customer

  has n, :order_lines
  has n, :items, :through => :order_lines
end

class OrderLine
  include DataMapper::Resource

  property :id,         Serial
  property :quantity,   Integer, :required => true, :default => 1, :min => 1
  property :unit_price, Decimal, :required => true, :default => lambda { |r, p| r.item.unit_price }

  belongs_to :order
  belongs_to :item
end

class Item
  include DataMapper::Resource

  property :id,         Serial
  property :sku,        String,  :required => true, :length => 1..20
  property :unit_price, Decimal, :required => true, :min => 0

  has n, :order_lines
end

# A hash specifying a customer with one order
customer = {
  :name   => 'Dan Kubb',
  :orders => [
    {
      :reference   => 'TEST1234',
      :order_lines => [
        {
          :item => {
            :sku        => 'BLUEWIDGET1',
            :unit_price => 1.00,
          },
        },
      ],
    },
  ]
}

# Create the Customer with the nested options hash
Customer.create(customer)

# The options to create can also be used to retrieve the same object
p Customer.all(customer)

# QueryPaths can be used to construct joins in a very declarative manner.
#
# Starting from a root model, you can call any relationship by its name.
# The returned object again responds to all property and relationship names
# that are defined in the relationship's target model.
#
# This means that you can walk the chain of available relationships, and then
# match against a property at the end of that chain. The object returned by
# the last call to a property name also responds to all the comparison
# operators available in traditional queries. This makes for some powerful
# join construction!
#
Customer.all(Customer.orders.order_lines.item.sku.like => "%BLUE%")
# => [#<Customer @id=1 @name="Dan Kubb">]

Order
To specify the order in which your results are to be sorted, use:
@zoos_by_tiger_count = Zoo.all(:order => [ :tiger_count.desc ])
# in SQL => SELECT * FROM "zoos" ORDER BY "tiger_count" DESC

Available order vectors are:


asc   # sorting ascending
desc  # sorting descending

Once you have the query, the order can be modified too. Just call reverse:
@least_tigers_first = @zoos_by_tiger_count.reverse
# in SQL => SELECT * FROM "zoos" ORDER BY "tiger_count" ASC

Ranges
If you have guaranteed the order of a set of results, you might choose to only use the first ten
results, like this.
@zoos_by_tiger_count = Zoo.all(:limit => 10, :order => [ :tiger_count.desc ])

Or maybe you wanted the fifth set of ten results.


@zoos_by_tiger_count = Zoo.all(:offset => 40, :limit => 10, :order => [ :tiger_count.desc ])
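Put another way, page-style access falls out of :offset and :limit; here is a small sketch where page and page_size are illustrative names (page 5 with 10 results per page corresponds to the offset of 40 used above):

page, page_size = 5, 10

@zoos_by_tiger_count = Zoo.all(
  :offset => (page - 1) * page_size,
  :limit  => page_size,
  :order  => [ :tiger_count.desc ]
)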

Combining Queries
Sometimes, the simple queries DataMapper allows you to specify with the hash interface to #all
just won't cut it. This might be because you want to specify an OR condition, though that's just
one possibility. To accomplish more complex queries, DataMapper allows queries (or more
accurately, Collections) to be combined using set operators.

# Find all Zoos in Illinois, or those with five or more tigers
Zoo.all(:state => 'IL') + Zoo.all(:tiger_count.gte => 5)
# in SQL => SELECT * FROM "zoos" WHERE ("state" = 'IL' OR "tiger_count" >= 5)

# It also works with the union operator
Zoo.all(:state => 'IL') | Zoo.all(:tiger_count.gte => 5)
# in SQL => SELECT * FROM "zoos" WHERE ("state" = 'IL' OR "tiger_count" >= 5)

# Intersection produces an AND query
Zoo.all(:state => 'IL') & Zoo.all(:tiger_count.gte => 5)
# in SQL => SELECT * FROM "zoos" WHERE ("state" = 'IL' AND "tiger_count" >= 5)

# Subtraction produces a NOT query
Zoo.all(:state => 'IL') - Zoo.all(:tiger_count.gte => 5)
# in SQL => SELECT * FROM "zoos" WHERE ("state" = 'IL' AND NOT("tiger_count" >= 5))

Of course, the latter two queries could be achieved using the standard symbol operators. Set
operators work on any Collection though, and so Zoo.all(:state => 'IL') could just as
easily be replaced with Zoo.open.big or any other method which returns a collection.
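For example, reusing the Zoo.open and Zoo.big scopes defined earlier:

Zoo.open.big | Zoo.all(:state => 'IL')     # big open zoos, or any zoo in Illinois
Zoo.open - Zoo.all(:tiger_count.gte => 5)  # open zoos that do not have five or more tigers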

Projecting only specific properties


If you don't want to select all of your model's properties but only a subset of them, you can pass
:fields => [:desired, :property, :names] in your queries.

# Will return a mutable collection of zoos
Zoo.all(:fields => [:id, :name])

# Will return an immutable collection of zoos.
# The collection is immutable because we haven't
# projected the primary key of the model.
# DataMapper will raise DataMapper::ImmutableError
# when trying to modify any resource inside the
# returned collection.
Zoo.all(:fields => [:name])

Compatibility
DataMapper supports other conditions syntaxes as well:

zoos = Zoo.all(:conditions => { :id => 34 })

# You can use this syntax to call native storage engine functions
zoos = Zoo.all(:conditions => [ 'id = ?', 34 ])

# even mix and match
zoos = Zoo.all(:conditions => { :id => 34 }, :name.like => '%foo%')

Talking directly to your data-store


Sometimes you may find that you need to tweak a query manually.
zoos = repository(:default).adapter.select('SELECT name, open FROM zoos WHERE open = 1')
# Note that this will not return Zoo objects, rather the raw data straight from the database

zoos will be full of Struct objects with name and open attributes, rather than instances of the
Zoo class. They'll also be read-only. You can still use the interpolated array condition syntax as
well:

zoos = repository(:default).adapter.select('SELECT name, open FROM zoos WHERE name = ?', 'Awesome Zoo')
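Since each returned row behaves like a Struct, the selected columns are available as plain readers; a small usage sketch:

zoos.each do |zoo|
  puts "#{zoo.name} (open: #{zoo.open})"
end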

Grouping
DataMapper automatically groups by all selected columns in order to return consistent results
across various datastores. If you need to group by some columns explicitly, you can use the
:fields option combined with the :unique option.

class Person
  include DataMapper::Resource

  property :id,  Serial
  property :job, String
end

Person.auto_migrate!

# Note that if you don't include the primary key, you will need to
# specify an explicit order vector, because DM will default to the
# primary key if it's not told otherwise (at least currently).
#
# PostgreSQL will present this rather informative error message when
# you leave out the order vector in the query below:
#
#   column "people.id" must appear in the GROUP BY clause
#   or be used in an aggregate function
#
# To not do any ordering, you would need to provide :order => nil
#
Person.all(:fields => [:job], :unique => true, :order => [:job.asc])
# ...
# SELECT "job" FROM "people" GROUP BY "job" ORDER BY "job"

Note that if you don't include the primary key in the selected columns, you will not be able to
modify the returned resources because DataMapper cannot know how to persist them.
DataMapper will raise DataMapper::ImmutableError if you try to do so anyway.

If a group by isn't appropriate and you're looking for a select distinct instead, you need to drop
down to talking to your datastore directly, as shown in the section above.
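For example, a quick sketch of that approach using the adapter's #select shown earlier (the table and column names are illustrative):

jobs = repository(:default).adapter.select('SELECT DISTINCT job FROM people')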

Aggregate functions
For the following to work, you need to have dm-aggregates required.
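For instance, at the top of your program:

require 'dm-core'
require 'dm-aggregates'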

Counting
Friend.count                            # returns count of all friends
Friend.count(:age.gt => 18)             # returns count of all friends older than 18
Friend.count(:conditions => [ 'gender = ?', 'female' ])            # returns count of all your female friends
Friend.count(:address)                  # returns count of all friends with an address (NULL values are not included)
Friend.count(:address, :age.gt => 18)   # returns count of all friends with an address that are older than 18
Friend.count(:address, :conditions => [ 'gender = ?', 'female' ])  # returns count of all your female friends with an address

Minimum and Maximum


# Get the lowest value of a property
Friend.min(:age)                                             # returns the age of the youngest friend
Friend.min(:age, :conditions => [ 'gender = ?', 'female' ])  # returns the age of the youngest female friend

# Get the highest value of a property
Friend.max(:age)                                             # returns the age of the oldest friend
Friend.max(:age, :conditions => [ 'gender = ?', 'female' ])  # returns the age of the oldest female friend

Average and Sum


# Get the average value of a property
Friend.avg(:age)                                             # returns the average age of friends
Friend.avg(:age, :conditions => [ 'gender = ?', 'female' ])  # returns the average age of the female friends

# Get the total value of a property
Friend.sum(:age)                                             # returns total age of all friends
Friend.sum(:age, :conditions => [ 'gender = ?', 'female' ])  # returns the total age of all female friends

Multiple aggregates
sum, count = Friend.aggregate(:age.sum, :all.count)  # returns the sum of all ages and the count of all friends

Aggregates with order-by


Friend.aggregate(:city, :all.count)  # returns the city names and the number of friends living in each city
# e.g. [['Hamburg', 3], ['New York', 4], ['Rome', 0], ... ]

Associations
Associations are a way of declaring relationships between models, for example a blog Post "has
many" Comments, or a Post belongs to an Author. They add a series of methods to your models
which allow you to create relationships and retrieve related models along with a few other useful
features. Which records are related to which is determined by their foreign keys.
The types of associations currently in DataMapper are:
DataMapper Terminology                   ActiveRecord Terminology

has n                                    has_many
has 1                                    has_one
belongs_to                               belongs_to
has n, :things, :through => Resource     has_and_belongs_to_many
has n, :things, :through => :model       has_many :association, :through => Model

Declaring Associations
This is done via declarations inside your model class. The class name of the related model is
determined by the symbol you pass in. For illustration, we'll add an association of each type. Pay
attention to the pluralization of the related model's name.
has n and belongs_to (or One-To-Many)
class Post
  include DataMapper::Resource

  property :id, Serial

  has n, :comments
end

class Comment
  include DataMapper::Resource

  property :id,     Serial
  property :rating, Integer

  belongs_to :post  # defaults to :required => true

  def self.popular
    all(:rating.gt => 3)
  end
end

The belongs_to method accepts a few options. As we already saw in the example above,
belongs_to relationships will be required by default (the parent resource must exist in order for
the child to be valid). You can make the parent resource optional by passing :required =>
false as an option to belongs_to.
If the relationship makes up (part of) the key of a model, you can tell DM to include it as part of
the primary key by adding the :key => true option.
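Here is a minimal sketch of both options; the models are illustrative:

class Comment
  include DataMapper::Resource

  property :id, Serial

  # the parent post is optional; a comment is valid without one
  belongs_to :post, :required => false
end

class Membership
  include DataMapper::Resource

  # the relationships' child keys together form the composite primary key
  belongs_to :person, :key => true
  belongs_to :group,  :key => true
end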
has n, :through (or One-To-Many-Through)

class Photo
  include DataMapper::Resource

  property :id, Serial

  has n, :taggings
  has n, :tags, :through => :taggings
end

class Tag
  include DataMapper::Resource

  property :id, Serial

  has n, :taggings
  has n, :photos, :through => :taggings
end

class Tagging
  include DataMapper::Resource

  belongs_to :tag,   :key => true
  belongs_to :photo, :key => true
end

Note that some options that you might wish to add to an association have to be added to a
property instead. For instance, if you wanted your association to be part of a unique index rather
than the key, you might do something like this.
class Tagging
  include DataMapper::Resource

  property :id, Serial

  property :tag_id,          Integer, :unique_index => :uniqueness, :required => true
  property :tagged_photo_id, Integer, :unique_index => :uniqueness, :required => true

  belongs_to :tag
  belongs_to :tagged_photo, 'Photo'
end

Has, and belongs to, many (Or Many-To-Many)

The use of Resource in place of a class name tells DataMapper to use an anonymous resource to
link the two models up.

# When auto_migrate! is being called, the following model
# definitions will create an ArticleCategory model that will
# be automigrated and that will act as the join model.
#
# DataMapper just picks both model names, sorts them
# alphabetically and then joins them together. The resulting
# storage name follows the same conventions it would if the
# model had been declared traditionally.
#
# The resulting model is no different from any traditionally
# declared model. It contains two belongs_to relationships
# pointing to both Article and Category, and both underlying
# child key properties form the composite primary key (CPK)
# of that model. DataMapper uses consistent naming conventions
# to infer the names of the child key properties. Since it's
# told to link together an Article and a Category model, it'll
# establish the following relationships in the join model.
#
#   ArticleCategory.belongs_to :article,  'Article',  :key => true
#   ArticleCategory.belongs_to :category, 'Category', :key => true
#
# Since every many to many relationship needs a one to many
# relationship to "go through", these also get set up for us.
#
#   Article.has n,  :article_categories
#   Category.has n, :article_categories
#
# Essentially, you can think of ":through => Resource" being
# replaced with ":through => :article_categories" when DM
# processes the relationship definition.
#
# This also means that you can access the join model just like
# any other DataMapper model since there's really no difference
# at all. All you need to know is the inferred name, then you can
# treat it just like any other DataMapper model.

class Article
  include DataMapper::Resource

  property :id, Serial

  has n, :categories, :through => Resource
end

class Category
  include DataMapper::Resource

  property :id, Serial

  has n, :articles, :through => Resource
end

# create two resources
article  = Article.create
category = Category.create

# link them by adding to the relationship
article.categories << category
article.save

# link them by creating the join resource directly
ArticleCategory.create(:article => article, :category => category)

# unlink them by destroying the related join resource
link = article.article_categories.first(:category => category)
link.destroy

# unlink them by destroying the join resource directly
link = ArticleCategory.get(article.id, category.id)
link.destroy

Self referential many to many relationships


Sometimes you need to establish self referential relationships where both sides of the
relationship are of the same model. The canonical example seems to be the declaration of a
Friendship relationship between two people. Here's how you would do that with DataMapper.
class Person
  include DataMapper::Resource

  property :id,   Serial
  property :name, String, :required => true

  has n, :friendships, :child_key => [ :source_id ]
  has n, :friends, self, :through => :friendships, :via => :target
end

class Friendship
  include DataMapper::Resource

  belongs_to :source, 'Person', :key => true
  belongs_to :target, 'Person', :key => true
end

The Person and Friendship model definitions look pretty straightforward at first glance.
Every Person has an id and a name, and a Friendship points to two instances of Person.
The interesting part is the relationship definitions in the Person model. Since we're modelling
friendships, we want to be able to get at one person's friends with a single method call. First,
we need to establish a one to many relationship to the Friendship model.
class Person

  # ...

  # Since the foreign key pointing to Person isn't named 'person_id',
  # we need to override it by specifying the :child_key option. If the
  # Person model's key would be something different from 'id', we would
  # also need to specify the :parent_key option.

  has n, :friendships, :child_key => [ :source_id ]

end

This only gets us halfway there, though. We can now reach associated Friendship instances by
traversing person.friendships. However, we want to get at the actual friends, the instances of
Person. We already know that we can go through other relationships in order to be able to
construct many to many relationships.
So what we need to do is to go through the friendship relationship to get at the actual friends. To
achieve that, we have to tweak various options of that many to many relationship definition.
class Person

  # ...

  has n, :friendships, :child_key => [ :source_id ]

  # We name the relationship :friends because that's the original intention.
  #
  # The target model of this relationship will be the Person model as well,
  # so we can just pass self where DataMapper expects the target model.
  # You can also use Person or 'Person' in place of self here. If you're
  # constructing the options programmatically, you might even want to pass
  # the target model using the :model option instead of the 3rd parameter.
  #
  # We "go through" the :friendships relationship in order to get at the
  # actual friends. Since we named our relationship :friends, DataMapper
  # assumes that the Friendship model contains a :friend relationship. Since
  # this is not the case in our example, because we've named the relationship
  # pointing to the actual friend person :target, we have to tell DataMapper
  # to use that relationship instead, when looking for the relationship to
  # piggy back on. We do so by passing the :via option with our :target.

  has n, :friends, self, :through => :friendships, :via => :target

end

Another example of a self referential relationship would be the representation of a relationship


where people can follow other people. In this situation, any person can follow any number of
other people.
class Person

  class Link

    include DataMapper::Resource

    storage_names[:default] = 'people_links'

    # the person who is following someone
    belongs_to :follower, 'Person', :key => true

    # the person who is followed by someone
    belongs_to :followed, 'Person', :key => true

  end

  include DataMapper::Resource

  property :id,   Serial
  property :name, String, :required => true

  # If we want to know all the people that John follows, we need to look
  # at every 'Link' where John is a :follower. Knowing these, we know all
  # the people that are :followed by John.
  #
  # If we want to know all the people that follow Jane, we need to look
  # at every 'Link' where Jane is :followed. Knowing these, we know all
  # the people that are a :follower of Jane.
  #
  # This means that we need to establish two different relationships to
  # the 'Link' model. One where the person's role is :follower and one
  # where the person's role is to be :followed by someone.

  # In this relationship, the person is the follower
  has n, :links_to_followed_people, 'Person::Link', :child_key => [:follower_id]

  # In this relationship, the person is the one followed by someone
  has n, :links_to_followers, 'Person::Link', :child_key => [:followed_id]

  # We can then use these two relationships to relate any person to
  # either the people followed by the person, or to the people this
  # person follows.

  # Every 'Link' where John is a :follower points to a person that
  # is followed by John.
  has n, :followed_people, self,
    :through => :links_to_followed_people, # The person is a follower
    :via     => :followed

  # Every 'Link' where Jane is :followed points to a person that
  # is one of Jane's followers.
  has n, :followers, self,
    :through => :links_to_followers, # The person is followed by someone
    :via     => :follower

  # Follow one or more other people
  def follow(others)
    followed_people.concat(Array(others))
    save
    self
  end

  # Unfollow one or more other people
  def unfollow(others)
    links_to_followed_people.all(:followed => Array(others)).destroy!
    reload
    self
  end

end

Adding To Associations
Adding resources to many to one or one to one relationships is as simple as assigning them to
their respective writer methods. The following example shows how to assign a target resource to
both a many to one and a one to one relationship.

class Person
  include DataMapper::Resource

  property :id, Serial

  has 1, :profile
end

class Profile
  include DataMapper::Resource

  property :id, Serial

  belongs_to :person
end

# Assigning a resource to a one-to-one relationship
person = Person.create
person.profile = Profile.new
person.save

# Assigning a resource to a many-to-one relationship
profile = Profile.new
profile.person = Person.create
profile.save

Adding resources to any one to many or many to many relationship can basically be done in two
different ways. If you don't have the resource already, but only have a hash of attributes, you can
either call the new or the create method directly on the association, passing it the attributes in
the form of a hash.

post = Post.get(1)  # find a post to add a comment to

# This will add a new but not yet saved comment to the collection
comment = post.comments.new(:subject => 'DataMapper ...')

# Both of the following calls will actually save the comment
post.save     # This will save the post along with the newly added comment
comment.save  # This will only save the comment

# This will create a comment, save it, and add it to the collection
comment = post.comments.create(:subject => 'DataMapper ...')

If you already have an existing Comment instance handy, you can just append that to the
association using the << method. You still need to manually save the parent resource to persist
the comment as part of the related collection.
post.comments << comment  # append an already existing comment

# Both of the following calls will actually save the comment
post.save           # This will save the post along with the newly added comment
post.comments.save  # This will only save the comments collection

One important thing to know is that for related resources to know that they have changed, you
must change them via the API that the relationship (collection) provides. If you cannot do this
for whatever reason, you must call reload on the model or collection in order to fetch the latest
state from the storage backend.
The following example shows this behavior for a one to many relationship. The same principle
applies for all other kinds of relationships though.

class Person
  include DataMapper::Resource

  property :id, Serial

  has n, :tasks
end

class Task
  include DataMapper::Resource

  property :id, Serial

  belongs_to :person
end

If we add a new task not by means of the API that the tasks collection provides us, we must
reload the collection in order to get the correct results.

ree-1.8.7-2010.02 > p = Person.create
 => #<Person @id=1>
ree-1.8.7-2010.02 > t = Task.create :person => p
 => #<Task @id=1 @person_id=1>
ree-1.8.7-2010.02 > p.tasks
 => [#<Task @id=1 @person_id=1>]
ree-1.8.7-2010.02 > u = Task.create :person => p
 => #<Task @id=2 @person_id=1>
ree-1.8.7-2010.02 > p.tasks
 => [#<Task @id=1 @person_id=1>]
ree-1.8.7-2010.02 > p.tasks.reload
 => [#<Task @id=1 @person_id=1>, #<Task @id=2 @person_id=1>]

Customizing Associations
The association declarations make certain assumptions about the names of foreign keys and
about which classes are being related. They do so based on some simple conventions.
The following two simple models will explain these default conventions in detail, showing
relationship definitions that solely rely on those conventions. Then the same relationship
definitions will be presented again, this time using all the available options explicitly. These
additional versions of the respective relationship definitions will have the exact same effect as
their simpler counterparts. They are only presented to show which options can be used to
customize various aspects when defining relationships.
class Blog
  include DataMapper::Resource

  # The rules described below apply equally to definitions
  # of one-to-one relationships. The only difference being
  # that those would obviously only point to a single resource.

  # However, many-to-many relationships don't accept all the
  # options described below. They do support specifying the
  # target model, like we will see below, but they do not support
  # the :parent_key and the :child_key options. Instead, they
  # support another option that's available to many-to-many
  # relationships exclusively. This option is called :via, and
  # will be explained in more detail in its own paragraph below.

  # - This relationship points to multiple resources
  # - The target resources will be instances of the 'Post' model
  # - The local parent_key is assumed to be 'id'
  # - The remote child_key is assumed to be 'blog_id'
  #   - If the child model (Post) doesn't define the 'blog_id'
  #     child key property either explicitly, or implicitly by
  #     defining it using a belongs_to relationship, it will be
  #     established automatically, using the defaults described
  #     here ('blog_id').

  has n, :posts

  # The following relationship definition has the exact same
  # effect as the version above. It's only here to show which
  # options control the default behavior outlined above.

  has n, :posts, 'Post',
    :parent_key => [ :id ],     # local to this model (Blog)
    :child_key  => [ :blog_id ] # in the remote model (Post)

end

class Post
  include DataMapper::Resource

  # - This relationship points to a single resource
  # - The target resource will be an instance of the 'Blog' model
  # - The locally established child key will be named 'blog_id'
  #   - If a child key property named 'blog_id' is already defined
  #     for this model, then that will be used.
  #   - If no child key property named 'blog_id' is already defined
  #     for this model, then it gets defined automatically.
  # - The remote parent_key is assumed to be 'id'
  #   - The parent key must be (part of) the remote model's key
  # - The child key is required to be present
  #   - A parent resource must exist and be assigned, in order
  #     for this resource to be considered complete / valid

  belongs_to :blog

  # The following relationship definition has the exact same
  # effect as the version above. It's only here to show which
  # options control the default behavior outlined above.
  #
  # When providing customized :parent_key and :child_key options,
  # it is not necessary to specify both :parent_key and :child_key
  # if only one of them differs from the default conventions.
  #
  # The :parent_key and :child_key options both accept arrays
  # of property name symbols. These should be the names of
  # properties being (at least part of) a key in either the
  # remote (:parent_key) or the local (:child_key) model.
  #
  # If the parent resource need not be present in order for this
  # model to be considered complete, :required => false can be
  # passed to stop DataMapper from establishing checks for the
  # presence of the attribute value.

  belongs_to :blog, 'Blog',
    :parent_key => [ :id ],      # in the remote model (Blog)
    :child_key  => [ :blog_id ], # local to this model (Post)
    :required   => true          # the blog_id must be present

end

In addition to the :parent_key and :child_key options that we just saw, the belongs_to
method also accepts the :key option. If a belongs_to relationship is marked with :key =>
true, it will either form the complete primary key for that model, or it will be part of the primary
key. The latter will be the case if other properties or belongs_to definitions have been marked
with :key => true too, to form a composite primary key (CPK). Marking a belongs_to
relationship or any property with :key => true automatically makes it :required => true
as well.

class Post
  include DataMapper::Resource

  belongs_to :blog, :key => true  # 'blog_id' is the primary key
end

class Person
  include DataMapper::Resource

  property :id, Serial
end

class Authorship
  include DataMapper::Resource

  belongs_to :post,   :key => true  # 'post_id'   is part of the CPK
  belongs_to :person, :key => true  # 'person_id' is part of the CPK
end

When defining many to many relationships you may find that you need to customize the
relationship that is used to "go through". This can be particularly handy when defining self
referential many-to-many relationships like we saw above. In order to change the relationship
used to "go through", DataMapper allows us to specifiy the :via option on many to many
relationships.
The following example shows a scenario where we don't use :via for defining self referential
many to many relationships. Instead, we will use :via to be able to provide "better" names for
use in our domain models.
class Post
  include DataMapper::Resource

  property :id, Serial

  has n, :authorships

  # Without the use of :via here, DataMapper would
  # search for an :author relationship in Authorship.
  # Since there is no such relationship, that would
  # fail. By using :via => :person, we can instruct
  # DataMapper to use that relationship instead of
  # the :author default.

  has n, :authors, 'Person', :through => :authorships, :via => :person
end

class Person
  include DataMapper::Resource

  property :id, Serial
end

class Authorship
  include DataMapper::Resource

  belongs_to :post,   :key => true  # 'post_id'   is part of the CPK
  belongs_to :person, :key => true  # 'person_id' is part of the CPK
end

Adding Conditions to Associations


If you want to order the association, or supply a scope, you can just pass in the options...
class Post
  include DataMapper::Resource

  has n, :comments, :order => [ :published_on.desc ], :rating.gte => 5
  # Post#comments will now be ordered by published_on, and filtered by rating >= 5.
end

Finders off Associations


When you call an association off of a model, internally DataMapper creates a Query object
which it then executes when you start iterating over it or call #length on it. But if you instead
call .all or .first off of the association and provide it the exact same arguments as a regular
all and first, it merges the new query with the query from the association and hands you back
the requested subset of the association's query results.
In a way, it acts like a database view in that respect.
@post = Post.first
@post.comments                                                    # returns the full association
@post.comments.all(:limit => 10, :order => [ :created_at.desc ])  # return the first 10 comments, newest first
@post.comments(:limit => 10, :order => [ :created_at.desc ])      # alias for #all, you can pass in the options directly
@post.comments.popular                                            # Uses the 'popular' finder method/scope to return only highly rated comments

Querying via Relationships

Sometimes it's desirable to query based on relationships. DataMapper makes this as easy as
passing a hash into the query conditions:

# find all Posts with a Comment by the user
Post.all(:comments => { :user => @user })
# in SQL => SELECT * FROM "posts" WHERE "id" IN
#           (SELECT "post_id" FROM "comments" WHERE "user_id" = 1)

# This also works (which you can use to build complex queries easily)
Post.all(:comments => Comment.all(:user => @user))
# in SQL => SELECT * FROM "posts" WHERE "id" IN
#           (SELECT "post_id" FROM "comments" WHERE "user_id" = 1)

# Of course, it works the other way, too

# find all Comments on posts with DataMapper in the title
Comment.all(:post => { :title.like => '%DataMapper%' })
# in SQL => SELECT * from "comments" WHERE "post_id" IN
#           (SELECT "id" FROM "posts" WHERE "title" LIKE '%DataMapper%')

DataMapper accomplishes this (in sql data-stores, anyway) by turning the queries across
relationships into sub-queries.

Hooks (AKA Callbacks)


You can define callbacks for any of the model's explicit lifecycle events:

create
update
save
destroy

Currently, valid? is not yet included in this list but it could be argued that validation is
important enough to make it an explicit lifecycle event. Future versions of DataMapper will most
likely include valid? in the list of explicit lifecycle events.
If you need to hook any of the non-lifecycle methods, DataMapper still has you covered, though.
It's also possible to declare advice for any other method at both the class and the instance level.
Hooking instance methods can be done using before or after as described below. In order to hook
class methods you can use before_class_method and after_class_method respectively. Both take
the same options as their instance level counterparts.
However, hooking non-lifecycle methods will be deprecated in future versions of DataMapper,
which will only provide hooks for the explicit lifecycle events. Users will then be advised to
either roll their own hook system or use any of the available gems that offer that kind of
functionality.

Adding callbacks
To declare a callback for a specific lifecycle event, define a new method to be run when the
event is raised, then define your point-cut.
class Post
  include DataMapper::Resource

  # key and properties here

  before :save, :categorize

  def categorize
    # code here
  end
end

Alternatively, you can declare the advice during the point-cut by supplying a block rather than a
symbol representing a method.
class Post
  include DataMapper::Resource

  # key and properties here

  before :save do
    # code here
  end
end

Throw :halt, in the name of love...


In order to abort advice and prevent the advised method from being called, throw :halt
class Post
  include DataMapper::Resource

  # ... key and properties here

  # This record will save properly
  before :save do |post|
    true
  end

  # But it will not be destroyed
  before :destroy do |post|
    throw :halt
  end
end

Remember, if you throw :halt inside an after advice, the advised method will have already
run and returned. Because of this, the after advice will be the only thing halted.
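For example, a minimal sketch of an after advice; the notification step is hypothetical, and since the save has already completed by the time the advice runs, throwing :halt here would only stop any remaining advice:

class Post
  include DataMapper::Resource

  # key and properties here

  after :save do
    # notify_subscribers   # hypothetical follow-up work; the record is already saved
  end
end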

Miscellaneous Features
DataMapper comes loaded with features, many of which other ORMs require external libraries for.

Single Table Inheritance


Many ORMs support Single Table Inheritance and DataMapper is no different. In order to
declare a model for Single Table Inheritance, define a property with the data-type of
Types::Discriminator.

class Person
  include DataMapper::Resource

  property :name, String
  property :job,  String, :length => 1..255
  property :type, Discriminator
  ...
end

class Male     < Person; end
class Father   < Male;   end
class Son      < Male;   end

class Woman    < Person; end
class Mother   < Woman;  end
class Daughter < Woman;  end

When DataMapper sees your type column declared as type Types::Discriminator, it will
automatically insert the class name of the object you've created and later instantiate that row as
that class. It also supports deep inheritance, so doing Woman.all will select all women, mothers,
and daughters (and deeper inherited classes if they exist).
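A small usage sketch, assuming the elided '...' above includes a key such as property :id, Serial:

Father.create(:name => 'John', :job => 'Engineer')
Son.create(:name => 'Jim', :job => 'Student')

Male.all    # => the Father and the Son, each instantiated as its own class
Person.all  # => every row in the single underlying table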

Timezone handling
Currently, DataMapper has no built-in support for working with timezones. This means that time
properties will always be stored and retrieved in whatever timezone the datastore is set to. There
is no API to explicitly manipulate timezones.
Have a look at dm-zone-types for more elaborate timezone handling support.

Paranoia
Sometimes...most times...you don't really want to destroy a row in the database, you just want to
mark it as deleted so that you can restore it later if need be. This is aptly-named Paranoia and
DataMapper has basic support for this baked right in. Just declare a property and assign it a type
of Types::ParanoidDateTime or Types::ParanoidBoolean:

property :deleted_at, ParanoidDateTime
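A minimal sketch of how this behaves, assuming dm-types is required; the Draft model is illustrative:

class Draft
  include DataMapper::Resource

  property :id,         Serial
  property :deleted_at, ParanoidDateTime
end

draft = Draft.create
draft.destroy  # sets deleted_at instead of removing the row
Draft.all      # the default scope hides "deleted" resources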

Multiple Data-Store Connections


DataMapper sports a concept called a context which encapsulates the data-store context in which
you want operations to occur. For example, when you set up a connection in getting-started, you
were defining a context known as :default

DataMapper.setup(:default, 'mysql://localhost/dm_core_test')

If you supply another context name, you will now have two database contexts, each with its own
logger, connection pool and identity map: one default context and one named context.
DataMapper.setup(:external, 'mysql://someother_host/dm_core_test')

To use one context rather than another, simply wrap your code block inside a repository call. It
will return whatever your block of code returns.
DataMapper.repository(:external) { Person.first }
# hits up your :external database and retrieves the first Person

This will use your connection to the :external data-store and retrieve the first Person it finds.
Later, when you call .save on that person, it'll get saved back to the :external data-store; an
object is aware of what context it came from and should be saved back to.
NOTE that currently you must set up a :default repository to work with DataMapper (and to be
able to use additional, differently named repositories). This might change in the future.

Chained Associations
Say you want to find all of the animals in a zoo, but Animal belongs to Exhibit which belongs to
Zoo. Other ORMs solve this problem by providing a means to describe the double JOINs in the
retrieval call for Animals. ActiveRecord specifically will let you specify JOINs in a hash-of-hashes
syntax which will make most developers throw up a little in their mouths.
DataMapper's solution is to let you chain association calls:
zoo = Zoo.first
zoo.exhibits.animals  # retrieves all animals for all exhibits for that zoo

This has great potential for browsing collections of content, like browsing all blog posts'
comments by category or tag. At present, chaining beyond 2 associations is still experimental.

Working with Legacy Schemas


DataMapper has quite a few features and plugins which are useful for working with legacy
schemas. We're going to introduce the features available in the core first, before moving on to
plugins. Note that whilst the title is "Working with Legacy Schemas", really this applies to any
situation where there is no control over the 'table' in the data-store. These features could just as
easily be used to modify the fields returned by a RESTful webservice adapter, for example.

Small Tweaks
If the number of modifications is small (just one table or a few properties), it is probably
easiest to modify the properties and table names directly. This can be accomplished using the
:field option for properties, :child_key (or :target) for relationships, and manipulation of
storage_names[] for models. In all the following examples, the use of the :legacy repository
name assumes that it is some secondary repository that should behave in the special manner. If it
is the main database the application will be interacting with, :default makes a much more
sensible choice. Note that for the snippet below to work, you need to have the :legacy
repository set up properly.
class Post
  include DataMapper::Resource

  # set the storage name for the :legacy repository
  storage_names[:legacy] = 'tblPost'

  # use the datastore's 'pid' field for the id property.
  property :id, Serial, :field => 'pid'

  # use a property called 'uid' as the child key (the foreign key)
  belongs_to :user, :child_key => [ :uid ]
end

Changing Behaviour
With one or two models, it is quite possible to tweak properties and models using :field and
storage_names. When there is a whole repository to rename, naming conventions are an
alternative. These apply to all the tables in the repository. Naming conventions should be applied
before the model is used as the table name gets frozen when it is first used. DataMapper comes
with a number of naming conventions and custom ones can be defined:

# the DataMapper model
class Example::PostModel
end

# this is the default
DataMapper.repository(:legacy).adapter.resource_naming_convention =
  DataMapper::NamingConventions::Resource::UnderscoredAndPluralized
Example::PostModel.storage_name(:legacy)
# => example_post_models

# underscored
DataMapper.repository(:legacy).adapter.resource_naming_convention =
  DataMapper::NamingConventions::Resource::Underscored
Example::PostModel.storage_name(:legacy)
# => example/post_models

# without the module name
DataMapper.repository(:legacy).adapter.resource_naming_convention =
  DataMapper::NamingConventions::Resource::UnderscoredAndPluralizedWithoutModule
Example::PostModel.storage_name(:legacy)
# => post_models

# custom conventions can be defined using procs, or any module which
# responds to #call. They are passed the name of the model, as a string.
module ResourceNamingConvention
  def self.call(model_name)
    'tbl' + DataMapper::Inflector.classify(model_name)
  end
end

DataMapper.repository(:legacy).adapter.resource_naming_convention =
  ResourceNamingConvention
Example::PostModel.storage_name(:legacy)
# => 'tblExample::PostModel'

For field names, use the field_naming_convention method. Field naming conventions work
in a similar manner, except the #call function is passed the property name.
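As a rough sketch, mirroring the resource naming convention example above (the 'col_' prefix is purely illustrative, and per the description here #call is assumed to receive the property name):

module FieldNamingConvention
  def self.call(property_name)
    'col_' + property_name.to_s
  end
end

DataMapper.repository(:legacy).adapter.field_naming_convention = FieldNamingConvention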

Common Pitfalls
Below is a list of common problems that someone new to DataMapper will encounter, along with
work-arounds or solutions if possible.

Implicit String property length


When declaring a String property, DataMapper will add an implicit limit of 50 characters if no
limit is explicitly declared.
For example, the following two models will have the same behaviour:
# with an implicit length
class Post
  include DataMapper::Resource

  property :title, String
end

# with an explicit length
class Post
  include DataMapper::Resource

  property :title, String, :length => 50
end

The reason for this default is that DataMapper needs to know the underlying column constraints
in order to add validations from the property definitions. Databases will often choose their own
arbitrary length constraints if one is not declared (often defaulting to 255 chars). We choose
something a bit more restrictive as a default because we wanted to encourage people to declare it
explicitly in their model, rather than relying on DM or the DB choosing an arbitrary limit.

DM More
DataMapper is intended to have a lean and minimalistic core, which provides the minimum
necessary features for an ORM. It's also designed to be easily extensible, so that everything you
want in an ORM can be added in with a minimum of fuss. It does this through plugins, which
provide everything from automatically updated timestamps to factories for generating
DataMapper resources. The biggest collection of these plugins is in dm-more, which isn't to say
that there's anything wrong with plugins which aren't included in dm-more -- it will never house
all the possible plugins.
This page gives an overview of the plugins available in dm-more, loosely categorized by what
type of plugin they are.

Resource Plugins
These plugins modify the behavior of all resources in an application, adding new functionality to
them, or providing easier ways of doing things.
dm-validations

This provides validations for resources. The plugin both defines automatic validations based on
the properties specified and also allows assignment of manual validations. It also supports
contextual validation, allowing a resource to be considered valid for some purposes but not
others.

dm-timestamps

This defines callbacks on the common timestamp properties, making them auto-update when the
models are created or updated. The targeted properties are :created_at and :updated_at for
DateTime properties and :created_on and :updated_on for Date properties.
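As a rough sketch of the convention (assuming dm-timestamps is required; the model is illustrative), declaring the properties is all that is needed:

class Post
  include DataMapper::Resource

  property :id,         Serial
  property :created_at, DateTime  # set when the resource is first created
  property :updated_at, DateTime  # refreshed whenever the resource is updated
end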
dm-aggregates

This provides methods for database calls to aggregate functions such as count, sum, avg, max
and min. These aggregate functions are added to both collections and Models.
dm-types

This provides several more allowable property types. Enum and Flag allow a field to take a few
set values. URI, FilePath, Regexp, EpochTime and BCryptHash save database representations of
the classes, restoring them on retrieval. Csv, Json and Yaml store data in the field in the serial
formats and de-serialize them on retrieval.
dm-serializer

This provides 'to_*' methods which take a resource and convert it to a serial format to be
restored later. Currently the plugin provides to_xml, to_yaml and to_json
dm-constraints

This plugin provides foreign key constraints on has n relationships for Postgres and MySQL
adapters.
dm-adjust

This plugin allows properties on resources, collections and models to be incremented or
decremented by a fixed amount.

is Plugins
These plugins make new functionality available to models, which can be accessed via the is
method, for example is :list. These make the models behave in new ways.
dm-is-list

The model acts as an item on a list. It has a position, and there are methods defined for moving it
up or down the list based on this position. The position can also be scoped, for example on a user
id.

dm-is-tree

The model acts as a node of a tree. It gains methods for querying parents and children as well as
all the nodes of the current generation, the trail of ancestors to the root node and the root node
itself.
dm-is-nested_set

The model acts as an item in a 'nested set'. This might be used for some kind of categorization
system, or for threaded conversations on a forum. The advantage this has over a tree is that it is
easy to fetch all the descendants or ancestors of a particular set in one query, not just the next
generation. Adding to a nested set is more complex under the hood, but the plugin takes care of
this for you.
dm-is-versioned

The model is versioned. When it is updated, instead of the previous version being lost in the
mists of time, it is saved in a subsidiary table, so that it can be restored later if needed.
dm-is-state_machine

The model acts as a state machine. Instead of a column being allowed to take any value, it is
used to track the state of the machine, which is updated through events that cause transitions. For
example, this might step a model through a sign-up process, or some other complex task.
dm-is-remixable

The model becomes 'remixable'. It can then be included (or remixed) in other models, which
defines a new table to hold the remixed model and can have other properties or methods defined
on it. It's something like class table inheritance for relationships :)

Adapters
These plugins provide new adapters for different storage schemes, allowing them to be used to
store resources, instead of the more conventional relational database store.
dm-rest-adapter

An adapter for an XML-based, REST-backed storage scheme. All the usual DataMapper
operations are performed as HTTP GETs, POSTs, PUTs and DELETEs, operating on the
URIs of the resources.

Integration Plugins
These plugins are designed to ease integration with other libraries, currently just web
frameworks.
merb_datamapper

Integration with the merb web framework. The plugin takes care of setting up the DataMapper
connection when the framework starts, provides several useful rake tasks as well as generators
for Models, ResourceControllers and Migrations.
rails_datamapper

Integration with Rails 2.x. It provides a Model generator and also takes care of connecting to the
data-store through DataMapper.
dm-rails

Integration with Rails 3.x. It provides model and migration generators, takes care of connecting
to the data-store through DataMapper and supports RSpec 2.

Utility Plugins
These provide useful functionality, though are unlikely to be used by every project or assist more
with development than production use.
dm-sweatshop

A model factory for DataMapper, supporting the creation of random models for specing or to fill
an application for development. Properties can be picked at random or made to conform to a
variety of regular expressions. dm-sweatshop also understands has n relationships and can assign
a random selection of child models to a parent.
dm-migrations

Migrations for DataMapper, allowing modification of the database schema with more control
than auto_migrate! and auto_upgrade!. Migrations can be written to create, modify and drop
tables and columns. In addition, the plugin provides support for specing migrations and verifying
they perform as intended.
dm-observer

This plugin eases operations involving models across multiple repositories, allowing wrapping in
a repository(:foo) block to be replaced with a MyModel(:foo).some_method call.

Observers watch other classes, doing things when certain operations are performed on the remote
class. This can be anything, but they are commonly used for writing logs or notifying via email
or xmpp when a critical operation has occurred.
dm-cli

The dm executable is a DataMapper-optimized version of irb. It automatically connects to a
data-store based on the arguments passed to it, and supports easy loading of DataMapper plugins
and models from a directory, as well as reading connection information from a YAML configuration
file.
dm-ar-finders

ActiveRecord style syntax for DataMapper. This includes functionality such as find_by_name,
find_or_create and find_all_by_title.
