Notes on Hibernate


Created 03/22/04
Updated 10/24/04, 07/23/05, 10/21/05, 10/23/05, 10/30/05, 03/28/06, 06/08/06, 11/12/06, 12/22/06, 02/10/07, 03/02/07, 03/25/07, 04/02/07, 04/15/07, 06/07/07, 07/22/07, 08/05/07, 11/15/07, 01/27/08, 03/20/08, 05/20/08, 06/08/08, 07/20/08, 09/08/08, 12/12/08, 12/21/08, 01/25/09, 03/13/09, 09/14/09, 10/13/09, 08/22/10, 10/02/10, 01/21/11, 02/27/11, 08/31/12, 09/21/13

Overview

Hibernate is an Object/Relational Mapping tool, which supports a range of databases. From the www.hibernate.org web site:

Hibernate is a powerful, ultra-high performance object/relational persistence and query service for Java. Hibernate lets you develop persistent classes following common Java idiom - including association, inheritance, polymorphism, composition, and the Java collections framework. Hibernate allows you to express queries in its own portable SQL extension (HQL), as well as in native SQL, or with Java-based Criteria and Example objects.


Another goal of Hibernate is to remove the presence of SQL from the Java application (or at least from the non-database portions). Typical applications based on Hibernate use a design which is partitioned into presentation objects, business objects, and data access objects in an MVC relationship.


Finally, a goal of these tools is to remove the impact of different “dialects” of database implementation, such as the difference between Oracle and MySQL. When you consider that these are both leading implementations of SQL, but have slightly different datatypes, query syntax, sequence generators, and connection strings, it is easy to see why this is important.


In all mapping-based tools, maintaining the matches between the database, the Java classes, and the mapping files can be a significant effort. There are different approaches, which can be organized around the question of “which piece of the implementation is the master embodiment of the design”. For instance:




• Top down: you start with an existing domain model and its implementation in Java. Quite common.

• Bottom up: you start with an existing database schema and data model. Quite common.

• Middle out: you start by generating the mapping files, since the mapping metadata provides sufficient information to completely deduce the database schema and to generate the Java source code for the persistence layer. Recommended only for designers who have great familiarity with the mapping file concept and representation.

• Meet in the middle: you have an existing set of Java classes and an existing database schema, for which there may or may not be a simple mapping. Some refactoring will probably be needed, and so this is the most complex of the four cases.


There are tools within Hibernate to carry out the “top down” approach (called hbm2ddl), and the “bottom up” approach (called hbm2java). In addition there are database reverse engineering tools in MyEclipse and others. Both of these Hibernate tools could be used to carry out the “middle out” approach, after the mapping files are created. For the last approach, you are largely on your own.


There are several layers to Hibernate and its distributions:

• Hibernate Core: this is the primary API to Hibernate and its mapping files. All of the discussion in this document is about Hibernate Core, except for those sections covering use of Hibernate Annotations.

• Hibernate Annotations: this is an alternate way of specifying mapping information, by using annotations in your Java files, similar to how XDoclet operates. By keeping the mapping data local to the Java classes, the number of files to keep in sync is reduced. Since late 2007, we have been using Hibernate Annotations almost exclusively.

• Hibernate Validator: this is a different approach to validation from the usual approach used in JSF: rather than having the validation rules expressed on the editing views, the validation rules are expressed on the entity definitions (in both cases, they are carried out on the server side).
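As an illustration, the classic (pre-Bean-Validation) Hibernate Validator API puts the constraints directly on the entity; this is only a sketch, and the entity and property names below are invented:

```java
import org.hibernate.validator.Length;
import org.hibernate.validator.NotNull;

// Hypothetical entity: the validation rules live on the entity
// definition rather than on the editing views.
public class UserAccount {
    private String username;

    @NotNull
    @Length(min = 3, max = 32)   // checked by Hibernate Validator, and also
                                 // reflected in the generated DDL column size
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}
```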




• Hibernate EntityManager: this is a set of tools that implement EJB 3.0 entity persistence, which is based on Hibernate. During 2007 this was not highly important, as we were using Hibernate directly, but in 2008 with JBoss, understanding the EntityManager became important.

• Hibernate Tools: this is described as an entirely new toolset for Hibernate3, implemented as an integrated suite of Eclipse plugins, together with a unified Ant task for integration into the build cycle. Hibernate Tools is a core component of JBoss Eclipse IDE.


Hibernate Core comes with C3P0, which is a pooling system like DBCP. However, C3P0 won't block in synchronized code like DBCP can. One note: we recommend setting numHelperThreads above 6 to limit the pool size under load.
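For example, c3p0 can be tuned through a c3p0.properties file on the runtime classpath (it can also be tuned through hibernate.c3p0.* properties in the Hibernate configuration); the values below are illustrative, not recommendations:

```properties
# c3p0.properties (on the runtime classpath); values are illustrative
c3p0.numHelperThreads=8
c3p0.minPoolSize=5
c3p0.maxPoolSize=20
c3p0.maxIdleTime=300
```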


Hibernate appears to have more industry support than any other O/R mapping tool. Hibernate is a professional Open Source project and a critical component of the JBoss Enterprise Middleware System (JEMS) suite of products. For instance, there are far more books and resources available, it is referred to more often in the hiring requirements of companies surveyed on CraigsList (during our research of 2006-2007), and it has become the persistence model of the EJB 3.0 specification and implementations (called JPA, for Java Persistence API).

Layers


This shows that most of Hibernate is located in the Session class, which holds the state information and is used in most of the API calls. Sessions come from a SessionFactory, which is based on a Configuration facility.

Above the session are classes for generating queries, such as the Query and Criteria classes, and code for managing the transactions. For some reason this diagram implies that application code uses the Session rather than the Query facilities, while in fact the most common classes in the DAOs of an application are the Query and Criteria classes. But perhaps this represents the fact that to persist an object, you call a method on the Session object.

Not shown below this diagram are the database connection management, pooling, and caching facilities.

Released Versions

In Dec 2012, Hibernate ORM 4.1.9 Final was released. [4]

In Dec 2011, Hibernate Core 4.0.0 Final was released. This includes new features such as multi-tenancy support, introduction of ServiceRegistry (a major change in how Hibernate builds and manages "services"), better Session opening from SessionFactory, improved integration via org.hibernate.integrator.spi.Integrator and auto discovery, internationalization support and message codes in logging, and a clearer split between API, SPI and implementation classes. [3]

The first 3.6.x releases came out in late 2010. Grails 2.2.4 is still using 3.6.10 as of Summer 2013.


The prior series ended with 3.5.6, released September 15, 2010. Hibernate 3.5.x came out in April 2010, and includes:




• JSR 317 (JPA2) support.

• Integration of hibernate-annotations, hibernate-entitymanager and hibernate-envers into the core project. See http://in.relation.to/14172.lace for details.

• Added Infinispan as a standard second-level cache. See http://infinispan.blogspot.com/2009/10/infinispan-based-hibernate-cache.html for details.

• Improvements to the new second-level caching SPI introduced in 3.3, based on feedback from implementers including Ehcache, Infinispan and JBoss Cache.

• Far better read only / immutable support. See the new chapter added to the core reference manual dedicated to the subject.

• Support for JDBC 4, such that Hibernate can be used in JDK 1.6 JVMs and make use of JDBC4-compliant drivers.

• Support for column-level read/write fragments (HBM only for now).

• Initial support for fetch profiles.

Hibernate 3.4.x came out in 2009, and addresses the following goals:

• A fix for an out of memory error generated upon calling list() in getSingleResult.

• A fix to an exception in the EntityManager during the deletion of an entity.

• The ability to build independent of the Hibernate Core structure.

• An upgrade to Hibernate Core 3.3.


Hibernate 3.3.x came out in November 2008, and addresses the following goals:

• Redesigned, modular JARs - There are now a number of fine-grained JARs rather than one large JAR - this allows users to more easily see and minimize their dependencies, and allows organizations to build customized Hibernate deployments with unwanted parts left out.

• Maven-based build - Hibernate is now built using the Apache Maven build system.

• Revamped caching SPI - Based on feedback, the cache system has been refactored to allow more fine-grained control of the characteristics of different cache regions.

• JBoss Cache 2.x integration - Based upon the new caching SPI, JBoss Cache 2.x integration is now available out-of-the-box.

There are some Hibernate 3.3 upgrade notes at http://www.agileapproach.com/blog-entry/hibernate-33-upgrade-tips


Hibernate 3.2 came out in late 2006, and adds Java Persistence compliance (the Java Persistence API is the standard object/relational mapping and persistence management interface of the Java EE 5.0 platform). It appears that the 3.2 release was a major milestone and is a reference version, as the primary authors of Hibernate have updated their main book to match this version. The version used in JBoss 4.2.3 is Hibernate 3.2.4sp1. The final version of the 3.2 release series was 3.2.7.


Hibernate 3.0 came out in late 2004, and reached 3.0.5 as a bug fix release by early 2005. Note that the number of libraries used by Hibernate increased greatly during this time. Hibernate 3.0 increases the mapping capabilities, and adds support for association joins based on SQL formulas and for SQL formulas defined at the <column> level.

There were versions of Hibernate 2.x during 2004 and very early 2005. They are important now only in that they are the documented version in several Hibernate books written during 2004 to 2005.


At Menlo School, we were running release 3.0.5 during late Spring 2006, switched to 3.2 in November 2006, and to 3.2.2 in February 2007. We switched to 3.2.4sp1 for JBoss 4.2.3-based projects in late 2008, and began to evaluate 3.3.1 and 3.3.2 for native projects (mostly test programs).

Resources for Information on Hibernate

The Hibernate web site supplies the reference documentation, and a set of FAQs. These questions link to some simple tutorials. The reference documentation PDF was 212 pages long as of Hibernate 2.1, and is 342 pages as of 3.3.2. There is a one-page quickref to the Hibernate API that is handy for review. However, it is the books that carry the main discussion about the concepts and patterns of use, and only the books have enough examples to communicate some of the more complex concepts and patterns.



“Java Persistence with Hibernate” by Christian Bauer and Gavin King. Manning Press, November 2006, 904 pages. List price $49.99, Amazon price $32.99, used from $28.44. Rated 3.5 stars on Amazon.com. This should be thought of as the second edition of “Hibernate in Action” (by the same authors), and this book is the one to buy. We got a copy in late December 2006. It is twice as long as the prior edition, and covers the tools and process for converting schema definitions to Java classes and vice versa. Expanding considerably on the reference material provided in the download, this book is invaluable.

What is somewhat confusing about the book is that for each topic, it has sections that describe how to do it with Hibernate Core, then with Hibernate Annotations instead of the XML files, then with the JPA annotations and calls (this haphazard organization is what has caused the book to get some less-positive reviews).


“Harnessing Hibernate” by James Elliot, Tim O’Brien, and Ryan Fowler. O’Reilly Press, April 2008, 380 pages. List price $39.99, Amazon price $26.20, used from $22.60. Rating 4 stars on Amazon.com. This appears to be O’Reilly’s answer to Manning’s coverage of Hibernate, and is considered to be well-written. However, from reviewing it in the bookstore, it doesn’t appear to cover anything regarding Hibernate that isn’t in the above book, but it does add material about Spring, Stripes, and Maven.


“Hibernate: A J2EE Developer’s Guide” by Will Iverson. Addison-Wesley, November 2004, 351 pages. List price $39.99. Amazon price $26.01, used for $3.75. Discusses Hibernate 2.1.2, and is largely parallel to the Bauer/King book first edition (which was written at the same time). It has chapters on setting up Hibernate, starting schema from Java classes, starting schemas from an existing database, mapping, transactions, etc. The Bauer/King book is generally getting better reviews.


“Pro Hibernate 3 (Expert’s Voice Series)” by Jeff Linwood, Dave Minter. APress, July 2005, 242 pages. List price $39.99. Amazon price $26.39, used for $19.99. Comments on this book were mixed, and I have only glanced at it in the bookstore. Its main strength is that it was the first book to cover Hibernate 3’s API, but I don’t think that it adds much beyond the reference manual materials.


“Hibernate: A Developer’s Notebook” by James Elliot. O’Reilly Press, May 2004, 170 pages. List price $24.95. Amazon price $16.47, used price $12.00. Rather dated now. This is part of the “lab notebook” approach to learning new pieces of software, in which you are given very specific steps for installing and running Hibernate. Goes into great detail about the Ant scripts that you would write to run the tools, set up the database, etc. Uses HSQLDB and MySQL for the examples. We got a great deal out of it, but found that we next needed to know much more about the mapping files and the API than this book provided.


There is an article from OnJava in April 2005, http://www.onjava.com/pub/a/onjava/2005/08/03/hibernate.html, that discusses various uses of formula mappings in Hibernate3.


There are also tools for using Hibernate provided within MyEclipse. They are documented in our document “Notes on MyEclipse”.

There are also tools for using Hibernate provided in the JBoss Tools download. They are documented in our document “Notes on JBoss Tools”.

Minimum Set of Libraries

Using guidance from the MyEclipse and Hibernate 3.3.1 materials, plus taking in the most up-to-date of the required Apache libraries, we have concluded that the following set is the minimum set of libraries for using Hibernate at run-time in our applications:




(The junit-4.4.jar is not actually needed, except for tests with our application.)

This set of libraries supports all of the standard session management, content mapping, query management, and insert/update/delete behavior, plus annotations and transactions. While Hibernate appears to have far more libraries shipped with it, they appear to be needed only for doing schema generation, JBoss linkage, JMX linkage, reverse engineering, etc. These version numbers are current as of February 2007.

Note that Hibernate is now using SLF4J instead of log4j.


There is also a requirement to have a log4j.properties file in order to avoid getting a warning about lack of appenders for Hibernate to use. The shortest contents are:

# A log4j configuration file.

# The root logger uses the appender called STDOUT. Since no
# priority is set, the root logger assumes the default, which is one of
# TRACE, DEBUG, INFO, WARN, ERROR, FATAL.
log4j.rootLogger=INFO, STDOUT

# STDOUT is set to be a ConsoleAppender sending its output to System.out
log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender

# STDOUT uses PatternLayout.
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout

# The conversion pattern
log4j.appender.STDOUT.layout.ConversionPattern=%-5d %-5p [%t] %c{2} - %m%n

Hibernate Annotations

The Hibernate Annotations facility is a huge step forward in reducing the number of configuration files that are used. The Hibernate Annotations package includes:

• Standardized Java Persistence and EJB 3.0 (JSR 220) object/relational mapping annotations

• Hibernate-specific extension annotations

• Annotations for data integrity validation with the included Hibernate Validator API

• Domain object indexing/searching for Lucene via the included Hibernate Lucene framework

Requirements: at a minimum, you need JDK 5.0 and Hibernate Core, but no application server or EJB 3.0 container. You can use Hibernate Core and Hibernate Annotations in any Java EE 5.0 or Java SE 5.0 environment.



The current release is 3.4.0 GA, released on 08/20/2008. It consists of two libraries, hibernate-annotations.jar and hibernate-commons-annotations.jar, and it requires the library ejb3-persistence.jar.


Here is an example of using the annotations facility:

@Entity
@Table(name="language")
public class Language {

    private String isoCode;
    private String description;

    @Id
    @Column(name="code", length=5)
    public String getIsoCode() {
        return this.isoCode;
    }

    public void setIsoCode(String isoCode) {
        this.isoCode = isoCode;
    }

    public String getDescription() {
        return this.description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}


The important annotations are @Entity, @Table, and @Column. These are sufficient for describing simple classes. More complex tags are introduced in the material below as the more complex mapping examples are discussed.

Since these annotations are also the specifications of the SQL schema that will be generated, it is important to add the annotations that define the SQL (even if they have minimal effect on the Java), such as:



• length, for strings: put this in the @Column annotation. Default 255.

• nullable: to indicate which fields can be null. Put this in the @Column annotation. Default true.

• precision, scale, for numbers: put these in the @Column annotation. Defaults 0 and 0.

• unique: put this in the @Column annotation. Default false.

• datetimes: annotate these with @Temporal.
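As a sketch of how these attributes look in practice (the entity and column names here are invented for illustration):

```java
import java.math.BigDecimal;
import java.util.Date;
import javax.persistence.*;

// Hypothetical entity showing the schema-defining @Column attributes.
@Entity
public class Invoice {
    // length: generates VARCHAR(40) instead of the default VARCHAR(255);
    // nullable=false makes the column NOT NULL
    @Column(name = "customer_name", length = 40, nullable = false)
    private String customerName;

    // precision/scale: e.g. DECIMAL(10,2) for money
    @Column(name = "amount", precision = 10, scale = 2)
    private BigDecimal amount;

    // unique: adds a unique constraint on the generated column
    @Column(name = "invoice_number", unique = true)
    private String invoiceNumber;

    // datetimes: @Temporal picks DATE, TIME, or TIMESTAMP
    @Temporal(TemporalType.TIMESTAMP)
    private Date issuedAt;
}
```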

Hibernate and HSQL

Many of the Hibernate examples focus on using the in-memory database called HSQL. This was formerly called Hypersonic SQL, and is unrelated to Hibernate itself.

Since HSQL is distributed with JBoss, some of the reasons for using it include ease of set-up and administration. The best way to use it is to have Hibernate generate a schema, using annotations, and then specify the following in the Hibernate configuration:
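The configuration snippet itself did not survive in these notes; a minimal sketch of HSQL connection properties with schema generation enabled might look like this (the in-memory database name `test` is an assumption):

```xml
<property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
<property name="connection.url">jdbc:hsqldb:mem:test</property>
<property name="connection.username">sa</property>
<property name="connection.password"></property>
<property name="dialect">org.hibernate.dialect.HSQLDialect</property>
<!-- drop and re-create the schema from the annotations at startup -->
<property name="hbm2ddl.auto">create-drop</property>
```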

Basic Configuration for Hibernate

The Configuration object holds all configuration data, and can be populated by reading an XML file (defaults to hibernate.cfg.xml). This file defines session configuration data, as well as related resource data, such as the pointers to the mapping files. The mapping files are typically located with the Java classes that they map, and there is typically only one class per mapping file. Here is an example:


<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">

<hibernate-configuration>
  <session-factory>
    <property name="show_sql">true</property>
    <property name="connection.url">jdbc:mysql://localhost:3306/in</property>
    <property name="connection.username">root</property>
    <property name="connection.password">root</property>
    <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="dialect">org.hibernate.dialect.MySQLDialect</property>

    <mapping resource="dto/User.hbm.xml" />
    <mapping resource="dto/Role.hbm.xml" />
    <mapping resource="dto/Location.hbm.xml" />
  </session-factory>
</hibernate-configuration>


You can enable automatic export of a schema when your SessionFactory is built by setting the hibernate.hbm2ddl.auto configuration property to create or create-drop. The first setting results in DROP statements followed by CREATE statements when the SessionFactory is built. The second setting adds additional DROP statements when the application is shut down and the SessionFactory is closed - effectively leaving a clean database after every run.


Also note that since Hibernate 3.1 you can include a file called “import.sql” in the runtime classpath of Hibernate. At the time of schema export it will execute the SQL statements contained in that file after the schema has been exported. However, we have had real problems with this: the import.sql file cannot contain multi-line statements, which makes it very hard to read (since insert statements with many fields become a very long line of text). So we don’t use this approach much.

Configuration Facilities and Basic API Calls

The API includes classes and factory objects for sessions, transactions, queries, criteria, etc. In addition, the Session object has calls to save or persist an object.

Configuration

The Configuration object holds all configuration data, and can be populated by reading an XML file (defaults to hibernate.cfg.xml). This file defines session configuration data, as well as related resource data, such as the pointers to the mapping files. The mapping files are typically located with the Java classes that they map, and there is typically only one class per mapping file.

Sessions

Get the SessionFactory from the configuration object, and then open a session. At the end of using the session (typically after a sequence of query/update operations), close the session, and then close the SessionFactory.

Given a SessionFactory, a session can be made for a specific Connection. The Connection class is part of java.sql.

Transactions

If a session is being used only for reading content, you need not have a transaction pending. However, if the session is being used to save or update objects, this must be done on a session with a transaction pending. Writes are held until the transaction is closed by committing it.

The transaction is represented by a distinct object, which is created from the Session object.


Queries and Criteria

These are descriptions of SQL queries and their where clauses, expressed in terms of the objects that correspond to the tables.
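To illustrate the two facilities side by side (the User entity and its name property are assumptions carried over from the mapping examples in this document):

```java
// HQL via the Query interface: the FROM clause names the mapped class,
// not the table, and a named parameter replaces literal SQL.
List users = session.createQuery(
        "from User u where u.name = :name")
    .setParameter("name", "alice")
    .list();

// The same lookup via the Criteria API, built up programmatically.
List sameUsers = session.createCriteria(User.class)
    .add(Restrictions.eq("name", "alice"))
    .list();
```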

Example Code

The following code, from a test framework, shows examples of each of the above objects (other than Queries and Criteria):


@Override
protected void setUp() throws Exception {
    config = new AnnotationConfiguration();
    config.configure();
    sessionFactory = config.buildSessionFactory();

    newSession = sessionFactory.openSession();
    newTransaction = newSession.beginTransaction();
}

@Override
protected void tearDown() throws Exception {
    newTransaction.commit();
    newSession.close();
    sessionFactory.close();
}

Programmatic Generation of Schema

The SchemaExport class is the key to having Hibernate generate schema and install it in the database (to avoid having to “mysql source” it yourself). Create a SchemaExport object as follows:

Configuration cfg = new Configuration().configure();
SchemaExport schemaExport = new SchemaExport(cfg);


Key attributes of SchemaExport are:

• String delimiter -- string to place at the end of each line

• Boolean format -- true/false if the output should be broken up into several lines for readability (generally true, but the default is false)

• String outputFile -- name of file to write with content


There is one key method on this class (most of the other methods are calls to this):

public void execute(boolean script, boolean export,
                    boolean justDrop, boolean justCreate) {
    log.info( "Running hbm2ddl schema export" );

    if ( export ) {
        log.info( "exporting generated schema to database" );
        connectionHelper.prepare( true );
        connection = connectionHelper.getConnection();
        statement = connection.createStatement();
    }

    if ( !justCreate ) {
        drop( script, export, outputFileWriter, statement );
    }

    if ( !justDrop ) {
        create( script, export, outputFileWriter, statement );
        if ( export && importFileReader != null ) {
            importScript( importFileReader, statement );
        }
    }

    log.info( "schema export complete" );
}


From this, we can document the four boolean flags as follows:

• script -- if true, the output will be written to System.out as well as the database and the outputFileName

• export -- if true, the database must be connected to and dropped/created using the flags below

• justDrop -- if true, the creation of new schema is disabled

• justCreate -- if true, the dropping of prior database schema is disabled
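Putting these together, a typical invocation looks like the following sketch against the Hibernate 3 API (the output file name is an assumption):

```java
// Build configuration from hibernate.cfg.xml plus annotated classes
Configuration cfg = new AnnotationConfiguration().configure();

SchemaExport schemaExport = new SchemaExport(cfg);
schemaExport.setOutputFile("schema.sql");  // also write the DDL to a file
schemaExport.setDelimiter(";");            // terminate each statement

// script=true: echo DDL to System.out; export=true: run it in the database
schemaExport.create(true, true);
```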


Domain Models and Metadata

Here is a sample data model, which is from the Caveat Emptor project.


Writing Mapping Files

Here is a simple example of a mapping file:

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">

<hibernate-mapping>
  <class name="com.storm.model.LocationDTO"
         table="Location">
    <id name="id" column="location_id" type="int"/>

    <property name="name">
      <column name="NAME" length="16" not-null="true"/>
    </property>

    <many-to-one
        name="type"
        column="LocationType_ID"
        class="com.storm.model.LocationTypeDTO"
        not-null="true"/>
  </class>
</hibernate-mapping>


There are four primary cases to be dealt with:

• Single: those which are keys are done through the <id> tag. Those which are not keys are done through the <property> tag.

• Many-to-one: using the <many-to-one> element, and specifying the related class.

• One-to-many: this is discussed under “Using Associations”, below.

• Many-to-many: this is discussed under “Using Associations”, below.


Here is a many-to-one mapping, where a product has a productType:

<class name="com.storm.model.ProductDTO" table="product">
  <id name="productId" type="java.lang.Integer">
    <column name="Product_ID"/>
    <generator class="increment"/>
  </id>
  <property name="productName" type="java.lang.String">
    <column name="ProductName" length="50"/>
  </property>
  <many-to-one
      name="productType"
      column="ProductType_ID"
      class="com.storm.model.ProductTypeDTO"
      not-null="true"/>
</class>


Here is a one-to-many mapping, where an album has a list of many tracks:

<class name="com.storm.model.AlbumDTO" table="album">
  <set name="tracks" lazy="false">
    <key column="ALBUM_ID"/>
    <one-to-many class="com.storm.model.TrackDTO"/>
  </set>
</class>
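On the Java side, the collection property that this <set> maps would look roughly like the following; only the tracks set is implied by the mapping, and the id field and raw (pre-generics) Set type are assumptions for the sketch:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical DTO matching the album mapping above. Hibernate fills the
// "tracks" set via the ALBUM_ID foreign key on the track table; lazy="false"
// means it is populated eagerly when the album is loaded.
class AlbumDTO {
    private int id;
    private Set tracks = new HashSet();  // Set<TrackDTO> with Java 5 generics

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public Set getTracks() { return tracks; }
    public void setTracks(Set tracks) { this.tracks = tracks; }
}
```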


Here is a sample of a many-to-many mapping, where a user has a list of roles, and other users may have the same roles:

<set name="roles" table="usersrole" inverse="true">
  <key column="USERS_ID"/>
  <many-to-many
      column="Role_ID"
      class="com.storm.model.RoleDTO"/>
</set>


There is also a facility for defining “formula columns”: these are columns for which the generated query contains a SQL expression with restrictions, and the expression is run by the database.

<class name="Product" table="Product">
  <id name="productId"
      length="10">
    <generator class="assigned"/>
  </id>

  <property name="description"
            not-null="true"
            length="200"/>
  <property name="price" length="3"/>
  <property name="numberAvailable"/>

  <property name="numberOrdered">
    <formula>
      ( select sum(li.quantity)
        from LineItem li
        where li.productId = productId )
    </formula>
  </property>
</class>

The formula doesn’t have a column attribute, and never appears in SQL update or insert statements, only selects. Formulas may refer to columns of the database table, they can call SQL functions, and they may even include SQL subselects. The SQL expression is passed to the underlying database as is - so avoid database vendor specific syntax or functions, or the mapping file will become vendor specific.

Examples of Annotations

@Entity
@Table(name="language")
public class Language {

    private String isoCode;
    private String description;

    @Id
    @Column(name="code", length=5)
    public String getIsoCode() {
        return this.isoCode;
    }

    public void setIsoCode(String isoCode) {
        this.isoCode = isoCode;
    }

    public String getDescription() {
        return this.description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}


The table-level annotations specify the table name (which would default to the class name, anyway). The table-level annotation can also have a uniqueness constraint, which specifies the columns to be combined in forming the unique constraint. For instance, you might have:

@Table(name="Role", uniqueConstraints = @UniqueConstraint( columnNames = { "name" } ))

This would provide a database-level check of unique names for the roles.


Some of the most commonly-used relationships are many-to-one relationships. Suppose, in the above example, that each ModelPlane is associated with a PlaneType via a many-to-one relationship (in other words, each model plane is associated with exactly one plane type, although a given plane type can be associated with several model planes). You could map this as follows:



@ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE} )
@JoinColumn(name="planeType_id", nullable=false)
@ForeignKey(name="FK_PlaneType_Plane")
public PlaneType getPlaneType() {
    return planeType;
}



The CascadeType values indicate how Hibernate should handle cascading operations.

In this case, the join column is being given a name (if not specified, the join column would simply be “planeType”).

In this case, the index which is built for the relationship is being given a name (if not specified, the index name would be a cryptic string of hexadecimal digits, at least for the MySQL dialect).

Another commonly-used relationship is the opposite of the above: the one-to-many relationship, also known as a collection. Collections are complex beasts, both in old-style Hibernate mapping and when using annotations, and we'll just be skimming the surface here to give you an idea of how it's done.


For example, in the above example, each PlaneType object may contain a collection of ModelPlanes. You could map this as follows:

@OneToMany(mappedBy="planeType",
           cascade=CascadeType.ALL,
           fetch=FetchType.EAGER)
@OrderBy("name")
public List<ModelPlane> getModelPlanes() {
    return modelPlanes;
}

Many to Many with Join Tables

Suppose you are linking Products to Items in a many-to-many way, and also wish to record price on a pairwise combination basis. Use the following approach, which is based on page 304 of the “Java Persistence with Hibernate” book:


@Entity
@Table(name="ordered_lines")
public class OrderLine {

    @Embeddable
    public static class Id implements Serializable {

        @Column(name="order_id")
        private Long orderId;

        @Column(name="product_id")
        private Long productId;

        public Id() {}

        public Id(Long orderId, Long productId) {
            this.orderId = orderId;
            this.productId = productId;
        }

        public boolean equals(Object o) {
            // compare this and that Id properties (orderIds and productIds)
        }

        public int hashCode() {
            // combine the orderId and productId hashCode()s
        }
    }

    @EmbeddedId
    private Id id = new Id();

    @ManyToOne
    @JoinColumn(name="order_id",
                insertable=false,
                updatable=false)
    private Order order;

    @ManyToOne
    @JoinColumn(name="product_id",
                insertable=false,
                updatable=false)
    private Product product;

    // create attributes you want to save in the join table

    public OrderLine() {}

    public OrderLine(Order order, Product product /* other attributes */) {
        this.id.orderId = order.getId();
        this.id.productId = product.getId();

        // these wire up two one-to-many relationships,
        // one in each of Order and Product
        order.getOrderLines().add(this);
        product.getOrderLines().add(this);

        // set other attributes
    }

    // accessor methods
}


For reasons that we don't know, it is required that the annotations be placed on the field declarations, rather than on the get methods.

Id Generation

There are several different approaches, which were listed above.

In our examples, we have been using the "increment" mode; however, this will have a problem if there are multiple applications hitting the same database. Sequences or identity columns are really the way to go, since they are consistent across all users of the database.

On MySQL tables which have "auto-increment" in the id columns, you could use the "identity" approach, which would also be used by the "native" approach.

It is also possible to write your own id generator, using a Hibernate API.


There is a very important reference document on Id generation in Hibernate. It is located at "Don't Let Hibernate Steal Your Identity", and makes the following points:

- Unless you define hashCode() and equals() in a meaningful way, Hibernate will be unable to determine if two objects represent the same database row.

- Defining hashCode() and equals() to use the id fields of the object can be problematic, since objects which have not yet been persisted will all be treated as the same.

- Implementing hashCode() and equals() to have values that change when the fields are modified breaks a requirement of the Java Collections API.
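
The third point can be demonstrated with plain Java, with no Hibernate involved. The sketch below uses a hypothetical Track class (not from the article) whose hashCode() is based on a mutable field; once the field changes, a HashSet can no longer find the object:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical entity whose hashCode() is based on a mutable field.
class Track {
    String title;
    Track(String title) { this.title = title; }
    @Override public int hashCode() { return title.hashCode(); }
    @Override public boolean equals(Object o) {
        return (o instanceof Track) && ((Track) o).title.equals(this.title);
    }
}

public class HashCodeGotcha {
    public static void main(String[] args) {
        Set<Track> tracks = new HashSet<>();
        Track t = new Track("Russian Trance");
        tracks.add(t);

        // Mutating the field changes the hash code, so the set looks in
        // the wrong bucket and can no longer find the object it holds.
        t.title = "Video Killed the Radio Star";
        System.out.println(tracks.contains(t));   // prints "false"
    }
}
```

The set still contains the object, but contains() reports false: exactly the broken-contract behavior the article warns about when mutable fields feed hashCode().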


The article concludes by suggesting that you don't have Hibernate manage your ids, or use the generator in the database, but instead have code in your application which creates GUIDs whenever a persistable object is created.
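
That suggestion can be sketched with java.util.UUID: the application assigns the id at construction time, so equals()/hashCode() can rely on it even before the first save. The Auction class name here is illustrative, not from the article:

```java
import java.util.UUID;

// Sketch: the id is assigned by the application when the object is
// constructed, so identity is stable before and after persisting.
class Auction {
    private final String id = UUID.randomUUID().toString();

    public String getId() { return id; }

    @Override public int hashCode() { return id.hashCode(); }
    @Override public boolean equals(Object o) {
        return (o instanceof Auction) && ((Auction) o).id.equals(this.id);
    }
}

public class GuidDemo {
    public static void main(String[] args) {
        Auction a = new Auction();
        Auction b = new Auction();
        System.out.println(a.getId().length());  // 36-character UUID string
        System.out.println(a.equals(b));         // false: distinct ids
    }
}
```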



For describing a generated id column through annotations, use the following:

    @Id
    @Column(name = "users_id")
    @GenericGenerator(name = "generator", strategy = "increment")
    @GeneratedValue(generator = "generator")

Mapping Information used by Hibernate Tools

Most of our early examples have attributes such as "length" and "not-null" in the mapping files. It turns out that these are used only by the Hibernate tools, such as the ones that generate database schema or Java class source code. While most of the discussion here is about Core Hibernate, in which name, type, key, column, and table information is needed, one of the goals of the mapping file specification is that it includes ALL information needed to generate the schema or Java class source code. This matches the "middle-out" development approach.

Examples of other information that can be included:

    <id name="id" type="int" column="TRACK_ID">
        <meta attribute="scope-set">protected</meta>
        <generator class="native"/>
    </id>

    <property name="added" type="date">
        <meta attribute="field-description">When the track was added</meta>
    </property>

This will generate a protected set method for "id", and will add some comments in the header for the get method for "added".

Mapping Associations

Recall that in Hibernate jargon an entity is an object that stands on its own in the persistence mechanism: it can be created, queried, and deleted independently of any other objects, and therefore has its own persistent identity. A component, in contrast, is an object that can be saved to and retrieved from the database, but only as a subordinate part of some entity.

Collection associations are defined using the set tag. For example:

    <set name="bids"
         inverse="true"
         cascade="all-delete-orphan">
        <key column="item_Id" />
        <one-to-many class="Bid" />
    </set>


The outer-join attribute is deprecated. Use fetch="join" and fetch="select" instead of outer-join="true" and outer-join="false". Existing applications may continue to use the outer-join attribute, or may use a text search/replace to migrate to use of the fetch attribute.

To control cascaded deletes, use cascade="all-delete-orphan" on the association mapping. This means it is part of the <set> tag.


When a many-to-many association is used it is important to set the cascade options to "save-update". This is shown in the following example:

    <class name="com.incra.modules.user.model.User"
           table="users" lazy="false">

        <id name="userId" type="java.lang.Integer">
            <column name="USERS_ID" />
            <generator class="increment" />
        </id>

        <many-to-one
            name="organization"
            column="organizations_Id"
            class="com.incra.modules.user.model.OrganizationDTO"
            lazy="false" />

        <set name="roles" table="usersrole"
             cascade="save-update" lazy="false">
            <key column="USERS_ID" />
            <many-to-many
                column="Role_ID"
                class="com.incra.modules.user.model.Role" />
        </set>

        <property name="userName" type="java.lang.String">
            <column name="NAME" />
        </property>
    </class>


In this case, a user has a many-to-one relationship to an organization, and a many-to-many relationship to roles.

Over on the roles side, there is no reference to users for a role. This also implies that the "inverse" attribute is false (the default value) in the mapping shown above. If these settings are not established, then related data (such as the USERSROLE table) will not be updated correctly.

The basic association mappings that correspond are @OneToOne, @ManyToOne, @OneToMany, and @JoinTable, combined with a number of Hibernate-specific annotations such as org.hibernate.annotations.CollectionOfElements.

A collection can also be sorted via a specification in the Hibernate annotations.

Composite Keys

Here is an example in the mapping file:

    <hibernate-mapping>
        <class name="com.incra.modules.personalization.model.PresentationDTO"
               table="presentation" catalog="in" lazy="false">
            <composite-id>
                <key-property name="userId" column="users_id" />
                <key-property name="componentPath" column="componentPath" />
                <key-property name="presentationName" column="PRESENTATIONNAME" />
            </composite-id>

            <property name="fieldNames" type="java.lang.String">
                <column name="fieldNameList" />
            </property>
        </class>
    </hibernate-mapping>

In this case, the object has a three-part key, which is composed of an integer and two strings. We have used this approach quite successfully. We think that the compare method of the objects in question should be written to compare exactly these attributes, but we have not done that as yet. The discussion about composite keys is on page 303 of the "Java Persistence with Hibernate" book.
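
The comparison logic described above can be sketched in plain Java. The key class below is illustrative (only the field names are taken from the mapping); equals() and hashCode() compare exactly the three key attributes:

```java
import java.io.Serializable;
import java.util.Objects;

// Sketch of a three-part key class whose equals()/hashCode() compare
// exactly the key attributes, as the text recommends. The class name
// and structure are illustrative; field names follow the mapping.
class PresentationId implements Serializable {
    private final Integer userId;
    private final String componentPath;
    private final String presentationName;

    PresentationId(Integer userId, String componentPath, String presentationName) {
        this.userId = userId;
        this.componentPath = componentPath;
        this.presentationName = presentationName;
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PresentationId)) return false;
        PresentationId other = (PresentationId) o;
        return Objects.equals(userId, other.userId)
            && Objects.equals(componentPath, other.componentPath)
            && Objects.equals(presentationName, other.presentationName);
    }

    @Override public int hashCode() {
        return Objects.hash(userId, componentPath, presentationName);
    }
}

public class CompositeKeyDemo {
    public static void main(String[] args) {
        PresentationId a = new PresentationId(1, "home/summary", "default");
        PresentationId b = new PresentationId(1, "home/summary", "default");
        System.out.println(a.equals(b));  // prints "true": same key parts
    }
}
```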

Custom Types

Hibernate allows you to define a new data type to be used in the mapping model, and register converter methods for that data type with the Hibernate SessionFactory. These converters work between the in-memory representation of the custom type and the persisted field or fields, which may be strings, integers, etc.

Most of the relevant discussion is located in Chapter 5 of the book.

Polymorphic Mapping

This topic deals with the issue of having a class hierarchy for your Java objects, but having no equivalent facility in the relational database.

There are several approaches, which are covered in detail in Chapter 5 of the book:

- Table per concrete class with implicit polymorphism
- Table per concrete class with unions
- Table per class hierarchy
- Table per subclass

In the first option, "table per concrete class with implicit polymorphism", there is no knowledge of the class structure provided to the database or even the mappings. You simply write distinct mapping files for each class. This is a common starting point.

In the second option, "table per concrete class with unions", there is a Hibernate mapping that includes the superclass, such as:

    <class name="BillingDetails" abstract="true">
        <id name="id" />
        <property name="name" column="OWNER" />

        <union-subclass name="CreditCard" table="CREDIT_CARD">
            <property name="number" />
            <property name="expMonth" />
        </union-subclass>

        <union-subclass name="BankAccount" table="BANK_ACCOUNT">
            <property name="acctId" />
            <property name="branchLoc" />
        </union-subclass>
    </class>


In the third option, "table per class hierarchy", all of the classes of the hierarchy are collapsed into one table, with a discriminator column. Here is an example:

    <class name="Work" table="works" discriminator-value="W">
        <id name="id" column="id">
            <generator class="native"/>
        </id>
        <discriminator column="type" type="character"/>

        <property name="title"/>
        <set name="authors" table="author_work">
            <key column="work_id"/>
            <many-to-many class="Author" column="author_id"/>
        </set>

        <subclass name="Book" discriminator-value="B">
            <property name="text"/>
        </subclass>

        <subclass name="Song" discriminator-value="S">
            <property name="tempo"/>
            <property name="genre"/>
        </subclass>
    </class>



The final option, "table per subclass", is to represent inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties (including abstract classes and even interfaces) has its own table.

Unlike the table per concrete class strategy described first, the table here contains columns only for the non-inherited properties, and the tables are joined at fetch time to match the class is-a relationships. This approach can create more tables and more complex joins, and should be used cautiously.

Persistent Enumerated Types

If you use JDK 5.0, you can use the built-in support for type-safe enumerations. For example, a Rating class might look like:

    package auction.model;

    public enum Rating {EXCELLENT, OK, BAD}

The Comment class has a property of this type:

    public class Comment {
        public Rating rating;
        public Item auction;
    }

This is how you use the enumeration in the application code:

    Comment goodComment = new Comment(Rating.EXCELLENT, thisAuction);

You can have Hibernate handle the mapping of the possible values into strings or integers in the database. In earlier versions of Hibernate you had to write a type handler for this, but in 3.2.x, you can just use the annotation @Enumerated. For more information, see our document "Notes on Java Enum Construct".
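
The choice between string and integer storage corresponds to the enum's name and ordinal. The plain-Java sketch below shows what each mode would put in the column for the Rating enum from the text (no persistence library is involved here):

```java
// What the string mode stores is the enum constant's name, and what the
// integer mode stores is its position (ordinal). Shown with plain Java,
// using the Rating enum from the text.
enum Rating { EXCELLENT, OK, BAD }

public class EnumPersistenceDemo {
    public static void main(String[] args) {
        Rating r = Rating.EXCELLENT;
        System.out.println(r.name());     // "EXCELLENT": the string-mode column value
        System.out.println(r.ordinal());  // 0: the integer-mode column value

        // Reading a string value back from the database:
        System.out.println(Rating.valueOf("EXCELLENT") == Rating.EXCELLENT); // true
    }
}
```

Note that ordinal storage is fragile: inserting a new constant before an existing one silently changes the stored meaning, which is one reason string storage is often preferred.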

Working with Objects

This material is derived from Chapter 9 of the book, which deals with the persistence lifecycle. In many respects, the mapping file describes the static aspects of the Object/Relational mismatch; here we are dealing with the behavioral aspects, such as "when does an object get created? Or saved? Or restored?" Chapter 9 is the first part of material called "Conversational Object Processing". The conversation is between the application and the database, over time.

Different ORM solutions use different terminology and define different states and state transitions for the persistence lifecycle.



The diagram can be summarized as follows:

- Objects created with the new operator are not immediately persisted. Their state is transient.

- A persistent instance is an entity instance with a database identity, such as having the id's filled in.

- An object is in the removed state if it has been scheduled for deletion at the end of a unit of work, but it's still managed by the persistence context until the unit of work completes.

- A detached object is one that was read from a persistence session, but the session has now been closed.

Operations on Objects

Loading of objects is done through the query or criteria objects, as described above.

The most important concept to understand is that of lazy fetching. Associations are lazy by default. If you try to fetch a contained object that was lazy loaded, and the session is already closed (in other words, the object is "detached"), you will get a LazyInitializationException.

Since it is best practice to map almost all classes and collections using lazy="true", that is the default.

Here are the two most common operations on an object:

    session.save(object);
    session.delete(object);

The session.save() operation works for either new objects or persisted objects that are being updated.

As indicated above, these calls only work inside a transaction. We learned this the hard way.

There are a variety of options that can be specified in the mapping file to determine how Hibernate should generate the keys of new objects being persisted, or how Hibernate should handle the updating or deleting of objects associated with the object passed to save() or delete(). These are called "cascade" options, and are discussed below.

In the JPA API, this is written as EntityManager.persist(myEntity).

One of the most common errors incurred when using the save() function is "detached entity passed to persist". There is a good explanation of this at https://forum.hibernate.org/viewtopic.php?p=2404032. The authors point out that the error message is unclear; it might be better expressed as "cannot persist entity with identifier set manually AND via IdentifierGenerator". For instance, we got this error when the object being saved accidentally had a value in its id field.

Further Discussion of Fetch Modes

One of the most useful facilities in Hibernate is that although the fetch mode is specified in the annotations or the mapping file, it can be overridden for a specific query. Use the setFetchMode method on the Criteria object. This takes the following parameters:

    associationPath - a dot-separated property path
    mode - the fetch mode for the referenced association

Typically, you set up the fetch modes to be lazy in the annotations and mapping files, and then turn on FetchMode.JOIN only when needed.

The meanings of the FetchModes are:

    FetchMode.DEFAULT - default to the setting configured in the mapping file.
    FetchMode.EAGER - deprecated; use FetchMode.JOIN.
    FetchMode.JOIN - fetch using an outer join.
    FetchMode.LAZY - deprecated; use FetchMode.SELECT.
    FetchMode.SELECT - fetch eagerly, using a separate select.

In the Incra code framework, there was a method that accepted a list of field paths and would examine each one (going to the next-to-last field in the dotted set) to see what associationPaths needed to have FetchMode.JOIN turned on. Then it would generate a criteria object, set the paths, and return the object.
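
The path derivation that framework method performs can be sketched in plain Java: drop the last segment of the dotted field path to get the association path to join on. The method and class names below are illustrative, not from the Incra code:

```java
// Sketch of deriving the association path from a dotted field path:
// drop the last segment, so "planeType.manufacturer.name" yields
// "planeType.manufacturer". Names here are illustrative.
public class FetchPathUtil {
    static String associationPath(String fieldPath) {
        int lastDot = fieldPath.lastIndexOf('.');
        // A path with no dot names a direct field: nothing to join.
        return lastDot < 0 ? null : fieldPath.substring(0, lastDot);
    }

    public static void main(String[] args) {
        // The result is what you would pass to setFetchMode(path, FetchMode.JOIN).
        System.out.println(associationPath("planeType.manufacturer.name"));
        // prints "planeType.manufacturer"
        System.out.println(associationPath("name")); // prints "null"
    }
}
```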

Generating Keys

There are a variety of approaches which are available for generating keys or ids, with different overhead and different behavior. The most commonly used are:

- native: the native generator picks other generators, such as identity, sequence, or hilo, depending upon the capabilities of the database. Use this for portability.

- identity: supports the identity columns in MySQL, DB2, MS SQL Server, and Sybase.

- sequence: creates a sequence in DB2, PostgreSQL, Oracle, and SAP DB, and uses sequential values from it.

- increment: at startup, reads the maximum current value, then generates following values.

Most database schema design literature goes through a discussion of why keys should not be related to a business domain function (such as social security numbers), because even though they appear to be unique, what happens if they are changed or replaced?

Transaction Management

In this chapter, we finally talk about transactions and how you create and control units of work in an application. We'll show you how transactions work at the lowest level (the database) and how you work with transactions in an application that is based on native Hibernate, on Java Persistence, and with or without Enterprise JavaBeans.

Transactions allow you to set the boundaries of a unit of work: an atomic group of operations. They also help you isolate one unit of work from another unit of work in a multiuser application. We talk about concurrency and how you can control concurrent data access in your application with pessimistic and optimistic strategies.

Transactions group many operations into a single unit of work. If any operation in the batch fails, all of the previous operations are rolled back, and the unit of work stops. Hibernate can run in many different environments supporting various notions of transactions. Standalone applications and some application servers only support simple JDBC transactions, whereas others support the Java Transaction API (JTA).

Nonmanaged Environment

Hibernate needs a way to abstract the various transaction strategies from the environment. Hibernate has its own Transaction class that is accessible from the Session interface, demonstrated here:

    Session session = factory.openSession();
    Transaction tx = session.beginTransaction();

    Event event = new Event();
    //...populate the Event instance
    session.saveOrUpdate(event);

    tx.commit();

In this example, factory is an initialized SessionFactory instance. This code creates an instance of the org.hibernate.Transaction class and then commits the Transaction instance.

In a nonmanaged environment, the JDBC API is used to mark transaction boundaries. You begin a transaction by calling setAutoCommit(false) on a JDBC Connection and end it by calling commit(). You may, at any time, force an immediate rollback by calling rollback().

Notice that you don't need to call session.flush(). Committing a transaction automatically flushes the Session object. The Event instance is persisted to the database when the transaction is committed. The transaction strategy you use (JDBC or JTA) doesn't matter to the application code; it's set in the Hibernate configuration file.

Managed Environment

In a managed environment, the JTA transaction manager is in control of the transactions. The following are benefits of managed resources with JTA and the reasons to use this J2EE service (it is available with both JBoss and Spring):

- A transaction-management service can unify all resources, no matter what type, and expose transaction control to you with a single standardized API. This means that you can replace the Hibernate Transaction API and use JTA directly everywhere. It's then the responsibility of the application deployer to install the application on (or with) a JTA-compatible runtime environment. This strategy moves portability concerns where they belong; the application relies on standardized Java EE interfaces, and the runtime environment has to provide an implementation.

- A Java EE transaction manager can enlist multiple resources in a single transaction. If you work with several databases (or more than one resource), you probably want a two-phase commit protocol to guarantee atomicity of a transaction across resource boundaries. In such a scenario, Hibernate is configured with several SessionFactorys, one for each database, and their Sessions obtain managed database connections that all participate in the same system transaction.

- The quality of JTA implementations is usually higher compared to simple JDBC connection pools. Application servers and stand-alone JTA providers that are modules of application servers usually have had more testing in high-end systems with a large transaction volume.

- JTA providers don't add unnecessary overhead at runtime (a common misconception). The simple case (a single JDBC database) is handled as efficiently as with plain JDBC transactions. The connection pool managed behind a JTA service is probably much better software than a random connection pooling library you'd use with plain JDBC.

Transaction boundaries can be specified by programmatic calls, or by declarations to the container. The latter looks like one of the following annotations:

    @TransactionAttribute(TransactionAttributeType.REQUIRED)

Here is a full table of them (in the book, this is on page 513, in the chapter on conversations).


Conversation Management

Conversation management is a way of understanding the persistence and session management policies associated with a sequence of operations on the database. There are no objects that are called "conversations", however. Conversations are controlled by creation and deletion of session objects, and by attachment and detachment of persistence objects to and from the session.

An example would be to have a conversation that represents the sequence of steps to edit an object through a GUI, including the checking for conflicting changes by another user if the system is multi-user (as are all web-based applications).

There are three typical control policies:

- No session or transaction
- Hibernate session level
- User session-level

Chapter 9 of Hibernate: a Developer's Guide describes the following:

Hibernate supports several different models for detecting this sort of versioning conflict (or optimistic locks). The versioning strategy uses a version column in the table. You can use either a version property tag or a timestamp property tag to indicate a version column for the class (see the appropriate tag for more information). When a record is updated, the versioning column is automatically updated as well. This versioning strategy is generally considered the best way to check for changes, both for performance and for compatibility with database access that occurs outside of Hibernate. For example, you can simply issue a SELECT to obtain a record by id, and include the version column in the WHERE clause to indicate the specific record you wish to UPDATE; a failure to update is an easy way to detect that the record is not properly synchronized. This is precisely the functionality provided by the StaleObjectException.

The dirty strategy only compares the columns that need to be updated. For example, let's say you load a Cat object, with a name ("Tom"), weight, and color. You change the name to "Bob" and want to save the change. Hibernate will verify that the current Cat name is "Tom" before updating the name to "Bob." This is computationally intensive compared to the versioning strategy, but can be useful in situations in which a version column is not feasible.

The all strategy compares all the columns, verifying that the entire Cat is as it was when loaded before saving. Using the Cat example in the preceding paragraph, Hibernate will verify that the name, weight, and color are as loaded before persisting the change to the name. While more secure than the dirty strategy, this strategy, too, is more computationally intensive.

Finally, the none strategy can be used to ignore optimistic locking. Updated columns are updated, period. This obviously performs better than any other strategy but is likely to lead to problems if you have more than a single user updating related records.

Generally speaking, using a versioning strategy to manage conflicts is recommended. The pessimistic model, unless very carefully managed, can lead to more problems than it solves, and the dirty and all mechanisms are not very high-performance.
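
The version-column check described above can be simulated in plain Java: an update succeeds only when the caller's expected version matches the stored one, just as a "... WHERE id = ? AND version = ?" UPDATE matches zero rows when the record is stale. All names in this sketch are illustrative:

```java
// Simulation of a version-column optimistic check. A real UPDATE with
// the version in its WHERE clause would behave the same way: zero rows
// updated signals a stale object. All names here are illustrative.
public class VersionCheckDemo {
    private int storedVersion = 1;
    private String storedName = "Tom";

    // Returns true if the "row" was updated (the WHERE clause matched).
    boolean update(String newName, int expectedVersion) {
        if (expectedVersion != storedVersion) {
            return false;               // stale: someone else updated first
        }
        storedName = newName;
        storedVersion++;                // the version column is bumped automatically
        return true;
    }

    public static void main(String[] args) {
        VersionCheckDemo row = new VersionCheckDemo();
        System.out.println(row.update("Bob", 1));  // true: versions matched
        System.out.println(row.update("Sam", 1));  // false: version is now 2
    }
}
```

The second caller, still holding version 1, fails its update and can then reload and retry, which is the conflict-detection behavior the versioning strategy provides.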

Modifying Objects Efficiently

One could certainly persist each object in a tree or other graph of objects, by calling session.saveOrUpdate() on each one. However, this would be tedious, and it would place a burden on the programmer to keep track of which objects are changed, new, deleted from the graph, etc.

Instead, Hibernate can carry out this traversal, and maintain transitive persistence.

It will follow each association, according to the cascade mode. These are annotated on each association, and have the following values (XML attribute, with the corresponding annotation in parentheses):

- none (the default): Hibernate ignores the association.

- save-update (org.hibernate.annotations.CascadeType.SAVE_UPDATE): Hibernate navigates the association when the Session is flushed and when an object is passed to save() or update(), and saves newly instantiated transient instances and persists changes to detached instances.

- persist (javax.persistence.CascadeType.PERSIST): Hibernate makes any associated transient instance persistent when an object is passed to persist(). If you use native Hibernate, cascading occurs only at call-time. If you use the EntityManager module, this operation is cascaded when the persistence context is flushed.

- merge (javax.persistence.CascadeType.MERGE): Hibernate navigates the association and merges the associated detached instances with equivalent persistent instances when an object is passed to merge(). Reachable transient instances are made persistent.

- delete (org.hibernate.annotations.CascadeType.DELETE): Hibernate navigates the association and deletes associated persistent instances when an object is passed to delete() or remove().

- remove (javax.persistence.CascadeType.REMOVE): This option enables cascading deletion to associated persistent instances when an object is passed to remove() or delete().

- lock (org.hibernate.annotations.CascadeType.LOCK): This option cascades the lock() operation to associated instances, reattaching them to the persistence context if the objects are detached. Note that the LockMode isn't cascaded; Hibernate assumes that you don't want pessimistic locks on associated objects, for example, because a pessimistic lock on the root object is good enough to avoid concurrent modification.

- replicate (org.hibernate.annotations.CascadeType.REPLICATE): Hibernate navigates the association and cascades the replicate() operation to associated objects.

- evict (org.hibernate.annotations.CascadeType.EVICT): Hibernate evicts associated objects from the persistence context when an object is passed to evict() on the Hibernate Session.

- refresh (javax.persistence.CascadeType.REFRESH): Hibernate rereads the state of associated objects from the database when an object is passed to refresh().

- all (javax.persistence.CascadeType.ALL): This setting includes and enables all cascading options listed previously.

- delete-orphan (org.hibernate.annotations.CascadeType.DELETE_ORPHAN): This extra and special setting enables deletion of associated objects when they're removed from the association, that is, from a collection. If you enable this setting on an entity collection, you're telling Hibernate that the associated objects don't have shared references and can be safely deleted when a reference is removed from the collection.
Examples

Many to one

    @ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
    @org.hibernate.annotations.Cascade(org.hibernate.annotations.CascadeType.SAVE_UPDATE)
    @JoinColumn(name = "organizationtype_id", nullable = false)
    @ForeignKey(name = "FK_ORG_ORGTYPE")
    public OrganizationType getOrganizationType() {
        return organizationType;
    }
    public void setOrganizationType(OrganizationType organizationType) {
        this.organizationType = organizationType;
    }


One to Many

    @OneToMany(mappedBy="organization", cascade=CascadeType.ALL, fetch=FetchType.LAZY)
    @org.hibernate.annotations.Cascade(org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
    public List<Facility> getFacilities() {
        return facilities;
    }
    public void setFacilities(List<Facility> facilities) {
        this.facilities = facilities;
    }


Here is the other side of the One to Many:

    @ManyToOne()
    @JoinColumn(name="organization_id", nullable=false)
    public Organization getOrganization() {
        return organization;
    }
    public void setOrganization(Organization organization) {
        this.organization = organization;
    }

Queries and Criteria

While we will later see an approach to querying objects using a SQL-like query language, the simplest approach to fetching that uses an object-oriented interface, and doesn't require learning a query language, is the Criteria class. (Note that the Java Persistence with Hibernate book focuses on the SQL-like query language before discussing the Criteria class. This appears to be for the benefit of EJB3-oriented and native SQL-oriented developers who are used to a statement-string-based representation. But we began to focus on object-oriented query languages several years back, and this document reflects such.)

A Criteria object is a fetch specification which is built, restriction clause by restriction clause, using an object-oriented API. Here is an example:

    public static List getTracks(Time length) {
        Criteria crit = session.createCriteria(Track.class);

        crit.add(Restrictions.eq("categoryId", 456));
        crit.add(Restrictions.le("playTime", length));
        crit.addOrder(Order.asc("title"));

        return crit.list();
    }

(Note that Restrictions has replaced the semi-deprecated class Expression. The example above is correct.)

From this example, you can conclude that an object-oriented fetch specification starts with the class to be fetched, which in turn must have been field-mapped. Then we add material for the specific field-level restrictions. Also, we added information about the ordering of results.


The C
riteria object is executed using one of the two routines:



Object result = criteria.uniqueResult(); // Throws Hibernate excep
tion


// if more than 1 match for


// criteria


List list = criteria.list();


The Criteria object is one of the most useful pieces in Hibernate, as many of our programs (particularly on the Summary screens) need to programmatically construct restriction specifications. It is better than the string-building approach that we used with O/R mapping tools that didn't have this facility. This also relates to the topic of when to use EJB3/JPA vs. when to use Hibernate: you may want to use Hibernate in this case, since the API-based Criteria-building facilities are not present in JPA as of its current version.


Further discussion of how to use the Criteria object fol
lows. Since the Criteria is based on a target class, the
fields
that are being restricted on are directly in the target class. In order to add restrictions on subobjects, first build a sub
-
criteria, as in:


List cats = sess.createCriteria(Cat.class)

.add
( Restrictions.like("name", "F%") )

.createCriteria("kittens")

.add( Restrictions.like("name", "F%") )

.list();


Note that the second
createCriteria()
returns a new instance of
Criteria
, which refers to the elements of the

kittens
collection.


From a sourc
e
-
code perspective, this might be clearer as:


Criteria crit1 = sess.createCriteria(Cat.class);

crit1.add( Restrictions.like("name", "F%") );


Criteria crit2 = crit1.createCriteria("kittens");

c
rit2.add( Restrictions.like("name", "F%") );


List cats = crit
1.list();


The following, alternate form is useful in certain circumstances:

    List cats = sess.createCriteria(Cat.class)
        .createAlias("kittens", "kt")
        .createAlias("mate", "mt")
        .add( Restrictions.eqProperty("kt.name", "mt.name") )
        .list();


Note that createAlias() does not create a new instance of Criteria; instead, this is similar to creating aliases for tables in a SQL expression.


Both the createAlias form and the creation of subcriteria allow for the specification of the join type. By default this is an inner join, but it can be changed to a left join using CriteriaSpecification.LEFT_JOIN.


There are also facilities for creating "And" and "Or" groupings of restrictions, as in:

    Criteria crit = session.createCriteria(Track.class);
    Disjunction any = Restrictions.disjunction();
    any.add(Restrictions.le("playtime", length));
    any.add(Restrictions.like("title", "%A%"));
    crit.add(any);


Then there is a SQLRestriction facility, which allows a SQL restriction to be added to the where clause:

    Criteria crit = session.createCriteria(Track.class);
    crit.add(Restrictions.sqlRestriction
        ("'100' > all (select b.AMOUNT from BID b where b.ITEM_ID = 456)"));

Chapter 15 of the "Java Persistence with Hibernate" book explains more of this.


In addition, there are facilities to specify sort order in the Criteria object. This again is often used in Summary or reporting screens.

Finally, there is a "query by example" facility. This works by creating a Criteria object, then creating an example object and adding it to the criteria. Here is an example:

    public List findUsersByExample(User u) {
        Criteria crit = session.createCriteria(User.class);
        Example e = Example.create(u);
        e.ignoreCase();
        e.excludeProperty("password");
        crit.add(e);
        return crit.list();
    }


The benefit of this approach is that one could also add sort-ordering conditions to the criteria before the call to crit.list().

One gotcha that we have seen here is that Hibernate's query-by-example facility only generates restrictions on the non-id fields of the template object. It also skips the association fields.

Advanced Queries and Criteria

Reporting queries are a different way to use Hibernate Criteria objects. You can select exactly which objects or properties of objects you need in the query result, and whether you want to aggregate and group results for a report.

Reporting queries are described through Criteria objects in a very similar way, with the ability to specify joins and sorts, but they return field values rather than Java objects. A reporting query will have a projection specified on the Criteria object.


The following criteria query (from the Hibernate book) returns only the identifier values of the items:

    session.createCriteria(Item.class)
        .add( Restrictions.gt("endDate", new Date()) )
        .setProjection( Projections.id() );

The trick here is the setProjection() method, which changes the Criteria object into a reporting query. While id() is a built-in function in the Projections class, there is another that will find a named field:



    session.createCriteria(Item.class)
        .add( Restrictions.gt("endDate", new Date()) )
        .setProjection( Projections.projectionList()
            .add( Projections.id() )
            .add( Projections.property("description") )
            .add( Projections.property("initialPrice") ) );


Note that even if projections are added to the subcriteria, they are actually added to the main criteria object. Hence, if they wish to refer to fields in the join specified by the subcriteria, they must use a field-name path that includes an alias.


The result of a projection-based query is a set of Object[] rows, rather than objects of a specific Java class. However, this can be changed by using the ResultTransformer called AliasToBeanResultTransformer, which takes a Java class as its argument.


Aggregation and grouping can also be carried out through additional modifiers on the setProjection() method's arguments. For instance, you can add Projections.groupProperty("u.userName"), Projections.count("id"), Projections.avg("amount"), Projections.min("cost"), etc.

The effect of adding the projections is to add a set of projection clauses to the end of the SQL statement (such as "group by").


The result of using this query is a set of rows of Object arrays that are laid out just as the grid-like table would be directly from the database. There is no hierarchy of rows to indicate groups, etc. This means, however, that if we want to display a set of raw data records as well as the groups and aggregations, we need to make two queries (most of the reporting engines are doing this behind the scenes).

However, the API provided by Hibernate is a much more economical way to create the SQL statement than writing your own SQL-clause builder.

Using HQL

Now we turn to the string-based representation of a query, called HQL. The simplest form of a query is a Query object, which takes an HQL expression. HQL is like SQL, but is more tuned to the objects that are mapped to the tables. For instance, you use Java class names instead of table names.

An example of using a Query object is:

    Query tenantQuery = session.createQuery("from dto.Tenant as tenant");

A query is carried out by calling its "list" method. This returns a java.util.List object, which can be iterated over.



The HQL of a query can be parameterized, either by position or by keyword. Here is an example of one by position:

    Query query = session.createQuery
        ("from Item item where item.description like ? and item.type = ?");
    query.setString(0, "northern");
    query.setInteger(1, 40);


Here is one by keyword (the setter names must match the placeholders):

    Query query = session.createQuery
        ("from Item item where item.description like :desc and item.type = :type");
    query.setString("desc", "northern");
    query.setInteger("type", 40);


The query string of HQL can be located in the application program, or it can be a named query in the Hibernate mapping file. An example would be:

    <query name="findItemsByDescription">
        <![CDATA[from Item item where item.description like :description]]>
    </query>


A query can be evaluated to return a list or a single object. The Query object (like the Criteria object) is executed using one of two routines:

    Object result = query.uniqueResult(); // Throws a Hibernate exception
                                          // if more than 1 match for query

    List list = query.list();

Lazy and Non-Lazy Loading

Another important concept is lazy and non-lazy loading of related records. By default, Hibernate never loads data that you didn't ask for. You can either ask for this in the master configuration file, or within each query criteria.

An important concept is that of lazy references, which allow Hibernate to fetch an object that is associated with other objects (including child objects), and not fetch the child objects until they are referred to by the application code. However, this means that the session object that was used for the query of the original object must still be open, as Hibernate will use it to fetch on demand. This means that application designers must plan for the sessions to be long-lived, or determine a fetch strategy of marking some references lazy="false" so that all needed objects are provided. The lazy fetch mode is specified in the mapping file. This is discussed further below.
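
As a sketch of what this looks like in a mapping file (the Event/attendees names are reused from the caching examples later in these notes purely for illustration; the real element contents are elided), an association can be forced to load eagerly with lazy="false":

```xml
<class name="Event" table="events">
    <!-- lazy="false": load the attendees collection together with the Event,
         so it is usable even after the Session has been closed -->
    <set name="attendees" lazy="false">
        ...
    </set>
</class>
```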

Cache Management

Cache Providers

As we mentioned earlier, caching is a common method used to improve application performance. Caching can be as simple as having a class store frequently used data, or a cache can be distributed among multiple computers. The logic used by caches can also vary widely, but most use a simple least recently used (LRU) algorithm to determine which objects should be removed from the cache after a configurable amount of time.


Before you get confused, let's clarify the difference between the Session-level cache, also called the first-level cache, and what this section covers. The Session-level cache stores object instances for the lifetime of a given Session instance. The caching services described in this section cache data outside of the lifetime of a given Session. Another way to think about the difference is that the Session cache is like a transactional cache that only caches the data needed for a given operation or set of operations, whereas a second-level cache is an application-wide cache.


By default, Hibernate supports four different caching services, listed in Table 2. EHCache (Easy Hibernate Cache) is the default service. If you prefer to use an alternative cache, you need to set the cache.provider_class property in the hibernate.cfg.xml file:

    <property name="cache.provider_class">
        org.hibernate.cache.OSCacheProvider
    </property>

This snippet sets the cache provider to the OSCache caching service.


Table 2: Caching Services Supported by Hibernate

    Caching Service    Provider Class                            Type
    EHCache            org.hibernate.cache.EhCacheProvider       Memory, disk
    OSCache            org.hibernate.cache.OSCacheProvider       Memory, disk
    SwarmCache         org.hibernate.cache.SwarmCacheProvider    Clustered
    TreeCache          org.hibernate.cache.TreeCacheProvider     Clustered


The caching services support the caching of classes as well as collections belonging to persistent classes. For instance, suppose you have a large number of Attendee instances associated with a particular Event instance. Instead of repeatedly fetching the collection of Attendees, you can cache it. Caching for classes and collections is configured in the mapping files, with the cache element:

    <class name="Event" table="events">
        <cache usage="read-write"/>
        ...
    </class>


Collections can also be cached:

    <set name="attendees">
        <cache usage="read-write"/>
        ...
    </set>


Once you've chosen a caching service, what do you, the developer, need to do differently to take advantage of cached objects? Thankfully, you don't have to do anything. Hibernate works with the cache behind the scenes, so concerns about retrieving an outdated object from the cache can be avoided. You only need to select the correct value for the usage attribute.


The usage attribute specifies the caching concurrency strategy used by the underlying caching service. The previous configuration sets the usage to read-write, which is desirable if your application needs to update data. Alternatively, you may use the nonstrict-read-write strategy if it's unlikely two separate transaction threads could update the same object. If a persistent object is never updated, only read from the database, you may set usage to read-only.
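
As an illustrative fragment (the Location class name is borrowed from the sample mapping files later in these notes; the table name is an assumption), a reference-data class that is only ever read could be mapped as:

```xml
<class name="Location" table="locations">
    <!-- read-only: valid only because Location rows are never updated -->
    <cache usage="read-only"/>
    ...
</class>
```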


Some caching services, such as the JBoss TreeCache, use transactions to batch multiple operations and perform the batch as a single unit of work. If you choose to use a transactional cache, you may set the usage attribute to transactional to take advantage of this feature. If you happen to be using a transactional cache, you'll also need to set the transaction.manager_lookup_class mentioned in the previous section.


The supported caching strategies differ based on the service used. Table 3 shows the supported strategies.


Table 3: Supported Caching Service Strategies

    Caching Service    Read-only    Read-write    Nonstrict-read-write    Transactional
    EHCache            Y            Y             Y                       N
    OSCache            Y            Y             Y                       N
    SwarmCache         Y            Y             Y                       N
    TreeCache          Y            N             N                       Y


Clearly, the caching service you choose will depend on your application requirements and environment.

Configuring EHCache

By now you're probably tired of reading about configuring Hibernate, but EHCache is pretty simple. It's a single XML file, placed in a directory listed in your classpath. You'll probably want to put the ehcache.xml file in the same directory as the hibernate.cfg.xml file.




    <ehcache>
        <diskStore path="java.io.tmpdir"/>
        <defaultCache
            maxElementsInMemory="10"
            eternal="false"
            timeToIdleSeconds="120"
            timeToLiveSeconds="120"
            overflowToDisk="true"/>
        <cache name="com.manning.hq.ch03.Event"
            maxElementsInMemory="20"
            eternal="false"
            timeToIdleSeconds="120"
            timeToLiveSeconds="180"
            overflowToDisk="true"/>
    </ehcache>


In this example, the diskStore property sets the location of the disk cache store. Then, the listing declares two caches. The defaultCache element contains the settings for all cached objects that don't have a specific cache element: the number of cached objects held in memory, whether objects in the cache expire (if eternal is true, then objects don't expire), the number of seconds an object should remain in the cache after it was last accessed, the number of seconds an object should remain in the cache after it was created, and whether objects exceeding maxElementsInMemory should be spooled to the diskStore. Next, for custom settings based on the class, the code defines a cache element with the fully qualified class name listed in the name attribute. (This listing only demonstrates a subset of the available configuration for EHCache. Please refer to the documentation found at http://ehcache.sf.net for more information.)

Using Hibernate-Managed JDBC Connections

The sample configuration file below uses Hibernate-managed JDBC connections. You would typically encounter this configuration in a non-managed environment, such as a standalone application.


    <?xml version="1.0"?>
    <!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">

    <hibernate-configuration>
        <session-factory>
            <property name="connection.username">uid</property>
            <property name="connection.password">pwd</property>
            <property name="connection.url">jdbc:mysql://localhost/db</property>
            <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
            <property name="dialect">org.hibernate.dialect.MySQLDialect</property>

            <mapping resource="com/manning/hq/ch03/Event.hbm.xml"/>
            <mapping resource="com/manning/hq/ch03/Location.hbm.xml"/>
            <mapping resource="com/manning/hq/ch03/Speaker.hbm.xml"/>
            <mapping resource="com/manning/hq/ch03/Attendee.hbm.xml"/>
        </session-factory>
    </hibernate-configuration>


To use Hibernate-provided JDBC connections, the configuration file requires the following five properties:

    connection.driver_class - The JDBC driver class for the specific database
    connection.url          - The full JDBC URL to the database
    connection.username     - The username used to connect to the database
    connection.password     - The password used to authenticate the username
    dialect                 - The name of the SQL dialect for the database

The connection properties are common to any Java developer who has worked with JDBC in the past. Since you're not specifying a connection pool, which we cover later in this chapter, Hibernate uses its own rudimentary connection-pooling mechanism. The internal pool is fine for basic testing, but you shouldn't use it in production.



The dialect property tells Hibernate which SQL dialect to use for certain operations. Although not strictly required, it should be used to ensure Hibernate Query Language (HQL) statements are correctly converted into the proper SQL dialect for the underlying database.

The dialect property tells the framework whether the given database supports identity columns, altering relational tables, and unique indexes, among other database-specific details. Hibernate ships with more than 20 SQL dialects, supporting each of the major database vendors, including Oracle, DB2, MySQL, and PostgreSQL.


Hibernate also needs to know the location and names of the mapping files describing the persistent classes. The mapping element provides the name of each mapping file as well as its location relative to the application classpath. There are different methods of configuring the location of the mapping file, which we'll examine later.

Connection Pools

Connection pools are a common way to improve application performance. Rather than opening a separate connection to the database for each request, the connection pool maintains a collection of open database connections that are reused. Application servers often provide their own connection pools using a JNDI DataSource, which Hibernate can take advantage of when configured to use a DataSource.
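
As an illustrative sketch, Hibernate's connection.datasource property points it at such a container-managed DataSource (the JNDI name below is an assumption, not from the original notes):

```xml
<!-- replaces the connection.url/driver_class properties;
     the pool itself is managed by the application server -->
<property name="connection.datasource">java:comp/env/jdbc/EventDS</property>
```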


If you're running a standalone application or your application server doesn't support connection pools, Hibernate supports three connection-pooling services: C3P0, Apache's DBCP library, and Proxool. C3P0 is distributed with Hibernate; the other two are available as separate distributions.


When you choose a connection pooling service, you must configure it for your environment. Hibernate supports configuring connection pools from the hibernate.cfg.xml file. The connection.provider_class property sets the pooling implementation:

    <property name="connection.provider_class">
        org.hibernate.connection.C3P0ConnectionProvider
    </property>


Once the provider class is set, the specific properties for the pooling service can also be configured from the hibernate.cfg.xml file:

    <property name="c3p0.minPoolSize">5</property>
    ...
    <property name="c3p0.timeout">1000</property>


As you can see, the prefix for the C3P0 configuration parameters is c3p0. Similarly, the prefixes for DBCP and Proxool are dbcp and proxool, respectively. Specific configuration parameters for each pooling service are available in the documentation with each service. Table 1 lists information for the supported connection pools.


Hibernate ships with a basic connection pool suitable for development and testing purposes. However, it should not be used in production. You should always use one of the available connection pooling services, like C3P0, when deploying your application to production.

If your preferred connection pool API isn't currently supported by Hibernate, you can add support for it by implementing the org.hibernate.connection.ConnectionProvider interface. Implementing the interface is straightforward.


Table 1: Supported Connection Pooling Services

    Pooling Service    Provider Class                                        Configuration Prefix
    C3P0               org.hibernate.connection.C3P0ConnectionProvider       c3p0
    Apache DBCP        org.hibernate.connection.DBCPConnectionProvider       dbcp
    Proxool            org.hibernate.connection.ProxoolConnectionProvider    proxool


There isn't much to using a connection pool, since Hibernate does most of the work behind the scenes.

Hibernate Tools

Working with Hibernate is very easy and developers enjoy using the APIs and the query language. Even creating mapping metadata is not an overly complex task once you've mastered the basics. Hibernate Tools makes working with Hibernate or EJB 3.0 persistence even more pleasant.

Hibernate Tools is an entirely new toolset for Hibernate3, implemented as an integrated suite of Eclipse plugins, together with a unified Ant task for integration into the build cycle. Hibernate Tools is a core component of JBoss Tools (see our document "Notes on JBoss Tools").


How to install:

    - In Eclipse, select Help / Software Updates / Find and Install
    - Select "Search for new features to install" and hit Next
    - Select "New Remote Site..."
    - For Name enter: Hibernate Tools
    - For URL enter: http://download.jboss.org/jbosstools/updates/stable
    - Click OK to save the new site
    - Check the box next to the site you just added (Hibernate Tools) and click Finish
    - Click the arrow to expand Hibernate Tools
    - Click the arrow to expand JBoss Tools Development Release: <version>
    - Check the box next to Hibernate Tools (the name may be slightly different)
    - Select Next, accept the license agreement, and Install All


To use Hibernate Tools, go to the New… menu and select Other…, then select Hibernate from the list of types and expand it. (A screenshot of the wizard list appeared here.)

We don't generally do Reverse Engineering, so that isn't too useful, and we don't generate XML Mappings, so that isn't useful. The option to create the Hibernate Configuration file could be useful, but we generally do it by hand instead.


Other parts of the Hibernate Tools package are just a wrapper for the hbm2ddl facilities that are discussed above under "Programmatic Generation of Schema".

Actually, Hibernate Tools didn't prove to be too critical for our work.

Appendix A: Guide to the "Java Persistence with Hibernate" Book

The book was written for Hibernate 3.2. It is organized as follows, with the most important chapters shown in bold.


Part 1: Getting started with Hibernate and EJB 3.0



Chapter 1: Introduces the object-relational mismatch and the nature of solutions.

Chapter 2: Example project and programs using Hibernate. Quite important, as it covers the current library set, an example problem, and setup.

Chapter 3: Domain models and metadata. This introduces the concepts used in the mapping files. Page 138 discusses dynamic manipulation of the metadata.

Part 2: Mapping Concepts and Strategies

Chapter 4: Mapping persistence classes

Chapter 5: Inheritance and custom types

Chapter 6: Mapping collections and entity associations

Chapter 7: Advanced entity association mappings

Chapter 8: Legacy databases and custom SQL


Part 3: Conversational Object Processing

Chapter 9: Working with objects. This returns to the run-time API, such as the save, query, and transaction facilities.

Chapter 10: Transactions and concurrency.

Chapter 11: Implementing conversations. This deals with long-running transactions, called "conversations".

Chapter 12: Modifying objects efficiently.

Chapter 13: Optimizing fetch and caching. Covers how to tell Hibernate when to use joins and when to use multiple queries. The material on caching follows the material on fetch optimization.

Chapter 14: Querying with HQL and JPA QL. Important, as it covers how to express queries in HQL. It also talks about the reporting query facility.

Chapter 15: Advanced query options. This chapter is the documentation for the Criteria object and related interfaces, which were quite important in writing filter expressions and similar tools in an object-oriented fashion.

Chapter 16: Creating and testing layered applications. This was quite useful, as it talked about Web applications and the organization of DAO layers.

Chapter 17: Introducing JBoss Seam. This chapter is a discussion of a very powerful application-building framework. It is not specifically related to the prior chapters. There are now entire books just on Seam, so this isn't one of the most important chapters, though Seam itself is worth treating as important.