Notes on Hibernate - The Risberg Family

Notes on Hibernate

Created 03/22/04

Updated 10/24/04, 07/23/05, 10/21/05, 10/23/05, 10/30/05, 03/28/06, 06/08/06, 11/12/06, 12/22/06, 02/10/07, 03/02/07, 03/08/07, 03/25/07, 04/02/07, 04/04/07, 04/15/07, 06/07/07, 07/22/07, 08/05/07, 11/15/07, 01/27/08, 03/20/08, 05/20/08, 06/08/08, 06/26/08, 07/16/08, 07/20/08, 08/10/08, 09/08/08, 12/12/08, 12/21/08, 01/25/09, 01/31/09, 03/13/09, 09/14/09, 10/13/09, 08/22/10, 10/02/10, 01/21/11, 02/27/11

Overview

Hibernate is an Object/Relational Mapping tool, which supports a range of databases. From the www.hibernate.org web site:


Hibernate is a powerful, ultra-high performance object/relational persistence and query service for Java. Hibernate lets you develop persistent classes following common Java idiom - including association, inheritance, polymorphism, composition, and the Java collections framework. Hibernate allows you to express queries in its own portable SQL extension (HQL), as well as in native SQL, or with Java-based Criteria and Example objects.


Another goal of Hibernate is to remove the presence of SQL from the Java application (or at least from the non-database portions). Typical applications based on Hibernate use a design which is partitioned into presentation objects, business objects, and data access objects in an MVC relationship.


Finally, a goal of these tools is to remove the impact of different "dialects" of database implementation, such as the difference between Oracle and MySQL. When you consider that these are both leading implementations of SQL, but have slightly different datatypes, query syntax, sequence generators, and connection strings, it is easy to see why this is important.


In all mapping-based tools, maintaining the matches between the database, the Java classes, and the mapping files can be a significant effort, and there are different approaches, which can be organized around the question of "which piece of the implementation is the master embodiment of the design". For instance:




Top down: you start with an existing domain model, and its implementation in Java. Quite common.



Bottom up: you start with an existing database schema and data model. Quite common.



Middle out: you start by generating the mapping files, since the mapping metadata provides sufficient information to completely deduce the database schema and to generate the Java source code for the persistence layer. Recommended only for designers who have great familiarity with the mapping file concept and representation.



Meet in the middle: you have an existing set of Java classes and an existing database schema for which there may or may not be a simple mapping. Some refactoring will probably be needed, and so this is the most complex of the four cases.


There are tools within Hibernate to carry out the "top down" approach (called hbm2ddl), and the "bottom up" approach (called hbm2java). In addition there are database reverse engineering tools in MyEclipse and others. Both of these Hibernate tools could be used to carry out the "middle out" approach, after the mapping files are created. For the last approach, you are largely on your own.


There are several layers to Hibernate and its distributions:



Hibernate Core: this is the primary API to Hibernate and its mapping files. All of the discussion in this document is about Hibernate Core, except for those sections covering use of Hibernate Annotations.



Hibernate Annotations: this is an alternate way of specifying mapping information, by using Annotations in your Java files, similar to how XDoclet operates. By keeping the mapping data locally, the number of files to keep in sync is reduced. Since late 2007, we have been using Hibernate Annotations almost exclusively.




Hibernate Validator: this is a different approach to validation from the usual approach used in JSF: rather than having the validation rules expressed on the editing views, the validation rules are expressed on the entity definitions (in both cases, they are carried out on the server side).



Hibernate EntityManager: this is a set of tools that implement EJB 3.0 entity persistence, which is based on Hibernate. During 2007 this was not highly important, as we were using Hibernate directly, but in 2008 with JBoss, understanding the EntityManager became important.



Hibernate Tools: this is described as an entirely new toolset for Hibernate3, implemented as an integrated suite of Eclipse plugins, together with a unified Ant task for integration into the build cycle. Hibernate Tools is a core component of JBoss Eclipse IDE.


Hibernate Core comes with C3P0, which is a pooling system like DBCP. However, C3P0 won't block in synchronized code like DBCP can. One note: we recommend setting numHelperThreads above 6 to limit the pool size under load.
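As a sketch of how this might be tuned, assuming c3p0's standalone c3p0.properties mechanism (the pool sizes here are illustrative, not recommendations):

```properties
# c3p0.properties - picked up from the classpath by c3p0 itself.
# numHelperThreads follows the recommendation above.
c3p0.numHelperThreads=8
c3p0.minPoolSize=5
c3p0.maxPoolSize=20
```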


Hibernate appears to have more industry support than any other O/R mapping tool. Hibernate is a professional Open Source project and a critical component of the JBoss Enterprise Middleware System (JEMS) suite of products. For instance, there are far more books and resources available, it is referred to more often in the hiring requirements of companies surveyed on Craigslist (during our research of 2006-2007), and it has become the persistence model of the EJB 3.0 specification and implementations (called JPA, for Java Persistence API).

Layers


This shows that most of Hibernate is located in the Session class, which holds the state information and is used in most of the API calls. Sessions come from a SessionFactory, which is based on a Configuration facility.


Above the session are classes for generating queries, such as the Query and Criteria classes, and code for managing the transactions. For some reason this diagram implies that application code uses the Session rather than the Query facilities, while in fact the most common classes in DAOs of the application are the Query and Criteria classes. But perhaps this represents the fact that to persist an object, you call a method on the Session object.


Not shown below this diagram are the database connection management, pooling, and caching facilities.

Released Versions

The current version is 3.6.1, released February 3, 2011. The first 3.6.x releases came out in late 2010.


The prior series (and most commonly used) is 3.5.6, released September 15, 2010. Hibernate 3.5.x came out in April 2010, and includes:



JSR 317 (JPA2) support.



Integration of hibernate-annotations, hibernate-entitymanager and hibernate-envers into the core project. See http://in.relation.to/14172.lace for details.




Added Infinispan as a standard second-level cache. See http://infinispan.blogspot.com/2009/10/infinispan-based-hibernate-cache.html for details.



Improvements to the new second-level caching SPI introduced in 3.3, based on feedback from implementers including Ehcache, Infinispan and JBoss Cache.



Far better read only / immutable support. See the new chapter added to the core reference manual dedicated
to the subject.



Support for JDBC 4, such that Hibernate can be used in JDK 1.6 JVMs and make use of JDBC4-compliant drivers.



Support for column-level read/write fragments (HBM only for now).



Initial support for fetch profiles.

Hibernate 3.4.x came out in 2009, and addresses the following goals:



A fix for an out of memory error generated upon calling list() in getSingleResult.



A fix to an exception in the EntityManager during the deletion of an entity.



The ability to build independent of the Hibernate Core Structure.



An upgrade to Hibernate Core 3.3.


Hibernate 3.3.x came out in November 2008, and addresses the following goals:



Redesigned, modular JARs - There are now a number of fine-grained JARs rather than one large JAR - this allows users to more easily see and minimize their dependencies, and allows organizations to build customized Hibernate deployments with unwanted parts left out.



Maven-based build - Hibernate is now built using the Apache Maven build system.



Revamped caching SPI - Based on feedback, the cache system has been refactored to allow more fine-grained control of the characteristics of different cache regions.



JBoss Cache 2.x integration - Based upon the new caching SPI, JBoss Cache 2.x integration is now available out-of-the-box.


There are some Hibernate 3.3 upgrade notes at http://www.agileapproach.com/blog-entry/hibernate-33-upgrade-tips


Hibernate 3.2 came out in late 2006, and adds Java Persistence compliance (the Java Persistence API is the standard object/relational mapping and persistence management interface of the Java EE 5.0 platform). It appears that the 3.2 release was a major milestone and is a reference version, as the primary authors of Hibernate have updated their main book to match this version. The version used in JBoss 4.2.3 is Hibernate 3.2.4sp1. The final version of the 3.2 release series was 3.2.7.


Hibernate 3.0 came out in late 2004, and reached 3.0.5 as a bug fix release by early 2005. Note that the number of libraries used by Hibernate increased greatly during this time. Hibernate 3.0 increases the mapping capabilities, adds support for association joins based on SQL formulas and for SQL formulas defined at the <column> level.



There were versions of Hibernate 2.x during 2004 and very early 2005. They are important now only in that they are
the documented version in several Hibernate books written during
2004 to 2005.


At Menlo School, we were running release 3.0.5 during late Spring 2006, and switched to 3.2 in November 2006, and to 3.2.2 in February 2007. We switched to 3.2.4sp1 for JBoss 4.2.3-based projects in late 2008, and began to evaluate 3.3.1 and 3.3.2 for native projects (mostly test programs).

Resources for Information on Hibernate

The Hibernate web site supplies the reference documentation, and a set of FAQs. These questions link to some simple tutorials. The reference documentation PDF was 212 pages long as of Hibernate 2.1, and is 342 pages as of 3.3.2. There is a one-page quickref to the Hibernate API that is handy for review. However, it is the books that carry the main discussion about the concepts and patterns of use, and only the books have enough examples to communicate some of the more complex concepts and patterns.


"Java Persistence with Hibernate" by Christian Bauer and Gavin King. Manning Press, November 2006, 904 pages. List price $49.99, Amazon price $32.99, used from $28.44. Rated 3.5 stars on Amazon.com. This should be thought of as the second edition of "Hibernate in Action" (by the same authors) and this book is the one to buy. We got a copy in late December 2006. It is twice as long as the prior edition, and covers the tools and process for converting schema definitions to Java classes and vice versa. Expanding considerably on the reference material provided in the download, this book is invaluable. What is somewhat confusing about the book is that for each topic, it has sections that describe how to do it with Hibernate Core, then with Hibernate Annotations instead of the XML files, then with the JPA annotations and calls (this haphazard organization is what has caused the book to get some less-positive reviews).



"Harnessing Hibernate" by James Elliot, Tim O'Brien, and Ryan Fowler. O'Reilly Press, April 2008, 380 pages. List price $39.99, Amazon price $26.20, used from $22.60. Rated 4 stars on Amazon.com. This appears to be O'Reilly's answer to Manning's coverage of Hibernate, and is considered to be well-written. However, from reviewing it in the bookstore, it doesn't appear to cover anything regarding Hibernate that isn't in the above book, but it does add material about Spring, Stripes, and Maven.


"Hibernate: A J2EE Developer's Guide" by Will Iverson. Addison-Wesley, November 2004, 351 pages. List price $39.99. Amazon price $26.01, used for $3.75. Discusses Hibernate 2.1.2, and is largely parallel to the Bauer/King book first edition (which was written at the same time). It has chapters on setting up Hibernate, starting schema from Java classes, starting schemas from an existing database, mapping, transactions, etc. The Bauer/King book is generally getting better reviews.


"Pro Hibernate 3 (Expert's Voice Series)" by Jeff Linwood, Dave Minter. APress, July 2005, 242 pages. List price $39.99. Amazon price $26.39, used for $19.99. Comments on this book were mixed, and I have only glanced at it in the bookstore. Its main strength is that it was the first book to cover Hibernate 3's API, but I don't think that it adds much beyond the reference manual materials.


"Hibernate: A Developer's Notebook" by James Elliot. O'Reilly Press, May 2004, 170 pages. List price $24.95. Amazon price $16.47, used price $12.00. Rather dated now. This is part of the "lab notebook" approach to learning new pieces of software, in which you are given very specific steps of installing and running Hibernate. Goes into great detail about the Ant scripts that you would write to run the tools, set up the database, etc. Uses HSQLDB and MySQL for the examples. We got a great deal out of it, but found that we next needed to know much more about the mapping files and the API than this book provided.


There is an article from OnJava in April 2005, http://www.onjava.com/pub/a/onjava/2005/08/03/hibernate.html, that discusses various uses of formula mappings in Hibernate3.


There are also tools for using Hibernate provided within MyEclipse. They are documented in our document "Notes on MyEclipse".


There are also tools for using Hibernate provided in the JBoss Tools download. They are documented in our document "Notes on JBoss Tools".

Minimum Set of Libraries

Using guidance from the MyEclipse and Hibernate 3.3.1 materials, plus taking in the most up-to-date of the required Apache libraries, we have concluded that the following set is the minimum set of libraries for using Hibernate at run-time in our applications:




(The junit-4.4.jar is not actually needed, except for tests with our application.)


This set of libraries supports all of the standard session management, content mapping, query management, and insert/update/delete behavior, plus annotations and transactions. While Hibernate appears to have far more libraries shipped with it, they appear to be needed only for doing schema generation, JBoss linkage, JMX linkage, reverse engineering, etc. These version numbers are current as of February 2007.


Note that Hibernate is now using SLF4J instead of log4j.


There is also a requirement to have a log4j.properties in order to avoid getting a warning about lack of appenders for Hibernate to use. The shortest contents are:


# A log4j configuration file.

# The root logger uses the appender called STDOUT. Since no
# priority is set, the root logger assumes the default which is one of
# TRACE, DEBUG, INFO, WARN, ERROR, FATAL
log4j.rootLogger=INFO, STDOUT

# STDOUT is set to be ConsoleAppender sending its output to System.out
log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender

# STDOUT uses PatternLayout.
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout

# The conversion pattern
log4j.appender.STDOUT.layout.ConversionPattern=%-5d %-5p [%t] %c{2} - %m%n

Hibernate Annotations

The Hibernate Annotations facility is a huge step forward in reducing the number of configuration files that are used. The Hibernate Annotations package includes:



Standardized Java Persistence and EJB 3.0 (JSR 220) object/relational mapping annotations



Hibernate-specific extension annotations



Annotations for data integrity validation with the included Hibernate Validator API



Domain object indexing/searching for Lucene via the included Hibernate Lucene framework

Requirements:

At a minimum, you need JDK 5.0 and Hibernate Core, but no application server or EJB 3.0 container. You can use Hibernate Core and Hibernate Annotations in any Java EE 5.0 or Java SE 5.0 environment.



The current release is 3.4.0 GA, released on 08/20/2008. It consists of two libraries: hibernate-annotations.jar and hibernate-commons-annotations.jar, and it requires the library ejb3-persistence.jar.


Here is an example of using the annotations facility:

@Entity
@Table(name="language")
public class Language {
    private String isoCode;
    private String description;

    @Id
    @Column(name="code", length=5)
    public String getIsoCode() {
        return this.isoCode;
    }
    public void setIsoCode(String isoCode) {
        this.isoCode = isoCode;
    }

    public String getDescription() {
        return this.description;
    }
    public void setDescription(String description) {
        this.description = description;
    }
}


The important annotations are @Entity, @Table, and @Column. These are sufficient for describing simple classes. More complex tags are introduced in the material below as the more complex mapping examples are discussed.


Since these annotations are also the specifications of the SQL schema that will be generated, it is important to add the annotations that define the SQL (even if they have minimal effect on the Java), such as:

length for strings: put this in the @Column annotation. Default 255.

nullable: to indicate which fields can be null. Put this in the @Column annotation. Default true.

precision, scale for numbers: put these in the @Column annotation. Default 0 and 0.

unique: put this in the @Column annotation. Default false.

datetimes: annotate with @Temporal.
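A sketch of these schema-defining attributes in place (the entity and its fields are hypothetical, invented for illustration):

```java
import java.math.BigDecimal;
import java.util.Date;
import javax.persistence.*;

@Entity
@Table(name = "book")   // hypothetical entity for illustration
public class Book {
    @Id
    @Column(name = "isbn", length = 13)
    private String isbn;

    // length and nullable shape the generated varchar column
    @Column(name = "title", length = 100, nullable = false)
    private String title;

    // precision and scale shape the generated decimal column
    @Column(name = "price", precision = 8, scale = 2)
    private BigDecimal price;

    // datetimes need @Temporal to pick DATE, TIME, or TIMESTAMP
    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "created_at")
    private Date createdAt;
}
```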

Hibernate and HSQL

Many of the Hibernate examples focus on using the in-memory database called HSQL. This was formerly called Hypersonic SQL, and is unrelated to Hibernate itself.

Since HSQL is distributed with JBoss, some of the reasons for using it include ease of set-up and administration. The best way to use it is to have Hibernate generate a schema, using annotations, and then specify the following in the Hibernate configuration:
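A minimal sketch of the usual HSQLDB settings (the connection values are illustrative):

```xml
<!-- Sketch: in-memory HSQLDB connection plus automatic schema export. -->
<property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
<property name="connection.url">jdbc:hsqldb:mem:testdb</property>
<property name="connection.username">sa</property>
<property name="connection.password"></property>
<property name="dialect">org.hibernate.dialect.HSQLDialect</property>
<property name="hbm2ddl.auto">create-drop</property>
```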

Basic Configuration for Hibernate

The Configuration object holds all configuration data, and can be populated by reading an XML file (defaults to hibernate.cfg.xml). This file defines session configuration data, as well as related resource data, such as the pointers to the mapping files. The mapping files are typically located with the Java classes that they map, and there is typically only one class per mapping file. Here is an example:


<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">

<hibernate-configuration>
  <session-factory>
    <property name="show_sql">true</property>
    <property name="connection.url">
      jdbc:mysql://localhost:3306/in
    </property>
    <property name="connection.username">root</property>
    <property name="connection.password">root</property>
    <property name="connection.driver_class">
      com.mysql.jdbc.Driver
    </property>
    <property name="dialect">
      org.hibernate.dialect.MySQLDialect
    </property>

    <mapping resource="dto/User.hbm.xml" />
    <mapping resource="dto/Role.hbm.xml" />
    <mapping resource="dto/Location.hbm.xml" />
  </session-factory>
</hibernate-configuration>


You can enable automatic export of a schema when your SessionFactory is built by setting the hibernate.hbm2ddl.auto configuration property to create or create-drop. The first setting results in DROP statements followed by CREATE statements when the SessionFactory is built. The second setting adds additional DROP statements when the application is shut down and the SessionFactory is closed, effectively leaving a clean database after every run.

Also note that since Hibernate 3.1 you can include a file called "import.sql" in the runtime classpath of Hibernate. At the time of schema export it will execute the SQL statements contained in that file after the schema has been exported. However, we have had real problems with this: the import.sql file cannot contain multi-line statements, which makes it very hard to read (since an insert statement with many fields becomes a very long line of text). So we don't use this approach much.
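For reference, a sketch of such a file, using the language table from the annotations example above (the inserted values are illustrative); note that each statement must sit on a single line:

```sql
-- import.sql: run automatically after schema export; one statement per line.
insert into language (code, description) values ('en', 'English');
insert into language (code, description) values ('fr', 'French');
```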

Configuration Facilities and Basic API Calls

The API includes classes and factory objects for sessions, transactions, queries, criteria, etc. In addition, the Session
object has calls to save or persist an object.

Configuration

The Configuration object holds all configuration data, and can be populated by reading an XML file (defaults to hibernate.cfg.xml). This file defines session configuration data, as well as related resource data, such as the pointers to the mapping files. The mapping files are typically located with the Java classes that they map, and there is typically only one class per mapping file.

Sessions

Get the SessionFactory from the configuration object, and then open a session. At the end of using the session
(typically after a sequence of
query/update operations), close the session, and then close the SessionFactory.


Given a SessionFactory, a session can be made for a specific Connection. The Connection class is part of java.sql.

Transactions

If a session is being used only for reading content, you need not have a transaction pending. However, if the session is being used to save or update objects, this must be done on a session with a transaction pending. Writes are held until the transaction is closed by committing it.


The transaction is represented by a distinct object, which is created from the Session object.


Queries and Criteria

These are descriptions of SQL queries and their where clauses, expressed in terms of the objects that correspond to the tables.
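A brief sketch of their use, against the LocationDTO class from the mapping examples below (the "Warehouse" value is illustrative; this assumes an open session and the Hibernate 3 jars):

```java
// HQL: the query is expressed against the mapped class and its properties.
List<LocationDTO> byQuery = session
    .createQuery("from LocationDTO l where l.name = :name")
    .setString("name", "Warehouse")
    .list();

// Criteria: the same restriction built programmatically.
List<LocationDTO> byCriteria = session
    .createCriteria(LocationDTO.class)
    .add(Restrictions.eq("name", "Warehouse"))
    .list();
```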

Example Code

The following code, from a test framework, shows examples of each of the above objects (other than Queries and Criteria):


@Override
protected void setUp() throws Exception {
    config = new AnnotationConfiguration();
    config.configure();
    sessionFactory = config.buildSessionFactory();

    newSession = sessionFactory.openSession();
    newTransaction = newSession.beginTransaction();
}

@Override
protected void tearDown() throws Exception {
    newTransaction.commit();
    newSession.close();
    sessionFactory.close();
}

Programmatic Generation of Schema

The SchemaExport class is the key to having Hibernate generate schema and install it in the database (to avoid having to "mysql source" it yourself). Create a SchemaExport object as follows:

Configuration cfg = new Configuration().configure();
SchemaExport schemaExport = new SchemaExport(cfg);


Key attributes of SchemaExport are:

String delimiter -- string to place at the end of each line

Boolean format -- true/false if the output should be broken up into several lines for readability (generally true, but the default is false)

String outputFile -- name of file to write with content.


There is one key method on this class (most of the other methods are calls to this):

public void execute(boolean script, boolean export,
        boolean justDrop, boolean justCreate) {
    log.info( "Running hbm2ddl schema export" );

    if ( export ) {
        log.info( "exporting generated schema to database" );
        connectionHelper.prepare( true );
        connection = connectionHelper.getConnection();
        statement = connection.createStatement();
    }

    if ( !justCreate ) {
        drop( script, export, outputFileWriter, statement );
    }

    if ( !justDrop ) {
        create( script, export, outputFileWriter, statement );
        if ( export && importFileReader != null ) {
            importScript( importFileReader, statement );
        }
    }

    log.info( "schema export complete" );
}


From this, we can document the four boolean flags as follows:

script -- if true, the output will be written to System.out as well as the database and the outputFileName.

export -- if true, the database must be connected to and dropped/created using the flags below

justDrop -- if true, the creation of new schema is disabled

justCreate -- if true, the dropping of prior database schema is disabled
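Putting the flags together, a sketch of a typical call (assuming the cfg object built above) that prints the DDL and runs it against the database, doing a full drop then create:

```java
SchemaExport schemaExport = new SchemaExport(cfg);
schemaExport.setFormat(true);                    // readable, multi-line DDL
schemaExport.execute(true, true, false, false);  // script + export, full drop/create
```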


Domain Models and Metadata

Here is a sample data model, which is from the Caveat Emptor project.


Writing Mapping Files

Here is a simple example of a mapping file:


<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">

<hibernate-mapping>
  <class name="com.storm.model.LocationDTO"
         table="Location">
    <id name="id" column="location_id" type="int"/>

    <property name="name">
      <column name="NAME" length="16" not-null="true"/>
    </property>

    <many-to-one
        name="type"
        column="LocationType_ID"
        class="com.storm.model.LocationTypeDTO"
        not-null="true"/>
  </class>
</hibernate-mapping>


There are four primary cases to be dealt with:

Single: those which are keys are done through the <id> tag. Those which are not keys are done through the <property> tag.

Many-to-one: using the <many-to-one> element, and specifying the related class.

One-to-many: this is discussed under "Using Associations", below.

Many-to-many: this is discussed under "Using Associations", below.


Here is a many-to-one mapping, where a product has a productType:


<class name="com.storm.model.ProductDTO" table="product">
  <id name="productId" type="java.lang.Integer">
    <column name="Product_ID" />
    <generator class="increment" />
  </id>
  <property name="productName" type="java.lang.String">
    <column name="ProductName" length="50" />
  </property>
  <many-to-one
      name="productType"
      column="ProductType_ID"
      class="com.storm.model.ProductTypeDTO"
      not-null="true" />
</class>


Here is a one-to-many mapping, where an album has a list of many tracks.

<class name="com.storm.model.AlbumDTO" table="album">
  <set name="tracks" lazy="false">
    <key column="ALBUM_ID"/>
    <one-to-many class="com.storm.model.TrackDTO"/>
  </set>
</class>


Here is a sample of a many-to-many mapping, where a user has a list of roles, and other users may have the same roles:

<set name="roles" table="usersrole" inverse="true">
  <key column="USERS_ID"/>
  <many-to-many
      column="Role_ID"
      class="com.storm.model.RoleDTO"/>
</set>


There is also a facility for defining "formula columns"; these are ones for which the generated query contains a SQL expression with restrictions and is run by the database.

<class name="Product" table="Product">
  <id name="productId"
      length="10">
    <generator class="assigned"/>
  </id>

  <property name="description"
      not-null="true"
      length="200"/>
  <property name="price" length="3"/>
  <property name="numberAvailable"/>

  <property name="numberOrdered">
    <formula>
      ( select sum(li.quantity)
        from LineItem li
        where li.productId = productId )
    </formula>
  </property>
</class>


The formula doesn't have a column attribute, and never appears in SQL update or insert statements, only selects. Formulas may refer to columns of the database table, they can call SQL functions, and they may even include SQL subselects. The SQL expression is passed to the underlying database as is - so avoid database vendor specific syntax or functions, or the mapping file will become vendor specific.

Examples of Annotations

@Entity
@Table(name="language")
public class Language {
    private String isoCode;
    private String description;

    @Id
    @Column(name="code", length=5)
    public String getIsoCode() {
        return this.isoCode;
    }
    public void setIsoCode(String isoCode) {
        this.isoCode = isoCode;
    }

    public String getDescription() {
        return this.description;
    }
    public void setDescription(String description) {
        this.description = description;
    }
}


The Table-level annotations specify the table name (which would default to the class name, anyway). The table-level annotation can also have a uniqueness constraint, which specifies the columns to be combined in forming the unique constraint. For instance, you might have:

@Table(name="Role", uniqueConstraints = @UniqueConstraint( columnNames = { "name" } ))

This would provide a database-level check of unique names for the roles.


Some of the most commonly-used relationships are many-to-one relationships. Suppose, in the above example, that each ModelPlane is associated with a PlaneType via a many-to-one relationship (in other words, each model plane is associated with exactly one plane type, although a given plane type can be associated with several model planes). You could map this as follows:



@ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE} )


@JoinColumn(name=”planeType_id”, nullabl
e=false)


@ForeignKey(name=”FK_PlaneType_Plane”)


public PlaneType getPlaneType() {



return planeType;


}



The CascadeType values indicate how Hibernate should handle cascading operations.

In this case, the join column is being given a name (if not specified, the join column would simply be "planeType").

In this case, the index which is built for the relationship is being given a name (if not specified, the index name would be a cryptic string of hexadecimal digits, at least for the MySQL dialect).


Another commonly-used relationship is the opposite of the above: the one-to-many relationship, also known as a collection. Collections are complex beasts, both in old-style Hibernate mapping and when using annotations, and we'll just be skimming the surface here to give you an idea of how it's done.

For example, in the above example, each PlaneType object may contain a collection of ModelPlanes. You could map this as follows:




@OneToMany(mappedBy="planeType",


casca
de=CascadeType.ALL,


fetch=FetchType.EAGER)


@OrderBy("name")


public List<ModelPlane> getModelPlanes() {


return modelPlanes;


}

Many to Many with Join Tables

Suppose you are linking Products to Items in a many-to-many way, and also wish to record price on a pairwise combination basis. Use the following approach, which is based on page 304 of the "Java Persistence with Hibernate" book:


	@Entity
	@Table(name="ordered_lines")
	public class OrderLine {

	    @Embeddable
	    public static class Id implements Serializable {

	        @Column(name="order_id")
	        private Long orderId;

	        @Column(name="product_id")
	        private Long productId;

	        public Id() {}

	        public Id(Long orderId, Long productId) {
	            this.orderId = orderId;
	            this.productId = productId;
	        }

	        public boolean equals(Object o) {
	            // compare this and that Id properties (orderIds and productIds)
	        }

	        public int hashCode() {
	            // add the orderId and productId hashCode()s
	        }
	    }

	    @EmbeddedId
	    private Id id = new Id();

	    @ManyToOne
	    @JoinColumn(
	        name="order_id",
	        insertable=false,
	        updatable=false
	    )
	    private Order order;

	    @ManyToOne
	    @JoinColumn(
	        name="product_id",
	        insertable=false,
	        updatable=false
	    )
	    private Product product;

	    // create attributes you want to save in the join table

	    public OrderLine() {}

	    public OrderLine(Order order,
	                     Product product
	                     /* other attributes */) {
	        this.id.orderId = order.getId();
	        this.id.productId = product.getId();

	        // these are two one-to-many relationships mapped,
	        // one in each of Order and Product
	        order.getOrderLines().add(this);
	        product.getOrderLines().add(this);

	        // set other attributes
	    }

	    // accessor methods
	}
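The equals() and hashCode() bodies are left as comments in the listing above. Here is a minimal plain-Java sketch of how such a composite key class might fill them in, using java.util.Objects for null-safe comparison (the field names follow the example; the exact implementation is our assumption, not the book's):

```java
import java.io.Serializable;
import java.util.Objects;

// Standalone sketch of the embeddable composite key shown above.
public class Id implements Serializable {
    private Long orderId;
    private Long productId;

    public Id() {}

    public Id(Long orderId, Long productId) {
        this.orderId = orderId;
        this.productId = productId;
    }

    // Two Ids are equal exactly when both key parts match.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Id)) return false;
        Id that = (Id) o;
        return Objects.equals(orderId, that.orderId)
            && Objects.equals(productId, that.productId);
    }

    // Combine the hash codes of both key parts.
    @Override
    public int hashCode() {
        return Objects.hash(orderId, productId);
    }
}
```

Basing equality on exactly the key columns keeps the in-memory identity consistent with the database row identity.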


For reasons that we don't know, it is required that the annotations be placed on the field declarations, rather than on the get methods.

Id Generation

There are several different approaches, which were listed above.

In our examples, we have been using the "increment" mode; however, this will have a problem if there are multiple applications hitting the same database. Sequences or identity columns are really the way to go, since they are consistent across all users of the database.

On MySQL tables which have "auto-increment" id columns, you could use the "identity" approach, which is also what the "native" approach would select.

It is also possible to write your own id generator, using a Hibernate API.

There is a very important reference document on Id generation in Hibernate, titled "Don't Let Hibernate Steal Your Identity", which makes the following points:




- Unless you define hashCode() and equals() in a meaningful way, Hibernate will be unable to determine if two objects represent the same database row.
- Defining hashCode() and equals() to use the id fields of the object can be problematic, since objects which have not yet been persisted will all be treated as the same.
- Implementing hashCode() and equals() to have values that change when the fields are modified breaks a requirement of the Java Collections API.
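The third point can be demonstrated in plain Java: when equals()/hashCode() depend on a mutable field, a HashSet can "lose" an object after that field changes. The Entity class below is a hypothetical illustration, not taken from the article:

```java
import java.util.HashSet;
import java.util.Set;

public class MutableKeyDemo {
    // Hypothetical entity whose equality is based on a mutable field.
    static class Entity {
        String name;
        Entity(String name) { this.name = name; }
        @Override public int hashCode() { return name.hashCode(); }
        @Override public boolean equals(Object o) {
            return o instanceof Entity && ((Entity) o).name.equals(name);
        }
    }

    public static boolean lostAfterMutation() {
        Set<Entity> set = new HashSet<>();
        Entity e = new Entity("a");
        set.add(e);
        e.name = "b"; // hashCode changes while e is inside the set
        return !set.contains(e); // lookup now probes the wrong bucket
    }
}
```

The element is still in the set, but contains() hashes the new field value and looks in a different bucket, so it reports false.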


The article concludes by suggesting that you don't have Hibernate manage your ids, or use the generator in the database, but instead have code in your application which creates GUIDs whenever a persistable object is created.
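Generating such an application-side GUID is straightforward with the JDK; a minimal sketch (the article does not prescribe a specific API, so the use of java.util.UUID here is our assumption):

```java
import java.util.UUID;

public class GuidFactory {
    // Returns a random, globally unique 36-character string id,
    // assigned before the object is ever persisted.
    public static String newId() {
        return UUID.randomUUID().toString();
    }
}
```

Because the id is assigned at construction time, equals() and hashCode() can safely be based on it even for objects that have not yet been saved.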



For describing an identity column through annotations, use the following:

	@Id
	@Column(name = "users_id")
	@GenericGenerator(name = "generator", strategy = "increment")
	@GeneratedValue(generator = "generator")

Mapping Information used by Hibernate Tools

Most of our early examples have attributes such as "length" and "not-null" in the mapping files. It turns out that these are used only by the Hibernate tools, such as the ones that generate database schema or Java class source code. While most of the discussion here is about Core Hibernate, in which name, type, key, column, and table information is needed, one of the goals of the mapping file specification is that it include ALL information needed to generate the schema or Java class source code. This matches the "middle-out" development approach.


Examples of other information that can be included:

	<id name="id" type="int" column="TRACK_ID">
	    <meta attribute="scope-set">protected</meta>
	    <generator class="native"/>
	</id>

	<property name="added" type="date">
	    <meta attribute="field-description">When the track was added</meta>
	</property>


This will generate a protected set method for "id", and will add some comments in the header of the get method for "added".

Mapping Associations

Recall that in Hibernate jargon an entity is an object that stands on its own in the persistence mechanism: it can be created, queried, and deleted independently of any other objects, and therefore has its own persistent identity. A component, in contrast, is an object that can be saved to and retrieved from the database, but only as a subordinate part of some entity.

These are defined using the set tag. For example:


	<set name="bids"
	     inverse="true"
	     cascade="all-delete-orphan">
	    <key column="item_Id" />
	    <one-to-many class="Bid" />
	</set>


The outer-join attribute is deprecated. Use fetch="join" and fetch="select" instead of outer-join="true" and outer-join="false". Existing applications may continue to use the outer-join attribute, or may use a text search/replace to migrate to use of the fetch attribute.


To control cascaded deletes, use cascade="all-delete-orphan" on the association mapping; that is, it is part of the <set> tag.


When a many-to-many association is used, it is important to set the cascade option to "save-update". This is shown in the following example:


	<class name="com.incra.modules.user.model.User"
	       table="users" lazy="false">

	    <id name="userId" type="java.lang.Integer">
	        <column name="USERS_ID" />
	        <generator class="increment" />
	    </id>

	    <many-to-one name="organization"
	                 column="organizations_Id"
	                 class="com.incra.modules.user.model.OrganizationDTO"
	                 lazy="false" />

	    <set name="roles" table="usersrole"
	         cascade="save-update" lazy="false">
	        <key column="USERS_ID" />
	        <many-to-many column="Role_ID"
	                      class="com.incra.modules.user.model.Role" />
	    </set>

	    <property name="userName" type="java.lang.String">
	        <column name="NAME" />
	    </property>
	</class>


In this case, a user has a many-to-one relationship to an organization, and a many-to-many relationship to roles.

Over on the roles side, there is no reference back to users from a role. This also implies that the "inverse" attribute is false (the default value) in the mapping shown above. If these settings are not established, then related data (such as the USERSROLE table) will not be updated correctly.


The corresponding basic association mappings are @OneToOne, @ManyToOne, @OneToMany, and @JoinTable, combined with a number of Hibernate-specific annotations such as org.hibernate.annotations.CollectionOfElements.

A collection can also be sorted via a specification in the Hibernate annotations.

Composite Keys

Here is an example in the mapping file:

	<hibernate-mapping>
	    <class name="com.incra.modules.personalization.model.PresentationDTO"
	           table="presentation" catalog="in" lazy="false">
	        <composite-id>
	            <key-property name="userId" column="users_id" />
	            <key-property name="componentPath" column="componentPath" />
	            <key-property name="presentationName" column="PRESENTATIONNAME" />
	        </composite-id>

	        <property name="fieldNames" type="java.lang.String">
	            <column name="fieldNameList" />
	        </property>
	    </class>
	</hibernate-mapping>


In this case, the object has a three-part key, which is composed of an integer and two strings. We have used this approach quite successfully. We think that the equality comparison of the objects in question should be written to compare exactly these attributes, but we have not done that as yet. The discussion about composite keys is on page 303 of the "Java Persistence with Hibernate" book.

Custom Types

Hibernate allows you to define a new data type to be used in the mapping model, and register converter methods for that data type with the Hibernate SessionFactory. These converters work between the in-memory representation of the custom type and the persisted field or fields, which may be strings, integers, etc.

Most of the relevant discussion is located in Chapter 5 of the book.

Polymorphic Mapping

This topic deals with the issue of having a class hierarchy for your Java objects, but having no equivalent facility in the relational database.

There are several approaches, which are covered in detail in Chapter 5 of the book:

- Table per concrete class with implicit polymorphism
- Table per concrete class with unions
- Table per class hierarchy
- Table per subclass

In the first option, "table per concrete class with implicit polymorphism", no knowledge of the class structure is provided to the database or even the mappings. You simply write distinct mapping files for each class. This is a common starting point.


In the second option, "table per concrete class with unions", there is a Hibernate mapping that includes the superclass, such as:

	<class name="BillingDetails" abstract="true">
	    <id name="id" />
	    <property name="name" column="OWNER" />

	    <union-subclass name="CreditCard" table="CREDIT_CARD">
	        <property name="number" />
	        <property name="expMonth" />
	    </union-subclass>

	    <union-subclass name="BankAccount" table="BANK_ACCOUNT">
	        <property name="acctId" />
	        <property name="branchLoc" />
	    </union-subclass>
	</class>


In the third option, "table per class hierarchy", all of the classes of the hierarchy are collapsed into one table, with a discriminator column. Here is an example:

	<class name="Work" table="works" discriminator-value="W">
	    <id name="id" column="id">
	        <generator class="native"/>
	    </id>
	    <discriminator column="type" type="character"/>

	    <property name="title"/>
	    <set name="authors" table="author_work">
	        <key column="work_id"/>
	        <many-to-many class="Author" column="author_id"/>
	    </set>

	    <subclass name="Book" discriminator-value="B">
	        <property name="text"/>
	    </subclass>

	    <subclass name="Song" discriminator-value="S">
	        <property name="tempo"/>
	        <property name="genre"/>
	    </subclass>
	</class>



The final option, "table per subclass", is to represent inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties (including abstract classes and even interfaces) has its own table.

Unlike the table per concrete class strategy described first, the tables here contain columns only for the non-inherited properties, and the tables are joined at fetch time to match the class is-a relationships. This approach can create more tables and more complex joins, and should be used cautiously.

Persistent Enumerated Types

If you use JDK 5.0, you can use the built-in support for type-safe enumerations. For example, a Rating class might look like:

	package auction.model;

	public enum Rating {EXCELLENT, OK, BAD}

The Comment class has a property of this type:

	public class Comment {
	    ...
	    public Rating rating;
	    public Item auction;
	    ...
	}

This is how you use the enumeration in the application code:

	Comment goodComment = new Comment(Rating.EXCELLENT, thisAuction);

You can have Hibernate handle the mapping of the possible values into strings or integers in the database. In earlier versions of Hibernate you had to write a type handler for this, but in 3.2.x you can just use the annotation @Enumerated. For more information, see our document "Notes on Java Enum Construct".
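The string-versus-integer choice corresponds to the enum's name() and ordinal() in plain Java: with @Enumerated, EnumType.STRING stores the former and EnumType.ORDINAL the latter. A small sketch (the Rating enum is from the example above; the helper class is ours):

```java
public class RatingMappingDemo {
    public enum Rating { EXCELLENT, OK, BAD }

    // What EnumType.STRING would store in the column.
    public static String asString(Rating r) { return r.name(); }

    // What EnumType.ORDINAL would store in the column.
    public static int asOrdinal(Rating r) { return r.ordinal(); }

    // Reading back from a string column.
    public static Rating fromString(String s) { return Rating.valueOf(s); }
}
```

String storage survives reordering of the enum constants; ordinal storage breaks silently if a constant is inserted in the middle, which is why the string form is usually safer.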

Working with Objects

This material is derived from Chapter 9 of the book, which deals with the persistence lifecycle. In many respects, the mapping file describes the static aspects of the Object/Relational mismatch; here we are dealing with the behavioral aspects, such as "when does an object get created? Or saved? Or restored?" Chapter 9 is the first part of material called "Conversational Object Processing". The conversation is between the application and the database, over time.

Different ORM solutions use different terminology and define different states and state transitions for the persistence lifecycle. (The book presents these as a state-transition diagram, which is not reproduced here.)



The diagram can be summarized as follows:

- Objects created with the new operator are not immediately persisted. Their state is transient.
- A persistent instance is an entity instance with a database identity, such as having its id filled in.
- An object is in the removed state if it has been scheduled for deletion at the end of a unit of work, but it's still managed by the persistence context until the unit of work completes.
- A detached object is one that was read from a persistence session, but the session has now been closed.

Operations on Objects

Loading of objects is done through the query or criteria objects, as described above.

The most important concept to understand is that of lazy fetching. Associations are lazy by default. If you try to fetch a contained object that was lazy loaded, and the session is already closed (in other words, the object is "detached"), you will get a LazyInitializationException.

Since it is best practice to map almost all classes and collections using lazy="true", that is the default.

Here are the two most common operations on an object:

	session.save(object);
	session.delete(object);

The session.save() operation works for either new objects or persisted objects that are being updated.

As indicated above, these calls only work inside a transaction. We learned this the hard way.



There are a variety of options that can be specified in the mapping file to determine how Hibernate should generate the keys of new objects being persisted, or how Hibernate should handle the updating or deleting of objects associated with the object passed to save() or delete(). These are called "cascade" options, and are discussed below.

In the JPA API, this is written as EntityManager.persist(myEntity).

One of the most common errors incurred when using the save() function is "detached entity passed to persist". There is a good explanation of this at https://forum.hibernate.org/viewtopic.php?p=2404032. The authors point out that the error message is unclear; it might be better expressed as "cannot persist entity with identifier set manually AND via IdentifierGenerator". For instance, we got this error when the object being saved accidentally had a value in its id field.

Further Discussion of Fetch Modes

One of the most useful facilities in Hibernate is that although the fetch mode is specified in the annotations or the mapping file, it can be overridden for a specific query. Use the setFetchMode method on the Criteria object. This takes the following parameters:

	associationPath - a dot-separated property path
	mode - the fetch mode for the referenced association

Typically, you set up the fetch modes to be lazy in the annotations and mapping files, and then turn on FetchMode.JOIN only when needed.

The meanings of the FetchModes are:


	static FetchMode DEFAULT - Default to the setting configured in the mapping file.
	static FetchMode EAGER - Deprecated; use FetchMode.JOIN.
	static FetchMode JOIN - Fetch using an outer join.
	static FetchMode LAZY - Deprecated; use FetchMode.SELECT.
	static FetchMode SELECT - Fetch eagerly, using a separate select.


In the Incra code framework, there was a method that accepted a list of field paths and would examine each one (going to the next-to-last field in the dotted set) to see what associationPaths needed to have FetchMode.JOIN turned on. Then it would generate a criteria object, set the paths, and return the object.

Generating Keys

There are a variety of approaches available for generating keys or ids, with different overhead and different behavior. The most commonly used are:

- native: the native generator picks other generators, such as identity, sequence, or hilo, depending upon the capabilities of the database. Use this for portability.
- identity: supports the identity columns in MySQL, DB2, MS SQL Server, and Sybase.
- sequence: creates a sequence in DB2, PostgreSQL, Oracle, and SAP DB, and uses sequential values from it.
- increment: at startup, reads the maximum current value, then generates following values.

Most database schema design literature goes through a discussion of why keys should not be related to a business domain function (such as social security numbers), because even though they appear to be unique, what happens if they are changed or replaced?

Transaction Management

In this chapter, we finally talk about transactions and how you create and control units of work in an application. We'll show you how transactions work at the lowest level (the database) and how you work with transactions in an application that is based on native Hibernate, on Java Persistence, and with or without Enterprise JavaBeans.

Transactions allow you to set the boundaries of a unit of work: an atomic group of operations. They also help you isolate one unit of work from another unit of work in a multiuser application. We talk about concurrency and how you can control concurrent data access in your application with pessimistic and optimistic strategies.

Transactions group many operations into a single unit of work. If any operation in the batch fails, all of the previous operations are rolled back, and the unit of work stops. Hibernate can run in many different environments supporting various notions of transactions. Standalone applications and some application servers only support simple JDBC transactions, whereas others support the Java Transaction API (JTA).

Nonmanaged Environment

Hibernate needs a way to abstract the various transaction strategies from the environment. Hibernate has its own Transaction class that is accessible from the Session interface, demonstrated here:

	Session session = factory.openSession();
	Transaction tx = session.beginTransaction();

	Event event = new Event();
	//...populate the Event instance
	session.saveOrUpdate(event);

	tx.commit();

In this example, factory is an initialized SessionFactory instance. This code creates an instance of the org.hibernate.Transaction class and then commits the Transaction instance.

In a nonmanaged environment, the JDBC API is used to mark transaction boundaries. You begin a transaction by calling setAutoCommit(false) on a JDBC Connection and end it by calling commit(). You may, at any time, force an immediate rollback by calling rollback().

Notice that you don't need to call session.flush(). Committing a transaction automatically flushes the Session object. The Event instance is persisted to the database when the transaction is committed. The transaction strategy you use (JDBC or JTA) doesn't matter to the application code; it's set in the Hibernate configuration file.

Managed Environment

In a managed environment, the JTA transaction manager is in control of the transactions. The following are benefits of managed resources with JTA and the reasons to use this J2EE service (it is available with both JBoss and Spring):

- A transaction-management service can unify all resources, no matter what type, and expose transaction control to you with a single standardized API. This means that you can replace the Hibernate Transaction API and use JTA directly everywhere. It's then the responsibility of the application deployer to install the application on (or with) a JTA-compatible runtime environment. This strategy moves portability concerns where they belong; the application relies on standardized Java EE interfaces, and the runtime environment has to provide an implementation.
- A Java EE transaction manager can enlist multiple resources in a single transaction. If you work with several databases (or more than one resource), you probably want a two-phase commit protocol to guarantee atomicity of a transaction across resource boundaries. In such a scenario, Hibernate is configured with several SessionFactorys, one for each database, and their Sessions obtain managed database connections that all participate in the same system transaction.
- The quality of JTA implementations is usually higher compared to simple JDBC connection pools. Application servers and stand-alone JTA providers that are modules of application servers usually have had more testing in high-end systems with a large transaction volume.
- JTA providers don't add unnecessary overhead at runtime (a common misconception). The simple case (a single JDBC database) is handled as efficiently as with plain JDBC transactions. The connection pool managed behind a JTA service is probably much better software than a random connection pooling library you'd use with plain JDBC.


Transaction boundaries can be specified by programmatic calls, or by declarations to the container. The latter looks like one of the following annotations:

	@TransactionAttribute(TransactionAttributeType.REQUIRED)

Here is a full table of them (in the book, this is on page 513, in the chapter on conversations).


Conversation Management

Conversation management is a way of understanding the persistence and session management policies associated with a sequence of operations on the database. There are no objects that are called "conversations", however. Conversations are controlled by the creation and deletion of session objects, and by attachment and detachment of persistence objects to and from the session.

An example would be a conversation that represents the sequence of steps to edit an object through a GUI, including checking for conflicting changes by another user if the system is multi-user (as are all web-based applications).

There are three typical control policies:

- No session or transaction
- Hibernate session-level
- User session-level


Chapter 9 of Hibernate: a Developer's Guide describes the following:

Hibernate supports several different models for detecting this sort of versioning conflict (or optimistic locks). The versioning strategy uses a version column in the table. You can use either a version property tag or a timestamp property tag to indicate a version column for the class (see the appropriate tag for more information). When a record is updated, the versioning column is automatically updated as well. This versioning strategy is generally considered the best way to check for changes, both for performance and for compatibility with database access that occurs outside of Hibernate. For example, you can simply issue a SELECT to obtain a record by id, and include the version column in the WHERE clause to indicate the specific record you wish to UPDATE; a failure to update is an easy way to detect that the record is not properly synchronized. This is precisely the functionality provided by the StaleObjectStateException.
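The versioning strategy just described can be sketched in plain Java as a compare-and-bump update. This is an in-memory illustration of the idea, not Hibernate code; the class and method names are invented for the sketch:

```java
public class VersionedRecord {
    private String title;
    private int version;

    public VersionedRecord(String title) { this.title = title; }

    public int getVersion() { return version; }
    public String getTitle() { return title; }

    // Mimics "UPDATE ... WHERE id = ? AND version = ?":
    // the write succeeds only if the caller still holds the current version.
    public boolean update(String newTitle, int expectedVersion) {
        if (version != expectedVersion) {
            return false; // stale: another writer got in first
        }
        title = newTitle;
        version++; // the version column is bumped on every successful update
        return true;
    }
}
```

A failed update (zero rows affected, in SQL terms) is what Hibernate surfaces as the stale-object condition.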


The dirty strategy only compares the columns that need to be updated. For example, let's say you load a Cat object, with a name ("Tom"), weight, and color. You change the name to "Bob" and want to save the change. Hibernate will verify that the current Cat name is "Tom" before updating the name to "Bob". This is computationally intensive compared to the versioning strategy, but can be useful in situations in which a version column is not feasible.


The all strategy compares all the columns, verifying that the entire Cat is as it was when loaded before saving. Using the Cat example in the preceding paragraph, Hibernate will verify that the name, weight, and color are as loaded before persisting the change to the name. While more secure than the dirty strategy, this strategy, too, is more computationally intensive.


Finally, the none strategy can be used to ignore optimistic locking. Updated columns are updated, period. This obviously performs better than any other strategy but is likely to lead to problems if you have more than a single user updating related records.

Generally speaking, using a versioning strategy to manage conflicts is recommended. The pessimistic model, unless very carefully managed, can lead to more problems than it solves, and the dirty and all mechanisms are not very high-performance.

Modifying Objects Efficiently

One could certainly persist each object in a tree or other graph of objects by calling session.saveOrUpdate() on each one. However, this would be tedious, and it would place a burden on the programmer to keep track of which objects are changed, new, deleted from the graph, etc.

Instead, Hibernate can carry out this traversal, and maintain transitive persistence.

It will follow each association, according to the cascade mode. These are annotated on each association, and have the following values:


XML attribute / Annotation - Description:

none (default) - Hibernate ignores the association.

save-update / org.hibernate.annotations.CascadeType.SAVE_UPDATE - Hibernate navigates the association when the Session is flushed and when an object is passed to save() or update(), and saves newly instantiated transient instances and persists changes to detached instances.

persist / javax.persistence.CascadeType.PERSIST - Hibernate makes any associated transient instance persistent when an object is passed to persist(). If you use native Hibernate, cascading occurs only at call-time. If you use the EntityManager module, this operation is cascaded when the persistence context is flushed.

merge / javax.persistence.CascadeType.MERGE - Hibernate navigates the association and merges the associated detached instances with equivalent persistent instances when an object is passed to merge(). Reachable transient instances are made persistent.

delete / org.hibernate.annotations.CascadeType.DELETE - Hibernate navigates the association and deletes associated persistent instances when an object is passed to delete() or remove().

remove / javax.persistence.CascadeType.REMOVE - This option enables cascading deletion to associated persistent instances when an object is passed to remove() or delete().

lock / org.hibernate.annotations.CascadeType.LOCK - This option cascades the lock() operation to associated instances, reattaching them to the persistence context if the objects are detached. Note that the LockMode isn't cascaded; Hibernate assumes that you don't want pessimistic locks on associated objects, for example because a pessimistic lock on the root object is good enough to avoid concurrent modification.

replicate / org.hibernate.annotations.CascadeType.REPLICATE - Hibernate navigates the association and cascades the replicate() operation to associated objects.

evict / org.hibernate.annotations.CascadeType.EVICT - Hibernate evicts associated objects from the persistence context when an object is passed to evict() on the Hibernate Session.

refresh / javax.persistence.CascadeType.REFRESH - Hibernate rereads the state of associated objects from the database when an object is passed to refresh().

all / javax.persistence.CascadeType.ALL - This setting includes and enables all cascading options listed previously.

delete-orphan / org.hibernate.annotations.CascadeType.DELETE_ORPHAN - This extra and special setting enables deletion of associated objects when they're removed from the association, that is, from a collection. If you enable this setting on an entity collection, you're telling Hibernate that the associated objects don't have shared references and can be safely deleted when a reference is removed from the collection.

Examples

Many to one:

	@ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
	@org.hibernate.annotations.Cascade(org.hibernate.annotations.CascadeType.SAVE_UPDATE)
	@JoinColumn(name = "organizationtype_id", nullable = false)
	@ForeignKey(name = "FK_ORG_ORGTYPE")
	public OrganizationType getOrganizationType() {
	    return organizationType;
	}
	public void setOrganizationType(OrganizationType organizationType) {
	    this.organizationType = organizationType;
	}


One to Many:

	@OneToMany(mappedBy="organization", cascade=CascadeType.ALL, fetch=FetchType.LAZY)
	@org.hibernate.annotations.Cascade(org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
	public List<Facility> getFacilities() {
	    return facilities;
	}
	public void setFacilities(List<Facility> facilities) {
	    this.facilities = facilities;
	}


Here is the other side of the One to Many:

	@ManyToOne()
	@JoinColumn(name="organization_id", nullable=false)
	public Organization getOrganization() {
	    return organization;
	}
	public void setOrganization(Organization organization) {
	    this.organization = organization;
	}

Queries and Criteria

While we will later see an approach to querying objects using a SQL-like query language, the simplest approach to fetching, one that uses an object-oriented interface and doesn't require learning a query language, is the Criteria class. (Note that the Java Persistence with Hibernate book focuses on the SQL-like query language before discussing the Criteria class. This appears to be for the benefit of EJB3-oriented and native-SQL-oriented developers who are used to a statement-string-based representation. But we began to focus on object-oriented query languages several years back, and this document reflects that.)

A Criteria object is a fetch specification which is built, restriction clause by restriction clause, using an object-oriented API. Here is an example:


	public static List getTracks(Time length) {
	    Criteria crit = session.createCriteria(Track.class);

	    crit.add(Restrictions.eq("categoryId", 456));
	    crit.add(Restrictions.le("playTime", length));
	    crit.addOrder(Order.asc("title"));

	    return crit.list();
	}

(Note that Restrictions has replaced the semi-deprecated class Expression. The example above is correct.)


From this example, you can conclude that an object-oriented fetch specification starts with the class to be fetched, which in turn must have been field-mapped. Then we add material for the specific field-level restrictions. Also, we added information about the ordering of results.

The Criteria object is executed using one of two routines:


	Object result = criteria.uniqueResult(); // Throws Hibernate exception
	                                         // if more than 1 match for
	                                         // criteria

	List list = criteria.list();


The Criteria object is one of the most useful pieces in Hibernate, as many of our programs (particularly on the Summary screens) need to programmatically construct restriction specifications. It is better than the string-building approach that we used with O/R mapping tools that didn't have this facility. This also relates to the topic of when to use EJB3/JPA vs. when to use Hibernate: you may want to use Hibernate in this case, since the API-based Criteria-building facilities are not present in JPA as of its current version.


Further discussion of how to use the Criteria object follows. Since the Criteria is based on a target class, the fields that are being restricted on are directly in the target class. In order to add restrictions on subobjects, first build a sub-criteria, as in:

	List cats = sess.createCriteria(Cat.class)
	    .add( Restrictions.like("name", "F%") )
	    .createCriteria("kittens")
	    .add( Restrictions.like("name", "F%") )
	    .list();


Note that the second createCriteria() returns a new instance of Criteria, which refers to the elements of the kittens collection.

From a source-code perspective, this might be clearer as:

	Criteria crit1 = sess.createCriteria(Cat.class);
	crit1.add( Restrictions.like("name", "F%") );

	Criteria crit2 = crit1.createCriteria("kittens");
	crit2.add( Restrictions.like("name", "F%") );

	List cats = crit1.list();


The following, alternate form is useful in certain circumstances:

	List cats = sess.createCriteria(Cat.class)
	    .createAlias("kittens", "kt")
	    .createAlias("mate", "mt")
	    .add( Restrictions.eqProperty("kt.name", "mt.name") )
	    .list();

Here createAlias() does not create a new instance of Criteria; instead this is similar to creating aliases for tables in a SQL expression.


Both the createAlias form and the creation of subcriteria allow for the spe
cification of the join type. By default this an
inner join, but it can be changed to a left join using CriteriaSpecification.LEFT_JOIN.


There are also facilities for creating "And" and "Or" groupings of restrictions, as in:

    Criteria crit = session.createCriteria(Track.class);
    Disjunction any = Restrictions.disjunction();
    any.add(Restrictions.le("playtime", length));
    any.add(Restrictions.like("title", "%A%"));
    crit.add(any);


Then there is a SQL restriction facility, which allows a SQL restriction to be added to the where clause:

    Criteria crit = session.createCriteria(Track.class);
    crit.add(Restrictions.sqlRestriction(
        "'100' > all (select b.AMOUNT from BID b where b.ITEM_ID = 456)"));

Chapter 15 of the "Java Persistence with Hibernate" book explains more of this.


In addition, there are facilities to specify sort order in the Criteria object. This again is often used in Summary or
reporting screens.


Finally, there is a "query by example" facility. This works by creating a Criteria object, then creating an example object and adding it to the criteria. Here is an example:

    public List findUsersByExample(User u) {
        Criteria crit = session.createCriteria(User.class);
        Example e = Example.create(u);
        e.ignoreCase();
        e.excludeProperty("password");
        crit.add(e);
        return crit.list();
    }


The benefit of this approach is that one could also add sort ordering conditions to the criteria before the call to
crit.list().


One gotcha that we have seen here is that Hibernate's query-by-example facility only generates restrictions on the non-id fields of the template object. It also skips the association fields.

Advanced Queries and Criteria

Reporting queries are a different way to use Hibernate Criteria objects. You can select exactly which objects or properties of objects you need in the query result, and how you want to aggregate and group results for a report.

Reporting queries are described through Criteria objects in a very similar way, with the ability to specify joins and sorts, but they return field values rather than Java objects. A reporting query will have a projection specified on the Criteria object.


The following criteria query (from the Hibernate book) returns only the identifier values of the items:

    session.createCriteria(Item.class)
        .add( Restrictions.gt("endDate", new Date()) )
        .setProjection( Projections.id() );


The trick here is the setProjection() method, which changes the Criteria object into a reporting query. While id() is a built-in function in the Projections class, there is another that will find a named field:


Page
26

of
32

    session.createCriteria(Item.class)
        .add( Restrictions.gt("endDate", new Date()) )
        .setProjection( Projections.projectionList()
            .add( Projections.id() )
            .add( Projections.property("description") )
            .add( Projections.property("initialPrice") ) );


Note that even if projections are added to the subcriteria, they are actually added to the main criteria object. Hence, to refer to fields in the join specified by the subcriteria, you must use a field name path that includes an alias.


The result of a projection-based query is a set of Object[] rows, rather than objects of a specific Java class. However, this can be changed by using the ResultTransformer called AliasToBeanResultTransformer, which takes a Java class as its argument.
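Conceptually, the transformer maps each Object[] row onto a bean, one property per projected alias. A plain-Java sketch of that idea follows (the ItemSummary DTO and the sample rows are hypothetical; the real transformer does this via reflection and property setters):

```java
import java.util.ArrayList;
import java.util.List;

public class AliasToBeanSketch {
    // Hypothetical DTO matching a projection of "id" and "description".
    static class ItemSummary {
        Long id;
        String description;
    }

    // Maps each Object[] projection row onto a bean, one field per column,
    // roughly what AliasToBeanResultTransformer automates for you.
    static List<ItemSummary> transform(List<Object[]> rows) {
        List<ItemSummary> result = new ArrayList<>();
        for (Object[] row : rows) {
            ItemSummary s = new ItemSummary();
            s.id = (Long) row[0];
            s.description = (String) row[1];
            result.add(s);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> rows = new ArrayList<>();
        rows.add(new Object[] { 1L, "antique chair" });
        rows.add(new Object[] { 2L, "oil painting" });
        List<ItemSummary> items = transform(rows);
        System.out.println(items.get(0).description); // antique chair
    }
}
```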


Aggregation and grouping can also be carried out through additional modifiers on the setProjection() method's arguments.

For instance, you can add Projections.groupProperty("u.userName"), Projections.count("id"), Projections.avg("amount"), Projections.min("cost"), etc.


The effect of adding the projections is to add a set of projection clauses to the end of the SQL statement (such as "group by").


The result of using this query is a set of rows of Object arrays that are laid out just as the grid-like table would be directly from the database. There is no hierarchy of rows to indicate groups, etc. This means, however, that if we want to display a set of raw data records as well as the groups and aggregations, we need to make two queries (most of the reporting engines are doing this behind the scenes).
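The two-query pattern can be illustrated in plain Java (a conceptual sketch; the hypothetical bidder/amount rows stand in for query results): one pass returns the raw detail rows, and a second pass computes the group-by aggregates.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TwoPassReport {
    // Pass 2 helper: totals per key, equivalent to a group-by projection query.
    static Map<Object, Double> totalsByBidder(List<Object[]> rows) {
        return rows.stream().collect(
                Collectors.groupingBy(r -> r[0],
                        Collectors.summingDouble(r -> (Double) r[1])));
    }

    public static void main(String[] args) {
        // Hypothetical detail rows: {bidder, amount}, a grid of Object[].
        List<Object[]> rows = Arrays.asList(
                new Object[] { "alice", 100.0 },
                new Object[] { "bob",   250.0 },
                new Object[] { "alice",  50.0 });

        // Pass 1: the raw detail records (what the plain query returns).
        for (Object[] r : rows) {
            System.out.println(r[0] + " bid " + r[1]);
        }

        // Pass 2: the grouped aggregates.
        System.out.println(totalsByBidder(rows).get("alice")); // 150.0
    }
}
```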


However, the API provided by Hibernate is a much more economical way to create the SQL statement than writing your own SQLClause builder.

Using HQL

Now we turn to the string-based representation of a query, called HQL. The simplest form of a query is a Query object, which takes an HQL expression. HQL is like SQL, but is more tuned to the objects that are mapped to the tables. For instance, you use Java class names instead of table names.


Examples of using Query objects are:

    Query tenantQuery = session.createQuery("from dto.Tenant as tenant");


A query is carried out by calling its "list" method. This returns a java.util.List object, which can be iterated over.



The HQL of a query can be parameterized, either by position or by keyword. Here is an example of one by position:

    Query query = session.createQuery(
        "from Item item where item.description like ? and item.type = ?");
    query.setString(0, "northern");
    query.setInteger(1, 40);


Here is one by keyword:

    Query query = session.createQuery(
        "from Item item where item.description like :desc and item.type = :type");
    query.setString("desc", "northern");
    query.setInteger("type", 40);


The query string of HQL can be located in the application program, or it can be a named query in the Hibernate mapping file. An example would be:

    <query name="findItemsByDescription">
        <![CDATA[from Item item where item.description like :description]]>
    </query>


A query can be evaluated to return a list or a single object. The Query object (like the Criteria object) is executed using one of these two routines:

    Object result = query.uniqueResult(); // Throws a Hibernate exception
                                          // if more than one match for
                                          // the criteria
    List list = query.list();

Lazy and Non-Lazy Loading

Another important concept is lazy and non-lazy loading of related records. By default, Hibernate never loads data that you didn't ask for. You can either ask for this in the master configuration file, or within each query criteria.


An important concept is that of lazy references, which allow Hibernate to fetch an object that is associated with other objects (including child objects), and not fetch the child objects until they are referred to by the application code. However, this means that the session object that was used for the query of the original object must still be open, as Hibernate will use it to fetch on demand. This means that application designers must plan for the sessions to be long-lived, or determine a fetch strategy marking some references lazy="false" so that all needed objects are provided. The lazy fetch mode is specified in the mapping file. This is discussed further below.

Cache Management

Cache Providers

As we mentioned earlier, caching is a common method used to improve application performance. Caching can be as simple as having a class store frequently used data, or a cache can be distributed among multiple computers. The logic used by caches can also vary widely, but most use a simple least recently used (LRU) algorithm to determine which objects should be removed from the cache after a configurable amount of time.
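The LRU idea can be illustrated in a few lines of plain Java (a conceptual sketch, not Hibernate or EHCache code; the capacity of 3 and the string keys are arbitrary). A java.util.LinkedHashMap in access order can evict the least recently used entry:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheSketch {
    // A tiny LRU cache: an access-ordered LinkedHashMap that evicts the
    // eldest (least recently used) entry once capacity is exceeded.
    static <K, V> Map<K, V> lruCache(int capacity) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = lruCache(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");      // touch "a" so it is recently used
        cache.put("d", "4"); // evicts "b", the least recently used
        System.out.println(cache.keySet()); // [c, a, d]
    }
}
```

A real second-level cache adds time-to-live, disk overflow, and concurrency strategies on top of this basic eviction policy.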


Before you get confused, let's clarify the difference between the Session-level cache, also called the first-level cache, and what this section covers. The Session-level cache stores object instances for the lifetime of a given Session instance. The caching services described in this section cache data outside of the lifetime of a given Session. Another way to think about the difference is that the Session cache is like a transactional cache that only caches the data needed for a given operation or set of operations, whereas a second-level cache is an application-wide cache.


By default, Hibernate supports four different caching services, listed in Table 2. EHCache (Easy Hibernate Cache) is the default service. If you prefer to use an alternative cache, you need to set the cache.provider_class property in the hibernate.cfg.xml file:

    <property name="cache.provider_class">
        org.hibernate.cache.OSCacheProvider
    </property>


This snippet sets the cache provider to the OSCache caching service.


Table 2: Caching Services Supported by Hibernate

    Caching Service   Provider Class                            Type
    EHCache           org.hibernate.cache.EhCacheProvider       Memory, disk
    OSCache           org.hibernate.cache.OSCacheProvider       Memory, disk
    SwarmCache        org.hibernate.cache.SwarmCacheProvider    Clustered
    TreeCache         org.hibernate.cache.TreeCacheProvider     Clustered


The caching services support the caching of classes as well as collections belonging to persistent classes. For instance, suppose you have a large number of Attendee instances associated with a particular Event instance. Instead of repeatedly fetching the collection of Attendees, you can cache it. Caching for classes and collections is configured in the mapping files, with the cache element:

    <class name="Event" table="events">
        <cache usage="read-write"/>
        ...
    </class>


Collections can also be cached:

    <set name="attendees">
        <cache usage="read-write"/>
        ...
    </set>


Once you've chosen a caching service, what do you, the developer, need to do differently to take advantage of cached objects? Thankfully, you don't have to do anything. Hibernate works with the cache behind the scenes, so concerns about retrieving an outdated object from the cache can be avoided. You only need to select the correct value for the usage attribute.


The usage attribute specifies the caching concurrency strategy used by the underlying caching service. The previous configuration sets the usage to read-write, which is desirable if your application needs to update data. Alternatively, you may use the nonstrict-read-write strategy if it's unlikely that two separate transaction threads could update the same object. If a persistent object is never updated, only read from the database, you may set usage to read-only.


Some caching services, such as the JBoss TreeCache, use transactions to batch multiple operations and perform the batch as a single unit of work. If you choose to use a transactional cache, you may set the usage attribute to transactional to take advantage of this feature. If you happen to be using a transactional cache, you'll also need to set the transaction.manager_lookup_class mentioned in the previous section.


The supported caching strategies differ based on the service used. Table 3 shows the supported strategies.

Table 3: Supported Caching Service Strategies

    Caching Service   Read-only   Read-write   Nonstrict-read-write   Transactional
    EHCache           Y           Y            Y                      N
    OSCache           Y           Y            Y                      N
    SwarmCache        Y           Y            Y                      N
    TreeCache         Y           N            N                      Y


Clearly, the caching service you choose will depend on your application requirements and environment.

Configuring EHCache

By now you're probably tired of reading about configuring Hibernate, but EHCache is pretty simple. It's a single XML file, placed in a directory listed in your classpath. You'll probably want to put the ehcache.xml file in the same directory as the hibernate.cfg.xml file.




    <ehcache>
        <diskStore path="java.io.tmpdir"/>
        <defaultCache
            maxElementsInMemory="10"
            eternal="false"
            timeToIdleSeconds="120"
            timeToLiveSeconds="120"
            overflowToDisk="true"/>
        <cache name="com.manning.hq.ch03.Event"
            maxElementsInMemory="20"
            eternal="false"
            timeToIdleSeconds="120"
            timeToLiveSeconds="180"
            overflowToDisk="true"/>
    </ehcache>


In this example, the diskStore property sets the location of the disk cache store. Then the listing declares two caches. The defaultCache element contains the settings for all cached objects that don't have a specific cache element: the number of cached objects held in memory, whether objects in the cache expire (if eternal is true, then objects don't expire), the number of seconds an object should remain in the cache after it was last accessed, the number of seconds an object should remain in the cache after it was created, and whether objects exceeding maxElementsInMemory should be spooled to the diskStore. Next, for custom settings based on the class, the code defines a cache element with the fully qualified class name listed in the name attribute. (This listing only demonstrates a subset of the available configuration for EHCache. Please refer to the documentation found at http://ehcache.sf.net for more information.)

Using Hibernate-Managed JDBC Connections

The sample configuration file below uses Hibernate-managed JDBC connections. You would typically encounter this configuration in a non-managed environment, such as a standalone application.


    <?xml version="1.0"?>
    <!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">

    <hibernate-configuration>
        <session-factory>
            <property name="connection.username">uid</property>
            <property name="connection.password">pwd</property>
            <property name="connection.url">jdbc:mysql://localhost/db</property>
            <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
            <property name="dialect">org.hibernate.dialect.MySQLDialect</property>

            <mapping resource="com/manning/hq/ch03/Event.hbm.xml"/>
            <mapping resource="com/manning/hq/ch03/Location.hbm.xml"/>
            <mapping resource="com/manning/hq/ch03/Speaker.hbm.xml"/>
            <mapping resource="com/manning/hq/ch03/Attendee.hbm.xml"/>
        </session-factory>
    </hibernate-configuration>


To use Hibernate-provided JDBC connections, the configuration file requires the following five properties:

- connection.driver_class - The JDBC driver class for the specific database
- connection.url - The full JDBC URL to the database
- connection.username - The username used to connect to the database
- connection.password - The password used to authenticate the username
- dialect - The name of the SQL dialect for the database

The connection properties are common to any Java developer who has worked with JDBC in the past. Since you're not specifying a connection pool, which we cover later in this chapter, Hibernate uses its own rudimentary connection-pooling mechanism. The internal pool is fine for basic testing, but you shouldn't use it in production.



The dialect property tells Hibernate which SQL dialect to use for certain operations. Although not strictly required, it should be used to ensure Hibernate Query Language (HQL) statements are correctly converted into the proper SQL dialect for the underlying database.


The dialect property tells the framework whether the given database supports identity columns, altering relational tables, and unique indexes, among other database-specific details. Hibernate ships with more than 20 SQL dialects, supporting each of the major database vendors, including Oracle, DB2, MySQL, and PostgreSQL.


Hibernate also needs to know the location and names of the mapping files describing the persistent classes. The mapping element provides the name of each mapping file as well as its location relative to the application classpath. There are different methods of configuring the location of the mapping file, which we'll examine later.
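As a sketch of the alternatives (the file-system path shown is hypothetical), the mapping element also accepts a file attribute and, when annotated classes are used with Hibernate Annotations, a class attribute, in addition to resource:

```xml
<!-- Classpath resource (as in the listing above) -->
<mapping resource="com/manning/hq/ch03/Event.hbm.xml"/>
<!-- Explicit file-system path (hypothetical path) -->
<mapping file="/opt/app/mappings/Event.hbm.xml"/>
<!-- Annotated class, when using Hibernate Annotations -->
<mapping class="com.manning.hq.ch03.Event"/>
```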

Connection Pools

Connection pools are a common way to improve application performance. Rather than opening a separate connection to the database for each request, the connection pool maintains a collection of open database connections that are reused. Application servers often provide their own connection pools using a JNDI DataSource, which Hibernate can take advantage of when configured to use a DataSource.
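The core idea of pooling can be sketched in plain Java (a conceptual illustration, not Hibernate's or C3P0's implementation; StringBuilder stands in for a real java.sql.Connection): resources are created up front, borrowed, and returned rather than opened per request.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class PoolSketch<C> {
    private final Deque<C> idle = new ArrayDeque<>();
    private final Supplier<C> factory;

    // Pre-open a fixed number of "connections" (any costly resource).
    public PoolSketch(int size, Supplier<C> factory) {
        this.factory = factory;
        for (int i = 0; i < size; i++) {
            idle.push(factory.get());
        }
    }

    // Borrow an open connection instead of creating one per request.
    public synchronized C borrow() {
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    // Return the connection for reuse rather than closing it.
    public synchronized void release(C conn) {
        idle.push(conn);
    }

    public synchronized int idleCount() {
        return idle.size();
    }

    public static void main(String[] args) {
        PoolSketch<StringBuilder> pool =
                new PoolSketch<>(2, StringBuilder::new);
        StringBuilder c = pool.borrow(); // reuse a pre-opened resource
        System.out.println(pool.idleCount()); // 1
        pool.release(c);
        System.out.println(pool.idleCount()); // 2
    }
}
```

A production pool such as C3P0 adds bounded waiting, connection validation, and timeouts on top of this borrow/release cycle.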


If you're running a standalone application or your application server doesn't support connection pools, Hibernate supports three connection pooling services: C3P0, Apache's DBCP library, and Proxool. C3P0 is distributed with Hibernate; the other two are available as separate distributions.


When you choose a connection pooling service, you must configure it for your environment. Hibernate supports configuring connection pools from the hibernate.cfg.xml file. The connection.provider_class property sets the pooling implementation:

    <property name="connection.provider_class">
        org.hibernate.connection.C3P0ConnectionProvider
    </property>


Once the provider class is set, the specific properties for the pooling service can also be configured from the hibernate.cfg.xml file:

    <property name="c3p0.minPoolSize">5</property>
    ...
    <property name="c3p0.timeout">1000</property>


As you can see, the prefix for the C3P0 configuration parameters is c3p0. Similarly, the prefixes for DBCP and Proxool are dbcp and proxool, respectively. Specific configuration parameters for each pooling service are available in the documentation with each service. Table 1 lists information for the supported connection pools.


Hibernate ships with a basic connection pool suitable for development and testing purposes. However, it should not be used in production. You should always use one of the available connection pooling services, like C3P0, when deploying your application to production.


If your preferred connection pool API isn't currently supported by Hibernate, you can add support for it by implementing the org.hibernate.connection.ConnectionProvider interface. Implementing the interface is straightforward.


Table 1: Supported Connection Pooling Services

    Pooling Service   Provider Class                                       Configuration Prefix
    C3P0              org.hibernate.connection.C3P0ConnectionProvider      c3p0
    Apache DBCP       org.hibernate.connection.DBCPConnectionProvider      dbcp
    Proxool           org.hibernate.connection.ProxoolConnectionProvider   proxool


There isn't much to using a connection pool, since Hibernate does most of the work behind the scenes.

Hibernate Tools

Working with Hibernate is very easy, and developers enjoy using the APIs and the query language. Even creating mapping metadata is not an overly complex task once you've mastered the basics. Hibernate Tools makes working with Hibernate or EJB 3.0 persistence even more pleasant.

Hibernate Tools is an entirely new toolset for Hibernate3, implemented as an integrated suite of Eclipse plugins, together with a unified Ant task for integration into the build cycle. Hibernate Tools is a core component of JBoss Tools (see our document "Notes on JBoss Tools").


How to install:

- In Eclipse, select Help / Software Updates / Find and Install
- Select "Search for new features to install" and hit Next
- Select "New Remote Site..."
- For Name enter: Hibernate Tools
- For URL enter: http://download.jboss.org/jbosstools/updates/stable
- Click OK to save the new site
- Check the box next to the site you just added (Hibernate Tools) and click Finish
- Click the arrow to expand Hibernate Tools
- Click the arrow to expand JBoss Tools Development Release: <version>
- Check the box next to Hibernate Tools (the name may be slightly different)
- Select Next, accept the license agreement, and Install All


To use Hibernate Tools, go to the New… menu and select Other…, then choose Hibernate from the list of types and expand it to see the available wizards.

We don't generally do Reverse Engineering, so that isn't too useful, and we don't generate XML Mappings, so that isn't useful either. The option to create the Hibernate Configuration file could be useful, but we generally do it by hand instead.


Other parts of the Hibernate Tools package are just a wrapper for the hbm2ddl facilities that are discussed above under "Programmatic Generation of Schema".

Actually, Hibernate Tools didn't prove to be too critical for our work.

Appendix A: Guide to the "Java Persistence with Hibernate" Book

The book was written for Hibernate 3.2. It is organized as follows, with the most important chapters shown in bold.


Part 1: Getting started with Hibernate and EJB 3.0



Chapter 1: Introduces the object-relational mismatch and the nature of solutions.

Chapter 2: Example project and programs using Hibernate. Quite important, as it covers the current library set, an example problem, and setup.

Chapter 3: Domain models and metadata. This introduces the concepts used in the mapping files. Page 138 discusses dynamic manipulation of the metadata.


Part 2: Mapping Concepts and Strategies

Chapter 4: Mapping persistence classes

Chapter 5: Inheritance and custom types

Chapter 6: Mapping collections and entity associations

Chapter 7: Advanced entity association mappings

Chapter 8: Legacy databases and custom SQL


Part 3: Conversational Object Processing

Chapter 9: Working with objects. This returns to the run-time API, such as the save, query, and transaction facilities.

Chapter 10: Transactions and concurrency.

Chapter 11: Implementing conversations. This deals with long-running transactions, called "conversations".

Chapter 12: Modifying objects efficiently.

Chapter 13: Optimizing fetch and caching. Covers how to tell Hibernate when to use joins and when to use multiple queries. The material on caching follows the material on fetch optimization.

Chapter 14: Querying with HQL and JPA QL. Important, as it covers how to express queries in HQL. It also talks about the reporting query facility.

Chapter 15: Advanced query options. This chapter is the documentation for the Criteria object and related interfaces, which were quite important in writing filter expressions and similar tools in an object-oriented fashion.

Chapter 16: Creating and testing layered applications. This was quite useful, as it talked about Web applications and the organization of DAO layers.

Chapter 17: Introducing JBoss Seam. This chapter is a discussion of a very powerful application-building framework. It is not specifically related to the prior chapters. There are now entire books just on Seam, so this isn't one of the most important chapters, though Seam itself is worth treating as important.