
Peter Jacso

Differences in the rank position of journals by Eigenfactor metrics and the five-year impact factor in the Journal Citation Reports and the Eigenfactor Project web site

The traditional, annually issued Journal Citation Reports (JCR) have been enhanced since the 2007 edition by the Eigenfactor Scores (EFS), the Article Influence Scores (AIS) and the 5-year Journal Impact Factor (JIF-5). These scientometric indicators are also available from the Eigenfactor Project web site, which uses data from the yearly updates of the JCR.

Although supposedly identical data sources are used for computing the metrics, there are differences in the reported absolute scores, which in turn resulted in significant (more than 10 rank position) changes for several journals in the sample. The differences in the scores and rank positions by the three new scientometric indicators for 52 journals in the Information and Library Science category were analyzed to determine the range of the differences and the extent of the changes in rank positions.


Introduction

The founders of the Eigenfactor Project launched the first public release of the Eigenfactor database in 2007 [Bergstrom, 2008]. Thomson Reuters first included the 5-year Journal Impact Factor (JIF-5) in the customary yearly update, then, in an unusual between-years update, added the two Eigenfactor scores to the second release of the 2007 edition (JCR-2007), which was released in January 2009 [Thomson Reuters, 2009]. For this research (prompted by the author's spot checking of the indicators of a few journals in both databases, and seeing significant differences), the 2007 edition/segment of the two databases (JCR-2007 and EF-2007) was used for a limited but systematic analysis.



The differences between the two-year and five-year journal impact factors, and the rank position changes within JCR-2007, have been analyzed for the Information and Library Science category [Jacso, 2009a; Rousseau, 2009]. The advantages and limitations of the Eigenfactor metrics, which are based on a 5-year target window, were reviewed in several multidisciplinary and discipline-specific journals, both on their own and in comparison with other metrics [Fersht, 2009; Bergstrom and West, 2008; Yin et al., 2009].

The availability of the Eigenfactor metrics and the five-year impact factors from the same source data provides an excellent opportunity for testing the effect of using different software algorithms for computing important journal metrics.

This paper reports on the rank position changes of journals in the Information and Library Science (ILS) category when ranked by the same Eigenfactor indicators in JCR-2007 and EF-2007. It complements the testing of the intradatabase rank position changes within JCR-2007 [Jacso, 2010] for the same set of journals on two different software platforms.


The sample database sets

The JCR annual editions are split into a Science and a Social Sciences subset. JCR-2007 has information about 6,426 and 1,866 journals, respectively. Libraries may choose how many yearly editions they wish to license (back to 1994). The Eigenfactor (EF) database combines the two subsets into a single database for each year (going back to the 1995 edition). This database service is open access and does not even require registration or login. Its 2007 segment (EF-2007) reports having information about 7,938 journals.

This seems to be 354 fewer journals than are available in JCR-2007, but one of the reasons for this difference in the total number of journals is that many journals are covered in both the Science and the Social Sciences subsets of JCR. In EF-2007 the overlap is eliminated by merging the two subsets into one. The other reason is that EF-2007 includes only those journals that have been covered by Thomson Reuters (formerly Thomson ISI) since 2002. On the other hand, there are accidental duplicates in EF-2007, where the same journal, with the same title, ISSN, and Eigenfactor scores, appears twice in the result list (such as Information & Management in the excerpt from the alphabetically sorted list in Figure 1). When this happens to several highly ranked journals by Eigenfactor Score (EFS) or Article Influence Score (AIS), it creates an avalanche effect, producing rank position losses for all the journals ranked below them.





Figure 1. Duplicate entries for the same journal in the EF-2007 database segment


The five duplicates in the EF-2007 sample were removed in creating a consolidated and re-ranked list for this research, but casual users looking up rank positions in a larger list, in a category with more journals, may not spot such duplicates or realize their consequences, and may take the reported rank positions at face value.
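To illustrate the consolidation step, here is a minimal Python sketch of the kind of de-duplication applied to the sample. The record layout and field names are hypothetical, not the actual EF schema, and the EFS value is made up for the example:

```python
def deduplicate(records):
    """Drop exact duplicate journal records, keeping the first occurrence.

    Records sharing the same title, ISSN, and Eigenfactor Score are
    treated as one journal; the field names are illustrative only.
    """
    seen, unique = set(), []
    for record in records:
        key = (record["title"], record["issn"], record["efs"])
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

# A duplicate pair like the one shown in Figure 1 collapses to one entry:
sample = [
    {"title": "INF MANAGE", "issn": "0378-7206", "efs": 0.00791},
    {"title": "INF MANAGE", "issn": "0378-7206", "efs": 0.00791},
]
print(len(deduplicate(sample)))  # -> 1
```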

There is some additional information about the journals, including the impact factor, and a very informative pair of histograms that show the changes of the Eigenfactor scores and the impact factor across the years, between 1996 and the selected census year, i.e. 2007 in this case. This is an excellent form of visualization (which cannot be done justice on these black and white pages).




Figure 2. Details page for the journal in the EF database


Journal subsets can be selected in the EF database either by the disciplinary categories (assigned to the journals in JCR), or by the much broader and fewer Eigenfactor subject categories. The Information and Library Science category in JCR-2007 brings up 56 journals; in EF-2007 the number is 57. Five journals have duplicate entries, and 4 journals that are present in the ILS category of JCR-2007 are absent from the list in EF-2007. The former are apparently due to a glitch in the de-duplication software, while the reason for the 4 missing journals is understandable. They have not been in existence, or covered by Thomson Reuters, for the six years (the census year, 2007, and the preceding 5-year target window of 2002-2006) that are necessary for calculating the 5-year Eigenfactor metrics; therefore they were not included in EF-2007. For the above reasons, there were 52 unique ILS journals that appear in both JCR-2007 and EF-2007.




Although the summary display is very compact, the various scores can be overwhelming for casual users. This may be alleviated by the fact that the summary lists can be sorted/ranked by 9 variables in JCR. Sorting/ranking is possible in the EF database by 3 variables, but the 5-year impact factor on the native site (NIF-5) is not one of them. (For the sake of clarity, in referring to the 5-year impact factor in JCR, I use the acronym JIF-5. I use the acronym NIF-5 to refer to the 5-year impact factor at the native Eigenfactor web site, even though that site uses just the term Impact Factor; in JCR the term Impact Factor without qualification refers to the traditional 2-year impact factor.) The impact factor was first mentioned by Garfield [1955], and introduced as an attachment to the printed volumes of the citation indexes nearly 40 years ago [Garfield, 1972]. There have been perennial arguments and cases of misuse in spite of the many warnings and much advice from experts for the correct use and interpretation of the journal impact factor [Martin, 1996; Moed, 1999; Rousseau, 2001; Garfield, 2006; Leydesdorff, 2008; Pendlebury, 2009]. I have my own share of reservations about the algorithm and the document type assignment [Jacso, 2001], and a telling but extreme example of its possible consequences [Jacso, 2000], but I have been using JCR constantly, learning much about journals.






Figure 3. Summary result list of the journals in the ILS subject category


The subscription-based JCR editions provide much more information about the journals: the number, distribution and types of papers (article, editorial material, book review, etc.), their references, self citations, citing and cited journal lists, and several other bibliographic and scientometric characteristics. These make the scoring and ranking process more transparent and verifiable. The new indicators, JIF-5 and the Eigenfactor scores, appear on the summary page for the set of journals, which can be selected by category, title, publisher, and country of origin in JCR.






Figure 4. Information-rich detail pages in JCR

It must be emphasized that self-cite statistics are presented only to inform the users; they are not used for the impact factors reported in the summary pages. It would be much more informative than, say, the Immediacy Index of the journals. There is no similar summary table for JIF-5 telling what the JIF-5 score would be without self cites, but there is an excellent graph showing the self-citation rate of the journals for a 10-year time span.


The Eigenfactor Score, which is the overall performance measure of the journal, is calculated to 5 decimal digits of precision in JCR (7 digits in EF). It is built on the principle of not just counting the number of citations received by a journal, but also weighing the importance or influence of the citing journals, i.e. not considering each citation received from sources of different standing (in terms of prestige/influence) to be equal. This approach implements the ideas of, among others, Gross and Gross [1927], Pinski and Narin [1976], Bollen et al. [2006], and Ma, Guan and Zhao [2008].
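A minimal sketch of this weighted-citation principle is a PageRank-style power iteration over a citation matrix. The matrix values and the 0.85 damping factor below are illustrative assumptions; the published Eigenfactor algorithm additionally excludes self-citations, uses a five-year citation window, and dampens the iteration with an article vector, so this only conveys the core idea:

```python
import numpy as np

# Hypothetical citations: C[i, j] = cites from journal j to journal i
# (self-citations on the diagonal are zeroed, as in the EF approach).
C = np.array([[0.0, 4.0, 2.0],
              [3.0, 0.0, 6.0],
              [1.0, 5.0, 0.0]])

M = C / C.sum(axis=0)        # column-stochastic: each journal's cites sum to 1

alpha, n = 0.85, M.shape[0]  # damping factor and number of journals
v = np.ones(n) / n           # start from uniform influence
for _ in range(100):         # power iteration: influence flows along citations
    v = alpha * (M @ v) + (1 - alpha) / n

print(v / v.sum())           # relative influence weights of the three journals
```

A citation from a high-influence journal thus raises the cited journal's weight more than a citation from a low-influence one, which is exactly the "not all citations are equal" principle described above.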



The Article Influence Score (AIS) is the measure of the journal's prestige based on the ratio of citations received per article in the journal. It is calculated to 3-digit precision in JCR (5 or 6 digits in EF). This makes the score independent of the size of the journals, i.e. the volume/frequency of their publications. AIS is the indicator that is supposed to be comparable to the 5-year journal impact factor (JIF-5) in JCR, which is also calculated to 3-digit precision in JCR (in addition to the traditional 2-year Impact Factor).
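Based on the description above (AIS normalizes the Eigenfactor Score by the journal's article output), the relationship can be sketched compactly; the 0.01 scaling constant reflects my reading of the Eigenfactor documentation, not this paper:

\[
\mathrm{AIS}_j = \frac{0.01 \times \mathrm{EFS}_j}{a_j / a_{\mathrm{tot}}}
\]

where \(a_j\) is the number of articles journal \(j\) published in the five-year target window and \(a_{\mathrm{tot}}\) is the total number of articles across all covered journals. Under this normalization, a journal with an AIS of 1.00 has roughly average influence per article.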

At the Eigenfactor Project web site, NIF-5 appears only on the details page of the journal, and only as a percentile value, not as an absolute number (which is not a loss). EF displays the EFS and AIS indicators on the summary page both as a percentile value and as an absolute score.

The visual micrographs in EF make it easy to quickly scan the result list and get an instant feel for the rank position of the journals within a set, which the absolute scores do not convey. It would help if the NIF-5 percentile were also listed on the summary page in EF, which still has space for additional key indicators, such as the number of papers published by the journal in the target window. Displaying the number of cites, and the percentage of self cites that are ignored, would also be very useful.








Methodology

The relevant data were extracted from the summary and detail pages of the 52-journal set, after removing the five duplicates from EF-2007 and the 4 journals from JCR-2007 that did not have JIF-5 scores. To facilitate visual scanning of the various lists, percentiles were calculated from the absolute scores in JCR-2007, as EF had no absolute score for NIF-5. The set was then re-ranked by the EFS, AIS, and JIF-5/NIF-5 percentiles to determine the final rank positions and the rank position changes in the two sets. Rank positions are important because they are the simplest to understand when consulting league lists, and for analyzing the differences in the absolute scores and their consequences for rank position changes.
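As a sketch of that normalization step, percentile ranks can be derived from absolute scores along the following lines; the tie handling and scaling choices here are mine, not necessarily those used in the study:

```python
def percentile_ranks(scores):
    """Map absolute scores to 0-100 percentiles (higher score, higher percentile)."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])  # indices, lowest score first
    percentiles = [0.0] * n
    for rank, i in enumerate(order):
        percentiles[i] = 100.0 * rank / (n - 1)
    return percentiles

# Hypothetical JIF-5 scores for four journals:
print(percentile_ranks([2.953, 1.212, 0.764, 0.535]))
# -> [100.0, 66.7, 33.3, 0.0] (approximately)
```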


Findings

As the lists were created from the same primary source, the raw data in JCR-2007, the differences in the absolute scores and the rank positions by EFS and AIS were expected to be in perfect correspondence. The identical Spearman and Kendall rank correlation coefficients of 0.97, 0.96 and 0.93 for EFS, AIS and the 5-year impact factor confirmed the expectation of a very high degree of rank correlation. However, Spearman's footrule coefficients of 0.72, 0.69 and 0.53 foreshadowed the reality of less than perfect concordance between the metrics reported in EF-2007 and JCR-2007.


This footrule distance measure is the best metric for expressing the loss and gain of rank positions by different ranking schemes. It sums up the absolute rank differences, irrespective of their direction (gain or loss, rise or decline), and then normalizes them.
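For readers who want to reproduce this kind of comparison, here is a small Python sketch computing the Spearman and Kendall coefficients (via SciPy) and a normalized footrule coefficient. The paper does not spell out which normalization variant it uses, so the 1 - d/(n*n/2) form below is one common choice, and the rank lists are hypothetical:

```python
from scipy.stats import kendalltau, spearmanr

def footrule_coefficient(rank_a, rank_b):
    """Normalized Spearman footrule: 1.0 means identical rankings.

    Sums the absolute rank differences, ignoring direction, and divides
    by the maximum possible sum (n*n/2 for an even number of items,
    such as the 52 journals here).
    """
    n = len(rank_a)
    distance = sum(abs(a - b) for a, b in zip(rank_a, rank_b))
    return 1 - distance / (n * n / 2)

jcr_ranks = [1, 2, 3, 4, 5, 6]  # hypothetical positions in JCR-2007
ef_ranks = [1, 3, 2, 6, 4, 5]   # the same journals in EF-2007

rho, _ = spearmanr(jcr_ranks, ef_ranks)
tau, _ = kendalltau(jcr_ranks, ef_ranks)
print(rho, tau, footrule_coefficient(jcr_ranks, ef_ranks))
```

Unlike the correlation coefficients, which stay high when most items barely move, the footrule penalizes every displaced position directly, which is why it flags the discordance that rho and tau smooth over.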


It is no wonder that the mathematician and informetrician Judit Bar-Ilan has been using this metric for a long time for analyzing and comparing the result lists produced by general search engines, and the ranking components of the software of cited-reference-enhanced databases [Bar-Ilan, 2005; Bar-Ilan et al., 2006], where rankings are very critical and closely watched.

This was another motivation to explore the changes of the rank positions in JCR-2007 vis-à-vis EF-2007, as these rank positions are much closer to the minds and hearts of scientists, researchers, librarians and information professionals, for informetric, collection development and pure ego-boosting purposes, than the much better known correlation coefficients that are used to get the global picture on larger sets.

In the world of journal publishing (and in many other fields, from sports to education, entertainment and Internet searching), dwelling on individual cases is important because being ranked #1, #2 or #3 is very critical, and losing even one or two rank positions may mean the difference of settling for the bronze medal instead of the silver or the gold, or losing subscribers to a journal or a special TV channel.

The irony is not lost on me that Spearman developed the footrule formula [Spearman, 1906] in his frustration over seeing researchers “comparing one series of figures with another” and arguing that “this comparison is still almost universally performed on the primitive plan of just inspecting the figures and forming a general impression”. He developed the footrule coefficient for comparing rank lists to offer researchers a less arduous method for “translation from one measure into the other [making it] almost as easy as turning a few pounds into their value in shillings” and to quantify the degree of correlation, where “we have not to consider the conceivable vagaries of individual cases”.





With that said, after showing the overall profile of the changes, I still feel the need to pay attention to individual cases when there is no perfect correlation, as those are the ones that can reveal which metrics on which software platforms are better for the particular purpose, and they can lead to the discovery of problem situations which may distort the results for some journals in a potentially systemic manner.

Overall, slightly less than one third of the journals are affected by position changes between the ranked lists from EF-2007 versus JCR-2007. Only in five cases are rank positions lost or gained by all three metrics, in another five by two of the metrics, and in 7 by only one metric. The distribution and the rank position changes are summarized in the figure below. Of particular interest are those where the loss or gain is 5 or more positions.




Figure 5. Overall profile of the rank position changes by the three metrics


Bumpcharts offer the best alternative for seeing both the forest and the trees (at least in this size of a forest) by combining an at-a-glance look at the overall scene with the identification of the individual journals. For space reasons, in the print version of this paper I can include only a single bumpchart. I chose the rank list by the Article Influence Score (AIS), because AIS is calculated by normalizing the Eigenfactor Score by the number of papers. It is the one supposed to be the closest to the Journal Impact Factor, a kind of bridge metric. The digital pre-print version at http://www.jacso.info/eigen-native-vs-jcr will include the bumpcharts by all three metrics for all 52 journals.

By the Eigenfactor Score (EFS) ranking, there are minimal or no rank position changes at the top and the very bottom of the lists. The following journals have the most, but still moderate, gains of 4 positions in EF-2007 over JCR-2007: College & Research Libraries, Journal of Academic Librarianship, Library & Information Science Research, Library Trends, Reference & User Services Quarterly, and Library Resources & Technical Services.

The following journals have the biggest losses in EF-2007 (with the position change in parentheses): Social Science Information (-13), Scientist (-10), and Information Society (-9), while Journal of the Medical Library Association, Social Science Computer Review, Online, and EContent had a moderate loss of 4 positions.

Beyond the Library and Information Science journal from Japan (which will be discussed later, for being not merely an outlier but a canary in the coal mine), in the ranking by Article Influence Score (AIS), shown in the bumpchart below, the following journals gain the most in EF-2007: Library Resources & Technical Services (6), Reference & User Services Quarterly (6), College & Research Libraries (5), Journal of Librarianship & Information Science (4), and the Journal of Scholarly Publishing (4).


Figure 6. Bumpchart of the rank positions (1-52) for the AIS metric (Native AIS vs. JCR AIS) in EF-2007 and JCR-2007


On the other hand, these journals gain the most positions by AIS rank in JCR-2007 vis-à-vis their EF-2007 rank: Social Science Information (15), Social Science Computer Review (12), Online Information Review (8), Information Society (7), and Restaurator (6).


In analyzing the positions of the 52 journals in the ILS category when ranked by Impact Factor, it must be remembered that JCR does not exclude self-cites in calculating JIF-5 (and JIF-2), while in EF's five-year native impact factor (NIF-5) self cites are excluded.

According to a study [Nisonger, 2000], self-citation does not have a significant effect on ranking ILS journals. This is indeed very likely, because in my preliminary test the mean self-citation rate in the ILS category of JCR-2007 is 21.5%, and the median is 15.5%. However, for Law Library Journal this certainly would not be the case, as its JIF-2 score drops from 0.789 to 0.10 without self cites; but this is just background information and not a factor in the actual calculation of the JIF values for the summary table.
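A quick arithmetic sketch shows how strongly self-cites can dominate a small journal's impact factor. The citation and item counts below are hypothetical values chosen only to reproduce the order of magnitude of the Law Library Journal scores above, not the actual JCR data:

```python
def impact_factor(cites, items, self_cites=0):
    """2-year impact factor, optionally excluding journal self-citations."""
    return (cites - self_cites) / items

# e.g. 30 citations to 38 citable items, of which ~26 are self-cites:
print(round(impact_factor(30, 38), 3))                 # 0.789
print(round(impact_factor(30, 38, self_cites=26), 3))  # 0.105, close to the ~0.10 above
```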




Figure 7. Extremely high self-citation rate for Law Library Journal


To the best of my knowledge, the documentation and research papers about the Eigenfactor database do not discuss this aspect, and the journals' details pages do not include any data about self citations. Considering the strong positions of the developers about the manipulative efforts of gaming the numbers and the potentially distorting effects of self-citations on metrics, it is safe to assume that self-citations are excluded in calculating NIF-5 (but this will require further research and corroboration).


It is this metric where the intensity and distance of rank position changes are the highest, as predicted so well by the drop of Spearman's footrule coefficient to 0.51. The most gains by the 5-year impact factor in EF-2007 were obtained by Learned Publishing (12), Scientist (10) and Interlending & Document Supply (10), followed by Library Journal (8), Library Resources & Technical Services (8), Online Information Review (7), Law Library Journal (6), Online (6), Social Science Information (6), Portal (5), ASLIB Proceedings (5), Libri (5), Journal of Health Communication (4), and Journal of Information Science (4). The following journals had the biggest declines in EF-2007 vis-à-vis JCR-2007: Restaurator (-17), Social Science Computer Review (-12), Reference & User Services Quarterly (-9), Information Society (-7), Library Quarterly (-7), International Journal of Information Management (-7), Library Trends (-7), Knowledge Organization (-7), Electronic Library (-6), and Program (-6).




Conclusions

It is clear that in spite of using supposedly identical raw data, the scores and the rank positions of many of the 52 ILS journals are significantly different. This is perceived most directly by journal publishers, editors, and librarians dealing with collection development. Any user may be baffled who looks up the scores and rank positions of, say, Information Society, whose EFS rank drops to 21st in EF-2007 from 12th in JCR-2007, its AIS rank to 18th in EF-2007 from 11th in JCR-2007, and its NIF-5 rank to 24th in EF-2007 from 17th in JCR-2007. The same is true, although to a lesser extent, when realizing that College & Research Libraries drops from the 11th rank position by AIS in EF-2007 to rank 15th in JCR-2007, or seeing that Learned Publishing's 23rd rank position by Impact Factor in EF-2007 declines to 35th in JCR-2007.

Publishers, editors and library managers are more sensitive to smaller changes in making decisions about launching, folding, subscribing to, and canceling expensive journals. Most of them can understand the reasons for the differences, and interpret the rank changes seen on different ranking lists as due to the differences in the scope, depth, and length of coverage in the underlying databases used for the ranking, as well as the different weighting applied by their software. However, many students, researchers, budget managers and financial administrators may not have the time, interest and/or background to learn about these reasons when making decisions about where to publish, what journals to read, and which ones to cancel, as long as they can rely on a league list published by reputable institutes.
In the case of EF-2007 and JCR-2007, the premises looked much better for perfect correlation, as the source data are supposed to be the same, so the differences in the rank positions were expected to be much smaller and to remain in concordance (at least by the new EFS and AIS rankings).

However, it became clear during the test that the raw data sets are not identical. The nearly 10% of duplicates in the EF-2007 sample set would not be a great concern if they did not occur with the same frequency in the entire population. They may be one of the reasons for rank position changes, as they are not eliminated from the list (unless one does a research project, "screen-scrapes", converts and edits the result list, as EF does not offer an option for downloading).

Much more of a concern is the problem encountered when trying to discover how the Library and Information Science journal could have the highest gain by Article Influence Score, rising to position 31st in EF-2007 from position 42nd in JCR-2007, when it had merely 25 papers for the period 2002-2007. One of them did get cited (twice). This yielded an h-index of 1, an average citations per item rate of 0.08, and an average citation per year rate of 0.25. To its credit, the two citing journals have very high and high prestige, ranked 7th and 19th in EF-2007, but this still would not justify the rank position of a journal with such performance.


It was an additional concern that this journal (which was one of the five duplicates in EF-2007) should not have been included in EF-2007 at all, as the EF software could not extract accurate information from JCR-2007; there is no continuous coverage between 2002 and 2007 for this journal from the perspective of EF-2007, which found only 19 of the 25 papers in JCR, and could create statistics for only two years.

The import and conversion module of the EF database software created a journal record of the kind one can expect from the crawlers of Google Scholar [Jacso, 2009b]. It was just another distraction and sign of problems that an absurdly inappropriate Eigenfactor subject category (Molecular and Cell Biology) was assigned to the journal by EF. The duplicate record was of no use, as it had no data, and even if it had, the record should not have been split in EF-2007.

More importantly, this record would make users wonder how many other false records were created in EF-2007 during the conversion process, and how many other records may be affected, and to what extent, due to problems in the import and conversion processes.


No doubt there are errors in JCR, but most of them can be traced, discovered and corrected through the transparent underlying raw data in the JCR itself and through Web of Science; for users, such traceability is not available in the EF database, and the import process does not go through even the minimal plausibility test of excluding empty and otherwise obviously wrong records.





Figure 8. Record for journal details in EF-2007


The Eigenfactor concept for the evaluation of journals, and for citation analysis in a broader sense, is an excellent idea: it weights the citations depending on the prestige of the citing journals, based on the prestige transferred to them from the journals that cite them. It implements earlier ideas, and deserves much praise for it, as the implementation may have a far-reaching impact on citation analysis and citation-based searches in general. The open access it offers is particularly precious, but that does not justify the inattention to problems in the process of transferring data from the JCR database(s), apparently based on a mutually satisfying agreement.


The problems mentioned may not be systemic or of very large scale. Still, they may undermine the credibility of the idea and the service. Of course, fixing the errors is easier said than done, but allowing the swift downloading of records with all the useful details would motivate users to volunteer to test the data quality and alert the developers to potential problems begging for a fix. Open access to such valuable information for a 15-year period is very encouraging. Even if it is not a substitute for JCR, it can become a very widely used and appreciated resource if the problems are fixed. The different and well explained Eigenfactor metrics could very well demonstrate the advantages of weighted citation counting and of the exclusion of self-cites on a really identical data set.



References

Bar-Ilan, J. (2005), "Comparing rankings of search results on the Web", Information Processing & Management, Vol. 41 No. 6, pp. 1511-1519.

Bar-Ilan, J., Levene, M. and Mat-Hassan, M. (2006), "Methods for evaluating dynamic changes in search engine rankings: a case study", Journal of Documentation, Vol. 62 No. 6, pp. 708-729.

Bergstrom, C.T. and West, J.D. (2008), "Assessing citations with the Eigenfactor™ Metrics", Neurology, Vol. 71, pp. 1850-1851.

Bergstrom, C.T. (2008), "Eigenfactor: Measuring the value and prestige of scholarly journals", College & Research Libraries News, Vol. 68 No. 5, p. 314.

Bollen, J., Rodriguez, M.A. and Van de Sompel, H. (2006), "Journal status", Scientometrics, Vol. 69 No. 3, pp. 669-687.

Fersht, A. (2009), "The most influential journals: Impact Factor and Eigenfactor", PNAS, Vol. 106 No. 17, pp. 6883-6884.

Garfield, E. (1955), "Citation indexes to science: a new dimension in documentation through association of ideas", Science, Vol. 122, pp. 108-111.

Garfield, E. (1972), "Citation analysis as a tool in journal evaluation", Science, Vol. 178 No. 4060, pp. 471-479.

Garfield, E. (2006), "The history and meaning of the Journal Impact Factor", JAMA, Vol. 295 No. 1, pp. 90-93.

Gross, P.L.K. and Gross, E.M. (1927), "College libraries and chemical education", Science, Vol. 66, pp. 385-389.

Jacso, P. (2000), "The Number Game", Online Information Review, Vol. 24 No. 2, pp. 180-183.

Jacso, P. (2001), "A deficiency in the algorithm for calculating impact factors of scholarly journals", Cortex, Vol. 34 No. 4, pp. 590-594.

Jacso, P. (2009a), "Five-year impact factor data in the Journal Citation Reports", Online Information Review, Vol. 33 No. 3, pp. 603-614.

Jacso, P. (2009b), "Google Scholar's Ghost Authors", Library Journal, Vol. 134 No. 18, pp. 26-27.

Jacso, P. (2010), "Eigenfactor and Article Influence Scores in the Journal Citation Reports", Online Information Review, Vol. 34 No. 2.

Leydesdorff, L. (2008), "Caveats for the use of citation indicators in research and journal evaluations", Journal of the American Society for Information Science and Technology, Vol. 59 No. 2, pp. 278-287.

Ma, N., Guan, J. and Zhao, Y. (2008), "Bringing PageRank to the citation analysis", Information Processing & Management, Vol. 44, pp. 800-810.

Martin, B.R. (1996), "The use of multiple indicators in the assessment of basic research", Scientometrics, Vol. 36 No. 3, pp. 343-362.

Moed, H.F., van Leeuwen, T.N. and Reedijk, J. (1999), "Towards appropriate indicators of journal impact", Scientometrics, Vol. 46 No. 3, pp. 575-589.

Nisonger, T.E. (2000), "Use of the Journal Citation Reports for serials management in research libraries: An investigation of the effect of self-citation on journal rankings in library and information science and genetics", College and Research Libraries, Vol. 61 No. 3, pp. 263-275.

Pendlebury, D.A. (2009), "The use and misuse of journal metrics and other citation indicators", Archivum Immunologiae et Therapiae Experimentalis, Vol. 57 No. 1, pp. 1-11.

Pinski, G. and Narin, F. (1976), "Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics", Information Processing & Management, Vol. 12, pp. 297-312.

Rousseau, R. (2001), "Journal evaluation: Technical and practical issues", Library Trends, Vol. 50 No. 3, pp. 418-439.

Rousseau, R. (2009), "What does the Web of Science five-year synchronous impact factor have to offer?", Chinese Journal of Library and Information Science, Vol. 2 No. 3, pp. 1-7.

Rousseau, R. and the STIMULATE 8 Group (2009), "On the relation between the WoS impact factor, the Eigenfactor, the SCImago Journal Rank, the Article Influence Score and the journal h-index", available at E-LIS archive, ID: 16448; http://eprints.rclis.org/16448/

Spearman, C. (1906), "Footrule for measuring correlation", British Journal of Psychology, Vol. 2, pp. 89-108.

Thomson Reuters (2009), "New Journal Citation Reports", available at http://thomsonreuters.com/content/press_room/sci/350008

West, J., Bergstrom, T. and Bergstrom, C.T. (2009), "Big Macs and Eigenfactor Scores: Don't Let Correlation Coefficients Fool You", available at http://arxiv.org/PS_cache/arxiv/pdf/0911/0911.1807v1.pdf

Yin, C.-Y., Aris, M.J. and Chen, X. (2009), "Combination of Eigenfactor™ and h-index to evaluate scientific journals", Scientometrics, DOI: 10.1007/s11192-009-0116-9.