XC Survey Report

Current System


Opinions of Currently Used OPACs

Of the 66 respondents who identified their system (Q1), 21 (31.8 percent) use some version of Voyager, ranging from 5 to 6.2, and 18 (x percent) use an Innovative Interfaces product. Another 11 respondents (x percent) report that they use Aleph and nine (x percent) that they use Sirsi. Other named systems are Horizon, Evergreen, and GEAC Advance. Two respondents (x percent) use a locally developed system.

With few exceptions, respondents are not happy with their OPACs (Q2). Fifty-one respondents (x percent) do not love their OPACs, some expressing frustration or even outright hostility. Four respondents (x percent) do love their OPACs and six (x percent) are neutral.

Top Issues with Currently Used OPACs

The top issues expressed in these complaints are…



Difficulty of customization (42 instances)



Inadequacy of search functions (31 instances)



Opacity of results and lack of grouping or faceting (27 instances)



Limitations of the user interface (16 instances)



Lack of Web 2.0 functionality (9 instances)



Backend problems (8 instances)



Lack of integration with databases or other systems (8 instances)

There are also numerous complaints about…



Lack of access to data (7 instances)



Difficulty finding journals and the articles in them (6 instances)



Lack of updates (7 instances)



Lack of an API (application programming interface) (6 instances)



Usability problems (6 instances)


What OPAC Libraries Would Buy If Money Were No Object (Q4: If you could buy any commercial catalog product right now (money is no object), what would you buy and why?)

When asked which commercial catalog product they would buy if money were no object, respondents clearly favored Endeca (17 instances). Other popular choices were III Encore and Millennium (7 instances) and Aquabrowser (6 instances). Fewer respondents chose Evergreen (4 instances), Google (3 instances), and Primo (3 instances). Products that were chosen least were Aleph (2 instances), Talis' new product (2 instances), WorldCat (2 instances), Sirsi (1 instance), and Amazon (1 instance). Amazon's programming was even more popular in the respondents' comments than in their selections, perhaps reflecting that it is not commercially available (3 instances of comments).

Openness to testing an open-source user interface (Q5: If you could install an alternative, open-source user interface to work alongside your existing ILS, and if it promised to solve most of your OPAC problems, how likely would you be to at least install it and try it out?)


When asked whether they would be likely to try an alternative, open-source user interface that promised to solve most of their OPAC problems and that would work alongside their existing ILS, the response was overwhelmingly positive. Sixty responses were positive and four were negative.

The respondents' biggest concern was with the new interface's ease of use, installation, and maintenance, a concern commonly voiced by respondents with limited IT resources (6 instances). Other reservations were over whether the library's acquisitions branch would be willing to buy an open-source product after testing it (3 instances) and over support issues or the ability to back out changes if they were unhappy with the open-source product (3 instances). One such respondent clearly stated that the presence of paid support staff and an active user community would be a deciding factor.

Features


These are ordered by "response total," not "response average." Also, I have only included features that received 400+ total "raffle ticket" responses.



When asked to evaluate a range of new OPAC features, the two most popular features were the ability to work alongside the respondents' existing library servers to provide new features to end users (x) and the presence of a faceted search interface (x). Other popular features were an integrated user interface that searches across digital and non-digital resources simultaneously (x), a Google-like search box with no need to select an index (x), and "Did you mean…" spelling correction (x). Many respondents were also interested in seeing flexible ways to view search results, including relevance, popularity, or availability (x), and a search system that produces better results for simple keyword searches (x).
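To make the feature terminology concrete, the sketch below shows roughly what a faceted, Google-style keyword search with "Did you mean…" suggestions might look like when issued against a Solr index (Solr appears later among the open-source software respondents have installed). The endpoint, core name, and facet fields are hypothetical and are not taken from the survey.

    # Illustrative sketch only: a single-box keyword query with facets and
    # spelling suggestions against a hypothetical Solr core named "catalog".
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    params = urlencode([
        ("q", "origin of species"),     # one Google-like box, no index to pick
        ("defType", "edismax"),         # forgiving keyword parsing
        ("facet", "true"),              # turn faceting on
        ("facet.field", "format"),      # facet results by material format
        ("facet.field", "language"),    # ...and by language
        ("spellcheck", "true"),         # "Did you mean..." suggestions
        ("wt", "json"),
    ])
    with urlopen("http://localhost:8983/solr/catalog/select?" + params) as resp:
        results = json.load(resp)
    # e.g. {"format": ["Book", 812, "DVD", 37, ...], ...}
    print(results["facet_counts"]["facet_fields"]["format"])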



6. Let's say you have 100 raffle tickets to distribute among the following great new OPAC features. If you put more than ten tickets in the box, you're *sure* to get that feature. If you put around eight tickets in a box, you'll *probably* get that feature. If you put five or fewer tickets in a box, you still *have a chance* of getting that feature but it's less likely. Distribute your 100 tickets to whichever boxes you want, allocating the most tickets for the features you find most desirable. Be sure to use up all your tickets.

Respondents who answered this question/Respondents who skipped this question: 64/4


New OPAC Feature | Response Total | Response Average
Faceted search interface | 545 | 10.69
Optional grouping of related works in search results | 337 | 8.02
Advanced search options made easier to use | 177 | 6.56
Search system that returns better results for simple keyword searches | 438 | 10.43
Google-like search box with no need to select an index | 472 | 10.98
Integrated user interface that searches across digital and non-digital resources (books, articles, digital repositories, DVDs and more) at the same time | 497 | 10.57
"Did you mean…" spelling correction | 467 | 9.16
Amazon-like suggestions for further titles instead of only linking to subject headings | 342 | 7.60
Browseable lists of content (for example, eJournals, Databases, new books, and so on) | 248 | 7.09
Works alongside your existing library servers (catalog, metasearch, OpenURL link resolver, authentication server, repository, course management system) to provide new features to end users | 578 | 10.51
Personal showcase pages for institutional/faculty-created content | 79 | 5.27
Authoring and collaborative environment for the creation and use of scholarly content | 81 | 5.06
Tools to support the finding, gathering, use and reuse of scholarly content (e.g., RSS feeds, blogs, tagging, user reviews) | 385 | 7.86
Display of the most relevant fields for different media on results screens | 194 | 6.06
Display of the relevant request options based on circulation status of the item | 291 | 7.46
More ways to view search results, including relevance, popularity, availability, etc. | 443 | 8.52
"More like this" suggestions | 304 | 7.60
User-defined email notifications | 161 | 5.75
User-defined communities with custom views | 111 | 4.83
Support for harvesting by third parties | 250 | 7.58


Metadata Questions


Respondents were asked what metadata schema they currently use or plan to use in the near future for their general non-digital collections. MARC21 was clearly the most popular (58 incidents). The next most popular were MARCXML (19 incidents), Dublin Core (19 incidents), and EAD (Encoded Archival Description) (18 incidents).
(question 7)


The two most popular metadata schemas for respondents' specialized non-digital collections are EAD (Encoded Archival Description) (46 incidents) and MARC21 (45 incidents). Dublin Core (21 incidents), MARCXML (17 incidents), VRA Core (10 incidents), and MODS (Metadata Object Description Standard) (9 incidents) were the other commonly used metadata schemas.
(question 8)



For their institutional repository, respondents use Dublin Core most commonly (42 incidents). The second tier of schemas used for institutional repositories includes MODS (22 incidents), METS (Metadata Encoding & Transmission Standard) (20 incidents), and EAD (19 incidents). Other commonly used schemas are MARCXML (15 incidents), PREMIS (15 incidents), MARC21 (12 incidents), and VRA Core (10 incidents).
(question 9)
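For readers who have not worked with simple Dublin Core, the sketch below builds the sort of minimal oai_dc record an institutional repository typically stores. The title, creator, and handle are invented for illustration and do not come from any respondent.

    # A minimal, hypothetical simple Dublin Core (oai_dc) record.
    import xml.etree.ElementTree as ET

    DC = "http://purl.org/dc/elements/1.1/"
    OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
    ET.register_namespace("dc", DC)
    ET.register_namespace("oai_dc", OAI_DC)

    record = ET.Element(f"{{{OAI_DC}}}dc")
    for element, value in [
        ("title", "Sample electronic thesis"),              # invented values
        ("creator", "Doe, Jane"),
        ("date", "2007"),
        ("type", "Text"),
        ("identifier", "http://hdl.handle.net/0000/0000"),  # hypothetical handle
    ]:
        ET.SubElement(record, f"{{{DC}}}{element}").text = value

    print(ET.tostring(record, encoding="unicode"))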


For their other digital collections, respondents favor Dublin Core (39 incidents) as their metadata schema. Other popular schemas are EAD (33 incidents), MODS (29 incidents), MARC21 (25 incidents), and METS (23 incidents).
(question 10)


There are a variety of other metadata schemas that respondents use currently or plan to use soon. The most common is MARCXML (14 incidents), closely followed by Dublin Core (12 incidents), EAD (12 incidents), METS (12 incidents), PREMIS (12 incidents), and MODS (11 incidents).
(question 11)

(should we refer to the response total or the response percent?)


Not sure how to summarize question 12’s responses.



Respondents favored using METS for storing digital objects (23 incidents), with MPEG21-DIDL also commonly used (5 incidents).
(question 13)
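As a rough illustration of what "using METS for storing digital objects" involves, the sketch below assembles a skeletal METS wrapper: a descriptive metadata section, a file section, and a structural map. The object ID and file name are invented; this is not the XC design or any respondent's profile.

    # Skeleton of a METS wrapper for one digitized item (hypothetical values).
    import xml.etree.ElementTree as ET

    METS = "http://www.loc.gov/METS/"
    XLINK = "http://www.w3.org/1999/xlink"
    ET.register_namespace("mets", METS)
    ET.register_namespace("xlink", XLINK)

    mets = ET.Element(f"{{{METS}}}mets", OBJID="demo:0001")

    # descriptive metadata section wrapping, e.g., a DC or MODS record
    dmd = ET.SubElement(mets, f"{{{METS}}}dmdSec", ID="DMD1")
    wrap = ET.SubElement(dmd, f"{{{METS}}}mdWrap", MDTYPE="DC")
    ET.SubElement(wrap, f"{{{METS}}}xmlData")       # the record itself goes here

    # file section pointing at the content files
    filesec = ET.SubElement(mets, f"{{{METS}}}fileSec")
    grp = ET.SubElement(filesec, f"{{{METS}}}fileGrp", USE="master")
    f = ET.SubElement(grp, f"{{{METS}}}file", ID="FILE1", MIMETYPE="image/tiff")
    ET.SubElement(f, f"{{{METS}}}FLocat", LOCTYPE="URL",
                  attrib={f"{{{XLINK}}}href": "page-001.tif"})

    # structural map tying the files into a page order
    smap = ET.SubElement(mets, f"{{{METS}}}structMap")
    div = ET.SubElement(smap, f"{{{METS}}}div", TYPE="page", ORDER="1")
    ET.SubElement(div, f"{{{METS}}}fptr", FILEID="FILE1")

    print(ET.tostring(mets, encoding="unicode"))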


When asked what metadata schemas they use for purposes besides resource discovery, such as preservation and rights management, many respondents report that they use PREMIS (18 incidents). METS was also commonly used (7 incidents).
(question 14)


When asked if they re-use any other descriptive metadata from other sources, respondents most commonly report using MARC21 (7 incidents). Other commonly used sources are ONIX (4 incidents), DC (3 incidents), and MODS (3 incidents). Respondents additionally cite Amazon (2 incidents), MARCXML (2 incidents), OCLC (2 incidents), and UniMarc (2 incidents).
(question 15)



Not sure how to summarize question 16's responses.


Nearly three quarters of respondents use OCLC's Connexion client or browser (44 incidents), with the remaining quarter batch loading their own data (16 incidents).
(question 17)





Not sure how to summarize the responses to question 12.


12. What changes in metadata schema do you plan to implement within the next three years?

Respondents who answered this question/Respondents who skipped this question: 36/32

Comments:



None.



I'm planning on extending PatREST to start handling much of the social data we'll be harvesting.



Well, in the near term we'll be investigating SolrXML schemas and different approaches
to integrating our content with Sakai



Adoption to the extent feasible of RDF. The MIT SIMILE project seeks to address the issues that arise with the presence of so many metadata standards and choices. Others not mentioned but in various anticipated use: ONIX, PRISM, MPEG-21



Move any ad hoc schemas to standard ones; move Fedora FOXML to METS when
available



Modified Dublin Core for local digital and institutional repository collections; METS investigation



No specific changes; would like to be able to support PREMIS



fulltext



LC implements new releases of or updates to MARC 21, MODS/METS, EAD, as they are issued. We expect to scale up our implementation of PREMIS for Web capture and for the National Digital Newspaper Program, a long-term effort co-sponsored by the National Endowment for the Humanities (NEH) and the LC Office of Strategic Initiatives/National Digital Information Infrastructure and Preservation Program (NDIIPP) to develop an Internet-based, searchable database of U.S. newspapers now in the public domain, to be hosted on LC servers. LC, through its Network Development and MARC Standards Office, is preparing to explore use of MADS for authority data in XML.



VRA core; PREMIS; Use metadata profiles



Migrate to more robust rights schema when available; Use Encoded Archival Context (EAC) as export from authoring tool; Locally defined data management schema; Watch the adoption of MPEG21-DIDL versus METS



We will be increasing our use of non-MARC21 metadata. For instance, in the Vital-Fedora repository now being implemented we'll use schemas that we are not using now (except in small ways): METS (Metadata Encoding & Transmission Standard), PREMIS, MODS (Metadata Object Description Standard), Dublin Core, MIX (NISO Metadata for Images in XML), MARCXML. We must implement a rights schema soon. I would include a URI naming schema (handles) as well.



Not sure, but we plan to expand how we take data in



Switch all archival finding aids to EAD; Plan to move to more use of XML schemas for interoperability; Plan to use METS wrappers to exchange data between various databases and our catalog; Plan to use METS for storing archival files; The state is finalizing implementation software that will expedite the use of statewide EAD standards



Depends on what system(s) we decide to use.



We are beginning to plan for an Institutional Repository -- it's too early to know what metadata we'll be using for that. We are in the process of implementing Archivist's Toolkit and plan to export Finding Aids in EAD. I'm not sure what other capabilities of Archivist's Toolkit we will use.



Move to XML and the schemas that are expressed in XML. NLM DTD




More metadata harvesting to feed metadata out for pick up by harvesters



None



No current plans.



We hope to routinize metadata use in the institutional repository. This may mean conversion of metadata created in a variety of formats in the past to the kind of format we want to use primarily in one repository. This is just an idea at this point.



Sorry, I am not involved in the process.



Not sure...



Mix and match with RDF



Not sure



None



We will be increasing our use of non-MARC21 metadata. For instance, in the Vital-Fedora repository now being implemented we'll use schemas that we are not using now (except in small ways): METS (Metadata Encoding & Transmission Standard), PREMIS, MODS (Metadata Object Description Standard), Dublin Core, MIX (NISO Metadata for Images in XML), MARCXML. We must implement a rights schema soon. I would include a URI naming schema (handles) as well.



Would like to use METS and EAD more for special and digital collections



Getting uncataloged collections into collection-level EAD and item-level Dublin Core.



I'm not in a position to know.



Mainly to move from MARC21 to enhanced Dublin Core



We are currently participating in a pilot project for an Institution Repository. This involves
DSpace. Permanent choice of schema has not been decided. See also #15.



Encoding of access rights information, although we have not settled on a schema.



Introducing METS and MODS to IR



We will likely be using more schema than we currently do.



Transition from local standard (GDMS) to MODS is under consideration.





Not sure how to summarize question 16.


16. If you plan to make any changes to your use of descriptive metadata from other sources in the next 24 months, please describe them.

Respondents who answered this question/Respondents who skipped this question: 27/41

Comments:



n/a



incorporating into unified search; crosswalking to MODS



see above



fulltext, abstracts



The LC Acquisitions and Bibliographic Access Directorate is considering agreements to obtain more completed descriptive metadata (completed cataloging) from vendors. We would likely prefer data in UTF-8. We would also like to expand to additional sources for e-journals in our ERMS -- from the ISSN Centre, the consortium EZB (Elektronische Zeitschriftenbibliothek), Serial Solutions, etc.



Various non-standard metadata for digital collections



A few things can be noted. - Directly or indirectly, OCLC will continue to be the dominant provider of metadata for the library. - For printed materials and online versions of traditional publications we will probably get much more of our cataloging from vendors such as YBP, Serials Solutions, and publishers and aggregators. - We may need to change our source for online serial cataloging records from Serials Solutions to MARCit! (from Ex Libris.) - We'll be increasingly dependent on metadata from Ex Libris for its products: Verde, SFX, MetaLib. - As we do digital conversions of source materials such as efforts for book preservation or slides and photos conversion we'll draw on our own catalog records in the former case and on existing non-digital metadata. Re-use of existing metadata and transformation of metadata from one format to another will become commonplace and routine.




Expect to take in multiple formats



We have considered using Syndetic Solutions.



n/a



Possibility of using ONIX as a source for descriptive metadata



no



Metadata plans for media resources are still evolving, as we work with different sources.
We have no concrete plans for rights metadata yet, but will need to develop them within
the next 24 months.



No changes planned. We purchase MARC records.



I would like to test Onix



n/a



same as above.



N/A



None



A few things can be noted. - Directly or indirectly, OCLC will continue to be the dominant provider of metadata for the library. - For printed materials and online versions of traditional publications we will probably get much more of our cataloging from vendors such as YBP, Serials Solutions, and publishers and aggregators. - We may need to change our source for online serial cataloging records from Serials Solutions to MARCit! (from Ex Libris.) - We'll be increasingly dependent on metadata from Ex Libris for its products: Verde, SFX, MetaLib. - As we do digital conversions of source materials such as efforts for book preservation or slides and photos conversion we'll draw on our own catalog records in the former case and on existing non-digital metadata. Re-use of existing metadata and transformation of metadata from one format to another will become commonplace and routine.



Mainly the use of EAD for collection-level descriptions.



don't know



Get more records from Information Providers - still unknown as to format - some will be MARC21



We may, depending on the results of the IR pilot project. In addition, we will be creating a new database for library material/resources that will be used on our website. We also have been exploring Content Management Systems. All of these may include the adoption of one or more metadata schemas.



don't know of any



ONIX



In GDMS, our local standard, we plan to encode subject terms in a more granular fashion.




Implementation


Respondents were asked a few questions concerning their experience with open-source software. All of those who answered had installed open-source software in their library (62 incidents). This is a frequent occurrence, with many having installed software within the last year (18 incidents). Of these 18, 3 respondents had done so that day and 3 within the previous week. Another 5 respondents had installed open-source software in their library during 2006, and 3 had done so in 2004 or 2005.
(question 18a)


Unsure how to do question 18b; should I tally the most commonly loaded or just list all these programs below?

What did you install?




Lucene/Solar



Shelflister



Eprints and OAI PMH



sakai



we looked at MIT's self archiving



dSpace



drupa



Sesame



Search engine



Ruby packages



Php, MySQL



Greenstone Digital Library



LINUX OSes, mediawiki, subversion



WordPress, Drupal, DokuWiki



Drupal



evergreen



dlxs



Virginia Tech ETD software



D-Space



Swish (search engine)



dot project



A calender software



don't know



Moodle CMS



Not for catalog but for other uses



dspace



LibraryFind, Reserves Direct, Archivist's Toolkit



PM wiki, AW stats, Drupal, Apache,
MySQL



Lucene, Apache, Cocoon,



DSpace



DSpace, LOCKSS, PKP Open Journal Systems (OJS), Open Conference Systems (OCS), Request Tracker RT, dotProject, Jabber, Plone, DokuWiki, MediaWiki, Wordpress, Apache, MySQL, Perl, Unix, PHP, Python



Fedora, Maven, Tomcat, subversion,
Apache, Sakai, Fex, Elated, Netbeans,
Firefox, Thunderbird, Greasemonkey,
Python, Perl, Eclipse, Cygwin, Linus,
MsSQL, TortoiseSVN



Lucene, Tomcat, Jackrabbit, exchanger xml editor, timesheet, mantiss, + many open source API for internal development






The installation of the open-source software was most commonly done either by the library's IT department (12 incidents) or the library staff (9 incidents). Occasionally the installation was performed by the university's IT department (5 incidents) or by software programmers or developers (5 incidents).
(question 18c)


The technological infrastructure of the libraries was generally under the control of the libraries themselves (50 incidents). It was under the control of a central ITS department in only a few cases (7 incidents).
(question 19)

The respondents whose IT is centralized outside of the library were asked what issues might arise with regard to implementing and working with an open-source, library-centric application like XC. Common issues were staff availability, server access, and development issues, each cited by 2 respondents. Other issues were security, expertise, central support for specific applications, storage space, and coordinating the installation of security patches across departments.
(question 20)
When asked what internal obstacles to downloading, installing, and using a product like XC
they could anticipate at their institution, the most common concerns were per
sonnel
availability for installation and maintenance (7 instances) and lack of resources to customize
the product (6 instances). Time (5 instances) and expertise/training (4) were the next most
common concerns. After that came access/authorization, perfo
rmance load/need for
dedicated servers, resistance to something new, and security (3 instances each). Platform and
dependencies issues were also common (3 instances), with one respondent clarifying that
they are a mixed Mac/PC/Linux environment, so Mac su
pport would be an issue.
Compatibility issues and lack of administrative understanding of the reinstallation’s priority
were other common concerns (2 instances each). (
Question 21)

The vast majority of respondents said that they would want to build additional applications on XC's platform beyond the supplied user interface (54 incidents). 9 respondents skipped this question, although 5 directly said they would not build additional applications.
(question 22)

The additional applications that respondents are most commonly interested in building on the XC platform are course management integration (3 incidents) and federated searching (3 incidents). Integration with existing library resources or campus authentication systems were also common potential applications (2 incidents), with respondents considering integration with existing home-grown e-reserves, OpenURL, and ERMS systems, with Sakai and other systems, with ILL, e-journals, and image databases, and with campus authentication and related customization based on the identity of the user population. Customization in other ways was also a common interest, with respondents desiring to make custom end-user interfaces to non-MARC databases, personalized collections and interfaces, customized ILL, e-journals, databases/OpenURL, and image databases. Specialized subject portals were also considered a potential development, such as pulling together digital sources or giving different faces or catalogs for different media types. Respondents were also interested in collaboration tools, widgets, request/delivery services for remote storage, syndication of content in other discovery environments, linking with CMS, linking with their own digital repository ERM module, web-based apps for searching, instruction, and researching, creating new mash-ups as needed, leveraging content of objects in addition to their description, data mining, statistical reporting, and an OAI data provider. Also, one respondent would give XC the ability to search 3 Innovative catalogs at three different colleges as if they were 1 catalog, with searching the 3 databases as well as a lower priority. This respondent hoped that this would be a built-in feature because they were certain that programming this feature themselves would take a long time due to their limited resources.
(Question 23)
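Since respondents mention harvesting and an OAI data provider, the sketch below shows the kind of OAI-PMH request such a provider would answer from a third-party harvester. The repository base URL is hypothetical and not taken from the survey.

    # Illustrative OAI-PMH harvest request (hypothetical endpoint).
    from urllib.parse import urlencode
    from urllib.request import urlopen

    base_url = "http://repository.example.edu/oai"     # invented endpoint
    params = urlencode({
        "verb": "ListRecords",        # standard OAI-PMH verb
        "metadataPrefix": "oai_dc",   # ask for simple Dublin Core records
        "from": "2007-01-01",         # incremental harvest since this date
    })
    with urlopen(f"{base_url}?{params}") as resp:
        print(resp.read(500))         # XML response: <OAI-PMH>...<ListRecords>...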


20. If IT is centralized outside of your library, and you imagine that this might make it difficult to implement and work on a new open-source library-centric application such as XC, what do you think the issues might be?

Respondents who answered this question/Respondents who skipped this question: 24/44

Comments:



Basic SysAdmin on UNIX servers centralized under ITS department, but all other
operations are under library control.



Security, expertise, staff availability.



The Library Director is also head of our central ITS department (Information and Library Services). I think this benefits the library a lot. But it is still hard getting ITS time (e.g., programming time) for library projects, since library projects are competing with all the other campus projects for ITS staff resources.



Not applicable - British Library



Issues should not be insurmountable in our environment, but could involve levels of access to servers, central support for specific applications, control over development environment.



Access, storage space, development issues



n/a



We have a 'split/shared' infrastructure. This has been a challenge when it comes to compiling and maintaining software. In particular, 'security' patches need to be well coordinated.



N/A



n/a



We have a merged library/IT environment - if this were seen as a priority of the combined department, there would not be a problem.



We have a split environment where application software support is in the Library, but most support for server hardware, operating systems, and DBMS (e.g. Oracle) is in the university ITS department. Our ITS department is mainly a Windows shop, so any system that can't be installed on a Windows server would require either extra support from Library staff, or a third-party support contract.



Our entire institution is a library and archives of which the IT branch is a part.



n/a



While the overall technological infrastructure of the library is under the control of the library, our ILS is managed in a consortial environment in which we have not traditionally had direct access to our server. Certain computing/networking aspects are also centralized under Academic Computing.




21. What internal obstacles to downloading, installing, and using a product like XC might you anticipate at your local institution?

Respondents who answered this question/Respondents who skipped this question: 57/11

Comments:



Should not be a problem



Security, expertise, staff availability.



Installing XC would have to be defined as a
project and ITS would have to put it on the
list of projects it was going to do that year. It would not be hard to make it a high priority
for the Systems Librarian (for work that could be done within the software once it was
loaded and everything at the
server administration level was in place). If ITS had to do
the programming to get XC to search 3 college catalogs as 1, it would be a much bigger
project (see 22 and 23).



Possibly compatibility issues but no specific obstacles identified at the moment



resources (people, time, training)



Time and expertise. We have few staff dedicated to running our current ILS. Using an open-source replacement would be unlikely to save much time (initially) and might require redeployment of staff with the needed skills.



Incompatibilities with systems currently on our servers, lack of administrative
understanding of the importance of pursuing this work.



no obstacles at all



Access, authorization, security performance load -- big servers and storage capabilities



Apart from coordinating with OIT on networking issues, none. Usual time and resource considerations apply.



Depending on the level of complexity of the installation, it might require a high level
admin. This might put constraints on the time to install



That would be wonderful. Issues regarding security, stability, usability, interoperability, and scalability need to be resolved.



Librarian and library staff buy-in; inertia of having been with III for so long; shortage of programming time and expertise



The only worry would be low staffing resources to develop it.



None, but it would have to outpace other options



It isn't clear to me that dressing up local search, no matter how attractively, is a good strategic investment. I'm not convinced that libraries can compete in search; our value-add is in fulfillment and services.



Staff to learn, customize, and support it. Secondary issue -- if we need a server for it, money is not the biggest problem; space in the server room is the biggest problem.



Lack of people resources



staff resources



No real technical obstacles assuming the software can be configured to work with our
ILS.



Time and personnel resources most often, not technological resources.



The complexity of our organization sometimes bogs down implementation.




The only one would be lack of adequate staffing because of a number of competing priorities.



Resistance to something new



Lack of resources both staff and budget, esp. for ongoing support



Lack of ability to support our ILS. Ability to support the application within the library. Local interest and support for the application.



Platform and dependencies. Staffing. Need for additional hardware. We share our
ILS with three other campuses.



Interest in the catalog has decreased because of our Aleph implementation. The remaining enthusiastic stakeholders are technical services managers and public services management. These folks are more concerned with low-level Aleph operational bugs than with new and improved technology.



Web Administrators are not amenable to testing new products or services.



We are a mixed Mac/Linux/PC environment -- support for Macs may be an issue.



Differing opinions on value of product; workload issues



We don't anticipate Academic Computing would block us from using it. The catalog is
hosted
by a consortium.



Platform compatibility authentication



Academic Computing wouldn't put a block to use



Staffing and support for customization and maintenance.







23. If you would be interested in building additional applications on the XC platform, what applications would interest you most?

Respondents who answered this question/Respondents who skipped this question: 44/24

Comments:



collaboration tools, widgets



Custom end-user interfaces to non-MARC databases.



We would want XC to be able to search 3 Innovative catalogs (at three different colleges) as if it were one catalog. Perhaps we would want it to also search the databases of the 3 colleges, but that would be a lower priority. We would hope XC would have this capability built-in so we didn't have to program it. If it weren't built in, but if programming this functionality were a possibility, we would want our ITS to add this functionality.



Don't know yet



Request and delivery services (e.g., from remote storage); course management integration; syndication of content in other discovery environments.



We don't know at this time.



link with our cms link with our digital repository erm module



probably more web-based applications for instruction, searching, and researching



Depends on what the XC platform will provide.




This depends on what is available by default. At a minimum we may want to build an
interface to our course management system.



personalized collection and interface



integrating library content into Sakai and other systems at our institution (SIS portal, etc).



Ability to creatively build new mashups as needed.



Not sure what is meant by 'platform', but definitely want to leverage the content of objects in addition to description



Unknown at this time



data mining



Hard to say without knowing more about the capabilities of the XC platform, but interfaces to link resolvers, federated search systems, ldap, etc.



For sure integration with our campus authentication and related customization based on
the identity of the user population.



Ties to course management system; federated searching of databases



not sure at this time, but we usually always make maximum use of any flexibility to add on to open source systems.



This really depends on the framework that you develop and what's accessible. I can see that we might want different faces on the catalog (i.e. for different media types) or might want to add specific features or content.



Hard to know until we see it. We have home-grown e-reserves, OpenURL, and ERMS systems, so integration with existing systems is what I would anticipate at first.



Statistical reporting, interfaces to Storage facility software, interfaces to 'myUB', i.e. webservices for other University web applications.



specialized interfaces with built-in defaults, e.g. search only e-theses; search only children's literature.



Until we saw the 'out-of-the-box' features, it would be hard to say



With staffing available, we'd like to customize and integrate with our CMS (new Webgui hosted by Academic Computing), federated searching (we use Webfeat hosted Webfeat now), ejournals and databases/openurl (we use Serials Solutions hosted now with Marc records in our catalog for eresources), ILL (we use Clioweb on site windows server 2003), image databases (Luna local, Contentdm hosted at consortium now), DSpace, etc.



specific subject portals pull together digital sources results manipulation



We would love to have an OAI data provider but wouldn't promise to build it....


I'm unsure how to summarize question 24 or if I should just report the responses as they are.



24. What resources do you have for programming work for library projects in terms of staff (headcount), equipment, and IT support (for software development)?

Respondents who answered this question/Respondents who skipped this question: 52/16

Comments:



2 professional programmers, 3 librarian/programmers; 3 UNIX production servers, 2 development servers; Campus-wide subversion which allows for a repository for code



3.5 FTE for application support, but none for development.



1-2 people could be assigned to this project. The answers to question 25 are for ITS staff, not library staff in particular. There are no programmers on the library staff. One library staff member does some work with CSS and is familiar with Javascript and XML, but does not have programming-level knowledge.



A large IT department of over 100 staff with access to any equipment required. Many
projects are undertaken on a regular basis and contract staff are brought in to fill gaps



3 staff members



6 staff with some direct programming responsibility; equipment as needed.



We have two FT developers, we also hire student workers for special projects (but funding for them needs to be allocated in advance on a per-project basis). We also have at least two librarians with some programming experience, but development is not their primary role/responsibility.



4 computer analysts 6 computer technicians



3 server, 3 support



We have a 12-person Systems Department, including one full-time software developer. We maintain over 20 servers and can call on OIT support when needed.




1 sysadmin 2 developers solaris/ linux servers



our library has a very strong in-house technical team and should be able to handle most if not all technical issues involved.



2 FTE (1 windows, 1 linux programmer); SQL server, various Linux servers; we have our own IT department, so support should not be an issue.



The Library Technologies team is 4 people right now, with only one who specializes in
interface programming.



2 prof. librarians with IT background,
4 technicians



8 developers; 3 system administrators; 2 network specialists; 2 Web developers



The library technology dept includes 22 staff (support specialists, system librarians, system programmers, programmer/analysts, Sr. researchers and system architects), and we run over 60 servers. Additional technologists work w/in other library depts.



Depends on the support for the project. If supported probably 1.5 fte programming.
Equipment is usually not an issue.



1 application developer, 1 server manager, 1
development server, ability to contract
to campus IT for other customized programming.



Distributed over a number of departments, something like 5-10 people, equipment tends to be linux or unix based, different folks have different programming language skills.



3 staff and a manager - equipment as we order it (not normally an issue)




none at this time



We have two IT groups depending on the applications: Staff Desktop support, 6 computer professionals; Web and Web app support, 6 computer professionals; ILS support, 1/2 of two positions computer professional; Public computer lab support includes server support: 4 computer professionals Mac, 5 for PC



2 fte + 1 dedicated testing server



Two full-time staff in Library with programming experience (unfortunately mostly in the Windows ASP environment), though time for new projects has to be balanced with support for existing applications. No IT support for software development. Equipment - desktop equipment is adequate for software development; test-bed servers can usually be obtained by recycling old equipment.



Have a development server and mid-range computers; maintain own servers; have own IT department in library (limited time available)



We have 2 staff capable of library software development but with other responsibilities related to Aleph and other library systems. Also one staff member maintaining Aleph tables and configurations, who acts as an application administrator rather than a developer.



60



6 programmers; production and staging Unix servers. Good relationship with campus IT.



3 programmers from Digital Initiatives, with computer science backgrounds; 2
programmers currently managing ILS; possibly 4 web programmers in our Systems Dept.



Current staff including graduate assistants, 2 FTE; equipment at the library includes small servers running Debian GNU Linux, FreeBSD, Red Hat Linux, MacOSX, Solaris, Oracle license; Academic computing also offers server space for perl, php, mysql, coldfusion, etc.



12 developers, 4 systems folks



Staff: +/- 10; Equipment: Unix, Linux and Solaris servers; IT support for software: 2



I’m not sure how to summarize or condense the table of responses for question 25.

25. How many current members of the library staff can program in the following languages, and what is your library's level of proficiency in each language?

Respondents who answered this question/Respondents who skipped this question: 49/19


Number of Programmers

Language | 0 | 1 | 2 | 3 | 4 | 5 | >5 | Response Total
CSS | 2% (1) | 11% (5) | 26% (12) | 22% (10) | 2% (1) | 7% (3) | 30% (14) | 46
C++ | 21% (9) | 29% (12) | 26% (11) | 2% (1) | 5% (2) | 2% (1) | 14% (6) | 42
Python | 42% (17) | 35% (14) | 18% (7) | 0% (0) | 0% (0) | 2% (1) | 2% (1) | 40
PERL | 0% (0) | 20% (9) | 33% (15) | 17% (8) | 2% (1) | 9% (4) | 20% (9) | 46
PHP | 4% (2) | 15% (7) | 30% (14) | 20% (9) | 4% (2) | 4% (2) | 22% (10) | 46
ASP | 30% (11) | 27% (10) | 27% (10) | 3% (1) | 3% (1) | 5% (2) | 5% (2) | 37
Java | 15% (7) | 28% (13) | 21% (10) | 9% (4) | 6% (3) | 4% (2) | 17% (8) | 47
ColdFusion | 53% (19) | 39% (14) | 3% (1) | 3% (1) | 0% (0) | 3% (1) | 0% (0) | 36
JSP | 31% (11) | 31% (11) | 14% (5) | 3% (1) | 3% (1) | 3% (1) | 17% (6) | 36
Javascript | 4% (2) | 16% (7) | 31% (14) | 9% (4) | 13% (6) | 4% (2) | 22% (10) | 45
XML | 4% (2) | 28% (13) | 11% (5) | 19% (9) | 4% (2) | 9% (4) | 26% (12) | 47
XSLT | 7% (3) | 27% (12) | 18% (8) | 23% (10) | 7% (3) | 7% (3) | 11% (5) | 44
AJAX | 23% (10) | 33% (14) | 28% (12) | 12% (5) | 2% (1) | 0% (0) | 2% (1) | 43
Flash | 36% (14) | 36% (14) | 13% (5) | 8% (3) | 0% (0) | 3% (1) | 5% (2) | 39
Ruby | 37% (16) | 42% (18) | 14% (6) | 5% (2) | 0% (0) | 0% (0) | 2% (1) | 43



Library's Level of Proficiency

Language | Beginner (0-1 years) | Intermediate (2-3 years) | Advanced (>3 years) | Response Total
CSS | 7% (3) | 30% (13) | 64% (28) | 44
C++ | 10% (3) | 48% (15) | 42% (13) | 31
Python | 39% (9) | 35% (8) | 26% (6) | 23
PERL | 7% (3) | 36% (16) | 57% (25) | 44
PHP | 14% (6) | 24% (10) | 62% (26) | 42
ASP | 19% (5) | 41% (11) | 41% (11) | 27
Java | 13% (5) | 32% (12) | 55% (21) | 38
ColdFusion | 35% (6) | 41% (7) | 24% (4) | 17
JSP | 12% (3) | 42% (10) | 46% (11) | 24
Javascript | 10% (4) | 36% (15) | 55% (23) | 42
XML | 12% (5) | 22% (9) | 66% (27) | 41
XSLT | 18% (7) | 28% (11) | 55% (22) | 40
AJAX | 28% (9) | 50% (16) | 22% (7) | 32
Flash | 23% (6) | 46% (12) | 31% (8) | 26
Ruby | 46% (13) | 39% (11) | 14% (4) | 28



If they decided to install XC, 41% of the respondents would not be likely to consider increasing their programming staff to customize the interface or build new user features. 22% probably would, and 13% definitely would. 42% report that they would probably have the resources to customize XC; 11% said they definitely would.

How to summarize "no"/"not likely" responses to a "not likely" question?

61% of respondents said that they would probably be able to dedicate enough resources to download, install, and support XC. 20% said they definitely would, with 13% being unsure. 4% said it was unlikely, and 2% said they would not have the resources.
(question 26)


26. If you decided to install XC...

Respondents who answered this question/Respondents who skipped this question: 54/14



Question | Definitely | Probably | Not Likely | No | Not Sure | Response Total
Would you consider increasing your programming staff in order to get the most out of XC, e.g. customize the interface or build new features for your users? | 13% (7) | 22% (12) | 41% (22) | 11% (6) | 13% (7) | 54
Is it likely that you would have the resources to do this? | 11% (6) | 42% (22) | 15% (8) | 13% (7) | 19% (10) | 53
Would you be able to dedicate enough resources (people and budget) to download, install, and support XC, an open-source product? | 20% (11) | 61% (33) | 4% (2) | 2% (1) | 13% (7) | 54


Technical Requirements

In this section we ask about servers, authentication, and other technical matters.

When asked whether their library department had any platform preferences, the most common responses were Solaris (24 incidents) and Linux (20 incidents). The next most common were RedHat ES (11 incidents) and Windows (11 incidents), with MacOS also being notable (3 incidents). As for hardware, the most popular was Sun (16 incidents), followed by Dell (13 incidents). The next level of popularity included HP/Compaq (5 incidents), Apple (3 incidents), and IBM RS6000 (3 incidents). 4 respondents report having no hardware preference. For databases, respondents favored MySQL (27 incidents) or Oracle (26 incidents). These were distantly followed by Postgres (6 incidents), MS SQL Server (4 incidents), and Sybase (1 incident). Other common preferences include Java (10 incidents), Tomcat (6 incidents), Apache (5 incidents), and Perl (5 incidents). Respondents also shared preferences for PHP (3 incidents), Spring (3 incidents), Ruby on Rails (3 incidents), and Cocoon (2 incidents).
(question 27)


The two most commonly used database servers are MySQL (96.4%) and Oracle (78.6%). These were followed by Microsoft SQL (50%) and PostgreSQL (31.1%). DB2 and Sybase were each used by 8.9% of respondents.
(question 28)

The majority of respondents would not need to display a user interface in a language other than English (67.3%).
(question 29)

Of those who needed non-English language support, the top need was for Spanish (12 incidents). CJK (Chinese, Japanese, and Korean) was the second most needed selection. These languages were separately indicated as well, with 4 respondents needing Chinese, 3 needing Korean, and 2 needing Japanese. Other common language requests include Arabic (5 incidents), French (4 incidents), Hebrew (2 incidents), and "All Including Non-Roman" (2 incidents). Respondents indicated that French is mandatory for Canadian government institutions, which by law must provide services in both English and French. Single responders additionally needed Cyrillic, Danish, German, Italian, Tagalog, and Vietnamese.
(question 30)

When asked which repository or eprint systems they use, the respondents cite DSpace most commonly (46.2%). Other popular systems were ContentDM (34.6%), Sakai (21.2%), Fedora (19.2%), and Greenstone (19.2%).
(question 31)

When asked what, if anything, they need their existing institutional repository to do that it does not already do, respondents listed new features including support for long-term archiving; an interface that is easy to customize and will not require revisiting with each upgrade of the software; the ability to offer different levels of access to materials in a manner that is easy to set up and maintain; accepting and delivering a variety of interactive materials such as websites; web page capture and archiving; a management interface; security access; control provisioning; statistics for search/indexing/reports/usage; streaming content from the repository; versioning (editing/changing records after submission); integration with all other applications; and integration with UMI for theses and dissertations. Many respondents also desired various improvements, such as to the search interface, ingest workflows, access layer, federated searching, and the user interface.
(question 32)

The most commonly used authentication system is LDAP (83.6%). The next two most common were EZproxy (70.9%) and VPN (60%). After those, Shibboleth (38.2%) and CAS (10.9%) were most common.
(question 33)
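As a concrete but hypothetical illustration of the LDAP case, the sketch below checks a patron's credentials by binding to a directory server with the third-party ldap3 package; the host and DN are invented, and no specific respondent's setup is implied.

    # Hypothetical LDAP bind used as an authentication check (pip install ldap3).
    from ldap3 import Server, Connection

    server = Server("ldaps://ldap.example.edu")           # invented host
    user_dn = "uid=jdoe,ou=people,dc=example,dc=edu"      # invented DN
    conn = Connection(server, user=user_dn, password="secret")
    if conn.bind():        # True means the directory accepted the credentials
        print("authenticated")
        conn.unbind()
    else:
        print("rejected")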

Not sure how to summarize question 34’s responses.

32. If you have an institutional repository, is there anything you need it to do that it does not already do?

Respondents who answered this question/Respondents who skipped this question: 31/37

Comments:



we've barely scratched the surface!



A lot of different features, but they are often context-dependent.



This box is not big enough. better search interface



Yes, this is too open ended a question...



LC does not have an institutional repository.



No



Improved ingest workflows Improved access layer



Improve federated searching; support for long-term archiving.



Our Fedora repository is a collections repository, not an institutional repository. It is in an implementation stage.



NLM's repository, PubMed Central, is only used for journal literature and does not
currently handle other materials. NLM is currently looking for a repository to handle other
types of materials.



Easy to customize user interface that will not require revisiting with each upgrade of the software. Ability to offer different levels of access to materials in a manner that is easy to set up and maintain. Accept and deliver variety of interactive materials, such as Web sites. Web page capture and archiving would be great. Wish DSpace had more flexibility for presentation of data.



too soon to say



n/a



N/A



n/a



Management interface, security access, control provisioning. Search/ indexing/ reports/
statistics.



as far as I know, our library has developed in-house a knowledge management system to handle digital contents in the library



Those of us filling out this survey arn't equipped to answer this question! If you really want to know, you can contact Sam Kalb (Library Assessment and IT Projects Coordinator) at kalbs@post.queensu.ca



N/A



Lots of work needed on the interface. Support for streaming content from the repository.



Nothing




Our Fedora repository is a collections repository, not an institutional repository. It is in an implementation stage.



Probably better support for digital content such as streaming services.



don't know



We have just begun experimental work - so cannot say at this stage.



work with all of our other applications



Integration with UMI for theses and dissertations. Our ETD system is currently very clunky -- users submit the ETD's using a combination of UMI/Proquest's interface for PDF files, and an in-house system to submit their 'native' (e.g., MS Word) files since UMI doesn't support that, then Library staff manually move the files and meta-data into Virginia Tech's ETD system for our local repository. Would prefer to have the end-user interface be based on (customizable) software based here, with the files and meta-data then being exported to UMI. (Of course I realize that this would require cooperation from UMI.)



In pilot phase only.



Versioning, editing/changing records after submission



Better visual appearance of the user interface



Keep usage statistics.



Not sure how to summarize question 34's responses; should they just be listed, listed by respondent, or tallied overall?

34. What systems do you use authentication with?

Respondents who answered this question/Respondents who skipped this question: 45/23

Comments:



People need to authenticate for the course management system, some parts of the catalog ('What do I have checked out?'), the library intranet, ILL (Illiad), and multiple electronic databases when accessed from off-campus.
campus.



ADS, Innovative, and Drupal



CAS and LDAP. I guess we also use Voyager (Barcode) for patron empowerment



OPAC, ILLiad, DSpace, SFX, Licensed e-resources ('Vera')



License content, EReserves, ILL, patron services, ETD repository (but not single sign on
yet)



The Library's E-Resources systems.



Electronic resources, e-reserves, various locally developed applications



Teleworkers who need access to systems behind the LC firewall are 'authenticated' through VPN; this includes the LC ILS acquisitions and cataloging modules. In general LC does not authenticate end users, since our end-user community is truly global. We do not authenticate users before they access the LC Catalog. Subscription e-databases are accessible to end users only on the LC campus.



All licensed resources, and some digital collections




ILS OPAC/CIRC modules (MyLibrary) Web applications Digital Library Management System (DAMS)



Catalog, Metalib, Repositories, Computer log-in, ILLiad,...



all systems.



Most, i.e., Voyager, MetaLib, Sakai, multiple local applications



ezproxy



Employees are authenticated before using most NIH resources. Public access resources (e.g., web sites) require no authentication.



Access to licensed electronic resources from records in the online catalog (EZProxy /
Shibboleth / locally written ap with III API) Remote access to the IR development server
for staff (VPN) Campus email, signon to
VPN, authenticated Internet access from public
PCs in library (Kerberos) Online catalog (local and consortial union catalog) (III)



ALeph, Digitool, Metalib



ILL, Course reserves, OPAC account



Interlibrary loan, MyAccount feature in Voyager, electronic resource access, and other non-library systems.



As far as library systems: proxy server, library catalog (to view patron record, place
requests, etc.), ILLIAD.



Reader Admissions Document supply customers



all, except open opac on selected machines



Voyager, SFX, MetaLib, external e-resource vendors, consortial borrowing, locally-developed course management.



WebVoyage (currently implementing external authN with CAS) Locally-developed ERM (uses Library's LDAP) DD/ILL (currently uses campus LDAP, will be moving to CAS at some point) IR (uses separate ProQuest login)



library catalog databases & electronic resources; electronic theses; business applications



Not clear what this is getting at. If you mean catalog, ILL, etc., then we use authentication for remote access to proprietary resources, for ILL and document delivery, e-reserves, and internal applications.



EZProxy is the front end for electronic resources. EZProxy uses LDAP. VPN is used for off-campus and wireless use.



Sakai uses LDAP; we use EZproxy to auth to IP-controlled resources; VPN for internal network resources. Home-grown ILL system as well as EZproxy uses III patron API.



Interlibrary loan (VDX) Voyager OPAC (for patron requests)



Multiple!



ezproxy & shibboleth



most campus systems use
the central web sso system, as does our ILS.



Off-campus access to subscribed databases; My Library for the catalog; ILLiad interlibrary loan



Software downloads; some wireless networks; computer labs



Most, i.e., Voyager, MetaLib, Sakai, multiple local applications



LDAP



All vendor databases for off campus access; working toward authentication adaptor
against campus LDAP for Voyager.




ILS, OpenURL Resolver, EzProxy, Meta search product, Library Staff Wiki, ssh login into server boxes (linux), Campus Portal (with library pages).



Licenced databases All desktop machines for staff and the public Scheduling system
Reserves Streamed media Videoconferencing classrooms iTunesU (future)



Aleph 500 ILS system. EZproxy for off-campus access to licensed content. ILLiad for ILL. Variety of services developed in-house, such as our My Library application.



IIS EZProxy ERes ILLiad



Aleph, Storage, parts of Website, Course Reserve, some Digital Library collections



Metalib, Voyager, library websites, LAN, soon Illiad; ERES [Docutek]



Ejournals, Databases/Serials Solutions, federated searching/Webfeat, image databases
(CONTENTdm, Luna), Open Journals System, DSpace coming soon, etc.



Sirsi Electronic journals and databases Unix logins to servers and file storage E
-
mail
a
nd calendar access Fedora Local hosting of consortial resources (IP restricted).
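
Many of the responses above describe the same underlying pattern: a front end such as EZproxy, Sakai, or a home-grown ILL application checks a patron's campus credentials against an LDAP directory. Purely as an illustration (none of the respondents supplied code), a minimal sketch of that check using the Python ldap3 library might look like the following; the hostname, directory layout, and function name are assumptions, not details taken from any survey response. Real deployments would normally do this through the product's own configuration (for example EZproxy's LDAP settings or a CAS or Shibboleth identity provider) rather than in local code.

    # Illustrative sketch only: hostname, base DN, and directory layout are invented.
    from ldap3 import Server, Connection, ALL

    def campus_login_ok(username: str, password: str) -> bool:
        """Return True if the username/password pair binds against campus LDAP."""
        server = Server("ldap.campus.example.edu", use_ssl=True, get_info=ALL)
        user_dn = f"uid={username},ou=people,dc=example,dc=edu"  # assumed layout
        conn = Connection(server, user=user_dn, password=password)
        try:
            return conn.bind()  # a successful simple bind means valid credentials
        finally:
            conn.unbind()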


56% of respondents said that a single authentication system was not used across their catalog, interlibrary loan, and IR systems. Of those 44% who did use a single system, 65% said that it is the same system that the campus uses. (question 35)

Metalib (Ex Libris) is the most commonly used metasearch software, with 54.1% of respondents using it. WebFeat (21.6%) and Serials Solutions Central Search (18.9%) are the next most commonly used software, with MetaFind (5.4%) and LibraryFind (2.7%) far behind. (question 36)

Of those respondents who use an OpenURL resolver, the majority use SFX (Ex Libris) (62.3%). The next most commonly used are Article Linker (Serials Solutions) (26.4%) and WebBridge (III) (5.7%). (question 37)
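
For readers unfamiliar with these products, an OpenURL resolver such as SFX or Article Linker receives a citation encoded as key/value pairs in a URL and responds with the library's copies and services for that item. The sketch below is offered only as an illustration and is not drawn from any respondent; the resolver base URL and citation values are placeholders.

    # Illustrative sketch only: the resolver URL and citation data are placeholders.
    from urllib.parse import urlencode

    RESOLVER_BASE = "https://resolver.example.edu/openurl"  # assumed base URL

    def article_openurl(atitle, jtitle, issn, volume, issue, spage, year):
        """Build an OpenURL 1.0 (KEV) query for a journal article citation."""
        params = {
            "url_ver": "Z39.88-2004",
            "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
            "rft.genre": "article",
            "rft.atitle": atitle,
            "rft.jtitle": jtitle,
            "rft.issn": issn,
            "rft.volume": volume,
            "rft.issue": issue,
            "rft.spage": spage,
            "rft.date": year,
        }
        return f"{RESOLVER_BASE}?{urlencode(params)}"

    # Example (placeholder citation):
    # article_openurl("An example article", "An Example Journal",
    #                 "1234-5678", "12", "3", "45", "2007")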

Respondents were asked what types of add-ons to their library catalog they currently provide. The most commonly reported were electronic resource management (ERM) software (72.9%) and Serials Solutions and like products (58.3%). Other popular add-ons were offsite storage management software (22.9%), cover images (20.8%), scheduling or calendaring software (12.5%), and user interface add-ons like Grokker, Primo, or Aquabrowser (10.4%). Others specified in the comments section were electronic reserves and tables of contents. (question 38)

Respondents who routinely send copies of their bibliographic and holdings data to a regional database or consortium most often describe a batch process: records are flagged or selected by cataloging date, extracted as MARC files, and FTP'd to OCLC or to a consortial union catalog. Libraries on Innovative's INN-Reach systems report that bibliographic and holdings data are contributed automatically, in real time, as records are loaded or updated locally. A smaller number of respondents do not send data at all. (question 39)

When asked what digital library products they used besides their ILS, respondents largely reported using course management systems. WebCT (38.8%) and Blackboard (36.7%) were the most common of these. Drupal (16.3%) and Sakai (16.3%) were also commonly mentioned. (question 40)

Respondents were asked what personalization features they have currently implemented in their library websites. The ability to notify patrons when new material matching a patron's interests (saved searches) is cataloged was the single most common feature, with 4 instances. Of these, 3 notify patrons by email and 1 by RSS alerts. Personalization and circulation information were other common features. Additional features included tagging, reviews, rating, and commenting on items, as well as having personalized e-reserves. (question 41)

When asked how they are using RSS feeds in their library websites, the largest group of respondents said they were using them for news feeds or events (12 instances). Second most common was the use of RSS to inform patrons of new items being added to the catalog (9 instances). Other uses include publicizing library hours, publicizing when open-access research papers have been deposited into an institutional repository, feeding course reserves into Sakai, using OpenSearch interfaces, subject guides, and a database of databases. (question 42)

Respondents were asked what applications they offer for cell phone or PDA users. Many make the library catalog accessible, with 3 respondents using III's "AirPAC" and 2 reporting that their ILS provides a stripped-down version of the online catalog or a stripped-down webpage with library hours and other useful information. Other uses include RSS feeds for searches, renewal of books, and using SMS to send catalog records to cell phones. 2 respondents also mention that their health science libraries make resources available for cell/PDA users, including PubMed for handhelds, WISER for first responders in hazardous materials incidents, and AIDSinfo's PDA tools. One respondent additionally mentions using iPods to distribute course reserves material, and one respondent mentions a program called "Cocoon." (question 43)



39. Do you routinely send copies of your bibliographic and holdings data to a regional database or consortium? If so, briefly name the steps in the technical process.

Respondents who answered this question/Respondents who skipped this question: 42/26

Comments:



 No.
 Yes. Every other hour a batch export is run for our state consortial catalog.
 n/a




 no
 We don't actually send data out, but we are part of the INN-Reach system for the SUMMIT consortium.
 YES
 Yes.
 LC does not submit copies of its catalog data to a _regional_ database, but it distributes its bibliographic (and authority) data to OCLC via the LC Cataloging Distribution Service, which maintains a separate database of LC cataloging data for distribution purposes. LC exposes its EAD finding aids and its digital content via OAI-PMH.
 No
 Records are sent weekly to the California Digital Library to feed Melvyl, UC's Aleph-based union catalog. Records are sent, in real time, to the San Diego Regional Library Consortium's Innovative Interfaces' INN-Reach system.
 yes, sent to CDL.
 Yes - Technical services staff flag the records to send using a local application that writes record IDs to Oracle tables - Weekly processes extract the MARC records using a combination of local programs (to update the Oracle tables) and Voyager programs (to extract the MARC records) - Records are ftp'd.
 Yes, run Perl script to extract records added each week and send that to WRLC... another Voyager site
 Yes, OCLC WorldCat, other licensees through NLM-developed programs to extract and format the data.
 We catalog on OCLC, export the record to the local catalog and it is automatically transferred to the OhioLINK central catalog by the III software.
 No
 Yes. Mark in catalog, bulk process for exporting and sending to OCLC.
 We output MARC records and send them to OCLC to have our holdings attached (at the bib level, not the LHR level). Our records are automatically uploaded to the INN-Reach catalogs as they are loaded into our catalog. Updates to our catalog records automatically queue the record for updating in the central catalog.
 Yes, to COPAC and other data customers. Data is extracted from the ILS, customised for specific customer needs, and files sent to them.
 Not exactly; copies are sent to OCLC; to LTI (for authority control), BNA (for TOC enrichment), RAPID (for doc delivery). Copies sent by periodic extract and ftp using various db criteria.
 We FTP OCLC PromptCat records to ReQuest, Connecticut's statewide database. This is not a complete record of our holdings, but correcting the situation has not been a priority for the organization.
 NO
 Yes. We do batch load into OCLC once a month.
 yes and no. We send serials holdings information to OCLC.
 Unsure, but I am guessing that we extract bib and item records and ftp them to the central server for import.
 Weekly files are sent to Library and Archives Canada. We upload files to OCLC when creating new bibliographic records.
 Summit (Orbis Cascade Alliance)
 export records for AMICUS (Library & Archives Canada)




 various paths in use to get records/holdings into OCLC. We are also part of an INN-Reach (III) system; the bib and holdings data are automatically sent from the local system to the INN-Reach system in real time.
 Original cataloging done in OCLC & exported to Innovative. Holdings updated to OCLC for material purchased with catalog records (when the vendor's license permits, the records are added to OCLC), all done by batch. Batch file sent to OCLC for holdings deletes.
 No
 Yes - Technical services staff flag the records to send using a local application that writes record IDs to Oracle tables - Weekly processes extract the MARC records using a combination of local programs (to update the Oracle tables) and Voyager programs (to extract the MARC records) - Records are ftp'd.
 Yes, OCLC via batch uploads.
 We do a weekly export to OCLC for all the OCLC 'symbols' for our campuses (3 for one campus, one each for the 3 other campuses). The export has been scripted to retrieve records in the ILS with particular OCLC codes and cataloged dates. We might still be doing the RLIN exports (though that might have been stopped already).
 yes, Access PA; it's an INN-Reach system, so it's automatic
 Yes. Records are uploaded to shared catalog -- INN-Reach.
 SUNY Union Catalog
 Yes. When the bib record is complete, the cataloguing team editor adds it to the product queue for batch loading. The products are selected weekly and files put in an ftp directory which OCLC comes and picks up.
 no
 The cataloging that is done in our local Voyager database is added to the CARLI shared catalog automatically overnight.
 no
 To OCLC; pull records by cataloging date, use prep-MARC and FTP the file to OCLC.
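
Several of the workflows described above follow the same shape: select the MARC records added or changed since the last run, write them to a file, and FTP the file to OCLC or a consortial union catalog. As a rough illustration only (no respondent supplied code, and the host, credentials, file names, and use of the 005 field as a change date are assumptions), such a weekly job might be sketched in Python with pymarc as follows. Many sites instead flag records in the ILS database, as two of the responses above describe.

    # Illustrative sketch only: paths, host, credentials, and the 005-based date
    # filter are assumptions, not any respondent's actual process.
    from datetime import datetime, timedelta
    from ftplib import FTP
    from pymarc import MARCReader, MARCWriter

    CUTOFF = datetime.now() - timedelta(days=7)

    def extract_recent(src_path: str, out_path: str) -> int:
        """Copy records whose 005 (date of latest transaction) falls after CUTOFF."""
        count = 0
        with open(src_path, "rb") as src, open(out_path, "wb") as out:
            writer = MARCWriter(out)
            for record in MARCReader(src):
                if record is None:  # skip records the reader could not parse
                    continue
                f005 = record.get_fields("005")
                if f005 and datetime.strptime(f005[0].data[:8], "%Y%m%d") >= CUTOFF:
                    writer.write(record)
                    count += 1
            writer.close()
        return count

    def send_to_consortium(out_path: str) -> None:
        """Upload the weekly extract to a (hypothetical) consortial FTP server."""
        ftp = FTP("ftp.consortium.example.org")
        ftp.login("library", "secret")  # credentials would come from configuration
        with open(out_path, "rb") as fh:
            ftp.storbinary("STOR weekly_export.mrc", fh)
        ftp.quit()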



41. If your library website offers personalization features, what kind do you offer?

Respondents who answered this question/Respondents who skipped this question: 25/43

Comments:



 We offer tagging, reviews, ratings, and comments integrated into the OPAC via a set of Drupal modules
 n/a
 n/a (but want them)
 N/A (planned for future)
 Personalized e-reserves and checked out items through portal
 LC does not offer personalization features.
 Catalog offers personalization in: choosing default search preference; # of records per page display; bookbag; saved searches. Additional personalization: My Contents (TOCs of selected journals). No faculty/students/staff views.
 Our OPAC includes account info, bookbag, self renewal and requests. We provide a similar account info channel to our university portal. Federated search tool includes bookbag-like facility, customized database sets for searching and display preferences.




 LocatorPlus offers saved searches, preferences; Entrez offers save records, set defaults, bookbag (shopping cart)
 n/a
 None.
 My Millennium in the OPAC allows patrons to save searches and get e-mails when new materials matching their searches are cataloged.
 none
 MyLibrary (older version)
 Below is a list of personalized services provided by Libraries & Media Services. Additional services will become available over time: Gartner Information Portal, Library Skill Modules, Personal Services, Suggestion Form, Logout.
 N/A
 Our OPAC includes account info, bookbag, self renewal and requests. We provide a similar account info channel to our university portal. Federated search tool includes bookbag-like facility, customized database sets for searching and display preferences.
 portal login, email and RSS alerts for circulation content.
 My Library (in-house implementation of concept). Also a form on home page where users can enter their ID, and get a list of course reserves and research guides for courses they're enrolled in (this feature is also part of the My Library implementation).
 Coding to allow personalization via browser.
 Email alerting.
 my portal stuff, fonts & colors, citation management
 'My ILL'-like features
 circulation information
 Ability to create watch lists and receive notification of new items.
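
The saved-search alerting that several respondents describe (for example, My Millennium e-mailing patrons when newly cataloged material matches a saved search) reduces to comparing each week's new records against stored search terms and notifying the matching patrons. The following sketch is purely illustrative; the data structures, addresses, and SMTP host are invented and do not describe any vendor's implementation.

    # Illustrative sketch only: record and search structures, addresses, and the
    # SMTP host are invented for the example.
    import smtplib
    from email.message import EmailMessage

    def send_alerts(new_records, saved_searches, smtp_host="smtp.example.edu"):
        """new_records: dicts with 'title' and 'url'.
        saved_searches: dicts with 'email' and 'terms' (a list of keywords)."""
        with smtplib.SMTP(smtp_host) as smtp:
            for search in saved_searches:
                hits = [r for r in new_records
                        if any(t.lower() in r["title"].lower() for t in search["terms"])]
                if not hits:
                    continue
                msg = EmailMessage()
                msg["Subject"] = "New items matching your saved search"
                msg["From"] = "catalog-alerts@library.example.edu"
                msg["To"] = search["email"]
                msg.set_content("\n".join(f"{r['title']} - {r['url']}" for r in hits))
                smtp.send_message(msg)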

42. If you provide RSS feeds on your website or use RSS feeds in any other way, how are those RSS feeds used?

Respondents who answered this question/Respondents who skipped this question: 43/25

Comments:



 We have provided feeds to just about anything you can think of coming out of our catalog.
 RSS used for bookbags; plans to expand RSS feeds into other areas.
 Our subject guides and database of databases have RSS feeds, although I am not sure who uses them. Our ejournal list, EAD finding aids and catalog all have OpenSearch interfaces.
 We use a WordPress blog <http://news-libraries.mit.edu/blog/> for our news stories. The feeds are categorized by broad subject, which roughly corresponds to the different libraries on our campus. We use FeedDigest to generate code based on those feeds to embed in the individual home pages of each library, so their news headlines can also appear on their individual home pages. We also use Windows active desktop with web pages that have the feeds embedded (again via FeedDigest) and these appear as the desktop background of the public computers in each library. We also have some specific subject feeds for new titles from our catalog, see <http://libraries.mit.edu/help/rss/barton/>. We use FeedDigest to generate code based on those feeds so they can appear as web pages, such as this one for architecture: <http://libraries.mit.edu/help/rss/barton/architecture.html>.




 We feed EReserves listings from ReservesDirect (our open source release) into Blackboard courses
 N/A (planned for future)
 news
 New books list
 NEWS FEEDS, OPEN SEARCH
 inform users about events
 LC does provide some RSS feeds, mostly for news and information about events at LC.
 arXiv.org for new documents added
 News feeds; New items list
 To provide library news; To publicize open-access research papers deposited in the institutional repository
 News and course reserves feed into Sakai
 Current awareness for anyone who's interested
 To distribute news/announcements from the Library
 We don't offer feeds into or out of catalog yet
 Yes, for library news
 None.
 We will be implementing RSS feeds from the library catalog this summer.
 Our ERM offers RSS feeds of new products added to the database
 library catalog and library news
 We use RSS feeds for the library's 'What's New' column and for the hours listing.
 News feeds
 not yet
 We use RSS feeds to create changing content on our main website page -- news items and the like are posted to blogs; the RSS feeds are displayed on our front page.
 Library News Headlines (RSS produced by WordPress)
 New Titles, Current Awareness, etc.
 new books list
 News and course reserves feed into Sakai
 RSS feeds are generated from ILS content, and also links within the ILS are provided to selected outside feeds. Individual circulation data is available as RSS feeds, as well as interlibrary loan data. Also feeds of new book information provided directly from the ILS and from an outside app created with PHP/MySQL.
 I don't think we are doing this in any major way.
 update info
 Just beginning; no use data as of yet
 RSS feeds for: new items lists; our subject portal and database lists offer RSS feeds so users can bookmark databases or so we can put the lists in Blackboard
 Have just recently added an RSS feed to our online catalog.
 Announcements are available via RSS feed. III Millennium catalog software allows feeds.
 Library news RSS feed




 Notification of new stuff posted to IR. RSS feeds for new and trial databases are in the works; thinking about RSS for new books too.
 News blog uses WordPress
 new book notification
 Alerts for new videos, additions to the repository, new databases, library news and events.
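
The most common catalog-related use reported above is a feed of newly added titles. As an illustration only (the data source and URLs are invented, and real sites typically rely on their ILS or a product such as WordPress to produce the feed), an RSS 2.0 document for a new-titles list can be generated with nothing more than the Python standard library:

    # Illustrative sketch only: the input data and URLs are placeholders.
    import xml.etree.ElementTree as ET
    from datetime import datetime, timezone
    from email.utils import format_datetime

    def new_titles_rss(titles):
        """titles: iterable of dicts with 'title', 'url', and 'added' (datetime)."""
        rss = ET.Element("rss", version="2.0")
        channel = ET.SubElement(rss, "channel")
        ET.SubElement(channel, "title").text = "New Titles in the Catalog"
        ET.SubElement(channel, "link").text = "https://library.example.edu/newbooks"
        ET.SubElement(channel, "description").text = "Items added in the past week"
        for item in titles:
            node = ET.SubElement(channel, "item")
            ET.SubElement(node, "title").text = item["title"]
            ET.SubElement(node, "link").text = item["url"]
            ET.SubElement(node, "pubDate").text = format_datetime(item["added"])
        return ET.tostring(rss, encoding="unicode")

    # Example: new_titles_rss([{"title": "An example book",
    #                           "url": "https://catalog.example.edu/record=b1234567",
    #                           "added": datetime.now(timezone.utc)}])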

43. If you have implemented cell/PDA focused applications, or if you are planning to do so, please briefly describe what you are doing.

Respondents who answered this question/Respondents who skipped this question: 29/39

Comments:



 We use III's out-of-the-box 'AirPAC'
 n/a
 n/a
 plan to use RSS feeds for searches
 No
 Because of issues with user authentication, cell/PDA applications are not currently high-profile for LC. We are beginning to consider management/access to podcasts.
 None used
 no
 PubMed for handhelds. WISER is a mobile application designed to assist first responders in hazardous material incidents. AIDSinfo's PDA Tools
 We've implemented AirPAC, the Innovative mobile catalog. It is available from OSU's mobile site and from the library home page, although the software doesn't offer statistics, so we don't know how much use it gets.
 Looking into cell/PDA online renewal of books.
 Considering development of PDA UI for users, particularly Medical School students/faculty.
 None.
 n/a
 library catalog is accessible via PDA or other Internet-accessible handheld devices (AirPAC)
 n/a
 Nothing is formally planned at this time.
 N/A
 are looking at Cocoon for this
 no
 Not much. Our ILS provides a simple web interface for PDAs.
 Health sciences library folks work with their primary clients to improve PDA access to key library resources. At this time it is primarily clinical full-text resources.
 iPods for distributing course reserve material
 We support these as part of wireless service but demand has not led us to increase the use of these.
 SMS catalog records to cell phone




 Currently, we have a stripped-down version of the online catalog for use on cell/PDAs; also a few stripped-down web pages with library hours, etc.
 no
 Our Health Science Libraries (Chicago, Urbana, Peoria, Rockford) have made some electronic resources available through PDAs for physicians, medical students, etc.
 no

User Research


When asked whether they regularly conduct usability testing in their library, 56.9% of respondents answered yes. (question 44)

When asked whether they had done user research aside from usability testing, the most common response was that they had done focus groups for various reasons (8 instances). The next most commonly performed forms of user research were search log analysis (4 instances) and participation in LibQual (3 instances). Other types of research included studies of user behavior, needs, expectations, and satisfaction; orientation surveys of users' previous library experience; interviews about faculty cyberinfrastructure needs; exit surveys; and space-use surveys. One respondent additionally uses metrics to measure use of their web pages and has recently implemented the American Customer Satisfaction Index for its website. (question 45)



45. If anyone on your library staff has engaged in any user research aside from usability testing, please give a very brief description.

Respondents who answered this question/Respondents who skipped this question: 32/36

Comments:



 In the spring of 2006 we conducted a user needs assessment, called the Photo Diary Study. For project documentation, see: <http://libstaff.mit.edu/webgroup/userneeds> Final report available here: <http://libstaff.mit.edu/webgroup/userneeds/report.html>
 Quality Metrics project (Aaron Krowne, lead programmer)
 http://statsbiblioteket.dk/publ/fieldstudies.pdf
 Yes, but this person has recently left the Library.
 In 2004 the Library contracted with Outsell, Inc., to study user expectations and satisfaction with LC's collections and services. Outsell submitted the first draft of its report in December 2006. In addition, prior to the upgrade to Voyager with Unicode, LC conducted exit surveys of users in its reading rooms. LC also has metrics to measure use of its Web pages and has implemented the American Customer Satisfaction Index for its Web site.
 numerous projects: e.g. LibQual; space use survey; user behaviour in F2F, e-mail, chat environments; log analysis studies; DSpace usage; etc., etc.
 Focus groups, for various reasons
 We have used several methods: surveys to gauge user satisfaction for various library services, a survey and focus groups for faculty cyberinfrastructure needs, structured and unstructured interviews, usage statistics and log file analysis especially for search terms.
 Analysis of search logs, card sorting exercises, heuristic evaluations
 Has been described in a recent email from Sally Rogers to Nancy Foster
 Ongoing efforts: User feedback/assessment of instruction sessions; orientation surveys of user expectations and prior library experience; collaboration with CIS to learn about incoming student expectations and experiences with technology and research; focus groups with different user groups on library services and library presence (online and physical)
 None.
 We get feedback from focus groups. We've participated in LibQual.
 We have a person who does user testing on our new web offerings and on any other technical and non-technical issues of interest. Last year we did a survey of 100 graduate students and their library use.
 We have done some log analysis
 We have conducted several anthropological studies of users, based on similar studies conducted at the University of Rochester (with help from Nancy Foster).
 Focus groups
 focus group and regular survey regarding various kinds of user services
 We participate in LibQual every few years. We recently hired a user experience consulting firm to hold focus groups with library users. We do usability testing (sit-down site walkthroughs) with users irregularly... We sometimes create targeted SurveyMonkey surveys to detect interest in new library services.
 Not sure.
 our staff has done a fair bit of work on assessment, and much of that touches user behavior.
 Beginning to establish a usability program for our main web site. Conducting surveys/focus groups with graduate students on what the Library can do for them.
 We have used several methods: surveys to gauge user satisfaction for various library services, a survey and focus groups for faculty cyberinfrastructure needs, structured and unstructured interviews, usage statistics and log file analysis especially for search terms.
 Assessment Team Leader participated on original committee and in development of ETS's ICT (Information and Communication Technology Literacy) first drafts, including onsite delivery of instrument for testing purposes.
 We use a full-scale usability lab and 'quick and dirty' reference desk usability (ask some users about specific things). Also try to test key apps with JAWS software. Not sure we consider this 'research.'
 Speaking with faculty
 we do focus groups and specific feature studies from time to time
 Last year did usability testing of the library web site, which was the basis for a site redesign implemented in Summer 2006. This spring doing usability research on our online catalog, though this has shifted to a focus group methodology rather than actual usability testing.
 We review search entries for library website and OPAC. We review statistics for ejournal and ebook packages.
 Will be doing some user needs analysis this summer and fall for new website and other stuff
 not recently
 User behavioral research.

Participation


Respondents were asked questions about their potential participation in XC.


91% of respondents do not currently have a contract in place for an alternative interface to their library resources. 45% feel that they will be under pressure to sign a contract for such an alternative interface within the next 24 months. When asked if they would still consider implementing XC if its support were provided by contract with a commercial vendor, 92% would still consider implementing it. 67% would consider implementing XC if no support were available. 100% would be willing to implement some change in data load or metadata workflows in order to gain improved search capabilities for their users. (question 46)

Respondents were asked whether they have any concerns that, if they installed and ran XC, they might lose out on anything they currently get from their OPAC vendor because of contractual obligations. Most said no (28 respondents); 5 said they were not sure, 4 said yes, 3 gave conditional answers, 3 said that the losses would be minimal because of their high dissatisfaction with their OPAC, and 1 said that they would abide by their contract. Concerns were raised over whether the vendor could claim that XC interfered with system functionality, as well as over the significant change from having an even minimally supported product with defined expectations for problem resolution, upgrades, enhancements, bug fixes, etc. to an unsupported product. Respondents also considered the loss of "24x7" tech support, a defined enhancement request process, and error reporting. Lastly, a respondent said that XC would have to run in parallel to Aleph for a while to prove itself and that until XC is widespread, staff probably would not be weaned from Ex Libris. (question 47)


Question 48 is open-ended ("any additional comments").

46.

Respondents who answered this question/Respondents who skipped this question: 59/9

Question | Yes | No | Response Total
Do you currently have a contract in place for an alternative interface to your library resources (e.g. Endeca, Primo, or Aquabrowser)? | 9% (5) | 91% (53) | 58
Are you feeling that you will be under pressure to sign a contract for an alternative interface within the next 24 months? | 45% (25) | 55% (30) | 55
Support for XC might be by contract with a commercial vendor (such as IndexData). Would you still consider implementing XC if this were the case? | 92% (48) | 8% (4) | 52
Would you consider implementing XC if no support were available? | 67% (37) | 33% (18) | 55
XC would entail some change in data load or metadata workflows. Would you be willing to implement some change in order to gain XC's improved search for your users? | 100% (52) | 0% (0) | 52

47. OPAC vendors provide valuable contractual obligations. Do you have any concerns that if you installed and ran XC, you would lose out on anything that you currently get from your vendor?

Respondents who answered this question/Respondents who skipped this question: 49/19

Comments:



 No, we don't have concerns.
 We're already considered a rogue customer by our vendor; I don't see how XC would put us in any contractual problem, however.
 We really don't get much from our vendor.
 None.
 no
 No.
 no
 NO
 specialized software modifications unique to our user needs?
 LC would abide by the contract with Endeavor/The Ex Libris Group and with the requirements of the Federal Acquisitions Regulations.
 No
 No
 No.
 no - would still require procurement system from ILS vendor, just would not use OPAC.
 no
 No.
 No
 Not sure. If the vendor could claim that XC potentially interfered with functionality of their system, we could lose some of the support that we expect and require of them for use of their product. We do not expect library system vendors to be enthusiastic about their customers choosing alternative products to their own.
 No.
 Dissatisfaction is high enough that any losses would be minimal
 No.
 No.
 Don't know
 no
 The basic concern is moving from a supported product with defined expectations for problem resolution, upgrades, enhancements, bug fixes, etc. to an open-source environment. However minimal our current vendor support, this is a significant change.
 24x7 tech support
 don't know enough yet to be concerned




 No major concerns. Our experience with our OPAC vendor(s) has not been turbulence-free, so a stable, ongoing, and reliable open-source solution with an active user/developer community doesn't present a major worry for us.
 No, as long as support, a defined enhancement request process and error reporting exist
 We are a turnkey site, so yes. If XC were open source, we would not be able to place a phone call to have any and every problem solved.
 No.
 No
 again, the key for XC would be to measure up to other OSS initiatives, esp. Erik Hatcher's work
 ILS vendor management is important to us -- III has been a valuable development partner and we don't want to damage that relationship.
 No
 Yes
 No
 Depends on whether all existing services could be provided with XC.
 No, as XC would be an additional item, not a replacement of the vendor-provided OPAC.
 Don't know enough to know.
 Not to our knowledge, but with our vendor, who knows.
 yes
 If XC is an add-on (not a replacement) for our ILS, then I don't see why it would affect our vendor support. And we have not been thrilled with ILS vendor support either in terms of trouble-shooting or development of new features, so open source is looking better all the time.
 No
 XC would have to run in parallel to Aleph for some time to prove itself. Until XC becomes widespread, it is unlikely staff could be weaned from Ex Libris.
 no
 not at this time
 SIRSI is developing an Endeca-like product
 No.

48. Do you have any other comments on any of the issues raised in this survey?

Respondents who answered this question/Respondents who skipped this question: 34/34

Comments:



 We look forward to learning more at the two-day conference regarding the precise scope of the platform's intention to provide means to create "applications" (user interface, other, etc.). Thank you.
 Since I'm from OCLC, the survey didn't match my interest or position very well
 1) We'd like a copy of our responses (kyle.fenton@emory.edu) 2) Regarding 9d, if there were a strong open source community, we might consider implementing XC without a commercial vendor contract 3) Would like to see a rich API and/or web services exposed from XC 4) Would like the option not only of extending XC through additional plugin applications, but also of mashing up XC's services with external applications. I.e., a scholar's toolkit is a great idea, but we're not sure implementing it in the OPAC is the right approach. 5) 'XC' is a great brand name, but it's still somewhat unclear what it actually is.




 As (probably) the only customer with an existing add-on catalog, some answers were hypothetical.
 Thank you for the opportunity to participate in the survey. The LC responses under 'Participation' are naturally contingent on future budgeting, Congressional guidance, and Federal law.
 Would you be willing to share the survey results and will there be any follow-up?
 No
 My main concern is that XC might be just a project that goes against current evolution: we are actively dis-integrating the integrated library system. We are taking more and more things out of the catalog: 1. user management to identity management like Shibboleth; 2. electronic resources to Metalib and SFX; 3. business side to ERM and approval plans. Catalogs should have a simple mission easily understood by the campus community: keeping track of physical items. All the rest can better be accomplished by other systems. If XC is a platform that ties together all of these other systems, each with an easily identified mission, XC could be a success.
 no
 We have an Automated Retrieval System (HK); any user interface we provide will need to be able to send requests to this system.
 We hope these results are shared with those surveyed and posted on listservs such as lita-l
 In general, we are looking for a solution that improves the user experience in locating and using resources made available by the libraries, but it needs to offer a strategic advantage. Solutions that essentially just tweak what we already have without taking into account the possibility of and need for entirely new directions are not likely to win out. Although we answered above that we probably would consider implementing XC even without support right now, realistically demands on library and campus IT are increasing without any increases in budget. Our consortium faces the same problem. In the long run, the need for a new support model seems inevitable.
 Because we are in the process of implementing Endeca as an alternate OPAC and to support a union catalog (see http://catalog.fcla.edu) we would only entertain acquiring XC for assessment and comparison initially. Going further would depend on that evaluation.
 Open source is what we want!
 We would hope XC would have a robust user community that would offer support in its own way. Question 17: We use Connexion for in-house cataloging; we also batch load records from vendors (e.g., Serials Solutions; Marcive; PromptCat/Yankee). Question 34c: We'll be implementing LDAP for our library systems in the next year or so. Question 26c: If XC were able to give our users one view of three college catalogs, it would be almost a sure thing that we could dedicate the resources to implement XC. If it did not do that, it would be less sure.
 Assuming that XC is conceived as essentially an OPAC replacement (not a complete LMS), we would probably not realize significant staff savings, and would still need to be concerned with current and future integration with a range of products. Improvements in the local discovery environment are important to us, but will require largely new investment.
 You should understand that we blow in the ARL Wind.
 On some questions, we weren't sure whether we were being asked about the library OPAC or the library Web site in general. Also, 'catalog' seemed to be used synonymously for the OPAC and the non-public backroom processes.



 Sorry for not being helpful. I don't work in a library. If the URL works for a second try, I will forward this survey to our library system dept and they can provide more helpful input. Here is the contact person in our library you may reach: http://www.library.kent.edu/directory_details.php?id=41



 THANK YOU for doing this. It's long overdue. And good luck.
 Regarding Question 17: we use OCLC Connexion as well as doing batch loading (it's not one or the other).
 No
 indexing full-text content is essential for a next-generation catalogue. Should also look at options to distribute the catalogue widely (portable apps, etc.) and provide a Lucene index for deduping with other collections
 We are piloting a localized version of WorldCat.org this spring that we hope will realize many of our catalog-related desiderata and provide a syndication mechanism for library content and services.
 No
 The last three questions (XC commercial support, no support, change in data load and metadata) I left blank. I would want to know more detail about type of support/lack of support, and type of workflow changes before I could estimate a guess as to whether we could implement such changes or do with/without support.
 Consideration of XC would need to be done in context with other things going on (success of Primo, etc.). Not sure who else at U of Minnesota you surveyed (John Butler could give you better answers on some of these questions)
 There are a lot of places in this survey where black and white (yes and no) questions are not applicable. We would have appreciated some 'grey'.
 In question 6, I left several fields blank because we already have those features (e.g., Google-like search where you don't have to select an index). So some of the blanks are features that are very important to us, but that we already have.




 New Catalog would have to resemble and compete with Google and Amazon. LMS staff functions would have to match and compete with Aleph. XC would have to be compelling to garner interest while there is more interest in new electronic resources, digital collections, institutional repository, setting up our storage facility and downsizing the on-campus physical library space.
 You might be better off to concentrate on an open-source interface rather than the entire back-end system?
 We're very interested in how XC develops.
 not at this time
 Nope.
 It isn't clear to me that dressing up local search, no matter how attractively, is a good strategic investment. I'm not convinced that libraries can compete in search; our value-add is in fulfillment and services. (was a response to question 21)