
Koha is a comprehensive system that will intelligently run your library - large or small, real or virtual.


Best Answer - Chosen by Voters
I'm not sure exactly what you mean by "library management system": do you want to create a catalog of books?
Software to manage circulation? Software to manage acquisitions?

If you're hoping to do it all, you may be underestimating the size of your project. The industry term for a
product that does it all is "Integrated Library System" or ILS. When a library purchases an ILS, it generally
issues an RFP (Request For Proposal) that specifies what the system should do. When the Library of Congress
bought an ILS a few years ago, their RFP ran to almost 300 pages. Do a Google search for "Integrated Library
System and RFP" and you'll get links to several RFPs. Look at a few of these to decide what you have the time
& energy for.

There are open-source systems in production as well. The one gathering the most buzz right now is Koha (link in the Sources).

If you have specific ideas -- you wish your library's system would do this or that and you think you can do it
better (you're probably right!) -- talk to the librarians at your library. Chances are someone on staff would love
to talk about strengths & weaknesses of the library's system. From these conversations you could probably
come up with a practical project that doesn't reinvent the wheel.

Talk to technical services librarians, to library software specialists and to catalogers. As a technical services
librarian, I am delighted to talk about the system: it's very rare that anyone wants to talk about it beyond, "Why
is it down?" and "When will it be back?" I know of several libraries where students have programmed features
for the library system and where the projects were put into production upon completion. I would work with
students at my own library if any expressed interest.

Finally, a good resource for ideas & advice in programming for libraries is the code4lib mailing list (link in the
Sources).

Hope this helps, and good luck with your project!


Source(s):

Koha:
http://www.koha.org/

code4lib
http://dewey.library.nd.edu/mailing-list...


Features

Key features
• A full-featured modern integrated library system (ILS).
• Award-winning and open source: no license fee, ever.
• Runs on Linux, Unix, Windows, and MacOS platforms.
• Web-based.
• Can be fully integrated into your website.
• Copy cataloguing and Z39.50.
• MARC21 and UNIMARC for professional cataloguers.
• Tailored catalogue module for special libraries.
• Use as a document manager or digital library.
• Manage online and offline resources with the same tool.
• RSS feed of new acquisitions.
• E-mail and/or text patrons' overdue and other notices.
• Print your own barcodes.
• Serials management module.
• Full catalogue, circulation and acquisitions system for library stock management.

• Web based OPAC system (allows the public to search the catalogue in the library and at home).
• Simple, clear search interface for all users.
• Simple and comprehensive acquisition options.
• Koha is multi-tasking and enables updates of circulation, cataloguing and issues to occur
simultaneously.
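The feature list above mentions MARC21 cataloguing. As a rough illustration of what a MARC21 bibliographic record holds, here is a small Python sketch: each variable field has a three-digit tag, two indicator characters, and subfield code/value pairs. Tags 100 (main entry, personal name) and 245 (title statement) are real MARC21 tags, but the data layout and helper functions are hypothetical, not Koha's actual API or storage format.

```python
# Illustrative sketch of a MARC21-style bibliographic record as plain
# Python data. Tags 100 (author) and 245 (title) are real MARC21 tags;
# the helper names and dict layout are invented for this example.

def make_field(tag, indicators, subfields):
    """Build one variable data field."""
    return {"tag": tag, "ind": indicators, "sub": list(subfields)}

def get_subfield(record, tag, code):
    """Return the first matching subfield value, or None."""
    for field in record["fields"]:
        if field["tag"] == tag:
            for c, value in field["sub"]:
                if c == code:
                    return value
    return None

record = {
    # The leader is a fixed-length header; this value is a placeholder.
    "leader": "00000nam a2200000 a 4500",
    "fields": [
        make_field("100", "1 ", [("a", "Hedges, Stephen")]),
        make_field("245", "10", [("a", "A Koha Diary"),
                                 ("b", "implementing Koha at Nelsonville")]),
    ],
}

print(get_subfield(record, "245", "a"))   # A Koha Diary
```

A real system would serialize this structure to the ISO 2709 binary format or MARCXML; the point here is only the tag/indicator/subfield shape that "MARC21 for professional cataloguers" refers to.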



Koha is a full-featured open-source ILS. Developed initially in New Zealand by Katipo Communications Ltd
and first deployed in January of 2000 for Horowhenua Library Trust, it is currently maintained by a team of
software providers and library technology staff from around the globe.

• read more
• demo
• download

Koha Version 3 RC1 Released


Koha 3 RC1 is a release candidate for Koha 3.0; it will be followed by a stable release. Please report any bugs to bugs.koha.org.

Download. Read the Release Notes


Website Redesign Contest

The Koha community is proud to announce the selection of Plone as its CMS for the koha.org community website, as well as a Plone skinning/theming contest for a Koha-specific set of Plone stylesheets.

Latest News

• Koha 3.0.0 RC1 Released (24 Jun 2008)
• Koha 3.0.0 Beta Released (23 Mar 2008)
• Koha Plone Website Contest (10 Jan 2008)
• more news

Quick Links

• Documentation
• Development Wiki
• Discussions
• Koha for Windows

• Bugzilla
• Koha Version Control
• Koha Dev with Git
• Chat Information

© 1999-2005 The Koha Development Team & Katipo Communications Ltd.

http://www.koha.org/about-koha/


Koha : The First Open Source ILS


Koha is the first open-source Integrated Library System (ILS). In use worldwide, its development is steered by a
growing community of libraries collaborating to achieve their technology goals. Koha's impressive feature set
continues to evolve and expand to meet the needs of its user base.

Why Koha?
Full-featured ILS. In use worldwide in libraries of all sizes, Koha is a true enterprise-class ILS with
comprehensive functionality including basic or advanced options. Koha includes modules for circulation,
cataloging, acquisitions, serials, reserves, patron management, branch relationships, and more. For a
comprehensive overview of features visit the Koha feature map.

Dual Database Design. Koha uses a dual database design that utilizes the strengths of the two major industry-
standard database types (text-based and RDBMS). This design feature ensures that Koha is scalable enough to
meet the transaction load of any library, no matter what the size.
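The "dual database" split described above - an RDBMS side for transactional data and a text-based side for search - can be sketched in a few lines of Python. This is a toy model under stated assumptions: a dict keyed by item id stands in for the relational table, and an inverted index (word to set of item ids) stands in for the text-search engine. All names are hypothetical; it is not Koha's schema.

```python
# Toy sketch of a "dual database" design: a relational-style table keyed
# by item id holds transactional data, while a separate inverted index
# (word -> set of item ids) answers keyword searches. Layout and names
# are invented for illustration only.

catalog = {}   # item_id -> row (the "RDBMS" side)
index = {}     # word -> set of item_ids (the "text-based" side)

def add_item(item_id, title, on_loan=False):
    """Write to both stores so they stay in sync."""
    catalog[item_id] = {"title": title, "on_loan": on_loan}
    for word in title.lower().split():
        index.setdefault(word, set()).add(item_id)

def search(query):
    """Intersect the posting sets for each query word (AND search)."""
    words = query.lower().split()
    if not words:
        return set()
    result = index.get(words[0], set()).copy()
    for word in words[1:]:
        result &= index.get(word, set())
    return result

add_item(1, "Open Source Library Systems")
add_item(2, "Library Automation Basics")

print(search("library systems"))   # {1}
```

The design point is that circulation updates hit only the small keyed table, while keyword searches never scan it, which is why this split scales with transaction load.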

Library Standards Compliant. Koha is built using library standards and protocols that ensure interoperability
between Koha and other systems and technologies, while supporting existing workflows and tools.

Web-based Interfaces. Koha's OPAC, circulation, management and self-checkout interfaces are all based on standards-compliant World Wide Web technologies--XHTML, CSS and JavaScript--making Koha a truly platform-independent solution.

Free / Open Source. Koha is distributed under the GNU General Public License (GPL). More information on the GPL can be found here.

No Vendor Lock-in. It is an important part of the open-source promise that there is no vendor lock-in: libraries are free to install and use Koha themselves if they have the in-house expertise, or to purchase support or development services from the best available source. For more information about obtaining support visit the support page.

http://www.myacpl.org/about/koha/qa

Questions and Answers about Koha


An exchange between Pat Eyler, then Koha Kaitiaki, and Stephen Hedges, then Nelsonville Public Library Director, in January 2003. Koha went live at the Nelsonville Public Library the following September.

When did you first consider using Koha?

We started lurking on the Koha list about a year ago, though now I don't remember how Koha first came to our
attention. In January of this year, I'd seen enough to pull together a "tech team" consisting of me, the assistant
director, our webmaster, our systems administrator, and our circulation software supervisor. We decided to
embark on an experiment: Could we switch all of our automation software, from operating systems to web
tools, to Open Source by the end of this year? We decided that there was no "success" or "failure" involved, just
a simple research question with a "yes" or "no" answer. No matter the outcome, we would have gathered a lot of
information, and we made plans from the very beginning to share this information with the library community.

What was the initial draw of Koha?

Freedom. We want to use the Internet to offer some cutting edge information services to our library patrons, but
we realized that this would require us to have control of our automation and database software. We needed the
freedom to change things, to change the code if necessary, because the types of things we want to do are not
going to appear in commercial library software for years. (Commercial library software vendors are more
interested in the bottom line than the cutting edge.)

What drawbacks did you see with Koha?

The same drawbacks that plague many of the smaller Open Source projects, I suspect. First is the irregular pace
of development. Since Koha code development is a "spare time" activity for most of the programmers, the pace
of Koha's growth is hard to predict. We found ourselves waiting for crucial modules to be developed, sometimes
wondering if they ever would be developed, often suspecting that the answer to our original research question
would be "no" - we couldn't switch to a completely Open Source system. The second is the problem of
"splintering." If we took the developing Koha code and finished it to fit our needs, hard-coding the features we
needed and leaving out things we didn't need, we would have in effect created a new version of Koha that had
departed from the main development tree. That would mean that we could not take advantage of future upgrades
to the main tree. On the other hand, if we mustered our patience and waited, would Koha ever have enough of
the features we needed to be a viable automation system for our library? This is a significant problem: How do
the programmers know what the majority of libraries will need, and how do libraries know if Koha will
eventually have all the elements that are considered crucial?

You've made a decision to contribute to the development of Koha. When HLT first
commissioned Koha, they decided to make it Open Source hoping that other libraries
would step in and support development of new features as well. What are your thoughts
on this model?

We've come to realize that this may be the only viable model, and perhaps the answer to the question I just
posed. It's important that libraries do not look on Open Source as free software that they just download, press a
button, and all their problems will magically be solved. Open Source requires just as much commitment as
commercial software - you still only get what you pay for. Libraries should approach Open Source with the
notion that they will commit a lot of staff time to understanding the code and how the software does its job.
They should also be ready to commit financial resources as well, just as if they were still paying annual license
fees for commercial software. The difference is, with commercial software a big portion of the license fee goes
to research and development over which the library has no control, while with Open Source that same money
can fund the development of the software modules that the library really wants. HLT has paid their money and
received a product that works fine for their needs. Now other libraries can pay a little more money and receive
enhancements to that product that will make it fit their needs - without having to pay for the development of an
entire software package. HLT got what it wanted, we plan to get what we want, other libraries can pay and get
what they want. The libraries are paying a one-time expense that's probably less than their annual license fees,
so they win. The programmers get a very clear idea of what libraries want (money talks!), so they win. It's a
great model for success!

What kind of external support are you using, or do you foresee using as you convert onto
Koha?

Very little. We originally thought we would use a database consultant to assist in moving our data from our
current system to Koha. (This is a task that's typically handled by the "new" software vendor when libraries
switch from one commercial product to another.) We've recently realized, however, that the mechanics of the
data transfer are not as difficult as identifying which data we want to keep and which is superfluous. For that
reason, we'll probably do the data transfer ourselves. Otherwise, the tech team is working to learn the tools we
will need to maintain the software: Perl, Linux, MySQL, PHP, etc. We regard this learning process as part of our
investment in Koha. And remember, our original motivation in looking at Koha was the ability to offer some
cutting edge services, so we'll need to have a thorough understanding of the software for more than just
maintenance purposes.

What are the major milestones in the conversion process?

A big one was the release of Koha 1.2.0 in July. For the first time, we saw a product that looked like a viable
alternative to our current software. In late August, we held a day-long meeting of the tech team to review
everything that we had seen and learned so far in our research project, and we decided that Koha lacked only
three absolutely critical things for our needs: full MARC support, a Z39.50 server module, and a SIP2 or NCIP
module. To fill those needs, we decided to commit some financial resources to Koha development. Our future
plan is to move some of our data into the current version of Koha to see what changes we will need to make in
the Koha tables. (And we would then suggest to the Koha developers that these types of changes should be built
into some sort of configuration utility or template, rather than being hard-coded.) Once we have a little of our
data transferred and working, we will transfer the rest and plan to run Koha in tandem with our old circulation
system. This will not only allow staff to get familiar with the new system (Koha), it will also give them an
opportunity to request changes to screens, customized functions, etc. Once we find that Koha has become the
system of choice for the staff, we will shut down the old automation system.

When will your patrons be able to see Koha in action?

We expect that Koha will take over from the old system sometime during the summer of 2003.

What do you think you're going to see in terms of an ROI on your investment in Koha
development?

I expect that an initial investment of about $10,000 will save us about $10,000 every year, beginning in 2003.
We might also seek some technology grant money to extend our initial investment. I think our real ROI will not
be financial, however. Within the next few years, we fully expect that our web site will offer some of the best
online library services available anywhere in the world. That's the value we expect to get from investing in
Koha.


Case Studies


http://www.koha.org/about-koha/case-studies/hlt.html
HLT : Horowhenua Library Trust
Koha was originally developed for the Horowhenua Library Trust by Katipo Communications in 1999. The first version of Koha went live at HLT in January 2000. The following is the transcript of a presentation given by Rosalie and Rachel about how Koha came into being.

Koha : Free Library Software

by Rosalie Blake, Head of Libraries, Horowhenua Library Trust, Levin and Rachel Hamilton-Williams, General
Manager and Webmistress, Katipo Communications, Wellington.

Rosalie

The first thing I remember learning about computers, back in the early eighties, was "Never buy the hardware
and then try to make the software fit". This worthy precept was followed fairly quickly by "Never buy a
programme that someone promises to write for you".

Why then, in 1999, was I contemplating a course of action that defied both precepts?

In 1999 we had a system we'd used and loved for 12 years. It was simple to operate and ran fast at our branch
libraries over an ordinary dialup modem. But it was looking tired, and we suspected our networking system
wasn't going to stand the challenge of 2000.

I started the time-honoured procedure of searching for a new system. As I read through the 25cm high pile of
paper that resulted from a (very selective) RFP, I got more and more depressed. It wasn't just that the new
offerings were expensive to purchase, the real killer was in the running costs. The support charges were a steep
increase, but added to that was a huge 500% increase in telecommunication costs every year, stretching
relentlessly into the future. That pretty Windows interface was going to be hideously expensive to implement.

Now, along with every other red-blooded kiwi, I resent sacrificing a huge chunk of my hard-won budget to a
telco. What to do? 1 January 2000 was approaching much too fast for comfort.

Rachel

Katipo had worked with the Library for many years, training staff and supporting the network. We watched the
RFP process with interest - read it and thought "you could do this using the Internet, and that would at least take
care of the telco costs". We'd already done an OPAC for Wellington City Library and lots of database work, and
somewhat naively thought "How hard can it be?"

So we got together a plan: as long as the library could tell us how a library worked, we could do the rest. As it
turned out, figuring out how a library worked really was the hard bit.

Our mission was to have a new system running in the library for the first day of business in 2000. The requirements: at the branches it should issue and return at least as fast as the incumbent system; it should run on the library's existing equipment, a motley collection of 486 and Pentium hardware; and it must still be nice and easy for the public and librarians to use.

With the lofty goals in place it was time for the hard grind. It would have two interfaces: a web browser for most of the system, but a simple, fast telnet interface for issues and returns. We quickly chose the systems we would use, free and fast all the way: Linux server OS, MySQL database, Perl programming for the web and telnet interfaces. Hmm, all we were missing was a report writer, but one would turn up.

What followed was 16 weeks of incredibly close work with the Library. Drop-dead dates came and went - amazingly without us dropping dead.

Rosalie

The new system started to take shape, and it looked good. When you change to a new system it has to look good - people are more sympathetic to something that looks nice, and there are all those members of the public to win over.

What colour should the screens be? And what shall we call it? Hard experience teaches you never to let a
committee near decisions like these. But we were wedded to democratic processes, and we finally found
solutions everyone could live with. The colours were a compromise, and the name chosen was Koha. Why
Koha? Because it's free. Because it's our gift.

Rachel

We recommended to the Library that they release the software as "Open Source". That means that anyone else
can download the software and run it up on their machine without paying either us or Horowhenua Library
Trust a cent.

I can see the colour draining from your faces from here. Really a free system? But why?

Rosalie

Not just for the glory (though this is a world first)!

Katipo convinced us this was a good idea for several reasons. One was to give the Library some surety. Katipo
is a small business, and it's important that the Library be future-proofed against anything happening to that
company. Another was because neither the Library nor Katipo saw themselves as in the business of marketing
and supporting a new library system. With an open-source system, that's not a problem. Word of mouth markets
it, and the users support each other. The more libraries and programmers using and working on a system, the
better it becomes.

Rachel

We tested on the librarians who were fantastic, coming in over the holidays to throw us the curve balls. We had
Christmas day off, and then started installing in the library system. The last days on the old system were
between Christmas and New Year. On the third of January 2000 Koha met the public, for real, for the first time.

By lunch time we still hadn't switched back to the old system. The OPACs were in use, and issues and returns, while a bit flakey, were happening. Neither the librarians nor the patrons had stormed out in protest. We'd done it!

Since then the pace has eased a little, and we've fine-tuned and even rewritten some of the code that went by as a blur before Christmas. We won two awards for Koha in September 2000 - the 3M Award for Innovation in Libraries 2000 & the ANZ Interactive Award, Community/Not for Profit category 2000.

Stage two was the online features for members, and some sexier bits to make the Library staff's lives easier.

Both of us

We'd like to see other public libraries take it on. If you're a small to medium-sized library, with or without satellite sites, or a mobile library (yes, you could run this from a laptop and cell phone!), Koha could be just what you're looking for. But don't just take our word for it. Have a look: www.koha.org
Or email either of us: rosalie@library.org.nz and rachel@katipo.co.nz

If you're interested in the project, please join our mailing list.

http://www.koha.org/about-koha/case-studies/nelsonville.html




NPL : Nelsonville Public Library


The Nelsonville Public Library in Athens County, Ohio has been a large contributor to the Koha project. The library has 7 branches, including a bookmobile, and holds 250,000 items with approximately 650,000 issues per year (based on 2003 figures).

NPL and Koha

The library has a section of their website dedicated to information about the project -
http://www.athenscounty.lib.oh.us/koha.html.

Questions and Answers about Koha

If you would like to find out more about why Nelsonville decided to go with Koha, read the transcript of a discussion between Stephen Hedges (NPL Library Director) and Pat Eyler (then Kaitiaki of the Koha Project) - http://www.athenscounty.lib.oh.us/koha_questions.html


The Koha Project


In early 2002 the Nelsonville Public Library undertook a new project. The goal was to implement a complete
library automation system using only open source software. The library focused its attention on the most mature
and fully-featured system available at the time, Koha. At the time, Koha was still in version 1, and used in only
a few libraries around the world. No public library in the United States used Koha or any open-source ILS.
After extensive testing and development, Koha went into use at The Nelsonville Library in the fall of 2003. The
library continues to use it today.

Read A Koha Diary: Implementing Koha at the Nelsonville Public Library

The Nelsonville Public Library has been closely involved with the development of Koha from the very
beginning of its investigations of the open-source ILS. The library has funded development, contributed code,
hosted Koha informational meetings, and promoted Koha at regional library conferences. One of our staff has
since left to start his own Koha support company, Liblime. The library now contracts with Liblime for Koha
support, and our staff works closely with Liblime to further Koha's development.

Open Source in the Library

Koha isn't the only open source in the library!

See the list >>

• From 2003: Questions and Answers about Koha with Stephen Hedges, Nelsonville Public Library
Director

Press coverage of Nelsonville's switch to Koha:

• Koha wins Free Software award (5/28/03)


• Biblio Tech Review: Nelsonville Public Library Chooses Open Source (9/19/02)
• LinuxPlanet: Koha: A Library Checks Out Open Source (8/30/02)
• Linux PR news story (8/26/02)
• Information Today: An Update on Open Source ILS (10/8/02)

Koha Resources:

• Koha Project homepage, and FAQ


• NPL's original Request For Proposal -- MARC 21 record support for Koha

Open Source Links

• Oss4lib -- Open Source Software for Libraries : encouraging open source software engineering to build
better and free systems for use in libraries
• GNU Project and Free Software Foundation : One of the founding organizations of free and Open Source software. Home of the GNU General Public License and many open source projects.
• Savannah, like Sourceforge, provides a home for open source projects, including Koha. Savannah "is a
central point for development, distribution and maintenance of Free Software that runs on free operating
systems."


Copyright © 1999-2008 Nelsonville Public Library.

http://www.kohadocs.org/koha_diary.html

Implementing Koha at the Nelsonville Public Library

Stephen Hedges
<shedges AT skemotah.com>

Copyright © 2005 Stephen Hedges

This document is related to Koha and is licensed to you under the GNU General Public License version 2 or
later (http://www.gnu.org/licenses/gpl.html).

Koha-related documents may be reproduced and distributed in whole or in part, in any medium physical or
electronic, as long as this copyright notice is retained on all copies.

You may create a derivative work and distribute it provided that you:

1. License the derivative work with this same license, or the Linux Documentation Project License
(http://www.tldp.org/COPYRIGHT.html). Include a copyright notice and at least a pointer to the license
used.
2. Give due credit to previous authors and major contributors.

Commercial redistribution is allowed and encouraged; however, the author would like to be notified of any such
distributions.

No liability for the contents of this document can be accepted. Use the concepts, examples and information at your own risk. There may be errors and inaccuracies that could be damaging to your system. Proceed with caution; although problems are highly unlikely, the author(s) do not take any responsibility.

All copyrights are held by their respective owners, unless specifically noted otherwise. Use of a term in
this document should not be regarded as affecting the validity of any trademark or service mark. Naming of
particular products or brands should not be seen as endorsements.

2005-05-17

Revision History
Revision 2.2.2 2005-05-04 sh
Initial XML markup.
1. Foreword
2. Early Testing
3. Early Problems and Involvement
4. Spreading the Word
5. Koha Business
6. Making Koha Work
7. Breaking Away
8. The Final Steps
9. Conclusion

Abstract

This document is a collection of e-mail messages describing the experiences of the Nelsonville Public Library
as it went through the process of implementing the Koha integrated library system.

1. Foreword
Late in 2001, the Nelsonville Public Library became interested in exploring the possibility of doing away with
our commercial integrated library system (ILS) and replacing it with open source software. While I am sure cost
(or rather, the lack of cost) was part of the allure of open source, if I recall correctly Joshua F., the library's
systems administrator, was aware of open source software, was new to libraries, felt that librarians should be
able to easily sympathize with the philosophy of open source, and was shocked at how much libraries were
willing to pay for their commercial -- and very specialized -- software.

At some point Joshua convinced me to look into Koha, an open source ILS, and I eventually convened a
committee of library staff to investigate. This "Koha Team" included me (Library Director), the Assistant
Director Lauren M., Joshua, Owen L. the library's webmaster, and Ken R., our principal liaison with the vendor
of our commercial ILS. This committee started meeting in early January 2002 and held occasional meetings
throughout the next couple of years. The bulk of our "meetings," however, were conducted by e-mail.

I began saving selected e-mail messages in February 2002, thinking that they would someday provide a good
"history" of the Koha implementation process. Now, as the library prepares to move to Koha version 2.2, it
seems that the time has come to close off this history and share it with anyone who might be interested.

These messages are selected messages, so transitions between messages will not always be smooth. And since I
collected them, they do not include messages which I did not receive. Still, I think they pretty accurately convey
the problems, the solutions, and the general mood of the Koha Team as we worked through this process.

In the interest of protecting people's privacy, I have edited all messages to take out e-mail addresses and have
not used anyone's surname (other than my own). I have also removed the > sign typically used to denote text
copied from an earlier message, and instead use italics, so the text can flow without concerns about line breaks.

sh

2. Early Testing
Library Data Design/Library Cards
Date: Friday Feb 8, 2002, 3:38 PM
From: kenr
To: shedges
Cc: joshuaf, owenl, laurenm

Okay, I'll bite...what other means of identification could we use besides a library card #? I could see a change in
card format (perhaps to a magnetic stripe and/or smart card, which would allow us to store data on the card
itself), but not in the need to have a membership number.

I don't see us ever using a person's Social Security Number, although many institutions already do so. Perhaps logins and PINs are an option, but that presupposes self-checkout and/or some form of patron interaction with the checkout process.

Ken

Congratulations to Joshua and Owen on the quick progress in setting up Koha, and also for getting Charles V
involved in the data transfer. Now we need to meet before you two meet with Charles on the 19th to discuss
some broader issues that may affect our data structure. For example: will we use library cards in the future? If
not, what will we use as an ID number? How do we need to structure that data column?

We also need to think about what Spydus data, particularly patron data, we want to keep in the transition. In
other words, we need to give some serious thought to our data design.

I'd like to meet next Friday, February 15, at 2:00 p.m. at the Athens Library. Please let me know if you cannot
make that meeting time and place. Thanks!

Stephen

***

RE: Loading sample data
Date: Tuesday Mar 12, 2002, 5:20 PM
From: joshuaf
To: laurenm, owenl, shedges

Hi all,

The sample data is loaded! Thanks Owen for your SQL expertise. I need to bring the circ interface up and we
will be ready for testing, etc.

To test the search features with the sample data go to: http://13x.18x.10x.25x

Let me know how it works.

Joshua

***

Re: Koha
Date: Friday Mar 15, 2002, 12:27 PM
From: shedges
To: owenl, joshuaf, laurenm, kenr

OK, let's meet on the 29th. If it's going to be at The Plains, then we'll need a computer to get to our koha page.
What time? (I have a meeting in Athens at 2:00.)

I've been working on installing koha on my laptop, usually a couple of steps behind Joshua, and Joshua and I
have both had problems installing the cdk libraries and perl module that koha requires for telneting to the
circulation interface. (Our problems have been different, but I think that's because we downloaded the cdk
libraries from two different sites.) Joshua was not happy with the web interface, I don't remember why. Chris
C's reply seems to indicate that the web interface is still the better choice, however.

Stephen

***

Koha notes
Date: Saturday Mar 30, 2002, 12:56 PM
From: owenl
To: "Stephen H", laurenm
Cc: joshuaf, kenr

I went over it pretty fast, so here it is again:

http://13x.18x.10x.25x/secret/tech/dev/notes.cfm

There you'll find the Koha notes we have so far, as compiled by Stephen. I made sure there was a way to add a
new note, so if anyone has any documentation they'd like to contribute, please do. This is pretty quick-n-dirty,
so if anyone has any problems or ideas about a better way to organize things, please let me know!

-- Owen

***

[Koha] A basic question


Date: Monday Apr 15, 2002, 11:10 AM
From: shedges
To: koha@lists.katipo.co.nz

If you're a little tired of looking for errors in scripts, how about a philosophical question -- how many different
kohas should there be?

I've noticed that there are some significant differences in what libraries from different areas of the world expect
their software to do. For instance, while ten-digit barcodes seem to work well in the South Pacific, in the US
(and Canada?) the fourteen-digit barcode for both book ID and 'member' ID seems to be becoming a de facto
standard. And then there's the whole MARC record discussion -- just whose MARC format are we discussing?
There's Paul's MARC standard in France, there's USMARC, and I get the feeling MARC may not really be all
that important Down Under. And while tracking and storing information about ethnicity is necessary in New
Zealand, most libraries in the States would find that idea disturbing.

It almost seems as if there are unwritten standards controlling the basic features of library automation systems
in different countries. If you think about it, this makes sense. Libraries act like government entities (indeed, they
often are), so it's not surprising that they try to do things the same way whenever possible, to allow them to
interact more easily. But they seldom have any need to interact with libraries outside their own country or area
of the world, so regional differences have developed.

Let's use our library (Nelsonville Public Library, Nelsonville, Ohio, USA) to illustrate my point. We have been
investigating the possibility of using koha to replace our current Sanderson system. In listing the things we
would need to change to make koha work for us, we so far have identified:

• lengthen the barcode fields
• accommodate batch imports and exports of bibliographic records in USMARC format
• do away with the ethnicity fields
• add support for the Z39.50 protocol to allow us to share catalog records with other libraries in the state
of Ohio
• and add support for the NISO Circulation Interchange Protocol (NCIP) to allow us to share user records
with other libraries (and thus participate in the statewide resource sharing system).
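[Editorial note: the fourteen-digit barcodes mentioned above normally end in a check digit that validates the first thirteen digits. The sketch below shows one common mod-10, Luhn-style scheme; the exact weighting varies by barcode vendor, so treat this as an assumption to verify against the supplier's documentation, not as anything Koha itself did.]

```python
def check_digit(first13):
    """Mod-10 (Luhn-style) check digit over the first 13 barcode digits.

    Weights alternate 2, 1, 2, 1, ... starting with the first digit;
    two-digit products are reduced by summing their digits.
    This weighting is an assumption -- vendors differ, so verify it.
    """
    total = 0
    for i, ch in enumerate(first13):
        d = int(ch)
        if i % 2 == 0:       # doubled positions: 1st, 3rd, 5th, ...
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits of the product
        total += d
    return str((10 - total % 10) % 10)

# A 14-digit barcode is then the 13-digit body plus its check digit:
body = "2000400123456"
barcode = body + check_digit(body)
```

A lengthened barcode field could use a routine like this to reject mistyped barcodes at the circulation desk before any database lookup.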

While these modifications are necessary to make koha a viable alternative for our library, they would be useless
for Paul P's library (or Roger B's library, or Steve T's library, etc., etc.) So let's put my original question a little
differently -- is one version of koha enough? Maybe there should be groups in different areas of the world
working to develop regional versions of koha. Or has that happened already and we just aren't hearing anything
about it?

Stephen Hedges

Director, Nelsonville Public Library

***

Re: [Koha] A basic question


Date: Monday Apr 15, 2002, 2:11 PM
From: jb
To: shedges

Dear Stephen,

I run both a medium-size library and a library system of nine rural libraries in northwest PA. We are located
in Crawford County PA which borders the Ohio State line. For the past two and half years we have been
incorporating more and more of our information systems into open source platforms. For example we use open
source for e-mail servers, web hosting, firewalls, proxy servers, and filtering.

We have been investigating using some sort of Open Source software to migrate from our current circ. system.
We have investigated two programs, Koha and OpenBook. Unfortunately it looks like OpenBook is no more,
so that leaves Koha.

We have looked at the Koha Program and believe that the changes in the program you outlined in your e-mail
are similar to the changes we also wish to make. I also believe that your idea of creating different regional
versions of Koha makes a lot of sense.

At this time I wish to state that the *** Public Library would be willing to help out in any way we can in
developing a midwest version of Koha.

We look forward to hearing from you.

Sincerely,

JJB

***

Re: Koha, too?


Date: Thursday Jun 27, 2002, 2:32 PM
From: cjl
To: shedges

Hello again,

How goes the Koha project? I'm sure you've noticed Koha 1.2.0 was released a little over a week ago. Although
there are a few bugs to work out, the new system (if you haven't tried it) seems to be a little bit smoother than its
1.0 counterpart. I've been able to get into contact with a few of the programmers working on the Koha source,
and they seem to be very open to any new ideas that people come up with. I've personally been working on
Koha for a small organization in Chicago that wishes to keep track of their small (very small) book collection,
and they've been very helpful when it comes to figuring out problems. I'm sure they would be more than willing
to help bring Koha to this area.

Was there any aspect of your project that I would be able to help you out with? There are parts of the interface
I'm working on that I would be more than willing to contribute to your project (a lighter GUI designed for
smaller systems such as handhelds and Pocket PCs, as well as some barcode check-digit issues).

I look forward to hearing from you about your progress.

--CJ

***

Koha is cool!
Date: Tuesday Jul 9, 2002, 11:02 AM
From: shedges
To: joshuaf, owenl, laurenm, kenr

Hey, I've just done a MARC import to our Koha machine, and it's slick! Ken and Owen, you can get to Koha by
going to http://6x.21x.7x.7x:8080, use the same login as for the staff webpage. Under acquisitions, upload
MARC records, you'll see a list called "marclist" that I downloaded directly from one of my iPage orders. I
went ahead and added one item, "Black River" by G.M. Ford. Try it!

This could make cataloging very easy. I could download MARC lists as I place orders, and the catalogers could
pull up the items from those lists as they arrive, add barcodes, and load them into the system. Very nice!! I think
I like it better than the bulk MARC import that Paul P is working on.

Stephen

***

3. Early Problems and Involvement


Koha, The Next Step (draft)
Date: Wednesday Jul 10, 2002, 8:33 PM
From: joshuaf
To: shedges, owenl, laurenm, kenr

Koha, The Next Step (draft)

Well, Koha has been up and running for a few days now. During that time I have given thought to what our next
step should be. I think there are two main areas that we need to pursue, Database Concerns, and Module
Concerns. I will define them as follows (please add to this or redefine as you see fit):

Database Concerns

Ideally the database schema will contain all the data that the library system uses. No doubt as we begin
looking at the specific database concerns we will encounter areas where we can improve our use of the
database schema. Our primary concern now is to have an open source system running with local data. To
accomplish that we need to understand what information is stored in our current database, including
patron records, catalog records including holdings, and usage information (add to this list).

Then we need to see how the database information fits into the framework of Koha (which parts are
covered, and which need to be added), and what to do when the information is in a different format.

Module Concerns

The modules are the means of manipulating the data in the database to create tools for patron and staff
use. We need to compile a list of all the modules that we currently use (or want) and compare this list to
Koha's modules. We also need to compare the functionality of the Koha modules and decide if they are
adequate as replacements for our current solution.

When I think of our implementation of Koha, I am not merely thinking of it as a replacement for Spydus. It has
so much more potential than that.

Any ideas on when our next meeting will be?

***

Dumping patron records


Date: Wednesday Jul 24, 2002, 10:29 AM
From: shedges
To: joshuaf, owenl, laurenm, kenr

Owen's traffic on the koha list sent me back to look at our export options on Spydus. We can export MARC
records, of course, and probably use the bulkmarcimport.pl to load them without much problem. (Unless we
want to wait until there's full MARC support in the koha database.) But for patron records, the only option I see
is a listing of all patrons. We could, of course, capture the listing in a file as it scrolls by on the terminal, and
maybe even load it into Excel or Access for downloading to koha. Just an option to pursue if we can't come up
with anything better. (Owen and Lauren, didn't Charles say he would move the Unidata data to Access, then to
MySQL? Or is my memory faulty?)
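[Editorial note: the capture-and-massage idea above can be scripted. This is only a sketch: the fixed-width column positions and field names below are invented for illustration (the real Spydus printout would need to be measured), and the output is CSV ready for loading into Excel or Access.]

```python
import csv
import io

# Hypothetical fixed-width layout for a captured patron listing.
# The (name, start, end) column positions are assumptions, not Spydus's
# real format -- measure the actual printout before using anything like this.
FIELDS = [("name", 0, 30), ("barcode", 30, 44), ("phone", 44, 54)]

def listing_to_csv(listing_text):
    """Turn a captured terminal listing into CSV for Excel/Access."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow([name for name, _, _ in FIELDS])  # header row
    for line in listing_text.splitlines():
        if not line.strip():
            continue  # skip blank separator lines in the capture
        writer.writerow([line[start:end].strip() for _, start, end in FIELDS])
    return out.getvalue()
```

The CSV could then be cleaned up by hand in the spreadsheet before the final load into MySQL.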

Stephen

***

Re: Dumping patron records
Date: Wednesday Jul 24, 2002, 10:51 AM
From: owenl
To: shedges, joshuaf
Cc: laurenm, kenr

We can export MARC records, of course, and probably use the bulkmarcimport.pl to load them without much
problem.

But that just exports the MARC records, right? No item records, no circ records, etc?

(Owen and Lauren, didn't Charles say he would move the Unidata data to Access, then to MySQL? Or is my
memory faulty?)

I remember Charles praised Excel (maybe Access too?) for its data-importing capabilities, so I think you may be
right. Trying to picture the process in my head, I'm thinking that's the thing to do, because it would be an easy
way to correct problems with the data before we put it into MySQL. For instance, it drives me crazy that half
the patrons in the database have a phone number like this: 6145924272. We could do a search and replace in
Excel/Access to correct formatting issues like that before putting the data into MySQL. It could be done in
either place, but Microsoft's interface is friendlier.
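[Editorial note: the same cleanup could be done with a small script before the MySQL load. A minimal sketch, assuming bare ten-digit strings like 6145924272 should become 614-592-4272 and anything else should be left untouched for manual review:]

```python
import re

def format_phone(raw):
    """Format a bare 10-digit US number like 6145924272 as 614-592-4272.

    Anything that doesn't reduce to exactly ten digits is returned
    unchanged so it can be reviewed by hand.
    """
    digits = re.sub(r"\D", "", raw)  # strip everything but digits
    if len(digits) != 10:
        return raw
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"
```

Running every phone column through a function like this gives one consistent format regardless of whether the cleanup happens in Excel, Access, or a script.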

-- Owen

***

Re: MARC format tables


Date: Tuesday Jul 30, 2002, 1:57 PM
From: paul.p
To: shedges

Stephen Hedges wrote:

Paul -

I've been looking at the "ToDoMARC" list on the SAAS website and wondering about "create parameter table
for different MARC formats and different languages." Do you need MySQL tables set up to hold the fields of the
different MARC formats? Do you still need someone to do that in English? There are three of us here at the
library who could probably put our heads together and do this.

Thanks a lot for your proposal.

I have volunteers for UNIMARC in French, and I already have USMARC in English; if you want to do another
parameters file, you're welcome. I've requested an ASCII file, because I plan to build a small script, run during
installation of Koha, that lets you choose the MARC format you want and downloads it.

If you prefer to populate MySQL directly, you can, but please send me ASCII exported files (CSV).
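[Editorial note: to illustrate the kind of ASCII parameters file being discussed, a short script could turn a CSV of MARC field definitions into SQL INSERT statements. The CSV columns and the table name marc_parameters below are hypothetical, invented for this sketch, not Koha's actual schema.]

```python
import csv
import io

# Hypothetical CSV layout: tag, subfield, label, repeatable (0/1).
SAMPLE = """\
245,a,Title Statement,0
650,a,Subject Added Entry,1
"""

def csv_to_inserts(csv_text, table="marc_parameters"):
    """Generate SQL INSERT statements from a MARC parameters CSV."""
    stmts = []
    for tag, sub, label, rep in csv.reader(io.StringIO(csv_text)):
        label = label.replace("'", "''")  # escape single quotes for SQL
        stmts.append(
            f"INSERT INTO {table} (tag, subfield, label, repeatable) "
            f"VALUES ('{tag}', '{sub}', '{label}', {int(rep)});"
        )
    return stmts
```

Keeping the master copy as CSV, as Paul asks, means the same file can feed an installer script or be loaded into MySQL directly.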

thanks

-- Paul

***

Note

Throughout the months of July and August, the Koha team came to the conclusion that the best way to get the
MARC features we felt we needed in Koha would be to hire a developer to make the modifications and
contribute them back to the Koha code. With the approval of the library board, we began working on an RFP for
this Koha development, and involved some key people from the Koha community in this process.

Re: RFPs, etc.


Date: Friday Aug 23, 2002, 10:48 AM
From: pate
To: shedges

On Fri, 23 Aug 2002, Stephen Hedges wrote:

Thanks for your help, Pat. We'll start working on the RFP's, and we'll try to do it without too much input from
you (since you might want to put in a bid yourself).

Sounds good. I'm really only interested in the z39.50 and NCIP/SIP2 pieces, so if you'd like advice/help on the
MARC RFP, I'd be happy to help there (while avoiding any conflict of interest). When you're ready to send out
the RFPs, please send them to kohabiz@. I'll probably put them online in an archive as well so that people not
yet on the mailing list can get to them through a permalink.

As for using the Nelsonville Library in Koha promotion, that's fine with us.

Thank you. I'll start up on an article right away then. I'll send along a second email with some questions. Please
feel free to send it along to other staff who might be interested; the more input the better.

In fact, we are going to be giving a presentation on open source in general (and probably Koha specifically) at
this fall's Ohio Library Council Annual Conference in Columbus. We don't believe in being shy librarians! And
of course, our presentation handout will list sources of information about open source for libraries, including
the Koha list archives.

That's great! Let me know if there's anything that I can do to help. When is the conference? It would be nice to
be able to coordinate some kind of Koha activity for it -- I don't know that we could get anyone there, but you
never know.

Pat

***

Athens County Library Koha questions
Date: Friday Aug 23, 2002, 11:29 AM
From: pate
To: shedges

Stephen, thanks again for taking some time to help with Koha. I think that having some bigger libraries here in
the States using Koha is a big step for us and will really help push Koha to the next level.

As I'd explained, I'm planning on putting together some materials to help market Koha more effectively and
wanted to get a little more information from you first. If you and/or your staff could help by answering these
questions, I'd very much appreciate it.

1. When did you first consider using Koha?
2. What was the initial draw of Koha?
3. What drawbacks did you see with Koha?
4. You've made a decision to contribute to the development of Koha. When HLT first commissioned Koha,
they decided to make it Open Source hoping that other libraries would step in and support development
of new features as well. What are your thoughts on this model?
5. What kind of external support are you using, or do you foresee using as you convert onto Koha?
6. What are the major milestones in the conversion process?
7. When will your patrons be able to see Koha in action?
8. What do you think you're going to see in terms of an ROI on your investment in Koha development?

If you have any other thoughts, insights, concerns, or questions I'd be happy to hear them as well.

Thanks,

pate

***

Re: Athens County Library Koha questions


Date: Friday Aug 23, 2002, 2:36 PM
From: shedges
To: pate

Pat, you ask some very good questions! To answer one question from a different e-mail, the OLC Annual
Conference is October 23-25, and our session is the last one on Friday afternoon. :-( We'll see who's truly
interested in Open Source!

1) When did you first consider using Koha?

We started lurking on the Koha list about a year ago, though now I don't remember how Koha first came to our
attention. In January of this year, I'd seen enough to pull together a "tech team" consisting of me, the assistant
director, our webmaster, our systems administrator, and our circulation software supervisor. We decided to
embark on an experiment: Could we switch all of our automation software, from operating systems to webtools,
to Open Source by the end of this year? We decided that there was no "success" or "failure" involved, just a
simple research question with a "yes" or "no" answer. No matter the outcome, we would have gathered a lot of
information, and we made plans from the very beginning to share this information with the library community.

2) What was the initial draw of Koha?

Freedom. We want to use the Internet to offer some cutting edge information services to our library patrons, but
we realized that this would require us to have control of our automation and database software. We needed the
freedom to change things, to change the code if necessary, because the types of things we want to do are not
going to appear in commercial library software for years. (Commercial library software vendors are more
interested in the bottom line than the cutting edge.)

3) What drawbacks did you see with Koha?

The same drawbacks that plague many of the smaller Open Source projects, I suspect. First is the irregular pace
of development. Since Koha code development is a "spare time" activity for most of the programmers, the pace
of Koha's growth is hard to predict. We found ourselves waiting for crucial modules to be developed, sometimes
wondering if they ever would be developed, often suspecting that the answer to our original research question
would be "no" - we couldn't switch to a completely Open Source system.

The second is the problem of "splintering." If we took the developing Koha code and finished it to fit our needs,
hardcoding the features we needed and leaving out things we didn't need, we would have in effect created a new
version of Koha that had departed from the main development tree. That would mean that we could not take
advantage of future upgrades to the main tree.

On the other hand, if we mustered our patience and waited, would Koha ever have enough of the features we
needed to be a viable automation system for our library? This is a significant problem: How do the
programmers know what the majority of libraries will need, and how do libraries know if Koha will eventually
have all the elements that are considered crucial?

4) You've made a decision to contribute to the development of Koha. When HLT first commissioned Koha, they
decided to make it Open Source hoping that other libraries would step in and support development of new
features as well. What are your thoughts on this model?

We've come to realize that this may be the only viable model, and perhaps the answer to the question I just
posed. It's important that libraries do not look on Open Source as free software that they just download, press a
button, and all their problems will magically be solved. Open Source requires just as much commitment as
commercial software - you still only get what you pay for.

Libraries should approach Open Source with the notion that they will commit a lot of staff time to
understanding the code and how the software does its job. They should also be ready to commit financial
resources as well, just as if they were still paying annual license fees for commercial software. The difference is,
with commercial software a big portion of the license fee goes to research and development over which the
library has no control, while with Open Source that same money can fund the development of the software
modules that the library really wants.

HLT has paid their money and received a product that works fine for their needs. Now other libraries can pay a
little more money and receive enhancements to that product that will make it fit their needs - without having to
pay for the development of an entire software package. HLT got what it wanted, we plan to get what we want,
other libraries can pay and get what they want. The libraries are paying a one-time expense that's probably less
than their annual license fees, so they win. The programmers get a very clear idea of what libraries want (money
talks!), so they win.

It's a great model for success!

5) What kind of external support are you using, or do you foresee using as you convert onto Koha?

Very little.

We originally thought we would use a database consultant to assist in moving our data from our current system
to Koha. (This is a task that's typically handled by the "new" software vendor when libraries switch from one
commercial product to another.) We've recently realized, however, that the mechanics of the data transfer are not
as difficult as identifying which data we want to keep and which is superfluous. For that reason, we'll probably
do the data transfer ourselves.

Otherwise, the tech team is working to learn the tools we will need to maintain the software: perl, linux,
mySQL, php, etc. We regard this learning process as part of our investment in Koha. And remember, our
original motivation in looking at Koha was the ability to offer some cutting edge services, so we'll need to have
a thorough understanding of the software for more than just maintenance purposes.

6) What are the major milestones in the conversion process?

A big one was the release of Koha 1.2.0 in July. For the first time, we saw a product that looked like a viable
alternative to our current software. In late August, we held a day-long meeting of the tech team to review
everything that we had seen and learned so far in our research project, and we decided that Koha lacked only
three absolutely critical things for our needs: full MARC support, a Z39.50 server module, and a SIP2 or NCIP
module. To fill those needs, we decided to commit some financial resources to Koha development.

Our future plan is to move some of our data into the current version of Koha to see what changes we will need
to make in the Koha tables. (And we would then suggest to the Koha developers that these types of changes
should be built into some sort of configuration utility or template, rather than being hardcoded.) Once we have a
little of our data transferred and working, we will transfer the rest and plan to run Koha in tandem with our old
circulation system. This will not only allow staff to get familiar with the new system (Koha), it will also give
them an opportunity to request changes to screens, customized functions, etc. Once we find that Koha has
become the system of choice for the staff, we will shut down the old automation system.

7) When will your patrons be able to see Koha in action?

We expect that Koha will take over from the old system sometime during the summer of 2003.

8) What do you think you're going to see in terms of an ROI on your investment in Koha development?

I expect that an initial investment of about $10,000 will save us about $10,000 every year, beginning in 2003.
We might also seek some technology grant money to extend our initial investment.

I think our real ROI will not be financial, however. Within the next few years, we fully expect that our website
will offer some of the best online library services available anywhere in the world. That's the value we expect to
get from investing in Koha.

Stephen

***

Re: RFPs, etc.


Date: Monday Aug 26, 2002, 2:36 PM
From: shedges
To: pate

Well, Pat, you said:

if you'd like advice/help on the MARC RFP, I'd be happy to help there

OK, here's my first draft. Since I've never written an RFP for "custom" software before, this may be really out-
to-lunch. Please let me know what you think, and I'll run it by some folks here, and _maybe_ we'll have
something that's ready to post in a few days.

Stephen

REQUEST FOR PROPOSAL -- MARC 21 record support for Koha application (koha.org)

The Nelsonville Public Library invites all interested parties to submit proposals in response to the following
request. Proposals may be submitted in any format, but should carefully answer all questions in the request.
Proposals should be sent to Stephen Hedges, Director, Nelsonville Public Library, e-mail
nelpl@athenscounty.lib.oh.us, fax 740-753-3543, mail 95 W. Washington Street, Nelsonville, Ohio 45764.
Proposals are due no later than 8:00 A.M. Eastern Daylight Time, September 30, 2002. Any responses made in
proposals from interested parties may be incorporated as part of a final agreement.

BACKGROUND

The Nelsonville Public Library is a public library system consisting of seven libraries serving the residents of
Athens County, Ohio, USA, with 36,000 active borrowers and over 250,000 items in the collections. The library
has made plans to switch from its current library automation system to Koha, but only if Koha has certain
required capabilities. Among these is the ability to store and retrieve item records in MARC 21 format at the
(Full) National Record Level.

Accordingly, the Library is seeking proposals from parties who are capable of modifying the current Koha code
to provide this capability. Proposals will be evaluated by a committee of five Library staff members, and a
contract will be negotiated with the submitter of the successful proposal.

SELECTION CRITERIA

Proposals will be evaluated on the basis of cost, qualifications of the programmer(s), time to delivery, ease of
integration of the proposed code into the current Koha software, and ease of upgrading the delivered software to
incorporate future changes to the MARC formats.

REQUIRED INFORMATION

• How will you modify current Koha tables and/or scripts to accommodate MARC 21 National (Full) Level
Record Requirements? (See http://lcweb.loc.gov/marc/bibliographic/nlr/). Please provide enough detail
so that the committee may judge the viability of your plan, but do not submit sample tables and/or
scripts.
• Who will undertake this work?
• What are the qualifications of the person(s) doing this work?
• How have you been involved in previous Koha development?
• How long will it take to complete this work?
• How much will you charge for this work? (in US Dollars, please)
• How do you plan to incorporate this new code into the most current version of Koha?

***

4. Spreading the Word


Request for comments: Story about Nelsonville Public Library
Date: Monday Aug 26, 2002, 3:18 PM
From: pate
To: rachel, koha-manage@lists.sourceforge.net, brenda, shedges

Here's my first cut at a longer article about NPL and Koha. I'm planning on submitting it to a couple of news
outlets ... I'd happily take suggestions on possible targets as well as criticism about the writing or content.

thanks,

--pate

----------------------------------------------------------------

Library invests in Free Software.

Nelsonville Public Library (NPL) serves the residents of Athens County, Ohio. Like most libraries, they have to
weigh the money they spend on Information Technology very carefully, since every dollar spent on computers
and software is a dollar they cannot spend on books. It might seem odd, then, that they have decided to spend
money on free software.

They look at it as a wise investment though. "I expect that an initial investment of $10,000 will save us about
$10,000 every year, beginning in 2003," says Stephen Hedges. "I think our real Return on Investment (ROI)
will not be financial, however. Within the next few years, we fully expect that our website will offer some of the
best online library services available anywhere in the world. That's the value we expect to get from investing in
Koha."

NPL's plan is to initially copy some of their data onto a Koha system to use for testing. Then as the librarians
gradually become more accustomed to the new software they will move their live system to Koha as well. They
expect to complete their conversion in the summer of 2003.

What does their investment mean though, and how did they come to feel the way that they do? To answer these
questions, we'll need to look at some history.

Traditionally, libraries turn to big software vendors and proprietary software to run their libraries. In 1999, a
rural library system in New Zealand, the Horowhenua Library Trust (HLT), was at a crossroads. They needed to
upgrade their library software, but didn't want to be stuck with the high price tag that they knew would be a part
of the package. They made the bold decision to work with Katipo Communications of Wellington, New
Zealand. Katipo suggested that they develop a new system based on open standards (like using a web browser
for the client software), and Open Source software (like Linux, MySQL, and Perl).

Katipo recommended that the new application be Open Source as well. This accomplished three things: it
protected HLT against anything happening to Katipo, since the software would always be there and anyone
could work on it; it freed Katipo from becoming a software marketing company, allowing them to concentrate
on web development; and it allowed other libraries to work with the software, installing it for little or no cost
and extending it to fit their own needs. Fittingly, this new software would be called 'Koha', a Maori word
meaning 'gift'.

Koha was released in 2000, and was quickly picked up by several other libraries. Among the most active were
the libraries of the Coast Mountains School District in British Columbia, Canada. Steve Tonnesen, a Network
Technician working for the district, found Koha as he was searching for an inexpensive method for upgrading
the library software at one of the schools in his district.

He quickly moved past just installing the system and began writing new functionality into the software. He
added MARC import tools, a client for Z39.50, and multiple improvements to the system. Because the software
was licensed under the GNU General Public License (GPL), he was not only free to make these changes but
also to release them to the rest of the world, which he did.

Other libraries and developers have picked up Koha for their own use. As each one has done so, they've started
to make improvements. Almost all of these improvements have worked their way back into the main Koha
system. The initial gift that made Koha has been passed along and has grown with each new stop on its path.

Stephen Hedges says, "HLT has paid their money and received a product that works fine for their needs. Now
other libraries can pay a little more money and receive enhancements to that product that will make it fit their
needs - without having to pay for the development of an entire software package. HLT got what it wanted, we
plan to get what we want, other libraries can pay and get what they want. ... It's a great model for success!"

NPL started looking into Koha about a year ago, and although things sometimes looked a bit bumpy, they put
together a formal team to explore using Koha and other Open Source tools in January of this year. By August,
they had decided to migrate their library system to Koha.

Key to making this migration work were three specific modules. One was under active development, the other
two were not. NPL decided that they could put their money where their mouth was and help fund development
on these three modules. According to Hedges, this seemed natural. "Open Source requires just as much
commitment as commercial software. Libraries should be ready to commit financial resources. The difference
is, with commercial software a big portion of the money goes to research and development over which the
library has no control, while with Open Source that same money can fund the development of software modules
the library really wants."

This has some profound implications for the Koha project as well. While much of the work is being done by
developers on their free time, this funding will allow some of them to expand their work on Koha. As libraries
step forward to fund work, the work that they are most interested in seeing will be the work that gets the most
attention. This really is a case where the consumer can vote with their checkbook.

For more information about the Koha project, please see its website at <www.koha.org>, or send email to
<info@koha.org>.

***

Very exciting news


Date: Monday Aug 26, 2002, 7:54 PM
From: dony
To: shedges

I just read the announcement on oss4lib that Nelsonville will be converting to Koha. I think that's very exciting,
and I wanted to congratulate you on taking this direction. And, of course, I'm interested in seeing how Koha
might be able to work with MORE.

Wishing you success,

DY

***

Open Source Catalog System


Date: Tuesday Aug 27, 2002, 9:34 AM
From: brianp
To: shedges

Mr. Hedges:

My name is Brian P, and I manage and write for several Linux and Open Source Web sites. I read with interest a
recent release regarding your library's decision to move to the Koha library system, and I was wondering if you
or a member of your staff would care to be interviewed for an article about this subject.

As a former library clerk myself, I have more than a little curiosity as to how this decision came about and
how it will be implemented.

If you are indeed interested, please feel free to reply to this e-mail and we can set up a time for a phone
interview.

Thank you,

Brian P

***

Final RFP for MARC


Date: Wednesday Aug 28, 2002, 10:02 AM
From: shedges
To: pate

Pat,

The library board gave official approval to release the MARC RFP last night -- I've included the version they
approved at the end of this message, and would appreciate it if you would distribute it however you see fit. It is
rather formal and intimidating, but library boards (attorneys, educators, etc.) tend to think that way, especially
since they are entrusted with stewardship of the public's money. In reality, I see the project as being a much
looser process -- let's get it done any way that works. I'm happy to answer any questions I can, join in any
discussions where I'm wanted, etc. Once I have some idea of the type of money we're talking about, I'll
probably start soliciting other libraries to help, and apply for a matching grant from the Feds.

Thanks for all your help, Pat!

Stephen

***

Nelsonville commits to open source


Date: Saturday Aug 31, 2002, 11:05 AM
From: shedges
To: OPLINLIST

Nelsonville Public Library recently made the decision to become actively involved in the development of Koha,
an open source library automation system.

The news has been bouncing around in Linux e-zines and generating some interest in the open source
community, so some of you may be interested in more information, too. You can go to our homepage,
http://www.athenscounty.lib.oh.us, and follow the "Koha" link to learn more.

If you're not familiar with open source systems for libraries, please check out http://www.oss4lib.org.

Remember, NPL will also be doing a Friday afternoon session about our ongoing open source project at the
OLC Annual Conference.

Stephen Hedges

***

Re: Nice job!


Date: Saturday Aug 31, 2002, 11:45 AM
From: brianp
To: shedges

Good, I'm glad you liked it!

Please let me know when Koha is about finished--I'm also a private pilot, and the chance to fly out and do a live
story would be great.

Thanks again,

Brian

On Sat, 2002-08-31 at 13:52, Stephen Hedges wrote:

Brian, I just finished reading the Linux Planet article about Koha -- great work!

Stephen Hedges

***

Think-Linux show
Date: Tuesday Sep 3, 2002, 12:42 AM
From: pate
To: shedges
Cc: glenn

Stephen,

I've been talking to Glenn about a Linux Show in Toledo, Ohio on Oct 30-31. I think this might be a nice cross-over opportunity. If you guys would be interested in making your presentation about OSS in libraries to a second audience, Glenn can probably find a spot for you on their schedule. They'd also be willing to donate a
booth to Koha (or maybe OSS for Libraries) if we can scrape together a couple of bodies to staff the booth.

Glenn has offered free show passes to a presenter and a couple of booth staffers. I think they'd love to see
interested librarians at the show to see other Open Source solution providers. More information on the show can
be found at: http://lwn.net/Articles/8961/

Why don't we kick this around a bit and see what we can come up with.

-pate

***

Koha page update


Date: Tuesday Sep 3, 2002, 12:24 PM
From: owenl
To: shedges
Cc: joshuaf

I added stuff to our Koha page, following suggestions from both of you. Let me know if there are any changes
you'd like to suggest.

Also: You may have seen that someone beat me to the punch in volunteering for the HTML/CSS help, only to
be told that there was nothing to be done right now. Sounds like they're still working on a templating system,
and until then any HTML work would be redundant. Bummer.

-- Owen

***

Re: Think-Linux show


Date: Monday Sep 9, 2002, 4:47 PM
From: shedges
To: pate, chris

Pat and Chris -

The Ohio Library Council Annual Conference runs from October 23 through October 25. The slot for our
presentation on open source is on Friday at 2:00, the very last time slot. Just so you have no illusions, we are
expecting only a handful of people to show up for the presentation, since most folks will probably leave at
noon, having had their fill of conference. And there are no exhibitors at this conference (they do that separately
in May), just presentations. We'd love to have you on board for the presentation, Chris, and we'd also love to
have the chance to spend time meeting with you while we're all in Columbus, or spending time with you in
Nelsonville, whatever. I'm not sure what you'd like to accomplish on this trip, but we'll do whatever we can to
make that happen.

Now, for my purposes, the value of the Think-Linux show will be rubbing shoulders with a bunch of other open
source folks, so that's why I've been thinking in terms of helping to man a booth. However, I'd also be willing to
help with a presentation, if you folks want to do that. Just let me know.

Stephen

It certainly fits into the times ... the two shows are Oct 23-24 (right Stephen?) and Oct 30 and 31.

Shall the three of us put on our writing caps to come up with a presentation for Think-Linux?

-pate

On Mon, 9 Sep 2002, Chris wrote:

Hi Pat

I can get a return flight to Columbus, Ohio for the grand total of NZ$93 :) (taxes) -- the rest is covered with my miles

I'd leave NZ Oct 21, and arrive in Columbus the same day (crazy timezones). Then leave from Columbus on Nov 8 and lose a day or 2 on the way back.

Does this fit with the times for both events? I can perhaps shuffle the flights around a bit (United isn't exactly full at the moment).

If this sounds doable I'll look into accommodation and transport around Ohio, and then confirm the tickets.

Chris

***

Re: Think-Linux show


Date: Thursday Sep 12, 2002, 10:29 AM
From: chris
To: shedges
Cc: pate

Hi there,

I'm still considering whether I'll head over or not. The things I'd like to get out of the trip are to meet the guys from NPL and listen to some of the talks at the conference. If we could organise some demo CDs to hand out at the conference, that would be great.

I'm thinking that the Think-Linux show will just be a bonus-type thing. I don't think the audience there is going to be ideal. But as Stephen said, it's always great to rub shoulders with other open-source-minded people.

Now on top of these 2 things, we have a potential client in Libertyville, IL, which is not far at all from Chicago O'Hare airport, which I have to fly through. So I thought I may be able to pop in on them as well.

So 3 reasons for going, plus the general tourist reasons. Going to see the Blue Jackets play. The Browns are all sold out, so maybe going to see some college football .. I hear the Buckeyes are good? And isn't Ohio where the original SeaWorld is? Plus I want to perhaps take a look at the Rock and Roll Hall of Fame.

Reasons against: saving my miles for an occasion when we can get more of the Koha developers together. This would be a compelling reason ... if I wasn't so scared United was going to file Chapter 11 and pull all my miles anyway.

So now I'm leaning towards going again. So tonight I'll look into accommodation, and if it doesn't blow my budget, I think I'll confirm my tickets.

Sorry to be so wishy-washy; I have to decide one way or another pretty soon, as they won't hold my tickets for much longer.

Chris

***

[Koha] From the Kaitiaki


Date: Friday Sep 13, 2002, 1:41 PM
From: pate
To: koha@lists.katipo.co.nz

To the Koha community

Welcome,

Koha seems to be showing up in a lot more places these days. I've heard from people in the US, Ireland,
Germany, India, and Panama about their desire to start using it. We've recently shown up on the South African
Government's Open Source Software website. I've even seen a Koha installation in Japan. This is really cool!
What's even better is that we're poised to do even more.

If you've got a question, comment, or a success story, please feel free to drop me a line.

1.2

More work has gone into 1.2.3, and RC13 has just been released. We're seeing a lot of testing help from
the community, and this will be key to making 1.2.3 work extremely well for everybody.

1.4

We're getting closer to a 1.3.0 release. Paul P, the 1.4 release manager, expects it in the next 2 weeks.

What makes this 1.3/1.4 release so special? Well, on the surface, nothing seems to change... Koha should work exactly as in the 1.2 versions, but the underlying database API has been completely rewritten. Data is stored in the old, custom format and in MARC format too (MARC21 English by default, but other flavours of MARC will be supported as well).

The 1.3.0 release has to be heavily tested to ensure that everything works as it did under 1.2.X. The next
steps in the 1.3 series include MARC tools for librarians, MARC export and import, and many other
nice features.

REMEMBER: 1.3.0 IS alpha software. Use it only for TESTING PURPOSES. You've been warned!

Documentation

If you've converted from a proprietary ILS to Koha, please contact <nsr_koha>. He's working on a
migration guide to add to our existing manual.

Translations

Work is currently being done on some of our documentation translation tools. We're hoping to have tools
and documentation ready for translators soon.

Community

Philanthropy Australia is this week's newsmaker; see <http://www.linuxpr.com/releases/5107.html>.


We've also made it onto a few more radar screens: systems from Follet.com and Epixtech.com have both been spotted checking out Koha.

In breaking news, Steve T has worked a bit more Koha magic. This week he's released a demo CD for Koha. This CD will allow a user to run a sample Koha installation on any Win32 system that will boot from a CD.
What a great way to spread the word. An ISO image (suitable for burning CDs) is available at:
<http://sourceforge.net/project/showfiles.php?group_id=16466>.

Koha has been invited to participate in the Think-Linux show in Columbus, Ohio (in the US). We're not sure yet
whether this is logistically possible for us. If you're interested in making our appearance at this show a reality, or
in seeing Koha at a local conference, User Group meeting, or similar event, please let me know.

While the pace of Koha acceptance and development seem to be picking up, we're still a very friendly place for
newcomers. If you're new to Koha, please stop by the mailing list and introduce yourself.

happy hacking,

Pat E

***

Re: Think-Linux show
Date: Wednesday Sep 18, 2002, 1:01 PM
From: chris
To: pate
Cc: shedges, rachel

Hi Guys

Well, I've decided I will head over to Ohio to meet the NPL people and go to the 2 conferences/shows. I'm hoping to have a bit of a holiday as well, as I'm feeling the need :-) (I've managed an overseas trip each year since I've left uni; this year was looking to be my first without one).

I think I'll take Stephen's suggestion and stay at a Motel 6 or Red Roof Inn or the like. But I'm not too keen on hiring a car and driving on the wrong^H^H^H^H^H other side of the road. :-) So I'm hoping I can bride one of the NPL librarians to show me around Columbus a bit, with the promise to return the favour if they visit NZ :) And I think I'll make use of the public transport system for the rest of the time.

So I'll talk to Rach and Pat about getting some brochures to hand out about Koha. And we can perhaps burn some Koha CDs to hand out.

Then I'll work with Pate et al. (if they have time) to figure out what we want to do at the Think-Linux show.

Chris

***

Re: Think-Linux show - again


Date: Friday Sep 20, 2002, 9:06 AM
From: chris
To: shedges
Cc: pate, rachel

On Thu, Sep 19, 2002 at 03:27:45PM -0700, Stephen Hedges said:

You've got a deal, assuming you mean "bribe" ;-| We're all looking forward to meeting you, so I'm glad you've
decided to make the trip.

Excellent, and oops, yes, I meant bribe :)

I have some news to share about the RFP process. We still have not received any formal proposals, but I believe
we will get a "pre-proposal" from evolServ Technologies in Dayton, Ohio. I met with Don C, their pres., and Keith C, their account manager, yesterday afternoon. (Don seems to get around on the Internet under the name
"thelinuxguy.") To summarize, they seem very interested in the project, two of their developers have been
investigating the Koha RCs on Sourceforge, but they're not quite sure how much time it will take them to
decipher the mysteries of MARC. So their plan is to submit a pre-proposal, with the intention of getting feedback
from NPL and any of the current Koha team who want to chime in. I assured them that the process is flexible
enough to handle that kind of activity.

Cool. I'm fairly sure Paul (the Koha MARC guru) was going to respond to the RFP; perhaps Pat and I will chase him up. He may need some help writing the response, as English is not his first language. But he has gotten Koha to the point of lossless importing from MARC, and is currently working on export and display routines. So it would be great to involve him in some way.

They may be at the Think-Linux show, too. Don says he has known Glenn J since the '80s, but they haven't yet decided if they'll go.

Cool. It'd be good to meet them.

Oh, as an aside, I was most impressed with the COTA website, specifically the plan-an-itinerary facility. You type in your departure address, your destination, and the time, and they email you which buses to catch from where. Quite neat -- a system that many other cities could/should copy.

Chris

***

FW: RE: Koha project


Date: Friday Sep 20, 2002, 9:28 AM
From: nelpl
To: joshuaf, shedges, owenl, laurenm, kenr

Owen, here's another link for our Koha page.

S.H.

Thanks Stephen,

I am running a story on your implementation in this month's (September) Biblio Tech Review - out Friday morning. http://www.biblio-tech.com//btr11/S_PD.cfm

Peter E

***

Re: Think-Linux Conference Sign Up Form


Date: Sunday Sep 22, 2002, 10:28 AM
From: chris
To: pate
Cc: shedges, rachel

On Fri, 20 Sep 2002, Stephen Hedges wrote:

Pat and Chris, I'd suggest that Chris fill out the forms (using "Koha.org" as the company name, so we don't get
charged?), but we can use the contact information for Nelsonville Public Library for phone number, fax
number, cell phone, e-mail (or maybe Chris' would be better), contact name (Stephen Hedges), etc., since I
don't imagine the Think-Linux folks would want to call New Zealand if they have questions.

Right will do.

As for the presentation, may I suggest we do a general overview of where Koha is and what it does, then launch
into a discussion of the "Koha model" in which libraries pay for development of open source software. From the
questions I've been getting about our RFP, this seems to be a radical notion in open source land.

I'm thinking it might be cool to organise to have people in #koha chatting, and have that on the screen behind us during the presentation too. And yep, we can look at HLT funding the initial Koha, Philanthropy funding some changes, and now NPL doing the RFP approach.

It's strange that something that seems so logical is still seen as so radical. Mind you, good ideas are often that way.

I arrive in Columbus at 11pm on the 21st; I figure that gives me some time to coordinate with Stephen. So I'll work on a draft with Rachel and Rosalie, and then fire it through for you guys to peruse.

Chris

5. Koha Business
FW: proposal for MARC-koha RFP
Date: Wednesday Sep 25, 2002, 11:38 AM
From: nelpl
To: joshuaf, shedges, owenl, laurenm, kenr

We have our first RFP for Koha-MARC, and I've attached it to this note. I suddenly think this would make a
perfect LSTA mini-grant proposal, with NPL supplying $x,xxx and LSTA supplying the remaining $xx,xxx. Not
many grants would have the potential to benefit Ohio libraries like this one would! So, Lauren, would you
download the forms, and let's get started!

And no, I'm not yet assuming that this is the RFP we'll accept, but I am assuming that any others we get will be
in this same price range.

Stephen

Mr Hedges,

Please find attached my proposal for the integration of MARC21 management into Koha.

I'm Paul P, a French software developer, working on Koha since January/February. I'm the release manager of the 1.4 version (the MARC one), and one of the leaders of "the French community" growing around Koha. I'm working as a freelance developer, and coding Koha in my spare time. I intend to sell services here in France whenever the French version can be released (installation/teaching/hotline...).

If you have questions, you can mail me (if you want to phone, I'm OK, but my spoken english is not as good as
my written english)

However, I'm happy to announce that today the 1.3.0 version of Koha will be released. This version is the first step toward MARC management. It's an unstable version, only for testers, but it shows that the analysis we made (and that is enclosed in my attachment) was right: this version manages the MARC DB internally. A big job is still to be done to reach GUI MARC management and debug the work already done. If I'm the winner of this RFP, the money you fund will permit full-time development, so the software will be released sooner.

Please accept my respectful regards.

Paul

***

FW: Koha Proposal


Date: Monday Sep 30, 2002, 3:12 PM
From: nelpl
To: joshuaf, shedges, owenl, laurenm, kenr

Here's our second RFP, quite a bit different from the first. Please look it over, I'd like to discuss both when we
meet on Wednesday. Thanks!

Stephen

Dear Mr. Hedges,

Please find below our proposal in response to your RFP entitled "MARC 21 record support for Koha" to
provide certain enhancements to the Koha Library Management system. Our firm, PSL Tech, specializes in
information systems based on open source technology (http://www.psltech.com). We have been in business for
over seven years and have worked on many complex data management systems. While our direct experience to
date with Koha is limited, we have taken time over the last few weeks to review the code and database and familiarize ourselves with the MARC21 standard. Given the difficult nature of this data, we feel our knowledge of database design, Perl programming, and open source database systems could be a big asset to the project. We have a strong interest in Koha and are very interested in doing this work. Below we have set forth our
approach to meeting your requirements.

If you have any questions, please feel free to contact me.

Regards,

Stan G

***

Your proposal
Date: Wednesday Oct 2, 2002, 2:30 PM
From: shedges
To: paul.p

Paul, the "Committee" has met and considered your proposal. I should tell you that this group consists of
myself, Lauren M (our Assistant Director), Joshua F (Systems Administrator), Owen L (Webmaster), and Ken
R, who manages support of our current ILS.

We like your proposal very much and would like to support your work on Koha. The price of $1x,xxx is higher
than we are prepared to pay by ourselves, but we have a plan. We intend to apply for a federal grant (Library
Services and Technology Act funds) that would pay 75% of the cost, and we are fairly sure we can get this
grant.

However, using this grant would force us to delay any payments until after January 1, 2003. That means we
would actually be slowing down Koha development while we wait for the funds -- not a good situation.

Would you be willing to wait until January to receive payment? You would, of course, have to take other work
in the meantime, which would slow down Koha development, but it looks like it could still be done by
February. Or can we separate out a small part of the job, something that you _really_ want to work on right now, pay you for that, and then do the rest of the work later? Because this would reduce the amount of grant money
we would receive, we would like to keep any amount paid in 2002 as small as possible.

Finally, we would like to have your proposal (text) in a form that could be edited for use as an attachment to our
grant proposal. We would like to refine the language, then send it back to you for your approval, then you could
send us the revised proposal as a pdf file to include with our application. We think it would make our grant
application much stronger.

Thank you, Paul. We hope we can come to some agreement that will allow us to support your work.

Stephen

***

Re: Your proposal


Date: Wednesday Oct 2, 2002, 9:35 PM
From: paul.p
To: shedges

<snip>

I completely agree to work full time on Koha until MARC works (in my planning, for instance, there are only 2x3 days of teaching free-software technologies, but that may be delayed until 2003 too, for budgeting reasons; other opportunities are probably coming, but I can delay them). If I'm sure to be paid at the end of the job, then I'm happy and will work on Koha. Note it's in my interest too, as I plan to sell services here in France on Koha: installation, teaching, support... To do this, we need a French version, which MUST be MARC-compatible. So your interests and mine are joined here: the sooner the software is MARC-compatible, the sooner I can install it in France.

Finally, we would like to have your proposal (text) in a form that could be edited for use as an attachment to
our grant proposal.

No problem. I can send it in TXT, RTF, Word, or OpenOffice format, as you want (the original doc is written in OpenOffice).

Thank you, Paul. We hope we can come to some agreement that will allow us to support your work.

It seems it begins very well ;-)

One last note: I'm very impressed by the differences between French and US "public" administration: we have been trying for 4 months to get some funds here in France for MARC support. We haven't succeeded, for many reasons: "no funds for this" (but funds to buy proprietary software!), "funds only for fundamental research, not for supporting free software", "12 months to get an answer"... And NPL arrives, asks for an RFP, answers 5 days after my proposal, and needs only 3 months to get the funds.

Note: I'm at GMT+2, so it's time for me to go to bed. But I'm sure my dreams will be sweet with this good news ;-)

Paul

***

Re: Your proposal


Date: Thursday Oct 3, 2002, 9:16 AM
From: shedges
To: paul.p

Paul, sorry I didn't get back to you sooner. I had intended to look at my e-mail last night, but then my son
needed help with his schoolwork, and that filled the whole evening.

I've only one question: do you plan to fund 75% or 100% (25% by yourself and 75% by federal grant)? Just to be sure.

The grant requires a 25% 'match,' so we would be paying 25%, the grant would pay 75%. However, you don't
need to worry about this. The library will pay all 100% to you, then get reimbursed 75% by the grant. You won't
be waiting for payment from the grant, you'll just get one check from the library, probably about January 15.
The easiest way to do this, in fact, would probably not be a check, but a transfer of funds to your bank account.
We can work out those details later.

I can send it in TXT, RTF, Word, or OpenOffice format, as you want (the original doc is written in OpenOffice)

OpenOffice is fine, that's what I use.

One last note : i'm very impressed by the differences between french and US "public" administration

We're probably not all that different from France. NPL happens to have people on staff who are very interested
in innovation and know how to 'work the system.' I'm sure that combination would have worked just as
effectively in France.

I'll keep you informed of any new developments here. We had another proposal from a firm in Dallas, Texas
(psltech.com), but we think they would be much better at working on standardized borrower records instead of
MARC records. I intend to call them today and propose that we hire them to help us transfer our borrower
records to Koha, thus getting them involved in the project . . .

Stephen

***

Re: Your proposal


Date: Thursday Oct 3, 2002, 3:36 PM
From: paul.p
To: shedges

Stephen Hedges wrote:

Paul, sorry I didn't get back to you sooner. I had intended to look at my e-mail last night, but then my son
needed help with his schoolwork, and that filled the whole evening.

No problem. I was in bed anyway [and was woken at 3AM by my 2nd son :-\ (2 years, the 1st being 7)]

You won't be waiting for payment from the grant, you'll just get one check from the library, probably about
January 15. The easiest way to do this, in fact, would probably not be a check, but a transfer of funds to your
bank account. We can work out those details later.

The fund transfer is for sure the best solution I think. Let's see this later.

OpenOffice is fine, that's what I use.

Here it is

Paul

***

Re: Status of Proposals


Date: Thursday Oct 3, 2002, 9:33 AM
From: shedges
To: pate

Stephen,

Now that we've passed the Sep 30 deadline, I thought I'd ask you how things were going with the RFP.

Hi, Pat!

Well, we had two proposals, one from PSLTech (psltech.com), and one from Paul, which I assume you have
seen (?). I'll forward the PSLTech proposal to you as a separate e-mail.

Even though PSL was considerably cheaper than Paul, we have reached an agreement with Paul to do the work.
We will be applying for a Library Services and Technology Grant through the State Library of Ohio's 'mini-
grant' program which would pick up 75% of the cost. We will be paying Paul the full price, and think our
chances of getting reimbursed by the LSTA grant are very good.

We still think it would be a good idea to get PSL involved in Koha, so I intend to speak with them today about
the possibility of hiring them to transfer our borrower data from our old Pick database to Koha. In the process,
we would want them to keep a close eye on NCIP and SIP2 to try to keep our data compatible with those
'standards.' They may prompt some changes to the Koha database structure, and would leave an opportunity for
writing NCIP or SIP2 modules/scripts for Koha. In other words, we're thinking ahead to our next RFP, which
actually may not take the form of an RFP. I think you were interested in doing some NCIP work, right? At this
point, we're very much groping our way forward, but we do think that it would be good to get PSL involved.

The other company that I spoke with, evolServ, called to say that they would not submit a proposal because
their costs (mainly time spent in learning MARC) would make their proposal much too expensive. They're still
keeping an eye on Koha, because they're seeing the same thing that PSL sees -- this is a big potential market for
them in the future. Have you seen the Red Hat slogan, "Would you buy a car with the hood welded shut?" (I
love that analogy.) These guys are getting themselves positioned to be the "mechanics" that serve Koha
libraries.

That's the news so far. What do you think?

Stephen

***

Re: Status of Proposals


Date: Thursday Oct 3, 2002, 12:13 PM
From: pate
To: shedges

On Thu, 3 Oct 2002, Stephen Hedges wrote:

Well, we had two proposals, one from PSLTech (psltech.com), and one from Paul, which I assume you have seen
(?). I'll forward the PSLTech proposal to you as a separate e-mail.

I did see Paul's. In fact, I was asked to proof-read it for him. (He's occasionally nervous about his English skills.)

Even though PSL was considerably cheaper than Paul, we have reached an agreement with Paul to do the
work. We will be applying for a Library Services and Technology Grant through the State Library of Ohio's
'mini-grant' program, which would pick up 75% of the cost. We will be paying Paul the full price, and think our
chances of getting reimbursed by the LSTA grant are very good.

That's wonderful. He'll be happy, and I'm sure this will help push Koha along.

We still think it would be a good idea to get PSL involved in Koha, so I intend to speak with them today about
the possibility of hiring them to transfer our borrower data from our old Pick database to Koha. In the process,
we would want them to keep a close eye on NCIP and SIP2 to try to keep our data compatible with those
'standards.' They may prompt some changes to the Koha database structure, and would leave an opportunity for
writing NCIP or SIP2 modules/scripts for Koha. In other words, we're thinking ahead to our next RFP, which
actually may not take the form of an RFP. I think you were interested in doing some NCIP work, right? At this
point, we're very much groping our way forward, but we do think that it would be good to get PSL involved.

Potentially. I actually touched base with a couple of very good Perl programmers who were involved in writing early SIP/3M code for a vendor some years ago. They're interested in working on NCIP as well. I'd rather back
down and let them do the work, because a) it will get done better and faster and b) I'd like to get them into the
Koha community.

Katipo has also expressed some interest in working on NCIP. I think they were offered some 3M equipment and
think that the time spent on NCIP would enable them to integrate self service 3M systems into Koha.

I may end up keeping my own work with Koha on more of a consultative level. We shall see.

The other company that I spoke with, evolServ, called to say that they would not submit a proposal because
their costs (mainly time spent in learning MARC) would make their proposal much too expensive. They're still
keeping an eye on Koha, because they're seeing the same thing that PSL sees -- this is a big potential market for
them in the future. Have you seen the Red Hat slogan, "Would you buy a car with the hood welded shut?" (I
love that analogy.) These guys are getting themselves positioned to be the "mechanics" that serve Koha
libraries.

I think this is a great position for evolServ and PSL to put themselves in. I'd welcome their presence on the
kohabiz list or the devel list if they'd care to join up. One place that they might be able to get involved with a
minimal investment of resources is a project that I'm starting.

Each of the library associations here in the Midwest has put together a strategic plan. My initial goal is to merge the plans and cull out the parts that Koha can address. My next step will be to contact the working groups
responsible for these visions to try to build a dialog. Hopefully, we can build the results of this dialog into Koha,
put ourselves firmly in the library associations' fields of vision, and show off the real value of Free Software.

I'd be happy to talk to PSL or evolServ about this project, or about Koha in general if they're interested.

That's the news so far. What do you think?

Sounds great! On this end, I know that Steve T, who wrote the Z39.50 client, is working on a server. I think he's
hoping to put in a bid on that project.

I guess a logical question at this point is: since we announced the RFP, should we also make an announcement about the award?

As always, if there's anything I can do, please let me know.

-pate

***

Koha proposal
Date: Friday Oct 4, 2002, 1:09 PM
From: shedges
To: stang

Stan, thank you for your proposal in response to our RFP for MARC21 support for Koha. We have accepted the
proposal from Paul P of France and reached an agreement with him to do this work. Paul has been working on
Koha for some time, and it turns out he's already about half finished with the MARC21 support. He was way
ahead of everyone else.

I want to propose something to you in return. We're very interested in the database expertise you have available
at PSL Tech. At some point, we will need to transfer our borrower records from our old Pick database to the
Koha (MySQL) database. We're wondering if you would be interested in the job.

But of course, it's not quite as simple as that. I need to give you some background on library databases and
automation software before I can explain.

MARC, as I'm sure you now know, is a rather old standard developed in conjunction with the Library of
Congress to allow libraries to easily exchange bibliographic data, by making sure that the data structures are the
same in all libraries using MARC (which is now almost all public and academic libraries in the U.S., and many
school libraries).

There has never been a corresponding standard for borrower data. Every library automation package has its
own way of storing borrower data, which makes sharing of borrower data difficult to impossible, and also
means that someone (usually the new software vendor) must figure out how to convert borrower data from one
system to another whenever a library changes systems.

3M ran into a problem with this as they started to market self-checkout hardware for libraries. Their machines
had to be able to communicate with a wide variety of library systems and exchange borrower and circulation
data efficiently. So they developed a Standard Interchange Protocol (SIP) and software that could be added to
existing systems, allowing passing of borrower data between the self-checkout machine and the library's
automation system. Version 2 (SIP2) was released early in 1998.
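The field structure this paragraph describes can be sketched briefly. In SIP2, variable-length fields each begin with a two-letter identifier (AO = institution id, AA = patron identifier, AE = patron name) and end with "|"; real messages also carry a command code, a fixed-length header, and optional sequence/checksum fields, which this sketch ignores. The sample message and institution name below are invented for illustration; this is not Koha's or 3M's code.

```python
# Sketch of parsing the variable-length fields of a SIP2-style message.
# Each variable field begins with a two-letter code and is terminated by
# "|".  Only the variable-field portion is handled here.

def parse_sip2_fields(field_part: str) -> dict:
    """Split the '|'-delimited variable-field portion of a message."""
    fields = {}
    for chunk in field_part.split("|"):
        if len(chunk) >= 2:                 # two-letter code plus value
            fields[chunk[:2]] = chunk[2:]
    return fields

# Hypothetical variable-field portion of a patron-related message:
sample = "AOnelsonville|AA0000123|AEJane Q. Public|"
print(parse_sip2_fields(sample))
# {'AO': 'nelsonville', 'AA': '0000123', 'AE': 'Jane Q. Public'}
```

The simplicity of this delimited format is part of why 3M's protocol spread so quickly among vendors.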

Since then, NISO has become involved in developing an official standard based on SIP2, the NISO Circulation
Interchange Protocol (NCIP -- see http://www.niso.org/committees/committee_at.html). This standard is close
to being adopted.

In Ohio, libraries can share resources with other libraries if their automation systems have full MARC records, a
Z39.50 server for sharing MARC records, and a SIP2 module (eventually to become NCIP). Nelsonville Public
Library (or any other library in Ohio) cannot use Koha until it meets these requirements.

So, back to our proposal. We would like to know if you are interested in the job of converting our borrower
records into Koha (since there's no traditional vendor to handle this). But this process also needs to keep the
SIP2/NCIP standards in mind, because someone will eventually need to write SIP2/NCIP support into Koha.

Stephen Hedges

***

Re: Koha proposal


Date: Friday Oct 4, 2002, 1:31 PM
From: pate
To: shedges

Excellent letter, thank you for BCCing me on it. I look forward to hearing Stan's response, and hope that he
moves forward with supporting you guys.

On a related note, I'm in the early stages of a project to reach out to library associations with the intent of
starting an ongoing dialog about Koha and how it needs to grow to meet the various library associations'
strategic goals. I'm planning on working with the BC, WA, and OR associations (because I'm closest to them),
but would hope that other associations, developers, and support companies would be interested in joining the
effort. You can see the work in progress at http://www.kohalabs.com

thanks

-pate

***

Re: Koha proposal


Date: Monday Oct 7, 2002, 10:11 AM
From: stang
To: shedges, eugenev

Stephen,

Thanks for letting us know the proposal status. Sounds like Paul P will do a good job and it will be great to have
MARC21 capability in Koha.

We are very interested in working on other aspects of the project, including the one you mentioned. Eugene and
I have spent some time over the weekend reviewing the NCIP standard and feel confident we can convert your
borrower data from the existing Pick database to Koha. Koha has limited support for this data now, so it may be
advantageous to use auxiliary tables that support full NCIP data until Koha can be expanded to utilize this
information. To provide access in Koha, we would simply link these auxiliary tables to existing Koha tables.

Here are some important questions we have:

1. Do you want auxiliary tables built to hold all NCIP information, or do we just convert your SIP2 data
directly into current Koha tables?
2. What Koha enhancements, if any, are required to support NPL's circulation processes?
3. If Koha enhancements are required, which SIP2/NCIP data should be added directly to the Koha
tables?

Stephen, as we proposed with the MARC21 work, it would be beneficial to have some actual data to look at.
Can you export some or all of the SIP2 data from Pick and send it to us? I realize this may be sensitive
information so we are willing to sign a non-disclosure agreement to ensure its protection.

All of this will help us develop a scope of work for the project. From there, we can provide a proposal to you.
Just so you will know, NPL does not incur any costs from us unless you accept the proposal. Any preliminary
effort is just to help us better understand your needs.

Thanks again for considering us,

Stan

***

Re: FW: Re: Koha proposal


Date: Monday Oct 7, 2002, 11:43 AM
From: owenl
To: shedges, joshuaf

Maybe I'm interpreting this incorrectly, but it sounds to me like we're talking about something different than
they are. Are we even concerned at this point about how our data is going to fit into Koha, since we know that
Koha isn't NCIP-compatible yet? Wouldn't the data transfer we're asking them to do take place *after* that
compatibility was in place, so they wouldn't have to worry about 'auxiliary tables' or Koha enhancements?

-- Owen

I'm thinking about the best way to answer this, and would appreciate any comments you might have. S.H.

***

Re: Koha proposal
Date: Thursday Oct 10, 2002, 2:08 PM
From: stang
To: shedges, eugenev

Stephen,

Yes, we agree, compatibility comes before data transfer, but having some sample data helps us better design and
implement the new features. It is not intended to be used as the final transfer of borrower data for production
use --- just as a tool to aid development and testing.

Thanks again and we look forward to your comments,

Stan

Stephen Hedges wrote:

Stan, I did get your e-mail, it's just been a crazy week. I need to look at your ideas more closely, but one
reaction I've had so far is that we need to be sure that SIP2/NCIP compatibility comes before the borrower data
transfer in our hierarchy of tasks. Is that your understanding, too?

Stephen

***

Re: Koha proposal


Date: Friday Oct 11, 2002, 4:16 PM
From: stang
To: shedges, eugenev

Stephen,

Thanks for the help program. We'll take a look at it. No problem with Windows (we still keep a couple of copies
around, or we can try Wine).

Don't worry about the translations, we will figure it out. We'll get the wIntegrate program too.

Look forward to the borrower data. Just send it in whatever form is convenient for you. CSV is good, but we
can handle almost anything.

Thanks,

Stan

Stephen Hedges wrote:

OK, Stan, let's start with this. I've attached a big file called SpydusHelp.exe which is the online help system for
our current system. I'm sending it because it includes the file specs for our data files. You'll notice that Spydus'
borrower data does not conform to SIP2 or NCIP. I suspect their SIP2 module must act as a translator and
communication interface (or maybe I'm completely out to lunch). Unfortunately, the help file requires Windows.

The most convenient interface to our Unidata (Pick) files is a program called wIntegrate, which can be
downloaded (trial version) from IBM. Unfortunately it requires -- yeah, you guessed it.

I'll try to get some time tomorrow to extract some borrower data in CSV format and send it off to you. Then we
can talk further.

Stephen

***

koha 1.3.1 released


Date: Monday Oct 14, 2002, 4:37 PM
From: paul.p
To: shedges

Good morning Stephen,

Most of the news is in the title. I've just released the 1.3.1 version of Koha. The notes explain what's new.
(http://sourceforge.net/project/shownotes.php?release_id=116166)

I think you should consider installing it and taking a deep look at the parameters table. The default param tables
are USMARC21 in English. That's exactly what you need. So, verify that the texts are right (they come from
LOC!) and meet your needs.

I'm working on marcimport and manual add/modify of biblios now.

Paul

***

Re: Koha proposal


Date: Tuesday Oct 15, 2002, 9:30 AM
From: stang
To: shedges, eugenev

Stephen,

Thanks for the files. We are looking at them now and have already converted them into a temporary database
(easier to examine the data). Will keep you informed as to our progress.

We will definitely secure this data on our server. We are planning to provide you a secure ftp address in the
future so you don't have to email large files. This will be much better, faster and more secure than email
attachments.

Thanks,

Stan

Stephen Hedges wrote:

I've attached two text files: 1) the listing of the BRW file, limited to borrower numbers less than 1000; 2) the list
of the BRW.DETAILS (BRWDET) file, same limits.

These two files cross-index by borrower number. What you're seeing here is warts and all. For instance, all
borrower numbers should be seven digits padded with leading zeros (e.g. 0000009, 0000099, 0000999, etc.). As
you can see, we have some that aren't.
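The cleanup Stephen is describing -- padding stray borrower numbers back to the seven-digit, zero-filled form -- is easy to sketch. This is an illustration of the padding rule only, not the actual Spydus export or the eventual conversion job.

```python
# Illustrative cleanup step for the borrower-number problem described
# above: strip whitespace and left-pad to seven digits (9 -> "0000009").

def normalize_borrower_number(raw: str) -> str:
    """Return the borrower number padded to seven digits with leading zeros."""
    digits = raw.strip()
    if not digits.isdigit():
        raise ValueError(f"not a borrower number: {raw!r}")
    return digits.zfill(7)

for raw in ["9", "99", "0000999", " 1234 "]:
    print(normalize_borrower_number(raw))
# 0000009
# 0000099
# 0000999
# 0001234
```

A pass like this before loading would catch the non-conforming numbers the letter mentions instead of carrying them into the new database.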

Also, I'm relying on your integrity with this data -- names, addresses, phone numbers, stuff people generally
don't like to broadcast.

Let me know if there's anything else I need to send. Thanks!

Stephen

***

Re: koha 1.3.1 released


Date: Sunday Oct 20, 2002, 10:12 AM
From: shedges
To: paul.p
Cc: joshuaf, owenl, laurenm, kenr

Hello, Paul. You wrote to me last Monday:

I think you should consider installing it and taking a deep look at the parameters table. The default param
tables are USMARC21 in English. That's exactly what you need. So, verify that the texts are right (they come
from LOC!) and meet your needs.

OK, I have looked at the new tables and scripts in 1.3.1. I am sending my comments to everyone involved with
our project, so they can all see what I have sent to you, and I would like you to send any reply to everyone as
well (reply to all). My comments are my own, however, and not comments from the group. You may get other
comments from those individuals.

First, I am very impressed by the amount of work you have done. I think your comment that fill_usmarc.pl is a
"simple" script is much too modest! I noticed that many of the '$dbh->do("update")' lines are still commented
out. Does that mean you are still working on this script?

You asked us to verify that the texts are right (from LOC). They look fine to me, and everything looks like it
will fit our needs nicely. We currently do not store our item information (barcode, etc.) in the 955 tag, but we
should be, and can take care of that detail when we migrate our data.

It looks like you have introduced a new primary key -- 'bibid' -- that was not in the kohadb, is that correct? I
also do not understand the relation between marc_tag_table and marc_tag_structure, or between
marc_subfield_table and marc_subfield_structure. That is not important, however, as long as it works!

Joshua and Owen and I will be spending most of next week at a conference with Chris Cormack. I mention this
because 1) we will probably not have much time to reply to e-mails, and 2) we will probably talk about you a
lot!

Thanks, Paul.

Stephen

***

Re: koha 1.3.1 released


Date: Monday Oct 21, 2002, 12:15 PM
From: paul.p
To: shedges
Cc: joshuaf, owenl, laurenm, kenr

Stephen Hedges wrote:

First, I am very impressed by the amount of work you have done. I think your comment that fill_usmarc.pl is a
"simple" script is much too modest! I noticed that many of the '$dbh->do("update")' lines are still commented
out. Does that mean you are still working on this script?

Two answers:

• the USMARC data comes from Steve, who gets it directly from the LOC; fill_usmarc.pl is mine.
• the fill_usmarc script will disappear in a future version: I will provide, during the installation process, a
way to "import" CSV files for different MARC flavours in different languages. I'll use fill_usmarc,
then dump the DB, and will obtain a CSV file with a "standard" form. I already have French UNIMARC
too, done by some French volunteers.

Note you'll still need to set your own parameters for links between the old koha-db and the MARC-db, and for
what you want to manage/ignore in your library. Both operations are done in the /admin/ scripts provided in 1.3.1.

You asked us to verify that the texts are right (from LOC). They look fine to me, and everything looks like it will
fit our needs nicely. We currently do not store our item information (barcode, etc.) in the 955 tag, but we should
be, and can take care of that detail when we migrate our data.

Nice.

It looks like you have introduced a new primary key -- 'bibid' -- that was not in the kohadb, is that correct? I
also do not understand the relation between marc_tag_table and marc_tag_structure, or between
marc_subfield_table and marc_subfield_structure. That is not important, however, as long as it works!

xx_structure

contains the STRUCTURE of the MARC.

xx_table

contains the DATA.

marc_biblio

contains the "header" of the marc biblio

marc_subfield_table

contains one row for each subfield of the biblio

marc_words

contains one row per word. It will be used to speed up partial MARC searches.
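Paul's split between the *_structure and *_table tables can be sketched abstractly. In this toy illustration (invented tags and values, in Python rather than Koha's Perl/MySQL), the dict plays the role of marc_subfield_structure (which tag/subfield pairs are defined) and the list plays marc_subfield_table (one row per subfield of a biblio):

```python
# Toy model of the structure/data split described above.  Tags and labels
# are examples only, not Koha's real schema or column names.

subfield_structure = {
    ("245", "a"): "Title",
    ("100", "a"): "Main entry - personal name",
}

subfield_table = [
    {"bibid": 1, "tag": "245", "subfield": "a", "value": "Using Koha"},
    {"bibid": 1, "tag": "100", "subfield": "a", "value": "Hedges, Stephen"},
]

def validate(rows, structure):
    """Reject any data row whose tag/subfield pair is not in the structure."""
    for row in rows:
        if (row["tag"], row["subfield"]) not in structure:
            raise ValueError(f"undefined subfield {row['tag']}${row['subfield']}")
    return True

print(validate(subfield_table, subfield_structure))  # True
```

Keeping the allowed structure separate from the stored rows is what lets the same code serve MARC21, UNIMARC, or any other flavour just by loading a different parameter table.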

Joshua and Owen and I will be spending most of next week at a conference with Chris Cormack. I mention this
because 1) we will probably not have much time to reply to e-mails, and 2) we will probably talk about you a
lot!

Did Chris come to the US, or do you fly to NZ?

Paul

***

NCIP Progress
Date: Friday Oct 25, 2002, 8:23 AM
From: stang
To: shedges, eugenev

Stephen,

Just wanted to give you a brief update as to our progress on developing an NCIP capability for Koha. Eugene
has been busy building an initial cut of a full NCIP database. As you know, this is quite an extensive standard,
so it has taken some time to do. The next steps are to analyze and refine this design, then load the sample
borrower data you supplied. Once this is complete we can do some testing and look at refinements.

The big questions for us are the things we have not tackled yet: linking this data to the current Koha tables and
defining Koha interface changes to support your needs. At some point, I think it would be good to have a
conference call to address these and other issues. After we feel comfortable with the scope of the effort, we will
prepare a proposal for your review, which I expect will be in the next 2-3 weeks.

Stephen, I will be out of town until late Monday, so I won't be able to reply to any of your email until then. If
you need help, you can email Eugene at the address above.

Regards,

Stan

***

Re: Please proof-read


Date: Wednesday Nov 20, 2002, 6:14 PM
From: paul.p
To: shedges

Stephen Hedges wrote:

Thanks, Paul. We will wait (impatiently!) for your announcement.

The site is running (http://demo.koha-fr.org). The announcement will be made tomorrow in the weekly
newsletter. You can take a look before anybody else :-)

If you have problems understanding something, don't hesitate to ask (by mail or IRC). Note it's 6pm here, so I
will leave my computer soon.

Paul

***

Koha user groups


Date: Friday Nov 22, 2002, 2:39 PM
From: pate
To: kohabiz

Okay, I've been struck with a thought that I think bears some kicking about. In regions where there are several
Koha libraries, it might make sense to start Koha User Groups. If a vendor wanted to host a website and email
list, actively recruit attendees, and try to host/organize a meeting quarterly or semi-annually, I think that the
resulting user group would really help make Koha a more visible, more viable option for other libraries in the
area.

--some questions--

• What would it take to get a user group off the ground?
• What kind of an agenda would you want to run for a meeting?
• How big a region could you support?
• How many libraries would be a workable minimum?
• What would libraries get out of it?
• How much standardization should there be between groups?
• How much interaction should there be between groups?

I've got some thoughts about each of these, but I think I'll hold my peace for a bit and see what everyone else
thinks.

BTW, I'm planning on trying to jumpstart two of these as pilots, one in the Washington, DC area and one in the
Ohio-Michigan area. I'll plan on reporting my progress periodically. Kibitzing welcome.

-pate

***

Re: interesting development


Date: Friday Nov 22, 2002, 3:04 PM
From: pate
To: shedges

On Fri, 22 Nov 2002, Stephen Hedges wrote:

> Hmm, I just got an email from a member of the NCIP committee with regard to our plans to build an
> NCIP interface for Koha. It sounds like you've made bigger waves than I thought. Good! BTW, I'm
> beginning to think that P** Tech (the co. in Dallas) is the wrong horse for this race.

I'm sorry to hear that. I'd have liked to see them get involved. It didn't look like they wanted to play in that
direction, though.

Chris told me that Katipo has an offer from 3M to deliver the code for SIP2 that would run on Koha. I wonder if
it wouldn't be wise to take their offer, then make modifications to convert the SIP2 to NCIP. I think there's some
SIP2 available as open source, too, although I can't find the links right now. (Seems like epixTech may have
been distributing it -- but was the source open? I don't know now.)

EpixTech has released an open-source NCIP toolkit, which we may be able to use to create a Perl NCIP library ...
it's certainly worth exploring. Perhaps working both directions with Katipo would be a good idea. I do have a
couple of very strong Perl guys (with an ILS background) in Portland who are interested in doing the work,
though, and a vendor in Ann Arbor that is making noises about doing things with Koha, so if we can find a way
to run a competitive selection process, that would be nice.

I don't suppose the NCIP folks would be interested in working with open source software (Koha) to test their
open standard (NCIP)?

I think there's some room to operate there. We just need to be careful not to put the NCIP standards folk in the
awkward position of working on Koha, while they work for a proprietary vendor.

> Have you had a chance to play with Paul's new demo? I realize that it's UNIMARC, not MARC21, but I
> think it will still be interesting.

Yes, I've peeked at it (Paul sent me the URL a little early), and want to spend some time digging deeper this
weekend, but what I've seen is very impressive. I think the changes justify the ver. 2.0 label.

I'm glad you think so. I'm pretty excited by the way things are moving. We've also had some progress on
reports, and (just this morning) a volunteer to start working on a serials module.

***

Re: interesting development


Date: Friday Nov 22, 2002, 3:29 PM
From: shedges
To: pate

I do have a couple of very strong Perl guys (with an ILS background) in Portland who are interested in doing
the work, though, and a vendor in Ann Arbor that is making noises about doing things with Koha, so if we can
find a way to run a competitive selection process, that would be nice.

The Linux B**, right? We met them at the Think-Linux show, and they seem to be very interested and very
nice.

And yes, I know you're interested in this piece, too. Here's my current status. I'm going to be paying Paul about
$1*,*** for the MARC coding (which looks like it's going to include the Z39.50 server, right?). I had planned
on getting a grant for some of that, but just learned that if I apply for the grant that's most appropriate (and
therefore easiest to get), I would be disqualifying the library from receiving a grant for a different project.
Complicated, I know, but the other project involves several community partnerships, so I'm not going to request
money from that source for Koha.

I'm looking -- but not yet finding -- other sources of grant funding, but the timing is going to be a problem. I
think I'll be paying Paul the entire fee out of the library's pocket. No problem, the work's certainly worth it! But
it leaves me strapped for cash for a few months.

So that's why I'm considering alternative ways to get this (NCIP) done without waiting for the money to arrive.
Of course, it could take months for any grant money to arrive, also, and I may just have to learn a little patience.
Not in my nature!

Stephen

***

koha 1.3.3 released


Date: Monday Jan 13, 2003, 10:29 AM
From: paul
To: shedges
Cc: joshuaf, owenl, laurenm, kenr

Hi Stephen (hi to the other NPL folks who are CC'd on this mail too),

First of all, I wish you a happy new year, with a lot of free software :-)

Maybe you've seen that koha-1.3.3 was released a few days ago. It's a very stable release for acquisitions,
which has been in use at the Dombes Abbey here in France since December.

I've included a "plugin" system to improve MARC data entry. With this system, you can define "events"
and "relations" to limit or check entered values.

For example, the plugin 210c (value_builder directory), when there is an ISBN, auto-calculates the editor
(publisher) name if it's entered in the thesaurus table. The plugin 700..., when you choose the right entry in the
700a, auto-splits the name, DOB, and title into 700 b, c, d and f... The plugins I've developed are only for
UNIMARC (and the French thesaurus/authorities structure), so I encourage you to develop your own plugins.
Don't hesitate to ask me if there is something you don't understand.
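As a rough sketch of what a value builder like the 210c plugin might do: derive the publisher from the ISBN's registrant prefix via a local lookup table. The prefix table and matching rule below are simplified illustrations; real ISBN prefixes vary in length, and Koha's actual plugins are Perl scripts in the value_builder directory, not this Python.

```python
# Sketch of an ISBN -> publisher value builder.  The entries stand in for
# the thesaurus table Paul mentions; they are examples, not real data.

from typing import Optional

PUBLISHER_PREFIXES = {
    "0-596": "O'Reilly",
    "2-07": "Gallimard",
}

def guess_publisher(isbn: str) -> Optional[str]:
    """Return the publisher whose prefix starts the ISBN, if any."""
    for prefix, name in PUBLISHER_PREFIXES.items():
        if isbn.startswith(prefix):
            return name
    return None

print(guess_publisher("0-596-00027-8"))
# O'Reilly
```

The appeal of the plugin approach is exactly this: a small per-field hook can pre-fill or validate a subfield without touching the core cataloguing code.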

I've written documentation about the plugin structure at:


http://www.saas.nsw.edu.au/wiki/index.php?page=PlugInDoc

Hope you'll enjoy this new version.

The next release will be 1.9.0, which will be the first beta before 2.0RC. In 1.9.0, you should be able to do all
that was included in my answer to the RFP, with some bugs for the price ;-) The bugs will be corrected between
1.9.0 and 2.0 (1.9.1, 1.9.2... until 2.0 can be launched).

Paul

***

Re: Timeline for Nelsonville Public Library Koha Development


Date: Monday Jan 13, 2003, 11:09 AM
From: shedges
To: larryc

Hi, Larry -

Now that Paul has released the stable version 1.3.3, we are installing it on a server and will soon be starting the
process of migrating some of our data to it. I think the MARC data will be no problem, since Paul has done such
a nice job of making Koha MARC-compliant. Our patron data may take a while.

We intend to run both Koha and our old system side by side for a while to get Koha in shape for daily use in our
libraries. Our deadline for making the complete switch-over is September, when our annual contract with the
current vendor falls due.

Stephen

Dear Stephen,

We need to have a new library automation system in place by July 1st of this year, and we have been
considering Koha as well as commercial vendors. Can you tell me what sort of timeline your library is on for
the replacement of your current system? Since we may require a lead time of three or four months for the
installation of a new system, we don't have a lot of time in which to make our decision, so we're wondering
whether you are planning to have a MARC-compatible Koha system installed within the same timeframe.

Larry C

***

Checking in
Date: Thursday Jan 16, 2003, 2:33 PM
From: brianp
To: shedges

Hey Stephen,

Hope you had a good holiday! I wanted to check in and see how things were going with the NPL's open
source/Koha project, and see if things were still on schedule for this summer.

Looking forward to hearing from you. I just found out my old boss is the assistant director of the Greene
County Public Library, and I need to contact him and see if he would be interested in your project, too.

Peace,

Brian

***

KOHA and we
Date: Tuesday Jan 21, 2003, 7:32 AM
From: red
To: shedges

Stephen Hedges: Nelsonville Public Library

Allow me to introduce myself briefly. I am a librarian, once head of the Boston Theological Institute's Library
Program (1967-1971), after that head of the Office for Systems Planning and Research in the Harvard
University Library (1973-1978). Then I became the executive director of OHIONET (1978-1988), next the
executive director of CoBIT (1988-1992). After that, I left Ohio (lived in Wyoming and North Carolina). Three
years ago I joined the staff of the North Carolina Supreme Court Library.

The NCSCL installed SIRSI some years ago, is currently up to date on revisions, but cannot justify the $6,000+
annual fees (for emergency services and little else). We were interested in an open source solution and began
searching for possible candidates - none on the horizon that looked good until KOHA.

The KOHA releases have stated that the project will need assistance in the area of documentation. I have spent
at least sixteen years of my life either writing or editing documentation, both technical support materials and
end-user documentation. The NCSCL, with a staff of six, might be able to take on a significant responsibility
here. We could continue to use SIRSI for cataloging and acquisition (we have no need of a circulation module)
until KOHA was prepared to replace fully what we have now. I have handled the archive MARC file of SIRSI
(now approaching 20,000 titles) and know that it could be readily converted into an import file for any other
system. I have had extensive programming experience in older compilers and interpreters, and am working to
become proficient in Web languages and open-source SQL databases. As a law collection, our needs are for a
system that does not need to handle very large numbers of titles but requires lots of detail for materials that are
issued-in-parts, serialized and/or periodical. We are also involved in considerable analytic cataloging (parts of monographs or
books). We have a wonderful collection of historical materials going back to the end of the eighteenth century,
almost all are imprints (very little manuscript, very little realia). We are one of the few libraries of this type that
still subscribe to the current statutes and court reports of all fifty states - with very large collections of older
compilations or imprints. It is clear that SIRSI will not fill our needs for the long run.

But we also want to know what we are getting into. Your library is the only one prominently mentioned as being
involved somehow in KOHA. We would simply like to know more.

I have two young children in Columbus (divorced). I visit them once a month - traveling from Raleigh via Wake
Forest, Charleston (WV) and Gallipolis, Chillicothe and Centerville to Columbus. I could easily alter this route
and come to Nelsonville on one of these trips (I used to stop there years ago at the shoe factory for good boots).

I hope that I have not gone on too long, but have at least covered the ground as to what our interests are.

Ronald E. D

***

Re: KOHA and we


Date: Tuesday Jan 21, 2003, 10:34 AM
From: shedges
To: red

Ronald, thank you for the introduction, and glad to meet you! My library background is not as extensive as
yours. I actually hold a PhD in the History and Theory of Music (talk about esoteric! -- but I was interested in
the subject), but I soon found that I hated teaching at the college level. The library is my second career.

Your e-mail is incredibly well-timed. Koha is quickly approaching its version 2.0 release, which will have the
full MARC support Nel.Pub.Lib. contracted for. (And more, actually, since the developers came up with some
nifty features we hadn't anticipated.) But I was just thinking this morning that the documentation is now getting
way behind as the development progresses. I was beginning to consider stepping in myself to try to write some
how-to's, but I'm very, very bad at that type of writing.

So where is Koha right now? Mostly in small special and school libraries that happen to have very strong IT
people on staff. And with a lot of other libraries quietly (or not so quietly, in the case of NPL) lurking and
waiting until Koha has enough features to make it practical for larger operations. We've specified the features
we need -- full MARC support, Z39.50 server module, and a SIP2 or NCIP module -- and the first two of these
requirements are in place, for all intents and purposes. Koha version 1.2.3, the latest stable release, is the one
currently in production, but has none of these features. Koha 1.3.3 (unstable) has been released for debugging,
version 1.9.0 (also unstable) is just around the corner, and version 2.0 (stable) is due out in about a month and
will have everything we need except SIP2. So we will be installing 2.0 and moving our data into it as soon as
it's ready, and will run our two systems side-by-side until we are sure Koha is handling our needs. Then come
September, when our ($12,000) annual support fee is due for our current system, we plan to shut the old system
down and move completely to Koha.

I am amazed at the speed at which Koha develops. But I see a major hurdle on the horizon, in that there MUST
be good user documentation before all that fine code is going to be usable. There is a documentation project
under way, but I haven't heard much about it lately. Either it has been overwhelmed by the pace of development,
or it's a very quiet project. There are various documents floating around on the Internet that address one aspect
or another of Koha, but no unified manual(s) that I know of.

At any rate, I think you should get in touch with Pat E, the Koha project manager (in Seattle), who would know
a lot more about the current state of the documentation and be able to answer any questions I've left hanging.
His e-mail is pate@*** and if you want, I can forward him a copy of your e-mail and this e-mail, so we won't
have to cover all this ground again. What do you think?

Hope I haven't scared you off, and looking forward to hearing from you again -

Stephen

***

KOHA again
Date: Wednesday Jan 22, 2003, 9:14 AM
From: red
To: shedges

Stephen:

Please forward these exchanges to Pat E and see where we go from there.

We are definitely ready to make some moves. As I noted before, we will need full MARC support: i.e., MARC
in and MARC out. I would prefer that the communication format MARC record be stored in the system
somewhere. We will definitely do much more than traditional cataloging, i.e., analytic cataloging and indexing
of North Carolina legal materials.

As I noted previously, I was in charge of The Library Machine (TLM) for many years. I wrote most of the code
(PL/1 and Assembler - ah, the good old days). In the code, I made sure that I wrote (and insisted that others do
the same) all the hooks and hangers for both technical and user documentation. For instance, in the Acquisitions
Module, where there was code to correct an invoice date that had been incorrectly entered previously, you will
find a box that describes how that function works and why it was programmed that way. It was an easy process
then to take each of those boxes and write documentation directly out of them. In the end, the material had to be
organized differently from the way that the code proceeded (the sequence of programming logic is not
necessarily the same as the functional or use-logic). If the KOHA code does NOT include such hooks and
hangers, then that should be a future priority - so that, in the end, as the code changes, so can the
documentation.
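Ronald's "hooks and hangers" idea survives in modern practice as doc comments written beside the code they explain and then harvested by a tool. A minimal sketch in Python (the invoice-correction function is hypothetical, echoing his TLM example; the harvesting uses only the standard-library inspect module):

```python
# Documentation written next to the code it describes, then collected by
# a generator.  The function below is an invented illustration of the
# Acquisitions Module example in the letter above.

import inspect

def correct_invoice_date(invoice: dict, new_date: str) -> dict:
    """Replace an incorrectly entered invoice date.

    Why: invoice dates drive fiscal-year reports, so a bad date must be
    correctable without deleting and re-keying the whole invoice.
    """
    fixed = dict(invoice)
    fixed["date"] = new_date
    return fixed

# A documentation generator would gather these docstrings:
doc = inspect.getdoc(correct_invoice_date)
print(doc.splitlines()[0])
# Replace an incorrectly entered invoice date.
```

As the letter notes, the extracted material still needs reorganizing for readers, since the sequence of programming logic is not the same as the use-logic, but the source of truth lives with the code.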

Another irony: I was originally trained in theology and classical languages. When I got to my last year of
seminary, I was one of twenty (eventually twelve - of course, twelve) who "dropped through" - we graduated
from the seminary but refused to be assigned to parishes. That was when I became a librarian. Later I went to
Harvard and did a ThD in ecclesiastical history under George Huntston Williams. I worked on a historiography
topic of the mid-sixteenth century. When I finished, George wanted me to take a teaching position at Boston
University (among other reasons, to keep me close by - we had become very close friends by then). I told him,
No thanks, because I could not afford a fifty percent salary cut. When he found out how much I earned as an
assistant to the University Librarian at Harvard, George went to the Dean (Rosovsky, at the time) and
complained that the librarians were overpaid! The stinker! Later he cooled off, and so did I. I did do some
graduate seminars at Ohio State for a few years, when Harold Grimm was the resident Reformation scholar.

More later.

Ronald E. D

***

[Koha] ding, ding, the bell rings => koha 1.9.0 is born !
Date: Tuesday Feb 4, 2003, 6:31 PM
From: paul.p
To: koha-devel@lists.sourceforge.net, koha@lists.katipo.co.nz

Koha 1.9.0 was born a few minutes ago.

Its parents, the koha-team, are very proud of this almost-stable child. Please go and see the baby at:
http://sourceforge.net/project/shownotes.php?release_id=138050

Paul P

***

koha version 1.9.0 released
Date: Thursday Feb 6, 2003, 1:32 PM
From: paul.p
To: shedges

Hi Stephen,

You should have seen that Koha 1.9.0 was released a few days ago. The MARC part of this version is stable. We just need to re-integrate the z3950 client and fix some bugs (not related to the MARC acquisition part of Koha). I plan for 2.0 to reorder acquisition/cataloguing as discussed on the mailing list. It's something that was not in your RFP but seems really important, and you have the same wishes as me! The acquisitions module has been in production in a Christian library here in France since December, and the librarian is happy with Koha's "new look". She has asked for some minor improvements, but the "true" bugs seem to be solved.

Important note: the MARC21 parameter table is NOT complete. Completing it is a librarian's job, and nobody has done it yet. Every MARC21 field/subfield is defined, but their mapping to the non-MARC DB and links to tabs still have to be done. Note that I've added in the parameters section of Koha a "check MARC" function that checks the MARC structure and shows problems. If you build a working MARC21 parameter table, please send it to me by mail or commit it directly to CVS (the marc_tag_structure and marc_subfield_structure tables).

Some questions to end this mail:

• Have you got news from your bank? Do you have a date planned for payment (the sooner being the better ;-) )?
• You need an invoice, I suppose. What should this invoice contain? (It MUST be at least in French for me, with some specific items. I'll do a French/English one, unless you plan to learn French :-) )

Paul P

6. Making Koha Work


Missing MARC tags
Date: Monday Feb 10, 2003, 3:11 PM
From: shedges
To: paul.p

Paul -

There are a few MARC tags which seem to be missing from version 1.9.0 (at least I haven't found them :-)

• LDR -- the leader information, which is generally inserted by the cataloger and contains information
about the MARC record that follows
• 001 -- Control Number, also inserted by the cataloger
• 003 -- Control Number Identifier, not as common, but still often inserted by the cataloger.
• 005 -- Date and Time of Last Transaction, generated by the computer (not the cataloger), probably the
same as the old Koha "timestamp."
• 007 -- Physical Description Fixed Field, inserted by the cataloger and used extensively in shared
catalogs between libraries as a standard way to describe the item.
• 008 -- Fixed Data, inserted by the cataloger.

All of these are described, of course, at http://www.loc.gov/marc/bibliographic/ecbdlist.html
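The fixed and control fields Stephen lists can be pictured as the opening portion of a record. A minimal sketch in plain Python (not Koha code; all values below are made-up examples): unlike the variable data fields, LDR and the 00X fields carry no indicators and no subfields, so each is just a single character string keyed by its tag.

```python
# Illustrative sketch of the fixed/control portion of a MARC21
# bibliographic record as a tag -> value map. LDR and 001-008 have
# no indicators and no subfields: each is one flat string.

def make_fixed_fields(control_number, org_code, last_transaction):
    """Build the leader and control fields of a minimal record."""
    return {
        "LDR": "00000nam a2200000 a 4500",  # 24-char leader, set by cataloger
        "001": control_number,              # Control Number
        "003": org_code,                    # Control Number Identifier
        "005": last_transaction,            # yyyymmddhhmmss.f, machine-generated
        "007": "ta",                        # Physical Description Fixed Field
        "008": " " * 40,                    # 40-char Fixed Data elements
    }

record = make_fixed_fields("ocm00012345", "OCoLC", "20030210151100.0")
assert len(record["LDR"]) == 24           # the leader is always 24 characters
assert all(tag < "010" for tag in record if tag != "LDR")
```

The 005 value plays the role of the old Koha "timestamp" Stephen mentions: it is written by the machine, not the cataloger, each time the record changes.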

I've got two other complaints with 1.9.0 -- ready?

• the circulation.pl script just returns a blank page on my computer (no error messages), but maybe that's
because I have no data loaded?
• marc_subfields_structure.pl is in Polish, instead of English.

I'm running 1.9.0 on RedHat 8.0 and Apache 1.3.22.

Thanks, Paul!

Stephen

***

Koha-MARC DB links
Date: Monday Feb 10, 2003, 3:12 PM
From: shedges
To: paul.p, chris

Hi Paul (Hi Chris) -

I think I have found a BIG problem in 1.9.0. The links between the old (Chris) Koha database and the new
(Paul) Koha-MARC database cannot work.

From the way 1.9.0 is reacting, it seems that it requires a one-to-one link between the old and the new database,
and that the link must be unique. However, many of Chris' database fields should link to the same MARC tag.
For instance, the date of publication is always MARC tag 260c, even though Chris may have used it as different
fields (copyrightdate, volumedate, publication year) in the old Koha database. All of those old fields should link
to MARC 260c. (Correct me if I have misunderstood how these old fields are used, Chris.)

There are other things, like holdingbranch, that are never part of a MARC record, but are part of the circulation record that keeps track of where the book is. 1.9.0 shouldn't return any error when this is not mapped to a MARC tag.

Also, Chris, I have some questions about things like biblio.serial (how is that used?) and the different classification fields and how they are used. What is the difference between biblioitems.classification, biblioitems.itemtype, biblioitems.dewey, and biblioitems.subclass, for example?

Chris' timestamp field should be mapped to MARC tag 005, which does not appear in the list of MARC tags.
(More about this in a separate e-mail, Paul.)

I think the old items table will be the hardest thing to map to Koha-MARC, since so much of it is not MARC
data. Would you like me to send you a list of which items fields I think should be mapped, and which should
not?

Stephen

***

Re: Koha-MARC DB links


Date: Monday Feb 10, 2003, 9:19 PM
From: paul.p
To: shedges, chris

Stephen Hedges wrote:

I think I have found a BIG problem in 1.9.0. The links between the old (Chris) Koha database and the new
(Paul) Koha-MARC database cannot work.

From the way 1.9.0 is reacting, it seems that it requires a one-to-one link between the old and the new database,
and that the link must be unique. However, many of Chris' database fields should link to the same MARC tag.
For instance, the date of publication is always MARC tag 260c, even though Chris may have used it as different
fields (copyrightdate, volumedate, publication year) in the old Koha database. All of those old fields should link
to MARC 260c. (Correct me if I have misunderstood how these old fields are used, Chris.)

In UNIMARC, we didn't face this problem.

I would be surprised to learn that the Koha old DB has more data than MARC21! Anyway, if you are certain that the 260c subfield can handle various old-DB fields, let me know; it should not be hard to manage. (The biggest question is knowing what to store when you have a publicationyear AND a @date AND a volumedate!)

What would be almost impossible is the opposite: storing something from the old DB into various MARC subfields (i.e. publicationyear in 260c OR 248d OR 115f: how to choose?)
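The asymmetry Paul describes can be sketched in a few lines of Python (field and tag names are taken from the emails; this is not Koha source). Several old-DB fields pointing at one MARC subfield is workable, because the old-to-MARC direction always has exactly one answer; one old-DB field pointing at several candidate subfields leaves the software no rule for choosing.

```python
# many-to-one: fine -- every old-DB field has exactly one destination tag
OLD_TO_MARC = {
    "copyrightdate": "260c",
    "volumedate": "260c",
    "publicationyear": "260c",
}

def destination(old_field):
    """Old DB -> MARC: always a single, unambiguous answer."""
    return OLD_TO_MARC[old_field]

# one-to-many: the "almost impossible" case -- no rule picks a winner
CANDIDATES = {"publicationyear": ["260c", "248d", "115f"]}

def is_ambiguous(old_field):
    """True when an old-DB field has several candidate MARC subfields."""
    return len(CANDIDATES.get(old_field, [])) > 1

assert destination("volumedate") == "260c"
assert is_ambiguous("publicationyear")
```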

There are other things, like holdingbranch, that are never part of a MARC record, but are part of the circulation record that keeps track of where the book is. 1.9.0 shouldn't return any error when this is not mapped to a MARC tag.

Holdingbranch is item-related information.

In Koha, item-related information can be stored in the MARC record (in the proposed 995 field, as defined in UNIMARC).

Note that even if item-related information is stored in the MARC record, you will be able to export your MARC data with or without item information.

Chris' timestamp field should be mapped to MARC tag 005, which does not appear in the list of MARC tags.
(More about this in a separate e-mail, Paul.)

The MARC21 tag table may be incomplete. It was built by Steve from the LoC list, I thought, but maybe some information is missing. Just add the field/subfield if needed!

I think the old items table will be the hardest thing to map to Koha-MARC, since so much of it is not MARC
data. Would you like me to send you a list of which items fields I think should be mapped, and which should
not?

Any tag containing a 9 (9xx, x9x, xx9) is a "local" field, so you can use it in Koha! 995 is a pseudo-standard in UNIMARC, and we propose to use it for MARC21 too.

Maybe the good news is that Koha can do more than MARC21 planned to :-)

Paul P

***

Re: Koha-MARC DB links


Date: Tuesday Feb 11, 2003, 9:17 AM
From: chris
To: shedges
Cc: paul.p

On Mon, Feb 10, 2003 at 02:57:18PM -0800, Stephen Hedges said:

From the way 1.9.0 is reacting, it seems that it requires a one-to-one link between the old and the new database,
and that the link must be unique. However, many of Chris' database fields should link to the same MARC tag.
For instance, the date of publication is always MARC tag 260c, even though Chris may have used it as different
fields (copyrightdate, volumedate, publication year) in the old Koha database. All of those old fields should link
to MARC 260c. (Correct me if I have misunderstood how these old fields are used, Chris.)

Nope, you are right.

Also, Chris, I have some questions about things like biblio.serial (how is that used?) and the different classification fields and how they are used. What is the difference between biblioitems.classification, biblioitems.itemtype, biblioitems.dewey, and biblioitems.subclass, for example?

Ahh, right.

biblioitems.itemtype is the type of item, e.g. NF = non-fiction, YAF = young adult fiction ... however you set these up. Koha then uses this itemtype, along with the borrower's categorycode, to look in the categoryitem table to work out the rules for circulating the item.

For instance, a borrower of type A borrowing itemtype NF may be allowed to have it out for 21 days. Borrower type A again, borrowing itemtype PAY, might be allowed to have it out for 14 days, with a rental charge of $1.

So itemtype is mostly an internal Koha thing.
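The lookup Chris describes can be sketched as a small table keyed by the (borrower category, itemtype) pair (the table and column names here are illustrative, not the real Koha schema; the two rules mirror his examples):

```python
# Illustrative sketch of the circulation-rule lookup: the pair
# (borrower category, itemtype) keys into a rules table that gives
# the loan period and any rental charge.

CATEGORYITEM = {
    ("A", "NF"):  {"loan_days": 21, "charge": 0.00},
    ("A", "PAY"): {"loan_days": 14, "charge": 1.00},
}

def circulation_rule(borrower_category, itemtype):
    """Return the loan rule for this borrower/item combination."""
    return CATEGORYITEM[(borrower_category, itemtype)]

assert circulation_rule("A", "NF")["loan_days"] == 21
assert circulation_rule("A", "PAY")["charge"] == 1.00
```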

The other three fit together: classification + dewey + subclass make up the callnumber.

So for one of Terry Pratchett's books we have this:

+----------------+------------+----------+
| classification | dewey      | subclass |
+----------------+------------+----------+
| F-Fantasy      | NULL       | NULL     |
+----------------+------------+----------+

Now for a nonfiction book we have this:

+----------------+------------+----------+
| classification | dewey      | subclass |
+----------------+------------+----------+
| NULL           | 920.000000 | c        |
+----------------+------------+----------+

This is a hangover from how HLT's old system did things.
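Following the two example rows above, the three columns might be coalesced into a single call number string like this (a sketch only; the combining rule is my assumption, not taken from the Koha source, and NULL is modeled as None):

```python
# Illustrative sketch: combine classification, dewey, and subclass
# into one call number, skipping NULL (None) columns and trimming
# the trailing zeros MySQL shows on the dewey decimal.

def callnumber(classification, dewey, subclass):
    parts = []
    if classification is not None:
        parts.append(classification)
    if dewey is not None:
        parts.append(f"{dewey:.2f}".rstrip("0").rstrip("."))  # 920.000000 -> 920
    if subclass is not None:
        parts.append(subclass)
    return " ".join(parts)

assert callnumber("F-Fantasy", None, None) == "F-Fantasy"   # fiction row
assert callnumber(None, 920.0, "c") == "920 c"              # nonfiction row
```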

I think the old items table will be the hardest thing to map to Koha-MARC, since so much of it is not MARC
data. Would you like me to send you a list of which items fields I think should be mapped, and which should
not?

That would be really useful, thanks Stephen.

Chris

***

Re: KOHA Demo at Ann Arbor


Date: Tuesday Feb 11, 2003, 9:53 AM
From: shedges
To: eli
Cc: pate

Hi, Eli -

I was wondering how I would arrange to meet Pat while he was in the area. Your kind invitation might just
solve the problem. I'd like to tentatively accept, and can give you a firmer answer after I've checked a few
things out.

As for our progress, I spent a good deal of time yesterday, and plan to again today, working on version 1.9.0,
specifically the links between the old Koha database structure and the new MARC database structure. I think
between Chris, Paul and me, we will manage to wrestle it to the ground (with me doing the complaining and Paul
doing the work).

Good to hear from you!

Stephen

Stephen:

How is your Koha migration going? As part of our automation system evaluation process, we have organized a
Koha demo and discussion for a subset of our staff. Pat E is coming to present the system and assist in a
general discussion about the idea and the possibilities of open source.

We were wondering if you would be willing and able to join us in the afternoon (say 1-5 or so) of Friday, February 28th, to tell the group (about 10 staff members plus geeks) about why you chose Koha, where you've taken it so far, and where you plan to go with it. If you could come to Ann Arbor, that would be great, but a conference call would work too. If you are able to come to Ann Arbor, we would pay your mileage and lunch. We would also have the opportunity for Kip and me to sit down with you and Pat afterwards, and to give you a tour of our main branch if you're interested.

Please let me know if you are available and interested in joining us. If you have any questions, please don't hesitate to call me at my office. We hope you'll be able to come. We might also like to bring a small group to visit you in Nelsonville after you've completed your migration this fall.

Hope to hear from you soon!

Eli N

***

A Visit
Date: Thursday Feb 13, 2003, 5:46 AM
From: ronald
To: shedges

Stephen: Many thanks for spending time with us talking about KOHA. I fully intend to write to Paul P today,
and will copy you on the matter. I like very much the approach that you have taken: to map out a point in the
future where - if all goes well - you can migrate to this new system. I believe it is a solid plan and will keep you
and others out of trouble.

At the present time, we have at the Supreme Court Library two machines that can be turned over to this project very soon, with another pair of workstations to be available in a few weeks. This way, we can have staff members doing input work under close supervision, quite apart from my sitting and having to keyboard the entire matter myself. We have two machines fully set up with RedHat Linux (rev 7) and will likely convert them and the two "new" machines to rev 8. A very knowledgeable chap, who left a very good position in the north to be with his in-laws in Raleigh (both of them ill and needing care), is about to be hired by the law school library at UNC Chapel Hill. He can advise us also, not only on the minutiae, but also on the overall strategies. He has worked with open source projects before.

Finally, I am delighted to see the combination of libraries that we represent, you and I: public and special (law) -
because if we can meet the needs of this mix (public: heavy emphasis on circulation, moving materials to where people might need them, etc.; special (law): heavy emphasis on bibliographic sophistication and integrity) we
can cover most of the ground that people want covered in ALL libraries.

I am actually excited about going back to work next week and getting on this project!

fond regards,

RED

***

KOHA
Date: Thursday Feb 13, 2003, 7:31 AM
From: ronald
To: paul.p, pate, shedges

I am addressing two groups here: the current KOHA inner circle (Pat E and Paul P), and the rest of us who are interested in KOHA. The subject of this email concerns the mapping of KOHA data fields to the MARC standard.

From conversation with Stephen Hedges, I understand that the librarians in New Zealand did NOT want to
adopt a bibliographic standard at the time that they built their data- or field-tables. They wanted to order the
information in their library catalog according to their best lights: that is their decision and good luck to them.

But this brings up four important concerns:

• First, the nomenclature of the tags, fields, indicators, subfield codes, end-of-field and end-of-record
designators, the fixed fields, the numeric control fields (000-009), the internal and external linking
designators, etc., were initially developed in the original MARC project by the Council on Library
Resources, in their contract with Inforonics, Inc., in 1965-1966. After the initial work, the continuing
efforts at the Library of Congress were moved in-house with the MARC Development Office (MDO,
under the supervision of Henriette A, promoted by Bill W [at the time, Head of Processing]). After
considerable politicking and unimaginable compromise, the Library of Congress allowed library
practitioners to come inside and advise on changes to the MARC standard, chiefly through an American Library Association committee called MARBI. As a result, the nomenclature was very carefully developed in the first place. Secondly, all changes were made with literally dozens and sometimes even hundreds of individuals examining proposals for consistency and uniformity. Third, this process has been
in place for over thirty years and I find no other word to characterize it, but "successful." Fourth, I would
be most reluctant to abandon this nomenclature for a set of field designators designed by people who did
not want to adhere to ANY standard at all.
• Second, through the years several conventional usages have come to be standard, so as to minimize
confusion or contradiction.
1. In general, the term "bibliographic unit" or "title" is used for the entity being cataloged: this can
be a book, microfiche, cd, artifact - even an article in a journal or encyclopedia, a reference to a
Web site.

2. The term "volume" is used as descriptive of the physical extent of the unit: e.g., one volume,
three volumes, volumes six to eleven.
3. The term "copy" is used to describe one or more identical physical entities, as in multiple copies,
or copy 1, copy 2, copy 3.
4. In descriptive cataloging, the term "issue" is used to describe the physical representation of the
"title" when there was more than one typesetting done, as in "multiple issues" or in "two issues,
same date."
5. The term "item" is used to describe the volume[s], copy[-ies], issue[s], etc., of a single "title"
(e.g., four copies of a sixteen-volume children's encyclopedia, or sixty-four items; the item
database that is used as the basis for circulation).
• Third, the ordering of the data elements in the MARC standard is rational and direct. The arrangement is
also by "paragraph":

0XX - Control fields (numeric, but also including codes and classifications)
Paragraph 1
1XX - Main entry
Paragraph 2
24X - Title
25X - Edition
26X - Imprint
Paragraph 3
3XX - Physical description
4XX - Series*
Paragraph 4
5XX - Notes
Paragraph 5
Access: 6XX - Subject
Access: 7XX - Added entry (i.e., added to Main entry)
Access: 8XX - Linking
Paragraph 6
9XX - Local information (items)

Originally the 4XX field[s] was a separate paragraph, but changed very early - leading to the changed
numbering scheme thereafter. The numerical ordering scheme has proved to be very useful. Using the
descriptive names of these fields in and of themselves might be useful, but they are not coherently or
innately ordered (for instance, there is no order at all if they are alphabetized). The same can be said of
the subfield codes: they are ordered as lower case letters, as their descriptive names do not evidence any
order at all on the surface.

• Fourth, the Library of Congress took the lead internationally in the 1970s and thereafter to promote the
use of a bibliographic standard for the exchange of information. One country after another adopted the
MARC standard, with occasional modification. Because of the far-sighted and brilliant layout of the
MARC record (it was designed by William N and Donald MacD at Inforonics, Inc., some years
BEFORE there was a magnetic disk available on the market!), programmers were captivated by the
consistencies and orderliness of the MARC record and took to using it - either with minor or major
modifications (e.g., replacing numeric measures with binary numbers, retaining either starting position or length but not both [because one can be calculated from the other]) - as the record layout in one
database after another. Once the French adopted the MARC standard, they began their own proselytizing
among francophone countries and they also began making capricious and arbitrary modifications to the
standard itself (without any reasonable justification that I ever heard). The final irony is that the Soviets
adopted the MARC standard WITHOUT modification (except to the character set, of course), while the
French finally achieved their strange goal of making it impossible to convert UNIMARC to MARC
solely by computer logic!

MOST American libraries are committed to standardized cataloging a la MARC, and will insist on being able to
manipulate the content of the MARC record. It is the only nomenclature that is commonly known, and it has
served well for over three decades. I would urge that these facts and the above considerations be part of any further discussion about implementing the import and export of MARC data in the KOHA system.

RED

***

Re: KOHA
Date: Thursday Feb 13, 2003, 5:08 PM
From: paul.p
To: ronald
Cc: pate, shedges, Koha-manage

ronald wrote:

I am addressing two groups here: the current KOHA inner circle (Pat E and Paul P), and the rest of us who are interested in KOHA. The subject of this email concerns the mapping of KOHA data fields to the MARC standard.

You missed some other people from the management team, but I have copied this mail to them all.

From conversation with Stephen Hedges, I understand that the librarians in New Zealand did NOT want to
adopt a bibliographic standard at the time that they built their data- or field-tables. They wanted to order the
information in their library catalog according to their best lights: that is their decision and good luck to them.

I think they are happy and have the koha they wanted.

But this brings up four important concerns: <snip> The final irony is that the Soviets adopted the MARC
standard WITHOUT modification (except to the character set, of course), while the French finally achieved
their strange goal of making it impossible to convert UNIMARC to MARC solely by computer logic!

Some French people always think they will, for sure, reinvent something better than the wheel :-) And I must add that I worked 5 years in our national social insurance system, and it was really a nightmare: with nobody willing to take a decision, the final decision sometimes becomes a strange monster with 5 heads, 10 legs and no arms, where the user just asked for 2 legs and 2 arms!

MOST American libraries are committed to standardized cataloging a la MARC, and will insist on being able to
manipulate the content of the MARC record. It is the only nomenclature that is commonly known, and it has
served well for over three decades. I would urge that these facts and the above considerations be part of any further discussion about implementing the import and export of MARC data in the KOHA system.

Yes, and Koha 2.0 will be able to fully handle cataloging à la MARC! In fact, 1.9.0 can already handle this, except for some bugs which are due to it being the "UNSTABLE" version.

Koha 2.0 will handle ANY MARC flavour through a specific parameter table. The parameter table is done for UNIMARC and MARC21. It works really well for UNIMARC.

For MARC21, there are some fields that still have to be "mapped".

why ?

Because, as the Frenchies reinvented the wheel, a field in UNIMARC has a completely different meaning in MARC21. For example, the parameter table says "field 100$a" means "title". So, when the user asks for a book with "title = lord of the ring", the software looks at the parameter table and says "ah, OK, the title is 100a for me, so I search through 100a subfields". And it's the same for every "important" field/subfield (i.e. every field that the software uses for its internal use: title, author, barcode, itemnumber, branch...)

Importing/exporting a MARC record in Koha is a very simple thing! The internal structure of a MARC record is the same in UNIMARC and MARC21; only the meaning of the fields is different. (Fortunately, in UNIMARC, fields 000 -> 009 have no subfields!) You can import a MARC21 record into a Koha-UNIMARC parameters DB. It would just show the number of pages of the biblio as the title, or strange things like that :-) But if you import a MARC21 record with a MARC21 parameter table, everything is OK!
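The parameter-table idea Paul describes can be sketched as a lookup per flavour (the MARC21 tags below are standard; the UNIMARC entries reflect common usage, and the table contents here are illustrative, not copied from Koha). The software never hard-codes a tag: it asks the active flavour's table which tag/subfield carries a given concept.

```python
# Illustrative sketch: one parameter table per MARC flavour, mapping
# the concepts the software needs internally to the tag/subfield
# that flavour stores them in.

PARAMETER_TABLES = {
    "MARC21":  {"title": "245a", "author": "100a"},
    "UNIMARC": {"title": "200a", "author": "700a"},
}

def tag_for(flavour, concept):
    """Where does this flavour store the concept? A title search then
    becomes a search on the returned tag/subfield."""
    return PARAMETER_TABLES[flavour][concept]

assert tag_for("MARC21", "title") == "245a"
assert tag_for("UNIMARC", "title") == "200a"
```

This also shows why importing a MARC21 record under a UNIMARC table produces "strange things": the same tag means different concepts in the two tables.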

To conclude :

The mapping has to be done by a LIBRARIAN, imho. I'm not a librarian; the French UNIMARC mapping was done by 2 librarians, and that's what I'm waiting for with English MARC21.

I hope you now understand the structure of Koha better and agree it should be nice for an American librarian too :-)

Note: I'm still not sure I understand why you sent this mail. Do you know Koha well? Or did you hear "rumors"? Or maybe I'm missing something?

Paul P

***

Re: Koha Presentation Agenda


Date: Tuesday Feb 18, 2003, 6:55 PM
From: shedges
To: eli

Eli,

Yeah, I'll be there for the afternoon, probably in time for lunch. I'll probably skip dinner, however, and get back
on the road for home, unless I can convince Joshua to come along for the ride and share the driving.

I don't think I'll even bother to bring my laptop, since you guys are going to be demonstrating 1.9.0 all day. I
think it's best to spend my time talking ideas, and you don't need a laptop for that.

I'm curious to see how you're going to handle MARC records as they currently exist in 1.9.0. I assume you'll
have some catalogers on hand, resisting change and raising objections. At least that's what our catalogers would
be doing. But I guess every project needs a Doomsayer, right? (Come to think of it, that's been my role in my
most recent e-mails to Paul P!) |-(

Anyway, I'm looking forward to seeing you and Kip again and meeting Pat for the first time.

Stephen

***

Spydus to Koha borrowers


Date: Thursday Mar 13, 2003, 3:49 PM
From: shedges
To: joshuaf
Cc: owenl, laurenm, kenr

Joshua -

I'm attaching three files to this e-mail: SH_Rec.txt, which is a copy of my Spydus record, filling in as many fields as I could from the RGA screen; MeInKoha.txt, which is my record after filling in all the fields in the Koha 1.9.0 "add members" screen; and SpyKoha.txt, my map for transferring patron data from Spydus to Koha.

I want to walk through each of these files with you, but first, here's a copy of an e-mail I just sent to the koha-advisors list:

Could we add a "Postal Code" field to the borrowers table before 2.0 is released? I know from Chris that postal
codes are no big deal in New Zealand, but they're pretty much required in North America and Europe. (Maybe
this would be better for the developers list.)

It might be a good idea to look at the whole "members" job flow some time soon and make it less New Zealand
specific. I know from playing around with 1.9.0 that there are some fields in the table that don't get anything
written to them because the current "add members" interface doesn't provide an opportunity to enter data. The
ethnicity fields are a good example -- but in this case, that's a step that's already been taken to make Koha more
palatable to non-NZ libraries. Other data entered from the "add members" screen -- like the physical street
address -- don't seem to get written to the database by the current script.

Some of this is due to bugs, of course, and lots could be handled at the local level as a customization of the program, but the missing postal code fields (one for primary address, one for secondary address) seem to be a basic
need for most installations and should be included in the basic Koha package.

Is anyone looking at the borrower data/scripts for version 2.0? I know Paul is focusing on the bibliographic
data, and maybe not too concerned with the borrower data. Or maybe he is?

Obviously, the problem here is that there's no place to put our zip codes! Though we could transfer them for
now to a Koha borrowers field that wouldn't be used otherwise, such as "ethnicity," so they wouldn't be lost,
then move them to a new postal codes field later.

Now some notes on the files. First, SH_Rec.txt, my record in Spydus.

We're going to have a problem with dates. Spydus has some fields saved as Pick dates (date of birth, for
example), others saved as DD Mon YY. Koha (MySQL) can accept input in a wide variety of date formats, but
neither of these work. I know there's a way to set Spydus so it uses DD Mon YY instead of Pick dates (e.g. "12
Mar 03" instead of "12855"), but I'd have to research it to remember how, or maybe we have to have Sanderson
do it (maybe Ken knows). But what we really need is DD/MM/YY or YYYY-MM-DD for the cleanest transfer
to Koha. If we can't set Sanderson to save in that format, we may have to write a Perl script to make the
conversion. I've bracketed other explanatory notes directly in the file.
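The conversion Stephen anticipates (the real script would have been Perl; this is a Python sketch) is straightforward once you know the Pick/UniData convention: dates are stored as an integer day count with day 0 = 31 December 1967, which reproduces the email's own example of 12855 meaning 12 Mar 03. MySQL's DATE type wants YYYY-MM-DD.

```python
# Sketch of converting the two Spydus date formats to MySQL-friendly ISO.
from datetime import date, datetime, timedelta

PICK_EPOCH = date(1967, 12, 31)  # Pick day 0

def pick_to_iso(days):
    """Pick internal date -> ISO, e.g. '12855' -> '2003-03-12'."""
    return (PICK_EPOCH + timedelta(days=int(days))).isoformat()

def ddmonyy_to_iso(text):
    """'12 Mar 03' -> '2003-03-12'.
    Caveat: %y maps 00-68 to the 2000s and 69-99 to the 1900s,
    so two-digit birth years need extra care."""
    return datetime.strptime(text, "%d %b %y").date().isoformat()

assert pick_to_iso("12855") == "2003-03-12"      # matches the email's example
assert ddmonyy_to_iso("12 Mar 03") == "2003-03-12"
```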

Next some notes on MeInKoha.txt, my Koha record.

The member number is automatically generated by the add member script, starting with number 1. We'll have to be careful with this if we're going to do periodic data dumps. Addresses are weird. The screen asks for
"mailing address" "city" then "physical address" "city." These get stored as "streetaddress" "city" then
"physstreet" "streetcity" in the borrowers table -- except that there's a bug and "physstreet" doesn't get saved.
There's no address asked for the alternate contact, just a phone and relationship, and a notes field (I used Lauren
in my record). Otherwise, there are lots of fields that end up blank or NULL. Again, I put in some bracketed
comments.

Finally, the SpyKoha.txt file, my map of Spydus to Koha data.

I referenced field numbers in Spydus (F2, F3, etc.) since all fields can be referenced this way and it might be a
few nanoseconds faster than using the alias field names. Wherever I needed a blank field for Koha, I used a
Spydus field that we don't use -- and there are plenty to choose from! The only data that doesn't come from the
BRW.DETAILS file is the status, and I suggest we leave that blank for now and work on a Perl script to insert
that data later. Koha allows a member to set a userid and a password. But these don't really correspond to our
card number/PIN pair. The password is encrypted, so library staff wouldn't be able to tell someone what their
PIN was, iPrism couldn't use it, etc. I think it's best to make userid our current PIN and not use passwords.

So, would you use wIntegrate to construct a CSV file of a couple hundred records containing the Spydus fields
in the order I've mapped? E-mail it to me and I'll load it into Koha and we'll see what we have. We won't have
zip codes, of course, and the dates will be a mess, but at least we'll get to try out a procedure.

Stephen
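The CSV load Stephen asks for can be sketched as follows (the column order and field names here are illustrative stand-ins; the real map lives in the SpyKoha.txt attachment, which is not reproduced in these emails):

```python
# Sketch: turn rows of the mapped CSV export into Koha borrower dicts.
import csv
import io

# Assumed column order for the sketch only -- NOT the actual SpyKoha map.
COLUMNS = ["cardnumber", "surname", "firstname", "userid"]

def rows_from_csv(text):
    """Parse CSV text into one dict per borrower, keyed by column name."""
    return [dict(zip(COLUMNS, row)) for row in csv.reader(io.StringIO(text))]

sample = "1234,Hedges,Stephen,9999\n"
borrowers = rows_from_csv(sample)
assert borrowers[0]["surname"] == "Hedges"
assert borrowers[0]["userid"] == "9999"   # per the email: userid holds the PIN
```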

***

Re: Spydus to Koha borrowers


Date: Thursday Mar 13, 2003, 4:25 PM
From: owenl
To: shedges, joshuaf
Cc: laurenm, kenr

Obviously, the problem here is that there's no place to put our zip codes!

I would argue very strongly for separating City and State in the database as well. Any time you combine two
different kinds of data, you're creating a potential problem for yourself down the road. With the whole U.S.
market to consider, this shouldn't be an issue.

We're going to have a problem with dates. Spydus has some fields saved as Pick dates (date of birth, for
example), others saved as DD Mon YY. Koha (MySQL) can accept input in a wide variety of date formats, but
neither of these work.

MySQL prefers year-month-day ordering. In my test it accepted '2000-3-1' and '2000/3/1' but not '2000 Mar 1'.

If we can't set Sanderson to save in that format, we may have to write a Perl script to make the conversion.

If we move our data into an intermediate database, we could do a lot of transformations there.

Owen

***

Re: MARC links


Date: Thursday Mar 20, 2003, 12:33 PM
From: paul.p
To: shedges
Cc: koha-Devel@lists.sourceforge.net

Stephen Hedges wrote:

I've attached my list of suggested links between the koha database and the MARC database.

As you can see, some of the links need to be repeatable. 1.9.0 currently accepts only the last link that's entered
when there are duplicate links, which means the "Checks MARC" function always reports errors.

I've sprinkled other comments in the body of the text attachment.

One month later: I haven't forgotten this mail, and I am using it. I am trying to improve the MARC21 English parameters for 1.9.1, to have working parameters in the release.

***

!!!UPDATE:NPLS's Spydus--->Koha Database Transfer!!!


Date: Thursday Mar 20, 2003, 3:19 PM
From: joshuaf
To: owenl, kenr, laurenm, shedges

Hi All,

Stephen asked me to give you an update on how the Spydus-->Koha database conversion is going. Here goes:

There are basically three types of data that we need to transfer from our Spydus (UniData) database to our Koha (MySQL) database: holdings, borrowers, and transactions.

HOLDINGS

Paul is still working on Koha's MARC compatibility. Some estimates have placed the release of 2.0 (which will be a bug-free, 100% MARC-compatible release) at sometime in April, but there is really no way to tell how accurate that estimate is. In any event, as soon as the MARC compatibility is there it will be a simple matter to upload our MARC records to the Koha database. Basically the only "tweaking" that we will have to do will be in configuring the OPAC display to list our holdings information the way we want it.

BORROWERS

In our Spydus (UniData) database there are two files that contain information relevant to our borrowers:
BRW and BRW.DETAILS. These two files in Spydus correspond to the "borrowers" table in Koha, and
so our goal is to transfer the data in these two files to the "borrowers" table in the Koha (MySQL)
database. BRW holds the "status" information and BRW.DETAILS holds everything else. Earlier this
morning Stephen and I electronically browsed the entire BRW.DETAILS file (all 100 items for all
35,000 patrons) to determine exactly what we want to transfer to Koha. From there, we were able to
dump the relevant items into a file, and using Stephen's perl script the raw data was formatted and
prepared for an import to the Koha (MySQL) database. In the end, we uploaded all the data from the
BRW.DETAILS file to a Koha (MySQL) database on Stephen's laptop. We both believe that ALL THE
BUGS ARE WORKED OUT OF THAT DATA TRANSFER. It still remains to review the BRW file and
make sure that there is nothing besides the "status" information held in that file. Following that,
Stephen's perl script can be modified slightly to parse the BRW file dump and format it in preparation
for the import to the Koha (MySQL) database. That should take care of our borrower information.
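A minimal sketch of that dump-and-format step (Python standing in for Stephen's Perl script; the pipe-delimited layout and the four fields shown are hypothetical, since the real BRW.DETAILS dump carries around 100 items per patron):

```python
# Hypothetical raw dump: one pipe-delimited record per line.
RAW_DUMP = """1000123|SMITH|JANE|01 Mar 1960
1000124|DOE|JOHN|15 Dec 1975"""

def parse_dump(raw: str):
    """Turn the raw dump into rows keyed by Koha borrowers-table columns."""
    rows = []
    for line in raw.splitlines():
        card, surname, firstname, dob = line.split("|")
        rows.append({
            "cardnumber": card,
            "surname": surname.title(),     # SMITH -> Smith
            "firstname": firstname.title(),
            "dateofbirth": dob,
        })
    return rows

for row in parse_dump(RAW_DUMP):
    print(row["cardnumber"], row["surname"], row["firstname"])
```

The same shape -- split, normalize each field, emit rows named for Koha columns -- is what the modified script for the BRW file would follow.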

TRANSACTIONS

The transaction data in Spydus is stored in one file: ONLOAN. In Koha it is stored in the "issues" table.
Since the transaction data is keyed by (that is, organized by) the items, we will have to wait until
Koha is MARC compatible before we can do an export of that information. However, in the meantime,
Stephen and I are working on tentatively mapping out how the data should be transferred. After we finish
that, as with the borrower files, the dumped file will need to be parsed to prepare it for an import to
Koha.

To sum up, here is a diagram:

STATUS OF DATABASE TRANSFER FROM SPYDUS TO KOHA:


------------|--------------------|-------------------------|-------|
SPYDUS FILE |CONTENTS |CORRESPONDING KOHA TABLE |STATUS |
------------|--------------------|-------------------------|-------|
BRW |borrower status data| ----> borrowers |pending|
BRW.DETAILS |other borrower data | ----> borrowers |done |
ONLOAN |transaction data | ----> issues |pending|
?MARC? | | ----> ?MARC? |pending|
------------|--------------------|-------------------------|-------|

That's it for now.

Joshua

***

Re: Archived borrowers


Date: Sunday Mar 23, 2003, 10:11 PM
From: chris
To: shedges
Cc: joshuaf, owenl, laurenm, kenr

I think the advantage of storing archived borrowers separately is to reduce clutter in the database, while
keeping those records handy in case an inactive borrower suddenly decides to become active again.

If we move our archived borrowers into the Koha deletedborrowers table, we get rid of the clutter but lose the
ability to retrieve those records "on the fly" if someone shows up with an archived card. (At least I assume that's
true of Koha's behavior.) Same thing if we move them into a new table (archivedborrowers) that Koha never
addresses.

If we put the archived borrowers into the regular borrowers table, then they are available at any time, but we
have clutter. (Lauren, getting numbers of inactive borrowers for the state report is not a problem, we just run a
MySQL query to get the number.) I'm thinking the clutter is not really a problem for MySQL. Seems like MySQL
is very robust and we won't see much difference in performance if we have 65,000 borrower records instead of
35,000. Of course, staff would probably appreciate it very much if they only had to wade through 35 "Smith"
entries instead of 65.

Probably the best bet then is to have a setting of inactive on their card. Currently we have gone no address, lost
card, or debarred. We could have inactive, which is ignored for the most part, unless someone tries to use the
cardnumber assigned to that borrower, in which case Koha throws an exception saying: hey, this borrower is
inactive, should I mark them active again?

Deleting the borrower allows the cardnumber to be reused, so it's not what you're after here; I suspect a flag on
the borrowers table to say inactive is the easiest solution.
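A sketch of that flag-plus-exception behaviour, using SQLite in place of MySQL (the `inactive` column is the proposed addition; the table is a toy stand-in for Koha's real borrowers table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE borrowers (
    cardnumber TEXT PRIMARY KEY,
    surname    TEXT,
    inactive   INTEGER DEFAULT 0)""")
conn.execute("INSERT INTO borrowers VALUES ('1000123', 'Smith', 1)")

def lookup(cardnumber: str):
    """Fetch a borrower, raising if the card belongs to an inactive one
    so the staff client can prompt to reactivate."""
    row = conn.execute(
        "SELECT surname, inactive FROM borrowers WHERE cardnumber = ?",
        (cardnumber,)).fetchone()
    if row and row[1]:
        raise LookupError(f"borrower {row[0]} is inactive -- reactivate?")
    return row

try:
    lookup("1000123")
except LookupError as exc:
    print(exc)
```

Everything else in circulation would simply ignore the flag, which is the point: the clutter stays out of the way until someone presents an archived card.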

Chris

***

Re: Koha questions


Date: Sunday Mar 23, 2003, 10:25 PM
From: chris
To: shedges

On Sat, Mar 22, 2003 at 01:25:10PM -0500, Stephen Hedges said:

I've got a couple of questions which I think you can quickly answer, but would take me days to figure out. They
all deal with Koha borrowers data.

How are the new columns in the borrowers table used, password, flags, and userid. I know the password is used
when a member logs in from the OPAC, but is it used anywhere else? It looks like the sessions table also stores
userid, but I'm not sure how it's used.

Yep, userid, password and flags are all used for authentication and authorisation purposes. Userid + password
provide the login for the OPAC and the intranet (if userid is blank it defaults to cardnumber), and the flags
are used to decide the level of authorisation they have. I can't recall all the values off the top of my head, I'll
ask Steve, but you can set them from the borrowers/members section.

And what are the valid values for the "status" column? I assume there's some status code that means
"debarred." Are there other codes, too?

Hmm, do we have a status column? There are gonenoaddress | lost | debarred columns in the table; setting a 1
in them marks the borrower as having a lost card, etc.

BTW, Joshua and I were appreciating your bibliographic data organization yesterday. Whereas MARC data is
stored in two "levels," general and specific to the item, Koha inserts the "biblioitem" level that can describe
various formats associated with one biblio record, with multiple items associated with the various biblioitems --
right? That's why Koha is so much easier to search than the "standard" (i.e. MARC) OPAC. Nice work!

Thanks, yes, it provides what is, to me, a more sensible data format, which is why I was striving to make sure
that we didn't lose it in our quest for MARC compliance. Which we've managed to do, I think.

Roll on 2.0 :-)

Chris

***

Re: Archived borrowers
Date: Monday Mar 24, 2003, 9:48 AM
From: laurenm
To: shedges

Stephen--

It looks like you have been working on the weekend! Again!

I guess the way I feel about archived borrowers is: why do we have to 'archive'/keep those records in the
database anyway? If the borrower record hasn't been used, why can't we automatically delete the record? BUT,
don't delete the record automatically if items are still on the record. Those inactive borrower records that have
items still checked out -- they don't get 'archived' now when you purge the borrower records? I know the state
library report asks if the patron files have been purged in 3 years. I always wondered why SLO has chosen the
benchmark of 3 years.

LM

***

Koha questions
Date: Friday Apr 18, 2003, 11:33 AM
From: owenl
To: joshuaf, shedges
Cc: laurenm, kenr

I think everyone must be on holiday for the weekend. No one responded to Stephen's question about imports, no
one responded to mine about boolean searching, and hardly anyone was on IRC yesterday evening (at least until
6 our time).

I did talk to Pat a little on IRC, and asked about feature requests on Bugzilla. He said that it was appropriate to
add items to Bugzilla that were feature requests (marking them as such). He said we could request our searching
features that way if we wanted.

I also wanted to mention that I noticed yesterday, poking around Koha, that there don't seem to be any 'reserves
administration' functions built into Koha: no way to list reserves by patron or by item, no way to amend or
delete reserves, etc. If anyone knows otherwise, please correct me! Otherwise, this looks like a hole in the
feature set that needs to be plugged before we can go live.

-- Owen

***

Re: Phone numbers


Date: Wednesday Apr 23, 2003, 12:17 PM

From: shedges
To: owenl, joshuaf, laurenm, kenr

I think it would be _nice_ to have standardized phone numbers, but not necessary. The phone numbers that are
in Spydus now reflect what the person entering the data thought was needed for later retrieval, so while they
may look messy, they're good enough to get the job done.

So I guess my caveat would be that whatever we do should keep in mind the people doing the actual data entry.
I would NOT, for instance, make the area code a mandatory field -- people will get tired of being forced to enter
something they think is useless. And if the area code is not mandatory, then the actual phone numbers will still
end up looking different -- some with area codes, some without.

Those are _my_ thoughts.

Stephen

I think we should decide on a standard format for storing phone numbers, and try to build that into the Koha
templates in an attempt to mediate the way the staff enters the information.

At first I thought we could just leave off the area code, since we're a county system. But we might have patrons
who work in the area but live outside our area code, so that won't work. I think the only alternative is to store
the area code for *all* numbers, and include it as the default in the entry templates.

The only other alternative would be to separate the area code into a distinct column in the table, but that would
mean a change to the database that wouldn't be upgrade-friendly, and would be difficult to roll into the next
upgrade because of internationalization.

Any thoughts?

-- Owen

***

Koha MARC import


Date: Saturday Apr 26, 2003, 8:36 AM
From: shedges
To: joshuaf
Cc: owenl, laurenm, kenr

Joshua -

I've been combing through a message Paul sent me about the importing of MARC records, trying to figure out
why he says that function is "not planned!" (Actually, I think there must be a French-English language problem
going on here.) At any rate, I've been looking through the marcimport stuff on Koha to see why it's not working
for us. I think our biggest problem is the MARC::Record perl module, which is used heavily in Koha. We have
version 1.21 installed, but Paul has been telling people that his scripts don't work with any version newer than
0.93 -- so will you uninstall MARC::Record 1.21 and install 0.93?

Another potential problem is the fact that the marcimport.pl script loads the _entire_ MARC dump file into
memory before it starts processing the file. So I think we will almost certainly need to subdivide our MARC
records into small files. That's not much of a problem, except that it will make the process of importing all our
records labor-intensive. We won't want to be doing this every night, for example.
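The subdividing itself need not be laborious: ISO2709 MARC records end with the record terminator byte 0x1D, so a big dump can be cut into batches of complete records without fully parsing them. A hedged Python sketch (the byte strings below are synthetic stand-ins for real records):

```python
RT = b"\x1d"  # ISO2709 record terminator

def split_marc(data: bytes, per_batch: int):
    """Yield chunks containing at most `per_batch` complete records."""
    records = [r + RT for r in data.split(RT) if r]
    for i in range(0, len(records), per_batch):
        yield b"".join(records[i:i + per_batch])

dump = b"record-one\x1drecord-two\x1drecord-three\x1d"
batches = list(split_marc(dump, 2))
print(len(batches))  # -> 2
```

Each batch is itself a valid ISO2709 file, so marcimport.pl could be fed the pieces one at a time.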

Stephen

***

Re: Koha MARC import


Date: Monday Apr 28, 2003, 5:11 PM
From: paul.p
To: shedges
Cc: joshuaf, pate

Stephen Hedges wrote:

Hi, Joshua, I'm copying this reply to Paul and Pat, so I'm going to start with some quoting to bring them up to
date.

> The last time that I imported the MARCOUT file I used the breeding farm import tool (and it didn't work,
maybe due to the wrong version of MARC::Record). However, I am still a bit confused as to whether that is the
only way to import MARC records.

Yes, the "full import" is still missing. It will be a "must-have" for 2.0.

> I seem to remember another bulkmarkimport.pl (or something like that) that was used before Paul added full
MARC support. Barring that I am still not sure what to do with the records once they are in the breeding farm.
Any ideas?

You're wrong here => Stephen is right.

> On Mon, Apr 28, 2003 at 06:12:39AM -0400, Stephen Hedges wrote:

>>> I wonder if there are plans to upgrade to the latest version of MARC::Record.

>>Yeah, I'm pretty sure Paul has mentioned that he's working on that, but hasn't quite figured out what's
causing the problem. Shall we try to import our MARCOUT again and see what happens?

>>Stephen

The good news here is that I think I've fixed this problem today! Try a CVS version, and that should work. At
least, it works for me with MARC::Record 1.12.

OK, if I remember correctly, bulkimport.pl was a version 1.2.x tool. I think marcimport.pl replaces it, except
that it just dumps incomplete MARC records into the breeding farm. I say "incomplete" because as far as I can
tell, none of the item information (the copy information that goes into our current 952 tag, Paul's 995 tag, and
Koha's biblioitems table) ever goes into the breeding farm -- it seems to be intended for partial MARC records
that are then completed by the cataloger, and thus is not really a tool for importing MARC records. This
remains to be done, IMHO, despite Paul's comment to me that it is "not planned."

Larry C asked about this six weeks ago, and Paul's reply (like his reply to me) was that Larry needed to use the
C4/Biblio.pm to import item information. This involves splitting the MARC record in two parts, using the
NEWnewbiblio subroutine to load the basic MARC record, then using NEWnewitem to add the item
information. In other words, a job for Perl geeks, not for librarians!

This is a REAL problem! In order for Koha to break into the US library market (and I don't consider people's
home bookshelves or a company's technical collection to be "real" libraries, sorry), it MUST be easy to migrate
_full_ MARC records to Koha, including item information. An effective MARC import script is an essential
component of Koha, not a luxury, and Koha simply cannot claim to offer full MARC support until such a script
exists.

It seems like it would be simple to ask the user to just map their old item information (our 952 tag, for example)
to Koha's 995 tag, and then import everything at once. Maybe it's not that simple.

So, to return to your question ("what to do with the records"), I guess my answer would be "nothing." They are
worthless to us in the breeding farm, so we will need to postpone our move to Koha until we can import them
directly into the database.

It seems this is an urgent script now! OK, if you or anyone else can't do it, I'll do it ASAP (I'm working on Koha
all this week; z39.50 is running better and better, and as soon as that's done, I'll do bulkmarcimport). I hope/think
it will be a 50-line script. However, I fully agree that it's a MUST-have for a 2.0 release. I'll try to do it sooner!

Note: please, could you send me a little sample (50-100 biblios) to test with when the script is built? HTH

Paul P

***

Re: Koha MARC import


Date: Tuesday Apr 29, 2003, 1:42 PM
From: shedges
To: paul.p
Cc: shedges

Hi, Paul -

I'm attaching a small file of our data that was used to upload records into our current ILS. MARC::Record
seems to be happy with it for the most part, though some records have values in the 856 tag that MARC::Record
doesn't like. At any rate, I think it should give you an idea of what our MARC records look like.

Our 952 tag, which holds our item data (=> 995 in Koha MARC) has the following subfields:

a = barcode number
d = call number (Dewey)
e = category code
f = date added
p = price

I found that our MARC records that we download directly from our ILS (a file called MARCOUT) are
apparently not in standard MARC format! MARC::Record complains that each record has an "invalid record
length" in the header, and the values that it lists as invalid are all kinds of garbage. Maybe the header is not
downloaded (??impossible??), or maybe the records are not USMARC. We'll need to do some research to find
out.
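A sketch of the check MARC::Record is tripping over: in a standard ISO2709/USMARC record the first five leader bytes are the total record length as ASCII digits, and the record ends with terminator 0x1D. Garbage in those five bytes produces exactly an "invalid record length" complaint (Python stand-in with synthetic bytes):

```python
def leader_length_ok(record: bytes) -> bool:
    """True if the leader's declared length matches the record."""
    length_field = record[:5]
    if not length_field.isdigit():
        return False  # the "all kinds of garbage" case
    return int(length_field) == len(record) and record.endswith(b"\x1d")

good = b"00010xxxx\x1d"               # declares 10 bytes, is 10 bytes
bad = b"\xff\xfe???not-a-leader\x1d"  # no ASCII length field at all
print(leader_length_ok(good), leader_length_ok(bad))
```

If MARCOUT's records fail even this test, the export is dropping or mangling the leader, which would fit the "maybe the header is not downloaded" guess.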

Stephen

***

Re: NCIP Support


Date: Wednesday Apr 30, 2003, 12:19 PM
From: shedges
To: chris
Cc: joshuaf, owenl, laurenm, kenr

Do you have an RFP for it anywhere?

No, Chris, we don't. To be honest with you, we had put this part of our plans on hold, because as it turns out, our
current ILS will not allow us to participate fully in Ohio's statewide sharing system, even though we have SIP2.
So we would not be losing anything by moving to Koha without NCIP.

Instead of an RFP (and all the gov't paperwork that entails), let me describe to you what we would like to have,
then you can tell us what you think is possible or feasible. If your price is low enough, we aren't required to do
any bid process. We would just give you the job -- we already know you and your capabilities ;-)

We need SIP2 to participate in Ohio's statewide resource sharing program. There's information about the
technical requirements of this program at http://www.moreforohio.org/technology.html and you can find links to
more information there.

We want NCIP instead of SIP2, simply because it seems silly to use an old commercial protocol (SIP2) instead
of a new standard (NCIP). BUT whatever we implement has to also be compatible with SIP2. We have a good
contact in Don Y, the techie who oversees the "MORE" (resource sharing) program, who's interested in open
source and would be willing, I'm sure, to test any software you write with the MORE system to make sure they
like each other.

Now, personally, I think there's some good stuff in the NCIP standard that we should steal for Koha.
Specifically, the committee that worked on this seems to have identified just about every possible bit of
information about a borrower that an ILS might ever need to access. That would be a valuable guide to anyone
working on revising the Koha borrowers tables, if you get my drift.

Drop me a line if there's any MORE information I can give you. (Sorry 'bout that.)

Stephen

***

Re: NCIP Support


Date: Friday May 2, 2003, 8:44 AM
From: chris
To: shedges

On Wed, Apr 30, 2003 at 12:19:54PM -0400, Stephen Hedges said:

> Do you have an RFP for it anywhere?

No, Chris, we don't. To be honest with you, we had put this part of our plans on hold, because as it turns out,
our current ILS will not allow us to participate fully in Ohio's statewide sharing system, even though we have
SIP2. So we would not be losing anything by moving to Koha without NCIP.

Instead of an RFP (and all the gov't paperwork that entails), let me describe to you what we would like to have,
then you can tell us what you think is possible or feasible. If your price is low enough, we aren't required to do
any bid process. We would just give you the job -- we already know you and your capabilities ;-)

Ahh good plan :)

Now, personally, I think there's some good stuff in the NCIP standard that we should steal for Koha.
Specifically, the committee that worked on this seems to have identified just about every possible bit of
information about a borrower that an ILS might ever need to access. That would be a valuable guide to anyone
working on revising the Koha borrowers tables, if you get my drift.

Yeah, NCIP is a very, very comprehensive standard. The ideal way to build the Koha borrowers module is to set
it up in such a way that, when setting up Koha (or indeed at any time), the technician/administrator can choose
from a list of all possible fields the ones they want to use, and Koha uses their selected list. We may of course
have to have some mandatory fields, but not a lot.

I think it would be a fun project to work on, and a very, very useful addition to Koha. So I'll have a talk with
Rachel about how long I think it would take to do, and let her get in contact with you.

Chris

***

NPL's Spydus -> Koha migration
Date: Thursday May 8, 2003, 4:59 PM
From: joshuaf
To: pate, paul.p, chris
Cc: shedges, laurenm, owenl, kenr

Hi All:

The NPL tech team (Stephen, Lauren, Owen, Ken, and I) met yesterday and discussed our Spydus -> Koha
migration.

As the expiration date on our current ILS contract approaches rapidly (August), we are accelerating our efforts
to make sure that Koha will meet our needs before that deadline. We are concerned that some of the current
deficiencies of Koha (by our standards) may need to be patched with PHP solutions (since we don't know Perl),
but we would prefer to use as much of the main Koha code as possible.

On one level, our commitment to the Koha project leaves us at the mercy of the Koha developers--we have little
direct control over what gets developed (especially now, with our budget woes). But in my view, NPL's use of
Koha will make Koha a far more attractive (and viable) solution for other mid-sized libraries--especially those
who are waiting for someone else to make the first "leap of faith." So it may be worthwhile to consider whether
any of the items currently holding us back are development priorities.

Regardless, we have set up a Wiki as a sort of "todo" list for our Spydus -> Koha move. It's hosted at
www.seedwiki.com and it's called SpydusKoha. (Note: the search feature on the front page of SeedWiki does not
work with a carriage return.) We would love to get your comments and suggestions.

Thanks,

Joshua

***

Re: MARC data


Date: Thursday May 15, 2003, 3:01 PM
From: paul.p
To: shedges

Stephen Hedges wrote:

Joshua said you requested some records, and I happen to have a file of fifty I can send right now. I've attached
it, let me know if you'd like to have more.

thanks, I'll use it to test bulkmarcimport.

We have a question. What will you do with the 001 tag (our old control number)? We will be using MySql
commands to load a lot of our item information, such as barcode, date accessioned, date last checked out, etc.,
and would like to use our old control number as a key. We can do this by setting up a separate table and
modifying the bulkmarcimport script to write the Koha biblionumber and our old control number to this new
table. But if you're planning to keep the old control number somewhere (in a modified biblio table?) we
wouldn't need to do that.

I'm not planning to do that.

A solution I could suggest is to "move" your old control number to an unused subfield. As everything is
"searchable", it will still be possible to search for records having "old control number = xxxx". Could that be
enough?

Paul P

***

Re: MARC data


Date: Thursday May 15, 2003, 2:44 PM
From: shedges
To: paul.p
Cc: joshuaf

A solution I could suggest is to "move" your old control number to an unused subfield. As everything is
"searchable", it will still be possible to search for records having "old control number = xxxx". Could that be
enough?

I don't think so. We're not interested in a catalog (OPAC) search. We're interested in being able to do something
like:

mysql> SELECT biblionumber FROM sometable WHERE oldcontrolnumber = xxxx;


[then store the biblionumber in some variable that gets passed to another
statement:]
mysql> INSERT INTO items "a bunch of stuff read from our old data that we group
by oldcontrolnumber","biblionumber";

. . . or something like that.

To state the problem in a different way -- there is currently a relationship between the biblionumber in the biblio
table, the biblioitemnumber in the biblioitems table, and the itemnumber in the items table. We need to be able
to (temporarily) set up an additional relationship to our oldcontrolnumber, actually called a Bibliographic
Record Number (BRN), especially in the items table.

We think we must load the items table using mysql commands, taking data from our old "copies" table. This
table is keyed to an accession number (= itemnumber) and contains the BRN (= biblionumber) and a "set"
number (= biblioitemnumber), as well as a lot of other data that you would find in Koha's items table. So we
will somehow need to find the biblionumber that corresponds to our BRN.

We could just use our BRN as our Koha biblionumber, but we are assuming that this would create problems for
Koha. In the old (1.9.2) bulkmarcimport.pl, for example, biblionumber is just read from the last biblionumber in
the file and incremented ($biblionumber++). Using our old BRN as the new biblionumber would probably have
a bad effect on Koha's performance. BUT, we still need to be able to use our old BRN to find the new
biblionumber for a MARC record in Koha.
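A sketch of the separate-table idea described above, with SQLite standing in for MySQL and made-up numbers: a temporary crosswalk from the old BRN to the Koha-assigned biblionumber, written one row at a time as the import runs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE brn_map (brn INTEGER PRIMARY KEY, biblionumber INTEGER)")
# The import script would insert one row per MARC record it loads.
conn.executemany("INSERT INTO brn_map VALUES (?, ?)",
                 [(90210, 1), (90211, 2)])

def biblionumber_for(brn: int):
    """Re-key an item row from the old Spydus BRN to the new biblionumber."""
    row = conn.execute(
        "SELECT biblionumber FROM brn_map WHERE brn = ?", (brn,)).fetchone()
    return row[0] if row else None

print(biblionumber_for(90210))  # -> 1
```

Rows from the old "copies" table, keyed by BRN, could then be re-keyed to biblionumber on their way into Koha's items table, and the crosswalk table dropped afterward.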

Are we making this too complicated?

Stephen

***

Re: dumpmarc
Date: Wednesday May 21, 2003, 10:02 AM
From: paul.p
To: shedges

Stephen Hedges wrote:

I looked at dumpmarc.pl. (I think you need a newline after each record to make the printout correct.) Is this the
start of a script for dumping MARC records into Koha?

No, it's just a small script I use to check that a file is an iso2709 one. I have problems with some files people
send to me, those problems being most of the time a malformed iso2709 file.

We've been discussing our item information. There is _so_ much valuable information in our old system about
each item that we don't want to lose, but which doesn't fit into a MARC tag. Things such as date last seen
(which is also in Koha items table) or date last borrowed (also in Koha). We're very tempted to just write a
script to read the item information from our old system and load it directly into the Koha items table.

Why don't you add specific non-standard marc tags in the parameters table and map them to the corresponding
koha fields? Note that I agree there are only a few 852 subfield codes that are free in MARC21
(1,4,5,7,9,d,o,u,v,w,y if my table is right), but it may be enough.

The problem is then associating the item information with the "larger" MARC information that would be loaded
by a bulk MARC import script. We would, for example, not be loading an itemnumber or a biblionumber or a
biblioitemnumber, since these are generated by Koha. We _could_ load our old control number -- somewhere --
and use that to build the associations, if we had a way to associate the old control number to the new
biblionumber!

One simple solution I imagine: store your old control number into the XXX$9 subfield. To find the link between
Koha and your old system, just enter: select bibid from marc_word where tag=XXX and subfieldid=9 and
word=YourControlNumber. It's 100% guaranteed indexed, so you get an immediate result, and it's 100%
hack-free in standard 2.0 Koha.
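A runnable sketch of Paul's marc_word lookup, with SQLite standing in for MySQL and made-up data (the choice of tag 995 and the row contents are assumptions; the table shape follows the query he gives):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE marc_word (
    bibid INTEGER, tag TEXT, subfieldid TEXT, word TEXT)""")
# The index on `word` is what makes the reverse lookup cheap.
conn.execute("CREATE INDEX marc_word_idx ON marc_word (word)")
conn.execute("INSERT INTO marc_word VALUES (42, '995', '9', '90210')")

row = conn.execute(
    "SELECT bibid FROM marc_word WHERE tag = ? AND subfieldid = ? AND word = ?",
    ("995", "9", "90210")).fetchone()
print(row[0])  # -> 42
```

Since every indexed word is queryable this way, the old control number survives the migration with no schema changes at all.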

If we get this solved, we should be able to migrate _any_ library's data to Koha!!

Do my suggestions help?

Paul P

***

7. Breaking Away
MARCOUT problem
Date: Thursday May 22, 2003, 11:46 AM
From: shedges
To: kenr

Ken -

I have another problem with the MARCOUT file, which I'd like you to submit to Sanderson. (We got this info
from the P B University, but don't tell Sanderson!)

Stephen

We are having a problem running the WRITE.MARC program to convert our MARCOUT file to a sequential
Unix file. The program seems to hang as soon as it starts the loop for reading records. Specifically, it behaves as
normal for the first part:

"Enter MARC source file name ?" [MARCOUT -- yes, it exists]


"Enter destination UNIX file name ?" [u/sanlib/SL/mymarc -- empty file is
created, as per program]
"Enter start ID of MARC source file ?" [1, have also tried 10]
"Enter customer code (for PARAMS WRITE.MARC*code) ?" [We've used 'NPL' and
'ACLS' -- neither one gets written to PARAMS, since that's the last step in
the program.]
"Enter # of records to output or return for maximum ?" [50 -- there are
several thousand in the MARCOUT file]

At this point, the program prints:

PROCESSED :
ID :

then the cursor drops to the next line (!) and the program freezes.

***

MARC data transfer


Date: Saturday May 31, 2003, 8:54 AM
From: shedges
To: joshuaf, owenl, laurenm, kenr

Since our meeting with Sanderson on Thursday, I've been digging for more information, and thought I'd share
some of the more interesting bits.

I've decided that Z39.50 is not the solution to our problem of extracting MARC data from Spydus. In order to
get holdings information out, we would have to build a Z39.50 server using a profile that has not yet been
standardized (see http://www.unt.edu/zprofile/ ) and for which no widely available software exists. In a pinch,
we might use Z39.50 to dump our basic MARC data and then somehow use Unidata and perl to build our own
holdings information, and then use MySQL commands to add this information to the basic MARC info in Koha.
I think this is our "last-ditch" plan, however.

I think it seems more reasonable to pay Sanderson a fee (<$1000?) to "commission" our MARCOUT module so
it works properly. If you want to get an idea of what our data will look like when it comes through the
MARCOUT module, see http://www.nla.gov.au/kinetica/batch.html . I think this is the "Australian National
catalog" that Sandra mumbled about in our meeting with Sanderson, and the data profile specified here closely
matches the brief description Sandra gave me in an e-mail.

So my current plan is to wait for word from Sanderson about our MARCOUT module and be working in the
meanwhile on perl scripts to move our issues and reserves data into Koha.

Stephen

***

RE: MARCOUT
Date: Wednesday Jun 4, 2003, 2:50 PM
From: SandraB
To: shedges

Hi Stephen,

We have tried to get onto your system to ascertain where you are at; however, as you suggested, the password
has been changed. If you could change it to the password as suggested, that would be great. In the general
release of the software (according to the contracts, this is what you have installed) the MARC out software is
disabled; that is, it has not been commissioned, as it is not licenced unless specifically requested. If you would
like to change the password on the account, we can look and see if this is indeed the case.

Kind regards,

Sandra B

-----Original Message----- From: Stephen Hedges Sent: Tuesday, 3 June 2003 11:59 PM To: Sandra B Subject:
MARCOUT

Sandra -

Any new information about our MARCOUT module? I wonder if you will be able to get into the system -- we've
probably changed pswrds (several times) since the last time anyone from Sanderson logged in. I can change the
pswrd for the SPYdus SYStem account to the latest Spydus webpage pswrd that Ian has sent us if you want.

Stephen Hedges

***

Re: FW: Open source software


Date: Thursday Jun 5, 2003, 6:47 AM
From: shedges
To: danield

Hello, Daniel -

Let me start with the bad news: open source software is _not_ for everyone. I don't know what your particular
situation is, but you almost have to have some good 'geeks' on your staff before you start working toward
implementing open source. (Do you, for instance, have any experience with open source software like linux or
mySQL? Are you willing to learn?) Either that, or be willing to spend some money with a good geek company.
Either way, it's not 'free' in the sense of no money -- it's free in the sense of free to change any way you want.

The good news is -- it's free to change any way you want. Our purpose in moving to open source was to be able
to quickly offer new and innovative services to the public, without waiting for a commercial vendor to decide
that the "bulk" of the market would be willing to buy something new. Our intent was never to save money. In
fact, we've spent more on Koha development this year than we would spend on the annual license for our old
ILS. (Of course, eventually it _will_ save us money.) Our intent, quite frankly, was to be better than other
libraries.

We do not yet have Koha in production, nor do I know of any public libraries in the US that are using Koha.
Until version 2.0 is released, it's not really ready for public libraries. Here's our current status:

• We have version 1.9.3 running on the machine we will be using as our production server.
• Our webmaster is working on templating the pages to look "our way."
• We have developed a procedure for loading our patron data into Koha using home-made perl scripts,
mysql commands, etc. This is the part we thought would be hard, but it seems to be working.
• We have _not_ been able to retrieve our MARC data from our old system. The vendor wants more $$
before they will set up the system to dump in standard MARC format. This is the part we thought would
be easy. We are negotiating.
• To the extent that we can with no bibliographic data, we are testing the software and pointing out bugs
for the developers.
• By the time version 2.0 comes out (three weeks?), we hope to have access to our MARC data and be
able to set up a production system to run alongside our current system, and start gathering user
comments.
• By the time our current system license expires (September), we want to move completely to Koha.

I'd definitely recommend a visit to http://www.koha.org if you haven't looked there already. There are lots of
links to more detailed information. And I'm always willing to answer any questions I can.

Stephen Hedges

***

MARCOUT and Kinetica


Date: Thursday Jun 5, 2003, 6:54 AM
From: shedges
To: SandraB

Sandra -

Would it be possible to see a MARC record with multiple copies that has been dumped through MARCOUT?
I'm concerned about the format. If MARCOUT was developed for sending records to Kinetica at the National
Library of Australia, then does the entire 984 tag repeat for multiple copies? Or is there one 984 tag with
repeating subfields? A look at a sample record would quickly answer my questions.

Thanks!

Stephen Hedges

***

FW: Access to Spydus System


Date: Thursday Jun 5, 2003, 11:49 AM
From: kenr
To: shedges

Stephen:

I just received this from Civica...I know you changed the passwords prior to our meeting last Thursday. Looks
like they're ready to tackle the MARCOUT.

--Ken

Hi Ken,

I have been assigned to investigate your SR dated 23rd May 2003 regarding MARCOUT software. Our number
is SR28608, your reference is 052203.

The password has been changed for our support signon "spysys" and no one at Civica knows what it is.

Can you find out what the password is or alternatively assign a new one and let me know?

Frank A

***

Re: Update MyLibrary/Koha Plus CESA and Computer Science


Date: Thursday Jun 5, 2003, 3:00 PM
From: shedges
To: annez
Cc: pate

Hello, Anne. You wrote:

I must admit, I have been having subterranean qualms about the MARC dump from old systems. I see an
idiosyncratic use of fields for local holdings, for example. We will encounter more than one or two vendors'
systems as we work on this. I can list at least five without stopping to think. Which vendor have you been using?

We are using Spydus from Sanderson Computers (now called Civica, I guess) in Australia -- not one you're
going to encounter. And yes, the holdings are the real problem. Everything else is pretty easy. We could use our
Z39.50 server, for example, to get all of our MARC data out -- except for the holdings.

For obvious reasons, I haven't yet been able to try out the Koha bulkmarcimport script to see how it handles
holdings. I suspect that it's probably happiest when the standard MARC use of the 852 tag and subfields is
present in the MARC record. But since almost every vendor uses a 9xx tag for holdings, so they can define it
themselves, I'm going to predict that you don't find any school with their _complete_ holdings information in an
852 tag. If you do, I definitely recommend starting there!

Stephen

***

RE: MARC OUT and Kinetica


Date: Friday Jun 6, 2003, 5:42 PM
From: SandraB
To: shedges

Dear Stephen,

Thank you for your hospitality last week; we enjoyed our visit with you. Please find attached the special offer
that we are extending to our US customers. If there are any questions, please do not hesitate to contact me. I
would be very interested in hearing your thoughts on this.

The cost of commissioning MARC OUT is $1,XXX

See attachment NELMK.TXT for 10 MARC records with holdings and Sample MARC Output.doc for screen
dumps and documentation on tag 984 which is used for extracting the detailed holdings.

I trust this answers all of your questions; however, please feel free to contact me if you have any more.

Kind Regards

Sandra

***

MARC import
Date: Friday Jun 6, 2003, 10:17 AM
From: shedges
To: paul.p

Hello, Paul. I realize you're not planning to work on Koha this weekend, so I won't be offended at all if I don't
get a reply until next week...

I've just received some information from our vendor about the format of their exported MARC records. (I've
also finally taken the time to look at your bulkmarcimport.pl -- very pretty! And I finally realize why we have to
change our MARC parameters from the default -- I'll do that.)

The exported records are going to store holdings information in a 984 tag. OK so far. But the barcode and price
both get stored in the same $r subfield, with an @ separating the values, like this: $r12345678@29.95. Also, if a
branch has more than one copy of a book, the 984 tag does not repeat -- just the $r subfield.

So I have two questions. First, if I set up the parameters so the $r subfield in the 984 tag is repeatable, will the
imported data load correctly in the items table? And second, can the $r subfield values be mapped to two
different places in Koha, one for barcode (and/or itemnumber), the other for price?

I've attached two files that the vendor sent me. One is a .doc explaining the 984 format. The other is a .txt
iso2709 file of some sample export records. I may play around this weekend to see if there's a good way to use
MARC::Record to split the $r into two subfields.
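
For the record, the split itself is simple once the convention is known; here is a quick sketch in Python (illustrative only -- the real work would happen in Perl with MARC::Record, and the helper name is my own):

```python
def split_r(value):
    """Split a Spydus 984 $r value, "barcode@price", into its parts."""
    barcode, sep, price = value.partition("@")
    # Some records may carry a bare barcode; return None for the price
    # rather than guessing.
    return barcode, (float(price) if sep else None)
```

A repeated $r subfield would just be this split applied once per occurrence.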

Thanks for any tips you can give me!

Stephen

***

RE: MARC OUT and Kinetica


Date: Monday Jun 9, 2003, 8:03 AM
From: shedges
To: SandraB

Hello, Sandra -

Let's go ahead and start the process of commissioning our MARCOUT module, as per your quote:

The cost of commissioning MARC OUT is $1,XXX

I'll open a purchase order when I get to the office this morning. Do you need the purchase order number for
your records?

Stephen

***

Koha to do list
Date: Friday Jun 20, 2003, 10:05 AM
From: shedges
To: joshuaf
Cc: owenl

I wonder if the Simple Server can currently query from 2709 files: if so it might be able to use the marc field for
the z3950 queries...?

Joshua

From what I can tell, SimpleServer is happiest when it interfaces with a database that has already split the marc
record into parts. And now that we know where Paul is storing the parts (marc_subfield_table), I'm thinking he's
right in not being too enthused about storing the iso2709 in the biblioitems.marc column. He makes a good
point when he asks how you keep the iso record current when changes are made to the holdings. His
marc_subfield_table would take care of that. Just put the pieces together and run them through MARC::Record
to generate a new iso2709 record, if and when you need it. So I'm thinking we should stick with the current
"paul" version of bulkmarcimport.pl

I tracked down the answer to one of our import bugs. The onloan status and date due for "Buck-Buck the
Chicken" come from stuff Owen did before we imported. bulkmarcimport will delete everything from the biblio
tables, but it doesn't touch the transaction tables. So there was data in the issues table saying that biblio number
x (that Owen had put in to test) had been checked out. We loaded a new biblio number x (Buck-Buck), and now
Koha thinks the chicken is onloan. Mystery solved, but we'll have to remember to clean the transactions tables
before we do a production import.

I'm going to work today on cleaning up the other bugs, namely the missing decimal point in the prices and the
'#' indicator in the 856 tags that Spydus erroneously generates.

The other bugs, like bibliosubjects being empty, may have been solved by Paul. At least I notice there is a new
version of addbiblio.pl in http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/koha/koha/acqui.simple/ Would you
take some time to comb through CVS and download and install anything that's new since our 1.9.3 install? And
then when we're both done, we'll try another import and see how things look.

And then when Owen's back, he gets to re-write the templates so they display the _whole_ title :^)

Stephen

***

Re: Koha to do list


Date: Friday Jun 20, 2003, 12:26 PM
From: shedges
To: joshuaf

Sounds good...Shall we plan to meet on Monday...earlier (I am available anytime this weekend)?

Joshua

Yeah, let's meet at 8:30 Monday morning at Athens.

The new version of holdingprep.pl is in /root@koha. I fixed the prices, but didn't do anything with the 856 tags.
As it turns out, those are old image tags from when the catalogers first started putting image 856s in. Nowadays,
they set the indicators to 0 and blank. (Our cataloguing tool won't accept blank blank, otherwise they'd do that.)
The 4# indicator that shows up on OCLC's FirstSearch is wrong, and our catalogers know it. The "4" is obsolete
(indicates http -- duh!) and the # is of course just plain wrong. So when we import records and bulkmarcimport
sets those offending indicators to blank blank, it's doing exactly what I would be trying to do anyway. So I just
say let bulkmarcimport do it and ignore the warnings.

I also uploaded a handy script called dumpmarc.pl. It lets you look at a MARC file in a clean, "human-readable"
form. Invoke as ./dumpmarc.pl -file [MARCfilename]

So you'll have the current CVS stuff ready to go on Monday morning?

Stephen

***

Re: Bug question


Date: Wednesday Jun 25, 2003, 12:04 PM
From: paul.p
To: pate
Cc: shedges, chris

Pat Eyler wrote:

On Tue, 24 Jun 2003, Stephen Hedges wrote:

>Good question. It seems to me that the numbers really clog this table and will significantly slow down any
searching when you're talking about 250,000+ biblio records. We'll probably do something to strip all of those
rows out of our table after we've loaded our data, which looks like it will be with the current version, so don't
do anything for our sake. I'd suggest talking this over with Chris and Paul. I think Chris is working on the
searching, right? If he doesn't intend to use the marc_word table for that, or thinks the numbers are not a
problem, then I'd forget all about it. On the other hand, Paul has made an effort to keep stopwords out of the
table, so I'm assuming there is a good reason to keep it "clean." Time for an executive decision?

Okay, given your explanation, my take is that we should call this an enhancement and shelve it until 2.0.1 or
later. Paul/Chris, am I way off base here?

-pate

Note that it needs only 1 line in the code. So we could add it for 2.0.0 release...

My question is a theoretical one: should we drop all numbers from this table? If we do so, we couldn't search on
ISBN, for example, as it's a number. I think the best would be to add a column in the marc_parameters table to say
"index" or "don't index". This task is not a big one, but should wait for 2.0.1.

Note also that Chris works on search, but for old-db only. I'm the only author of the MARC search. I think that in
the future we could use only the MARC one, as it's (imho) more versatile. But that's another problem, for the
not-near future.

Paul P

***

issues problem
Date: Tuesday Jul 1, 2003, 1:08 PM
From: joshuaf
To: shedges

Stephen,

Ok...I reread the Wednesday meeting log and I figured out what happened when we imported our issues data.
Basically, in biblioitems, itemtypes never gets populated on bulkmarcimport (but we knew that). According to
Chris Koha depends on itemtypes being set for every biblioitem; its how it knows how long to issue for or even
if it can issue; so bugs like 508 (patron cannot see what books are checked out to her) happen. The solution is to
fix the bulkmarcimport script to populate the itemtype table when we import our biblios (bug 516 (now
assigned to Paul). I suppose a temporary workaround (one that Chris tried) is to set all the biblioitem itemtypes
to YA or something else directly in Mysql.

Now...the real question: why can we still issue even though itemtype is not set?

Joshua

***

MARC21 and Koha interaction question


Date: Thursday Jul 3, 2003, 1:37 PM
From: pate
To: annez, shedges, chris, paul.p, mwh, et al.

Hello everyone,

We're facing a small problem at the moment, and are looking for some guidance on working toward a real solution
rather than a workaround. I'm not sure if I have all the facts correct, so please feel free to jump in if you see
something I've got wrong.

NPL has imported a number of MARC records from their existing system. During the import, several fields
handled (by MARC21) at the item level were moved into the item level in Koha, when they should have been
treated (within Koha) as biblioitems. This caused problems in many other parts of Koha.

Quoting from Paul's earlier email:

OK, I understand the problem... 952 is the tag you use for ITEMS. So it's stored with items, NOT with
biblioitems. Thus, it's not transmitted to the correct function.

[...]

Why is it coded like this? It's historical behaviour (from 1.2.x): biblioitems is supposed to store info
common to some/many items, and there can be more than one for a given biblio. It's something that doesn't exist
in MARC, which only has 2 different levels (biblio & item).

One sample :

* biblio =>
1- the 2 towers, from tolkien
* biblioitem =>
1- edited in 1957, by "greatbook edition".
2- edited in 2002, by "pocket edition".
* items =>
1- barcode 100001, biblioitem 2
2- barcode 100002, biblioitem 1
3- barcode 100003, biblioitem 2

This may have been a problem with the way the importer handled NPL's data, or it may be a problem with the
way we're handling MARC21. To find this out, I'd be interested in having other groups try to import their
MARC data into Koha using the same tools NPL used, then checking to see if the biblioitem data is set.

If this proves to be a problem with NPL (Spydus) data, we can address it that way (and expect that we'll see
other conversion issues which we'll also have to deal with).

If this is a problem with Koha's handling of MARC21, we will probably have to do a couple of things: first,
implement some kind of workaround for 2.0.0, and come up with a real fix for 2.2 -- it will likely involve DB
changes, so it's not likely to happen in 2.0. I'd really like to find a way to avoid making DB changes, so if we're
overlooking something on the MARC side please point it out to us.

-pate

***

FW: SR 29093 Re MARC Output


Date: Monday Jul 7, 2003, 11:38 AM
From: kenr
To: shedges

Stephen:

The MARC Output should work fine now. If not, give me a ring and I'll contact Sanderson again. It looks as if
Frank has already done some of the grunt work.

--Ken

Hi Ken,

The program was wrong in the way it handled the option for including/excluding previously output records. The
program is now fixed and tested on your site.

There is a new option on the MARC Output Menu to clear the MARC Output flag whenever you like. I have run it
and cleared all existing flags.

While testing the MARC Output I found some empty MARC records (i.e. no bibliographic data). Further
investigation showed that you had 2114 deleted records hanging around in the MN.MAIN file which links all the
bibliographic data together. I have removed those faulty records to save you asking me later why there are some
empty MARC records.

You also have some BRN's which have fields like the 245 Title missing. Please review BRN's 30250 117950
160428 141489 13160 and fix where necessary.

There are also 47 BRN's which have holdings but no bibliographic data. I have saved a list called NODATA
which you can use in Inquiry/Saved Lists to check on them. All these BRN's need fixing before you run the
MARC Output for real. If these BRN's are not fixed you will be outputting MARC records which will have no
bibliographic data.

Frank A

***

new bulkmarcimport
Date: Tuesday Jul 15, 2003, 1:22 PM
From: shedges
To: joshuaf, owenl

Owen and Joshua -

I've revised the holdingprep.pl script so it now builds one 942 tag with institution code in subfield a, item type in
subfield c and call number in subfield k. All the other item specific information that goes into the items table is
in the 952 tag(s). I revised the MARC mapping in our Koha parameters and downloaded the newest version of
bulkmarcimport.pl (in the root directory), then ran it to import the same 600 records we already had in the
database.

I originally had the call number mapped to the biblioitems.dewey column, but that didn't work -- that column is
set up for a number ("double(8,6)"), so all I got was truncated dewey numbers for some non-fiction books. So I
changed the mapping to put the call number in biblioitems.classification. It now displays, but I'm not sure what
else may be affected by that mapping.

Anyway, time to beat on it and see how it works now that itemtypes and classification are being imported
through Paul's script.

Stephen

***

Re: CT and NODATA


Date: Friday Jul 18, 2003, 4:01 PM
From: kenr
To: lindaw
Cc: shedges

Linda:

I just re-read Frank's e-mail. He is completely in error regarding that CT status. It's impossible. You can't delete
an item with a copy status attached...EVER. That's a fail-safe built in to the system to prevent accidental
deletion of items on loan/missing/damaged etc.

I would simply eliminate that step from his instructions. In fact, I tried this out myself with the barcode you
gave me: 34-20447. I removed the CT status, used the DIS feature, and the item was transferred to CANC.
COPIES. I have removed the CT status from all records associated with that BRN (118362). Once all items are
purged from that record, the BRN SHOULD be automatically deleted. If not, I can delete any BRNs at will
through my supervisory function. (all Spydus users with supervisor logins have this privilege).

The way I'm reading his e-mail is that the deletion of these items is not much different than our BCD deletions,
except we're using the DIS feature. I'm a little curious as to why he would have us set the status to CT...Frank
should KNOW that's a no-no when it comes to deletions! I'll ask him for clarification, but can guarantee he's
way off on this one.

--Ken

Copy 34000000020447 cannot be deleted: Status CT (Catalogue maintenance required ) on copy
34000000020447 bars deletion

Ken, I set CT on several of the "nodata" records. They won't delete with any copy status on them.

Linda

***

Re: fines/charges
Date: Saturday Jul 19, 2003, 12:59 PM
From: chris
To: shedges

On Fri, Jul 18, 2003 at 01:17:02PM -0400, Stephen Hedges said:

I'm working on loading fines/charges into Koha, which would go into the accountlines table. However, there's
also an accountoffsets table that seems to be related. Do we know how these two interact? I hate to shove a
bunch of data into one table and not be able to retrieve it because of a broken linkage to another table.

Hi there

Accountoffsets is used, but you shouldn't need to populate it. It's basically used for statistical/reporting purposes.

Here's how it works: say someone has a fine of $5 on an item -- that would be one line in accountlines. Then they
pay off $3; the accountlines row would change so amountoutstanding = 2, and there would be a line inserted into
accountoffsets with the 3 in there. Then when the other 2 was paid off, the amountoutstanding in accountlines
becomes 0, and another line is inserted into accountoffsets. That way, you can tell it was paid off in 2 bits, and
at what dates.

But apart from the statistics scripts, nothing fetches from it. So you should be fine to fill just accountlines, and
accountoffsets will fill over time as charges/fines are paid off.
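
Chris's $5 fine example can be sketched as a toy model (Python, purely illustrative; the dict keys echo the column names from his email, not Koha's actual schema):

```python
# accountlines keeps the running balance for a charge; every partial
# payment inserts a row into accountoffsets, so payment history survives.
accountlines = [{"accountno": 1, "amount": 5.00, "amountoutstanding": 5.00}]
accountoffsets = []

def pay(line, amount):
    line["amountoutstanding"] -= amount
    accountoffsets.append({"accountno": line["accountno"],
                           "offsetamount": amount})

pay(accountlines[0], 3.00)  # balance drops to 2.00
pay(accountlines[0], 2.00)  # balance reaches 0
# accountoffsets now holds two rows: the fine was paid off in two parts.
```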

Hope this helps

Chris

***

NODATA BRN Removal


Date: Friday Jul 25, 2003, 9:46 AM
From: kenr
To: shedges
Cc: lindaw

I have completed the deletion of the BRNs and barcodes on the NODATA list. This was a very time-consuming
job and far more complicated than imagined! I'm sure the people in Australia now hate us for all the questions I
had, but at least we're getting our money's worth in our final days with Spydus. :-)

There were a few records that appeared to be without BRNs, but a little sleuth-work revealed that they were
there...just buried deep within the record. Many of the barcodes appeared to have already been deleted, so it's
anyone's guess as to why they remained in the system.

I have created a simple WORD file of all of the BRNs and barcodes associated with the NODATA list for future
reference, if necessary. I have attached that list to this e-mail. Only one of the records actually had a title
attached, and I listed it with that record. Most of these appeared to be old records to me (could they have been
left from our conversion from Follett?); regardless, they are history now.

--Ken

***

More info from yesterday's meeting


Date: Friday Aug 1, 2003, 1:20 PM
From: shedges
To: joshuaf, owenl, laurenm, kenr
Cc: chris

Hey, I wanted to report on some of the follow-up stuff I've been doing after yesterday's Koha meeting. (And
Chris, I'm copying this to you as an FYI. Please feel free to jump in and say 'no, no, you fools!')

OK, let's start with borrower categories. I didn't find anything that would help to set the time limit before a
borrower gets debarred for having overdue books -- but I'm still looking. I made a few changes to the borrower
category data, such as the issue limit (99) and whether or not they get overdue notices. Also changed Juvenile
code from J to C. I have no idea what the 'bulk' flag in the categories table does.

I also took another peek at item types, because I thought the 'renewalsallowed' column in the itemtypes table
might let us set the number of times a type could be renewed. (It's a smallint column instead of a tinyint.) But
from the template, it looks like this is just a flag -- it's either renewable, or it's not. So I still have no idea why
most stuff we check out shows up as 'not renewable.'

I completed a successful load of new items from the catalogers, using a home-grown script to alter the format of
the MARC record -- that seems to be working OK.

And finally, I looked at branchtransfers.pl (one of Findlay's scripts). It looks like this generates a display of
books that are to be transferred to another branch. I _think_ it builds this list from the requests, which would
answer one of the questions we had (how does the system notify us when reserves are ready to be transferred or
picked up?). But I'm not really sure. Findlay's a little stingy with his comments. At any rate, it doesn't seem to
make any modifications to the database, so it's not a way of changing the holdingbranch for an item.

Stephen

***

Biblio load
Date: Monday Aug 4, 2003, 10:32 AM
From: shedges
To: joshuaf, owenl, laurenm, kenr

Our load of bibliographic records from Spydus to Koha is still underway, and will continue throughout the day.
Joshua's Spydus dump took 38 hours, and the import into Koha will take about 12 hours (about 10 more hours
to go!)

So tomorrow morning I plan on sending out an e-mail to all staff saying:

1. here's the url and password for accessing Koha

2. put comments in the Koha forum (Owen, can you activate that link this evening?)

Since I will be spending about an hour each evening this month working on loading patron records, I plan on
leaving an hour earlier each afternoon (just FYI).

Stephen

***

NO biblio load
Date: Monday Aug 4, 2003, 12:03 PM
From: shedges
To: joshuaf, owenl, laurenm, kenr

Well, we've hit our first major roadblock --

We've run out of space to store our data on the new machine. (The machine actually has plenty of space; we just
guessed badly when we divided it up for various purposes.) So that means we're going nowhere until Joshua
tries to work some magic and make more space. This could easily take all day, so it's probable that everything
will move back by at least one day. Sorry!

(Owen, I suspect your problem is related to the space issue. /var is set up as its own file system, not very big
(1G), but all our mysql tables are stored in /var/lib/mysql. /var is out of space, so mysql simply refuses to do
anything.)

Stephen

***

Look at Koha
Date: Tuesday Aug 5, 2003, 9:40 PM
From: shedges@athenscounty.lib.oh.us
To: lindaw, maryy, marilynz

Although it took longer to load the data than expected, Koha is finally ready for your inspection.

Open a web browser (like Internet Explorer or Netscape, although Netscape Navigator 4 is too old to work very
well), and go to: http://koha.athenscounty.lib.oh.us (Just click on this link -- that's easiest.) That's the OPAC that
patrons will use. To see the staff pages (for circulation, etc.), go to: http://koha.athenscounty.lib.oh.us:8080.

Koha still has many bugs, so expect to find weird things. When you do, please report them on the Koha Forum,
found on our intranet pages where you find the Tech Forum. For now, please limit your forum comments to
practical problems ("When I try to check books in, they all get checked out to Chris C.?!?") and leave the
aesthetic problems ("I hate the color") for later. Owen and Joshua and I automatically get e-mail alerts when
someone adds a comment to the forum. Feel free to sign up for e-mail alerts, too, if you'd like to keep aware of
what others are finding. (Just click on the little flying envelope in the lower left corner of the Koha Forum
page.)

We will be adding newly cataloged books every morning, and all patron records and transactions will be loaded
every night. That means what you see in Koha should be no more than one day older than what you see in
Spydus. Toward the end of the month, we may play around with deleting and adding books, but for now, please
do not add or delete books from either Koha or Spydus.

Thanks!

Stephen

***

Sample data
Date: Thursday Aug 7, 2003, 1:16 PM
From: shedges
To: paul.p

Paul -

Owen tells me that there was some discussion about sample data yesterday on irc. (Sorry I wasn't there -- I
couldn't get a connection.) I have attached a file (sample_data) of some records in iso2709 as modified for
loading through bulkmarcimport.pl. They assume the following mapping:

942a -- not mapped (institution code not used by Koha)
942c -- biblioitems.itemtype
942k -- biblioitems.classification

952b -- items.homebranch
952d -- items.holdingbranch
952p -- items.barcode
952r -- items.replacementprice
952v -- items.dateaccessioned
952y -- items.notforloan
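
Written out as a lookup table, the mapping above looks something like this (a Python sketch of the list, not actual Koha configuration):

```python
# (tag, subfield) -> (Koha table, column), per the mapping listed above;
# 942a is deliberately absent because it is not mapped.
MARC_TO_KOHA = {
    ("942", "c"): ("biblioitems", "itemtype"),
    ("942", "k"): ("biblioitems", "classification"),
    ("952", "b"): ("items", "homebranch"),
    ("952", "d"): ("items", "holdingbranch"),
    ("952", "p"): ("items", "barcode"),
    ("952", "r"): ("items", "replacementprice"),
    ("952", "v"): ("items", "dateaccessioned"),
    ("952", "y"): ("items", "notforloan"),
}
```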

These tags are built from the original 952 holdings tag in our MARC records as produced by the catalogers
(using the commercial product ITS.MARC). I've attached a copy of a 'raw' file (com807.cp) from the catalogers
and the script (loadbibprep.pl) which converts it for use by bulkmarcimport.pl

Stephen

***

Redirects
Date: Thursday Aug 7, 2003, 10:14 PM
From: owenl
To: joshuaf, shedges

I'd like to do away with that silly 'welcome to koha' splash screen, so I propose adding Redirect directives to
each <VirtualHost> block in httpd.conf.

In <VirtualHost koha.athenscounty.lib.oh.us:80>:

Redirect permanent /index.html http://koha.athenscounty.lib.oh.us/cgi-bin/koha/opac-main.pl

and in <VirtualHost 66.2xx.7x.7x:8080>:

Redirect permanent /index.html http://koha.athenscounty.lib.oh.us:8080/cgi-bin/koha/mainpage.pl

I'm not an Apache expert, but I think this will do what we need it to.

-- Owen

***

MARC import issues


Date: Sunday Aug 10, 2003, 1:59 PM
From: shedges
To: koha-devel@lists.sourceforge.net

I would like to update all the Koha developers on two issues that have come up as NPL has been migrating to
Koha. Both relate to the difference between the way Koha stores bibliographic information and the way MARC
records (USMARC) structure the same information.

First a reminder of what those differences are. Koha subdivides bibliographic information into three tables
(basically): biblio, biblioitems, and items. Biblio holds the basic information about the work -- title and author,
copyright date, that sort of thing. Biblioitems holds information about a particular manifestation of the work --
item type, classification, actual publication year (which may be different from the copyright year), etc. And
items stores information about individual copies of the particular manifestation of the work -- price, barcode
number, date acquired, etc. MARC, on the other hand, currently subdivides bibliographic data into two areas:
"Bibliographic," which holds the information that Koha would put in biblio and (some) biblioitems, and
"Holdings," which has the information that Koha would put in items and (some) biblioitems. It's the process of
fitting two-part information into a three-part database that leads to complications.
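
Paul's earlier Tolkien example makes the three-level split concrete; as a data sketch (Python, field names simplified, not Koha's real schema):

```python
# One biblio (the work), two biblioitems (editions), three items (copies).
biblio = {"biblionumber": 1, "title": "The Two Towers", "author": "Tolkien"}
biblioitems = [
    {"biblioitemnumber": 1, "biblionumber": 1, "publicationyear": 1957},
    {"biblioitemnumber": 2, "biblionumber": 1, "publicationyear": 2002},
]
items = [
    {"barcode": "100001", "biblioitemnumber": 2},
    {"barcode": "100002", "biblioitemnumber": 1},
    {"barcode": "100003", "biblioitemnumber": 2},
]
# MARC collapses the middle level: each MARC record corresponds to one
# biblioitem, which is why imports produce one biblio per biblioitem.
```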

1. The process of importing MARC records results in one and only one row in biblioitems for each row in
biblio. That's because MARC makes an individual record for each manifestation of a work, so when you
import a MARC record you are really importing each record into biblioitems, with a related row in
biblio to hold the information that cannot be mapped to biblioitems (author, title, etc.). That, of course, is
exactly backwards from the way Koha is designed to work. The import works OK, but when you do a
search, you get lots of duplicate titles listed, because each printing or video or audio recording of a work
has its own row in biblio. That makes it very hard to decide which title listing you want to look at more
closely. Which "Gone with the wind" is the audio recording?

MARC handles this problem by providing tag 245h for the "Medium" of the work, surrounded by square
brackets. (The title itself is in 245a.) Many MARC-based library systems display this tag when showing
the results of a search, so you know that "Gone with the wind [audio recording]" is different from "Gone
with the wind [videorecording(DVD)]" or "Gone with the wind."

There are two solutions I can think of for this problem, neither of them very satisfactory. One is to add a
column to biblio to hold "medium" for the 245h tag. That, of course, violates the whole philosophy of
what the biblio table is supposed to store. The other is to actually modify the title that is stored in biblio.
That's the workaround solution we are using at NPL. We periodically run a crude but efficient script that
handles the job:

my $sth_getformat = $dbh->prepare("SELECT bibid,subfieldvalue FROM
    marc_subfield_table WHERE tag = '245' and subfieldcode = 'h'");
my $sth_gettitle = $dbh->prepare("SELECT title FROM biblio WHERE biblionumber = ?");
my $sth_put = $dbh->prepare("UPDATE biblio SET title = ? WHERE biblionumber = ?");

$sth_getformat->execute();

my $row;
while ($row = $sth_getformat->fetchrow_arrayref) {
my $bibid = $row->[0];
my $subfieldvalue = $row->[1];

$sth_gettitle->execute($bibid);
my $titleref = $sth_gettitle->fetchrow_arrayref;
my $title = $titleref->[0];
$sth_gettitle->finish;

# grab the bracketed medium, e.g. "[videorecording]"
next unless $subfieldvalue =~ /.+]/;
my $newtitle = "$title $&";

$sth_put->execute($newtitle,$bibid);
}

Could something similar be included as part of the MARC import process? It's not elegant, but it does
solve the problem. Or better yet, can anyone think of a way to combine duplicate biblio rows into one
biblio row? (Seems like this would really screw up the relationships between tables.)

2. While Koha stores the copyright date in biblio and the publication year in biblioitems, MARC puts both
in one tag (260c), which of course can only be mapped to one Koha table.column. So currently the
library importing their MARC records into Koha has to decide which Koha table.column to fill, and then
the Koha Biblio.pm strips out the first date found in the 260c tag and puts it there. This is not good,
because:
a. either the screens which display biblio.copyrightdate or the screens which display
biblioitems.publicationyear are going to have nothing to display;
b. we're losing information in the import which could easily be retrieved; and
c. it leads to inaccurate information, because (in the US) if 260c has two dates, the first is always
the publication year and the second is the copyright date, and the current Koha solution could
end up putting the publication year in the copyright date column.

Again, we periodically run a (crude) script at NPL to load both table.columns:

my $sth_get = $dbh->prepare("SELECT bibid,subfieldvalue FROM
    marc_subfield_table WHERE tag = '260' and subfieldcode = 'c'");
my $sth_cprdate = $dbh->prepare("UPDATE biblio SET copyrightdate = ? WHERE
    biblionumber = ?");
my $sth_pubdate = $dbh->prepare("UPDATE biblioitems SET publicationyear = ?
    WHERE biblionumber = ?");

$sth_get->execute();
my $row;
while ($row = $sth_get->fetchrow_arrayref) {
my $bibid = $row->[0];
my $subfieldvalue = $row->[1];
if (length $subfieldvalue > 8) { # if it is this long (even with extra
# letters and punctuation), it must be
# publication date, copyright date
$subfieldvalue =~ /(\d{4}).+?(\d{4})/;
$pubdate = $1;
$cprdate = $2;
} elsif ($subfieldvalue =~ /(\d{4})/) { # only one date
$pubdate = $1;
$cprdate = $1;
} else { # no dates
$pubdate = '';
$cprdate = '';
}
$sth_cprdate->execute($cprdate,$bibid);
$sth_pubdate->execute($pubdate,$bibid);
}

Again, could this somehow be worked into the MARC import process?

Stephen Hedges

***

Re: Adding temporaries to Koha


Date: Tuesday Aug 12, 2003, 12:53 PM
From: owenl
To: shedges, joshuaf

I'm thinking that the best way to do this might be to modify the template for
http://koha.athenscounty.lib.oh.us:8080/cgi-bin/koha/acqui.simple/addbiblio.pl so it only shows the MARC
fields we need: author (100a), title (245a), item details (942 and 952 tags), maybe even URL (856) for Margy's
websites.

Without having actually looked at the template, I'll say this sounds like a good idea. Since template names are
tied directly to the perl file that populates them, I'll see about duplicating addbiblio.pl to create custom
templates for different kinds of items. Maybe different ones for magazines, MORE items, and web sites?

-- Owen

***

IRC conversation from yesterday evening


Date: Wednesday Aug 13, 2003, 10:40 AM
From: owenl
To: joshuaf, shedges

chris: how are things in ohio?


owen: Going well. The staff is giving lots of positive feedback about Koha.
owen: Mostly small display customizations requested so far.
chris: excellent, i see that mike is fixing bugs nice and fast too
owen: Yeah. Thanks Mike! :)
chris: ahh thats good
chris: display customizations are the best requests :-)
mhansen: heh
owen: Well, some are pretty closely tied to Stephen's message yesterday on Koha-
Devel, 'MARC import issues'
owen: Our staff expects item-level information at stages where Koha only
expects biblio-level information.
chris: right
mhansen: i've been working a bit on it, but i have been busy the past couple
days getting the high school ready to use the online gradebook
mhansen: owen: i'll take a look at that e-mail tonight
chris: so its matter of getting the item information to display earlier?
chris: UNIDO want much the same thing
owen: Often, yes. It's tied to the nature of our data, though: we've got
separate biblios for each different format of one title (book version, audio
version, movie version)

owen: Koha expects all to be attached to one biblio, thus returning only one
search result per title.
chris: ahh right
owen: What we see are three different search results, and we need a way to tell
which is which at that early stage in the search.
chris: yeah koha would expect 1 biblio, lots of biblioitems
chris: right
chris: so on the search results page, you need format to show eh?
owen: Format and call number, even.
chris: http://www.philanthropy.org.au/search/catalogue.html
mhansen: you want one result for each item?
chris: heres what we did for philanthropy australia
chris: try a search on keywords fish
chris: something like that... but format, instead of, say, publisher?
owen: ...or both. I think that's very much like what our staff expects.
owen: How much altered is that from the way Koha usually works?
chris: not too much, koha has to look up all the biblio biblioitem and item
data for that page anyway
chris: it just isnt showing most of it
owen: That's what I guessed from the fact that it shows number of holdings for
each branch.
chris: (it needs to know the format to know if its requestable etc, and the
item data to know if books are out or not)
chris: yeah
owen: I suspect there are lots of places where we just need more variables
available to choose from in building the template.
chris: so its pretty much just changing the template
chris: yep
chris: what you can do, is usually if its inside a TMPL_LOOP
chris: just put <TMPL_VAR NAME='columnname'>
chris: so you could try just whacking in <TMPL_VAR NAME="itemtype">
chris: it wont break anything, it simply wont show if it isnt being passed to
the template
owen: Yeah, I know I've tried that in places, with limited success. I don't
know about this particular case.
chris: ahh yep
owen: Wish I could stay and ask more questions, but I've got to get home.
chris: if you get stuck
chris: send me your template with the variables you want to show
chris: and ill check that koha is returning all of them for you
owen: Great. Good luck to Laurel for her exhibition!
chris: and we can tweak Search.pm to get anyones its not
chris: thanks :)

-- Owen
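Chris's template tip from the conversation above can be sketched like this; the loop and variable names here are illustrative, not the actual Koha search-results template:

```html
<!-- Hypothetical results loop; the column-name variables come from Search.pm -->
<TMPL_LOOP NAME="SEARCHRESULT">
  <p>
    <TMPL_VAR NAME="title"> / <TMPL_VAR NAME="author">
    <!-- Extra columns can be whacked in safely: if the script doesn't pass
         a value, the variable renders as nothing rather than breaking. -->
    [<TMPL_VAR NAME="itemtype">] <TMPL_VAR NAME="classification">
  </p>
</TMPL_LOOP>
```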

***

Re: Open Source column


Date: Thursday Aug 14, 2003, 9:19 AM
From: roy.tennant
To: shedges

Cool, thanks for letting me know. It's good to see Koha out there in more real-life installations. And I like your
screen displays better than your old catalog. Good luck,

Roy

On Thursday, August 14, 2003, at 09:06 AM, Stephen Hedges wrote:

Roy -

Very nice column in the latest Library Journal ("Open Source Goes Mainstream").

I thought you might be interested in knowing that Nelsonville is about to unveil its installation of the most
current (MARC) version of Koha. You can see the OPAC at http://koha.athenscounty.lib.oh.us

We have not yet made a public announcement to the library world about this, because we still have a couple of
staff suggestions we'd like to implement first. We'll probably announce early next week, but feel free to take an
advance peek now. The system is running with 130,000 bibliographic records (about 275,000 item records),
35,000 patrons, and seven branches. It will never really be "finished" -- being open source, we'll always be
tweaking something -- but we have to show it to folks sometime!

Stephen Hedges

***

Termination of Agreement
Date: Thursday Aug 14, 2003, 12:57 PM
From: shedges
To: info@civica.com.au
Cc: shedges, kenr

By this e-mail I am informing you of our intention to discontinue service from Civica Pty Limited for our
Spydus Library System on September 30, 2003. This notification applies to:

Agreement No. 20283, expiring 30.09.2003, for


Spydus Annual License
Spydus Annual Support
Unidata Annual Licenses

Stephen Hedges

***

Re: Identifying reserved items


Date: Friday Aug 15, 2003, 9:09 AM
From: rosalie
To: chris, shedges

Hi Stephen,

Several answers, or at least several prongs to the answer.

We keep our main branches on line with IRC all the time. When a book is reserved that is available at another
branch, we use IRC to get the book plucked off the shelf as soon as possible, and started on its journey to the
branch where it is required.

For 'available' books which we can't find, or books reserved from OPACs, we run daily reports of all items
reserved with available status, and use these reports to start the serious search. The reports are run in UrbanSQL
and exported to Excel. We need two slightly different ones to get everything. I'll copy them below, in case they
are of use to you.

Cheers

Rosalie

Fixed report for any group, including the holding branch

select
cardnumber,surname,firstname,biblio.title,reservedate,found,holdingbranch
from reserves,borrowers,biblio,items
where cancellationdate is null and
(found <> 'F' or found is NULL) and
reserves.borrowernumber=borrowers.borrowernumber and
reserves.biblionumber=biblio.biblionumber and
biblio.biblionumber=items.biblionumber and
constrainttype='a' order by cardnumber

Fixed report for specific group, including the holding branch

select
cardnumber,surname,firstname,biblio.title,reserves.reservedate,found,
itemtype,classification,holdingbranch from
reserves,borrowers,biblio,reserveconstraints,biblioitems,items
where cancellationdate is null and
(found <> 'F' or found is NULL) and
reserves.borrowernumber=borrowers.borrowernumber and
reserves.biblionumber=biblio.biblionumber and
biblio.biblionumber=items.biblionumber and
constrainttype='o' and
reserves.borrowernumber=reserveconstraints.borrowernumber and
reserves.biblionumber=reserveconstraints.biblionumber and
reserves.reservedate=reserveconstraints.reservedate and
biblioitems.biblioitemnumber=reserveconstraints.biblioitemnumber
order by cardnumber

On 14 Aug 2003, at 14:03, Stephen Hedges wrote:

Rosalie and Chris -

I have a quick question, which may require a long answer, I don't know:

How does the library staff know which books with an "available" status have been reserved by a member and
need to be pulled from the shelf (and possibly sent to another branch)?

BTW, our installation of Koha is just about ready for public announcement. It's at
http://koha.athenscounty.lib.oh.us

Stephen Hedges

***

Reserve notification
Date: Friday Aug 15, 2003, 6:20 PM
From: shedges
To: joshuaf, owenl

I can't decide if this is a bug, and if it is, how do we want it fixed?

When you check-in a book that has a reserve on it, the status is set to "waiting" (by returns.pl) and a subroutine
called "printslip" is called from C4/Print.pm. This subroutine either prints a routing slip on the circulation slip
printer, or it prints to the file /tmp/kohares if system preferences are set to no circulation slips. I can't find any
file /tmp/kohares, so I assume there is a bug. (Or have we just never tried checking-in an item with reserves?)
However, the file wouldn't do us much good even if it was there.

What we really want is a pop-up message on the screen that contains the information that would otherwise have
been sent to the slip printer. Is this easy to do? Can we ask that the code (probably in returns.pl) be re-written so
it either calls the printslip routine (if using slip printers) or generates the pop-up window (if no slip printer)?

Maybe we should just buy slip printers.

Stephen

***

Re: Reserve notification


Date: Saturday Aug 16, 2003, 3:33 PM
From: owenl
To: shedges, joshuaf

What we really want is a pop-up message on the screen that contains the information that would otherwise have
been sent to the slip printer.

It is *supposed* to show a message right now. It's just not doing it. Here's the snippet from the template:

<TMPL_IF Name="waiting">
<font color='red' size='+2'>Item marked Waiting:</font><br>
Item <TMPL_VAR Name="itemtitle"> (<TMPL_VAR Name="itemauthor">) <br>
is marked waiting at <b><TMPL_VAR Name="branchname"></b> for <TMPL_VAR
Name="name">

It should be showing the title, author, destination branch, and patron name. The only thing it doesn't show that
Spydus does is the patron's phone number. So really it should just be a matter of getting the bug fixed.

I think we need to complain again to the list about these two bugs:

• Bug 293 (Error Issuing Book -1); and
• Bug 562 (Returns not showing 'on reserve' message for holds)

...which I think are the biggest obstacles to smooth operation on September first.

-- Owen

8. The Final Steps


Koha start date
Date: Monday Aug 25, 2003, 7:25 AM
From: shedges
To: all staff

Well, it's Monday, August 25 and September 1 is a week away. That was our original target date for starting to
use Koha, and I think we'll stick to that. We still have some issues to resolve, but nothing we can't work around.
Besides, the long holiday weekend (remember Labor Day?) will give Joshua and me plenty of time to do the final
transfer of our data from Spydus to Koha. I'm sure we'll keep finding things that need attention, but if need be
we can do things like deleting records, changing status, etc., directly in the database. If you find things that
stump you after next Monday, please e-mail the details to Owen and he can either guide you through the process
or fix things up manually.

So here's the calendar:

Friday, August 29

last day to enter/change book data in Spydus. Joshua and I will begin the data transfer process after the
libraries close at 6:00, and work through the weekend until it's all done.

Saturday, August 30

data transfer continues, Spydus may be slow at times throughout the day.

Sunday, August 31

data transfer continues, should finish.

Monday, September 1

Labor Day, libraries closed. "Extra" day for data transfer, if needed.

Tuesday, September 2

use Koha (http://koha.athenscounty.lib.oh.us:8080). Make sure you have some browser other than
Internet Explorer 6 to use on the circulation computers, since IE6 does some strange stuff. As a
precaution to make sure you use Koha, the Spydus passwords will be changed so you can't get in.

Any questions? Put them on the Koha Forum (please) for everyone to see. And by the way, I've been very
pleased with the comments and suggestions on the Forum so far. Keep up the good work!

Stephen

***

Re: Nelsonville migration and MORE


Date: Wednesday Aug 27, 2003, 6:49 PM
From: shedges
To: carolb
Cc: joshuaf

Hi, Carol -

We're going to leave our current Z39.50 server running on our old system through September, so there's no need
to change anything from your end yet. (You just won't see any new materials for a month.) Joshua is ironing out
the final kinks in a Z39.50 server program, and then we'll send along the details for testing from your end. (We
are assuming that you want results returned in USMARC format.)

No SIP2 or NCIP -- yet -- but we didn't have that with our old system either. And we're still keeping our old
Follett patron cards, so we still have the old problem of the number on the card not matching the number that
gets scanned from the card (therefore the number that needs to be stored in the system.) That's for future
development.

Thanks for keeping an eye out for us!

Stephen

Hi Stephen,

I understand that Nelsonville is migrating to a new system the week of Labor Day. Is Nelsonville going to
continue their participation in MORE? We can test your new system in MORE to determine if it's compatible;
we'll need your Z39.50 information (IP, port, database name, record format). Does your new system have SIP?
If not, we may be able to authenticate your patrons using barcode validation if the patron records have been
standardized (i.e. the number on the card matches the number in the patron record).

If you have any questions, please give me a call.

Regards,

Carol B

***

Re: reserve list


Date: Thursday Aug 28, 2003, 10:16 AM
From: owenl
To: shedges

All of which is by way of saying I hope you haven't been spending any real time on reserves, but are working on
a way to add magazines, etc. come Tuesday. How's that going?

I haven't been working on reserves. I spent all day Tuesday working on the temp problem. The entry seems to
work--I can get the records in the database okay. Editing isn't so easy. I can't modify the edit page to load just
the brief record my temp entry page creates. However, if editing was necessary the standard edit screen could be
used.

Deletion is the biggest problem I'm having. It should be simple--pass the biblio and biblioitem numbers to the
delete script, but for some reason I'm having trouble getting it to work consistently. If I can't crack it today I may
have you take a look at the script.

Either way, I think we're in good shape. Editing and deleting are requirements, but not for the first couple of
days of operation. Can you think of any other biggies that need to be taken care of before Tuesday?

-- Owen

***

Proposed magazine entry procedure


Date: Friday Aug 29, 2003, 2:40 PM
From: owenl
To: shedges

I was talking to Ken today and we went over the magazine entry procedure a little and refined it a bit. I propose
that it happen this way:

1. When magazines arrive, staff searches for that title in the catalog using the 'search existing records'
search on this page: http://koha.athenscounty.lib.oh.us:8080/cgi-bin/koha/acqui.simple/addbooks.pl
2. Then:
• If the particular issue already has a bibliographic record, click the 'edit' button to open the
MARC record, click the button marked 'save bibliographic record and go to items', and add a
new copy via the 'add new item' form.
• If the issue has not been added already, click the 'Add Brief Record' button at the bottom of the
search results screen to go to the brief record entry form
(http://koha.athenscounty.lib.oh.us:8080/cgi-bin/koha/acqui.simple/addbiblio-simple.pl?)

Maybe this is what you had in mind all along. Either way, I think it will be better to at least maintain single
biblio records for each magazine issue. That way duplicate results won't appear in the opac--it will just show
titles with multiple holdings like any other book.

What do you think? This process should be working now, if you'd like to test it out.

-- Owen

***

Re: MARC fields


Date: Tuesday Sep 2, 2003, 2:26 PM
From: shedges
To: owenl

I was just going over this same issue with the catalogers, and I think we've got a bug to report. It seems like all
the MARC tags you define in parameters should be available for searching and editing. The selection there now
seems very random and doesn't even include all the tags we have mapped (let alone defined). In the case of the
440 tag, it's mapped to biblio.seriestitle, so it's in both the 'old' Koha tables and the 'new' MARC tables -- yet
you can't get to it!

And one more thing the catalogers would like: to be able to search for records by biblionumber. Is that possible?

Stephen

When I catalog paperback romances, I use the 440 field to enter the series, i.e. 'Harlequin Superromance,' and
a volume subfield for the number in the series. This helps in locating an individual title in a big rack of
unalphabetized romances.

But I don't see that tag in Koha's display: http://koha.athenscounty.lib.oh.us:8080/cgi-bin/koha/MARCdetail.pl?bib=14228

Is the information there, but Koha's not configured to display it? Or did we lose those values because the
database isn't set up to handle it?

-- Owen

***

reserves dump
Date: Tuesday Sep 2, 2003, 4:23 PM
From: shedges
To: chris

Hey, Chris, sorry I didn't see your note on bug 562 until today...

I've attached the mysql dump of reserves and reserveconstraints. I've also tried to short-circuit the templating
process and fiddled with returns.pl to just send a plain message back to the browser with the reserved item,
borrower name, etc. Because that didn't work, and because the 'waiting' status also does not seem to be getting
set, I'm suspecting that the entire conditional is failing. I can't find anything in any of the modules that the script
uses that would get passed back to returns.pl as param('resbarcode'), so I think (imho) that the conditional if
($query->param('resbarcode')) is always failing. Maybe that's something that was in the older C4::Reserves that
got left out of C4::Reserves2? I should have checked that one, too...

Stephen

***

Housekeeping matters
Date: Wednesday Sep 3, 2003, 11:26 AM
From: shedges
To: chris.t
Cc: kenr

Dear Chris,

I received your letter of August 28 today, and I want to let you know of our plans, to be sure they fulfill the
requirements of our agreement.

• As of yesterday, we are using Koha as our ILS. We still have the ability to access our Civica software
and data in order to cross-check any problems we might find in our data.
• By September 30 (probably a few days before), we intend to issue a 'rm /u/*' command from the root
shell on our server to delete all Civica-related files. (Are there other directories we need to empty?) You
should be able to login to verify this.
• When we have deleted the Civica-related files, I will verify that by letter to you.
• About a week after we have removed the files, we will disconnect the server from the network. This
should give you time to verify the deletion.

As to your second point, Civica's pride in its enquiry engine is justified, and I appreciate your concern that we
respect your intellectual rights to that property. Our current system is based on Perl and MySQL, and probably
doesn't have the ability to use your enquiry engine anyway. But I do want to assure you that we will strictly
respect our confidentiality obligations and your intellectual property rights.

Stephen Hedges

***

Re: bug 589


Date: Thursday Sep 4, 2003, 9:48 AM
From: paul.p
To: shedges
Cc: joshuaf, owenl

Stephen Hedges wrote:

Well I'm beginning to see why Paul's addbiblio is the way it is.

Almost all of our (1,127) subfields have a -1 display tab value, so they don't display. If we set them all to 0, we'd
have a very long MARC edit page. By setting them from 0 to 9, and using the old template, we could display
about 100 at a time. Maybe that would be the way to go, instead of trying to show them all at once.

I thought I could do a search through marc_subfield_table to see which tags and subfields have never been used
in our 130,000 records and leave those at -1 so they don't display. But we have 116 different tags in
marc_subfield_table, and 107 different tags in marc_tag_structure -- meaning, of course, that the catalogers
are actually using more tags than they asked me to define!

I think we could also use the same trick as phpmybibli. Take a look at http://phpmybibli.free.fr: log in, then
catalogue, then add biblio. You have nice + & - controls to expand fields/subfields.

At any rate, Paul's right -- everything that does not have a -1 value is getting displayed. Bug 589 is invalid.

OK, status changed in bugzilla.

Paul P

***

Re: Nelsonville migration and MORE


Date: Sunday Sep 7, 2003, 6:34 PM
From: joshuaf
To: shedges
Cc: carolb

Carol,

Here is where we stand currently:

I've hacked together a Z3950 Server for Koha that meets the level 0 requirements for Functional Areas A and B
of the Bath profile. That is, it supports "author", "title", "subject", and "any" keyword searches; and the
holdings data is embedded in the MARC21 records that are returned (Stephen or I can give you the holdings
details when the time is right).

I plan to improve the server to conform to the full Bath Profile, but the improvements will require a complete
rewrite--not exactly something I want to do in the month deadline that we have given ourselves for using Koha
with MORE. So, before I embark on that rewrite, are there any requirements that I need to incorporate into my
"temporary hack" before it can be used for MORE? What exactly are the minimum requirements for
participation in MORE? Can we start testing?

Thanks,

Joshua F

***

RE: Nelsonville migration and MORE


Date: Monday Sep 8, 2003, 6:53 AM
From: carolb
To: joshuaf, shedges

Hi Joshua,

The other search that MORE requires is an ISBN; you may think that's unusual because library patrons don't use
ISBN, but that's the search that our software uses to identify libraries that own the requested item.

I can start testing what you have now. Eventually, I will need the holdings information to complete testing. In
the meantime, if you give me the IP/URL for the Z39.50, port number, and the name of the database, I'll start
testing.

Regards,

Carol B

***

[Koha] ding, ding, Koha 2.0.0pre3 has been released.


Date: Monday Sep 8, 2003, 4:12 PM
From: paul.p
To: koha-devel, koha

A new version of Koha, the first free-software ILS, has just been released, and is available here:
https://sourceforge.net/project/showfiles.php?group_id=16466

Enjoy

RELEASE NOTES
=============
This version of Koha is still in the "unstable" directory, because some bugs are still reported. But it's fully
useable and used in (at least) 3 libraries in the "real world":
* Nelsonville Public Library, Ohio, USA
* Esiee, high school, France
* Dombes Abbey, France
The next version should/might be the 2.0.0RC1. The only missing features are:
* a migration tool for libraries already using Koha 1.2.x
* an automated upgrading tool for future versions. It's possible to upgrade manually.

***

Reserves
Date: Wednesday Sep 17, 2003, 10:21 AM
From: owenl
To: shedges
Cc: kenr

I was going to point you to a conversation Chris and I had on IRC this afternoon, but it looks like the logger's
not working, so I'll have to remember what we said:

First of all, he said that the 'reserve date' and 'end reserve date' that appear in the default opac template DO NOT
work, that there's no place in the database for them to work. When I described how I thought they should work,
he said that I should post an enhancement bug for it, which I did.

Next, found vs. waiting. Here's how the 'found' column works in the reserves table. When a reserve is first
placed, of course, the value is NULL. When the item is returned, the value of the 'found' column is changed to
'W' for waiting. Then when the item is checked out and the reserve is fulfilled the value is changed to 'F' for
filled. Chris said the name of the column, 'found,' is an artifact of how the system used to work.

I said that the way we thought it should work was this: that when the book was scanned the status would be
changed to 'found.' Then, when the book was scanned again at the destination location, its status should be
changed to 'waiting' (to use the Koha vocabulary). Just then Rosa came online and he said 'hey Rosa listen to
this!' and we went over it again. She thought it sounded like a good idea. Chris asked me to write up the process
in the Wiki or in a bug report. Sounds like if Rosa likes it for their library, that we've got a good chance of it
getting into a post-2.0 version fairly quickly.

I didn't write up the report on it because I wanted to confer with you to come up with our preferred process (we
should bring Ken in on it as well). I didn't mention the whole idea of 'in-transit,' because at first I didn't want to
be asking too much. With Rosa on board, though, we should lay out all our ideas. So...

Book is scanned, status set to 'found' (preventing identical copies from being set aside) and status (somehow) is
set to reflect its transit status (if any).

When book is scanned at its destination location, the reserve status is set to 'waiting,' indicating that the book is
available at the pick-up location.

Finally, when the book is issued, the reserve status is changed to 'filled,' and the reserve is closed.

Am I leaving anything out?

Also: Rosa said she was struggling with getting a reserve list report as well, and Chris asked if they could have
a look at what you've put together. I told him all the bugs weren't worked out, but since he's interested I'll bet
he'd help with it.

-- Owen

***

Re: Reserves
Date: Wednesday Sep 17, 2003, 11:20 AM
From: kenr
To: owenl
Cc: shedges

Owen:

Looks perfect to me. The most confusing part about reserves right now is the fact we really have little idea
about status until the item actually arrives at the destination. The 'waiting' status listed in the catalog and the
'waiting' status on the patron's account don't match UNTIL the item is scanned at the destination location. We
could be telling patrons "oh, you have a DVD waiting for you at APL", based on their account, when actually
it's in-transit (in cargo) to PPL.

When the item is "FOUND", and sent off, it should be listed as "in-transit to" (or SENT TO), and NOT listed as
waiting until the item arrives...so you have it down pat in your instructions to Chris. One thing: we need to
make it clear that, once scanned, the reserved item sent off to another branch should be AUTOMATICALLY
placed in-transit. This was the way you listed it in your example, and we need to make sure this is how they
perceive our needs. Otherwise, we could have a three-step approach (item found, item in-transit, item waiting),
rather than the easier two-tier approach we're all wanting (that is, combine 'found' and 'in-transit' into one step).

Let's face it, our "previous vendor-who-shall-not be-named" had it down perfectly, and we need for Koha to
adopt that system, or something like it. I prefer the Koha terms of 'found' and 'waiting' to 'delayed allocation'
and 'allocated', however. It seems to me it's really ALL about getting an in-transit status functioning within the
system, and there's no getting around that fact.

--Ken

***

Re: Reserves
Date: Wednesday Sep 17, 2003, 12:48 PM
From: shedges
To: owenl
Cc: kenr

Sorry to be so tardy responding, but I've been busy doing overdues, getting the catalogers set up to import their
records by themselves, and fixing the too-few renewals problem (should be allowing two renewals now, Ken).
Now to reserves --

Book is scanned, status set to 'found' (preventing identical copies from being set aside) and status (somehow) is
set to reflect its transit status (if any).

I almost agree. I think there's an unnecessary extra step in here (like Ken), and it's a step that would cause
problems for the developers, namely the setting status to in-transit. That would require another column in the
items table, alongside notforloan, itemlost, etc. If the row in the reserves table gets the value 'found' (maybe "L"
for "located?"), then the scripts and modules could interpret that information and report "In transit to $branch"
on the intranet displays, or "Located" in the OPAC, or whatever. That wouldn't require any change to the tables.

When book is scanned at its destination location, the reserve status is set to 'waiting,' indicating that the book is
available at the pick-up location.

right.

Finally, when the book is issued, the reserve status is changed to 'filled,' and the reserve is closed.

yep.

Seems simple. I'm sure Chris will have it done by tomorrow ;-)

Also: Rosa said she was struggling with getting a reserve list report as well, and Chris asked if they could have
a look at what you've put together. I told him all the bugs weren't worked out, but since he's interested I'll bet
he'd help with it.

Bugs is putting it mildly. I'm going back to square one and starting over. In fact, that's next on my list...

Stephen
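Stephen's suggestion can be sketched as a tiny transition table. This is only a sketch: Koha's reserves.found column really holds NULL, 'W', and 'F', while the 'L' value and the event names here are hypothetical.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# event => { current status => next status }; 'NULL' stands in for SQL NULL
my %transition = (
    scan_at_origin      => { 'NULL' => 'L' },  # book pulled; 'L' for "located"
    scan_at_destination => { 'L'    => 'W' },  # waiting at the pick-up branch
    issue_to_borrower   => { 'W'    => 'F' },  # reserve filled and closed
);

sub next_status {
    my ($status, $event) = @_;
    my $map = $transition{$event} or die "unknown event: $event";
    # out-of-order scans leave the status untouched
    return exists $map->{$status} ? $map->{$status} : $status;
}
```

The point of the table form is that the intranet and OPAC displays can then interpret the single column value ("In transit to $branch", "Located", etc.) without any new columns in the items table.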

***

Bookmobile system
Date: Friday Sep 19, 2003, 3:27 PM
From: owenl
To: shedges

I've been working on a PHP system for checkouts for the bookmobile. I've got two separate tables set up--
issues and returns*. When you check out, a row is inserted with the patron barcode, the item barcode, and a
timestamp. When you return a row is inserted with the item barcode and a timestamp.

My question now is, how shall we get this data into Koha? Will you be writing a script to put the transactions
into Koha? I'm guessing that we'd need to add items and returns simultaneously, sequentially by timestamp.
That way if an item was returned at the beginning of the day and checked out at the end it would keep those
transactions straight. Will that be a problem with the issues and returns in separate tables? I haven't even
thought it through. Can you select all from both tables and order by timestamp? Looks like maybe not, but my
sql is rusty.

I'm also assuming that when at the end of the day a dump is made of all the transactions that the tables should
be emptied. That way you're not sifting through old stuff when you're loading the day's transactions.

Let me know how all this sounds,

-- Owen
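For what it's worth, two tables can be read back in timestamp order with a UNION. A sketch, assuming hypothetical table and column names rather than the actual bookmobile schema:

```sql
-- Replay the day's transactions in the order they happened (names illustrative)
SELECT patron_barcode, item_barcode, ts, 'issue' AS kind
  FROM issues
UNION ALL
SELECT NULL, item_barcode, ts, 'return' AS kind
  FROM returns
ORDER BY ts;
```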

***

Re: Bookmobile system


Date: Saturday Sep 20, 2003, 1:54 PM
From: owenl
To: shedges

I'd suggest we do something simpler and use only one table, with a column that is set to 0 if it's an issue, 1 if it's
a return.

That sounds good. I was thinking that a script could tell it was a return if the patron barcode field was NULL,
but it would probably be safer to set a 0 or 1 as an identifier.

These functions also ask questions, like "renew item?" "check out item even though it's reserved?" etc., so we
would probably want to build a template similar to the current circulation template to give Greg the chance to
see the questions and respond to them.

You mean when the database dump is loaded into Koha? I wonder how difficult that would be... On the other
hand, once he's at that point the deed is already done--the item *has* been renewed, it *has* been checked out.
Is there a point to displaying the warning? Unless Greg is going to make a note of it for later.

You know, at some point his patrons are going to say "can you tell me what I have checked out?" and this
system won't be able to handle that. But I don't see any practical way to make that possible.

Yeah, I kept finding myself wanting to extend it to provide more features, but it would have quickly cascaded
into a full-blown Koha clone, and we don't really have time for that, do we?

-- Owen

***

Re: Bookmobile system


Date: Monday Sep 22, 2003, 10:45 AM
From: owenl
To: shedges

We could always set the script to just send back a "Y" reply every time. Would Greg be missing anything
important then? Yes, if something was returned to the bookmobile but reserved for someone else, he would need
to know. Perhaps we could trap that message only.

I don't know if it's possible, but could you just log which returned items had reserves and give Greg a list at the
end of each database load? Then you really could send a 'Y' reply every time.

We could install a copy of Koha on the laptop, but would somehow have to do daily mysqldumps to it to keep it
relatively current.

I'm assuming that the amount of data involved makes this impractical. Too bad we can't set Greg up with
wireless access on the bookmobile.

Time? We still haven't got the $#@% laptop! And Greg has functioned without being computerized for years
now, another month won't hurt anything.

I know folks at other branches would like to know the true status of items on the bookmobile, and they'd like
to have reserve access to those items as well. The sooner we can get that happening
the happier they'll be.

I modified my php pages to use just the one table now, so it should be ready to go as soon as the laptop arrives
and I get a chance to configure it.

I'll probably run that script every couple of days until the bug is fixed. (The bug happens four or five times a
day.)

That's good to hear. When was the last time one of our bugs got fixed, anyway? >:(

-- Owen

***

Re: bug 599


Date: Monday Sep 22, 2003, 8:52 PM
From: paul.p
To: shedges

Stephen Hedges wrote:

Paul, any progress on bug 599? Did you get the mysql dump?

Yes, I've got it. My local MySQL has been importing your DB for four hours now... Really huge (already 500 MB of data!).

I'll work on it tomorrow morning before leaving (I have a demo of Koha in Strasbourg, 700 km from
Marseille)

Paul P

***

Searching suggestions
Date: Tuesday Sep 23, 2003, 9:00 PM
From: joshuaf
To: chris
Cc: shedges, owenl

Chris,

As you know, we've been getting a lot of complaints about the level of accuracy that the Koha OPAC and Intranet
Catalog searches require. Here are a couple of examples:

Searching for "baby sitting 101" turns up no results, but "baby-sitting 101" pulls up the video by that name.
The same goes for "world's best true ghost stories" vs. "world's best "true" ghost stories", and for "don't stop
loving me" vs. ""don't stop loving me"".

These are just a few results, there are literally thousands of titles, authors, etc., that contain problematic
punctuation marks.

So, my suggestion is the following. Parse all incoming search terms and replace all punctuation and spaces with
a % so that MySQL will treat them as wildcards; also, place a % before and after each search term. So, for
instance, "baby sitting 101" becomes "%baby%sitting%101%" which returns the appropriate record. Likewise,
"don't stop loving me" becomes "%don%t%stop%loving%me%" which also works fine (this will also catch
cases where the catalogers missed the apostrophe). I can't think of a solution for the problem of users missing
the apostrophe--that will probably always fail.
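In SQL terms, the suggested rewrite would turn each search into a single LIKE pattern in which every % matches any run of characters, so punctuation and spacing differences no longer matter. A hedged sketch -- the table and column names here are placeholders, not necessarily Koha's real schema:

```sql
-- "don't stop loving me" rewritten as suggested above; the %s absorb
-- the apostrophe and the spaces. biblio/title are placeholder names.
SELECT title
FROM biblio
WHERE title LIKE '%don%t%stop%loving%me%';
```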

Does that sound reasonable? Of course it would be a temporary solution for NPL before Koha searching is
completely rebuilt post-2.0-release, but it may save us headaches in the meantime. How long would that take to
implement?

Thanks,

Joshua

***

Re: Searching suggestions


Date: Wednesday Sep 24, 2003, 12:56 PM
From: chris
To: joshuaf
Cc: shedges, owenl

Hi All

Hmm yep, punctuation is the biggest problem currently with the searching. Basically what happens now is we
don't phrase search, we search on separate words.

so for your example above

baby sitting 101

is searching this

baby% and sitting% and 101%

so it would find "101 baby sitting" or "baby 101111 sittings" etc. (order doesn't matter)

So perhaps what we could do is add a checkbox ("phrase search") or something, which does what you
suggest: search on %baby%sitting%101%.

That way we can still do our order-not-important, missing-words-not-important search -- e.g. "101 baby" finds
"baby-sitting 101" in your OPAC currently, and it wouldn't if order mattered.

So although it's more picky in some ways, it's more forgiving in others.
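The two behaviours Chris describes can be sketched side by side in SQL, again with placeholder table and column names rather than Koha's actual query code:

```sql
-- Current word-by-word search: every word must appear, in any order,
-- so "101 baby" still finds "baby-sitting 101".
SELECT title FROM biblio
WHERE title LIKE '%baby%'
  AND title LIKE '%sitting%'
  AND title LIKE '%101%';

-- Proposed phrase search: the words must appear in the given order,
-- but punctuation and spacing between them no longer matter.
SELECT title FROM biblio
WHERE title LIKE '%baby%sitting%101%';
```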

So would being able to do both be a good solution?

Chris

***

Re: Nelsonville migration and MORE


Date: Tuesday Sep 23, 2003, 9:50 PM
From: joshuaf
To: carolb
Cc: shedges

Carol,

Thanks for giving me access to the MORE training site. It was really useful for adding the search types that I
had not anticipated...I think I've got most if not all of them. However, although the searching seems to work (the
correct number of hits are displayed), the records that are retrieved are not being displayed (or are not being
retrieved correctly) by the MORE Z-Client. There are a couple of possible reasons for this that I can think of.

1. The MARC records that I am generating are not well-formed. But I am skeptical of this, since three other
MARC clients display the MARC records just fine (two of them are Bookwhere and the generic
gateway: http://www.g7.fed.us/enrm/pilot/genericz.html).
2. MORE requires use of a database name. I have added a database name subroutine; the server will now
respond to the name NPLKoha, so if you add that to the client specs and things still aren't working
correctly it's probably not the database name.

3. The MORE client does not accept MARC21 (ISO 2709) records or our test client is not set up to do so. I
doubt this is it, but just to be sure, does MORE accept ISO 2709? Are we sure that the testing client is
not looking for SUTRS, XML, UNIMARC, or any other record types (my server does not support those
types yet).
4. The MORE client requires that MARC records contain specific leader digits that my MARC records do
not contain (for example, leader field 05, Record status; field 06, the type of record, etc.). Are there such
requirements of the MARC record leaders for MORE? If so I can easily populate the leader with some
defaults, but I will need to know what exactly is required.

That's all I can think of. I'll leave the server up and running so you can run diagnostics on it. Again, thanks for
all the help, I'm sure we can get this sorted out. Let me know if anything turns up on your end or if you have
any suggestions, etc.

Joshua

***

OPLIN Award for Innovation


Date: Thursday Sep 25, 2003, 2:14 PM
From: dony
To: shedges

Stephen--

Have you thought about nominating your own library for the OPLIN Award for Innovation in Network-
Delivered Services? Becoming the first American public library to implement an open source library automation
system is clearly one of the most innovative things I've heard of in the last year. The award criteria do skew
toward less culturally momentous projects, but I would definitely put you in the running. Anyway, I'm
considering you nominated.

A nomination from you, however, would probably have stronger answers to the award criteria:

1. Serves the public in a direct, immediate, and beneficial way.


2. Builds on traditional library areas of strength but uses the technology to actually expand service.
3. Overcomes previous constraints, such as location and hours, using network technology.
4. Demonstrates creativity and courage in seeking new ways to do things better.

I have some from-a-distance, academic answers to how your Koha implementation meets these criteria. If you
want to provide some insider answers, feel free.

Regards,

Don Y

***

Bug 599, more detail
Date: Thursday Sep 25, 2003, 3:43 PM
From: shedges@athenscounty.lib.oh.us
To: paul.p
Cc: owenl

Paul -

Owen and I did some testing of the item editing problem I reported (bug 599) using the default templates, with
inconsistent results. We were able to reproduce the error sometimes, but not every time. So now, I think it would
be valuable to take a closer look at what actually happened when we first noticed the error to see if this is a
database problem or a code problem.

First, remember that the problem started when staff had added an item to an existing biblio record without a
barcode, and then edited that item to add the barcode. The barcode was 31000000109284. After the edit, this
barcode appeared correctly when searching the MARC database (bibid 130229), but was attached to the wrong
biblio (number 37717) in the old Koha DB. (At this point in the database, the bibid and the biblionumber are the
same.)

So here's the bibid/biblionumber 37717 from the MARC database:

mysql> select * from marc_subfield_table where bibid=37717;


+------------+-------+-----+----------+---------------+--------------+---------------+---------------+---------------+
| subfieldid | bibid | tag | tagorder | tag_indicator | subfieldcode | subfieldorder | subfieldvalue | valuebloblink |
+------------+-------+-----+----------+---------------+--------------+---------------+---------------+---------------+
| 4946182 | 37717 | 942 | 33 | | k | 3 | AF Steel | NULL |
| 4946181 | 37717 | 942 | 33 | | c | 2 | AF | NULL |
| 4946180 | 37717 | 942 | 33 | | a | 1 | ACLS | NULL |
| 4946179 | 37717 | 300 | 13 | | c | 3 | 17 cm. | NULL |
| 4946178 | 37717 | 300 | 13 | | b | 2 | pa. | NULL |
| 4946177 | 37717 | 300 | 13 | | a | 1 | 385 p. | NULL |
| 4946176 | 37717 | 260 | 12 | | c | 3 | c1973. | NULL |
| 4946175 | 37717 | 260 | 12 | | b | 2 | Pocket Books | NULL |
| 4946174 | 37717 | 260 | 12 | | a | 1 | New York : | NULL |
| 4946173 | 37717 | 245 | 8 | | c | 2 | Danielle Steel. | NULL |
| 4946172 | 37717 | 245 | 8 | | a | 1 | Going home / | NULL |
| 4946171 | 37717 | 100 | 6 | | a | 1 | Steel, Danielle. | NULL |
| 4946170 | 37717 | 020 | 2 | | a | 1 | 0671749412 | NULL |
| 1385068 | 37717 | 952 | 12 | | b | 1 | ALB | NULL |
| 1385069 | 37717 | 952 | 12 | | p | 2 | 37000000019134 | NULL |
| 1385070 | 37717 | 952 | 12 | | r | 3 | 6.99 | NULL |
| 1385071 | 37717 | 952 | 12 | | u | 4 | 74135 | NULL |
| 1385072 | 37717 | 952 | 13 | | b | 1 | NPL | NULL |
| 1385073 | 37717 | 952 | 13 | | p | 2 | 31000000010720 | NULL |
| 1385074 | 37717 | 952 | 13 | | r | 3 | 5.97 | NULL |
| 1385075 | 37717 | 952 | 13 | | u | 4 | 74136 | NULL |
| 4946183 | 37717 | 090 | 34 | | c | 1 | 37717 | NULL |
| 4946184 | 37717 | 090 | 34 | | d | 2 | 37717 | NULL |
| 4946190 | 37717 | 952 | 35 | | b | 1 | ALB | NULL |
| 4946191 | 37717 | 952 | 35 | | d | 2 | ALB | NULL |
| 4946192 | 37717 | 952 | 35 | | p | 3 | 37000000025523 | NULL |
| 4946193 | 37717 | 952 | 35 | | r | 4 | 6.99 | NULL |
| 4946194 | 37717 | 952 | 35 | | v | 5 | 2003-09-05 | NULL |
| 4946195 | 37717 | 952 | 35 | | u | 6 | 271848 | NULL |
+------------+-------+-----+----------+---------------+--------------+---------------+---------------+---------------+
29 rows in set (0.02 sec)

And here's bibid/biblionumber 37717 from the items table:

mysql> select * from items where biblionumber=37717;


| itemnumber | biblionumber | multivolumepart | biblioitemnumber | barcode | dateaccessioned | booksellerid | homebranch | price | replacementprice | replacementpricedate | datelastborrowed | datelastseen | multivolume | stack | notforloan | itemlost | wthdrawn | bulk | issues | renewals | reserves | restricted | binding | itemnotes | holdingbranch | paidfor | timestamp |
| 74135 | 37717 | NULL | 37717 | 37000000019134 | 2003-09-01 | NULL | ALB | NULL | 6.99 | 2003-09-01 | 2002-05-30 | 2002-06-13 | NULL | NULL | 0 | 0 | 0 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ALB | NULL | 20030902020713 |
| 74136 | 37717 | NULL | 37717 | 31000000010720 | 2003-09-01 | NULL | NPL | NULL | 5.97 | 2003-09-01 | 1998-06-23 | 1998-07-06 | NULL | NULL | 0 | 1 | 0 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NPL | NULL | 20030902020715 |
| 271848 | 37717 | NULL | 37717 | 31000000109284 | 2003-09-05 | NULL | ALB | NULL | 6.99 | 2003-09-05 | NULL | 2003-09-05 | NULL | NULL | NULL | NULL | NULL | NULL | 1 | NULL | NULL | NULL | NULL | | ALB | NULL | 20030905103248 |
3 rows in set (0.00 sec)

Notice that it looks like the edited barcode (31000000109284) replaced the barcode 37000000025523 that appears in
the MARC record. But also notice that the MARC record has an accession date (tag 952v) attached to barcode
37000000025523 that is actually the date when the edit to barcode 31000000109284 was done! It should
actually have no accession date, because no date was in the imported MARC record loaded on September 1.

So what happened to barcode 37000000025523 in the old Koha DB?:

mysql> select * from items where barcode='37000000025523';


Empty set (0.00 sec)

It's gone, been replaced by the edited barcode number. Here's the edited barcode number in the MARC table:

mysql> select * from marc_subfield_table where subfieldvalue='31000000109284';


+------------+--------+-----+----------+---------------+--------------+---------------+----------------+---------------+
| subfieldid | bibid  | tag | tagorder | tag_indicator | subfieldcode | subfieldorder | subfieldvalue  | valuebloblink |
+------------+--------+-----+----------+---------------+--------------+---------------+----------------+---------------+
| 4946298    | 130229 | 952 | 35       |               | p            | 3             | 31000000109284 | NULL          |
+------------+--------+-----+----------+---------------+--------------+---------------+----------------+---------------+
1 row in set (0.02 sec)
mysql> select * from marc_subfield_table where bibid=130229;

+------------+--------+-----+----------+---------------+--------------+---------------+----------------+---------------+
| subfieldid | bibid  | tag | tagorder | tag_indicator | subfieldcode | subfieldorder | subfieldvalue  | valuebloblink |
+------------+--------+-----+----------+---------------+--------------+---------------+----------------+---------------+
| 4974785 | 130229 | 090 | 37 | | d | 2 | 130229 | NULL |
| 4946326 | 130229 | 952 | 36 | | b | 1 | APL | NULL |
| 4974784 | 130229 | 090 | 37 | | c | 1 | 130229 | NULL |
| 4974783 | 130229 | 942 | 36 | | c | 1 | YA | NULL |
| 4974782 | 130229 | 245 | 10 | | a | 1 | JLA #85 (Comic) | NULL |
| 4937788 | 130229 | 952 | 4 | | b | 1 | APL | NULL |
| 4937789 | 130229 | 952 | 4 | | d | 2 | APL | NULL |
| 4937790 | 130229 | 952 | 4 | | p | 3 | 32000000140494 | NULL |
| 4937791 | 130229 | 952 | 4 | | r | 4 | 2.25 | NULL |
| 4937792 | 130229 | 952 | 4 | | v | 5 | 9/3/2003 | NULL |
| 4937793 | 130229 | 952 | 4 | | u | 6 | 271215 | NULL |
| 4946185 | 130229 | 952 | 35 | | b | 1 | NPL | NULL |
| 4946186 | 130229 | 952 | 35 | | d | 2 | NPL | NULL |
| 4946187 | 130229 | 952 | 35 | | r | 3 | 2.25 | NULL |
| 4946188 | 130229 | 952 | 35 | | v | 4 | 2003/09/03 | NULL |
| 4946189 | 130229 | 952 | 35 | | u | 5 | 271848 | NULL |
| 4946298 | 130229 | 952 | 35 | | p | 3 | 31000000109284 | NULL |
| 4946327 | 130229 | 952 | 36 | | d | 2 | APL | NULL |
| 4946328 | 130229 | 952 | 36 | | u | 3 | 271853 | NULL |
| 4974323 | 130229 | 952 | 38 | | b | 1 | NPL | NULL |
| 4974324 | 130229 | 952 | 38 | | d | 2 | NPL | NULL |
| 4974325 | 130229 | 952 | 38 | | p | 3 | 31000000109390 | NULL |
| 4974326 | 130229 | 952 | 38 | | r | 4 | 2.25 | NULL |
| 4974327 | 130229 | 952 | 38 | | v | 5 | 2003-09-12 | NULL |
| 4974328 | 130229 | 952 | 38 | | u | 6 | 273475 | NULL |
| 4974786 | 130229 | 952 | 39 | | b | 1 | COV | NULL |
| 4974787 | 130229 | 952 | 39 | | d | 2 | COV | NULL |
| 4974788 | 130229 | 952 | 39 | | p | 3 | 36000000020678 | NULL |
| 4974789 | 130229 | 952 | 39 | | r | 4 | 2.25 | NULL |
| 4974790 | 130229 | 952 | 39 | | v | 5 | 2003-09-13 | NULL |
| 4974791 | 130229 | 952 | 39 | | u | 6 | 273510 | NULL |
+------------+--------+-----+----------+---------------+--------------+---------------+----------------+---------------+
31 rows in set (0.03 sec)

So what happened here? It looks like the editing process saved the barcode to the correct item ('A') in the
MARC table, but also saved the accession date to an item ('B') attached to a completely different bibid in the
MARC table. And it also looks like the barcode and date for item A overwrote the information for item B stored
in the old Koha DB. So if that is what happened, why did it happen? (Does the tag and subfield order in bibid
130229 look odd to you?)
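If the corruption pattern really is barcode/date data landing on the wrong record, one way to hunt for other affected items would be to cross-check the old Koha items table against the MARC 952$p subfields. A sketch against the two tables shown above; untested, and the join condition is an assumption about how the tables are meant to correspond:

```sql
-- Hypothetical consistency check: item barcodes present in the items
-- table but missing from any MARC 952$p subfield would point at the
-- same mismatch seen with barcode 37000000025523.
SELECT i.itemnumber, i.barcode
FROM items i
LEFT JOIN marc_subfield_table m
       ON m.subfieldvalue = i.barcode
      AND m.tag = '952'
      AND m.subfieldcode = 'p'
WHERE m.subfieldid IS NULL;
```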

Stephen

P.S. I just got a call from Francois d'A in Quebec. (He said he had talked to you?) We're not ready for a serials
module -- yet -- but it was still an interesting conversation.

***

RE: [oss4lib-discuss] WebJunction is looking for public library OSS project stories
Date: Friday Sep 26, 2003, 11:11 AM
From: joea
To: shedges

It's a fine piece that provides an excellent perspective on the issues.

If you're willing to contribute a "v2" of the interview to WJ, that would be wonderful. Could I get something
from you by, say, October 15?

Re rights on the site, we are working on a Creative Commons style digital rights management model. Today our
policy says that by contributing you give OCLC (the main partner on WebJunction) rights to reuse the material
(though of course you retain rights ownership). Our rather onerous-sounding current legalese says:

NOTE: By submitting material to any public area of the site, you automatically grant, or warrant that the owner
of such material has expressly granted OCLC the royalty-free, worldwide, perpetual, irrevocable, non-exclusive
right and license to use, reproduce, modify, adapt, publish, display, translate and distribute such material (in
whole or in part) and/or to incorporate it in other works in any form, media or technology now known or
hereafter developed.

Let me know if the timeframe and the rights sound OK to you. Happy to discuss either with you further if need
be.

Thanks, Stephen!

Joe

-----Original Message-----
From: Stephen Hedges
Sent: Friday, September 26, 2003 4:48 AM
To: joea
Subject: RE: [oss4lib-discuss] WebJunction is looking for public library OSS
project stories

> The "Questions and Answers about Koha" piece on your site is a good one--is there a chance we could
publish that interview on our site?

Yes, that's a 'fake' interview that we wrote. I should probably update it, maybe that would work for your needs.

> Have you taken a look at our site?

Oh yes, and I get the e-mail newsletter, too. Very nice!

Stephen

***

Re: Nelsonville migration and MORE
Date: Monday Sep 29, 2003, 8:41 AM
From: joshuaf
To: carolb
Cc: shedges

Carol,

I'm still working on a solution to the missing separators. They are there when I build the MARC record but get
lost somewhere along the way when I send them off to a Z-client. I hate to have to say this but it looks like
Nelsonville will need to temporarily stop using MORE until this problem is resolved (our old ILS contract
expires tomorrow and we need to shut the machine down). Thanks again for all your help and suggestions. I'll let
you know as soon as the problem is fixed.

Joshua

***

Spydus server
Date: Tuesday Sep 30, 2003, 11:34 AM
From: shedges
To: chris.t

Dear Chris:

As promised, we are now running the command:

rm -fr *

in the /u directory on our Spydus server, to remove all Spydus-related files (which is taking a while!). We are
leaving the /u directory itself, so you should be able to login and verify that the directory is empty. On Friday,
we will disconnect the server from the internet, unless we hear that you would like to have more time to verify
removal of the Spydus files.

Is there anything else we should be doing?

Stephen Hedges

***

Re: Nelsonville migration and MORE


Date: Tuesday Sep 30, 2003, 1:19 PM
From: joshuaf
To: carolb
Cc: shedges

Carol,

Ok, so the MARC records that I am building have always had record separators (they are added automatically
when creating the records using MARC::Record) and I thought that my server was interpreting the file encoding
wrong and stripping out the record separators (1D, 1E, and 1F hex). But with further testing it appears that the
yaz-client might have a bug which causes the "no separator at end of field" error. The records can be
downloaded using other clients and the separators are there. In fact, if you use yaz-client to dump the records to
a file you should see the separators. For example:

Z> set_marcdump marc.txt

will dump to marc.txt when you run the "show" command. Note that you will still get the "no separator at end
of field" error but the resulting file should be in clean iso2709 format with all MARC separators present.

Could you confirm this?

I think that the yaz-client is expecting a human readable version of the MARC record and is stripping the
control characters out when it prints them. If the MORE system also expects a human readable version, I will
need to know what the specs for that are.

You mentioned that it might be possible to get in touch with the vendor to find out what's going wrong...is that
still an option?

Thanks for your patience,

Joshua

***

Z-Server working with MORE


Date: Wednesday Oct 1, 2003, 6:43 PM
From: joshuaf
To: carolb
Cc: shedges

Carol,

I have good news: I was able to get the Z-Server working with MORE this evening! I solved the problem by
populating the leader with the following default values:

nac 22 u 4500

As far as I can tell these are the positions of the MARC leader that MORE (and yaz, for that matter) needs to
identify a MARC record as valid (aside from the reclen and baseaddr, which need to be automatically calculated
based on the record string).

So, could you verify that everything is in fact working? What is the next step?

Joshua

***

Re: Z-Server working with MORE


Date: Fri, 3 Oct 2003 05:45:58 -0700 (PDT)
From: carolb
To: joshuaf

Hi Joshua,

I'm not able to configure our software to display holdings in the 952 field. I emailed our vendor asking for help;
in the meantime, in case they say we can't work with 952 holdings, do you have other options for holding
fields? 852?

Carol

***

RE: Spydus server


Date: Monday Oct 6, 2003, 9:20 AM
From: chris.t
To: shedges

Stephen,

If you could just send us a letter indicating that you have removed the Spydus software from your systems and
destroyed any Spydus documentation.

Regards

Chris

***

Subject: Re: Z-Server working with MORE


Date: Fri, 10 Oct 2003 05:15:31 -0700 (PDT)
From: carolb
To: joshuaf

Joshua,

Nelsonville is back in MORE-- library staff should be able to make requests and Nelsonville locations will start
going into rotas today (staff may want to wait until Monday, or Tuesday if you're closed for the holiday, to print
a picklist).

Have a good weekend,

Carol

9. Conclusion
The end of the story is short and sweet: a little less than two years after we started thinking about moving to an
open-source integrated library system (ILS), we had found a program (Koha), tested it, modified it, and
successfully replaced our former commercial ILS. The e-mails above attest to the fact that it was a busy, but
very interesting, two years.

Since October 2003, the Nelsonville Public Library has continued to be involved in Koha development and
promotion. Koha continues to evolve at blistering speed, and we continue to be satisfied. We consider our
experiment with Koha, which started out as a learning project with no strong expectation of success, to have
been a very rewarding experience. We hope that others who are contemplating a similar experiment will benefit
from our experiences.
