Category Archives: Electronic Resources

Stats here, Stats there, Stats everywhere

I decided to take a break from ERM for a few hours today.  Something caught my eye: our Google Analytics statistics for the library’s website, our Summon (discovery service) usage reports, 360 usage statistics (our A-Z management system for e-books, e-journals, and databases), and LibGuides / LibAnswers.

Some background:
Google Analytics gives us insight into how users interact with our website. We can get information such as:

  • number of page views
  • number of unique page views
  • average time spent on a page/screen / or set of screens
  • bounce rate: the share of sessions in which a person leaves the site after viewing a single page, without interacting further (see the sketch after this list)
  • new and returning visitors
  • browser and operating system used
  • device category: desktop, mobile, or tablet
  • user demographics: age and gender
  • and many more
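As a quick illustration of the bounce-rate metric, here is a minimal sketch of the arithmetic, assuming sessions have been exported as lists of page paths. The data shape and the sample values are assumptions for illustration, not a Google Analytics API call:

```python
from typing import List

def bounce_rate(sessions: List[List[str]]) -> float:
    """Percentage of sessions consisting of a single page view."""
    if not sessions:
        return 0.0
    single_page = sum(1 for pages in sessions if len(pages) == 1)
    return 100.0 * single_page / len(sessions)

# Example: three sessions exported from an analytics tool (illustrative data only).
sessions = [
    ["/home"],                               # bounced
    ["/home", "/databases", "/ejournals"],   # browsed further
    ["/libguides"],                          # bounced
]
print(f"Bounce rate: {bounce_rate(sessions):.1f}%")  # Bounce rate: 66.7%
```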

Summon (discovery layer) stats provide:

  • Visitor Profiles:
    • Referring Source
    • Geo Location
    • Geo Map Overlay
    • Network Location
    • Domains
  • Technical Profiles
    • Browser
    • Platform
    • Browser and Platform combos
    • Connection Speed
  • Top Queries

(Taken from the Summon Knowledge Center)

360 Usage Statistics provide:

  • Click-Through: A variety of views into the number of times users click on article, journal, ebook, and database links in ProQuest discovery tools.
  • Search and Browse: Shows what types of searches your users are conducting (for example, Title Contains and ISSN Equals) and what subjects they are browsing within ProQuest discovery tools.
  • 360 Link Usage: Reports about where your 360 Link users are starting their research, and how much use 360 Link is getting.
  • 360 Search Usage: Shows the amount of 360 Search federated search sessions and searches.

(Taken from the 360 Core and Intota Knowledge Center)

LibGuides provides statistics on:

  • LibGuides homepage tracking (on a daily and monthly basis)
  • detailed statistics for all guides
  • session tracking
  • browser / OS tracking
  • search term tracking
  • assets
  • content summary

(Taken from the LibGuides Dashboard)

LibAnswers also provides statistics on:

  • general statistics on inquiries
  • FAQs
  • turnaround time

among others. (Taken from the LibAnswers Dashboard.)

From all those stats, I started to wonder:

  • What do the terms/keywords that users enter at the various access points tell us?
  • If a certain keyword/term appears constantly, does it point to a gap in our information, or to a lack of awareness of an existing service? (See the tallying sketch below.)
  • Can we improve our library homepage’s usability when bounce rates are high?

And there are more questions than just the ones listed above.

How can all these stats influence the way we provide services to our library users, whether online or physical? After all, we are here to serve our users.
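On the recurring-keyword question above, here is a minimal sketch of how one might tally exported search queries to spot terms that keep coming back. The file name, the one-query-per-line format, and the stop-word list are assumptions for illustration:

```python
from collections import Counter
import re

# Assumed export: one search query per line (e.g. saved from Summon's Top Queries
# report or LibGuides' search term tracking) in a file named "queries.txt".
STOP_WORDS = {"the", "of", "and", "in", "a", "an", "for", "to"}

def top_terms(path: str, n: int = 20):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            terms = re.findall(r"[a-z0-9']+", line.lower())
            counts.update(t for t in terms if t not in STOP_WORDS)
    return counts.most_common(n)

for term, freq in top_terms("queries.txt"):
    print(f"{term:<30}{freq}")
```

A term that keeps surfacing, say the name of a database we already subscribe to, would suggest an awareness problem rather than a collection gap.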

And we have another ‘toy’

(Sketchnote: project management)
After months of research, communication, and discussion, our ERM (Electronic Resources Management) team finally got what we wanted … a new ERM system (ERMS) to replace the obsolete one. Thank God.

We decided to go with ProQuest 360 Resource Manager.  One of the advantages is that we already use 360 Core, 360 MARC Updates, and Summon, all on the same platform and from the same vendor, which should make for seamless integration.

Now that I have finished the recommendation report, I am planning the Implementation Phase of this ERMS. Exciting times.

The road ahead will be challenging and hopefully rewarding.  Given that we only have a small team with varied levels of expertise, there will be a learning curve (hopefully NOT a steep one).  Thinking back, it was interesting to note the technologies we had used before in relation to ERM:

  • Innovative Millennium ERM > Replaced by Proquest 360 Resource Manager
  • CASE > Replaced by Proquest 360 Core
  • Encore > Replaced by Proquest Summon
  • MS Outlook > Complementing the future ERMS
  • MS SharePoint > Storage point

as well as the challenges that we faced.

I’m thankful and glad to have had the opportunity to lead the projects listed above.

Stay tuned.

Frozen in DC

I attended the Coalition for Networked Information (CNI) conference in Washington, DC, in 2016. I stayed at the Capital Hilton, which was quite near the White House. Weather-wise: freezing cold but no snow.  The flight in was good, with no delays.

Before the conference started, I had the chance to explore DC.  Due to time constraints, I only managed to walk to the White House and the nearby streets.  It was not like the 2010 ALA conference, where I had the chance to visit the National Mall and the nearby Library of Congress.  One incident I won’t forget is the hotel evacuation due to a fire incident at around 4 am.  It was freezing cold.  We had to stand with the others in the street, watching the fire brigade in action.  Fortunately, the hotel told us we could wait in nearby hotel lobbies or chill at Starbucks.

Back to the conference itself.  There were a bunch of interesting project briefings given by various universities.  The ones I attended were:

  • Research Software Preservation/Sharing
  • Cost of Open Access: Pay it Forward
  • Scholars@Cornell: Visualizing Scholarly Record
  • Expanding Research Data Services
  • The Future of Finding at Oxford
  • Institutional Learning Analytics

Below are some of the CNI conference videos:

The Cost of Open Access to Journals: Pay It Forward Project Findings from CNI Video Channel on Vimeo.

Makerspaces, Virtual Reality, The Internet of Things et alia: Stories from CNI Video Channel on Vimeo.

In a nutshell: it was my maiden CNI conference.  I found it useful, as there were a lot of takeaways as well as insights into topics new to me.  Given my interests in web discovery and virtual reference, several briefings caught my attention.  One of them was “The Future of Finding at Oxford”.  They have published their report online; it is very comprehensive, outlining their aims, objectives, project methodology, and related matters.  (I am still reading it.)

I also googled previous CNI briefings on YouTube and discovered an interesting talk on virtual reference.

One more thing: I should have listened to my wife about bringing just a few clothes for the conference (I was there for about 4.5 days). The customs officers were eyeing my ‘huge’ luggage and decided to take a look at it.  Out came the Nescafe coffee bottle, sugar sticks, four sweaters, biscuits, and so forth.  Before clearing me, the officer commented that I should be well insulated during my stay there 🙂

 

Dealing with unmanageable stuff


How do you deal with something that is not easy to manage? By this, I mean issues related to the excessive or systematic downloading of the library’s subscribed electronic resources.

A little background to this: libraries sign license agreements with electronic resources publishers to ensure that users do not violate or infringe copyright.  In addition, we also have to ensure that there are no ‘crawling’ activities, that is, that our users do not deploy software to download large numbers of documents within a very short period of time.  When either of these happens (excessive or systematic downloading), the publisher triggers an automated access block on the suspected campus IPs.  As a result, users are unable to access that particular electronic resource.

The last few weeks have been pretty busy with a spike in such incidents.  I had to liaise with the publishers to get our access reinstated and to assure them that we would investigate the matter thoroughly on our end.  In addition, I had to contact our counterparts on the campus IT Security Team.  Once we have identified the ‘perpetrators’, my next task will be to contact them and ensure that they do not repeat the ‘act’ (if they are indeed guilty of it).
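For the investigation side, here is a minimal sketch of how one might flag suspect IPs from access logs. It assumes an Apache combined-style proxy log and a "50 PDF requests in 10 minutes" threshold; the file name, log format, and threshold are all assumptions for illustration, not what the publishers or our IT team actually use:

```python
import re
from collections import defaultdict
from datetime import datetime, timedelta

LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)')
WINDOW = timedelta(minutes=10)   # assumed detection window
THRESHOLD = 50                   # assumed requests per window

def flag_heavy_downloaders(log_path: str):
    """Return IPs with THRESHOLD or more full-text requests in any WINDOW."""
    hits = defaultdict(list)
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m:
                continue
            ip, ts, url = m.groups()
            if ".pdf" not in url.lower():      # crude proxy for full-text downloads
                continue
            hits[ip].append(datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z"))

    flagged = {}
    for ip, times in hits.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):        # sliding window over timestamps
            while t - times[start] > WINDOW:
                start += 1
            count = end - start + 1
            if count >= THRESHOLD:
                flagged[ip] = max(flagged.get(ip, 0), count)
    return flagged

for ip, count in flag_heavy_downloaders("proxy_access.log").items():
    print(f"{ip}: up to {count} PDF requests within {WINDOW}")
```

In practice I would cross-check anything this flags with the IT Security Team before contacting anyone, since shared machines and NAT can make a single IP look like a single user.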

We have been very proactive in ensuring that our users are well-informed about these issues.  Some of our initiatives include:

  • Announcement on our library website
  • Email blast
  • Posters at strategic places on campus, such as the campus diner and congregation areas in the library
  • Library training sessions
  • Electronic Billboards
  • Informal meetings / chats with our users on this issue
  • Re-visiting our license agreements and renegotiating with the electronic publishers

Hopefully, we can reduce the number of such incidents, though eradicating them entirely would be rather tough.  Like I said before, sometimes when it rains, it pours …

 

User Experience: “You are NOT your user”

I’ve started our user experience interviews on the use of our new discovery services, namely Summon and the A-Z portal.  One of the objectives is to find out how our users behave when they search and browse our electronic resources.  Participants include:

  • Faculty members
  • Post-docs
  • Students

We are a digitally born library, and our e-resources far outnumber our print collection. Therefore, it is imperative for us to know how best to align our discovery services and other value-added services to meet our users’ needs.

Some questions posed:

  • Have you used Summon / 360? If not, why not?
  • How often do you conduct your research?
  • What obstacles do you face during your research process?
  • How can we, the library, help you to make your user experience (Summon/360) better?

And taking a leaf from the book I am currently reading: “You are NOT your user.” Admit it.


Of Search Boxes and library websites Pt 1

Recently, I conducted an exploratory study of library website search boxes (5 Oct 2016).  I wanted to know what types of search boxes are deployed by libraries that use Summon as their web-scale discovery layer. I googled and identified around 83 ‘Summon-ed’ libraries from Australia, Canada, Denmark, Ireland, Saudi Arabia, New Zealand, Singapore, Sweden, the UAE, the United Kingdom, and the United States.

There are several distinct types of search boxes:

  • Simple search boxes
  • Multi-Tabbed search boxes
  • Search boxes with radio buttons
  • Search boxes with drop-down features
  • Combinations of multi-tabbed / radio button / drop-down

I noted that libraries tend to go with either simple or multi-tabbed search boxes.  Out of the 83 chosen sites, 35 (42%) deployed simple search boxes while 39 (47%) deployed multi-tabbed ones; the remaining 9 (11%) used radio buttons, drop-downs, or combinations.

(Data table: search box type counts)
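The percentages above are simple arithmetic, but here is the tally as a small sketch; the minority categories are lumped together as "other", since I did not break those 9 sites down further:

```python
# Search box types observed across the 83 Summon libraries (5 Oct 2016 sample).
counts = {
    "simple": 35,
    "multi-tabbed": 39,
    "other (radio / drop-down / combos)": 9,
}

total = sum(counts.values())   # 83
for kind, n in counts.items():
    print(f"{kind:<40}{n:>3}  ({100 * n / total:.0f}%)")
```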

I was curious whether a simple or a multi-tabbed search box would be the better choice to deploy on a library website.  In this Google era, most searchers just enter their search phrases or keywords into the search box and expect relevant results at the top of the list.  (Bear in mind that we have a multitude of users out there: experts, intermediates, and novices.)

Results are often determined by some of these factors (not exhaustive):

  • search terms / phrases used
  • metadata used to describe the library’s resources
  • storage medium of these resources
  • level of IT knowledge of the users
  • exposure to any form of training

Coming back to the library websites, I did some sample known-item searches (for example, the exact title of an electronic journal) on random library websites and noted that:

  • Most of them catalog their electronic resources (e-books, e-journals, etc.) in their OPAC (classic catalog), even though there is evidence that they also have an A-Z portal for electronic resources

Our library is ‘moving’ all our electronic resources out of the classic catalog and tapping the A-Z portal as the resource base.  We are hoping that Summon and the A-Z portal can ‘talk’ to each other, and we are implementing a feature in the A-Z portal so that these e-resources can be easily searched and located in Summon.  I’m hoping this works; otherwise, I may have to rethink and consider other alternatives.  Stay tuned.

 

 

 

IP Registry

I got to know about this from one of my colleagues, and it looks promising. For my electronic resources peers who are interested in finding out more, check their website:  IPRegistry.org

One of the benefits: it will “make it easier for libraries to communicate any changes in their authentication details to all publishers who sign up to use the service, saving them significant time and reducing errors. The registry already contains 1.5 billion validated IP addresses for over 60,000 content licensing organisations worldwide.” (Taken from their news release.)
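On the authentication-details point: most of what we send publishers is campus IP ranges, and before sharing them it helps to validate the entries and collapse them into clean CIDR blocks. Here is a minimal sketch using Python's standard ipaddress module; the "first-last" ranges below are made-up example data, not our actual campus addresses:

```python
import ipaddress

# Hypothetical campus ranges as "first-last" pairs, e.g. copied from a spreadsheet.
raw_ranges = [
    "192.0.2.0-192.0.2.255",
    "192.0.3.0-192.0.3.127",
    "198.51.100.64-198.51.100.95",
]

networks = []
for entry in raw_ranges:
    first, last = (ipaddress.ip_address(p.strip()) for p in entry.split("-"))
    # Convert each first-last pair into the equivalent CIDR network(s).
    networks.extend(ipaddress.summarize_address_range(first, last))

# Merge adjacent or overlapping blocks into the smallest set of CIDR networks.
for net in ipaddress.collapse_addresses(networks):
    print(net)
```

Collapsing the ranges first keeps the submission short and avoids overlapping entries, which is usually what publishers and a registry like this would want to see.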

Hope this helps.
