Discovery platform evaluation
The SWAN UX team and the Discovery and User Experience Advisory Group (DUX) conducted an evaluation of the current landscape of online catalog (OPAC) and discovery platforms. The goal of this evaluation was to establish a shared understanding of the options available to our consortium and determine the future direction of our online catalog.
Phase 1: Survey
The first phase of this evaluation was a survey of all available discovery platforms, with an initial evaluation of each against a set of inclusion and exclusion criteria. These criteria determined if the platform deserved further evaluation in the next phases of our research.
The survey collected 62 discovery platforms. The platforms that met the inclusion criteria are:
- Enterprise/Portfolio - SirsiDynix
- WorldCat Discovery - OCLC
- Aspen - Open Source
- BiblioCore - Bibliocommons
- Encore - III
- Evergreen OPAC - Open Source
- Koha OPAC - Open Source
- Polaris Discovery - III
- SearchIt/ShareIt and Verso - Autographics
Read the Platform Survey Report, including the full list of platforms reviewed and inclusion and exclusion criteria.
Phase 2: System Usability Scale (SUS) analysis
The DUX group will perform an analysis of the remaining 9 platforms against the System Usability Scale (SUS) and assign each a score. Platforms that score lower than Enterprise, our current platform, will be eliminated from the next phase of research. Ideally, 3-4 platforms, including Enterprise, will move on to the next phase.
Read the SUS Analysis Report
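SUS scoring follows a standard formula: each of the ten questionnaire items is answered on a 1-5 scale; odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch in Python (the response values shown are illustrative, not actual DUX data):

```python
def sus_score(responses):
    """Compute a 0-100 SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

# Illustrative only: a strongly positive response pattern
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A uniformly neutral pattern (all 3s) scores 50.0, which is one way to sanity-check an implementation.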
Phase 3: Discovery Platform Feature Matrix
The final 3-4 platforms were evaluated against the Discovery Platform Feature Matrix. This tool is a weighted matrix template that lists important features or goals and assigns each a weight based on importance. DUX assigned weights to a comprehensive list of discovery platform features. The possible weights are as follows:
- 0 - Not important at all
- 1 - Of little importance
- 2 - Of average importance
- 3 - Very important
- 4 - Absolutely essential
Each platform then receives a score for each feature, based on whether it meets, doesn’t meet, or “sort of meets” the requirement:
- 0 - Not present or unknown
- 1 - Future release
- 2 - Partial functionality
- 3 - Full functionality
The weight and score are multiplied, resulting in a weighted score for each feature on each platform. In addition, features are grouped into categories so we can more easily compare scores across broader areas (e.g., which platforms score higher for mobile experience, eResource integration, etc.).
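The weight-times-score calculation and the category roll-up can be sketched in a few lines of Python. The feature names, categories, and numbers below are hypothetical placeholders, not the actual DUX matrix:

```python
# Hypothetical feature weights, grouped by category (0-4 importance scale)
weights = {
    "Mobile experience": {"Responsive design": 4, "Mobile app": 2},
    "eResource integration": {"OverDrive availability": 3},
}

# Hypothetical ratings for one platform (0-3 functionality scale)
ratings = {"Responsive design": 3, "Mobile app": 1, "OverDrive availability": 2}

def category_totals(weights, ratings):
    """Multiply each feature's weight by its rating and sum per category."""
    return {
        cat: sum(w * ratings.get(feat, 0) for feat, w in feats.items())
        for cat, feats in weights.items()
    }

print(category_totals(weights, ratings))
# → {'Mobile experience': 14, 'eResource integration': 6}
```

Comparing these per-category totals across platforms is what lets the group see, for example, that one platform leads on mobile experience while another leads on eResource integration.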
It is important to note that this is a qualitative research method that provides a structure for conversations about the potential features available in different discovery systems. The platform that receives the highest score is not necessarily "the best" platform. However, the scores will be a valuable decision-making tool for the consortium to determine the future direction of our discovery platform.