Open Refine Data Cleaning
1 Introduction
Data cleaning is an important part of the scientific data management and curation process. It involves activities such as the identification and correction of spelling errors (for example, in scientific names), mapping data to agreed controlled vocabularies (e.g. ISO country names), setting correct references to external files (e.g. digital images), and re-arranging data so that they comply with the standards required for specific data processing steps (e.g. imports into the JACQ collection management system at the BGBM).
The BGBM scientific data workflows are based on the assumption that scientists clean their data themselves to meet the BGBM data formats and standards. To support this important step of the scientific process, we provide an open installation of OpenRefine (formerly known as Google Refine) on a BGBM server at http://services.bgbm.org/ws/OpenRefine/.
!! Please note that the BGBM OpenRefine server is intended only for use as a tool for working on your data. It does not offer functions for storing data permanently and safely. We will, however, not delete any data without notice !!
Like Microsoft Excel, OpenRefine operates primarily on tabular data, with rows representing, for example, a measurement, an occurrence record, or a specimen, and columns representing specific attributes (e.g. collectors, locality, longitude, latitude). Unlike Microsoft Excel, OpenRefine focuses specifically on functions and operations that are particularly useful for cleaning and arranging scientific data. This includes, for example, the automated detection and merging of (potential) duplicate data items such as collector names that have been captured with slightly different spellings but refer to the same persons or person teams. The BGBM BDI team is constantly adding functions to the OpenRefine server installation that are particularly useful in the context of botanical collection data processing. Please contact us for further information (email).
2 (Very) first steps
OpenRefine is extremely powerful, and we will not attempt to compete with the many excellent manuals and tutorial videos that already exist. Please refer to the Resources section for further information. Here, we would like to guide you through your very first steps with OpenRefine, using a BGBM Collection Data Form as an example:
- Open your web browser and navigate to the BGBM OpenRefine installation at http://services.bgbm.org/ws/OpenRefine/.
- Press the “Choose File/Browse” button and select a CDF file of your choice.
- Press “Next”.
- You will now be greeted with a first preview of your data. Since your CDF file consists of several (Excel) worksheets, OpenRefine needs to know which of them should be considered for importing data. In this case, de-select all worksheets except the collection data form worksheet (see Fig. 1). OpenRefine also needs to know which row contains the relevant header of your worksheet. For the CDF, you will have to “ignore first 3 line(s)” and “parse next 1 line(s)”.
- Finally, press the “Create project” button and you are ready to go.
Fig. 1: Selection of worksheets and header row.
3 E-Mesh Extension
3.1 Overview
The e-mesh extension is an OpenRefine plugin for managing and validating botanical specimen data. It provides specialized tools for data quality checking, field normalization, and enrichment through integration with the JACQ herbarium management system. The extension supports two primary data standards: CDF (Collection Data Form) and JACQ import files. Further data standards can be added as required.
This section provides a high-level overview of the extension's architecture, components, and workflows. For detailed information about specific subsystems, see:
- Client-side UI components: Client-Side UI Extensions
- Server-side integration: Server-Side Components
- Validation rules and standards: Configuration and Standards
- Complete workflow guides: Data Workflows
3.2 System Architecture
The e-mesh extension operates as a three-tier system within OpenRefine, consisting of client-side JavaScript UI enhancements, server-side Java data processing, and configuration-driven validation rules.
3.3 Core Data Standards
The extension enforces data quality through two constraint schemas that define required fields, data types, controlled vocabularies, and other validation rules.
3.3.1 CDF Standard Fields (Subset)
The CDF standard defines specimen documentation requirements:
| Example Categories | Example Fields | Example Validation |
|---|---|---|
| Collection Data | Collectors, Collection Date from, Collection Date to | Required, ISO-8601 dates |
| Geographic | Latitude, Longitude, Geocode Method | Required, decimal degrees, range validation |
| Taxonomic | Family, Genus, Specific Epithet | Required fields |
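To make the schema structure concrete, here is a hypothetical sketch of a single constraint entry, written as the JavaScript object a client-side validator could consume. The key names are illustrative assumptions only; constraints-CDF.json (cited below) remains the authoritative definition:
// Hypothetical shape of one constraint entry; the key names are
// assumptions for illustration, not the actual schema of constraints-CDF.json.
const exampleConstraints = {
  "Latitude": {
    required: true,                 // field must have a value
    type: "number",                 // decimal degrees
    minimum: -90,                   // range validation
    maximum: 90,
    aliases: ["lat", "latitude"]    // alternative column names for auto-mapping
  }
};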
Sources
src/main/resources/rules/constraints-CDF.json (lines 1–575)
3.3.2 JACQ Standard Fields (Subset)
The JACQ standard focuses on herbarium specimen documentation for import into the JACQ herbarium management system:
| Example Categories | Example Fields | Example Validation |
|---|---|---|
| Specimen Identity | HerbNummer, collectionID, CollectionNumber | Required |
| Collection | Sammler, Datum, Datum2 | Required collector and date |
| DMS Coordinates | coord_NS, lat_degree, lat_minute, lat_second | Dependent required, 0–90° latitude |
Sources
src/main/resources/rules/constraints-JACQ.json (lines 1–137)
3.4 Major Subsystems
3.4.1 Column Menu Extensions
Provides custom operations accessible from column headers, organized into four functional groups.
Sources
module/scripts/project/data-table-column-header/column-header-extend-menu.js (lines 1–1532)
3.4.2 Schema Dialogs
Two specialized dialogs enable mapping project columns to standardized schemas:
- RenameDialog: normalizes project column headers to CDF or JACQ standards
- ExportDialog: maps columns and exports data in standardized formats with validation
Both dialogs implement a four-stage auto-mapping algorithm (a sketch of the similarity stage follows this list):
1. Check whether the column name matches any alias in constraints[field].aliases
2. Exact case-insensitive match against the schema field name
3. Predefined mapping lookup from mappings.json
4. Cosine similarity matching (threshold: 0.75)
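The similarity stage can be illustrated with the following sketch, which computes cosine similarity over character-bigram frequency vectors. The helper names and the bigram tokenization are assumptions for illustration, not the actual code in rename-dialog.js:
function bigrams(s) {
  // Character-bigram frequency vector for a lower-cased string.
  const grams = {};
  const t = s.toLowerCase();
  for (let i = 0; i < t.length - 1; i++) {
    const g = t.slice(i, i + 2);
    grams[g] = (grams[g] || 0) + 1;
  }
  return grams;
}

function cosineSimilarity(a, b) {
  const va = bigrams(a), vb = bigrams(b);
  let dot = 0, normA = 0, normB = 0;
  for (const g in va) {
    normA += va[g] * va[g];
    if (g in vb) dot += va[g] * vb[g];
  }
  for (const g in vb) normB += vb[g] * vb[g];
  return normA && normB ? dot / Math.sqrt(normA * normB) : 0;
}

// Stage 4: accept the best-scoring schema field at or above the threshold.
function bestMatch(columnName, schemaFields, threshold = 0.75) {
  let best = null, bestScore = threshold;
  for (const field of schemaFields) {
    const score = cosineSimilarity(columnName, field);
    if (score >= bestScore) { best = field; bestScore = score; }
  }
  return best;
}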
Sources
module/scripts/project/data-table-header-all/rename-dialog.js (lines 1–404)
module/scripts/project/project-controls-export/exporter-dialog.js (lines 1–539)
3.4.3 Validation System
Real-time client-side validation using a constraint-driven rule engine:
Validation Rule Classes
All validation rules implement a common interface:
class ExampleRule {
validate(value) { /* Returns boolean */ }
getMessage(columnName) { /* Returns error message string */ }
getSuggestion(columnName) { /* Returns suggestion string */ }
getTransformationFunction() { /* Returns function or empty string */ }
}
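As a concrete illustration, a minimal rule in this shape might look as follows, loosely modeled on the RequiredRule described in the list below; the actual implementation in validator.js may differ in detail:
// A minimal sketch of one concrete rule implementing the interface above;
// message wording and helper logic are assumptions for illustration.
class RequiredRuleSketch {
  validate(value) {
    // A value passes when it is neither null, undefined, nor an empty string.
    return value !== null && value !== undefined && String(value).trim() !== "";
  }
  getMessage(columnName) {
    return "Column '" + columnName + "' requires a value.";
  }
  getSuggestion(columnName) {
    return "Enter a value in column '" + columnName + "'.";
  }
  getTransformationFunction() {
    // No automatic fix exists for a missing required value.
    return "";
  }
}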
Rules
The validator.js file defines 18 validation rule classes:
- RequiredRule (lines 39-55): Validates that a field has a non-null, non-undefined, non-empty value.
- DependentRequiredRule (lines 58-121): Validates that a field is required when any of its dependent columns have values (OR logic).
- StringRule (lines 123-142): Validates that a value is of type string.
- NumberRule (lines 144-187): Validates that a value is a valid number by parsing with parseFloat() and ensuring the entire string is numeric.
- BooleanRule (lines 189-208): Validates that a value is of type boolean.
- DateRule (lines 210-229): Validates that a value can be parsed as a valid date using Date.parse().
- MySqlIntRule (lines 231-253): Validates that a value matches the MySQL integer format (optional minus sign followed by digits only).
- IntMaximumRule (lines 255-282): Validates that an integer value is less than or equal to a specified maximum.
- IntMinimumRule (lines 284-311): Validates that an integer value is greater than or equal to a specified minimum.
- MySqlFloatRule (lines 313-333): Validates that a value can be parsed as a MySQL floating-point number.
- DateTimeRule (lines 335-355): Validates that a value matches the MySQL datetime format YYYY-MM-DD HH:MM:SS.
- TimeRule (lines 357-379): Validates that a value matches the MySQL time format HH:MM:SS.
- ISO8601CalendarDateRule (lines 381-414): Validates that a value is a valid ISO 8601 calendar date in the format YYYY-MM-DD, including a check that the date actually exists.
- LatitudeDecimalDegreeRule (lines 416-445): Validates that a value is a valid latitude in decimal degrees (between -90 and 90).
- LongitudeDecimalDegreeRule (lines 447-476): Validates that a value is a valid longitude in decimal degrees (between -180 and 180).
- EnumRule (lines 478-504): Validates that a value matches one of the allowed values in a controlled vocabulary (case-insensitive).
- LengthRule (lines 506-535): Validates that a string value has exactly the expected character length and provides a transformation function to truncate to that length.
- TextContainsRule (lines 537-565): Validates that a value contains at least one of the required substrings (case-insensitive).
These rules are instantiated and applied by the Validator class's validateCell() method (lines 580-801), which checks constraints in a specific order: required rules first, then dependent required rules, then type-specific rules, and finally value constraint rules.
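A minimal sketch of this checking order, with simplified signatures, is shown below; the real validateCell() (lines 580-801) handles many more constraint kinds and the rule queueing is an assumption here:
// Sketch of the rule-ordering described above; simplified signatures.
class ValidatorSketch {
  validateCell(value, constraint, columnName) {
    const rules = [];
    if (constraint.required) rules.push(new RequiredRuleSketch()); // 1. required
    // 2. dependent required rules would be queued here when configured
    // 3. type-specific rules (string, number, date, ...) follow
    // 4. value constraint rules (enum, length, range, ...) come last
    for (const rule of rules) {
      if (!rule.validate(value)) {
        return { valid: false, message: rule.getMessage(columnName) };
      }
    }
    return { valid: true };
  }
}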
Visual Feedback:
- Green: Valid cells
- Red: Invalid cells (with tooltip showing violation)
- Gray: No rule applies
- Fix buttons: Quick-action remediation for common issues
Sources
module/scripts/project/data-table-header-all/validation-cell-renderer-jacq.js (lines 1–224)
module/scripts/project/data-table-header-all/validation-cell-renderer-cdf.js (lines 1–224)
module/scripts/validator.js (lines 1–840)
3.4.4 JACQ API Integration
The JacqClient class provides HTTP communication with the JACQ API.
Key Features:
- Caching: Caffeine cache with 10,000-entry maximum and 5-minute expiration
- Rate Limiting: Bursty rate limiter allowing 5 requests/second
- Concurrency Control: Bulkhead pattern limiting 6 concurrent requests
- Retry Logic: Exponential backoff (150ms to 2s) with jitter, honors Retry-After headers
- Error Handling: Custom TransientHttpException for retriable failures
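A condensed sketch of how these policies could be wired together is shown below. The text above names only Caffeine explicitly, so pairing it with Resilience4j for the rate limiter, bulkhead, and retry policies is an assumption, as is the retry attempt count; JacqClient.java (cited below) is the actual source:
// Assumed wiring of the resilience policies described above.
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import io.github.resilience4j.bulkhead.Bulkhead;
import io.github.resilience4j.bulkhead.BulkheadConfig;
import io.github.resilience4j.core.IntervalFunction;
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;
import java.time.Duration;

public class JacqClientSketch {
    // Stand-in for the extension's custom exception for retriable failures.
    static class TransientHttpException extends RuntimeException {}

    // Caching: 10,000-entry maximum, 5-minute expiration.
    private final Cache<String, String> cache = Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(Duration.ofMinutes(5))
            .build();

    // Rate limiting: 5 requests per second.
    private final RateLimiter rateLimiter = RateLimiter.of("jacq",
            RateLimiterConfig.custom()
                    .limitRefreshPeriod(Duration.ofSeconds(1))
                    .limitForPeriod(5)
                    .build());

    // Concurrency control: at most 6 concurrent requests.
    private final Bulkhead bulkhead = Bulkhead.of("jacq",
            BulkheadConfig.custom().maxConcurrentCalls(6).build());

    // Retry: exponential backoff from 150 ms with jitter; 5 attempts keeps
    // the delay near the 2 s cap (attempt count is an assumption).
    private final Retry retry = Retry.of("jacq",
            RetryConfig.custom()
                    .maxAttempts(5)
                    .intervalFunction(IntervalFunction.ofExponentialRandomBackoff(150L, 2.0))
                    .retryExceptions(TransientHttpException.class)
                    .build());
}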
Sources
src/main/java/org/openrefine/extensions/emesh/jacq/JacqClient.java (lines 1-355)
3.5 Primary Data Workflows
The extension supports three primary workflows that users execute to clean, enrich, and export specimen data:
3.5.1 Data Enrichment Workflow
Users select enrichment operations from column menus. GREL expressions invoke Java functions that call JacqClient, which applies resilience policies before querying the JACQ API. Results populate new columns.
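For illustration, such an enrichment step could be driven by a one-line GREL expression in an “Add column based on this column” dialog. The function name jacqLookup() below is a placeholder, not the actual name registered by the extension:
if(isBlank(value), null, jacqLookup(value))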
Sources
module/scripts/project/data-table-column-header/column-header-extend-menu.js (lines 389-495)
src/main/java/org/openrefine/extensions/emesh/jacq/JacqClient.java (lines 108-126)
3.5.2 Data Validation Workflow
Users toggle validation renderers from the View menu. Renderers fetch constraint definitions, then use the Validator class to check each cell against applicable rules. Visual feedback is rendered directly in the data table.
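A schematic sketch of this renderer flow is shown below; the function and property names are assumptions, not the actual renderer API, but the color states match the visual feedback described above:
// Sketch of per-cell rendering against the constraint schema.
function renderCellState(value, columnName, constraints, validator) {
  const constraint = constraints[columnName];
  if (!constraint) return { color: "gray" };               // no rule applies
  const result = validator.validateCell(value, constraint, columnName);
  return result.valid
    ? { color: "green" }                                   // valid cell
    : { color: "red", tooltip: result.message };           // violation as tooltip
}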
Sources
module/scripts/project/data-table-header-all/header-all-extend-menu.js (lines 92-157)
3.5.3 Schema Export Workflow
Users open the export dialog, which auto-maps project columns to schema fields. After manual adjustments, the SchemaExporter validates required fields, applies mappings and fixed values, and generates a standards-compliant output file.
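The export steps can be sketched as follows, with assumed helper names and a simplified record format; SchemaExporter.java (cited below) is the actual source and may differ substantially:
// Sketch of the export steps: validate required fields, apply mappings
// and fixed values, write records. All names here are assumptions.
import java.io.IOException;
import java.io.Writer;
import java.util.List;
import java.util.Map;

public class SchemaExporterSketch {
    public void export(List<Map<String, String>> rows,
                       Map<String, String> columnToField,   // project column -> schema field
                       Map<String, String> fixedValues,     // schema field -> constant value
                       List<String> requiredFields,
                       Writer out) throws IOException {
        // 1. Every required schema field must be mapped or given a fixed value.
        for (String field : requiredFields) {
            if (!columnToField.containsValue(field) && !fixedValues.containsKey(field)) {
                throw new IllegalStateException("Required field not mapped: " + field);
            }
        }
        // 2. Apply mappings and fixed values, then write each record.
        for (Map<String, String> row : rows) {
            StringBuilder record = new StringBuilder();
            columnToField.forEach((column, field) ->
                    record.append(field).append('=').append(row.get(column)).append(';'));
            fixedValues.forEach((field, value) ->
                    record.append(field).append('=').append(value).append(';'));
            out.write(record.toString());
            out.write('\n');
        }
    }
}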
Sources
module/scripts/project/project-controls-export/exporter-dialog.js (lines 1-539)
src/main/java/org/openrefine/extensions/emesh/exporters/SchemaExporter.java (lines 1-221)
4 Resources
The OpenRefine website (http://openrefine.org/) provides a very nice compilation of tutorial videos that will help you understand the basic concepts of data exploration, cleaning, and transformation. There is also a written manual in the form of a wiki site with common use cases and links to external tutorials in different languages (https://github.com/OpenRefine/OpenRefine/wiki).