Country codes, currency lists, airport identifiers—this is the bedrock of our applications. We call it "reference data," the foundational information used to classify and give context to everything else. It feels solid, stable, and permanent. But here’s the reality: reference data is not static. It evolves. And if you’re not prepared for that evolution, you're building on shaky ground.
Geopolitical boundaries shift, currencies are retired, and even your own internal product categories change. Treating this data as a hardcoded list or a static file copied into every microservice is a recipe for inconsistency, broken integrations, and data chaos. The solution isn't just managing the data; it's managing its history. This is why versioning is a non-negotiable pillar of modern reference data management.
The term "slow-changing" is often used to describe reference data, but "slow" doesn't mean "never." History provides plenty of examples: Swaziland was renamed Eswatini in 2018, the euro retired a dozen national currencies, and the ISO country and currency code lists are amended year after year.
When you copy-paste these lists into your applications, you create data silos frozen in time, leading to a cascade of problems.
Ignoring the temporal nature of reference data introduces significant risks that compound over time and across systems.
Data Inconsistency and Failed Lookups: Microservice A has an updated country list, but Microservice B is three years old. A user record created in Service A with the country "Eswatini" is sent to Service B, which fails because it only knows "Swaziland." The systems are no longer speaking the same language.
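This failure mode is easy to see in miniature. The sketch below uses illustrative, hand-rolled country lists (not real service code) frozen at two different points in time; a value that validates in one service is rejected by the other:

```typescript
// Country lists as each microservice last synced them (illustrative subsets).
const serviceACountries = new Set(["France", "Eswatini", "Japan"]);   // synced after the 2018 rename
const serviceBCountries = new Set(["France", "Swaziland", "Japan"]);  // frozen three years earlier

function isKnownCountry(countries: Set<string>, name: string): boolean {
  return countries.has(name);
}

// A record created in Service A travels to Service B and fails validation there.
console.log(isKnownCountry(serviceACountries, "Eswatini")); // true
console.log(isKnownCountry(serviceBCountries, "Eswatini")); // false: B only knows "Swaziland"
```

The two services disagree not because either has a bug, but because each holds a different, unversioned snapshot of the same dataset.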
Lack of Historical Accuracy: Imagine running an analytical report on sales data from five years ago. Your current application only knows the current sales territory structure. You can't accurately reconstruct how that sale was categorized at the time it happened. The report is flawed because the context—the reference data—is missing its history.
Broken Integrations and Painful Migrations: You decide to update the list of industry codes your company uses. How do you roll this out across a dozen applications and APIs simultaneously without breaking things for your partners and internal teams? It becomes a massive, error-prone coordination effort. With unversioned data, there is no "transition period"; there's only a hard cutover and the inevitable fallout.
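Versioning is what makes a transition period possible. The sketch below is a hypothetical in-memory registry (the names and version labels are invented for illustration): each published version is an immutable snapshot, legacy consumers pin the version they were built against, and unpinned consumers get the latest release:

```typescript
type IndustryCode = { code: string; name: string };

// Each published version of the dataset is kept as an immutable snapshot.
const industryCodeVersions: Record<string, IndustryCode[]> = {
  "2023-01": [{ code: "A01", name: "Crop production" }],
  "2024-06": [
    { code: "A01", name: "Crop production" },
    { code: "A02", name: "Vertical farming" }, // added in the new release
  ],
};

const LATEST = "2024-06";

// Consumers may pin a version; those that don't get the latest release.
function getIndustryCodes(version: string = LATEST): IndustryCode[] {
  const snapshot = industryCodeVersions[version];
  if (!snapshot) throw new Error(`Unknown version: ${version}`);
  return snapshot;
}

// A legacy partner keeps working against the old snapshot while new apps migrate.
console.log(getIndustryCodes("2023-01").length); // 1
console.log(getIndustryCodes().length);          // 2
```

Old and new consumers can now coexist against the same source of truth, and the cutover becomes a per-consumer decision rather than a big-bang event.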
Versioning solves these problems by treating reference data not as a single list, but as a series of snapshots. It's the key to establishing a true single source of truth that respects the dimension of time.
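The "series of snapshots" idea can be sketched as a dataset that answers point-in-time queries. This is a minimal illustration, not the reference.do implementation: each snapshot records when it became effective, and an `asOf` lookup returns the data that was current at a given date:

```typescript
type Snapshot<T> = { effectiveFrom: string; data: T };

class VersionedDataset<T> {
  // Snapshots are kept sorted by effectiveFrom (ISO dates sort lexicographically).
  constructor(private snapshots: Snapshot<T>[]) {}

  // Returns the latest snapshot effective on or before the given date.
  asOf(date: string): T {
    const applicable = this.snapshots.filter(s => s.effectiveFrom <= date);
    if (applicable.length === 0) throw new Error(`No snapshot effective at ${date}`);
    return applicable[applicable.length - 1].data;
  }
}

const countryNames = new VersionedDataset<string[]>([
  { effectiveFrom: "2010-01-01", data: ["Swaziland"] },
  { effectiveFrom: "2018-04-19", data: ["Eswatini"] }, // renamed in 2018
]);

console.log(countryNames.asOf("2015-06-30")); // ["Swaziland"]
console.log(countryNames.asOf("2020-06-30")); // ["Eswatini"]
```

With this shape, the five-year-old sales report from the earlier example can be rebuilt against the reference data as it stood when each sale happened, not as it stands today.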
Building a robust, versioned reference data management system is a significant engineering challenge. You need to design storage schemas, build APIs capable of point-in-time queries, manage access control, and handle the update lifecycle for dozens of public and private datasets.
This is where a managed service like reference.do becomes invaluable. It's designed to be your single source of truth for reference data, with versioning at its core.
Instead of wrestling with static files, your applications make a simple, reliable API call.
With a managed platform, the complexities of versioning are handled for you. Public datasets like ISO codes are kept up-to-date automatically, and you can upload and version your own custom datasets through the same unified API. This means new and old applications alike can query the exact version of the data they need, ensuring consistency, accuracy, and reliability across your entire technology stack.
Here's what that call looks like:

```typescript
import { Do } from '@do-inc/sdk';

const client = new Do(process.env.DO_API_KEY);

// Get the latest list of all ISO 4217 currencies
const currencies = await client.reference.get('iso-4217');
console.log(currencies);
/*
=> [
  { "code": "USD", "name": "United States Dollar", "symbol": "$" },
  { "code": "EUR", "name": "Euro", "symbol": "€" },
  ...
]
*/
```

Stop letting your foundational data crumble. Embrace its evolution. By centralizing and versioning your reference data, you build more robust, maintainable, and accurate systems for the long term.