How We Unified a Fragmented Data Import System in Next.js with a Single Command
The Problem: Too Many Scripts, Too Much Chaos
A few weeks ago, our data pipeline for the AustinsElite frontend (a Next.js app) was a mess. We had seven different scripts scattered across the codebase, each handling a specific type of event data import from legacy sources. Some were Python, others Node.js. A few lived in personal folders. None shared the same error handling, logging, or even date formatting.
Every time we needed to ingest new event data, someone had to dig through Git history or Slack threads to find the right script. Worse, inconsistent time parsing meant events would show up with wrong start times—sometimes off by days. We’d patch it, move on, and repeat.
It wasn’t technical debt. It was technical quicksand.
We knew we needed a single source of truth for data imports—one command that could handle any format, validate properly, and fail gracefully. So we built UnifiedImportCommand.
Building the UnifiedImportCommand: One CLI to Rule Them All
The goal was simple: replace all those scripts with one reliable, self-documenting command. But the execution had to be smart. We didn’t want a monolith—we wanted modularity with consistency.
Here’s how we structured it:
- Format Detection Layer: On execution, the command analyzes the input file (CSV, JSON, XML) and looks for structural hints—headers, keys, naming patterns—to auto-detect the source type.
- Modular Processors: Each legacy format gets its own processor module (e.g., LegacyCsvProcessor, V1JsonAdapter). These implement a shared interface with methods like parse(), normalize(), and validate().
- Centralized Pipeline: Once detected, the file flows through a standard pipeline: parse → clean → validate → upsert. Every import follows the same path.
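To make the idea concrete, here is a minimal sketch of that shared contract and pipeline. The type names and fields are illustrative (only parse/normalize/validate and the processor names come from our actual design), and the clean and upsert stages are omitted for brevity:

```typescript
// Raw shape as found in a legacy source; fields here are assumptions for the sketch.
interface RawEvent {
  id: string;
  title: string;
  start: string; // raw date string straight from the file
  end: string;
}

interface NormalizedEvent {
  id: string;
  title: string;
  start: Date;
  end: Date;
}

// Every legacy format implements the same three-step contract.
interface ImportProcessor {
  parse(fileContents: string): RawEvent[];
  normalize(events: RawEvent[]): NormalizedEvent[];
  validate(events: NormalizedEvent[]): string[]; // returns error messages
}

// A toy processor for a hypothetical legacy JSON shape.
class V1JsonAdapter implements ImportProcessor {
  parse(fileContents: string): RawEvent[] {
    return JSON.parse(fileContents) as RawEvent[];
  }
  normalize(events: RawEvent[]): NormalizedEvent[] {
    return events.map((e) => ({ ...e, start: new Date(e.start), end: new Date(e.end) }));
  }
  validate(events: NormalizedEvent[]): string[] {
    return events
      .filter((e) => e.end.getTime() < e.start.getTime())
      .map((e) => `event ${e.id}: end before start`);
  }
}

// The centralized pipeline: parse → normalize → validate (upsert not shown).
function runPipeline(processor: ImportProcessor, fileContents: string): NormalizedEvent[] {
  const normalized = processor.normalize(processor.parse(fileContents));
  const errors = processor.validate(normalized);
  if (errors.length > 0) throw new Error(errors.join("\n"));
  return normalized;
}
```

Adding a new format means writing one class against ImportProcessor; the pipeline never changes.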
We built it as a standalone CLI tool within the Next.js app using commander.js, accessible via npm run import:data -- --file=events_v2.json. No more hunting. No more guessing.
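The flag handling can be sketched with zero dependencies using Node's built-in util.parseArgs (the production command uses commander.js; the --dry-run flag below is hypothetical, added only to show how options compose):

```typescript
import { parseArgs } from "node:util"; // Node 18.3+

// Parses the CLI flags for the import command.
// Usage in the real tool: npm run import:data -- --file=events_v2.json
function parseImportArgs(argv: string[]): { file: string; dryRun: boolean } {
  const { values } = parseArgs({
    args: argv,
    options: {
      file: { type: "string" },
      "dry-run": { type: "boolean", default: false }, // hypothetical flag
    },
  });
  if (!values.file) {
    throw new Error("Usage: npm run import:data -- --file=<path>");
  }
  return { file: values.file, dryRun: Boolean(values["dry-run"]) };
}
```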
And because it lives in the main repo, it’s versioned with the app—no more "but it worked on my machine".
Smarter Parsing, Cleaner Data, Fewer Headaches
The real win wasn’t just consolidation—it was the room it gave us to fix long-standing edge cases.
One of the biggest pain points? Time formatting. Legacy systems used everything from Unix timestamps to MM/DD/YYYY h:mma (yes, really). Some omitted timezones entirely. We’d seen events scheduled for 3 AM local time appear as 3 PM in the UI.
With the unified command, we introduced a centralized TimezoneNormalizer that:
- Detects ambiguous formats using date-fns/parse with fallback chains
- Applies default timezone (Australia/Melbourne) only when explicitly safe
- Logs warnings when timezone info is missing, instead of guessing blindly
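A dependency-free sketch of the fallback-chain idea follows. The real normalizer uses date-fns; the fixed +10:00 offset here is a simplification (actual Melbourne time observes DST, which a proper timezone library handles):

```typescript
interface NormalizedTime {
  date: Date;
  warnings: string[];
}

// Simplified stand-in for Australia/Melbourne; the real tool resolves DST properly.
const DEFAULT_OFFSET = "+10:00";

// Tries each known format in order; warns instead of silently guessing.
function normalizeTimestamp(raw: string): NormalizedTime {
  const warnings: string[] = [];

  // 1. Unix timestamp in seconds: unambiguous.
  if (/^\d{10}$/.test(raw)) {
    return { date: new Date(Number(raw) * 1000), warnings };
  }

  // 2. ISO 8601 with explicit offset or Z: unambiguous, use as-is.
  if (/^\d{4}-\d{2}-\d{2}T.*(Z|[+-]\d{2}:\d{2})$/.test(raw)) {
    return { date: new Date(raw), warnings };
  }

  // 3. MM/DD/YYYY h:mma (e.g. "03/05/2024 3:15pm"): carries no timezone info,
  // so apply the default offset and record a warning rather than guess blindly.
  const m = raw.match(/^(\d{2})\/(\d{2})\/(\d{4}) (\d{1,2}):(\d{2})(am|pm)$/i);
  if (m) {
    const [, mm, dd, yyyy, h, min, ap] = m;
    let hour = Number(h) % 12;
    if (ap.toLowerCase() === "pm") hour += 12;
    warnings.push(`no timezone in "${raw}", assumed ${DEFAULT_OFFSET}`);
    const iso = `${yyyy}-${mm}-${dd}T${String(hour).padStart(2, "0")}:${min}:00${DEFAULT_OFFSET}`;
    return { date: new Date(iso), warnings };
  }

  throw new Error(`unrecognized timestamp format: ${raw}`);
}
```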
We also added post-processing validation hooks. After parsing, the command checks for:
- Duplicate event IDs
- Missing required fields
- Invalid date ranges (e.g., end time before start)
If any fail, it outputs a clean report and exits with code 1—no partial imports. This eliminated half our support tickets related to "ghost events" or scheduling glitches.
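Those three checks can be sketched as a single batch validator (field names are illustrative, not our exact schema):

```typescript
interface ImportedEvent {
  id: string;
  title?: string;
  start?: Date;
  end?: Date;
}

// Runs after parsing, before any upsert. Returns the full error report
// rather than failing fast, so one run surfaces every problem in the file.
function validateBatch(events: ImportedEvent[]): string[] {
  const errors: string[] = [];
  const seen = new Set<string>();

  for (const e of events) {
    // Duplicate event IDs.
    if (seen.has(e.id)) errors.push(`duplicate id: ${e.id}`);
    seen.add(e.id);

    // Missing required fields.
    if (!e.title) errors.push(`event ${e.id}: missing title`);
    if (!e.start || !e.end) errors.push(`event ${e.id}: missing start/end`);
    // Invalid date range: end before start.
    else if (e.end.getTime() < e.start.getTime()) {
      errors.push(`event ${e.id}: end before start`);
    }
  }
  return errors;
}

// In the CLI, a non-empty report aborts the run before anything is written:
//   if (errors.length) { console.error(errors.join("\n")); process.exit(1); }
```

Collecting all errors before exiting is what makes the "clean report, no partial imports" behavior possible.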
The best part? Onboarding time for new devs dropped from "Let me show you the spreadsheet of scripts" to "Run --help and pick a file."
One Command, Many Wins
This refactor wasn’t about elegance for elegance’s sake. It was about making our lives easier—and our data more trustworthy.
Since deploying the UnifiedImportCommand, we’ve cut import-related bugs by 80%, reduced manual intervention to near zero, and made it trivial to add support for new formats (just drop in a new processor).
And yeah—it feels good to delete seven scripts and replace them with one well-tested command. But the real win is knowing that when the next data dump lands, we’re ready. One command. One source of truth. No drama.