
Migrating to Motia: How We Scaled Venue-Event Matching with AI-Powered Address Normalization

The Problem: Why Matching Venues and Events Was Breaking

At AustinsElite, we’ve always tied events to venues — but not in a clean, database-friendly way. Over years of organic growth, venue addresses were entered by hand, copied from emails, scraped from websites, or pulled from third-party APIs. The result? Chaos. "The Fillmore", "Fillmore Auditorium", and "The Fillmore - San Francisco" all referred to the same place, but our system saw them as strangers.

This wasn’t just a UX issue. It broke analytics, reporting, and our ability to surface related events. We needed a way to correlate events and venues reliably — even when the data didn’t match on the surface. Fuzzy matching helped, but it wasn’t enough. We needed intelligence, not just similarity thresholds.

Enter Motia.

Introducing Motia: The AI-Powered Matching Engine

Motia started as a sidecar service to AustinsElite’s Laravel 12 stack — a dedicated microservice for resolving venue identities across inconsistent inputs. Its job? Take a messy address or venue name and return a canonical, normalized representation backed by confidence scoring and geolocation.

We built Motia as a standalone TypeScript service (AE Motia Hub) from day one, knowing it would eventually serve multiple systems. The initial commit on January 7th laid down the foundation: strict TS config, modular service layers, and a clean interface for address parsing and matching. But the real magic came in the integration.

When the feat/switching-to-motia branch merged into AustinsElite (Laravel 12), we replaced our brittle regex and Levenshtein-based matching with real-time calls to Motia. Every time an event was created or updated, we’d send venue details to Motia, which would:

  • Parse and tokenize the input using NLP-inspired rules
  • Normalize street names, city spellings, and venue prefixes
  • Query a hybrid index (PostGIS + Elasticsearch) for potential matches
  • Apply an AI confidence model trained on past manual corrections
  • Return a top candidate with structured metadata
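To make the first stages concrete, here is a toy normalizer and index lookup. Every name, regex, and rule below is invented for illustration; Motia's real pipeline is far more involved, and the 0.9 confidence is a placeholder, not a real model output.

```typescript
// Illustrative sketch only -- not Motia's actual code.

interface MatchCandidate {
  canonicalName: string;
  confidence: number; // 0..1, would come from the AI model in practice
}

// Stages 1-2: tokenize and normalize a raw venue string.
function normalizeVenue(raw: string): string {
  return raw
    .toLowerCase()
    .replace(/[-,].*$/, "")         // drop trailing qualifiers ("- San Francisco")
    .replace(/\b(the|at)\b/g, "")   // strip common venue prefixes
    .replace(/\bauditorium\b/g, "") // collapse generic suffixes
    .replace(/\s+/g, " ")
    .trim();
}

// Stages 3-5, reduced to a plain map lookup standing in for the
// PostGIS + Elasticsearch hybrid index.
function resolve(
  raw: string,
  index: Map<string, string>
): MatchCandidate | null {
  const canonical = index.get(normalizeVenue(raw));
  return canonical ? { canonicalName: canonical, confidence: 0.9 } : null;
}
```

With rules like these, "The Fillmore", "Fillmore Auditorium", and "The Fillmore - San Francisco" all collapse to the same key, which is the whole point of the pipeline.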

This wasn’t just faster — it was smarter. We reduced false negatives by 68% in early testing, and the system learned from every correction.

AI That Actually Works: How We Normalized Addresses Without Going Full LLM

One of our early debates was whether to use a full LLM for address parsing. We considered fine-tuning a small model, but latency and cost killed that idea. Instead, we went hybrid: rule-based tokenization layered with a lightweight ML model for disambiguation.

For example, "123 Main St, Austin, TX" gets split into components using libpostal-inspired rules (libpostal itself is trained largely on OpenStreetMap data). But when the input is "Upstairs at The Highball, S 1st St", the AI model kicks in, trained on thousands of real-world Austin venue entries and their canonical forms.
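The rule-based side can be sketched as a simple splitter for comma-separated US-style addresses. This is a hypothetical example, not our production rules; real libpostal-style parsing handles vastly more variation.

```typescript
// Hypothetical rule-based splitter for simple US-style addresses.

interface AddressParts {
  houseNumber?: string;
  street?: string;
  city?: string;
  state?: string;
}

function parseSimpleAddress(input: string): AddressParts {
  const [streetPart, city, state] = input.split(",").map((s) => s.trim());
  // "123 Main St" -> houseNumber "123", street "Main St"
  const match = streetPart.match(/^(\d+)\s+(.+)$/);
  return {
    houseNumber: match?.[1],
    street: match ? match[2] : streetPart,
    city,
    state,
  };
}
```

An input like "Upstairs at The Highball, S 1st St" defeats this splitter (the actual street lands in the `city` slot), and that is exactly the kind of ambiguity the disambiguation model exists to resolve.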

The model doesn’t generate addresses. It ranks them. Given a set of possible interpretations from our rule engine, it scores each based on context, proximity, and historical accuracy. The highest scorer becomes the normalized output.
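Ranking rather than generating keeps the model small. A toy version of the scorer looks like this; the features match the ones described above, but the weights are invented for the example, where the real model learns them from past corrections.

```typescript
// Toy ranking over candidate interpretations; weights are illustrative.

interface Interpretation {
  venueId: string;
  textSimilarity: number; // 0..1, from the rule engine
  distanceKm: number;     // geographic distance to the event
  pastAcceptRate: number; // 0..1, historical accuracy for this venue
}

function score(i: Interpretation): number {
  const proximity = 1 / (1 + i.distanceKm); // closer venues score higher
  return 0.5 * i.textSimilarity + 0.2 * proximity + 0.3 * i.pastAcceptRate;
}

// The highest scorer becomes the normalized output.
function rank(candidates: Interpretation[]): Interpretation | null {
  return candidates.reduce<Interpretation | null>(
    (best, c) => (best === null || score(c) > score(best) ? c : best),
    null
  );
}
```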

This approach gave us 94% accuracy on validation data — and it runs in under 200ms. No GPUs, no API tokens, just a cached ONNX model running in Node.js.

Integration: From Background Jobs to Admin Workflows

Getting Motia working was half the battle. Integrating it into our existing flow was the other half.

We didn’t want to block event creation on Motia’s response, so we wrapped the call in a background job using BullMQ. When an event is saved, a job queues up, hits Motia, and updates the venue relationship asynchronously. If Motia returns low confidence, it flags the record for review in the Filament admin panel.

That last part was key. Our ops team needed visibility. Now, in Filament, they see a "Needs Review" badge on events where the match wasn’t solid. They can approve, reject, or manually link — and every decision feeds back into Motia’s training set.

It’s a feedback loop: better data in, smarter matching out.

The merge of PR #157 on January 7th wasn’t just a deployment — it was the moment our data stopped drifting. We’re already seeing cleaner reports, fewer support tickets, and more accurate recommendations.

And for the first time, "The Fillmore" is just… The Fillmore.
