From ClawHub to Loom: How We Executed a Seamless Project-Wide Rename in Python
Renaming ClawHub to Loom taught us how to refactor a mature Python codebase without breaking imports, tests, or team momentum.
We built a lightweight, real-time dashboard to monitor GhostGraph's distributed scraping workers using FastAPI, Redis Streams, and server-sent events.
We replaced sequential HTTP fetching with asyncio-powered concurrency—5 requests at a time—and slashed our crawl times by 70%.
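The "5 requests at a time" cap described above is typically done with an `asyncio.Semaphore`. A minimal sketch, with a placeholder coroutine standing in for the real HTTP client call (the function names and URLs here are illustrative, not from the original codebase):

```python
import asyncio

MAX_CONCURRENCY = 5  # the "5 requests at a time" cap


async def fetch(url: str, sem: asyncio.Semaphore) -> str:
    # The semaphore caps how many fetches run at once; extra tasks wait here.
    async with sem:
        # Placeholder for a real HTTP call (e.g. an async client request).
        await asyncio.sleep(0.01)
        return f"fetched {url}"


async def crawl(urls: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENCY)
    # gather() schedules every fetch; the semaphore throttles actual execution.
    return await asyncio.gather(*(fetch(u, sem) for u in urls))


results = asyncio.run(crawl([f"https://example.com/{i}" for i in range(20)]))
```

Because all tasks are created up front but only five hold the semaphore at once, total wall time approaches `ceil(n / 5)` batches instead of `n` sequential round trips.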
We stopped silent network hangs in our Python crawler by layering signal-based hard timeouts over curl_cffi and adding IP rotation to preserve throughput.
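A signal-based hard timeout of the kind described above can be sketched with `SIGALRM`: the alarm interrupts even a call stuck inside C code that ignores socket-level timeouts. This is a generic wrapper (Unix-only, main thread only); the real crawler would pass its `curl_cffi` request in place of the stand-in callable:

```python
import signal


class HardTimeout(Exception):
    """Raised when a blocking call exceeds its wall-clock budget."""


def with_hard_timeout(seconds: int, func, *args, **kwargs):
    # SIGALRM fires after `seconds` of wall-clock time, interrupting the
    # blocking call — the "silent hang" case socket timeouts can miss.
    def _raise(signum, frame):
        raise HardTimeout(f"call exceeded {seconds}s")

    old_handler = signal.signal(signal.SIGALRM, _raise)
    signal.alarm(seconds)
    try:
        return func(*args, **kwargs)
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)  # restore prior handler


# In the crawler this would wrap a curl_cffi request; here a plain
# function stands in for the network call.
result = with_hard_timeout(5, lambda: "page body")
```

Restoring the previous handler in `finally` matters when the wrapper is layered over a library that installs its own signal handlers.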
How atomic Redis operations fixed state corruption during worker shutdowns in our distributed Vultr Crawler.
We replaced ARQ with a lightweight Redis Streams polling worker—cutting 6k+ lines and improving reliability across our scraping fleet.
How I replaced raw Postgres queries with a type-safe repository pattern in a production scraper—improving testability and long-term maintainability.
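The repository pattern mentioned above can be sketched with `typing.Protocol`: application code depends on an interface rather than on Postgres, so tests can substitute an in-memory implementation. The `Page` model and function names here are hypothetical, not the scraper's actual schema:

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass(frozen=True)
class Page:
    # Hypothetical domain model; the real scraper's schema will differ.
    url: str
    status: int


class PageRepository(Protocol):
    # The scraper depends on this interface, not on a database driver.
    def save(self, page: Page) -> None: ...
    def get(self, url: str) -> Optional[Page]: ...


class InMemoryPageRepository:
    """Test double satisfying PageRepository without a database."""

    def __init__(self) -> None:
        self._pages: dict[str, Page] = {}

    def save(self, page: Page) -> None:
        self._pages[page.url] = page

    def get(self, url: str) -> Optional[Page]:
        return self._pages.get(url)


def record_crawl(repo: PageRepository, url: str, status: int) -> None:
    # Written against the protocol: swapping in a Postgres-backed
    # repository requires no changes to this function.
    repo.save(Page(url=url, status=status))


repo = InMemoryPageRepository()
record_crawl(repo, "https://example.com", 200)
```

Type checkers verify structurally that both the in-memory and production repositories satisfy `PageRepository`, which is where the "type-safe" part of the claim comes from.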
How we ripped out a brittle third-party integration and replaced it with a unified, maintainable worker model.
We replaced ARQ with our custom event-driven framework Motia to gain control, clarity, and reliability in our scraping workflows.
We replaced a tangled mess of Python workers with ARQ and Redis, cutting complexity and boosting reliability in our scraping pipeline.
How I used LLMs and ARQ to build a self-adapting, scalable web scraper that survives real-world site changes.
We turned a script-driven scraper into a fully observable web interface using FastAPI and server-rendered templates—no frontend framework needed.