
Racing Against Time: How We Built a Fair, Server-Synchronized Contest Timer at Scale

From Pit-Lane to Production: Part 3


Lights Out, And Away We Go (Again)

By the time Sunday rolls around in a Grand Prix weekend, the garages are quiet, the setup is locked in, and nobody is allowed to touch anything unless it’s absolutely necessary. That’s exactly where we were with the backend by Part 3 of From Pit-Lane to Production. Practice was done, qualifying had set the grid, and now it was race day. This was the phase where systems had to work flawlessly, not because they were flashy, but because if they failed, everyone would notice. Loudly.

It’s About Time (Literally)

Every race needs a clock. Not a “my laptop says so” clock. A single, authoritative race timer that everyone trusts, like the FIA timing screen (minus the controversy).

That’s where the timer package came in. Its job was simple: make sure every contestant was racing against the same ticking clock, no excuses, no browser shenanigans.

When the admin fired up SetTime, the backend first checked whether a round was already active, because starting two races at once is how you end up in a Netflix documentary (and while I’d love to be featured on Netflix, not like this). If all was clear, it took the admin-provided end time, parsed it into a clean RFC3339 timestamp, converted it to Indian Standard Time, and parked it safely inside Redis under the round_end_time hash. Stored in memory, fetched in milliseconds: faster than a Mercedes pit stop on a good day.
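Here’s a minimal sketch of that flow in Go with go-redis, assuming the HTTP plumbing lives elsewhere; the key names, round ID handling, and error messages are illustrative guesses, not the production code.

```go
package timer

import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

// Hypothetical key names; the real ones aren't shown in this post.
const (
	roundStartedKey = "is_round_started"
	roundEndTimeKey = "round_end_time"
)

// SetTime parks the admin-provided end time in Redis, refusing to
// start a second race while one is already running.
func SetTime(ctx context.Context, rdb *redis.Client, roundID, endTimeRaw string) error {
	// No Netflix documentaries: bail out if a round is already live.
	if started, _ := rdb.Get(ctx, roundStartedKey).Result(); started == "true" {
		return errors.New("a round is already active")
	}

	// Parse the admin-provided timestamp as RFC3339 and pin it to IST.
	end, err := time.Parse(time.RFC3339, endTimeRaw)
	if err != nil {
		return err
	}
	ist, err := time.LoadLocation("Asia/Kolkata")
	if err != nil {
		return err
	}
	end = end.In(ist)

	// Store the end time in the round_end_time hash, then flip the
	// round flag with a TTL so the race ends itself on time.
	if err := rdb.HSet(ctx, roundEndTimeKey, roundID, end.Format(time.RFC3339)).Err(); err != nil {
		return err
	}
	return rdb.Set(ctx, roundStartedKey, "true", time.Until(end)).Err()
}
```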

Then came GetTime, the endpoint every contestant leaned on like a steering wheel in Eau Rouge. When pinged, it first checked whether a round was actually running via is_round_started. It then verified the user’s round ID against the current one in Redis, shutting down any attempts at cross-round funny business. Only after passing scrutineering did it fetch the start time, end time, and current server time, sending them back to the frontend. The result? A countdown timer perfectly in sync with the server, even if the user’s system clock thought it was still 2022.
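The scrutineering checks sketch out the same way; the current_round_id and round_start_time keys and the response shape are assumptions for illustration.

```go
package timer

import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

// TimeResponse is an illustrative response shape, not the real contract.
type TimeResponse struct {
	StartTime  string `json:"start_time"`
	EndTime    string `json:"end_time"`
	ServerTime string `json:"server_time"`
}

// GetTime runs the scrutineering checks, then hands back server-owned
// timestamps so the frontend never has to trust the user's clock.
func GetTime(ctx context.Context, rdb *redis.Client, userRoundID string) (*TimeResponse, error) {
	// Check 1: is a round actually running?
	if started, _ := rdb.Get(ctx, "is_round_started").Result(); started != "true" {
		return nil, errors.New("no active round")
	}

	// Check 2: does the user's round ID match the current one in Redis?
	// ("current_round_id" is a hypothetical key name.)
	current, err := rdb.Get(ctx, "current_round_id").Result()
	if err != nil || current != userRoundID {
		return nil, errors.New("round mismatch")
	}

	// Past scrutineering: fetch start and end times, stamp them with
	// the server's own clock, and send the lot back.
	start, _ := rdb.HGet(ctx, "round_start_time", current).Result()
	end, err := rdb.HGet(ctx, "round_end_time", current).Result()
	if err != nil {
		return nil, err
	}
	return &TimeResponse{
		StartTime:  start,
		EndTime:    end,
		ServerTime: time.Now().Format(time.RFC3339),
	}, nil
}
```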

Finally, UpdateTime handled race control chaos. Need to extend the round? Shorten it? Weather delay? Safety car? This endpoint pulled the current end time from Redis, adjusted it by the given duration, rewrote the round_end_time, and updated the TTL on is_round_started. When the TTL expired, the race ended. Clean. Automatic. No human panic required.
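And the race-control lever, sketched under the same assumptions; a negative delta shortens the round.

```go
package timer

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// UpdateTime shifts the round's end time by delta (positive to extend,
// negative to shorten) and refreshes the TTL that ends the race.
func UpdateTime(ctx context.Context, rdb *redis.Client, roundID string, delta time.Duration) error {
	// Pull the current end time from Redis.
	raw, err := rdb.HGet(ctx, "round_end_time", roundID).Result()
	if err != nil {
		return err
	}
	end, err := time.Parse(time.RFC3339, raw)
	if err != nil {
		return err
	}

	// Adjust it by the given duration and rewrite the hash.
	newEnd := end.Add(delta)
	if err := rdb.HSet(ctx, "round_end_time", roundID, newEnd.Format(time.RFC3339)).Err(); err != nil {
		return err
	}

	// Refresh the TTL on is_round_started; when it expires, the race
	// ends on its own. No human panic required.
	return rdb.Expire(ctx, "is_round_started", time.Until(newEnd)).Err()
}
```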

On the surface, it was “just a timer.” Under the hood, it was the FIA timekeeping system: quiet, precise, and absolutely unforgiving.

The Docker Garage (And the Multi-Car Strategy)

Every F1 team has a garage. Ours was Docker, deployed on DigitalOcean: clean, repeatable, and mercifully boring.

Each service had a clearly defined role:

Postgres handled persistence, backed by volumes so data didn’t vanish mid-race.

Redis lived entirely in memory, managing timers, flags, and caching with engineer-level precision.

The API container was the main car: handling requests, validating rules, and serving responses under load.

Nginx sat up front, routing traffic cleanly like race control on a good day.
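To give a feel for the layout, here’s a rough compose-style sketch of that garage; the images, build details, and replica count are placeholders, not our production file.

```yaml
# A rough sketch of the garage, not the production file; images,
# service names, and the replica count are illustrative.
services:
  postgres:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data   # data survives a mid-race restart
  redis:
    image: redis:7                         # timers, flags, and cache
  api:
    build: .
    depends_on: [postgres, redis]          # the main car
  worker:
    build: .
    command: ["./worker"]
    deploy:
      replicas: 4                          # the multi-car strategy
    depends_on: [redis]
  nginx:
    image: nginx:stable
    ports:
      - "80:80"                            # race control up front
volumes:
  pgdata:
```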

But the real strategy call?

We didn’t run one worker. We ran multiple.

The worker service was horizontally scaled, spinning up multiple containers to handle submission load in parallel. One worker would’ve been fine in practice. Maybe even qualifying. But race day is different. When hundreds of submissions hit at once, you don’t pray, you call the factory for backup.

Each worker container picked up jobs independently, pulling from the same queues, processing submissions without stepping on each other’s toes. No bottlenecks. No backlog panic. Just clean parallel execution, like running multiple stints with fresh tyres instead of cooking one set to death.
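A minimal sketch of why that works, assuming submissions land in a Redis list (the queue name and job handling are invented for illustration): each pop is atomic, so any number of workers can share one queue without double-processing.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "redis:6379"})

	for {
		// Block until a job arrives. Each BLPOP is atomic, so N worker
		// containers can pull from the same list and never collide.
		res, err := rdb.BLPop(ctx, 5*time.Second, "submissions").Result()
		if err == redis.Nil {
			continue // queue empty; keep waiting
		}
		if err != nil {
			log.Printf("queue error: %v", err)
			continue
		}
		process(res[1]) // res[0] is the queue name, res[1] the payload
	}
}

// process is a stand-in for the real job handler, which would send the
// submission to Judge0 and record the verdict.
func process(payload string) {
	log.Printf("processing submission: %s", payload)
}
```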

DigitalOcean was our circuit for the weekend. And for once, nothing exploded.

Backend So Smooth, It Deserves a Podium

And then came the race itself.

Everything worked. Not in a “don’t touch it or it’ll break” way, but in a Hamilton-in-a-Mercedes way: fast, composed, and suspiciously drama-free. Submissions flew to Judge0 like DRS wide open, Redis hit cache lookups with pit-stop precision, and the backend just… kept going.

At the end, standing at the venue on contest day with the rest of the dev team, watching the system run felt like watching a perfect lap: clean lines, no oversteer, total confidence.

So, that's enough of my yapping for this final part of the series.

If you’ve read this far, congratulations. You now know more about this system than half the pit wall. Go grab a snack, stretch your legs, and enjoy the moment.

You’ve earned your place on the podium.