
Designing a Race-Ready Backend for Our Competitive Coding Portal with Golang, Echo & Redis: Part One

From Pit Lane to Production: How we designed our backend to survive the CookOff Weekend | Part one


Pre-Race Prep: Designing the Car, Praying It Starts

If you’ve ever wondered what happens when a bunch of over-caffeinated developers decide to build their own version of LeetCode (White Monster and Red Bull should probably sponsor us at this point), then welcome to this series of blogs that takes you behind the scenes of CookOff 10.0’s backend.

Picture hundreds of students hammering their keyboards like your roommate’s midnight singing, racing to solve problems faster than the McLaren duo this year. That’s CookOff, CodeChef VIT’s flagship coding event.

To run something this chaotic, you need more than enthusiasm. You need a backend that doesn’t explode the moment hundreds of people press “Submit” at once. So we built one. From scratch. In-house. Because why pick something stable and safe when you can live on the edge and watch your server logs scream in real time? It’s the kind of decision that sounds brilliant at 2 AM and catastrophic at 9 AM like most things developers do (yes, we take inspiration from Ferrari engineers).

This is the story of how we built the backend for CookOff 10.0’s in-house coding portal; a system that could compile, execute, and score submissions in real time while pretending everything was under control. We used Go, the Echo framework, Redis, and a bit of luck to create a pipeline fast enough to keep up with the chaos. It was part engineering, part madness, and somehow, it worked.

The Engine: Why We Chose Go and Echo

Every great machine needs an engine, and for us, that engine was Go. We could have gone with Node or Python, but when hundreds of coders start firing requests faster than the Mercedes W16 engine, you need something with raw power. Go doesn’t mess around. It is fast, lightweight, and handles concurrency like it was designed by someone who enjoys watching CPUs suffer. In short, it’s what you pick when you want performance and probably therapy bills.

The other half of our setup was Echo, our chosen framework. We used Chi last year, and it felt like assembling IKEA furniture while blindfolded. Too many moving parts, and one wrong screw could take down the whole thing. Echo was a relief. It felt like walking into a clean, well-lit garage where every tool had a place. It gave us clear routing, smooth structure, and middlewares that actually worked without turning our code into a mess.

Echo handled everything: user routes, admin routes, submission endpoints, and the protected judge APIs. It also gave us room to build middlewares that acted like bouncers at a club, checking who had a valid token, who was banned, and who was trying to sneak into the wrong endpoint.
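To make that concrete, here is a minimal sketch of how the route groups and middleware chain can be wired up in Echo. The route and handler names are illustrative rather than our exact code, and the VerifyJWT, BanCheckUser, and AdminOnly middlewares referenced here are sketched later in the auth section of this post.

```go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	e := echo.New()
	e.Use(middleware.Logger(), middleware.Recover())

	// Public routes: no token required.
	e.POST("/login", loginHandler)

	// User routes: every request passes the JWT check, then the ban check.
	user := e.Group("", VerifyJWT, BanCheckUser)
	user.POST("/submit", submitHandler)
	user.POST("/run", runHandler)

	// Admin routes: JWT check plus a role check.
	admin := e.Group("/admin", VerifyJWT, AdminOnly)
	admin.POST("/create-question", createQuestionHandler)
	admin.POST("/update-testcase", updateTestcaseHandler)

	e.Logger.Fatal(e.Start(":8080"))
}

// Stub handlers to keep the sketch short; the real ones talk to the
// database and the submission queue.
func loginHandler(c echo.Context) error          { return c.NoContent(http.StatusOK) }
func submitHandler(c echo.Context) error         { return c.NoContent(http.StatusAccepted) }
func runHandler(c echo.Context) error            { return c.NoContent(http.StatusAccepted) }
func createQuestionHandler(c echo.Context) error { return c.NoContent(http.StatusCreated) }
func updateTestcaseHandler(c echo.Context) error { return c.NoContent(http.StatusOK) }
```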

It wasn’t over-engineered or flashy. It was built for speed, endurance, and a bit of chaos. Just like a race car that is meant to survive 24 hours of Le Mans, not sit pretty in a museum.

Tech Choices That Actually Worked, Somehow

Every great system begins with a few terrible decisions that somehow turn out right. For CookOff 10.0, we knew we were building a coding arena that would be hit by hundreds of submissions at once, so we needed tools that could survive chaos without crying for mercy. Every choice we made had a reason, even if that reason was “we don’t want to stop the contest midway while we restart our servers again.”

Let’s start with Judge0, our external code execution engine. We could have spun up our own Docker sandboxes, configured runtimes, managed security, and written endless YAMLs that nobody would ever read again. But Judge0 already handled all of that elegantly. It supported multiple languages, enforced time limits, and offered callback URLs to report results asynchronously. We sent batch submissions to Judge0 through its API, received a token for each job, and let it do the heavy lifting while results streamed back to our server’s callback URL. It was like outsourcing the messy part of the kitchen to a robot chef that never complains.
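A rough sketch of that batch call, assuming the stock Judge0 CE batch endpoint and JSON fields; the exact flags and base URL depend on how the instance is deployed, so treat the details as illustrative.

```go
package judge

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// BatchSubmission mirrors the per-submission fields Judge0 expects.
type BatchSubmission struct {
	SourceCode  string `json:"source_code"`
	LanguageID  int    `json:"language_id"`
	Stdin       string `json:"stdin"`
	CallbackURL string `json:"callback_url"`
}

type tokenResponse struct {
	Token string `json:"token"`
}

// SubmitBatch fires one batch of test-case runs at Judge0 and returns the
// tokens it hands back; Judge0 later calls callback_url with the result for
// each token.
func SubmitBatch(ctx context.Context, judge0URL string, subs []BatchSubmission) ([]string, error) {
	body, err := json.Marshal(map[string][]BatchSubmission{"submissions": subs})
	if err != nil {
		return nil, err
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		judge0URL+"/submissions/batch?base64_encoded=false", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return nil, fmt.Errorf("judge0 returned %s", resp.Status)
	}

	var tokens []tokenResponse
	if err := json.NewDecoder(resp.Body).Decode(&tokens); err != nil {
		return nil, err
	}

	out := make([]string, 0, len(tokens))
	for _, t := range tokens {
		out = append(out, t.Token)
	}
	return out, nil
}
```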

Now, since these submissions and callbacks arrive faster than I can find a missing semicolon in my code, we needed something to manage temporary state without choking the database. Enter Redis. It became our short-term memory: fast, volatile, and perfect for caching submission status, storing tokens, and tracking progress. It was simply the storage layer that kept everything quick.
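As a small example, this is roughly what the token-to-submission mapping looks like with go-redis; the key names and TTL here are illustrative, not our production values.

```go
package cache

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

// MapToken remembers which submission a Judge0 token belongs to, so the
// callback handler can find its way back. The TTL keeps dead keys from piling up.
func MapToken(ctx context.Context, token, submissionID string) error {
	return rdb.Set(ctx, "judge0:token:"+token, submissionID, 2*time.Hour).Err()
}

// SubmissionForToken resolves a callback token back to our submission ID.
func SubmissionForToken(ctx context.Context, token string) (string, error) {
	return rdb.Get(ctx, "judge0:token:"+token).Result()
}
```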

The actual queue was handled by Asynq. Every callback from Judge0 was pushed onto the queue using a shared Redis client. Our worker processes picked up tasks one by one, updated results in the database, calculated scores, and cleaned up Redis keys when done. Without Asynq controlling the task flow, our submission pipeline would have turned into a traffic jam where everything tried to update the database at once.
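Here is a minimal sketch of that flow with Asynq: the callback handler enqueues a task, and a worker consumes it. The task type name and payload shape are illustrative.

```go
package worker

import (
	"context"
	"encoding/json"
	"log"

	"github.com/hibiken/asynq"
)

const TypeProcessCallback = "judge0:callback"

type CallbackPayload struct {
	Token  string `json:"token"`
	Status string `json:"status"`
	Stdout string `json:"stdout"`
}

// EnqueueCallback is called from the HTTP handler that receives Judge0's callback:
// it drops the payload onto the Redis-backed queue and returns immediately.
func EnqueueCallback(client *asynq.Client, p CallbackPayload) error {
	payload, err := json.Marshal(p)
	if err != nil {
		return err
	}
	_, err = client.Enqueue(asynq.NewTask(TypeProcessCallback, payload))
	return err
}

// RunWorker consumes callbacks task by task, which is where the database
// update, score calculation, and Redis cleanup happen.
func RunWorker(redisAddr string) error {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: redisAddr},
		asynq.Config{Concurrency: 10},
	)

	mux := asynq.NewServeMux()
	mux.HandleFunc(TypeProcessCallback, func(ctx context.Context, t *asynq.Task) error {
		var p CallbackPayload
		if err := json.Unmarshal(t.Payload(), &p); err != nil {
			return err
		}
		// The real worker looks up the submission by token, writes the verdict
		// to the database, recomputes the score, and deletes the Redis key.
		log.Printf("processed token %s with status %s", p.Token, p.Status)
		return nil
	})

	return srv.Run(mux)
}
```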

For the database layer itself, we used SQLC. At first glance, it sounds like one of those tools you install and forget about, but in reality, it saved us from half the bugs we didn’t even know we were writing. It generates type-safe Go code directly from SQL queries, ensuring that we couldn’t accidentally mismatch a column or ruin the schema with an innocent typo. In short, it turned the database layer into something we could actually trust.
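For a feel of what that looks like, here is a hypothetical query and the kind of call site sqlc generates for it; the table, columns, and method names are made up for illustration.

```go
package store

import (
	"context"

	"github.com/google/uuid"
)

// The query lives in a plain .sql file annotated for sqlc, e.g.:
//
//	-- name: UpdateSubmissionVerdict :exec
//	UPDATE submissions SET status = $1, score = $2 WHERE id = $3;
//
// sqlc then generates the Queries type and a typed params struct, so a wrong
// column or a mismatched type fails at compile time instead of mid-contest.
func SaveVerdict(ctx context.Context, q *Queries, id uuid.UUID, status string, score int32) error {
	return q.UpdateSubmissionVerdict(ctx, UpdateSubmissionVerdictParams{
		Status: status,
		Score:  score,
		ID:     id,
	})
}
```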

Finally, we wrapped everything in Docker containers. Each service ran in its own container, so if one went down, the others kept going like nothing had happened. When traffic spiked, we simply spun up more containers and let them do their thing.

In the end, none of these choices were glamorous, but together they made the system reliable enough to handle chaos at scale. Redis sprinted through caching and timers, while Asynq calmly orchestrated background task execution, and Judge0 crunched through code like a machine possessed. It was a strange symphony of speed, order, and just enough luck to make it through CookOff 10.0 without watching the backend catch fire.

Black Flag or Green Light? JWT and Middlewares as Race Stewards

Before any line of code could go racing through the backend, we needed proper authentication and authorization.

We used JWTs as our access control system. During login, the user’s email and password went through validation and a bcrypt hash comparison; a successful match triggered token generation. Two tokens were issued: an access token and a refresh token. The access token was short-lived and carried the user ID and role, while the refresh token lived longer and was stored in memory, keyed by the user’s UUID with roughly a one-hour expiry.
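Sketched with the golang-jwt and bcrypt packages, the login-time token minting looks roughly like this; claim names and lifetimes are illustrative, and the secret obviously comes from the environment in practice.

```go
package main

import (
	"time"

	"github.com/golang-jwt/jwt/v5"
	"golang.org/x/crypto/bcrypt"
)

// jwtSecret is read from the environment in the real service.
var jwtSecret = []byte("change-me")

// login verifies the stored bcrypt hash and mints the two tokens described above.
func login(userID, role, storedHash, password string) (access, refresh string, err error) {
	if err = bcrypt.CompareHashAndPassword([]byte(storedHash), []byte(password)); err != nil {
		return "", "", err
	}

	// Short-lived access token carrying identity and role.
	access, err = jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"user_id": userID,
		"role":    role,
		"exp":     time.Now().Add(15 * time.Minute).Unix(),
	}).SignedString(jwtSecret)
	if err != nil {
		return "", "", err
	}

	// Refresh token, cached in memory against the user's UUID with ~1h expiry.
	refresh, err = jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"user_id": userID,
		"exp":     time.Now().Add(time.Hour).Unix(),
	}).SignedString(jwtSecret)
	return access, refresh, err
}
```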

Every protected route passed through Echo’s middleware stack. The first checkpoint was the JWT verification middleware. It looked for an access_token cookie, parsed it using our JWT secret, and verified both integrity and expiry. Once validated, it attached the decoded user ID and role to the Echo context. If the token was missing, expired, or malformed, the middleware returned 401 Unauthorized.
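The verification middleware itself is a plain Echo middleware, roughly like the sketch below; it reuses the jwtSecret from the login sketch, and the error messages and claim names are illustrative.

```go
package main

import (
	"net/http"

	"github.com/golang-jwt/jwt/v5"
	"github.com/labstack/echo/v4"
)

// VerifyJWT reads the access_token cookie, validates the signature and expiry,
// and stashes the user ID and role in the Echo context for downstream handlers.
func VerifyJWT(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		cookie, err := c.Cookie("access_token")
		if err != nil {
			return echo.NewHTTPError(http.StatusUnauthorized, "missing access token")
		}

		token, err := jwt.Parse(cookie.Value, func(t *jwt.Token) (interface{}, error) {
			return jwtSecret, nil
		})
		if err != nil || !token.Valid {
			return echo.NewHTTPError(http.StatusUnauthorized, "invalid or expired token")
		}

		claims, ok := token.Claims.(jwt.MapClaims)
		if !ok {
			return echo.NewHTTPError(http.StatusUnauthorized, "malformed claims")
		}

		c.Set("user_id", claims["user_id"])
		c.Set("role", claims["role"])
		return next(c)
	}
}
```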

Next came the BanCheckUser layer. It pulled the user ID from the context, checked the IsBanned field in the database, and blocked requests from banned users to routes like /submit or /run.

Admin-only routes such as /admin/create-question or /admin/update-testcase were gated by an AdminOnly middleware that checked the role field from JWT claims. If it wasn’t admin, the request returned 403 Forbidden.
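Both of these checks are thin layers on top of the context values set by VerifyJWT. A sketch of the pair, with isBanned standing in for the real database lookup:

```go
package main

import (
	"context"
	"net/http"

	"github.com/labstack/echo/v4"
)

// isBanned is a stand-in for the real query against the user's IsBanned column.
func isBanned(ctx context.Context, userID string) (bool, error) { return false, nil }

// BanCheckUser runs after VerifyJWT and refuses /submit and /run requests
// from banned accounts.
func BanCheckUser(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		userID, _ := c.Get("user_id").(string)
		banned, err := isBanned(c.Request().Context(), userID)
		if err != nil {
			return echo.NewHTTPError(http.StatusInternalServerError, "ban check failed")
		}
		if banned {
			return echo.NewHTTPError(http.StatusForbidden, "user is banned")
		}
		return next(c)
	}
}

// AdminOnly gates the /admin/* routes on the role claim set by VerifyJWT.
func AdminOnly(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		if role, _ := c.Get("role").(string); role != "admin" {
			return echo.NewHTTPError(http.StatusForbidden, "admin access required")
		}
		return next(c)
	}
}
```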

Submissions were also gated by round eligibility. Before creating a submission record or queuing a Judge0 job, the handler ensured that the user’s RoundQualified field matched the active round.
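That gate is just a comparison before anything is written or queued; something along these lines, with the field names being illustrative:

```go
// canSubmit mirrors the round-eligibility check: the submission is only
// created and queued if the user's qualified round matches the live round.
func canSubmit(roundQualified, activeRound int32) bool {
	return roundQualified == activeRound
}
```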

This multi-layer design filtered bad or invalid requests early, preventing unnecessary load on Redis or the Judge0 queue and keeping worker traffic meaningful.

JWTs handled identity, refresh tokens lived in memory, and middlewares enforced control. The result was a clean, efficient request pipeline that allowed valid code to flow while blocking unauthorized users.

Alright, that’s the end of this sector. Coming up next: a deep dive into the submission pipeline, async job queues, and Redis.