Rewriting BAT's Scoring System After One Painful Demo


When the Bartender Association of Taiwan (BAT) needed a scoring system for its competitions, the first version did not come from me.

A partner vendor had already stepped in, apparently willing to do it for free. On paper, that sounds great. In reality, what arrived felt closer to a rushed student prototype than a production tool for real event operations.

Staff could not confidently operate it. The workflows were hard to understand. The interface was sparse and confusing. It lived on a raw IP instead of a proper domain. And most painfully, it did not seem built around how BAT actually runs competitions.

I was originally pulled in for something much smaller: help the association understand the system during a meeting.

Then I saw it.

And honestly, my eyes hurt.

So instead of teaching everyone how to survive a tool that already felt wrong, I decided to rewrite it.

The old system had deeper problems than ugly UI

Yes, the interface was rough.

But the real issue was structural: the product did not respect the operational reality of running a judging event.

BAT works across multiple competitions, sponsors, event partners, and scoring formats. A system in that environment has to do more than collect numbers. It has to be:

  • understandable by staff who are busy on event day
  • fast enough for judges using phones or tablets
  • flexible enough for admins to configure different scoring formats
  • secure enough that not everyone can just wander in and touch sensitive pages
  • exportable enough that results are actually useful afterward

From what I saw, the first version missed too many of those basics.

What immediately felt wrong

A few things stood out right away:

  • No proper trust layer — the system was being shown from a raw IP instead of a real domain
  • Weak access control — I was explicitly told not to let people discover certain pages, because anyone who found the URL could simply open them
  • Confusing interface — the homepage looked more like a placeholder shell than a working operations dashboard
  • Poor workflow fit — it did not map cleanly to what staff and judges actually need during a competition
  • Low confidence for non-technical users — the people who had to use it could not comfortably understand it from the demo alone

That last point matters a lot.

If an internal system needs a translator standing beside it just so the operators can survive a meeting, the system is already telling you it was not designed for its users.

A quick look at the old version

Below is a screenshot of the original system that triggered the rewrite.

Legacy scoring dashboard screenshot

Even without touching the internals, a few things were immediately obvious:

  • it looked unfinished
  • it exposed too little useful information on the main screen
  • it inspired very little confidence for real event operations
  • and the whole thing felt like something you had to be careful not to break just by using it

That is not the emotional state you want before a live scoring session.

What I wanted from the rewrite

I was not interested in making it merely prettier.

The goal was to make it feel like a tool BAT could actually run events with.

That meant building around a few principles:

1. The workflow should be obvious

  • admins should understand where to configure things
  • judges should know exactly how to start scoring
  • staff should be able to monitor what is happening in real time

2. The product should be event-first, not template-first

  • contests, events, judges, contestants, forms, and assignments are the core objects
  • the UI should reflect that directly
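To make that concrete, here is a rough sketch of those core objects in TypeScript. Every name and field below is an illustrative assumption rather than the actual schema; the point is the object graph: contests contain events, events carry forms, and assignments connect judges to contestants.

```ts
// Illustrative sketch only: field names and shapes are assumptions,
// not the actual BAT schema.

interface Contest {
  id: string;
  name: string;
}

interface CompetitionEvent {
  id: string;
  contestId: string;
  name: string;
  formId: string;       // which scoring form this event uses
}

interface ScoringForm {
  id: string;
  title: string;
  fields: FormField[];  // built with the drag-and-drop builder
}

interface FormField {
  id: string;
  label: string;        // e.g. "Technique", "Presentation"
  type: 'score' | 'text';
  maxScore?: number;    // only meaningful for 'score' fields
}

interface Judge {
  id: string;
  name: string;
  badgeToken: string;   // encoded in the QR badge
}

interface Contestant {
  id: string;
  eventId: string;
  name: string;
}

// An assignment is the unit of work: one judge scoring one
// contestant in one event, using that event's form.
interface Assignment {
  id: string;
  eventId: string;
  judgeId: string;
  contestantId: string;
  submittedAt?: string; // unset until the judge submits
}
```

Everything else in the system hangs off these six objects, and the UI is organized around them rather than around generic templates.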

3. Access should be controlled properly

Not “please do not share this page” security.

Actual session and route protection.
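To sketch what that means in practice: in a Next.js app (the stack this rewrite uses, detailed below), the baseline is a middleware that refuses to serve protected routes without a session. The cookie name, route prefixes, and login path here are assumptions for illustration, not the production code.

```ts
// middleware.ts: a minimal sketch of route-level protection in
// Next.js. A real implementation would verify the session token
// server-side, not just check that a cookie exists.
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  const session = request.cookies.get('session')?.value;

  // No session: bounce to login instead of letting anyone who
  // guesses a URL open an admin page.
  if (!session) {
    const loginUrl = new URL('/login', request.url);
    loginUrl.searchParams.set('from', request.nextUrl.pathname);
    return NextResponse.redirect(loginUrl);
  }

  return NextResponse.next();
}

// Only run the check on protected sections of the app.
export const config = {
  matcher: ['/admin/:path*', '/staff/:path*'],
};
```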

4. It should be able to grow

Even if it starts as an internal tool, the foundation should support broader expansion later.

What I rebuilt

The rewritten system became an internal adjudication platform built around the real competition flow.

Admin side

Admins can:

  • create and manage contests
  • structure events inside each contest
  • build scoring forms with a drag-and-drop form builder
  • manage contestants, judges, and assignments
  • monitor scoring progress live
  • handle unlock requests when necessary
  • export results into Excel for operational use
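As one hedged example of that last item, here is roughly what an Excel export endpoint can look like. SheetJS (the xlsx package) and the row shape are assumptions for illustration, not necessarily what runs in production.

```ts
// app/admin/export/route.ts: a sketch of an Excel export endpoint.
// In the real system the rows would come from the database; here
// they are hard-coded so the sketch stays self-contained.
import * as XLSX from 'xlsx';

export async function GET() {
  const rows = [
    { contestant: 'No. 01', judge: 'Judge A', technique: 18, presentation: 17, total: 35 },
    { contestant: 'No. 01', judge: 'Judge B', technique: 19, presentation: 16, total: 35 },
  ];

  // One worksheet per export keeps the file easy to hand to staff.
  const worksheet = XLSX.utils.json_to_sheet(rows);
  const workbook = XLSX.utils.book_new();
  XLSX.utils.book_append_sheet(workbook, worksheet, 'Results');

  // `type: 'array'` yields an ArrayBuffer, which works on
  // Cloudflare Workers where Node's Buffer is unavailable.
  const bytes = XLSX.write(workbook, { type: 'array', bookType: 'xlsx' });

  return new Response(bytes, {
    headers: {
      'Content-Type':
        'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
      'Content-Disposition': 'attachment; filename="results.xlsx"',
    },
  });
}
```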

Judge side

Judges can:

  • enter through a dedicated judge flow
  • authenticate with QR badge + 4-digit PIN (sketched after this list)
  • score on mobile or tablet
  • move through assigned forms with less friction
  • stay focused inside a cleaner, lower-noise interface
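Here is that QR badge + PIN check, reduced to its bare shape. This is an illustration, not the production code: the in-memory lookup stands in for a database query, and a real deployment would also rate-limit attempts and mint a proper session.

```ts
// A simplified sketch of the judge login check. The map below
// stands in for a database lookup keyed by the token encoded in
// the QR badge.
const judgesByBadge = new Map([
  [
    'badge-token-demo',
    {
      id: 'judge-1',
      // SHA-256 of the demo PIN "1234".
      pinHash:
        '03ac674216f3e15c761ee1a5e255f067953623c8b388b4459e13f978d7c846f4',
    },
  ],
]);

// Web Crypto works both in Node and on Cloudflare Workers.
async function sha256Hex(input: string): Promise<string> {
  const digest = await crypto.subtle.digest(
    'SHA-256',
    new TextEncoder().encode(input),
  );
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

export async function POST(request: Request) {
  const { badgeToken, pin } = (await request.json()) as {
    badgeToken: string;
    pin: string;
  };

  const judge = judgesByBadge.get(badgeToken);
  if (!judge || (await sha256Hex(pin)) !== judge.pinHash) {
    // Deliberately vague: don't reveal whether the badge or the
    // PIN was the wrong half.
    return Response.json({ error: 'Invalid badge or PIN' }, { status: 401 });
  }

  // A real implementation would set a session cookie here, scoped
  // to this judge's assignments.
  return Response.json({ judgeId: judge.id });
}
```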

Staff side

Staff get a system that is easier to reason about while an event is happening.

That matters more than flashy visuals. Event-day software should reduce panic, not create it.

Tech stack

The current version is built with:

  • Next.js 15
  • React 19
  • TypeScript
  • Tailwind CSS v4
  • Drizzle ORM
  • Cloudflare D1 in production
  • better-sqlite3 in local development (the dual setup is sketched below)
  • OpenNext + Cloudflare Workers for deployment

That stack gave me a few things I wanted from the start:

  • modern app-router ergonomics
  • fast iteration for a UI-heavy internal system
  • simple relational data modeling for contests and scoring
  • a deploy target that stays lightweight for this kind of operational product
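That database pairing is worth one concrete sketch, because it is what keeps iteration fast: Cloudflare D1 in production, a plain SQLite file locally, one Drizzle schema over both. The binding name and file path below are placeholders, not the real config.

```ts
// db.ts: a sketch of the D1 / better-sqlite3 split, not the actual
// module. Both sides speak the same SQL dialect, so the Drizzle
// schema and queries stay identical across environments.
import { drizzle as drizzleD1 } from 'drizzle-orm/d1';
import { drizzle as drizzleSqlite } from 'drizzle-orm/better-sqlite3';
import Database from 'better-sqlite3';
import type { D1Database } from '@cloudflare/workers-types';

// In production the Worker receives a D1 binding from its
// environment; locally we just open a SQLite file.
export function getDb(env?: { DB: D1Database }) {
  return env?.DB
    ? drizzleD1(env.DB)
    : drizzleSqlite(new Database('local.db'));
}
```

The payoff is that schema and query changes can be exercised against a throwaway local file before they ever touch production data.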

Why the rewrite mattered

Software like this gets judged differently.

A landing page can be messy and still survive. An internal scoring system cannot.

If the UX is wrong here, you do not just get a slightly ugly interface. You get:

  • confused staff
  • delayed scoring
  • awkward judge experience
  • operational mistakes
  • reduced trust in the entire event flow

That is why I treated this like product design, not just front-end cleanup.

This is still an internal tool — but I did not build it like a throwaway one

BAT's scoring system is currently used internally, but I did not want to build it like a disposable back-office toy.

I wanted a base that could grow.

Right now it handles the association's immediate judging workflow. But structurally, it is already being shaped as something bigger: a more complete competition platform rather than a one-off scoring page.

That difference matters.

If you build an internal tool with clear models, proper permissions, and real workflow thinking, it stops being a patch. It becomes a platform.

A quick walkthrough of the rebuilt flow

Here is a short demo of the rebuilt scoring experience, from the competition-facing interface to the operational tools around it.

Demo video of the rebuilt scoring flow

Final thought

Sometimes the reason to rebuild a system is not that you were asked to.

Sometimes it is because the first version makes it painfully obvious that the people who will rely on it were never really considered.

That was the turning point here.

BAT did not need a scoring system that merely existed.

It needed one that people could actually use.