Food menus are incredibly simple. Or so I thought.

They still are, for the most part. But scaling a food menu to serve 4,000+ daily visitors with good uptime, handling data updates and UI optimisations, and taking in user feedback along the way taught me some great lessons in productization, coming from a mostly solo engineering background, so I’d like to document them here.

I wanted to build something that’d last after I graduate with minimal intervention from my end, and that was the major influence for some key technical and infrastructural decisions.

This is the biography of https://fc2.coolstuff.work.

Why Build It?

How do you solve the problem of distributing food court menus across a college campus?

When I joined, the way menus were distributed at our college’s biggest food court (Food Court 2, by The Indian Kitchen; henceforth FC2) and its subsidiaries throughout the campus was to print out an Excel sheet with the week’s menu and pin it on their pin boards.

Since these weren’t distributed to hostel blocks, the method of distribution would usually be:

  • Someone taking a photo
  • Posting it in friends’ groups
  • Sharing in hostel groups
  • Or in some cases, making the image of the menu the profile picture of the hostel group (looking at you, Block 17)

This gets spammy when sent across multiple groups, and because WhatsApp defaults to SD image resolution, dense tabular data rarely retained the text quality it needed to stay legible. Depending on the person taking the photo, it can be:

  • A well-lit, highly legible, crisp image
  • Or a motion-smoothed, poorly-cropped disaster

After being torn apart by a thousand paper cuts (across two years of eating at this FC), I decided that I had had enough. I needed a better solution, even if it would just be for me.

Prototype

As an engineer, I’ve been primed to look for what’s structured, parameterised, the works. The menu had dates, titles, categories. I went with JSON because it felt like the perfect format for this, being the least bloated in terms of extra syntax while giving me the right amount of modularity to really play with how it’d look with JavaScript.

For the very first prototype, it was just going to be a JSON file in a data/ folder, with content I’d OCR’d into my format. Then I realised that I could “Gemini” it, but LLMs tend to hallucinate quite a few menu items, which was rather funny. Either way, that gave me good enough data for a week, so I then had to build the rest of the website around it.
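For a sense of what that format looked like, here’s an illustrative sketch of the shape in TypeScript; the field names are representative rather than the exact schema.

// Illustrative shape of a week's menu JSON; field names are representative,
// not the exact production schema.
interface Meal {
  title: string; // "Breakfast", "Lunch", "Snacks", "Dinner"
  items: string[]; // individual dish names
}

interface DayMenu {
  date: string; // ISO date, e.g. "2025-08-04"
  meals: Meal[];
}

interface WeekMenu {
  weekId: string; // e.g. the Monday the week starts on
  days: DayMenu[];
}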

I could perhaps have gone with a hyper-optimal plain HTML/CSS (and barely any JS) combination, but my mindset when building this at 11:30 PM was to have a prototype that’d just work for me, one I could iterate on extremely rapidly, and NextJS, which I had grown used to with other projects in recent months, enabled just that.

Design

I love photography, videography, graphic design, yada yada. I’m in love with the visual medium, and enjoy looking at interfaces built for their various purposes. I wanted this to be ethereal and lighthearted, or as I remember phrasing it to myself - to feel like a “marshmallow”. When a person looks for a meal, they aren’t looking for everything throughout the day; it’s usually whatever they want to eat next. Mobile had to be the priority, with the current (ongoing or upcoming) meal given the most prominent space on screen.

I decided that a slideable carousel would be the smoothest UX for it, and I could highlight the meal with a box of a special gradient.

The initial UI still rendered all the menu items from one meal as a text list in a box, which I didn’t really enjoy, so I ended up putting each item into tiles.

It goes live

I already had an analytics endpoint set up for myself, so I added that to the site. Once I was fairly happy with the prototype, I shared it with my friends and classmates, and the site launched to a thousand visitors on day one as it propagated across various hostel groups. Then came a lot of feedback - most of it small bug fixes and requests for a theme switcher, which I obliged.

Many expressed curiosity over how I would be aggregating this data, and while my initial idea was crowdsourcing through a really “simple” GitHub PR flow, I felt that it’d alienate most of my audience, engineering college or not. I was willing to update it myself, but then I’d fall sick or be out of town, and there’d be no updates to the data.

That was a major architectural problem. One that I wouldn’t solve for a week, yes, but one that I needed to solve at some point.

Reaching out

I realised that the best way to ensure the data stays updated is to ask the people who decide it in the first place, but I was a tad apprehensive about whether they’d be willing to allow such a platform to exist at all. They were, to my absolute delight, welcoming of such a platform, but wanted some intuitive way to add data.

The Data Model

The very first version was largely static, and simply treated the JSON file with the newest date in its name as the default. As mentioned before, I’d have been accepting GitHub PRs or commits with data, but that’d be a massive technical barrier for someone not accustomed to such a workflow.

Initially, I worked on an automated system that would create GitHub PRs/commits via GitHub PATs (Personal Access Tokens), but that would be not only rather insecure but also quite suboptimal to maintain. I decided, then, that a client-server model would be best, with an open (non-authenticated) read API. The secondary benefit would be that anyone else could build clients against the same data endpoint without any hassle. I could just serve the latest JSON over the API endpoint, which would also make it convenient for me to swap out the data source.
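In spirit, the read side is little more than the sketch below; getLatestMenu() is a stand-in helper and the route layout is assumed, not the actual implementation.

// app/api/menu/route.ts - a minimal sketch of an open, read-only endpoint
import { NextResponse } from "next/server";
import { getLatestMenu } from "@/lib/menus"; // hypothetical helper that picks the newest stored week

export async function GET() {
  const menu = await getLatestMenu();
  return NextResponse.json(menu, {
    headers: {
      // Open to any client that wants to build on the same data
      "Access-Control-Allow-Origin": "*",
      // Let browsers/CDNs hold the response briefly
      "Cache-Control": "public, max-age=300",
    },
  });
}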

I then prototyped an initial demo for the TIK team to enter data on, but when I demoed it to them, they wanted something that they could directly upload their Excel sheets to. This was a brand new challenge, since their Excel sheets (one of which I’d received as a sample) did not follow any standardisation at all: cells merged for aesthetic reasons, no uniformity in the number of rows per meal, and so on.

I had to build a custom parser with a lot of fuzzy logic to adapt to whatever the XLSX contained. I decided that I’d first convert it to CSV, out of the sheer convenience afforded by the tools/libraries/functions built for manipulating that format. I’d then parse it neatly, and convert it from CSV to JSON at the very end, giving me the data to be served at the API endpoint. I could also keep all the historical data in their own JSONs, following a standardised naming scheme, with the latest delivered automatically through the endpoint (/api/menu).
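Condensed heavily, the pipeline looks something like the sketch below. The SheetJS xlsx library is an assumption made purely for illustration, and the real parser carries far more fuzzy handling for merged cells and uneven rows.

// Sketch of the XLSX → CSV → JSON pipeline
import * as XLSX from "xlsx";
import type { WeekMenu } from "./menu-types"; // the illustrative type from earlier (hypothetical path)

// Stand-in for the fuzzy parsing logic described above
declare function buildWeekMenuFromRows(rows: string[][]): WeekMenu;

export function parseMenuWorkbook(buffer: ArrayBuffer): WeekMenu {
  // Read the uploaded workbook and take its first sheet
  const workbook = XLSX.read(buffer, { type: "array" });
  const sheet = workbook.Sheets[workbook.SheetNames[0]];

  // Convert to CSV first - most of the cleanup is easier on plain rows of text
  const csv = XLSX.utils.sheet_to_csv(sheet);
  const rows = csv.split("\n").map((line) => line.split(",")); // naive split, fine for a sketch

  // The fuzzy rules then figure out which row starts a meal, skip decorative
  // merged cells, normalise item names, and so on
  return buildWeekMenuFromRows(rows);
}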

Writing Data

I faced two challenges next - authentication and writes. While reading from a file can be handled remotely, writing would require me to build some sort of upload/write solution. Authentication was fairly trivial, with NextAuth handling most of the grunt work. Could I have implemented it myself? Yes. Did I want to? No. Security is not trivial, and I did not want to make mistakes that people a million times more experienced than me had already addressed.
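The NextAuth wiring is boring in the best way. I won’t reproduce the exact setup here, but a credentials provider checked against server-side environment variables is one plausible minimal shape for it; the provider choice and variable names below are assumptions.

// pages/api/auth/[...nextauth].ts - a plausible minimal setup, not the exact production one
import NextAuth, { type NextAuthOptions } from "next-auth";
import CredentialsProvider from "next-auth/providers/credentials";

export const authOptions: NextAuthOptions = {
  providers: [
    CredentialsProvider({
      name: "Menu admin",
      credentials: {
        username: { label: "Username", type: "text" },
        password: { label: "Password", type: "password" },
      },
      async authorize(credentials) {
        // Compare against server-side env vars (placeholder names)
        if (
          credentials?.username === process.env.MENU_ADMIN_USER &&
          credentials?.password === process.env.MENU_ADMIN_PASS
        ) {
          return { id: "tik-team", name: "TIK Team" };
        }
        return null;
      },
    }),
  ],
};

export default NextAuth(authOptions);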

For writes, though, I would have to move to object storage. It’s important to mention here that the website was being hosted on Vercel. So I assumed, in my naivete, that the free tier of Vercel Blob would be as generous as Vercel’s hosting.

I ensured that every entry’s JSON would be uploaded to Vercel Blob storage, with the backend maintaining version control. Having built that, the data flow looked something like this (the upload step is sketched in code right after the list):

  • The TIK team authenticates on the platform given to them.
  • The XLSX is converted to a CSV, which is then parsed and structured into JSON (on device).
  • The JSON is uploaded to Vercel Blob Storage and the object is then linked to the API endpoint delivering content at /api/menu.
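Assuming the @vercel/blob SDK and an illustrative pathname scheme, the upload step is roughly the following.

// Sketch of the upload step; pathname scheme and helper name are illustrative
import { put } from "@vercel/blob";
import type { WeekMenu } from "./menu-types"; // illustrative type from earlier

export async function uploadWeekMenu(week: WeekMenu): Promise<string> {
  // Date-stamped keys keep older weeks around for version control
  const pathname = `menus/${week.weekId}.json`;

  const blob = await put(pathname, JSON.stringify(week), {
    access: "public",
    contentType: "application/json",
    addRandomSuffix: false, // keep deterministic names
  });

  // blob.url is what the /api/menu endpoint then gets pointed at
  return blob.url;
}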

That worked really well, for a week.

Free Quotas

I woke up one morning to an email from Vercel saying I was running out of the monthly quota of 10k operations. When I looked into it, it turned out that Vercel Blob storage counted every read and write alike against that same operations quota, which was disastrous for me, as the page was getting more than 2,000 visits per day at this point.

I had to re-engineer an entire two days of integration work in less than two days. Once again, I was on the lookout for another system.

Cloudflare

Here I should mention that I was already using Cloudflare for DNS, CDN and DDoS protection, so it only made sense for me to tap another of their services. I looked into Cloudflare R2, their object storage service, which has a much more generous free tier.

R2, unlike Vercel Blob, has separate quotas for reads and writes:

  • Class A Operations - Writes, or any operation that mutates state.
  • Class B Operations - Reads, or any operation that reads an existing state.

On the free tier, Cloudflare allows 1 million Class A operations and 10 million Class B operations per month, roughly a 1100x jump in quota availability for me. However, this was now a live website that I couldn’t afford to break, so I got to work re-engineering it.

The two APIs aren’t even remotely similar, so this set off a complete rewrite of the object handling and of the write/edit paths for the entire structure.

I also wanted to make historical data available via the API, which meant even more object listing and schema design so it could be queried appropriately.
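Since the app is hosted on Vercel rather than on Workers, there’s no native R2 binding; one way to reach R2 in that situation is its S3-compatible API, which is what the sketch below assumes. Bucket, prefix and env names are placeholders.

// Sketch of reading and listing menus from R2 over its S3-compatible API
import { S3Client, GetObjectCommand, ListObjectsV2Command } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

const BUCKET = process.env.R2_BUCKET ?? "fc2-menus"; // placeholder bucket name

// List every stored week so historical menus can be exposed via the API
export async function listWeekIds(): Promise<string[]> {
  const res = await r2.send(new ListObjectsV2Command({ Bucket: BUCKET, Prefix: "menus/" }));
  return (res.Contents ?? []).map((obj) => obj.Key!.replace("menus/", "").replace(".json", ""));
}

// Fetch a single week's JSON by its id
export async function getWeek(weekId: string): Promise<unknown> {
  const res = await r2.send(new GetObjectCommand({ Bucket: BUCKET, Key: `menus/${weekId}.json` }));
  return JSON.parse(await res.Body!.transformToString());
}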

This was a sensitive operation that I couldn’t really hand over to an AI either, though I did lean on Claude 4.0 Sonnet quite a bit to make sense of Cloudflare’s documentation. By the end of the day, working between classes, breaks, etc., I had it all working, along with a custom script to port the data over from Vercel Blobs to Cloudflare R2 to ensure a smooth handover.
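The porting script itself doesn’t need to be clever; a rough sketch of that kind of one-off job, under the same S3-compatible assumption and placeholder names as above, might look like this.

// migrate-blobs-to-r2.ts - rough sketch of a one-off porting script
import { list } from "@vercel/blob";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

async function migrate() {
  // Enumerate everything currently stored in Vercel Blob
  // (one page is enough at this scale; paginate with the cursor otherwise)
  const { blobs } = await list();

  for (const blob of blobs) {
    // Pull each JSON down from its public Blob URL...
    const body = await fetch(blob.url).then((res) => res.text());

    // ...and re-upload it to R2 under the same pathname
    await r2.send(
      new PutObjectCommand({
        Bucket: process.env.R2_BUCKET ?? "fc2-menus", // placeholder bucket name
        Key: blob.pathname,
        Body: body,
        ContentType: "application/json",
      })
    );
    console.log(`migrated ${blob.pathname}`);
  }
}

migrate().catch(console.error);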

I made the switch around midnight (which, by my analytics, has the fewest concurrent users - surprisingly fewer than the late-night hours of 3 AM to 5 AM, which I’d expected to be silent), and it worked!

Small Features

One of the requests I got repeatedly was to add a full menu viewer, aka a page where one could preview the entire week’s menu. I found it really interesting, so I implemented it as its own page view.

That’s really all there is to it.

Performance

Since this website would usually be viewed on phones (as people used it across the campus), I’d have to optimise it quite a bit - especially given bandwidth issues around campus and the wide variation in memory and performance across mobile devices. Recognising this, I saw two immediate optimisations: the data stays static for a week, so it could be cached; and so could the UI.

NextJS (and by extension, the React ecosystem) has absurd amounts of caching tooling built in, so I decided to leverage it. This was particularly useful as the user wouldn’t have to load the entire site’s content every single time, and it would sometimes even persist when the user had no internet connection. I also built in GPU acceleration for scrolling, but that was just a fun gimmick.
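Most of that is NextJS doing its thing, but the client-side layer of the idea is simple enough to sketch by hand; the storage key, endpoint and helper below are illustrative, not the actual implementation.

// Rough sketch of a week-long client-side cache for the menu JSON
// (runs in the browser, e.g. inside a "use client" component)
const CACHE_KEY = "fc2-menu-cache"; // illustrative key name
const ONE_WEEK_MS = 7 * 24 * 60 * 60 * 1000;

interface CachedMenu {
  fetchedAt: number;
  data: unknown; // the week's menu JSON
}

export async function loadMenu(): Promise<unknown> {
  // Serve from cache if it's less than a week old
  const raw = localStorage.getItem(CACHE_KEY);
  if (raw) {
    const cached: CachedMenu = JSON.parse(raw);
    if (Date.now() - cached.fetchedAt < ONE_WEEK_MS) return cached.data;
  }

  // Otherwise fetch fresh data and persist it for next time
  const data = await fetch("/api/menu").then((res) => res.json());
  localStorage.setItem(CACHE_KEY, JSON.stringify({ fetchedAt: Date.now(), data }));
  return data;
}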

Data only had to be loaded on Mondays, right? Right?

Issues

I have mostly addressed issues within their specific sections so far, but cache propagation turned out to be the biggest one. Data can be edited (at the JSON level) in Cloudflare R2, and that is reflected via the API immediately, but a user with a cached copy wouldn’t know it. There were two ways to go about it: either a webhook-type implementation, essentially a signal telling the client to refresh its data, or… it could just be a client-side button. What I did learn in the making of this button, though, is how browser caching works and how, due to browser-level optimisation (ref: the browser localStorage API - it’s quite a fun topic, and what many cached downloaders like MEGA are built on), it can be quite difficult to purge!

The first implementation I went with was a “smart refresh” - speculatively checking if new menu data was available:

export async function refreshDataIfNeeded(week: WeekMenu): Promise<boolean> {
  const upcomingMeal = findCurrentOrUpcomingMeal(week);

  if (upcomingMeal) {
    // We have upcoming meal data, no need to refresh
    return false;
  }

  // No upcoming meal data, let's refresh
  // Clear all cache first to ensure fresh data
  clearAllCache();

  // Get fresh data (the full version re-fetches here), then tell the caller
  // a refresh happened
  return true;
}

However, there would sometimes be mid-week changes, or the need to do a full reload without preserving anything (the smart refresh still preserves a lot of data). The best UX for that would be a button.

const handleRefresh = React.useCallback(async () => {
  setIsRefreshing(true);
  try {
    // First, fetch the latest weeks info to see if there's a new week
    const { weekIds, meta } = await fetchWeeksInfoFresh();
    setAllWeekIds(weekIds);
    setWeeksMeta(meta);

    // Determine which week to load and fetch fresh data
  } catch (error) {
    console.error('Failed to refresh data:', error);
  } finally {
    setIsRefreshing(false);
  }
}, [weekId, routingMode, router]);

Stability

The website now has over 4,000 users daily, and it works. I keep trying new things from time to time, like a recent UI simplification, but it’s been quite a learning experience. AWS had an outage recently, and I was worried that the service would go down because Vercel’s infrastructure is built on AWS Lambda, but it was quite pacifying to find that only the build pipeline broke; hosting remained unaffected.

In the future, I’d love to build some load balancing via Cloudflare Workers, purely for redundancy. If it’s ever needed, that is; it probably isn’t at this scale.

Learnings

It was quite nice to see a passion project be useful to so many people across the campus, and quite the learning experience for me too. While I’d played around quite a bit with web development, it had never been at this scale of users or pageview concurrency. I really wanted to keep this free, not just as a cost saver, but mainly because I didn’t want it to become a burden down the road, so it could seamlessly keep serving many more people.

You’ll almost always hit an edge case. I tested mostly across macOS and Linux, from old browsers to new (Firefox, Zen, Mullvad, Librewolf, Orion, Safari, Chrome, Brave, Vivaldi, Dia, Arc, ChatGPT Atlas, Perplexity Comet, Deta Surf, BrowserOS), and it taught me a lot about how poor web standards compliance is.

Apple, rather arbitrarily, considers 90% web standards compliance enough for an engine to be worthy of being offered as an option on their platforms, and a lot of basic features are also affected by their abstractions (Comet and Atlas showed remarkable behavioural variation in caching compared to their Chromium base, which I’m still rather curious about).

Analytics

I used Medama Analytics (self-hosted on my own home server), and a few data points were really interesting. Kindly note that this probably excludes users with aggressive ad blockers.

  • The full week menu only sees around 5% click-through.
  • 0.2% of visitors are reported to be from other countries. (I assume this is due to VPNs/Apple’s Private Relay, mainly because the countries are rather peculiar and not the usual scraping suspects - Oman, Iceland, etc.)
  • Chrome (and by extension the Chromium ecosystem) accounts for 70% of the traffic, but Safari is rather significant at 30%, possibly reflecting the relatively affluent nature of this campus’s audience.
  • 10% of all traffic comes from Google search, and 3% from Instagram - I have put in zero SEO optimisation, and the Instagram traffic is possibly just from messages, which is interesting. I don’t pay for either platform’s analytics to see origins.

The End

Thanks for reading till the end. This is an OSS project, and I welcome contributions. Check it out on GitHub.