Building a Solution in a Modern Stack
I’ve spent my career in the Microsoft stack. MS SQL, C#, ASPX… a world I know inside out. If I wanted to build a Formula 1 statistics app (F1DB) the “easy” way, I’d be finished by the weekend. I’m starting to realize, however, that my skill set could use a refresher. I’m rebuilding my mental models by swapping my veteran “monolith” toolkit for a Modern Nerd stack.
Stack Swap
- Database: Leaving MS SQL for PostgreSQL. (The logic stays, the syntax shifts.)
- Backend: Swapping C# for Python (FastAPI) and introducing an ORM via SQLModel.
- Frontend: Moving from ASPX and standard markup to Next.js, React, and Tailwind CSS.
- Infrastructure: Moving away from IIS toward Docker and Railway.
Why
This isn’t about the “best” tech; it’s about the learning curve. I understand how to build REST endpoints and manage relational data, but doing it in an ecosystem where I’m a “junior” again is the point. Formula 1 is a passion of mine, and I figured it would be a perfect subject. It’s perfectly relational and notoriously complex: the ideal test track for new tools.
I’m trading what I know for what I need to learn.
To get up to speed with a modern technology stack, I am leveraging AI tools throughout the learning process. I plan to document the journey here, sharing my experiences and progress. #collectivewisdom
Goals
I recognize learning a new stack will be challenging, so I chose a topic I enjoy: Formula 1. I figure if I enjoy the content, the process of building the tool will be more fun. I set some basic goals for the project:
- Build an F1 statistical database tool using a modern stack.
- Use AI to help me make quick, but intelligent decisions.
- Document the process in detail so I can share the experience and help others learn.
Getting Started
I started here…
| Prompt I am getting ready to build a database application for my own use. I am going to store stats from the Formula 1 racing series, so when I am watching a race, I will be able to pick a driver and a track and see things like the drivers fastest lap here, the fastest lap by anyone…race stats by driver, by track, by team, or by combinations of those criteria. |
| Response …and like all LLMs out there right now, Gemini started running with the idea, mocking up a schema, considering technical granularity, query complexity, etc. I had to slam the brakes on. I do not want this project to be AI-generated slop. I want to use AI to make me better. After properly chastising Gemini for going nuts, I added another prompt and started us at the “bottom” of the stack: storage. |
| Take Away Your prompts need to be concise and specific. |
Database Layer
I started too broadly, so I narrowed the discussion to the bottom of the stack. Where am I going to put all of the data that I want to mine? We went on to have a conversation about SQLite, Supabase, and others. Being vaguely familiar with the MERN stack, I suggested MongoDB as a potential option. Gemini gave some very insightful reasons why MongoDB would not be a good fit for this project.
| Prompt I will need a way to store the data, what are free database options that are easy to use? |
| Response MongoDB is generally not the best choice for this specific project. While MongoDB is popular, it is a “Document” database (NoSQL), and your Formula 1 project is inherently “Relational.” Why is SQL better for F1 stats? F1 data has strict, logical connections: a Result always belongs to a Driver, a Team, and a Race, and SQL is designed to “join” these separate pieces of information perfectly. Complex queries: in SQL, asking “What is Max Verstappen’s average finish position at tracks with more than 15 turns?” is a simple command; in MongoDB, that kind of cross-referencing (joining) is much more difficult to write. Data integrity: SQL ensures you don’t accidentally list a race result for a driver who doesn’t exist in your database, while MongoDB is “schema-less,” meaning it won’t stop you from making messy data entries. Math & analytics: SQL is built for the math you want (fastest, average, count). When would you use MongoDB? You would use it for unstructured data, such as a feed of live Tweets about a race, scraped news articles where every article has different fields, or telemetry sensor data where the format changes constantly. Verdict: stick with SQLite or Supabase. They use SQL, which matches the “Driver-Team-Track” relationship of Formula 1. |
| Take Away This, to me, was a great example of where an AI tool can help us as consultants. We know what we do well, so when moving laterally to another stack, we can make educated decisions on our own. With an AI tool, we can make those decisions a bit quicker: the LLM gives me a TL;DR, distilled down to just what I need right now. Mongo is popular. Is Mongo good for this project? NO, move on. |
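The “complex queries” point from that exchange is easy to demonstrate. Here is a minimal sketch using Python’s stdlib `sqlite3`; the schema and sample data are illustrative stand-ins, not the eventual F1DB schema:

```python
import sqlite3

# In-memory database with a toy version of the Driver/Track/Result relationships.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE drivers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tracks  (id INTEGER PRIMARY KEY, name TEXT, turns INTEGER);
CREATE TABLE results (driver_id INTEGER REFERENCES drivers(id),
                      track_id  INTEGER REFERENCES tracks(id),
                      finish_position INTEGER);
INSERT INTO drivers VALUES (1, 'Max Verstappen');
INSERT INTO tracks  VALUES (1, 'Singapore', 19), (2, 'Monza', 11);
INSERT INTO results VALUES (1, 1, 2), (1, 1, 1), (1, 2, 1);
""")

# "Average finish position at tracks with more than 15 turns"
# is a single join + aggregate in SQL.
row = conn.execute("""
    SELECT AVG(r.finish_position)
    FROM results r
    JOIN drivers d ON d.id = r.driver_id
    JOIN tracks  t ON t.id = r.track_id
    WHERE d.name = 'Max Verstappen' AND t.turns > 15
""").fetchone()
print(row[0])  # averages only the two Singapore results -> 1.5
```

Reproducing that cross-referencing in a document database would mean either denormalizing track data into every result or stitching collections together in application code.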
I continued discussing the database portion with the LLM to quickly identify a lightweight, inexpensive (free), common, and modern solution that meets my needs. The LLM provided the information I needed to decide swiftly. I do not want to overanalyze components right now or get caught up in chasing the best possible decision. I just want a viable product, not a perfect product.
I shared my experience using MySQL for a past side project. MySQL is powerful, easy to manage, and free, so I considered using it again. If I am going to use a SQL product, I might as well choose one I know. However, Gemini provided the right answer: one goal is to learn a new stack, not assemble one from familiar parts, so MySQL does not meet the criteria. After further discussion, we settled on a stack with PostgreSQL as my DB layer.
We then moved on and up the stack. I knew I needed a middleware layer to create my API wrapper, and then something on the front end to consume the API and present it in a framework that will do the CSS work for me. It was a short conversation, and the stack came together quickly.
| Layer | Recommendation | Skill You Gain |
|---|---|---|
| Database | PostgreSQL | Advanced SQL, JSONB, & Relational Design |
| Backend | FastAPI (Python) | High-performance Async APIs |
| Frontend | Next.js (React) | Server-side rendering & Modern UI |
| Hosting | Supabase | Cloud infrastructure & Auth |
We continued chatting to settle administrative details, such as how we will track to-do items and manage source code. I also wanted to containerize the project, so we discussed hosting options. We chose a Docker container and GitHub for the source.
Our updated stack…
| Component | Recommended Version | Note |
|---|---|---|
| Python | 3.14.x | The current stable branch (v3.14.3 released Feb 2026). |
| Node.js | 22.x (LTS) | Provides the most stable runtime for Next.js. |
| Next.js | 16.1.x | The latest stable version, including Turbopack (stable). |
| React | 19.2 | Bundled with Next.js 16; includes automatic memoization. |
| FastAPI | 0.133.x | Latest high-performance release (Feb 2026). |
| PostgreSQL | 17.x | Current stable Docker image version. |
| SQLModel | Latest | Bridges the gap between SQLAlchemy 2.x and Pydantic 2.x. |
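For the PostgreSQL row above, the container setup could be as simple as a `docker-compose.yml` along these lines. A hedged sketch: the service name, credentials, and volume name are placeholders, not the project's actual configuration.

```yaml
services:
  db:
    image: postgres:17        # matches the version pinned in the table above
    environment:
      POSTGRES_USER: f1db     # placeholder credentials; change for real use
      POSTGRES_PASSWORD: f1db
      POSTGRES_DB: f1db
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data   # persist data across container restarts
volumes:
  pgdata:
```

With this in place, `docker compose up -d` gives the backend a local Postgres to talk to before any cloud hosting decisions are final.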
We ended the night with a stack, and a plan to begin provisioning.
Session Summary: Feb 26-27, 2026
- Goal: Planning a custom F1 Statistics database and web application.
- Decisions: Moving from a traditional MS stack to a modern, modular architecture.
- Stack Confirmed:
  - Frontend: Next.js (React) + Tailwind CSS.
  - Backend: Python + FastAPI + SQLModel (ORM).
  - Database: PostgreSQL.
  - DevOps: Docker + Railway + GitHub Projects.
- Next Step: Environment provisioning (scaffolding Next.js, FastAPI, and spinning up Postgres via Docker).
Take away: I like to end every session with some sort of wrap-up, so I have a sense of what we accomplished and what we decided through the conversation. I am also now starting to have the LLM generate an anchor prompt based on where we are, in case I need to start a fresh session.