Static Generation for Podcasts: No Database Required

Most podcast websites are overbuilt. They pull episode data from a CMS, query a database at runtime, or call an API for every page load. For a show that releases once or twice a week, that's a lot of infrastructure doing very little work.

PanhaInsight's podcast section works differently. Every episode is a markdown file. At build time, Next.js reads those files and generates static HTML pages. The result is instant loads, zero database dependencies, and a deployment process that's indistinguishable from publishing a blog post.

The File-First Approach

Here's what a podcast episode looks like in this system:

---
title: "Ep1. ការរៀនតែក្នុងសាលាមិនគ្រប់គ្រាន់ទេ!—School is not the only place to study."
date: 2026-05-04
youtubeUrl: "https://youtu.be/XEaIZw1zbFc"
summary: "School never been a place to learn, it is a place to share"
tags: ["study"]
---

That's it. No database schema. No API endpoints. No CMS fields. The episode's metadata lives in YAML frontmatter, and the markdown body holds show notes or a transcript if I want to include one.

The file lives in content/podcasts/ with a slug derived from the filename. youtube-episode-1.md becomes /podcast/youtube-episode-1.
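The filename-to-slug convention can be sketched as a one-line helper. This is a hypothetical sketch; `slugFromFilename` is an assumed name, not a function from the actual codebase:

```typescript
// Hypothetical helper: derive the route slug from a markdown filename.
// content/podcasts/youtube-episode-1.md -> "youtube-episode-1"
function slugFromFilename(filename: string): string {
  // Strip the .md extension; the remaining basename is the slug.
  return filename.replace(/\.md$/, "");
}
```

The route segment then prefixes the slug, so `youtube-episode-1` is served at `/podcast/youtube-episode-1`.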

How generateStaticParams Works

In Next.js App Router, dynamic routes can be fully static if you tell the framework what paths exist ahead of time. I do this with a single function:

// app/podcast/[slug]/page.tsx
export async function generateStaticParams() {
  const episodes = await getAllEpisodes();
  return episodes.map((episode) => ({
    slug: episode.slug,
  }));
}

At build time, Next.js calls this function and receives an array of every podcast slug. It then renders each page once, producing static HTML files. When a visitor navigates to /podcast/youtube-episode-1, they get a pre-built page from the CDN. No server work. No database hit. No JavaScript required for the initial render.

Reusing the Same Pipeline

The blog and podcast sections share the same markdown processing pipeline:

  1. Read the file from content/posts/ or content/podcasts/
  2. Parse frontmatter with gray-matter for metadata
  3. Convert markdown to HTML using remark and rehype
  4. Sanitize the output with rehype-sanitize
  5. Render in a server component that ships only HTML to the client

The only difference is the page layout. Blog posts get a reading-optimized template with typography tuned for long-form text. Podcast episodes get an embedded player, episode metadata, and a layout designed around the YouTube video.

This reuse is why adding podcasts didn't require a new architecture. The same getAllContent() and getContentBySlug() functions handle both content types. A new directory and a new route segment were all that changed.
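The content-type-agnostic loading described above can be sketched by parameterizing only the directory. `ContentType` and `contentPath` are assumed names for illustration; the article's actual `getAllContent()` and `getContentBySlug()` are not shown in the source:

```typescript
// Hypothetical sketch: one loader serves both content types because
// the directory is the only thing that differs between them.
import fs from "node:fs";
import path from "node:path";

type ContentType = "posts" | "podcasts";

function contentPath(type: ContentType, slug: string): string {
  return path.join("content", type, `${slug}.md`);
}

// Both blog posts and podcast episodes flow through the same read.
function getContentBySlug(type: ContentType, slug: string): string {
  return fs.readFileSync(contentPath(type, slug), "utf8");
}
```

The layout divergence happens later, in the route's page component, not in the loading layer.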

The Podcast-Specific Piece

Episodes need a video player, which blog posts don't. The template handles this by reading the youtubeUrl from frontmatter and rendering an embedded iframe:

{episode.youtubeUrl && (
  <div className="video-wrapper">
    <iframe
      src={embedUrl(episode.youtubeUrl)}
      title={episode.title}
      allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
      allowFullScreen
    />
  </div>
)}

Because the page is a server component, this HTML is generated at build time. The client receives a complete page with the player already in place. There's no loading state, no skeleton screen, no JavaScript hydration needed for the core content.
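The `embedUrl` helper referenced in the snippet isn't shown in the article, but a plausible sketch converts the two common YouTube URL shapes into the embed form. This is an assumed implementation, not the site's actual code:

```typescript
// Hypothetical embedUrl: map a share or watch URL to the embed URL.
// https://youtu.be/<id>           -> id is in the pathname
// https://www.youtube.com/watch?v=<id> -> id is in the "v" query param
function embedUrl(url: string): string {
  const u = new URL(url);
  const id =
    u.hostname === "youtu.be"
      ? u.pathname.slice(1)
      : u.searchParams.get("v") ?? "";
  return `https://www.youtube.com/embed/${id}`;
}
```

Doing this conversion at build time means the iframe src is already correct in the shipped HTML; no client-side URL parsing is needed.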

Why This Beats a Database for Small-Scale Podcasts

A database makes sense when you have:

  • Multiple editors publishing simultaneously
  • Content that changes frequently after publication
  • Complex querying, filtering, or search requirements
  • User-generated content or comments
  • A need for real-time updates

A personal podcast has none of these. Episodes are written once, published once, and rarely edited afterward. A file system is a perfectly good database for this workload. It's faster to read, simpler to back up, and requires no connection management.

The static generation approach also means:

  • Zero runtime dependencies. If the build succeeds, the site works. There are no database outages, no connection pool exhaustion, no query timeouts.
  • Instant global availability. Static HTML on a CDN loads in milliseconds from anywhere. No server region to choose, no cold starts.
  • Version-controlled content. Every episode is in git. I can see when it was published, what changed, and revert if needed. Try doing that with a CMS.
  • Offline editing. I can write and edit episodes without an internet connection. Push when I'm back online.

Scaling Limits (And When I'd Change)

This approach has real limits. If I were running a podcast network with fifty shows, daily episodes, and a team of producers, files wouldn't scale. I'd need a CMS, a database, and probably a search index.

But for one show, released weekly, with one person handling everything? Files are more than enough. The complexity of a database or CMS would add overhead without adding value. It's a solution looking for a problem.

The Deployment Flow

Publishing a new episode is identical to publishing a blog post:

  1. Create content/podcasts/episode-5.md
  2. Add frontmatter with title, date, YouTube URL, and tags
  3. git add, git commit, git push
  4. Vercel builds the site, generateStaticParams picks up the new slug, and the episode is live

The entire process takes under a minute. There's no CMS to log into, no form to fill out, no publish button to click. The commit is the publish action.

The Broader Pattern

This isn't really about podcasts. It's about recognizing when your problem is small enough that the simple solution is the correct one.

Static generation with file-based content works for:

  • Personal blogs
  • Podcast episode pages
  • Documentation sites
  • Small product catalogs
  • Portfolio sites
  • Conference schedules

It stops working when you need real-time data, user-specific content, or frequent updates from non-technical users. The skill is knowing which side of that line your project lives on — and not building for the other side just because it's what "serious" projects do.

PanhaInsight's podcast section is serious enough. It loads fast, it's easy to maintain, and it lets me focus on making episodes instead of managing infrastructure. That's the right trade-off.


The best architecture is the one that disappears. You shouldn't notice your podcast's infrastructure — you should notice the episodes.