My Twitch Clone: 5 Brutal Next.js/Supabase Fixes (2025)
Building a Twitch clone with Next.js & Supabase? I hit 5 brutal roadblocks. Here are the real-world fixes for scaling chat, video, presence, and more in 2025.
Alex Carter
Full-stack developer specializing in real-time applications and modern web frameworks.
It started as the perfect dream stack. Next.js for a lightning-fast, SEO-friendly frontend. Supabase for a batteries-included, Postgres-powered backend. Building a Twitch clone felt... easy. Almost too easy.
The initial prototypes were glorious. Real-time chat popped up instantly. User auth was a breeze. I was deploying to Vercel with a single command. I felt like a 10x developer riding the wave of modern web tooling. Then, I tried to scale it. Not even to Twitch levels. Just to a few hundred concurrent users. And that's when the dream turned into a brutal, performance-debugging nightmare.
The beautiful abstractions that made Supabase and Next.js so magical started to crack under pressure, revealing the complex machinery underneath. If you're embarking on a similar real-time project, save yourself weeks of pain. Here are the five most brutal problems I faced and the fixes that actually worked in 2025.
1. The Real-time Chat Meltdown: When RLS Becomes the Bottleneck
The official Supabase docs make setting up a real-time chat app look trivial. Create a `messages` table, slap on some Row Level Security (RLS) policies to ensure users can only see messages for the channel they're in, and subscribe on the client. Done.
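In code, that naive setup is just a `postgres_changes` subscription filtered by room. A minimal sketch (assuming an existing `supabase` client and a `renderMessage` helper of your own) looks like this:
// The naive approach: subscribe to INSERTs on the messages table.
// Every insert is checked against RLS for every subscriber before it is delivered.
const naiveChannel = supabase
  .channel('room-messages')
  .on(
    'postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'messages', filter: 'room_id=eq.some-room-id' },
    (payload) => renderMessage(payload.new)
  )
  .subscribe();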
The Brutal Problem: This approach falls apart with any meaningful concurrency. Every single client subscribed to the `messages` table forces Postgres to evaluate the RLS policy for every single message inserted into that table. In a chatroom with 500 users, one new message triggers 500 RLS policy checks. The database CPU usage skyrocketed, and chat messages started appearing with a noticeable, unacceptable delay.
The Fix: Decouple Chat from the Database with Realtime Channels
The key insight is that not every real-time event needs to be a database read. Supabase's Realtime Server is more than just a Postgres-to-client pipe. We can use its broadcast feature to send ephemeral messages directly between clients, bypassing the database for the initial delivery.
- Authenticate Once: When a user joins a chat room, they call a Supabase Edge Function. This function verifies their permissions (are they allowed in this room?) and, if successful, uses the Realtime admin client to subscribe the user to a specific channel topic, like `chat:stream-channel-id`.
- Broadcast, Don't Insert (at first): When a user sends a message, the client sends it directly to the Realtime channel, not to the database. All other authenticated clients on that topic receive it instantly. No database interaction, no RLS, no bottleneck.
- Batch-Persist Asynchronously: The client that sent the message (or a trusted Edge Function) is also responsible for persisting it. To avoid thrashing the DB, you can batch these inserts. For example, an Edge Function could listen to the broadcast and insert messages into the `messages` table every few seconds. This way, chat history is saved, but it's decoupled from the live experience (a sketch of this batching follows the client example below).
// Client-side: Sending a message
// Note: This is a conceptual example
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY
);

// Join the room's broadcast topic; the Edge Function has already verified we're allowed in
const channel = supabase.channel('chat:some-room-id');
channel.subscribe();

function sendMessage(text) {
  // Broadcast to all clients on the channel; no database write on this path
  channel.send({
    type: 'broadcast',
    event: 'new_message',
    payload: { content: text, author: 'user-name' },
  });

  // Asynchronously save the message to the DB for history.
  // Fire-and-forget: the .then() kicks off the request without blocking the real-time path.
  supabase
    .from('messages')
    .insert({ content: text, room_id: 'some-room-id' })
    .then(({ error }) => {
      if (error) console.error('Failed to persist message', error);
    });
}
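If per-message inserts are still too chatty for your traffic, here is what the batching from step 3 can look like on the sending client; the Edge Function variant applies the same idea server-side. The buffer and the five-second flush interval are illustrative choices, not a prescribed API.
// Client-side: batch-persist chat history instead of inserting per message
const messageBuffer = [];

function queueForPersistence(message) {
  messageBuffer.push(message);
}

// Flush the buffer as a single multi-row insert every few seconds
setInterval(() => {
  if (messageBuffer.length === 0) return;
  const batch = messageBuffer.splice(0, messageBuffer.length);
  supabase
    .from('messages')
    .insert(batch) // one round-trip for the whole batch
    .then(({ error }) => {
      if (error) console.error('Failed to persist chat batch', error);
    });
}, 5000);
In this variant, `sendMessage` calls `queueForPersistence({ content: text, room_id: 'some-room-id' })` instead of inserting directly.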
2. Video Streaming Chaos: Supabase Storage Isn't a Media Server
My naive first attempt at video streaming involved using a user's browser to capture their screen with WebRTC and then trying to push HLS segments directly to Supabase Storage. I thought, "It's just files, right?" Wrong. So, so wrong.
The Brutal Problem: Supabase Storage is an S3-compatible object store. It's fantastic for user avatars, VODs, and other static assets. It is not designed for the low-latency, high-throughput demands of live video streaming. I faced constant buffering and high latency (viewers were 30-45 seconds behind the streamer), and I had no way to handle transcoding for different network conditions.
The Fix: Use a Dedicated Media Server (like LiveKit)
You need to separate your application's "control plane" from its "data plane".
- Control Plane (Supabase): This is where you manage users, authentication, stream metadata (title, category), and stream keys. Supabase is perfect for this.
- Data Plane (LiveKit, Mux, etc.): This is a dedicated service that handles the heavy lifting of video: ingesting the WebRTC stream from the broadcaster, transcoding it into multiple bitrates (e.g., 1080p, 720p, 480p), and distributing it globally via a CDN as HLS or DASH.
The integration looks like this:
- A user clicks "Go Live" in the Next.js app.
- The app calls a Supabase Edge Function.
- The Edge Function validates the user, generates a secure token for LiveKit using the LiveKit server SDK, and returns it to the client (sketched at the end of this section).
- The client uses this token to connect to LiveKit and start sending video.
- Viewers fetch the stream URL (provided by LiveKit) from your Next.js app. Supabase still handles the chat, presence, and other metadata around the stream.
This architecture is robust, scalable, and how real-world video platforms are built. Don't try to reinvent the media server.
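To make the hand-off concrete, here is a rough sketch of the token-issuing Edge Function from the flow above. It assumes the `livekit-server-sdk` v2 npm package (where `toJwt()` is async) runs under the Edge Functions npm support, and that `LIVEKIT_API_KEY` and `LIVEKIT_API_SECRET` are configured as function secrets; treat it as a starting point rather than a drop-in implementation.
// supabase/functions/livekit-token/index.ts (conceptual sketch)
import { createClient } from 'npm:@supabase/supabase-js@2';
import { AccessToken } from 'npm:livekit-server-sdk@2';

Deno.serve(async (req) => {
  // Verify the caller using the Supabase JWT forwarded by the client
  const jwt = req.headers.get('Authorization')?.replace('Bearer ', '') ?? '';
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_ANON_KEY')!
  );
  const { data: { user } } = await supabase.auth.getUser(jwt);
  if (!user) return new Response('Unauthorized', { status: 401 });

  const { roomName } = await req.json();
  // (In a real app, also check that this user owns the room before granting publish rights.)

  // Mint a short-lived LiveKit token that lets this user publish to their room
  const at = new AccessToken(
    Deno.env.get('LIVEKIT_API_KEY')!,
    Deno.env.get('LIVEKIT_API_SECRET')!,
    { identity: user.id, ttl: '10m' }
  );
  at.addGrant({ room: roomName, roomJoin: true, canPublish: true, canSubscribe: true });

  return new Response(JSON.stringify({ token: await at.toJwt() }), {
    headers: { 'Content-Type': 'application/json' },
  });
});
The Next.js client then calls `supabase.functions.invoke('livekit-token', { body: { roomName } })` and hands the returned token to the LiveKit client SDK.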
3. The Phantom Viewer Problem: Inaccurate Real-time Presence
Supabase's built-in Realtime presence is great for showing a list of who is currently connected to a channel. I used it to power the "Users in Chat" list. But I made a critical mistake: I assumed the presence count was the same as the viewer count.
The Brutal Problem: A user can be "present" in the chat channel with the browser tab open in the background, but not actually be watching the video stream. This led to inflated viewer counts, which is a cardinal sin for a streaming platform. Furthermore, as the channel size grew, the sheer number of `join` and `leave` events became noisy and resource-intensive.
The Fix: A Server-Side Heartbeat System
To get an accurate count of active viewers, you need confirmation that they are still watching. A heartbeat system is a classic and reliable way to do this.
- Create a Table: Make a simple table called `active_viewers` with columns like `stream_id`, `user_id` (or a session ID), and `last_seen` (a timestamp).
- Client-Side Heartbeat: In your video player component, use a `setInterval` to call a Supabase Edge Function every 30 seconds. This is the "heartbeat".
- Edge Function Logic: The Edge Function receives the heartbeat and performs an `UPSERT` on the `active_viewers` table, updating the `last_seen` timestamp for that user and stream. An `UPSERT` is critical for performance; it's a single database operation (see the sketch at the end of this section).
- Cleanup Crew: A scheduled Supabase Edge Function (a cron job) runs every minute to delete records from `active_viewers` where `last_seen` is older than, say, 60 seconds. This cleans up disconnected or idle users.
- The Real Count: Your viewer count is now a simple, fast `SELECT count(*) FROM active_viewers WHERE stream_id = ?`. You can expose this via another Edge Function or an RPC call.
This is far more accurate and scales much better than relying on client-side presence events alone.
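For reference, here is roughly what the heartbeat pieces look like, assuming the `active_viewers` table above has a unique constraint on `(stream_id, user_id)`; the function name and 30-second interval are illustrative.
// Client-side: ping the heartbeat Edge Function every 30 seconds while the player is mounted
// (streamId comes from the player's props)
const heartbeat = setInterval(() => {
  supabase.functions.invoke('heartbeat', { body: { streamId } });
}, 30000);
// Remember to clearInterval(heartbeat) when the player unmounts

// supabase/functions/heartbeat/index.ts (conceptual sketch)
import { createClient } from 'npm:@supabase/supabase-js@2';

Deno.serve(async (req) => {
  const jwt = req.headers.get('Authorization')?.replace('Bearer ', '') ?? '';
  const admin = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  );

  const { data: { user } } = await admin.auth.getUser(jwt);
  if (!user) return new Response('Unauthorized', { status: 401 });

  const { streamId } = await req.json();

  // A single UPSERT: insert on the first beat, refresh last_seen on every beat after that
  const { error } = await admin.from('active_viewers').upsert(
    { stream_id: streamId, user_id: user.id, last_seen: new Date().toISOString() },
    { onConflict: 'stream_id,user_id' }
  );

  return new Response(null, { status: error ? 500 : 204 });
});
The cleanup cron from step 4 is then just a scheduled `DELETE` of rows whose `last_seen` is older than 60 seconds, and the viewer count is the `count(*)` query from step 5.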
4. The Query of Death: Taming Complex Joins in PostgreSQL
As features grew, so did my data relationships. A common page was the main feed, which needed to show: "For the current user, get the streamers they follow, show their live status, and grab their most recent VOD thumbnail."
The Brutal Problem: This translated into a monstrous, multi-level `JOIN` across `users`, `follows`, `streams`, and `vods` tables, with a `LATERAL JOIN` or window function to get the "most recent" VOD. It was slow on the first load and impossible to cache effectively. This single query, my "Query of Death," was bringing the app to its knees.
The Fix: Denormalization and Postgres Functions (RPC)
The relational purist in me screamed, but the pragmatist won. For read-heavy applications, you cannot be afraid to denormalize data or create optimized data access layers.
Here's a comparison of the approaches:
| Approach | Description | Pros | Cons |
|---|---|---|---|
| The Bad: Complex Client-Side Joins | The Next.js server makes multiple Supabase calls and joins the data in JavaScript. | Easy to write initially. | Very slow, multiple network round-trips, high server load. |
| The Better: Postgres Views | Create a `VIEW` in Postgres that pre-defines the complex join. The client just queries the view. | Hides complexity, single API call. | The view still executes the full join every time it's queried. |
| The Best: Postgres Functions (RPC) | Create a database function `get_user_feed(user_id)` that performs the logic and returns a set of `json` objects. | Incredibly fast, logic lives next to the data, minimal data transfer, easy to call from the Supabase client. | Requires writing SQL/plpgsql. |
Creating a Postgres function that returns exactly the JSON shape my frontend needs was the ultimate fix. The database does what it's best at—querying and relating data—and sends a perfectly formed, lightweight payload to the client.
-- Simplified example of the RPC function in Postgres
create function get_user_feed(p_user_id uuid)
returns setof json
as $$
begin
  return query
  select json_build_object(
    'streamer_name', u.username,
    'is_live', s.is_live,
    'latest_vod_thumbnail', (
      select v.thumbnail_url
      from vods v
      where v.user_id = s.user_id
      order by v.created_at desc
      limit 1
    )
  )
  from follows f
  join streams s on f.followed_id = s.user_id
  join users u on s.user_id = u.id
  where f.follower_id = p_user_id;
end;
$$ language plpgsql;
// Client call
const { data, error } = await supabase.rpc('get_user_feed', { p_user_id: userId });
5. Caching vs. Reality: Next.js's Identity Crisis with Live Data
I love Next.js's caching strategies. ISR (Incremental Static Regeneration) is brilliant for content that changes periodically. I set up my stream pages (`/stream/[username]`) to revalidate every 60 seconds.
The Brutal Problem: What happens when a user loads the page 5 seconds after it was cached? They see a stale viewer count, an incorrect "LIVE" status, and a chat that's a minute old. For a live platform, this is disorienting and feels broken. Caching the entire page was fundamentally at odds with the real-time nature of the content.
The Fix: The Static Shell + Dynamic Hydration Model
You have to separate the static parts of the page from the dynamic parts. The server renders a static "shell" of the page, and the client is responsible for fetching and rendering the live data.
- Static Shell (ISR/SSG): The Next.js page is built at build time or revalidated periodically. It contains all the non-real-time data: the streamer's bio, their social links, panels, and a gallery of past VODs. This makes the initial page load incredibly fast and great for SEO. The video player and chat are just empty containers.
- Client-Side Fetching (SWR/React Query): Once the page loads in the browser, a main client-side React component for the stream page kicks in. It uses a client-side data fetching library like SWR to immediately fetch the *current* live status (see the sketch after this list).
- Conditional Rendering:
- If the streamer is live, the component loads the video player, connects to the chat and presence channels (using our fixes from above!), and starts the heartbeat.
- If the streamer is offline, it can show a "Streamer is Offline" message or the latest VOD.
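A sketch of that client island, assuming SWR; the status endpoint is a placeholder of your own, and `VideoPlayer` / `Chat` stand in for your real components.
'use client';

// Client component rendered inside the statically generated /stream/[username] shell
import useSWR from 'swr';
import { VideoPlayer } from '@/components/VideoPlayer'; // your own player component (placeholder path)
import { Chat } from '@/components/Chat';               // your own chat component (placeholder path)

const fetcher = (url) => fetch(url).then((res) => res.json());

export default function StreamClient({ username }) {
  // Poll the live status; the static shell around this component never goes stale
  const { data, isLoading } = useSWR(`/api/streams/${username}/status`, fetcher, {
    refreshInterval: 15000,
  });

  if (isLoading) return <p>Checking stream status...</p>;

  if (!data?.isLive) {
    // Offline: show a banner (or the latest VOD) instead of the player
    return <p>{username} is offline.</p>;
  }

  // Live: mount the player, chat, presence, and the viewer heartbeat from fix #3
  return (
    <>
      <VideoPlayer playbackUrl={data.playbackUrl} />
      <Chat roomId={data.streamId} />
    </>
  );
}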
This hybrid approach gives you the best of both worlds: the instant perceived performance and SEO benefits of a static site, combined with the liveness and accuracy required for a real-time application.