A behind-the-scenes look at our AI-powered curation process.
Every morning at 7:30 UTC, our automated workflow kicks off. Here's what happens, step by step:
Our n8n workflows query YouTube's Data API for each of our 12+ specialized feeds. We search for recent videos (1-3 days old) matching specific topics like "LangChain Framework," "Claude AI Development," "Terraform IaC," and more.
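Here's roughly what one of those search calls looks like in TypeScript. This is a minimal sketch: the `searchRecentVideos` helper, the `YT_API_KEY` variable, and the 25-result cap are illustrative, but the endpoint and query parameters are the real YouTube Data API v3.

```typescript
// Hypothetical sketch of a single feed's search query. Only videos
// published within the last 3 days are requested.
interface SearchItem {
  id: { videoId: string };
  snippet: {
    title: string;
    description: string;
    channelTitle: string;
    publishedAt: string;
  };
}

async function searchRecentVideos(topic: string): Promise<SearchItem[]> {
  const publishedAfter = new Date(
    Date.now() - 3 * 24 * 60 * 60 * 1000,
  ).toISOString();
  const params = new URLSearchParams({
    part: "snippet",
    q: topic,
    type: "video",
    order: "date",
    publishedAfter,
    maxResults: "25",
    key: process.env.YT_API_KEY!,
  });
  const res = await fetch(
    `https://www.googleapis.com/youtube/v3/search?${params}`,
  );
  if (!res.ok) throw new Error(`YouTube API error: ${res.status}`);
  const data = await res.json();
  return data.items as SearchItem[];
}
```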
Each video is analyzed by OpenAI's GPT-4o-mini using a specialized prompt that evaluates criteria such as technical depth, practical implementation examples, and overall teaching quality (the example prompt further down shows the full shape).
Only videos scoring ≥7/10 make it to our daily lists. This filters out clickbait, low-effort tutorials, and off-topic content.
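For a sense of the mechanics, here's a simplified sketch of the scoring call. The prompt wording, the JSON response shape, and the `scoreVideo` helper are illustrative assumptions; the endpoint and the `gpt-4o-mini` model name are the real OpenAI API.

```typescript
// Hypothetical sketch: ask gpt-4o-mini for a 1-10 score as JSON.
async function scoreVideo(title: string, description: string): Promise<number> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content:
            "You are assessing a YouTube video. Rate it 1-10 for technical " +
            'depth, practical examples, and teaching quality. Reply as JSON: {"score": <integer>}',
        },
        { role: "user", content: `Title: ${title}\nDescription: ${description}` },
      ],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content).score;
}

// A video survives curation only if scoreVideo(...) returns >= 7.
```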
Before adding videos to the daily list, we check if they've already been curated in the past 30 days. This ensures you're always seeing fresh content without repetition.
Approved videos are stored in our PostgreSQL database via a custom PL/pgSQL function called ingest_daily_list(). This function handles all the complexity: upserting videos, linking them to feeds, assigning categories, and maintaining data integrity.
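Calling it from Node looks roughly like this. The single jsonb-argument signature and the payload shape are assumptions for illustration; only the function name comes from our actual schema.

```typescript
import { Client } from "pg";

// Hypothetical sketch of invoking the PL/pgSQL entry point.
async function ingest(feed: string, videos: object[]): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // One call handles upserts, feed links, and categories atomically.
    await client.query("SELECT ingest_daily_list($1::jsonb)", [
      JSON.stringify({ feed, videos }),
    ]);
  } finally {
    await client.end();
  }
}
```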
After all feeds are collected, a second AI workflow generates a daily summary highlighting the standout videos and themes across the day's lists.
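In sketch form, the summary step just feeds the day's accepted titles back into gpt-4o-mini; the prompt wording and the `summarizeDay` helper are illustrative.

```typescript
// Hypothetical sketch of the daily summary generation.
async function summarizeDay(titles: string[]): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content: "Write a 2-3 sentence digest of today's curated videos.",
        },
        { role: "user", content: titles.join("\n") },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```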
When new content is ingested, PostgreSQL emits a NOTIFY event. Our Next.js app listens for these events and automatically revalidates cached pages. This means you always see the latest content without manual refreshes.
We use n8n (an open-source workflow automation tool) to orchestrate the entire curation process. Our main workflows cover feed collection, AI quality assessment, duplicate detection, and daily summary generation.
Our AI assessment uses topic-specific prompts tailored to each feed. For example:
"You are assessing a YouTube video about agentic AI systems. Rate it 1-10 based on: technical depth of agent architectures, practical implementation examples, discussion of multi-agent coordination, production deployment insights, and overall teaching quality..."
Instead of polling for changes or rebuilding pages on a schedule, we use PostgreSQL's LISTEN/NOTIFY feature. When new content is ingested, the database notifies the Next.js app, which immediately revalidates affected pages. This keeps the site fast while ensuring content freshness.
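A minimal sketch of the listener side, assuming a channel named `content_updated` and a JSON payload with the feed slug (both illustrative). `LISTEN`/`NOTIFY` and the `notification` event are real PostgreSQL and node-postgres features; in practice the revalidation runs inside a Next.js server context.

```typescript
import { Client } from "pg";
import { revalidatePath } from "next/cache";

// Hypothetical listener: react to NOTIFY events from the database.
export async function startContentListener(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  await client.query("LISTEN content_updated");

  client.on("notification", (msg) => {
    // e.g. NOTIFY content_updated, '{"feed":"agentic-ai"}' on the DB side
    const { feed } = JSON.parse(msg.payload ?? "{}");
    revalidatePath(`/feeds/${feed}`); // re-render the affected feed page
    revalidatePath("/");              // and the home page's daily lists
  });
}
```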
If a feed returns zero videos (common for niche topics), the workflow gracefully skips it and logs the event. No errors, no failed runs—just a clean skip.
If all videos from a search fail the ≥7/10 quality threshold, the feed is skipped for that day. We'd rather show nothing than promote low-quality content.
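Both guards boil down to a couple of early returns. A sketch, reusing the hypothetical `searchRecentVideos` and `scoreVideo` helpers from the earlier examples:

```typescript
// Hypothetical per-feed processing with both skip guards.
async function processFeed(topic: string): Promise<void> {
  const videos = await searchRecentVideos(topic);
  if (videos.length === 0) {
    console.log(`[skip] no recent videos for "${topic}"`);
    return; // a clean skip, not a failed run
  }

  const approved = [];
  for (const v of videos) {
    const score = await scoreVideo(v.snippet.title, v.snippet.description);
    if (score >= 7) approved.push({ ...v, score });
  }

  if (approved.length === 0) {
    console.log(`[skip] nothing cleared the 7/10 bar for "${topic}"`);
    return; // better an empty feed than a low-quality one
  }
  // ...approved videos continue on to ingest_daily_list()
}
```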
Our duplicate detection checks both video ID and external ID to ensure the same video isn't added multiple times, even if it appears in different feeds.
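In sketch form, assuming a `videos` table with `video_id`, `external_id`, and `curated_at` columns (illustrative names; the real schema may differ):

```typescript
import { Client } from "pg";

// Hypothetical 30-day duplicate check against both identifiers.
async function isDuplicate(
  client: Client,
  videoId: string,
  externalId: string,
): Promise<boolean> {
  const { rowCount } = await client.query(
    `SELECT 1 FROM videos
      WHERE (video_id = $1 OR external_id = $2)
        AND curated_at > now() - interval '30 days'
      LIMIT 1`,
    [videoId, externalId],
  );
  return (rowCount ?? 0) > 0;
}
```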
We add 200ms delays between YouTube API calls to avoid hitting rate limits. With 12 feeds and ~6 queries per feed (~72 search calls a day, at 100 quota units each), we stay under YouTube's default 10,000-unit daily quota.
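The pacing itself is a one-liner; the `runAllQueries` wrapper below is illustrative.

```typescript
// Simple fixed delay between consecutive YouTube API calls.
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function runAllQueries(topics: string[]): Promise<void> {
  for (const topic of topics) {
    await searchRecentVideos(topic); // hypothetical helper sketched earlier
    await sleep(200);                // 200ms gap keeps us under rate limits
  }
}
```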
We don't manipulate results. Every video that scores ≥7/10 is included—no editorial filtering, no sponsor preferences, no algorithmic bias toward engagement metrics.
We cite our sources. Every video links back to the original creator. We're a discovery tool, not a content host.
We'll show the quality score. Seeing exactly how each video was rated is coming soon as a visible feature.
We're open about costs. Running DailyDevLists costs approximately $3-5 per month (OpenAI API usage). We're exploring sustainable monetization options (premium features, sponsorships) but will always keep the core daily lists free.
See the results of our AI curation in action.
Browse Today's Lists