
Data Pipeline Walkthrough

The data flows through the pipeline as follows; minimal sketches of each step appear after the list:

  1. Cron Job Trigger: The cron job runs at 15-minute intervals (configured in cron.js) to start the data fetch process.

  2. Fetch YouTube Data:

    • The only things that are hardcoded are the YouTube channel IDs.
    • Check for new playlists for each channel.
    • For each playlist, hit the YouTube API to get the latest videos.
    • The videos are filtered against the last fetch time so that only new content is included.
  3. Webhook to Rock RMS:

    • Once the videos are fetched, each video is sent via a webhook to Rock RMS.
    • The Rock RMS webhook (rockRmsWebhookContentChannelUrl) processes the incoming data and creates content assets in dynamic playlists.
  4. Save Data to Local File:

    • The video data is stored locally in JSON files (using the saveDataToFile.js utility) for future reference.
  5. Logging:

    • After a successful fetch, the timestamp is written to the logOfLastFetch.json file so the next run knows which videos are new.
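
Step 1 in cron.js might look something like the sketch below. This assumes the node-cron package; fetchAllChannels is a hypothetical stand-in for the project's actual fetch entry point.

```js
// Minimal sketch of the 15-minute trigger, assuming node-cron.
const cron = require('node-cron');
// Assumed module/function name; the real entry point may differ.
const { fetchAllChannels } = require('./fetchYouTubeData');

// '*/15 * * * *' fires at minutes 0, 15, 30, and 45 of every hour.
cron.schedule('*/15 * * * *', async () => {
  try {
    await fetchAllChannels();
  } catch (err) {
    console.error('Scheduled fetch failed:', err);
  }
});
```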
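
Step 2's fetch-and-filter could be sketched as follows, assuming the YouTube Data API v3 and Node 18+'s built-in fetch. CHANNEL_IDS and YT_API_KEY are illustrative names, not the project's actual identifiers.

```js
const YT_API_KEY = process.env.YT_API_KEY;
// Per the walkthrough, the channel IDs are the only hardcoded values.
const CHANNEL_IDS = ['UCxxxxxxxxxxxxxxxxxxxxxx']; // placeholder ID

// List a channel's playlists via the YouTube Data API v3.
async function getPlaylists(channelId) {
  const url = `https://www.googleapis.com/youtube/v3/playlists` +
    `?part=snippet&maxResults=50&channelId=${channelId}&key=${YT_API_KEY}`;
  const res = await fetch(url);
  const data = await res.json();
  return data.items ?? [];
}

// Fetch a playlist's videos and keep only those published
// since the last successful fetch.
async function getNewVideos(playlistId, lastFetchTime) {
  const url = `https://www.googleapis.com/youtube/v3/playlistItems` +
    `?part=snippet&maxResults=50&playlistId=${playlistId}&key=${YT_API_KEY}`;
  const res = await fetch(url);
  const data = await res.json();
  return (data.items ?? []).filter(
    (item) => new Date(item.snippet.publishedAt) > lastFetchTime
  );
}
```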
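
Step 3's webhook call might look like this. The URL comes from config as rockRmsWebhookContentChannelUrl (named in the walkthrough); the payload shape here is an assumption, not the actual contract with Rock RMS.

```js
// POST one video to the Rock RMS webhook.
async function sendToRock(video, rockRmsWebhookContentChannelUrl) {
  const res = await fetch(rockRmsWebhookContentChannelUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      // playlistItems nest the video ID under snippet.resourceId.
      videoId: video.snippet.resourceId?.videoId,
      title: video.snippet.title,
      publishedAt: video.snippet.publishedAt,
    }),
  });
  if (!res.ok) throw new Error(`Rock webhook returned ${res.status}`);
}
```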
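
Steps 4 and 5 boil down to writing JSON to disk. The walkthrough references a saveDataToFile.js utility and a logOfLastFetch.json file; this sketch is a guess at their shape, not the actual implementation.

```js
const fs = require('fs');
const path = require('path');

// Persist fetched data as pretty-printed JSON under ./data.
function saveDataToFile(filename, data) {
  const dir = path.join(__dirname, 'data');
  fs.mkdirSync(dir, { recursive: true }); // ensure the folder exists
  fs.writeFileSync(path.join(dir, filename), JSON.stringify(data, null, 2));
}

// Record the fetch time; the next run compares publishedAt against it.
function logLastFetch() {
  saveDataToFile('logOfLastFetch.json', {
    lastFetch: new Date().toISOString(),
  });
}
```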

The entire process ensures that the content is efficiently fetched, processed, and displayed on the platform without manual intervention.

Note: It took about 15 minutes to push every video from every playlist from every channel for the last 20 years into Rock. The Rock workflows were much slower, but going straight from the API to the webhook is F A S T !

Explore and learn. Released under the MIT License.