YouTube Playlist Scheduled Data Fetcher => MongoDB Storage => GET APIs
This Solution
Hit the Google/YouTube Data API for a playlist once a week (or more often), check for redundancies, and store all the data in a separate database (MongoDB).
The datastore is organized so that each playlist gets a collection of the same name.
Then serve endpoints for each playlist/collection (Node/Express) from a virtual machine (a DigitalOcean droplet).
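The "check for redundancies" step could be a simple filter on video IDs already stored. A minimal sketch (the helper name and item shape are assumptions; `existingIds` would come from a query such as `collection.distinct('videoId')`):

```javascript
// Keep only fetched playlist items whose videoId is not already stored.
function filterNewItems(fetchedItems, existingIds) {
  const seen = new Set(existingIds);
  return fetchedItems.filter((item) => !seen.has(item.videoId));
}
```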
Overview
This section is about an Express.js web application that fetches YouTube playlist data using the YouTube API, saves it to MongoDB, and provides an API endpoint to fetch the playlist data from our frontend website. It includes rate limiting and scheduling for automated updates. The API is served via a DigitalOcean Droplet, ensuring remote access and scalability.
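The fetch step talks to the YouTube Data API v3 `playlistItems` endpoint. A minimal sketch of building that request URL (the function name is an assumption; the endpoint and parameters follow Google's public API docs):

```javascript
// Build a YouTube Data API v3 playlistItems request URL.
// maxResults caps at 50 per page; pass pageToken to fetch subsequent pages.
function playlistItemsUrl(playlistId, apiKey, pageToken = '') {
  const params = new URLSearchParams({
    part: 'snippet,contentDetails',
    maxResults: '50',
    playlistId,
    key: apiKey,
  });
  if (pageToken) params.set('pageToken', pageToken);
  return `https://www.googleapis.com/youtube/v3/playlistItems?${params}`;
}
```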
End Result
You can access these endpoints with the web API `fetch()`. Easy, right?! Well, if your Lava/Vue code is using content channels to display data to the page, you will need to rewrite that code. Or try a different solution, one that doesn't use a separate data store. Try the webhook → Azure → Content Channel approach?!
Elements
Installation Instructions:
If you are interested in running the program locally, start here: [GitHub](https://github.com/chaselikethebank/backend-server-chron-yt-data-fetch)

Config and Service:
- DigitalOcean account
- MongoDB account
- SSL certs

Dependencies:
```json
{
  "dependencies": {
    "cors": "^2.8.5",
    "dotenv": "^16.4.5",
    "express": "^4.19.2",
    "express-rate-limit": "^7.2.0",
    "mongodb": "^6.5.0",
    "mongoose": "^8.2.4",
    "node-schedule": "^2.1.1"
  }
}
```
- Configuration: create a `.env` file

```
# Server configuration
PORT=3000
# Data store
MONGODB_URI=<your MongoDB connection URI>
# Google dev API
YOUTUBE_API_KEY=<your YouTube Data API key>
```
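Once the `.env` file is in place, the app would load it via `dotenv` and validate the keys. A hedged sketch (the helper name and defaults are assumptions; in the real app you would pass `process.env` after calling `require('dotenv').config()`):

```javascript
// Validate required environment variables and apply defaults.
function loadConfig(env) {
  const required = ['MONGODB_URI', 'YOUTUBE_API_KEY'];
  const missing = required.filter((key) => !env[key]);
  if (missing.length) {
    throw new Error(`Missing env vars: ${missing.join(', ')}`);
  }
  return {
    port: Number(env.PORT) || 3000,
    mongoUri: env.MONGODB_URI,
    apiKey: env.YOUTUBE_API_KEY,
  };
}
```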
- Endpoints:
  Hardcode your endpoints; some of mine are `/harvest`, `/loft`, etc.
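Each hardcoded endpoint just reads its playlist's collection and returns the documents as JSON. A sketch of a handler factory (the function names and `getCollection` accessor are assumptions; wiring would look like `app.get('/harvest', makePlaylistHandler('harvest', getCollection))`):

```javascript
// Returns an Express-style handler that serves one playlist collection.
// getCollection(name) is an assumed accessor returning a MongoDB-style
// collection object for that playlist.
function makePlaylistHandler(playlistName, getCollection) {
  return async (req, res) => {
    try {
      const items = await getCollection(playlistName).find({}).toArray();
      res.json(items);
    } catch (err) {
      res.status(500).json({ error: `Could not load ${playlistName}` });
    }
  };
}
```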
- Testing:
  Spin up a local server; you should get a console log of the port it is listening on.
  Set the cron term to run every minute.
  After a minute, you should see a console message confirming the playlist data was added (unless redundancies were found).
  curl the endpoint at that location to get a dump of the data.
  Then do the same with a VM.
  Then reset your cron term to whatever makes the most sense for your system.
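For reference, the two cron terms mentioned above might look like this, assuming `node-schedule`'s cron-style strings (the constant names and the weekly time are illustrative choices, not from the repo):

```javascript
// Cron-style strings accepted by node-schedule's scheduleJob().
const EVERY_MINUTE = '* * * * *'; // for local testing
const WEEKLY = '0 3 * * 0';       // Sundays at 03:00, e.g. for production
```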
Contributing:
License:
MIT
- Contact Information: Provide contact information for maintainers or contributors in case users have questions or need support.
Question
What is the current state of things?!
What is a better way to propose the next steps?!
What expectations are reasonable?!
Is there a better way to kick this thing off?!