Scraper Spider

Based on a set of custom keywords, websites and APIs are scraped for relevant content every few hours, and the results are listed here. False positives are hidden from the listing but retained in the database as training data for a Naive Bayes text classification model. Summaries are generated with natural language processing. Redis (with RedisJSON) provides a handy (and wicked fast!) caching layer, so the scraper doesn't abuse the APIs and websites it relies on.
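The false-positive filtering described above could be sketched roughly like this: a multinomial Naive Bayes classifier with Laplace smoothing, trained on documents labeled "relevant" or "false_positive". This is a minimal from-scratch illustration of the technique, not the project's actual code; the class and label names are made up for the example.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Multinomial Naive Bayes over bag-of-words counts, with Laplace smoothing.
    A hypothetical sketch of how scraped items could be sorted into
    'relevant' vs. 'false_positive'."""

    def __init__(self):
        self.class_counts = Counter()            # documents seen per class
        self.word_counts = defaultdict(Counter)  # per-class word frequencies
        self.vocab = set()

    def train(self, text, label):
        self.class_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1
            self.vocab.add(word)

    def classify(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # log prior: how common this class is overall
            score = math.log(self.class_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in text.lower().split():
                # Laplace-smoothed log likelihood of each word given the class
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

In this scheme, every item a human marks as a false positive becomes another training document, so the filter improves as the database grows.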
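The caching layer likely follows the cache-aside pattern: check Redis for a fresh copy first, and only hit the upstream API on a miss, storing the result with a TTL. The sketch below is self-contained, so it uses an in-memory stand-in whose `setex`/`get` methods mirror the redis-py calls of the same names; the key names, TTL, and `fetch_with_cache` helper are assumptions for illustration.

```python
import json
import time

class TTLCache:
    """In-memory stand-in for a Redis cache: values expire after a TTL,
    so upstream APIs are only re-queried every few hours."""

    def __init__(self):
        self._store = {}  # key -> (expires_at, serialized value)

    def setex(self, key, ttl_seconds, value):
        # Mirrors redis-py's SETEX: store a value together with an expiry time.
        self._store[key] = (time.monotonic() + ttl_seconds, json.dumps(value))

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, payload = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return json.loads(payload)

def fetch_with_cache(cache, key, fetch_fn, ttl_seconds=4 * 3600):
    """Cache-aside: return the cached value if still fresh, otherwise call
    the (expensive) fetch function and cache its result for next time."""
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = fetch_fn()
    cache.setex(key, ttl_seconds, value)
    return value
```

With Redis itself, `TTLCache` would be replaced by a `redis.Redis` client (or RedisJSON's `JSON.SET`/`JSON.GET` for structured documents), but the control flow stays the same.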