
@Endogen
Created August 24, 2017 21:53
Scrape Hacker News stories and read them aloud
import subprocess
import time

import requests
# Install with "pip3.6 install beautifulsoup4"
from bs4 import BeautifulSoup

response = requests.get("https://news.ycombinator.com")

if response.status_code != 200:
    exit("Status code " + str(response.status_code) + " - exiting")

soup = BeautifulSoup(response.content, "html.parser")

for story in soup.find_all(class_="storylink"):
    title = story.get_text()
    link = story["href"]
    print(title + "\n" + link + "\n")

    # "say" is the macOS text-to-speech command; "Karen" is one of its built-in voices
    say_story = subprocess.Popen(["say", "-v", "Karen", title], stdout=subprocess.PIPE)
    say_story.communicate()
    time.sleep(2)
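The script above depends on Hacker News's HTML markup (the `storylink` class), which can change at any time. A more robust alternative is the official Hacker News Firebase API (`https://hacker-news.firebaseio.com/v0/`), which returns JSON and needs no HTML parsing. A minimal sketch, assuming network access and `requests`; the fallback URL built for link-less "Ask HN" items is an assumption based on the standard HN item page format:

```python
import requests

HN_API = "https://hacker-news.firebaseio.com/v0"

def top_story_ids(limit=5):
    """Fetch the ids of the current top stories from the official HN API."""
    ids = requests.get(HN_API + "/topstories.json", timeout=10).json()
    return ids[:limit]

def story_summary(item):
    """Return (title, url) from an HN item dict.

    'url' is absent for Ask HN posts, so fall back to the item's
    discussion page on news.ycombinator.com (assumed URL format).
    """
    fallback = "https://news.ycombinator.com/item?id=%d" % item["id"]
    return item.get("title", ""), item.get("url", fallback)

if __name__ == "__main__":
    # Network demo: print the top stories; skip silently if offline.
    try:
        for story_id in top_story_ids():
            item = requests.get("%s/item/%d.json" % (HN_API, story_id), timeout=10).json()
            title, link = story_summary(item)
            print(title + "\n" + link + "\n")
    except requests.RequestException:
        pass
```

The printed `title` could then be passed to the same `say` invocation as above to read each story aloud.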