All posts
-
Scrap WeWorkRemotely (Project using python/Jobs scrapper) 2020. 12. 21. 11:51
scrapperWWR.py

import requests
from bs4 import BeautifulSoup

def extract_job(html):
    # title, company, location, link
    job_info_link = html.find_all('a')
    if len(job_info_link) > 1:
        job_info_link = job_info_link[1]
    else:
        job_info_link = job_info_link[0]
    link = f"https://weworkremotely.com/{job_info_link['href']}"
    job_info = job_info_link.find_all("span")
    company = job_info[0].get_text()
    title = job..
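The preview above cuts off mid-function. A minimal, runnable sketch of the same extraction logic, run against a small WWR-style snippet (the sample HTML, its span order, and the href value are assumptions for illustration, not taken from the post):

```python
from bs4 import BeautifulSoup

# Hypothetical WWR-style listing markup: the second <a> holds the job
# link and its details as <span> children. Invented for illustration.
SAMPLE = """
<li>
  <a href="/company/x"></a>
  <a href="/remote-jobs/123">
    <span>ACME</span>
    <span>Python Developer</span>
    <span>Anywhere</span>
  </a>
</li>
"""

def extract_job(html):
    # Prefer the second <a> when present; fall back to the first
    links = html.find_all("a")
    link_tag = links[1] if len(links) > 1 else links[0]
    link = f"https://weworkremotely.com{link_tag['href']}"
    spans = link_tag.find_all("span")
    return {
        "company": spans[0].get_text(),
        "title": spans[1].get_text(),
        "location": spans[2].get_text(),
        "link": link,
    }

job = extract_job(BeautifulSoup(SAMPLE, "html.parser").find("li"))
```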
-
Extract jobs from Stack Overflow (Project using python/Jobs scrapper) 2020. 12. 21. 11:49
scrapperSO.py

import requests
from bs4 import BeautifulSoup

def extract_job(html):
    # title, company, location, link
    title_link = html.find('h2').find('a', {'class': 's-link'})
    title = title_link['title']
    link = f"https://stackoverflow.com/jobs/{title_link['href']}"
    company, location = html.find('h3', {"class": "fs-body1"}).find_all('span', recursive=False)
    return {
        "title": title,
        "link": link,
        "..
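This preview is also truncated. A self-contained sketch of the same parsing steps against a small Stack Overflow-style job card (the class names follow the excerpt, but the sample HTML itself is invented):

```python
from bs4 import BeautifulSoup

# Hypothetical job-card markup for illustration only
SAMPLE = """
<div>
  <h2><a class="s-link" href="456" title="Backend Engineer">Backend Engineer</a></h2>
  <h3 class="fs-body1"><span>ACME</span><span>Berlin</span></h3>
</div>
"""

def extract_job(html):
    title_link = html.find("h2").find("a", {"class": "s-link"})
    title = title_link["title"]
    link = f"https://stackoverflow.com/jobs/{title_link['href']}"
    # recursive=False keeps only direct <span> children of the <h3>,
    # so the two-way tuple unpacking works
    company, location = html.find("h3", {"class": "fs-body1"}).find_all(
        "span", recursive=False
    )
    return {
        "title": title,
        "link": link,
        "company": company.get_text(strip=True),
        "location": location.get_text(strip=True),
    }

job = extract_job(BeautifulSoup(SAMPLE, "html.parser").find("div"))
```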
-
get_last_page of Stack Overflow (Project using python/Jobs scrapper) 2020. 12. 21. 10:51
scrapperSO.py

import requests
from bs4 import BeautifulSoup

def get_last_page(url):
    result = requests.get(url)
    soup = BeautifulSoup(result.text, "html.parser")
    pages = soup.find("div", {"class": "s-pagination"}).find_all('a')
    last_page = pages[-2].get_text(strip=True)
    return int(last_page)

def get_SOJobs(word):
    url = f"https://stackoverflow.com/jobs?q={word}"
    last_page = get_last_page(url)
    print..
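The pages[-2] trick relies on the last pagination link being a "next" button, so the second-to-last link carries the highest page number. A sketch that exercises that logic offline, with an invented pagination snippet:

```python
from bs4 import BeautifulSoup

# Invented pagination markup: numbered page links followed by a
# "next" link, mirroring the structure the scraper assumes
PAGINATION = """
<div class="s-pagination">
  <a>1</a> <a>2</a> <a>3</a> <a>next</a>
</div>
"""

def get_last_page(soup):
    pages = soup.find("div", {"class": "s-pagination"}).find_all("a")
    # pages[-1] is "next", so pages[-2] is the last page number
    return int(pages[-2].get_text(strip=True))

last_page = get_last_page(BeautifulSoup(PAGINATION, "html.parser"))
```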
-
Python BeautifulSoup (Project using python/Jobs scrapper) 2020. 12. 21. 10:27
Definition
Beautiful Soup is a Python library for extracting data from HTML and XML files.

Installation
pipenv install beautifulsoup4

Usage
import requests
from bs4 import BeautifulSoup

result = requests.get(url)
soup = BeautifulSoup(result.text, "html.parser")

Method
find
pages_container = soup.find("div", {"class": "s-pagination"})
From the HTML parsed by BeautifulSoup, this extracts the element whose tag is "div" and whose class is "s-pagination". The matching ..
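A minimal, self-contained illustration of find versus find_all on an inline document (the sample HTML is invented for this demo):

```python
from bs4 import BeautifulSoup

# Small invented document to parse
HTML = """
<div class="s-pagination">
  <a class="page">1</a>
  <a class="page">2</a>
</div>
"""

soup = BeautifulSoup(HTML, "html.parser")

# find returns only the FIRST matching tag, or None if nothing matches
container = soup.find("div", {"class": "s-pagination"})

# find_all returns a list of every match inside the tag
links = container.find_all("a")
texts = [a.get_text() for a in links]
```

Because find returns None on a miss, checking the result before chaining further calls avoids AttributeError on pages with unexpected markup.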
-
Python requests (Project using python/Jobs scrapper) 2020. 12. 21. 10:00
Definition
An HTTP library for Python.

Installation
pipenv install requests

Methods
requests.get
import requests

url = 'https://api.github.com/some/endpoint'
r = requests.get(url)

payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.get('https://httpbin.org/get', params=payload)

headers = {'user-agent': 'my-app/0.0.1'}
r = requests.get(url, headers=headers)

requests.get() fetches the page at the URL it is given ..
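How params end up in the final URL can be seen without touching the network, by preparing the request instead of sending it:

```python
import requests

payload = {"key1": "value1", "key2": "value2"}

# Build and prepare (but do not send) the request, then inspect
# the URL that requests would actually fetch
prepared = requests.Request(
    "GET", "https://httpbin.org/get", params=payload
).prepare()

print(prepared.url)
```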
-
Game count down clock (Project using node.js/Cloning Catch-Mind) 2020. 12. 18. 16:58
Notify the client that the game has started

// player.js
const timeOut = document.getElementById("jsTimeOut");
let count = null;
let counter = null;

const setTimeOut = () => {
  count--;
  if (count ..
};

export const handleGameStarted = () => {
  clearTimeout(counter);
  count = 30;
  counter = setInterval(setTimeOut, 1000);
};

export const handleGameEnded = () => {
  clearTimeout(counter);
};

The painter draws within 30 seconds, and the participants must guess the painter's drawing. The count down is shown at the top right of the board ..
-
Add points (Project using node.js/Cloning Catch-Mind) 2020. 12. 18. 16:51
Awarding points

// socketController.js
export const socketController = (socket, io) => {
  const addPoints = (id) => {
    sockets = sockets.map((socket) => {
      if (socket.id === id) socket.points += 10;
      return socket;
    });
    sendPlayerUpdate();
    endGame();
  };
  socket.on(events.sendMsg, ({ message, username }) => {
    if (word === message) {
      addPoints(socket.id);
      superBroadcast(events.newMsg, { message: `Winner is ${sock..