Crawlers Jobs

70 jobs were found based on your criteria

Hourly - Expert ($$$) - Est. Time: Less than 1 month, 10-30 hrs/week - Posted
In summary: I want to be able to configure Scrapy for multiple locations via a simple website. I want Scrapy to grab a session token, spoof the IP, grab my data, and save the CSV to an S3 bucket. I want to be able to:
1) Log in to my own secure website hosted in AWS.
2) Display a simple 4-column form with column names (see attachment).
3) Set up new scrapes.
4) Refresh recurring scrapes.
Item 3 in detail, for setting up new scrapes: "Get New DataSource" launches a new tab or similar (e.g., a Chrome extension?) in which I log in to my new data source, navigate to the area I want to scrape, specify the table, and somehow specify "Get Data". It should be able to handle easier REST URL requests as well as more difficult ones with obscured header variables. While I'm open to variation, I'm envisioning something similar to the Pinterest Chrome extension, but for data tables within secure websites. Once the scrape configuration is saved, it starts.
Item 4 in detail, for the data "refresh": clicking "REFRESH" spawns a new tab in which the user only logs in. The session token is grabbed by the service, and all requested data is navigated to and pulled on the back end. Note: some IP spoofing on the login or on the back-end service will be required.
5) The back-end service should exist as AWS Lambda callable code. As such, variables should reside separately and load per request.
6) I anticipate using this with a Node.js service, so I'm looking for callable compliance (i.e., I know that Scrapy is natively Python).
7) Data should be saved consistently/statically to a dedicated S3 bucket (per logged-in user); an authenticated URL can be made available.
Finally, I'm okay with pulling in Scrapy and AWS libraries, but I do want to minimize code complexity beyond that and am looking for clean, well-documented, quick code. (A rough sketch of the crawl-and-upload back end follows the skills line below.)
Skills: Web Crawler Data scraping Scrapy
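Pulling the pieces of this request together: Scrapy's feed exports can write CSV straight to S3, and a Lambda-style handler can drive the crawl. Below is a minimal sketch under those assumptions; DATA_URL, SESSION_COOKIE, and BUCKET are placeholder names, and the session token is presumed to have been captured during the browser login step.

```python
# Minimal sketch, not a final design. Assumes AWS credentials are available
# through the usual botocore chain and that botocore is packaged with the
# Lambda (Scrapy's s3:// feed storage requires it).
import scrapy
from scrapy.crawler import CrawlerProcess

DATA_URL = "https://datasource.example.com/report"    # hypothetical target table
SESSION_COOKIE = {"session": "token-from-login-tab"}  # captured at user login
BUCKET = "my-scrapes-user123"                         # per-user S3 bucket


class TableSpider(scrapy.Spider):
    name = "table"

    def start_requests(self):
        # Reuse the captured session token instead of re-authenticating
        # on the back end.
        yield scrapy.Request(DATA_URL, cookies=SESSION_COOKIE)

    def parse(self, response):
        # Emit one item per table row; the feed export below writes the CSV.
        for row in response.css("table tr"):
            yield {"row": " | ".join(row.css("td::text").getall())}


def handler(event, context):
    """Lambda-style entry point: crawl, then let Scrapy's feed export
    push the CSV to the dedicated S3 bucket."""
    process = CrawlerProcess(settings={
        "FEEDS": {f"s3://{BUCKET}/latest.csv": {"format": "csv"}},
    })
    process.crawl(TableSpider)
    # Blocks until the crawl finishes. Twisted's reactor starts only once
    # per process, so each invocation realistically needs a fresh container
    # or subprocess.
    process.start()
```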
Fixed-Price - Intermediate ($$) - Est. Budget: $45 - Posted
I need an Excel list created of all the solo and two-attorney firms that are members of the Boston Bar (I currently use http://www.sljinc.org/atty_resources.php). By solo and two-attorney firms, I mean there are only 1 or 2 attorneys working at the office. I would like them in these categories: Name, Firm Name, Address, Email, Phone, and Website (if provided). If you can determine whether a firm is solo or two-attorney, that would be great, because that's who I'm targeting. Each category will need to be a separate column header on the spreadsheet, so that I can easily filter the data. (A rough sketch follows the skills line below.)
Skills: Web Crawler Data mining Data Recovery Data Science
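For a directory page like this, a short requests + BeautifulSoup script that writes a CSV (which opens directly in Excel) is usually enough. The selectors below are hypothetical, since the real markup of atty_resources.php would need to be inspected first:

```python
# Rough sketch only: the CSS selectors are guesses, and the output CSV has
# one column per requested category.
import csv

import requests
from bs4 import BeautifulSoup

URL = "http://www.sljinc.org/atty_resources.php"
FIELDS = ["Name", "Firm Name", "Address", "Email", "Phone", "Website"]


def text(node, selector):
    """Return the stripped text of the first match, or '' if absent."""
    found = node.select_one(selector)
    return found.get_text(strip=True) if found else ""


soup = BeautifulSoup(requests.get(URL, timeout=30).text, "html.parser")
with open("boston_bar_attorneys.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    for entry in soup.select(".attorney"):  # hypothetical listing class
        writer.writerow({
            "Name": text(entry, ".name"),   # hypothetical selectors
            "Firm Name": text(entry, ".firm"),
            "Address": text(entry, ".address"),
            "Email": text(entry, ".email"),
            "Phone": text(entry, ".phone"),
            "Website": text(entry, ".website"),
        })
```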
Fixed-Price - Expert ($$$) - Est. Budget: $200 - Posted
I need someone with expertise in researching popular websites for jewelry tools. You must be able to send me an Excel sheet with the most popular to least popular product web pages on Amazon in the Jewelry Making Tools & Accessories category. I do not need all the products in this category, just the top 100 best sellers on Amazon. In your proposal, please tell me how you plan on getting me the information and what parameters you will use. Thanks. (See the sketch after the skills line below.)
Skills: Web Crawler
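One plausible approach, sketched below: Amazon's Best Sellers lists show 50 items per page, so the top 100 is two pages. The category URL and CSS selector here are placeholders, and Amazon's markup and bot protections change often, so treat this as the shape of a solution rather than a working scraper.

```python
# Shape of a solution only: CATEGORY_URL and the selector are placeholders,
# and relative hrefs are assumed.
import csv

import requests
from bs4 import BeautifulSoup

CATEGORY_URL = "https://www.amazon.com/Best-Sellers/zgbs/CATEGORY_NODE"  # placeholder
HEADERS = {"User-Agent": "Mozilla/5.0"}  # Amazon tends to reject bare requests

rows = []
for page in (1, 2):  # 50 items per page; two pages = top 100
    html = requests.get(f"{CATEGORY_URL}?pg={page}", headers=HEADERS, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.select("div.zg-grid-general-faceout a"):  # hypothetical selector
        href = link.get("href")
        if href:
            rows.append({"Rank": len(rows) + 1, "URL": "https://www.amazon.com" + href})

with open("top100_jewelry_tools.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Rank", "URL"])
    writer.writeheader()
    writer.writerows(rows)
```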
Hourly - Intermediate ($$) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
I am developing an event sharing application that I plan to release soon in the App Store. This event sharing application will recommend weekly local events to the user based on their location and preferences. The app is tailored to African-American culture. Currently, it supports the following types of events: HipHop, Historically Black Colleges, Food, Fitness, Fashion, and Festivals. Only two cities will be supported at first: Atlanta, Georgia and Charlotte, NC, both in the United States. Now that you have background on the app, I am looking for quality structured and unstructured data to feed it. I am currently using Firebase and MongoDB to store data, so I will need the data that is scraped and/or crawled from the web to be extracted and stored in JSON format so that it is easily portable to Mongo or Firebase. Ideal places to look for data are Eventbrite, Facebook, Twitter, Instagram, etc. If this works out like I hope it will, I will need data on a weekly basis. I am not a data scientist, nor do I play one on television, so if you have any better suggestions or advice, I am all ears. (A sketch of the JSON side follows the skills line below.) Thanks, Maurice
Skills: Web Crawling Data mining Web Crawler Web scraping
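Whatever crawler feeds it, the storage side can stay simple: normalize each scraped event into a flat JSON document and write JSON Lines, which mongoimport consumes directly and which batches easily into Firebase. The schema below is an assumption for illustration, not a spec from the post:

```python
# Minimal sketch of the storage side; the field names are assumptions.
import json
from datetime import datetime, timezone

CATEGORIES = {"HipHop", "Historically Black Colleges", "Food",
              "Fitness", "Fashion", "Festivals"}
CITIES = {"Atlanta, GA", "Charlotte, NC"}


def normalize(raw):
    """Map a scraped record onto the app's schema, rejecting anything
    outside the supported categories and cities."""
    if raw["category"] not in CATEGORIES or raw["city"] not in CITIES:
        raise ValueError("unsupported category or city")
    return {
        "title": raw["title"].strip(),
        "category": raw["category"],
        "city": raw["city"],
        "starts_at": raw["starts_at"],  # ISO-8601 string from the source
        "source_url": raw.get("url", ""),
        "scraped_at": datetime.now(timezone.utc).isoformat(),
    }


# Example record standing in for real crawler output.
scraped = [{"title": "Food Truck Friday", "category": "Food",
            "city": "Atlanta, GA", "starts_at": "2016-06-03T18:00:00-04:00"}]

# One JSON object per line (JSON Lines) is what `mongoimport` consumes,
# and it batch-writes into Firebase just as easily.
with open("events.jsonl", "w") as f:
    for raw in scraped:
        f.write(json.dumps(normalize(raw)) + "\n")
```

Run weekly, the same normalize step doubles as a filter, dropping events outside the supported categories and cities before they reach the database.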
Fixed-Price - Intermediate ($$) - Est. Budget: $5 - Posted
Hello, I want to teach some English, and I need someone who can find me students to have one-on-one classes with. I would like to teach conversational English; I am a native English-speaking, certified teacher. I charge a one-time monthly fee, and this allows my students unlimited access to me to practice their English; they will need to book the time slots, but I am always free. If you can help me find a student to teach, then you have the job. As I offer professional services, my fee is pretty high, but it gets you two full months of unlimited practice, all you need to improve your English. I also offer homework and lesson content with homework review. I do not teach for TOEFL testing, but that is later down the line. You will be paid via Upwork, of course, a good portion per successful referral.
Skills: Web Crawler Ad Posting Advertising Classifieds Posting
Hourly - Entry Level ($) - Est. Time: More than 6 months, 10-30 hrs/week - Posted
I need occasional scraping work done, so I am looking for a contractor who really knows the ins and outs of scraping and can assist me long term. We primarily use import.io, so expertise in this tool is necessary. Expertise in other similar tools would also be useful. Work will often be done via Skype and would involve conferencing, screen sharing, etc. Ability to communicate in English is an absolute necessity.
Skills: Web Crawler Data mining Data scraping Web scraping
Hourly - Intermediate ($$) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
The right developer should have the following expertise:
- Experience with backend development
- Knowledge of one of the following languages is a big plus (in that order): Java / Go / Node.js
- Experience in creating web crawlers, parsing websites, and extracting relevant data
- Experience working with a modern NoSQL DB (Mongo or similar)
- Knows how to plan scalable systems, from design to coding
- Experience working with AWS is a very big advantage
(A generic crawl/parse/store sketch follows the skills line below.)
Skills: Web Crawling
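For illustration, here is a generic crawl / parse / store loop of the kind the post describes, written in Python for brevity even though the post prefers Java, Go, or Node.js; the seed URL and Mongo connection string are placeholders:

```python
# Generic illustration only; SEED and the Mongo URI are placeholders.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup
from pymongo import MongoClient

SEED = "https://example.com/"  # placeholder seed
pages = MongoClient("mongodb://localhost:27017").crawler.pages

seen, frontier = set(), [SEED]
while frontier and len(seen) < 50:  # small, bounded crawl
    url = frontier.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        resp = requests.get(url, timeout=15)
    except requests.RequestException:
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    # Extract the "relevant data" (here, just the title) and persist it.
    pages.insert_one({"url": url,
                      "title": soup.title.string if soup.title else ""})
    # Expand the frontier with links found on this page.
    frontier.extend(urljoin(url, a["href"]) for a in soup.select("a[href]"))
```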