Web Crawler Jobs

64 jobs were found based on your criteria

Fixed-Price - Intermediate ($$) - Est. Budget: $100 - Posted
I am looking for someone to build a web scraper that can go to 1) Amazon.com, 2) Target.com, and 3) Walmart.com and search each site for a UPC number. If the product is found on the site, the scraper will capture: 1) product name, 2) product description, 3) product image (or a link to the image), 4) product price, and 5) product weight (if available). The output should be in Microsoft Excel. We will run this scraper multiple times a week for hundreds of products. I know this scraper isn't too difficult, as I've built a test one on my own, but I would like someone experienced to bring all three website pulls together. I am open to suggestions on how to build it; if you have recommendations for another way to achieve the same result of pulling product data by UPC, I'm happy to hear them.
Skills: Web Crawler, Web Scraping
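The posting above can be sketched in a few lines of Python. The search URL patterns below are assumptions based on each site's public search pages, and the per-site result parsing (the hard part, which varies by site and changes over time) is deliberately left out; the sketch only shows the record shape and the Excel-readable CSV output the client asks for. `openpyxl` could write a native `.xlsx` file instead.

```python
# Minimal sketch of the UPC-lookup scraper described above.
# SEARCH_URLS patterns are assumptions; per-site HTML parsing is omitted.
from dataclasses import dataclass, asdict
import csv

@dataclass
class Product:
    """One row of the requested Excel output."""
    upc: str
    name: str
    description: str
    image_url: str
    price: str
    weight: str = ""  # "if available" per the posting

SEARCH_URLS = {
    "amazon": "https://www.amazon.com/s?k={upc}",
    "target": "https://www.target.com/s?searchTerm={upc}",
    "walmart": "https://www.walmart.com/search?q={upc}",
}

def search_url(site: str, upc: str) -> str:
    """Build the search URL for one site (patterns are assumptions)."""
    return SEARCH_URLS[site].format(upc=upc)

def write_excel_csv(products, path: str) -> None:
    """Write records as CSV, which Excel opens directly."""
    fields = list(Product.__dataclass_fields__)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for p in products:
            writer.writerow(asdict(p))
```

In a real build, a per-site `parse_result(html) -> Product | None` function would sit between `search_url` and `write_excel_csv`, and the hundreds-of-UPCs weekly runs would need rate limiting and retry handling.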
Fixed-Price - Intermediate ($$) - Est. Budget: $100 - Posted
I need someone to build an Excel VBA-based web crawler to extract large amounts of data from websites. Someone who has more than 3 years of work experience creating web crawlers/scripts to extract data from big websites and solving any security issues. Please see the attached file for the format needed. Sites: aliexpress.com, ioffer.com, etc.
Skills: Web Crawler, Data Mining, Data Science, Data Scraping
Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, Less than 10 hrs/week - Posted
We're looking for an expert who has used the crawling software 'Crawl Anywhere 4' thoroughly in the past. More specifically, a deep understanding of the inner workings and the transport of data in the system is required. Some key information:
- The system holds 500+ sources that need to be crawled constantly
- Deployed on Ubuntu 16.04
Requirements:
- Deep understanding of Crawl Anywhere (Crawler, Pipeline, Indexer scripts) and its possible configurations
- Experience using Solr
- Confidence working with the Ubuntu (16.04) OS
- Finding performance bottlenecks and possible solutions (both OS and Crawl Anywhere)
- Confidence with SSH
Skills: Web Crawler, Apache Solr, Web Crawling, Laravel Framework
Hourly - Intermediate ($$) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
Hello - I'm looking to work with a researcher/job recruiter who can help me find work-remote jobs within the field of media. I'm a media producer, specializing in web content, editing for text, video and graphics, news items, and more. This is a two-part job: 1) Research and collect job postings for work-remote or work-from-home jobs within this field. Work from home ONLY. 2) Help me submit my cover letter and resume to each job by tailoring my resume and cover letter to each specific job before we submit them. You will be paid for each round of items 1 and 2 as we go. Please send a description of how you will approach this job and why you feel you are a qualified candidate. MUST BE: skilled in doing deep web research and finding these specific types of jobs only. No on-site jobs; it must be 100% work remote. Add the word "REMOTE" at the top of your response so that I know you are not spamming. Thanks!
Skills: Web Crawler, Human Resource Information Systems, Human Resource Management, Internet Research
Fixed-Price - Entry Level ($) - Est. Budget: $999,999 - Posted
For a new business opportunity, I am looking for someone who can set up a crawler that automatically searches for jobs in the Netherlands using a couple of parameters (location, job title, etc.). The crawler must search multiple websites. If you have a better idea, please contact me so we can discuss it.
Skills: Web Crawler, Web Crawling
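The parameterized job crawler requested above could start from something like the sketch below. The site list and query-parameter names are assumptions (Dutch job boards each have their own URL schemes); the sketch only shows how location and job-title parameters fan out into one search URL per site, which a fetch-and-parse loop would then consume.

```python
# Minimal sketch of a parameterized multi-site job search.
# JOB_SITES entries and their query parameters are assumptions.
from urllib.parse import urlencode

JOB_SITES = {
    "indeed_nl": "https://nl.indeed.com/jobs",
    "jobbird": "https://www.jobbird.com/nl/vacature",
}

def build_queries(job_title: str, location: str) -> dict:
    """Return one search URL per configured site for the given parameters."""
    params = urlencode({"q": job_title, "l": location})
    return {site: f"{base}?{params}" for site, base in JOB_SITES.items()}
```

A full crawler would fetch each URL on a schedule, parse the result pages per site, and de-duplicate postings seen across boards.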
Fixed-Price - Intermediate ($$) - Est. Budget: $100 - Posted
I used to subscribe to Rainking.com and was able to download its complete list of IT decision makers fairly easily, up to a maximum number of records at a time. I suspect that with a web crawler, it could be downloaded more quickly. If you have access to the Rainking.com and/or discover.org database, I would need all available fields, including first name, last name, address, city, state, zip, country (for data outside of the USA), phone #, fax # if available, any category fields, and, most importantly, email address at the individual name level. I would also be interested in the same type of data from discover.org and anything else similar where the focus is IT executives around the world. I'd like the data in CSV format. Please let me know what it would cost and how quickly I could get the file. The budget shown is a placeholder.
Skills: Web Crawler, Web Crawling, Data Entry, Data Mining