Crawlers Jobs

71 jobs were found based on your criteria

Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, Less than 10 hrs/week - Posted
Some key information:
- The system holds 500+ sources that need to be crawled constantly
- Deployed on Ubuntu 16.04

Requirements:
- Deep understanding of Crawl Anywhere (Crawler, Pipeline, Indexer scripts) and its possible configurations
- Experience using Solr
- Confident working with the Ubuntu 16.04 OS
- Finding performance bottlenecks and possible solutions (both OS and Crawl Anywhere)
- Confident with SSH
Skills: Web Crawling Apache Solr Laravel Framework PHP
Hourly - Intermediate ($$) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
Hello - I'm looking to work with a researcher/job recruiter who can help me find remote jobs within the field of media. I'm a media producer specializing in web content; editing for text, video, and graphics; news items; and more.

This is a two-part job:
1) Research and collect job postings for remote or work-from-home jobs within this field. Work from home ONLY.
2) Help me submit my cover letter and resume to each job by tailoring them to each specific job before we submit.

You will be paid for each round of items 1 and 2 as we go. Please send a description of how you will approach this job and why you feel you are a qualified candidate.

MUST BE: Skilled in deep web research and in finding these specific types of jobs only. No on-site jobs; it must be 100% remote. Add the word "REMOTE" at the top of your response so that I know you are not spamming. Thanks!
Skills: Web Crawler Human Resource Information Systems Human Resource Management Internet research
Fixed-Price - Entry Level ($) - Est. Budget: $999,999 - Posted
For a new business opportunity I am looking for someone who can set up a crawler that automatically searches for jobs in the Netherlands using a couple of parameters (location, job title, etc.). The crawler must search multiple websites. If you have a better idea, please contact me so we can discuss it.
Skills: Web Crawling Web Crawler
Fixed-Price - Intermediate ($$) - Est. Budget: $100 - Posted
I used to subscribe to and was able to download its complete list of IT decision makers fairly easily, up to a maximum number of records at a time. I suspect that with a web crawler it can be downloaded more quickly. If you have access to the and/or database, I would need all available fields, including first name, last name, address, city, state, zip, country (for data outside the USA), phone #, fax # if available, any category fields, and, most importantly, the email address at the individual name level.
Skills: Web Crawling Data Entry Data mining Internet research
Fixed-Price - Intermediate ($$) - Est. Budget: $100 - Posted
I am interested in a list of people involved in commercial real estate in the following NJ (New Jersey) USA counties on the attachment, from or a similar service: Bergen, Passaic (excluding West Milford and Ringwood), Morris, Hudson, Essex, Union, and Middlesex. The towns with their zip codes are shown below. You would need to have access to or a similar service.

I would like all available fields separately, such as first name, last name, job title, address, city, state, zip code, phone number, fax number, email address for the individual (with as few generic emails such as as possible), and type of company, such as Brokers and Brokerage Firms. I would like the data on individuals under Brokers and Brokerage Firms, Owners and Investors, Multifamily Owners and Property Managers, and Retailers and Corporations.

Please let me know what source(s) you will be using for this data, and a cost per thousand records provided, or for the complete job if you can bring in all the data from Costar. Please dedupe if needed so we don't see the same individual contact names multiple times. Please provide the final file in XLS or CSV format. I'd also like the time needed to complete the project. Pricing shown is just a placeholder.
Skills: Web Crawling Data Entry Data mining Email Marketing
Fixed-Price - Intermediate ($$) - Est. Budget: $250 - Posted
You will be looking up a catalog number on 3-4 websites and recording whether an item is exactly the same. If an item is exactly the same, record its catalog number in the Excel spreadsheet. We also need the manufacturer name to be recorded. All fields can be copied and pasted, so you don't need to type a large amount of information.

Step-by-step instructions:
1. Take the Manuf Part No and search for it on the three websites: Spectrum Chemicals, VWR, and Fisher (the websites are given in the Excel attachment).
2. Collect the catalog number for that product on each of the websites and put it in the spreadsheet.
3. If the search results do not match the product, put N/A in that field.

You need to look at the item description and ensure that it is the same item. A number of manufacturer parts return multiple items or a different product, so you need to be careful. Complete the missing data in the attached Excel spreadsheet and attach it with your application as a sample. Also mention how many you can do per day.
Skills: Web Crawler Data Entry Internet research
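The lookup procedure in the posting above can be sketched in a few lines of Python. This is only an illustration of the record-keeping logic; the per-site lookup callables are hypothetical stand-ins for real searches against each supplier's site.

```python
def cross_reference(part_no, site_lookups):
    """Return {site: catalog number or 'N/A'} for one manufacturer part number.

    site_lookups maps a site name to a callable that returns the matching
    catalog number, or None when the search result is not the same item.
    """
    row = {"Manuf Part No": part_no}
    for site, lookup in site_lookups.items():
        match = lookup(part_no)
        row[site] = match if match else "N/A"
    return row

# Demo with stub lookups standing in for the real supplier searches.
stubs = {
    "Spectrum": lambda p: "S-1001" if p == "ABC-123" else None,
    "VWR": lambda p: None,  # no exact match on this site
    "Fisher": lambda p: "F-77" if p == "ABC-123" else None,
}
row = cross_reference("ABC-123", stubs)
# row == {"Manuf Part No": "ABC-123", "Spectrum": "S-1001", "VWR": "N/A", "Fisher": "F-77"}
```

Each returned row maps directly onto one spreadsheet line, with "N/A" filled in exactly as step 3 of the instructions requires.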
Hourly - Expert ($$$) - Est. Time: Less than 1 month, 10-30 hrs/week - Posted
We need to:
1. Scrape and download images locally.
2. Scrape and save image data (like alt text and page title) in a CSV file.

We have a list of 30 sites so far, with more to come. The sites are quite big, with lots of images, but their structure is very simple and 99% don't have any anti-scrape systems. We are looking for someone who can take care of them quickly and easily; we can pay a low amount for each site, given that this is bulk work and the sites are easy.
Skills: Web Crawling Data scraping Python Scrapy
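The metadata half of this task (item 2) can be sketched with only the Python standard library; a real bulk job would more likely use Scrapy, as the listed skills suggest. The sketch below parses one page's HTML, collects each image's src and alt text together with the page title, and appends one CSV row per image.

```python
import csv
import io
from html.parser import HTMLParser

class ImageMetaParser(HTMLParser):
    """Collects the page <title> and the src/alt of every <img> tag."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self._in_title = False
        self.images = []  # list of (src, alt) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "img":
            a = dict(attrs)
            self.images.append((a.get("src", ""), a.get("alt", "")))

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def image_rows_to_csv(html, out):
    """Write one (page title, src, alt) CSV row per image; return the count."""
    parser = ImageMetaParser()
    parser.feed(html)
    writer = csv.writer(out)
    for src, alt in parser.images:
        writer.writerow([parser.title, src, alt])
    return len(parser.images)

# Demo on a tiny hypothetical page.
page = ("<html><head><title>Catalog</title></head>"
        "<body><img src='a.jpg' alt='widget'><img src='b.jpg'></body></html>")
buf = io.StringIO()
count = image_rows_to_csv(page, buf)
```

Downloading the images themselves (item 1) would be a separate fetch per collected src URL, which this sketch deliberately leaves out.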