
Web Scraping Jobs

237 jobs were found based on your criteria.

Fixed-Price - Expert ($$$) - Est. Budget: $2,000 - Posted
Hi there, I am looking to develop a crawler that will search online for the email addresses of professionals in certain industries. For example, I am looking to create a database of New York City accountants, and I need all of their email addresses. There are about 30,000 accountants in New York, and putting this list together manually would be very time-consuming and very expensive. I need an online robot to do it for me. All I need is their email addresses collected, and once the system is done collecting them, I will need them exported to an Excel sheet. I have about 10 industries in New York that I am trying to collect emails from for a future marketing campaign, so I will be reusing this software. Please contact me with ideas, pricing, and a completion time for this project as soon as possible. I am looking to start right away. Thank you, Dave Ratner
Skills: Web scraping, Microsoft Excel
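A job like this one usually reduces to three pieces: a fetch loop over directory pages, an email regex, and a CSV export that Excel opens directly. A minimal sketch in Python, assuming the requests library is available; the seed URL is a placeholder, not a real accountant directory:

    # Minimal sketch of the crawler described above: fetch a page, pull
    # email-like strings with a regex, write them to a CSV Excel can open.
    # The seed URL is a placeholder, not a real directory.
    import csv
    import re

    import requests

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def scrape_emails(url):
        """Return the set of email-like strings found on one page."""
        html = requests.get(url, timeout=30).text
        return set(EMAIL_RE.findall(html))

    emails = set()
    for url in ["https://example.com/nyc-accountants"]:  # placeholder seed list
        emails |= scrape_emails(url)

    with open("emails.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email"])
        for email in sorted(emails):
            writer.writerow([email])

A production crawler would also need politeness delays and deduplication across the roughly 30,000 profiles, but the shape stays the same.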
Hourly - Entry Level ($) - Est. Time: Less than 1 week, Less than 10 hrs/week - Posted
I need help extracting products and images from AliExpress and getting them into a CSV or Excel file so I can upload them as listings on a platform similar to eBay. The freelancer can either do the process for me or show me how to do it. You must speak English and be able to communicate over Skype.
Skills: Web scraping, Data Entry, Data scraping
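The extraction half of this posting is a parse-and-append loop. A rough sketch, assuming a product page has already been saved to disk and that BeautifulSoup is available; AliExpress renders much of its catalog with JavaScript, so the h1/img selectors here are illustrative assumptions rather than the site's actual markup:

    # Sketch of the extraction step: parse a product title and image URLs
    # out of a saved product page and append them to a CSV row.
    import csv

    from bs4 import BeautifulSoup

    with open("product_page.html", encoding="utf-8") as f:
        soup = BeautifulSoup(f, "html.parser")

    # Assumed markup: title in the first h1, product photos as img tags.
    title = soup.find("h1").get_text(strip=True) if soup.find("h1") else ""
    images = [img["src"] for img in soup.find_all("img") if img.get("src")]

    with open("products.csv", "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([title, " ".join(images)])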
Fixed-Price - Entry Level ($) - Est. Budget: $50 - Posted
Hello. I am looking for someone to help me collect 2,000 website links and the names associated with those links. The task is very straightforward but time-consuming, and I would like to pay one individual to manually scrape the links and names. It will involve simple copying and pasting from multiple pages on one website into a Google spreadsheet.
Skills: Web scraping, Email Handling
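Although the poster asks for manual work, the same output can be sketched programmatically: pull every anchor's text and href from a listing page into a CSV, which Google Sheets imports directly. The URL below is a placeholder for the unnamed source site:

    # Sketch: collect link text and URL pairs from one listing page into
    # a CSV suitable for import into a Google spreadsheet.
    import csv

    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/listing-page-1", timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    with open("links.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "url"])
        for a in soup.find_all("a", href=True):
            writer.writerow([a.get_text(strip=True), a["href"]])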
Fixed-Price - Intermediate ($$) - Est. Budget: $100 - Posted
Extract the following data:

#1) Allstate: data for the state of Utah as a test.
List of Allstate agents by state: https://agents.allstate.com/index.html
Example agent profile: https://agents.allstate.com/weller-agency-eden-ut.html

Fields:
- First name and last name (as separate fields if possible)
- Address (separate fields if possible for street, city, state, zip)
- Email address
- Phone
- Mobile
- Number of reviews
- Star rating
- Link: the link to the rep's Allstate profile

#2) Home Advisor: carpet cleaners and pest control professionals in the state of Utah as a test.
Search page: http://www.homeadvisor.com/sitesearch/searchQuery?action=SEARCH&startIndex=&showBusinessOnly=false&searchType=ServiceProfessionalSearch&query=carpet+cleaning&explicitLocation=utah
Example company profile: http://www.homeadvisor.com/rated.ASpotlessCarpet.39550642.html

Fields:
- Company name
- Phone
- Address (separate fields if possible for street, city, state, zip)
- Number of reviews
- Star rating
- Link: the link to the company's Home Advisor profile

The goal is to expand the data to all areas and states.
Skills: Web scraping, Data mining, Data scraping, Internet research
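A sketch of one profile scrape shaped to the fields requested above. The CSS class names are guesses for illustration; the live Allstate and Home Advisor pages would need to be inspected before writing real selectors:

    # Sketch: scrape one agent profile into the requested fields and append
    # it to a CSV. Selector names below are assumptions, not real markup.
    import csv

    import requests
    from bs4 import BeautifulSoup

    def text_or_blank(soup, selector):
        node = soup.select_one(selector)
        return node.get_text(strip=True) if node else ""

    url = "https://agents.allstate.com/weller-agency-eden-ut.html"
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

    row = {
        "name": text_or_blank(soup, "h1"),                # split first/last downstream
        "address": text_or_blank(soup, ".address"),       # assumed class name
        "phone": text_or_blank(soup, ".phone"),           # assumed class name
        "reviews": text_or_blank(soup, ".review-count"),  # assumed class name
        "link": url,
    }

    with open("agents.csv", "a", newline="", encoding="utf-8") as f:
        csv.DictWriter(f, fieldnames=row.keys()).writerow(row)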
Hourly - Entry Level ($) - Est. Time: More than 6 months, 30+ hrs/week - Posted
You will log into a website that provides specifications for items, and you will export the data into Excel. Some of the data will be exported automatically, and some items you will need to copy and paste manually; if you can find another method, such as data mining, you are free to use it as long as the data is accurate. (If you are able to mine the data, you will receive bonus compensation equal to the average number of hours it would have taken to gather the data manually.) The information will be saved to a Google Drive file, as well as to a local Excel file for backup. We will explain the steps and procedures to you over a Skype screen-share conference; we have also uploaded a tutorial video to YouTube for your reference, which will be provided to you upon hire. We have done this manually ourselves before and know how long it takes. The average rate is approximately 80-90 entries per hour worked, and you will be expected to meet this quota. If you are able to mine the data in bulk, you will be compensated with a bonus at this rate. YOU MUST BE AVAILABLE FOR WORK AND COMMUNICATION DURING OUR WORKING HOURS, 9AM-5PM PACIFIC TIME, FOR THE FIRST WEEK. AFTER BOTH PARTIES ARE COMFORTABLE WITH YOU WORKING ON YOUR OWN, YOU MAY WORK YOUR OWN HOURS. THIS IS ABSOLUTELY REQUIRED.
Skills: Web scraping, Data Entry, Data mining, Data scraping
Hourly - Intermediate ($$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
This job focuses on improving the experience that thousands of users get when navigating, browsing, searching, and comparing the content offered through our proprietary technology platform. The end result (the output of the ontology model) will be a set of intuitive, comprehensive multi-level navigation structures (hierarchical taxonomies, facets) for browsing, searching, and tagging the content offered to our clients. We envision the task being achieved primarily with Semantic Web concepts and data (LOD and other available SKOS) per Semantic Web standards. It will most likely require knowledge of several RDF-based schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SIOC, Schema.org) and use of the W3C's Semantic Web technology stack (SPARQL, Protege, semantic reasoners).

Key tasks:
- Definition of RDF schemas and ontologies based on several existing RDF schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SIOC, Schema.org, etc.)
- Linking available LOD and SKOS data sets; building several core multi-level hierarchical taxonomies (on the order of tens of thousands of elements) that comprehensively describe the content in our system
- Rule-based processing and linking of multiple existing and newly obtained data sets using semantic reasoners
- Definition, structuring, and optimization of hierarchical data sets; definition and maintenance of hierarchical relationships between particular terms (facets)
- Research (independent, as well as guided by the management team) on publicly available SKOS and LOD sets related to the platform's content, drawn from public sources (international standards, patent databases, public and government databases, various organizational and XML datasets, etc.) as well as acquired proprietary sources
- Retrieval and ETL of multiple additional data sets from multiple sources
- Tagging, classification, and entity extraction
- Working with the management team to maintain and advance particular segments of the defined taxonomies

Optional stretch tasks (depending on the candidate's qualifications):
- Automatic analysis of content and extraction of semantic relationships
- Auto-tagging and auto-indexing
- Integration and usage of selected IBM Watson services for content analysis
- Integration with enterprise taxonomy management platforms (Mondeca, Smartlogic, PoolParty, or others)

This job will initially require a commitment of 15-20 hours per week over a 3-6 month engagement. Interaction with a responsible manager will be required at least twice a week over Skype and Google Hangouts. Longer-term cooperation is possible based on the results of the initial engagement.

Required experience:
- Detailed knowledge of Semantic Web concepts and techniques
- Intimate familiarity with the W3C's Semantic Web technology stack (RDF, SPARQL, etc.)
- Hands-on experience with LOD (DBpedia and others) and various SKOS
- Experience modeling data based on various RDF schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SIOC, ISO 25964, etc.)
- Knowledge of common open-source ontology environments and tools (MediaWiki, Protege, etc.) or other enterprise-grade ontology tools (Synaptica, Data Harmony, PoolParty, Mondeca, TopBraid, etc.)
- Experience working with semantic reasoners
- Prior experience with content management and maintenance of taxonomies for consumer or e-commerce applications

Additional preferred experience:
- Background in Library and Information Science (MLIS), Knowledge Management, Information Management, Linguistics, or Cognitive Science
- Familiarity with common classification systems
- Experience working with catalog and classification systems and creating thesauri
- Auto-tagging, auto-classification, and entity extraction
Skills: Web scraping, Web Crawling, Data Analytics, Data Entry
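This posting is less about scraping than about the W3C stack it names. As a flavor of the core workflow, here is a minimal sketch that walks one step up a SKOS hierarchy on DBpedia's public SPARQL endpoint using the Python SPARQLWrapper library; the endpoint and the example concept are illustrative assumptions, not the client's data:

    # Minimal SPARQL sketch: follow skos:broader links for one concept on
    # DBpedia's public endpoint. The concept is a stand-in, not the
    # client's actual content.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
        SELECT ?broader WHERE {
            <http://dbpedia.org/resource/Category:Accountants> skos:broader ?broader .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    # Each binding is one step up the category hierarchy: raw material for
    # a multi-level navigation taxonomy.
    for row in results["results"]["bindings"]:
        print(row["broader"]["value"])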
Fixed-Price - Intermediate ($$) - Est. Budget: $48,999 - Posted
Hello. I need to scrape text from websites found by Google. The total list of keywords is 4 million. I don't know how much it should cost, so place your bid.
Skills: Web scraping
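At four million keywords this is really a search-API problem, and scraping Google's result pages directly is against its terms of service, so a bidder would likely quote an API-based approach for the first step. The second step, pulling visible text from each result URL, is straightforward. A sketch, with a placeholder URL:

    # Sketch of the text-extraction step: fetch one result URL and return
    # its visible text. The URL is a placeholder.
    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/some-result", timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    # Drop script/style nodes so get_text() returns only readable content.
    for tag in soup(["script", "style"]):
        tag.decompose()

    text = soup.get_text(separator=" ", strip=True)
    print(text[:500])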
Fixed-Price - Intermediate ($$) - Est. Budget: $50 - Posted
Create a tool that will identify products that are for sale on Amazon and NOT for sale on eBay. The search criteria would be based on title keyword comparison. The following filters should be in place:

1. Specify the min/max price of the products searched. For example, if I only want to search products less than $50, it will not show results that are higher than $50.
2. Amazon category selection. I should be able to search based on Amazon category (every category available on Amazon).
3. Select the number of keywords that the tool will compare between Amazon and eBay. For example, if I select 6 words, it will grab the first 6 words of every title within the specific category and min/max price range and search for titles with those first 6 words on eBay. It will only show a result IF that item is NOT sold on eBay. I should be able to select between 3 words and 7 words (3 words being a broad search, 7 words being a more specific search).
4. Show the sales rank of Amazon search results.

The purpose of this tool is simply to find items that are selling on Amazon that ARE NOT being sold on eBay.
Skills: Web scraping, Amazon MWS, Amazon Web Services, eBay API
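The heart of this tool is the truncated-title comparison. Real listings would come from the Amazon MWS and eBay APIs named in the skills line; the sketch below substitutes placeholder title lists so the matching logic itself is runnable:

    # Sketch of the comparison step: take the first N words of each Amazon
    # title and keep only titles with no eBay counterpart. Both title lists
    # are placeholders standing in for API results.
    def first_words(title, n):
        return " ".join(title.lower().split()[:n])

    amazon_titles = ["Acme 12-Cup Programmable Coffee Maker Black"]  # placeholder
    ebay_titles = ["Acme 12-Cup Programmable Coffee Maker"]          # placeholder

    N = 5  # 3 words gives a broad search, 7 a more specific one
    ebay_keys = {first_words(t, N) for t in ebay_titles}

    not_on_ebay = [t for t in amazon_titles if first_words(t, N) not in ebay_keys]
    print(not_on_ebay)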
Fixed-Price - Intermediate ($$) - Est. Budget: $800 - Posted
List of URLs: lakorn.guchill.com, www.seriesubthai.tv, www.kodhit.mobi, cuptv.com, www.startclip.com, cn.upyim.com, www.jengmak.com, www.songdee.com, th.hao123.com, www.friv.com, newsupdate.todayza.com, diply.com, tvshow.guchill.com, www.subthaiseries.com, www.tunwalai.com, www.yumzap.com, www2.adintrend.com, devian.tubemate.home, lakorn.guchill.com, www.kodhit.mobi, cuptv.com, www.jengmak.com

Step 1: Review the list of URLs above and mine the top 1,000 pages from each URL using a crawler.
Step 2: Extract terms from each of these mined web pages.
Step 3: Determine whether the terms from these web pages match terms from the entertainment list or elements from the news list (fill in the attached worksheet).
Step 4: For each site, create a new worksheet and repeat the process.
Skills: Web scraping, Data Analytics, Data scraping, Machine learning
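Steps 2 and 3 of the posting above amount to tokenizing each crawled page and intersecting the tokens with the two category lists. A sketch for a single page; the term lists are placeholders standing in for the attached worksheet:

    # Sketch of steps 2-3: extract terms from one fetched page and check
    # them against the entertainment and news lists.
    import re

    import requests
    from bs4 import BeautifulSoup

    ENTERTAINMENT = {"series", "lakorn", "drama"}  # placeholder list
    NEWS = {"news", "update", "politics"}          # placeholder list

    html = requests.get("http://cuptv.com", timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    terms = set(re.findall(r"[a-z]+", soup.get_text(separator=" ").lower()))

    print("entertainment matches:", terms & ENTERTAINMENT)
    print("news matches:", terms & NEWS)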