
Web Crawler Jobs

54 were found based on your criteria

Fixed-Price - Intermediate ($$) - Est. Budget: $65 - Posted
I am looking for a part-time person to be a virtual assistant for me. Duties include:
1. Checking a generic email inbox twice daily and forwarding messages to me as they come, filtering out unnecessary/spam emails.
2. Web research on companies, people, or bloggers related to my business.
3. Emailing to request appointments, interviews, and partnership opportunities with bloggers or other people related to my business (including potential affiliate opportunities).
4. Transcribing audio files and/or MP3s (should have very high skill).
5. Creating visual images (not necessary, but nice to have).
6. Communicating regularly with me via the Voxer app on your smartphone (free download, two-way walkie-talkie communication).
7. Creating spreadsheets in Excel and using Google Drive Sheets and Docs.
8. Handling my email when I am away on vacation.
I am looking for someone with great communication skills, someone who understands the importance of communicating with me frequently, both during and after tasks are completed. I am easygoing and like progress, no matter how big or small. Skills required: Excel and Word; the ability to speak English with a good level of proficiency; customer service; data entry; writing succinct, clear emails. Long-term future work is a good probability if you are the right fit, depending on results. I would like to have a Skype chat to determine the right fit. Please email me your resume as well.
Skills: Web Crawler, Administrative Support, Customer service, Data Entry
Hourly - Entry Level ($) - Est. Time: Less than 1 month, Less than 10 hrs/week - Posted
I need some help scraping a list of U.S. and UK home improvement and decor blogs. I am in need of the URL, contact name (if available), contact email, and category (if available). I would prefer an automated data scrape; however, I am open to manual research and entry as well. At the end of the day, I would like at least 300 high-quality, valid contacts to use for a press-release and affiliate marketing campaign. Thanks for looking!
Skills: Web Crawler, Data Entry, Data mining, Data scraping
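A bidder on a post like this might start from a minimal sketch along the following lines, assuming requests and BeautifulSoup are acceptable tools and that a seed list of blog URLs already exists (blog_urls.txt and the output columns are placeholders, not part of the posting). It only pulls each blog's page title and any mailto: links; contact names and categories would still need per-site rules or manual research, which fits the poster's openness to manual entry.

# Minimal sketch: pull page title and any mailto: contacts from a seed list of blogs.
# Assumes a hypothetical blog_urls.txt with one URL per line; real blogs will need
# per-site tweaks (contact pages, obfuscated emails, etc.).
import csv

import requests
from bs4 import BeautifulSoup

def scrape_blog(url):
    resp = requests.get(url, timeout=15, headers={"User-Agent": "contact-research-bot"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    emails = {a["href"].replace("mailto:", "").split("?")[0]
              for a in soup.find_all("a", href=True) if a["href"].startswith("mailto:")}
    return {"url": url, "blog": title, "email": "; ".join(sorted(emails))}

if __name__ == "__main__":
    with open("blog_urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]
    with open("blog_contacts.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=["url", "blog", "email"])
        writer.writeheader()
        for url in urls:
            try:
                writer.writerow(scrape_blog(url))
            except requests.RequestException:
                pass  # skip unreachable blogs; they can be handled manually later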
Fixed-Price - Expert ($$$) - Est. Budget: $300 - Posted
I'm in need of an application which I can run at scheduled intervals using the Windows scheduler to retrieve sports odds from three South African betting websites. The application should retrieve all available markets for the events and should scrape/mine the following sites:
  • sunbet.co.za
  • sportingbet.co.za
  • bet.co.za
The application should then generate CSV files with the retrieved odds, or possibly insert them into an SQL database. If the scraping/mining of the three sites works well, I will definitely have more work available by adding additional sites. Note: if you don't have experience with data mining or website scraping, please don't submit a proposal.
Skills: Web Crawler, Data mining, Data scraping, Web scraping
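Nothing about the three sites' internals is given, so only a hedged skeleton is possible. In the sketch below, fetch_odds() is a hypothetical per-site parser to be filled in after inspecting each site's markup or data feeds, and the script is meant to be launched by the Windows scheduler rather than looping itself.

# Skeleton for a scheduled odds snapshot: one CSV per run, one row per market/outcome.
# The per-site parsers are placeholders; each of the three sites would need its own
# implementation once its markup or data feed has been inspected.
import csv
from datetime import datetime, timezone

import requests

SITES = ["https://www.sunbet.co.za", "https://www.sportingbet.co.za", "https://www.bet.co.za"]

def fetch_odds(site_url):
    """Hypothetical parser: return rows of (event, market, outcome, odds) for one site."""
    html = requests.get(site_url, timeout=30).text
    # TODO: parse `html` (or call the site's own data endpoints) and return real rows.
    return []

def main():
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    with open(f"odds_{stamp}.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["site", "event", "market", "outcome", "odds", "captured_at"])
        for site in SITES:
            for event, market, outcome, odds in fetch_odds(site):
                writer.writerow([site, event, market, outcome, odds, stamp])

if __name__ == "__main__":
    main()

Swapping the CSV writer for inserts through sqlite3 or pyodbc would cover the SQL-database option mentioned in the posting.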
Fixed-Price - Intermediate ($$) - Est. Budget: $100 - Posted
Hello, I'm looking for someone to download all prices for every part, for all vehicle years/makes/models, from an automotive parts site such as http://www.rockauto.com/en/catalog/. I would like it summarized in an Excel spreadsheet or an itemized database that can be downloaded. Please let me know: A) how you plan to do this (crawler, manually, etc.), B) how long it will take you, and C) the price you will charge (can be more or less than $100).
Skills: Web Crawler, Web scraping
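Whether that catalog can be crawled at all depends on the site's structure and terms of use, neither of which the posting settles, so only the output side can be sketched with confidence. Below, crawl_catalog() is a hypothetical generator, and pandas produces the requested spreadsheet; it also suggests one answer to the poster's question A (a crawler feeding a tabular summary).

# Sketch of the output side only: accumulate (year, make, model, part, price) rows
# and write them to a spreadsheet. crawl_catalog() is hypothetical; the real catalog
# structure (and the site's terms of use) must be checked before filling it in.
import pandas as pd

def crawl_catalog(start_url):
    """Hypothetical crawler: yield dicts like
    {"year": 2015, "make": "Toyota", "model": "Corolla", "part": "...", "price": 12.34}."""
    yield from []  # to be implemented once the catalog structure is known

def main():
    rows = list(crawl_catalog("http://www.rockauto.com/en/catalog/"))
    df = pd.DataFrame(rows, columns=["year", "make", "model", "part", "price"])
    df.to_excel("parts_prices.xlsx", index=False)  # requires openpyxl; CSV would drop that dependency

if __name__ == "__main__":
    main()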
Fixed-Price - Intermediate ($$) - Est. Budget: $250 - Posted
I need help scraping university websites for email addresses. This is lightweight work. The code can be written in any language, preferably Ruby. I will need a CSV file in the format I specify. I will also specify the sites and techniques to scrape, as I have done this before. There are close to 30 sites you will need to scrape. I will give you the university name, and you will need to get names from the university's Facebook group or use common names that I can provide. Find the student directory page for that university and use it to search for and fetch the email addresses and other meta information from the results.
  • Number of freelancers needed: 2
Skills: Web Crawler, Data scraping, HTML, JavaScript
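Since the poster will supply the sites, the CSV format, and the techniques, only the generic shape can be sketched. The example below is in Python for consistency with the other sketches (the poster prefers Ruby); the names.csv layout and the ?q= directory query are assumptions, and a regex does the actual email extraction.

# Generic shape of the task: search a (hypothetical) student directory URL for each
# supplied name and pull email addresses out of the result page with a regex.
# The real directory URLs, query parameters, and CSV format come from the client.
import csv
import re

import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def search_directory(directory_url, name):
    """Hypothetical search: assumes the directory accepts a ?q=<name> query."""
    resp = requests.get(directory_url, params={"q": name}, timeout=15)
    resp.raise_for_status()
    return EMAIL_RE.findall(resp.text)

def main():
    # names.csv: university,directory_url,name  (layout to be replaced by the client's spec)
    with open("names.csv") as f, open("emails.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["university", "name", "email"])
        for university, directory_url, name in csv.reader(f):
            for email in search_directory(directory_url, name):
                writer.writerow([university, name, email])

if __name__ == "__main__":
    main()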
Hourly - Intermediate ($$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
This job is focused on advancing the experience that thousands of users get navigating, browsing, searching, and comparing the content offered through our proprietary technology platform. The end result (output of the ontology model) will be a set of intuitive and comprehensive multi-level navigation structures (hierarchical taxonomies, facets) for browsing, searching, and tagging the content offered to our clients. The end task is envisioned to be achieved primarily through the use of Semantic Web concepts and data (LOD and other available SKOS) as per Semantic Web standards. The task will most likely require knowledge/learning of several RDF-based schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SCIOC, Schema.org) and use of the W3C's Semantic Web technology stack components (SPARQL, Protege, semantic reasoners).
Key tasks:
- Definition of RDF Schema and ontologies based on several existing RDF schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SCIOC, Schema.org, etc.)
- Linking available LOD and SKOS data sets; building several core multi-level hierarchical taxonomies (on the order of tens of thousands of elements) comprehensively describing the content in our system
- Rule-based processing and linking of multiple existing, as well as obtained, data sets using semantic reasoners
- Definition, structuring, and optimization of hierarchical data sets; definition and maintenance of hierarchical relationships of particular terms (facets)
- Research (independent, as well as guided by the management team) on publicly available SKOS and LOD sets related to the content of the platform, from public sources (international standards, patent databases, public and government databases, various organizational sources, available XML datasets, etc.) as well as acquired proprietary sources
- Retrieval and ETL of multiple additional data sets from multiple sources
- Tagging, classification, entity extraction
- Working with the management team to maintain and advance particular segments of the defined taxonomies
Optional stretch tasks (depending on the candidate's qualifications):
- Automatic analysis of content, extraction of semantic relationships
- Auto-tagging, auto-indexing
- Integration and usage of selected IBM Watson services for content analysis
- Integration with Enterprise Taxonomy Management platforms (Mondeca, Smartlogic, PoolParty, or others)
This job will initially require a commitment of 15-20 hours per week over a 3-6 month engagement. Interaction with a responsible manager will be required at least twice a week over Skype and Google Hangouts. Longer-term cooperation is possible based on the results of the initial engagement.
Required experience:
- Detailed knowledge of Semantic Web concepts and techniques
- Intimate familiarity with the W3C's Semantic Web technology stack (RDF, SPARQL, etc.)
- Hands-on experience with LOD (DBpedia and others) and various SKOS
- Experience modeling data based on various RDF schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SCIOC, ISO 25964, etc.)
- Knowledge of common open-source ontology environments and tools (MediaWiki, Protege, etc.) or other enterprise-grade ontology tools (Synaptica, DataHarmony, PoolParty, Mondeca, TopBraid, etc.)
- Experience working with semantic reasoners
- Prior experience with content management and maintenance of taxonomies for consumer or e-commerce applications
Additional preferred experience:
- Background in Library and Information Science (MLIS), Knowledge Management, Information Management, Linguistics, or Cognitive Sciences
- Familiarity with common classification systems
- Experience working with catalog and classification systems and creating thesauri
- Auto-tagging, auto-classification, entity extraction
Skills: Web Crawler, Web Crawling, Data Analytics, Data Entry
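This posting is about ontology modelling rather than a single script, but one recurring building block is querying SKOS data with SPARQL. As a small, hedged illustration (vocabulary.ttl is a placeholder file, not something named in the posting), the following uses rdflib to list broader/narrower concept pairs, the raw material of a hierarchical navigation taxonomy.

# Small illustration of one building block of this kind of work: load a SKOS
# vocabulary with rdflib and list broader/narrower concept pairs.
# "vocabulary.ttl" is a placeholder for whichever SKOS/LOD data set is in play.
from rdflib import Graph

QUERY = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?childLabel ?parentLabel WHERE {
  ?child skos:broader ?parent .
  ?child skos:prefLabel ?childLabel .
  ?parent skos:prefLabel ?parentLabel .
}
"""

g = Graph()
g.parse("vocabulary.ttl", format="turtle")
for child_label, parent_label in g.query(QUERY):
    print(f"{child_label} -> {parent_label}")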
Hourly - Intermediate ($$) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
I will provide a company name and potential job titles to look for. For each company, you will find the exact person's name and job title and add them to columns in an existing spreadsheet. Example of what I will provide: company name: Google; company domain: google.com; title(s) to look for: CEO. You will return two columns (column A = name1, column B = title1), e.g. name, title: Sundar Pichai, CEO. This must be done in bulk and automatically as new rows are added to the sheet. If I have a list of 100 companies and/or domains, you should be able to return accurate results in minutes or seconds for all of them. This is not a manual job. You can create a scraper, crawler, API, whatever; I just need fast, accurate, consistent results.
Skills: Web Crawler, Web Crawling, Data scraping, Scrapy
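The speed the poster asks for implies querying some data source (a people-data API, a search index, or a crawler), and none is named, so lookup_contact() below is purely a placeholder; the sketch only shows the bulk, two-column output shape, with companies.csv and contacts.csv as assumed file names.

# Bulk-processing shape only: read company/domain/title rows and write back name/title
# columns. lookup_contact() is a placeholder for whatever data source is actually used
# (a people-data API, a search engine, a crawler); none is specified in the posting.
import csv

def lookup_contact(company, domain, title):
    """Hypothetical lookup: return (person_name, person_title) or (None, None)."""
    return None, None  # to be backed by a real API or crawler

def main():
    with open("companies.csv") as f, open("contacts.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["company", "domain", "wanted_title", "name", "title"])
        for company, domain, wanted_title in csv.reader(f):
            name, title = lookup_contact(company, domain, wanted_title)
            writer.writerow([company, domain, wanted_title, name or "", title or ""])

if __name__ == "__main__":
    main()

Keeping results in sync as new rows land in a live sheet would mean polling the spreadsheet (e.g. via the Google Sheets API) rather than a one-off CSV, which is a design decision to settle with the client.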
Hourly - Expert ($$$) - Est. Time: Less than 1 month, 30+ hrs/week - Posted
I am in need of hotel and restaurant lists with business name, email, and website (optional). Countries: USA, UK, France, Italy, Spain, Turkey, UAE, Australia, Japan, Brazil. Email type: managers' emails or generic, e.g. reservation@, info@, frontdesk@, reception@, rsv@, etc. ONLY APPLY IF YOU HAVE OR CAN GET MORE THAN 1000 CONTACTS A DAY. We consider acquiring existing databases too, only large ones.
  • Number of freelancers needed: 3
Skills: Web Crawler, Data mining, Data scraping, Email Marketing
Hourly - Expert ($$$) - Est. Time: Less than 1 month, 30+ hrs/week - Posted
I am in need of a hotel database with hotel name and email. Countries: USA, UK, France, Italy, Australia, Japan, Brazil. Email type: generic, e.g. reservation@, info@, frontdesk@, reception@, rsv@, etc. Existing databases are accepted too.
  • Number of freelancers needed: 3
Skills: Web Crawler, Data mining, Data scraping, Email Marketing