Web Crawler Jobs

71 jobs were found based on your criteria

Fixed-Price - Entry Level ($) - Est. Budget: $20 - Posted
I am looking for a knowledgeable custom Android developer who is very skilled with Java and Android development and is able to think outside the box on projects. I have some specifications and ideas, and am looking for the right Android developer to help me execute them. If you feel you can accomplish any task and are willing to learn and grow, please contact me for more details! Looking forward to finding the right Android devs.
Skills: Web Crawler Android Android App Development Android SDK
Fixed Price Budget - Expert ($$$) - $100 to $500 - Posted
We are looking to develop a price comparison website. Please apply with some sample work and your best cost. Preferred stack: website in Ruby on Rails, crawler in Scrapy (Python). However, we are open to good suggestions.
Skills: Web Crawler PHP Ruby on Rails Scrapy
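The posting prefers Scrapy for the crawler, but the core extraction step it needs can be sketched with the standard library alone. The markup below (a `span` with class `price`) is a hypothetical example; real competitor pages would each need their own selectors, e.g. written as Scrapy spiders.

```python
from html.parser import HTMLParser

# Minimal sketch of the price-extraction step a comparison site needs.
# The sample markup is invented; real sites need per-site selectors.
class PriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(float(data.strip().lstrip("$")))
            self.in_price = False

sample = '<div><span class="price">$19.99</span></div>'
parser = PriceParser()
parser.feed(sample)
print(parser.prices)  # [19.99]
```

In a Scrapy project the same logic would live in a spider's `parse` callback, with Scrapy handling fetching, throttling, and scheduling.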
Fixed Price Budget - Intermediate ($$) - $200 to $500 - Posted
We are looking for a data mining and scraping pro to build us a simple tool that will help us check the prices on our competitors' websites. This tool will work on 4 predefined competing sites and will take several search parameters to find those products and prices on the competing sites. Please only bid if you have experience with data scraping tools. We will give more details to the best candidates. Thanks and good luck.
Skills: Web Crawler Data mining Data scraping JavaScript
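Once prices have been scraped from the four predefined competitor sites, the comparison step itself is simple. A hypothetical sketch (site names, products, and prices are all invented for illustration):

```python
# Given prices already scraped from predefined competitor sites,
# report the cheapest offer per product.
def cheapest_offers(prices_by_site):
    """prices_by_site: {site: {product: price}} -> {product: (site, price)}"""
    best = {}
    for site, products in prices_by_site.items():
        for product, price in products.items():
            if product not in best or price < best[product][1]:
                best[product] = (site, price)
    return best

scraped = {
    "competitor-a.example": {"widget": 19.99, "gadget": 7.50},
    "competitor-b.example": {"widget": 17.49, "gadget": 8.25},
}
print(cheapest_offers(scraped))
# {'widget': ('competitor-b.example', 17.49), 'gadget': ('competitor-a.example', 7.5)}
```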
Fixed-Price - Expert ($$$) - Est. Budget: $400 - Posted
Hi, we have to build a large database for marketing. Please let us know your price for a 10,000-contact database. Long-term opportunity for a good team or company. Looking for a low price and a quick finisher. Fields:
1. First Name
2. Last Name
3. Professional Title
4. Specialty
5. Organization Name
6. Address 1
7. Address 2
8. City
9. State
10. Zip
11. Phone
12. Email – this should be the individual's direct email address, not a shared email address. Company email addresses such as info@medsouth.com do not qualify. The domain must be a company domain. Addresses from personal email providers (AOL, Hotmail, Gmail, etc.) are unacceptable.
13. Company URL
Skills: Web Crawler Data Entry Data mining Data scraping
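The email-qualification rule in the posting (no personal providers, no shared mailboxes) is easy to automate. A sketch of that check; the provider and prefix lists below are illustrative, not exhaustive:

```python
# Reject personal email providers and shared mailboxes; keep individual
# addresses on a company domain. Lists are illustrative, not exhaustive.
PERSONAL_DOMAINS = {"aol.com", "hotmail.com", "gmail.com", "yahoo.com"}
SHARED_PREFIXES = {"info", "sales", "support", "admin", "contact", "office"}

def qualifies(email: str) -> bool:
    local, _, domain = email.lower().partition("@")
    if not domain or domain in PERSONAL_DOMAINS:
        return False
    return local not in SHARED_PREFIXES

print(qualifies("jane.doe@medsouth.com"))  # True
print(qualifies("info@medsouth.com"))      # False: shared mailbox
print(qualifies("jdoe@gmail.com"))         # False: personal provider
```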
Hourly - Intermediate ($$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
This job is focused on advancing the experience that thousands of users get navigating, browsing, searching and comparing the content offered through our proprietary technology platform. The end result (output of the ontology model) will be a set of intuitive and comprehensive multi-level navigation structures (hierarchical taxonomies, facets) for browsing, searching and tagging the content offered to our clients. The end task is envisioned to be achieved primarily with Semantic Web concepts and data (LOD and other available SKOS) as per Semantic Web standards. The task will most likely require knowledge/learning of several RDF-based schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SCIOC, Schema.org) and usage of the W3C's Semantic Web technology stack components (SPARQL, Protege, semantic reasoners).

Key tasks:
- Definition of RDF Schema and ontologies based on several existing RDF schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SCIOC, Schema.org, etc.)
- Linking available LOD and SKOS data sets; building several core multi-level hierarchical taxonomies (on the order of tens of thousands of elements) comprehensively describing the content in our system
- Rule-based processing and linking of multiple existing, as well as obtained, data sets using semantic reasoners
- Definition, structuring and optimization of hierarchical data sets; definition and maintenance of hierarchical relationships of particular terms (facets)
- Research (independent, as well as guided by the management team) on publicly available SKOS and LOD sets related to the content of the platform, from public sources (international standards, patent databases, public and government databases, various organizational and available XML datasets, etc.) as well as acquired proprietary sources
- Retrieval and ETL of multiple additional data sets from multiple sources
- Tagging, classification, entity extraction
- Working with the management team to maintain and advance particular segments of defined taxonomies

Optional stretch tasks (depending on candidate's qualifications):
- Automatic analysis of content, extraction of semantic relationships
- Auto-tagging, auto-indexing
- Integration and usage of selected IBM Watson services for content analysis
- Integration with Enterprise Taxonomy Management platforms (Mondeca, Smartlogic, PoolParty, or others)

This job will initially require a commitment of 15-20 hours per week over a 3-6 month engagement. Interaction with a responsible manager will be required at least twice a week over Skype and Google Hangouts. Longer-term cooperation is possible based on the results of the initial engagement.

Required experience:
- Detailed knowledge of Semantic Web concepts and techniques
- Intimate familiarity with the W3C's Semantic Web technology stack (RDF, SPARQL, etc.)
- Hands-on experience with LOD (DBpedia and others) and various SKOS
- Experience modeling data based on various RDF schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SCIOC, ISO 25964, etc.)
- Knowledge of common open-source ontology environments and tools (MediaWiki, Protege, etc.) or other enterprise-grade ontology tools (Synaptica, Data Harmony, PoolParty, Mondeca, TopBraid, etc.)
- Experience working with semantic reasoners
- Prior experience with content management and maintenance of taxonomies for consumer or e-commerce applications

Additional preferred experience:
- Background in Library and Information Science (MLIS), Knowledge Management, Information Management, Linguistics or Cognitive Sciences
- Familiarity with common classification systems
- Experience working with catalog and classification systems and creation of thesauri
- Auto-tagging, auto-classification, entity extraction
Skills: Web Crawler Web Crawling Data Analytics Data Entry
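The deliverable above is a set of multi-level hierarchical taxonomies with facet paths. The real task would model these as SKOS concepts queried via SPARQL; as a toy stand-in, a plain parent map in Python illustrates the shape of the output, deriving the full facet path for a term by walking up to the root. All term names below are invented.

```python
# Toy stand-in for a SKOS-style broader/narrower hierarchy: a parent map,
# from which the full facet path of any term can be derived. Terms invented.
PARENT = {
    "Software Engineering": "Engineering",
    "Engineering": "Occupations",
    "Data Science": "Occupations",
    "Machine Learning": "Data Science",
}

def facet_path(term):
    """Walk parent links up to the root; return root-to-term path."""
    path = [term]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return list(reversed(path))

print(facet_path("Machine Learning"))
# ['Occupations', 'Data Science', 'Machine Learning']
```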
Fixed-Price - Intermediate ($$) - Est. Budget: $500 - Posted
Looking for someone expert in scraping data from various websites and saving the data to MySQL/CSV. The script has to be Python or PHP. If Python, it should work on a Linux server running a LAMP PHP website. If you are really good, I don't mind offering you a full-time job, as I will need hundreds of scraping tasks over the next 3 months. This job is for 6 websites, but I might need some other small scrapers before I give the big project. Please answer these:
1. Write 'ddonk' before your application.
2. Let me know if you prefer PHP or Python.
3. Mention what websites you have scraped: Google, LinkedIn, Amazon, Yellow Pages?
4. Show me a link to any web application that does scraping, if you have built one.
5. Do you have a full-time job and freelance part-time, or are you a full-time freelancer?
Skills: Web Crawler Data mining Data scraping Django
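The "save to MySQL / CSV" part of this posting maps to a short persistence step. A self-contained sketch: the rows are placeholder scraped records, and `sqlite3` stands in for MySQL so the example runs anywhere (against real MySQL you would use a driver such as PyMySQL instead).

```python
import csv
import io
import sqlite3

# Placeholder scraped records (product, price, source site) - invented data.
rows = [("widget", 19.99, "site-a.example"),
        ("gadget", 7.50, "site-b.example")]

# CSV output (io.StringIO stands in for a file on disk).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["product", "price", "source"])
writer.writerows(rows)

# SQL output (sqlite3 stands in for MySQL; swap in a MySQL driver in prod).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE prices (product TEXT, price REAL, source TEXT)")
db.executemany("INSERT INTO prices VALUES (?, ?, ?)", rows)
print(db.execute("SELECT COUNT(*) FROM prices").fetchone()[0])  # 2
```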
Fixed-Price - Intermediate ($$) - Est. Budget: $5,000 - Posted
The job requires creativity and analytical thinking to solve problems and move the growth metrics for the project. Skills required:
1. Web crawling
2. Scripting: JavaScript/Ruby/Python
3. File I/O: reading and writing CSV files
Mentorship will be provided if needed. Pay is fixed for the task; however, I am flexible and can pay hourly once we establish a rapport.
Skills: Web Crawler Web scraping
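The web crawling skill these postings keep asking for boils down to frontier/visited bookkeeping plus link extraction. A minimal breadth-first sketch over an in-memory "site" (the page map is invented; real code would fetch URLs over HTTP with urllib or a framework like Scrapy):

```python
from html.parser import HTMLParser

# In-memory stand-in for a website: path -> HTML body. Invented pages.
PAGES = {
    "/":  '<a href="/a">A</a> <a href="/b">B</a>',
    "/a": '<a href="/b">B</a>',
    "/b": '<a href="/">home</a>',
}

class LinkParser(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]

def crawl(start):
    """Breadth-first crawl: frontier queue plus a visited set."""
    visited, frontier = set(), [start]
    while frontier:
        url = frontier.pop(0)
        if url in visited or url not in PAGES:
            continue
        visited.add(url)
        parser = LinkParser()
        parser.feed(PAGES[url])
        frontier += parser.links
    return visited

print(sorted(crawl("/")))  # ['/', '/a', '/b']
```

A production crawler would add politeness (robots.txt, rate limits) and URL normalization on top of this same loop.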