Data Scraping Jobs

363 jobs were found based on your criteria

Hourly - Expert ($$$) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
We are looking for an expert in WordPress, MySQL, and phpMyAdmin to provide proven advice or plugin suggestions on transferring JUST WordPress content, such as articles, metadata, and images, that matches up with the permalinks. We know there are many plugins for full-site migration, such as BackupBuddy and VaultPress, and have tested them, but these are not what we are looking for. We specifically want ONLY articles and images migrated, with nothing else: no plugins, no themes, and no residual data. Just the content. What we require is someone who knows exactly how to do this, as we've tested the official WordPress import/export tools but they are not currently working for us. So we need some suggestions, and we may also require support via chat to assist us with a manual transfer through phpMyAdmin. We anticipate 1-3 hours of time for this.
Skills: Data scraping, Data Backup, Data Recovery, phpMyAdmin
Fixed-Price - Expert ($$$) - Est. Budget: $350 - Posted
I need a web scraper / crawler that will access this URL. It needs to be possible to modify the URL if changes are made in the future. This will be two scrapers, I assume, as they will access different sections. We will be using the "bin" as the unique identifier; the bin will be pulled from a database. We know that the website checks whether the visitor is a real browser or a crawler. There is also a prioritization page which will load at times. We need the application to be able to monitor/crawl the site for changes.

Once at the page, we need information from the "jobs/filings" section, which can be accessed via a direct URL. Under this section there is a drop-down at the top to "show all filings"; we want to select "hide subsequent filings". We want to take each individual job # and access each of those pages. These are the pages we want to scrape/crawl. The second section/crawler is the "actions" screen; here we want to scrape each of the pages.

We will be inserting the records into a MySQL database. We will need an SQL dump to create the database and tables on our server, plus a config file for database connection settings. We also need:
  • A config file for proxy IP addresses, user names, passwords, etc. If there are entries in the proxy config file, then the app has to crawl the pages using each proxy server with a round-robin strategy.
  • A config file to set a) how many instances we can launch concurrently, and b) a wait time between each request and the prioritization page.
  • A config file for User-Agent strings. If there are entries in this file, the crawler will use each User-Agent string in turn when setting the User-Agent string on the HTTP headers when requesting pages.

The app needs to check HTTP responses for errors. If the status code is anything but 200, the app should try again. If the status code is 200 but the response body is the prioritization page, the app should wait at least 5 seconds and refresh the page to pass the prioritization page.
Skills: Data scraping, JavaScript, Web scraping
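The round-robin proxy/User-Agent rotation and retry behavior requested above could be sketched as follows. This is a minimal illustration, not a deliverable: the config values are inlined, the target URL is a placeholder, and `is_prioritization_page()` is an assumed check whose real marker text would come from the actual site.

```python
# Sketch of the rotation/retry loop described in the posting.
# Proxies, User-Agents, and the prioritization-page check are illustrative.
import itertools
import time
import urllib.request

def make_rotors(proxies, user_agents):
    """Round-robin iterators; fall back to direct/default when a list is empty."""
    proxy_cycle = itertools.cycle(proxies) if proxies else itertools.cycle([None])
    ua_cycle = itertools.cycle(user_agents) if user_agents else itertools.cycle([None])
    return proxy_cycle, ua_cycle

def is_prioritization_page(body: bytes) -> bool:
    # Placeholder check; the real marker would depend on the site's HTML.
    return b"prioritization" in body.lower()

def fetch(url, proxy_cycle, ua_cycle, max_retries=5, wait=5.0):
    for _attempt in range(max_retries):
        proxy = next(proxy_cycle)       # round-robin over configured proxies
        ua = next(ua_cycle)             # round-robin over configured UA strings
        handlers = []
        if proxy:
            handlers.append(urllib.request.ProxyHandler({"http": proxy, "https": proxy}))
        opener = urllib.request.build_opener(*handlers)
        if ua:
            opener.addheaders = [("User-Agent", ua)]
        try:
            with opener.open(url, timeout=30) as resp:
                body = resp.read()
                if resp.status != 200:
                    continue            # non-200: retry with the next proxy/UA
                if is_prioritization_page(body):
                    time.sleep(wait)    # wait >= 5 s, then refresh past the interstitial
                    continue
                return body
        except OSError:
            continue                    # network or proxy error: retry
    return None
```

In a full implementation the proxy list, concurrency limit, wait time, and User-Agent strings would each be read from the separate config files the posting asks for.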
Fixed-Price - Intermediate ($$) - Est. Budget: $500 - Posted
We are looking for someone who can help us do the following:
  • Take data in an Excel spreadsheet, convert it to a dbf file, and confirm that the data was successfully converted and is in the correct fields.
  • Take data that is in a dbf file, remove unwanted data, and upload it into the SQL database.
  • Test and confirm that the data was successfully uploaded into the database and is located in the correct fields. This includes the ability to create queries and debug the SQL database and/or locate data upload errors.
The datasets range in size from 1 MB to large files of up to several hundred GB. The contract can be fixed-price by the job (file/data uploaded) or hourly with a not-to-exceed cost. Individual freelancers only, please.
Skills: Data scraping, Data mining
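The "upload and confirm" step above could be sketched like this. Reading a .dbf file would need a third-party library (e.g. dbfread), so the rows here are inlined for illustration, and an in-memory SQLite database stands in for the real SQL server; the table and field names are invented.

```python
# Sketch of loading rows into SQL and confirming the upload.
# Rows are inlined stand-ins for data read out of a .dbf file.
import sqlite3

rows = [
    ("1001", "Smith", "2024-01-05"),
    ("1002", "Jones", "2024-01-06"),
]

conn = sqlite3.connect(":memory:")  # stand-in for the production SQL database
conn.execute("CREATE TABLE records (acct TEXT, name TEXT, filed TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?, ?)", rows)

# Confirm the upload: row count matches, and a spot-check query shows the
# value landed in the correct field.
count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
assert count == len(rows)
name = conn.execute("SELECT name FROM records WHERE acct = '1001'").fetchone()[0]
assert name == "Smith"
```

The same count-and-spot-check queries scale to the multi-GB files mentioned, and failed checks are exactly the "data upload errors" the posting wants located.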
Fixed-Price - Expert ($$$) - Est. Budget: $200 - Posted
I need an automated way to scrape data from two specific filing types in the EDGAR database and have it populated into an Excel list by category. I'm open to suggestions on how to accomplish this. Ideally it could be automated one way or another to run on a daily/weekly basis.
Skills: Data scraping, Data mining
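One common approach to a job like this is to filter EDGAR's pipe-delimited index files (master.idx lines have the form CIK|Company Name|Form Type|Date Filed|Filename) by form type and write the matches to CSV, which opens directly in Excel. The sample index lines below and the two form types ("8-K", "10-K") are placeholders, since the posting does not name the filing types it needs.

```python
# Sketch: filter EDGAR index entries by form type, write matches to CSV.
# Sample lines and form types are placeholders.
import csv

SAMPLE_IDX = """\
1000045|NICHOLAS FINANCIAL INC|8-K|2024-01-05|edgar/data/1000045/0001.txt
1000046|EXAMPLE CORP|10-Q|2024-01-05|edgar/data/1000046/0002.txt
1000047|SAMPLE HOLDINGS|10-K|2024-01-05|edgar/data/1000047/0003.txt
"""

WANTED_FORMS = {"8-K", "10-K"}  # placeholder for the two filing types needed

def filter_filings(idx_text, wanted):
    rows = []
    for line in idx_text.splitlines():
        parts = line.split("|")
        if len(parts) == 5 and parts[2] in wanted:
            rows.append({"cik": parts[0], "company": parts[1],
                         "form": parts[2], "date": parts[3], "path": parts[4]})
    return rows

filings = filter_filings(SAMPLE_IDX, WANTED_FORMS)

with open("filings.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["cik", "company", "form", "date", "path"])
    w.writeheader()
    w.writerows(filings)
```

The daily/weekly automation the posting asks for could then be a scheduled job (e.g. cron) that downloads the current index and reruns this filter.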
Fixed-Price - Intermediate ($$) - Est. Budget: $35 - Posted
I need someone to create a small script that does the following: 1. Scrape (Role: Sales, Location: United States). 2. For each listing: a. Get the company name. b. Get the company domain name. c. Get the company founder's first and last name (there can be multiple founders). d. Output to XLS or Google Docs. Thanks!
Skills: Data scraping, Web scraping
Fixed-Price - Intermediate ($$) - Est. Budget: $150 - Posted
I am looking for someone to perform a one-time scrape of information from approximately 14,000 pages on an eRetail website. For each page, you will locate and store a short, specific list of attributes which I will provide. Every page is formatted the same, so you will be able to find the attributes in the same place on each page. The final product that you will deliver is a CSV file that includes these attributes. Thanks!
Skills: Data scraping, Web Crawling, Data mining, Web Crawler
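Because every page shares one template, the core of a job like this is a fixed extraction rule per attribute plus a CSV writer. The attribute names, class names, and sample page below are invented for illustration; a production version would more likely use an HTML parser with CSS selectors than regexes, but the per-page flow is the same.

```python
# Sketch of the per-page attribute extraction -> CSV flow.
# Attribute list, class names, and the sample page are invented.
import csv
import re

SAMPLE_PAGE = """
<h1 class="product-title">Widget Pro</h1>
<span class="price">$19.99</span>
<span class="sku">WP-1001</span>
"""

# One pattern per attribute; with a real HTML parser these would be selectors.
PATTERNS = {
    "title": re.compile(r'class="product-title">([^<]+)<'),
    "price": re.compile(r'class="price">([^<]+)<'),
    "sku": re.compile(r'class="sku">([^<]+)<'),
}

def extract(page_html):
    """Pull each attribute from one page; empty string when not found."""
    row = {}
    for name, pat in PATTERNS.items():
        m = pat.search(page_html)
        row[name] = m.group(1).strip() if m else ""
    return row

rows = [extract(SAMPLE_PAGE)]  # in the real job: loop over the ~14,000 fetched pages

with open("products.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=list(PATTERNS))
    w.writeheader()
    w.writerows(rows)
```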
Hourly - Expert ($$$) - Est. Time: More than 6 months, 10-30 hrs/week - Posted
Pull college data from IPEDS in subject areas like psychology, licensed practical nursing, registered nursing, healthcare administration, and more. For each subject area, pull institution contact information, school enrollment, teacher-student ratios, graduation rate, institution accreditation, program accreditation, degree levels offered, degree specialties offered, and more. You may need to pull data from several different databases, such as accreditation and scorecard, and merge the data. Create simple use cases to ensure the datasets are merged correctly. Create a repeatable process that can be run again every six months or every year. Ensure the data is useful and relevant for the subject area. In some cases, help formulate datasets so they can be used as part of ranking college programs against each other. In addition to these projects, help pull and format other job and career data.
Skills: Data scraping, Data mining, Microsoft Excel
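The merge-and-verify step above could be sketched as follows. IPEDS keys institutions by UNITID, which makes it a natural join key across its survey files; the field names and values here are invented, and the "simple use case" is just an assertion that every institution present in both sources carries fields from both after the merge.

```python
# Sketch: merge two institution datasets on a shared key, then run a
# simple use case to confirm the merge. Values are invented.
accreditation = {
    "100654": {"accredited": True},
    "100663": {"accredited": True},
}
scorecard = {
    "100654": {"grad_rate": 0.31},
    "100663": {"grad_rate": 0.54},
}

def merge_on_unitid(*datasets):
    """Combine per-institution field dicts, keyed by UNITID."""
    merged = {}
    for ds in datasets:
        for unitid, fields in ds.items():
            merged.setdefault(unitid, {}).update(fields)
    return merged

colleges = merge_on_unitid(accreditation, scorecard)

# Use case: institutions present in both sources carry fields from both.
for unitid in set(accreditation) & set(scorecard):
    assert "accredited" in colleges[unitid] and "grad_rate" in colleges[unitid]
```

Keeping such checks in the script is what makes the process repeatable for the six-month or yearly refresh the posting describes.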
Fixed-Price - Entry Level ($) - Est. Budget: $20 - Posted
Goal: extract information from Yahoo! Finance pages via a PHP script using simplehtmldom, for later use in an existing PHP installation. Modify (or re-create) the existing PHP function read_yahoo_finance_profile_info($ticker) to get the correct results with simplehtmldom. This function receives the $ticker for a listed company or ETF as a parameter and returns some information found on the page ''.$ticker.'/profile?ltr=1'. The info to return differs depending on whether the QuoteType is "EQUITY" or "ETF".

Example for EQUITY: {"quoteType":{"exchange":"NMS","shortName":"Apple Inc.","longName":"Apple Inc.","underlyingSymbol":null,"quoteType":"EQUITY","symbol":"AAPL","underlyingExchangeSymbol":null,"headSymbol":null,"messageBoardId":"finmb_24937

Example for ETF: {"quoteType":{"exchange":"PCX","shortName":"SPDR S&P 500","longName":"SPDR S&P 500 ETF","underlyingSymbol":null,"quoteType":"ETF","symbol":"SPY","underlyingExchangeSymbol":null,"headSymbol":null,"messageBoardId":"finmb_6160262","

The functions parse_equity() and parse_etf() are the ones extracting the information for EQUITY or for ETF. parse_equity() has to return an array containing the info for shortname, longname, and exchange, extracted from ''.$ticker.'/profile?ltr=1'; similarly for parse_etf(). The job consists of the creation (or modification) of the existing parser.

Some examples:
read_yahoo_finance_profile_info('AAPL') must return: $result['type'] = 'EQUITY'; $result['exchange'] = 'NMS'; $result['shortname'] = 'Apple Inc.'; $result['longname'] = 'Apple Inc.'
read_yahoo_finance_profile_info('SPY') must return: $result['type'] = 'ETF'; $result['exchange'] = 'PCX'; $result['shortname'] = 'SPDR S&P 500'; $result['longname'] = 'SPDR S&P 500 ETF'
read_yahoo_finance_profile_info('ITX.MC') must return: $result['type'] = 'EQUITY'; $result['exchange'] = 'MCE'; $result['shortname'] = 'INDITEX'; $result['longname'] = ''
read_yahoo_finance_profile_info('USO') must return: $result['type'] = 'ETF'; $result['exchange'] = 'PCX'; $result['shortname'] = 'United States Oil Fund'; $result['longname'] = 'United States Oil'
read_yahoo_finance_profile_info('ASSA-B.ST') must return: $result['type'] = 'EQUITY'; $result['exchange'] = 'STO'; $result['shortname'] = 'ASSA ABLOY -B-'; $result['longname'] = 'ASSA ABLOY AB'
Skills: Data scraping, PHP
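The posting asks for PHP with simplehtmldom; the following Python sketch only illustrates the extraction logic against the quoteType JSON shown in the examples (completed here into valid JSON, since the posting's samples are truncated).

```python
# Sketch of mapping the embedded quoteType JSON to the requested result
# fields. The sample is a completed version of the posting's AAPL example.
import json

SAMPLE = json.dumps({
    "quoteType": {
        "exchange": "NMS",
        "shortName": "Apple Inc.",
        "longName": "Apple Inc.",
        "quoteType": "EQUITY",
        "symbol": "AAPL",
    }
})

def read_profile_info(raw_json):
    """Return type/exchange/shortname/longname from a quoteType JSON blob."""
    qt = json.loads(raw_json)["quoteType"]
    return {
        "type": qt.get("quoteType", ""),
        "exchange": qt.get("exchange", ""),
        "shortname": qt.get("shortName", ""),
        "longname": qt.get("longName", ""),  # empty for some tickers, per the ITX.MC example
    }
```

A PHP version would do the same mapping after pulling the JSON blob out of the profile page with simplehtmldom, branching on the "quoteType" value for the EQUITY vs. ETF cases.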
Hourly - Intermediate ($$) - Est. Time: More than 6 months, 10-30 hrs/week - Posted
We require a virtual assistant to help us with a variety of duties. Duties may include: data entry, email marketing, bookkeeping, internet research, and blog posting. The ability to think independently and make good recommendations will earn you an ongoing permanent role.

Required skills: good with learning and configuring systems; good English, with the ability to write fluently; good data entry skills (Excel), accurate and fast; internet research skills. Knowledge of financial markets is a desirable attribute. Above all, the applicant must be honest: no misrepresentation of skills or work history will be tolerated, and you must do the work yourself; subcontracting the work will not be tolerated.
Skills: Data scraping, Data Science, Virtual Assistant, Web scraping