
Data Science Jobs

68 jobs were found based on your criteria.

Hourly - Intermediate ($$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
This job focuses on advancing the experience that thousands of users get navigating, browsing, searching, and comparing the content offered through our proprietary technology platform. The end result (the output of the ontology model) will be a set of intuitive, comprehensive multi-level navigation structures (hierarchical taxonomies, facets) for browsing, searching, and tagging the content offered to our clients. The task is envisioned to be achieved primarily using Semantic Web concepts and data (LOD and other available SKOS) per Semantic Web standards. It will most likely require knowledge of several RDF-based schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SIOC, Schema.org) and use of the W3C Semantic Web technology stack (SPARQL, Protege, semantic reasoners).
Key tasks:
- Definition of an RDF Schema and ontologies based on several existing RDF schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SIOC, Schema.org, etc.)
- Linking available LOD and SKOS data sets; building several core multi-level hierarchical taxonomies (on the order of tens of thousands of elements) comprehensively describing the content in our system
- Rule-based processing and linking of multiple existing and newly obtained data sets using semantic reasoners
- Definition, structuring, and optimization of hierarchical data sets; definition and maintenance of hierarchical relationships among particular terms (facets)
- Research (independent as well as guided by the management team) into publicly available SKOS and LOD sets related to the platform's content, drawn from public sources (international standards, patent databases, public and government databases, various organizational and XML datasets, etc.) as well as acquired proprietary sources
- Retrieval and ETL of additional data sets from multiple sources
- Tagging, classification, and entity extraction
- Working with the management team to maintain and advance particular segments of the defined taxonomies
Optional stretch tasks (depending on the candidate's qualifications):
- Automatic analysis of content and extraction of semantic relationships
- Auto-tagging and auto-indexing
- Integration and usage of selected IBM Watson services for content analysis
- Integration with enterprise taxonomy management platforms (Mondeca, Smartlogic, PoolParty, or others)
This job will initially require a commitment of 15-20 hours per week over a 3-6 month engagement. Interaction with a responsible manager will be required at least twice a week over Skype and Google Hangouts. Longer-term cooperation is possible based on the results of the initial engagement.
Required experience:
- Detailed knowledge of Semantic Web concepts and techniques
- Intimate familiarity with the W3C Semantic Web technology stack (RDF, SPARQL, etc.)
- Hands-on experience with LOD (DBpedia and others) and various SKOS
- Experience modeling data based on various RDF schemas (Resume RDF, HRM Ontology, HR-XML, FOAF, SIOC, ISO 25964, etc.)
- Knowledge of common open-source ontology environments and tools (MediaWiki, Protege, etc.) or other enterprise-grade ontology tools (Synaptica, Data Harmony, PoolParty, Mondeca, TopBraid, etc.)
- Experience working with semantic reasoners
- Prior experience with content management and maintenance of taxonomies for consumer or e-commerce applications
Additional preferred experience:
- Background in Library and Information Science (MLIS), Knowledge Management, Information Management, Linguistics, or Cognitive Science
- Familiarity with common classification systems
- Experience working with catalog and classification systems and creating thesauri
- Auto-tagging, auto-classification, and entity extraction
Skills: Data Science Web Crawling Data Analytics Data Entry
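As a rough illustration of the SKOS-style hierarchies the posting above describes, here is a minimal sketch in plain Python: a few hypothetical concepts linked by skos:broader relations, and a helper that walks a concept up to its top-level facet (the concepts and labels are placeholders, not the client's data).

```python
# Hypothetical SKOS-style taxonomy fragment: skos:broader relations,
# mapping each child concept to its parent concept.
BROADER = {
    "python": "programming-languages",
    "golang": "programming-languages",
    "programming-languages": "software-development",
    "software-development": "information-technology",
}

def ancestor_path(concept):
    """Walk broader relations up to the top concept, like a breadcrumb facet."""
    path = [concept]
    while path[-1] in BROADER:
        path.append(BROADER[path[-1]])
    return path

print(ancestor_path("python"))
```

In a real engagement these relations would live in an RDF store and be traversed with a SPARQL property path rather than a Python dict, but the hierarchical structure being built is the same.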
Fixed-Price - Entry Level ($) - Est. Budget: $50 - Posted
Build in GoLang a server application that proxies to an HTTPS website, confirming that the certificate is valid, then extracts a web form from that website and displays it to a client within a web page template. (You should provide a temporary template.) The form uses Ajax to update its fields with data while the user is filling it in; this asynchronous functionality shall remain and bypass the proxy, communicating directly with the source HTTPS website. The server shall manage multiple sessions when connecting to the source site. The source site provides a captcha for each session; you shall program the server to allow multiple sessions, each with its own captcha. Upon the user submitting the form, the server shall submit its contents to the HTTPS site and wait for a response. On success, details from the returned data shall be recorded to a JSON file on the server. On success or failure, the feedback from the form shall be displayed to the user. Comment the code.
Skills: Data Science Golang Multithreaded Programming
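The trickiest requirement in the posting above is the session bookkeeping: each client session must carry its own upstream captcha, tracked independently. Here is a minimal sketch of that bookkeeping (in Python rather than the requested Go, and with a stand-in equality check where the real server would validate against the upstream site):

```python
import uuid

class SessionStore:
    """Tracks one upstream session, with its own captcha, per client."""

    def __init__(self):
        self._sessions = {}

    def create(self, captcha_token):
        # Each new client session gets a unique id and its own captcha.
        sid = uuid.uuid4().hex
        self._sessions[sid] = {"captcha": captcha_token, "submitted": False}
        return sid

    def submit(self, sid, answer):
        # Stand-in for forwarding the form to the source HTTPS site;
        # a real implementation would also write success details to JSON.
        sess = self._sessions[sid]
        ok = (answer == sess["captcha"])
        if ok:
            sess["submitted"] = True
        return ok
```

In Go the equivalent would be a map guarded by a mutex (or a sync.Map), keyed the same way, with the TLS certificate check handled by the standard http client.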
Hourly - Intermediate ($$) - Est. Time: Less than 1 week, Less than 10 hrs/week - Posted
I am looking for someone who can be available to work with me this Saturday, and if need be Sunday as well, during CST (USA Central Time). I have about 3,000,000 records that I must analyse, and I do not have tools to analyse them. I need to work with someone who can use data tools to produce tables (producing graphs would be a big plus, but it is not a must). One of the basic analyses is: how many records contain the word "regulation"? I will think of additional questions based on the results we get from each query.
Skills:
- Must be fast with queries
- Able to give results in minutes
- Must be very good with large data
- Help me install tools if possible
- Able to manipulate data (scrape, parse, group, count, and more)
This will be two days of intense work over the weekend. If we work well together it could turn into more work. (Experience with MS SQL Server and Azure would be a big plus, but it is not essential.) Many thanks, Fatima
Skills: Data Science Data Analytics Data Encoding Data mining
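The first query the client mentions, counting records that contain the word "regulation", is a one-line SQL statement. A sketch using SQLite (Python's stdlib) with made-up sample rows; essentially the same SQL runs on MS SQL Server:

```python
import sqlite3

# In-memory database with a few hypothetical records standing in
# for the client's 3,000,000-row data set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO records (body) VALUES (?)",
    [("new regulation issued",), ("quarterly report",), ("regulation update",)],
)

# Count records whose text contains the word "regulation".
(count,) = conn.execute(
    "SELECT COUNT(*) FROM records WHERE body LIKE '%regulation%'"
).fetchone()
print(count)
```

At 3M rows, a full-text index (or at least loading into a real server rather than scanning flat files) is what makes "results in minutes" realistic.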
Hourly - Expert ($$$) - Est. Time: 1 to 3 months, Less than 10 hrs/week - Posted
Hello everyone, I'm looking to hire a Data Analyst who has experience with eCommerce data and insight generation. The ideal profile is someone who has worked with platforms like CLAVIS INSIGHTS or similar (http://clavisinsight.com/) or, alternatively, who has worked on an online retailer's data. This person will have to define how we can get the best information from CLAVIS and other data sources for a large client that needs to structure standard reports and scorecards. If you have experience in eCommerce data analysis, or even better with CLAVIS, write to me. Luigi
Skills: Data Science Data Analytics Tableau Software
Hourly - Expert ($$$) - Est. Time: Less than 1 week, 30+ hrs/week - Posted
*Background:* Perhaps I already have what I need, but I simply do not know how to make sense of trained model results in R so that I can make useful predictions on new data outside of R. I don't have a statistics background, but I do have a programming background, so I think I just need someone who knows enough statistics, and enough about R, to show me how to make sense of the training results in my R script.
*Request:* I need the attached R script modified to accomplish the following:
*(1)* Extract the result/formula (the formula values) of the classification models for each group/class (-2, 0, 2) after my attached R script completes stepwise LDA, QDA, and logistic regression. I want the formula values, not the formula code, so I can make predictions on new data outside of R.
*(2)* Extract the formula for the decision bounds that separate the groups/classes (-2, 0, 2) after my attached R script completes stepwise LDA, QDA, and logistic regression. I want the formula decision bounds, not the decision-bounds code, so I can make predictions on new data outside of R.
Specifically, I want my R script (see the attached .zip in the job posting) modified to give me the data necessary to do something similar to what is seen in the video in [Column D], [Cells F21:G25], and [Column E], but after I run my R script for LDA, QDA, and logistic regression.
- See the video from *2:18 to 4:05* for [Column D]
- See the video from *4:05 to 7:31* for [Cells F21:G25]
- See the video from *7:32 to 8:28* for [Column E]
https://www.youtube.com/watch?v=NaZ6Xuczs94
This screenshot might be a good example of the values I need extracted from LDA, QDA, and logistic regression in R: http://www.screencast.com/t/hmUHHvrRF7cV
Skills: Data Science Data mining Machine learning Python
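What "formula values, not formula code" means in the posting above: once the fitted intercept and coefficients are extracted from R (e.g. from coef() on a logistic regression fit), scoring new data outside of R is plain arithmetic. A sketch with hypothetical coefficient values:

```python
import math

# Hypothetical values as they might be read off R's coef() output;
# these are illustrative numbers, not from any actual fitted model.
intercept = -1.5
coefs = [0.8, -0.3]

def predict_proba(x):
    """Logistic regression scoring: p = 1 / (1 + exp(-(b0 + b.x)))."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))
```

The same idea applies to LDA and QDA: the extracted class means, covariances, and priors plug into the discriminant functions, and the decision bound is where two classes' scores are equal (p = 0.5 in the two-class logistic case).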
Hourly - Entry Level ($) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
We are looking for an AdWords account assistant, preferably a math or statistics student, with the following skills:
1. Highly trustworthy: they will be able to change running campaigns.
2. Available during the night, and especially on weekends (Saturday-Sunday).
3. Smart and quick to learn. This needs to be demonstrated, and I'll test it in the interview.
4. A general idea of what web campaigns are, or what AdWords is: a bonus.
5. Experience in data analysis: a big bonus.
6. Should NOT have selling skills or a marketing-like approach. This is a drone role, not an AdWords expert.
7. Good core math skills (if asked to calculate a weighted average, they should understand).
8. Knowledge of statistics: a big bonus.
Skills: Data Science Google AdWords Mathematics Statistics
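The "core math" bar in point 7 above, a weighted average, looks like this in a campaign context (hypothetical numbers: average cost per click weighted by each campaign's click count):

```python
def weighted_average(values, weights):
    """Sum of value*weight divided by total weight."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Two hypothetical campaigns: $1.00 CPC with 3000 clicks, $2.00 CPC with 1000.
avg_cpc = weighted_average([1.00, 2.00], [3000, 1000])
print(avg_cpc)  # blended CPC weighted toward the higher-volume campaign
```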
Hourly - Expert ($$$) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
We are looking for a Data Scientist exceptionally strong in Python who can do the following:
1. Data preparation
  a. Pre-processing
  b. Cleansing
  c. Enrichment
  d. De-duplication
2. Data exploration
  a. Statistical inference
  b. Univariate/bivariate/multivariate analysis
  c. Feature engineering
  d. Outlier analysis
  e. Dimensionality reduction
3. Data analysis
  a. Machine learning
  b. Analytics models
We shall provide the datasets, the problem, and the solution steps (you can certainly improve our solution). You just need to implement it in Python and provide the output. Apply if, and only if, you have worked exhaustively on points 1 to 3 (and their sub-points) above; otherwise please ignore.
  • Number of freelancers needed: 2
Skills: Data Science Data Analytics Data mining Machine learning
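Two of the listed steps above, de-duplication (1d) and outlier analysis (2d), sketched on a toy numeric column with only the standard library; in practice this work would use pandas/NumPy, which the posting implies but does not name:

```python
import statistics

def deduplicate(rows):
    """Drop exact duplicate rows, keeping first occurrence and order."""
    seen, out = set(), []
    for r in rows:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

def outliers(xs, z=2.0):
    """Flag values more than z population standard deviations from the mean."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [x for x in xs if abs(x - mu) > z * sd]

print(deduplicate([1, 2, 2, 3, 1]))
print(outliers([10, 11, 12, 10, 11, 100]))
```

A z-score cutoff is only one of many outlier criteria (IQR fences and model-based methods are common alternatives); the choice would depend on the provided solution steps.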
Hourly - Entry Level ($) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
We have several applications for data science and machine learning in our project. We're looking for someone who is excellent at algorithm selection and preferably willing to roll up their sleeves and write some Python or Go to prove their solution. We have a test you must pass to join the team: you must be able to select, and explain your selection of, an algorithm for our test. There is lots of exciting work in this project.
Skills: Data Science Algorithms Data Analytics Data mining