Python Numpy Jobs

17 were found based on your criteria

Fixed-Price - Intermediate ($$) - Est. Budget: $350 - Posted
See the 2 attached files. The Word file contains information about what the tool is supposed to do. The PDF contains documentation on how it was done. The current tool is available here: !9kN11bjJ!xqP-DwnSDQTc6nit7voSVfWXysGyCH5D8NmgrwwTakc It works! It does what it is supposed to do, but some improvements are necessary. 1. It is very slow. I know there is a lot of data to be processed, but maybe it would be possible to divide the work across a multicore CPU or a GPU, or to use faster algorithms. 2. The tool currently requires translating the text into another language in order to make a cross-lingual comparison. Translation is VERY slow, and there are tools that can compare texts semantically across different languages without translation, or using some other comparison method. I need to be able to run everything under Ubuntu or Windows; Ubuntu is preferable.
Skills: Python, NumPy, C#, C++, Web Crawling
Fixed-Price - Expert ($$$) - Est. Budget: $150 - Posted
I require someone with PySpark knowledge to produce a movie recommendation script in Python 3+. The script should be able to run locally on a Mac; I have PySpark installed and functional. Attached are the specifications for the project and an FAQ. You are only required to implement Workload 2 (a simple neighborhood-based collaborative filtering algorithm for personalized recommendation). The key requirement is that the script be completed by Wednesday the 25th, 2016, by 9 pm Sydney, Australia time, so this project will be awarded very quickly to the right candidate. Please state your experience with PySpark and Python. Data for the project is available for download from the following location
Skills: Python, NumPy, Apache Spark
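Spark wiring aside, the core of a neighborhood-based collaborative filter (what Workload 2 describes) is compact; a sketch of user-based cosine-similarity prediction in plain NumPy, on a made-up rating matrix:

```python
import numpy as np

# Hypothetical user x item rating matrix (0 = unrated).
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [0.0, 1.0, 5.0, 4.0],
])

# User-user cosine similarity; zero the diagonal so a user
# is never their own neighbour.
norms = np.linalg.norm(R, axis=1, keepdims=True)
S = (R / norms) @ (R / norms).T
np.fill_diagonal(S, 0.0)

# Predict user 0's score for item 2 as a similarity-weighted
# average over the neighbours who actually rated that item.
user, item = 0, 2
rated = R[:, item] > 0
pred = S[user, rated] @ R[rated, item] / S[user, rated].sum()
```

In the actual deliverable the similarity and prediction steps would be expressed as PySpark transformations over the provided data rather than dense NumPy arrays.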
Fixed-Price - Intermediate ($$) - Est. Budget: $300 - Posted
This is a relatively straightforward task for someone who is familiar with pandas.DataFrame, and it will be even easier if you have experience resampling financial data. The task is to take continuous security prices (open, high, low, close, volume, etc.) in a DataFrame with a time index and aggregate them: 1 min / 1 hour / 1 day aggregation based on continuous activity (24x7), and again based on US trading hours (09:30-16:00, etc.). Example code showing how to implement each stage on the provided example data is sufficient.
Skills: Python, NumPy
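Both aggregation variants the post asks for map onto pandas' own `resample` and `between_time`; a sketch on randomly generated minute bars standing in for the real data:

```python
import numpy as np
import pandas as pd

# Hypothetical minute bars with a DatetimeIndex.
idx = pd.date_range("2016-05-02 09:00", periods=600, freq="1min")
rng = np.random.default_rng(0)
close = 100 + rng.standard_normal(len(idx)).cumsum()
bars = pd.DataFrame({
    "open": close, "high": close + 0.5,
    "low": close - 0.5, "close": close,
    "volume": rng.integers(100, 1000, len(idx)),
}, index=idx)

# Standard OHLCV aggregation rules for each resampled bucket.
agg = {"open": "first", "high": "max", "low": "min",
       "close": "last", "volume": "sum"}

# 24x7 aggregation: every bar contributes.
hourly = bars.resample("1h").agg(agg)

# US-session aggregation: restrict to 09:30-16:00 first,
# then resample the filtered frame.
session = bars.between_time("09:30", "16:00")
daily = session.resample("1D").agg(agg).dropna()
```

The same `agg` dict reused across frequencies is what keeps the three required granularities (1 min / 1 hour / 1 day) to a one-line change each.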
Hourly - Expert ($$$) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
I have a data frame taken from an SQL database which needs to be transformed into a wide dataframe using Python 2.7. The dataframe has many string values which need to be converted into columns of 1s and 0s. I would like this job completed by Monday 9 am GMT, as I then have to build out the model and am short on time.
Skills: Python, NumPy, SciPy
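One common way to do this string-to-indicator widening is pandas' `get_dummies`; a minimal sketch on a made-up frame (the column names are hypothetical, and the same calls exist in the pandas releases that still supported Python 2.7):

```python
import pandas as pd

# Hypothetical long-format frame pulled from SQL:
# one row per (user, string value) observation.
df = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "tag": ["red", "blue", "red", "green"],
})

# One indicator column per string value, then collapse to a
# single wide row per user with 1/0 flags.
wide = (pd.get_dummies(df["tag"])
          .groupby(df["user_id"]).max()
          .astype(int))
```

For a frame with several string columns, `pd.get_dummies(df, columns=[...])` expands them all in one call.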
Hourly - Intermediate ($$) - Est. Time: Less than 1 month, Less than 10 hrs/week - Posted
Hi, I am looking for someone who has the patience to teach me Scrapy. With Scrapy I want to crawl Amazon and some other online shops to get product information and prices, and save them in MongoDB for later use. I hope you can help me. Thank you. Best wishes, Nguyen
Skills: Python, NumPy, Data Analytics, SciPy
Fixed-Price - Expert ($$$) - Est. Budget: $150 - Posted
I have a script which does some backtesting over in-sample and out-of-sample data. It's currently slow and I need to tweak it to speed it up. More details to be shared later. Apply if you have some experience with backtesting trading strategies, Cython, Numba, etc. Thanks
Skills: Python, NumPy, Quantitative Analysis
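Before reaching for Cython or Numba, it is often worth checking whether the backtest's hot loop vectorizes with NumPy alone; a toy illustration on made-up prices:

```python
import numpy as np

prices = np.array([100.0, 101.0, 99.5, 102.0, 103.5])

# Loop version: slow in pure Python once the array is large.
returns_loop = [prices[i] / prices[i - 1] - 1
                for i in range(1, len(prices))]

# Vectorized version: identical result, with the loop
# executed in C by NumPy's array operations.
returns_vec = prices[1:] / prices[:-1] - 1
```

Loops that genuinely cannot be vectorized (e.g. path-dependent position logic) are the ones that benefit from a Numba `@njit` or Cython treatment.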
Hourly - Entry Level ($) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
Hi, I have an educational data set of how students performed on various learning tests. One set of students was in a condition with one version of a learning game (physics); the other set had a different version. Three schools took part in this experiment. I have attached the data set here. Students were measured on:
- Pre/post test scores / gain scores
- Engagement survey results
- Number of trials within the game itself
- Actions used within the game itself
- Trial times on incorrect trials within the game
- Trial times on a mini-game within the main game (differed across the two conditions)
- Spatial ability
- Attentional ability
- A few other specific metrics
I would like the following completed in scikit-learn using Python:
I. Exploratory statistics (scatterplots, histograms, etc.)
II. Training and testing of the dataset with GridSearchCV. Classifiers: Logistic Regression, Multinomial Naive Bayes, Decision Tree, Random Forest, K-Nearest Neighbors, Support Vector Classifier. Model evaluation metrics: accuracy, precision, recall, F1-score, mean squared error.
III. Clustering (k-means, KNN) of students' performance on the final test based on:
- Pretest scores (high vs. low)
- Spatial ability (high vs. low)
- Attentional ability (high vs. low)
- Perhaps game performance metrics (actions used, trials, time spent)
This should not be more than a day's worth of work, possibly less. I realize some of these analyses may not make sense; we can discuss together to refine the strategy. I would like the output and code for all these analyses. Thanks!
Skills: Python, NumPy, Machine Learning, SciPy
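The GridSearchCV workflow item II describes, sketched on synthetic data with a single classifier (scikit-learn assumed available; the real job would loop this over all six listed classifiers and report the full metric set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the student data: 4 features,
# binary outcome driven by the first two features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hold out a test split, then grid-search the regularization
# strength with 5-fold cross-validation on the training split.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_tr, y_tr)

# GridSearchCV refits the best model; evaluate it held-out.
acc = accuracy_score(y_te, grid.predict(X_te))
```

Swapping in the other classifiers is a matter of changing the estimator and its `param_grid`; the surrounding split/fit/score scaffolding stays the same.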
Fixed-Price - Intermediate ($$) - Est. Budget: $5 - Posted
Greetings, I need help creating an autorun file as described below. An image of the root folder of a pen drive is attached. When the pen drive is inserted into the computer, the autorun file should do the following: 1. Check for the .NET Framework and, if it is not installed, run DotNtFx40_Full_x86_x64.exe and then run OpenEmrStart.exe; or 2. If the Framework is already installed, just run OpenEmrStart.exe. Thank you,
Skills: Python, NumPy, C#, MySQL Programming
Hourly - Intermediate ($$) - Est. Time: Less than 1 month, Less than 10 hrs/week - Posted
We are working on an online fashion application that provides a personalized shopping experience to users based on their physical appearance. We are looking for someone in the CV/IP field who might be interested in writing the initial algorithms for this project. The algorithm we need creates 3D base templates/models for women's clothes. For example, we may need different base 3D models for different types of skirts. Similarly, we need templates/models for dresses, jeans, shoes and handbags. These models should support sizing into different shapes/sizes, e.g. XS, S, M, L and XL. We will provide input pictures of the clothes (possibly multiple images of the same item from different angles; no complex patterns). The algorithm should scan the input pictures, identify the type of clothing, fit the base model using the corresponding clothes template and create texture maps. We will also need to deform the base template to fit the scanned image, with the capability to produce the clothes in different sizes, e.g. XS, S, L, etc. Preferred languages are Python/C++/OpenCV/NumPy. Please let us know if this is something you would be interested in, and we can plan on scheduling some time or chatting over the phone/Skype for more questions. We would also need a step-by-step approach and initial estimates in terms of time/price.
Skills: Python, NumPy, 3D Design, 3D Modeling, 3D Rendering