Python Numpy Jobs

16 were found based on your criteria

Fixed-Price - Intermediate ($$) - Est. Budget: $250 - Posted
Here is a nice tool that I used: https://github.com/dansoutner/LSTMLM, but it is missing some features:

1. It has an --ngram parameter that changes the results of this program in accordance with some other ARPA LM. I need a parameter to reverse this, so that the ARPA file is modified (rescored) in accordance with this program's results.

2. I use the --ngram parameter like this to do the evaluation:

   python lstmlm.py --initmodel modele/ClaLM.LSTMLM.med.4.lstm --ppl modele/ClaTest.pl --ngram modele/ClaLM.lm.1 0.2

   All is good, but if I want to save the combined models with the command:

   python lstmlm.py --initmodel modele/ClaLM.LSTMLM.med.4.lstm --ppl modele/ClaTest.pl --ngram modele/ClaLM.lm.1 0.2 --save-net modele/combined.lstm

   the program does not save anything.

3. (NOT OBLIGATORY) If it runs on a GPU and I supply a large model that exceeds GPU memory, the program crashes with:

   Traceback (most recent call last):
     File "lstmlm.py", line 937, in <module>
       lstmlm = LSTMLM(args)
     File "lstmlm.py", line 221, in __init__
       self.model.to_gpu()
     File "/lib/python2.7/site-packages/chainer/link.py", line 479, in to_gpu
       d[name].to_gpu()
     File "/lib/python2.7/site-packages/chainer/link.py", line 479, in to_gpu
       d[name].to_gpu()
     File "/lib/python2.7/site-packages/chainer/link.py", line 226, in to_gpu
       d[name].to_gpu()
     File "/lib/python2.7/site-packages/chainer/variable.py", line 210, in to_gpu
       self._grad = cuda.to_gpu(self._grad)
     File "/lib/python2.7/site-packages/chainer/cuda.py", line 217, in to_gpu
       return cupy.asarray(array)
     File "/lib/python2.7/site-packages/cupy/creation/from_data.py", line 47, in asarray
       return cupy.array(a, dtype=dtype, copy=False)
     File "/lib/python2.7/site-packages/cupy/creation/from_data.py", line 27, in array
       return core.array(obj, dtype, copy, ndmin)
     File "cupy/core/core.pyx", line 1400, in cupy.core.core.array (cupy/core/core.cpp:49505)
     File "cupy/core/core.pyx", line 1419, in cupy.core.core.array (cupy/core/core.cpp:49263)
     File "cupy/core/core.pyx", line 87, in cupy.core.core.ndarray.__init__ (cupy/core/core.cpp:5019)
     File "cupy/cuda/memory.pyx", line 275, in cupy.cuda.memory.alloc (cupy/cuda/memory.cpp:5517)
     File "cupy/cuda/memory.pyx", line 414, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:8078)
     File "cupy/cuda/memory.pyx", line 430, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:8004)
     File "cupy/cuda/memory.pyx", line 337, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:6972)
     File "cupy/cuda/memory.pyx", line 357, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:6799)
     File "cupy/cuda/memory.pyx", line 255, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5459)
     File "cupy/cuda/memory.pyx", line 256, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5380)
     File "cupy/cuda/memory.pyx", line 31, in cupy.cuda.memory.Memory.__init__ (cupy/cuda/memory.cpp:1542)
     File "cupy/cuda/runtime.pyx", line 181, in cupy.cuda.runtime.malloc (cupy/cuda/runtime.cpp:3065)
     File "cupy/cuda/runtime.pyx", line 111, in cupy.cuda.runtime.check_status (cupy/cuda/runtime.cpp:1980)
   cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory

   My question here is whether this can be easily solved.

If you cannot solve the last item, please place your bid with a comment that it is without task 3.
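For the optional task 3, one possible direction is sketched below: catch the CuPy allocation error shown in the traceback and fall back to the CPU. This is only an illustration, not code from LSTMLM; `model` stands for the chainer.Chain that lstmlm.py builds in its __init__.

    import cupy

    def move_to_gpu_if_it_fits(model):
        # Try to place the model on the GPU; if the allocation fails with the
        # cudaErrorMemoryAllocation error from the traceback, keep it on the CPU.
        try:
            model.to_gpu()
            return True
        except cupy.cuda.runtime.CUDARuntimeError:
            model.to_cpu()
            return False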
Skills: Python, Numpy, Artificial Neural Networks, CUDA, Deep Neural Networks
Fixed-Price - Expert ($$$) - Est. Budget: $8,500 - Posted
Advanced Python application developer for a mobile web application. Must be knowledgeable of Flask, Heroku, GitHub, Matplotlib, the Spyder IDE, Python (all versions) and Bootstrap. **MUST HAVE verifiable experience.** The nature of this project is very advanced and will include 3D vector modeling and some statistics. We are looking for the right candidate. This project will start in late September and will conclude in late January.
Skills: Python, Numpy, Bootstrap, CSS, CSS3
Fixed-Price - Intermediate ($$) - Est. Budget: $170 - Posted
Hi there, I need a modest hybrid (item-based) recommendation engine linking Users to Items in a database; the Items are mostly crawled, with a small portion (<5%) crowdsourced.

Marketplace [this is the main focus of the project]
1) User profiles are entered via i) registration, i.e. a preference/survey (item ranking), logged in via Facebook/Google accounts; ii) online behaviour [this is not the main focus of the project].
2) Items are classified with many attributes and are assumed to be cleaned and structured. Some have rankings by Users, but most are expected to lack them. Items are time-sensitive (they perish over time) but can still be used as a recommendation reference point.

-> Match Users with Items, i.e. predict a User's ranking on Items.
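Purely as an illustration of the item-based approach (not the requested system), the sketch below builds a tiny user-item rating matrix with NumPy, computes cosine similarity between item columns, and predicts each user's score for unrated items as a similarity-weighted average; zeros stand for "not ranked".

    import numpy as np

    # rows = users, columns = items; 0 means the user has not ranked the item
    ratings = np.array([
        [5., 3., 0., 1.],
        [4., 0., 0., 1.],
        [1., 1., 0., 5.],
        [0., 0., 5., 4.],
    ])

    # cosine similarity between item columns
    norms = np.linalg.norm(ratings, axis=0)
    norms[norms == 0] = 1.0
    sim = ratings.T.dot(ratings) / np.outer(norms, norms)

    # predicted score (u, j) = similarity-weighted average of user u's existing ratings
    rated = (ratings > 0).astype(float)
    denom = rated.dot(np.abs(sim))
    denom[denom == 0] = 1.0
    predicted = ratings.dot(sim) / denom

    print(np.round(predicted, 2))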
Skills: Python, Numpy, Django, Ecommerce Platform Development, Pandas
Fixed-Price - Intermediate ($$) - Est. Budget: $30 - Posted
Hi, I have an existing script written in Python. It logs into a website, scrapes data into a CSV file, and downloads images and videos to local folders with the help of numpy and opencv. The script works fine on Ubuntu and Windows machines. The problem is that it does not create any video files on CentOS. I would like an experienced Python developer to look into this, advise on the possible reason, and make some quick edits. Details: Python 2.7, requests, lxml, numpy, opencv 3.1.0. Please take a look at the attached script. Thank you.
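Without seeing the attached script this is only a guess, but a common reason for OpenCV writing no video file on CentOS is that the requested codec is not available in that build, in which case the VideoWriter is created but never opens. The sketch below (a hypothetical helper, not from the script) makes that failure visible and tries a few fallback FourCC codes.

    import cv2

    def open_video_writer(path, fps, frame_size):
        # Try several codecs; if none is compiled into this OpenCV build,
        # writer.isOpened() stays False and no file is ever written.
        for codec in ("XVID", "MJPG", "mp4v"):
            writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*codec), fps, frame_size)
            if writer.isOpened():
                print("using codec", codec)
                return writer
            writer.release()
        raise RuntimeError("no usable codec; OpenCV may lack FFmpeg/GStreamer support")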
Skills: Python, Numpy, OpenCV
Hourly - Entry Level ($) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
1. Pull relevant stock trading data from the internet.
2. Create 2-4 models (mostly using simple cookie-cutter code) to test the performance of a trading algorithm (regression, clustering, SVM, etc.) on back-testing data.
3. Generate a plot of portfolio/back-test performance, plus 4 other interesting or relevant plots (put them in an IPython notebook, with the plots already outputted for me).
4. The entire code base should be less than 10,000 characters (so not a very long script), in an IPython notebook I can run easily.

*Somewhat similar to the code here (these 6 parts explain how to do a similar analysis and include all the code): http://francescopochetti.com/part-vii-backtest-portfolio-performance/ Or this is a simpler version you can draw on if needed: https://www.quantstart.com/articles/Backtesting-a-Forecasting-Strategy-for-the-SP500-in-Python-with-pandas

***I also need good comments for all major parts of the code, so I can understand it.
*Ideally you are modifying a project you have already completed, or something on the web, not doing this from scratch.
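As a very rough sketch of steps 2-4 (not a deliverable), the snippet below assumes daily close prices are already saved in a hypothetical prices.csv with Date and Close columns, fits a logistic regression on lagged returns to predict next-day direction, and compares the resulting long/flat strategy with buy-and-hold; plotting the two cumulative series would give the back-test chart.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("prices.csv", parse_dates=["Date"], index_col="Date")
    df["ret"] = df["Close"].pct_change()
    for lag in (1, 2, 3):                      # lagged returns as simple features
        df["lag%d" % lag] = df["ret"].shift(lag)
    df = df.dropna()

    X = df[["lag1", "lag2", "lag3"]]
    y = (df["ret"] > 0).astype(int)            # 1 if the day closed up
    split = int(len(df) * 0.7)                 # train on the first 70%, test on the rest

    model = LogisticRegression().fit(X[:split], y[:split])
    signal = model.predict(X[split:])          # 1 = long, 0 = stay in cash

    strategy = (df["ret"][split:] * signal + 1).cumprod()
    buy_hold = (df["ret"][split:] + 1).cumprod()
    print(strategy.iloc[-1], buy_hold.iloc[-1])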
Skills: Python, Numpy, Corporate Finance, Machine learning, Stock Management
Hourly - Intermediate ($$) - Est. Time: Less than 1 week, Less than 10 hrs/week - Posted
Looking for an experienced Python programmer for occasional pair programming sessions and consulting chats. You will need excellent knowledge of Python 2.7 and NumPy. Additional experience with image processing and machine learning is very welcome. This might eventually grow into serious projects.
Skills: Python, Numpy, SciPy
Fixed-Price - Intermediate ($$) - Est. Budget: $50 - Posted
I am looking for someone proficient in Python to write me a small script that takes various array values and prints them as space-delimited columns in a text file (.txt). I'm working with the `astropy.io` `fits` module to extract various regions within an astronomical image. I then want to extract a given specified region and print each pixel in the array as a continuous column (and regions from other images as further columns) to a text file, with space as the delimiter. I'm fairly confident manipulating these images, except for what I am about to ask: printing various numpy arrays as space-delimited columns.

For example, I have an array, let's call it "`image_array`", and I have selected, as an example, a 5x5 array using:

   hdulist_SDSS_u = fits.open('SDSS_u.fits')
   sub_array_u = hdulist_SDSS_u[0].data[0,615:620,420:425]

Giving:

   sub_array_u = [[ 0.21881846 0.25050985 0.30488651 0.15721292 0.36129788]
    [ 0.20289249 0.20764523 0.29301882 0.20700296 0.25146781]
    [ 0.30137189 0.29080765 0.15349937 0.2055568  0.25210646]
    [ 0.19860348 0.29827366 0.34293352 0.1721678  0.12383141]
    [ 0.13158184 0.26189449 0.33571601 0.11402556 0.22013794]]

Now I'd like to transform this and print it as a column. I am also opening another image file and taking the corresponding pixels, to give me that image's values for the same pixels, i.e.:

   hdulist_SDSS_z = fits.open('SDSS_z.fits')
   sub_array_z = hdulist_SDSS_z[0].data[0,615:620,420:425]

I'd like to do this for a number of images I am comparing. However, I'm sure that once I have the basic method for printing these arrays as space-delimited columns in a txt file, I can apply it wholesale. I'd also like to print, alongside these columns, the corresponding x and y pixels, i.e. I'm hoping my txt file will contain something like this:

   pxlname     x    y    u           z
   pix615_420  615  420  0.21881846  0.31271553
   pix616_420  616  420  0.44872895  0.41526772
   ....
   pix620_425  620  620  0.34837483  0.38282376

Applications welcome ASAP! I will attach an example of the printed format I would like, so you can see what you would need to automate.
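The column-printing step itself is small; the sketch below shows one way it might look, reusing the filenames and slice from the description and writing to a hypothetical columns.txt. np.column_stack gathers the flattened sub-arrays, and the pixel names follow the pix<x>_<y> pattern shown above.

    import numpy as np
    from astropy.io import fits

    # the same 5x5 region from each image, flattened to one column each
    sub_u = fits.open('SDSS_u.fits')[0].data[0, 615:620, 420:425]
    sub_z = fits.open('SDSS_z.fits')[0].data[0, 615:620, 420:425]

    # pixel coordinates of the region, in the same order as the flattened data
    xs, ys = np.mgrid[615:620, 420:425]

    table = np.column_stack([xs.ravel(), ys.ravel(), sub_u.ravel(), sub_z.ravel()])
    with open('columns.txt', 'w') as out:
        out.write('pxlname x y u z\n')
        for x, y, u, z in table:
            out.write('pix%d_%d %d %d %.8f %.8f\n' % (x, y, x, y, u, z))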
Skills: Python, Numpy
Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, 10-30 hrs/week - Posted
This project will be made up of multiple sub-projects. The first is a set of command line scripts (display market data, get portfolio, etc.). More detail will be given after a candidate has been selected for the next phase. If the candidate is successful in this project, a series of other related projects will be assigned directly.
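Since the posting gives few specifics, the skeleton below only illustrates one way the first sub-project's command line scripts could be organised, using argparse subcommands; the command names and print statements are placeholders.

    import argparse

    def show_market_data(args):
        print("market data for", args.symbol)   # placeholder for the real data source

    def show_portfolio(args):
        print("portfolio summary")              # placeholder

    parser = argparse.ArgumentParser(description="market data / portfolio command line tools")
    subcommands = parser.add_subparsers(dest="command")

    md = subcommands.add_parser("market-data", help="display market data for a symbol")
    md.add_argument("symbol")
    md.set_defaults(func=show_market_data)

    pf = subcommands.add_parser("portfolio", help="get portfolio")
    pf.set_defaults(func=show_portfolio)

    args = parser.parse_args()
    if hasattr(args, "func"):
        args.func(args)
    else:
        parser.print_help()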
Skills: Python, Numpy, Bash shell scripting, Pandas
Fixed-Price - Expert ($$$) - Est. Budget: $400 - Posted
I am unable to use this code: https://github.com/ryankiros/skip-thoughts It is an open source machine learning model. I want to train my own decoder: "Decoding: Generating the sentence that the conditioned vector had encoded" https://github.com/ryankiros/skip-thoughts/tree/master/decoding I believe the instructions in the README are bad and the code might have some bugs; you can see in other people's forks that there are lots of bug fixes. I want to be able to train my own skip-thought vectors with the sci-fi genre from the BookCorpus dataset: http://www.cs.toronto.edu/~mbweb/ In their example model they also train on the BookCorpus dataset, but only on the romance genre. Here is a ticket with the exact same issue I am having: https://github.com/ryankiros/skip-thoughts/issues/25

To consider this done, we need 2 things:
1. code that allows me to train on any arbitrary dataset
2. a trained sci-fi BookCorpus model
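The decoder-training entry points live under decoding/ in that repository and are exactly what needs fixing here, so the sketch below only covers the data side they feed on: loading the sci-fi sentences (assumed to be one per line in a hypothetical scifi_sentences.txt) and encoding them with the functions the repository's top-level README documents.

    import skipthoughts

    # one sentence per line from the sci-fi portion of BookCorpus (hypothetical file)
    with open("scifi_sentences.txt") as f:
        X = [line.strip() for line in f if line.strip()]

    model = skipthoughts.load_model()        # pre-trained encoder, as in the README
    vectors = skipthoughts.encode(model, X)  # one skip-thought vector per sentence
    print(vectors.shape)                     # (len(X), 4800) for the combined model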
Skills: Python, Numpy, CUDA, Machine learning