Natural Language Processing Jobs

33 were found based on your criteria

Fixed-Price - Intermediate ($$) - Est. Budget: $250 - Posted
Here is a nice tool that I used, https://github.com/dansoutner/LSTMLM, but it is missing some features:
1. It has an --ngram parameter that changes the results of this program in accordance with some other ARPA LM. I need a parameter that reverses this, so that the ARPA file is modified (rescored) in accordance with this program's results.
2. I use the --ngram parameter like this to do the evaluation:
   python lstmlm.py --initmodel modele/ClaLM.LSTMLM.med.4.lstm --ppl modele/ClaTest.pl --ngram modele/ClaLM.lm.1 0.2
   All is good, but if I want to save the combined models with the command:
   python lstmlm.py --initmodel modele/ClaLM.LSTMLM.med.4.lstm --ppl modele/ClaTest.pl --ngram modele/ClaLM.lm.1 0.2 --save-net modele/combined.lstm
   the program does not save anything.
3. (NOT OBLIGATORY) If it runs on the GPU and I supply a large model that exceeds GPU memory, the program crashes with:
   Traceback (most recent call last):
     File "lstmlm.py", line 937, in <module>
       lstmlm = LSTMLM(args)
     File "lstmlm.py", line 221, in __init__
       self.model.to_gpu()
     File "/lib/python2.7/site-packages/chainer/link.py", line 479, in to_gpu
       d[name].to_gpu()
     File "/lib/python2.7/site-packages/chainer/link.py", line 479, in to_gpu
       d[name].to_gpu()
     File "/lib/python2.7/site-packages/chainer/link.py", line 226, in to_gpu
       d[name].to_gpu()
     File "/lib/python2.7/site-packages/chainer/variable.py", line 210, in to_gpu
       self._grad = cuda.to_gpu(self._grad)
     File "/lib/python2.7/site-packages/chainer/cuda.py", line 217, in to_gpu
       return cupy.asarray(array)
     File "/lib/python2.7/site-packages/cupy/creation/from_data.py", line 47, in asarray
       return cupy.array(a, dtype=dtype, copy=False)
     File "/lib/python2.7/site-packages/cupy/creation/from_data.py", line 27, in array
       return core.array(obj, dtype, copy, ndmin)
     File "cupy/core/core.pyx", line 1400, in cupy.core.core.array (cupy/core/core.cpp:49505)
     File "cupy/core/core.pyx", line 1419, in cupy.core.core.array (cupy/core/core.cpp:49263)
     File "cupy/core/core.pyx", line 87, in cupy.core.core.ndarray.__init__ (cupy/core/core.cpp:5019)
     File "cupy/cuda/memory.pyx", line 275, in cupy.cuda.memory.alloc (cupy/cuda/memory.cpp:5517)
     File "cupy/cuda/memory.pyx", line 414, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:8078)
     File "cupy/cuda/memory.pyx", line 430, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:8004)
     File "cupy/cuda/memory.pyx", line 337, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:6972)
     File "cupy/cuda/memory.pyx", line 357, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:6799)
     File "cupy/cuda/memory.pyx", line 255, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5459)
     File "cupy/cuda/memory.pyx", line 256, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5380)
     File "cupy/cuda/memory.pyx", line 31, in cupy.cuda.memory.Memory.__init__ (cupy/cuda/memory.cpp:1542)
     File "cupy/cuda/runtime.pyx", line 181, in cupy.cuda.runtime.malloc (cupy/cuda/runtime.cpp:3065)
     File "cupy/cuda/runtime.pyx", line 111, in cupy.cuda.runtime.check_status (cupy/cuda/runtime.cpp:1980)
   cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory
My question is whether this can be solved easily (one possible fallback is sketched after this listing). If you cannot solve this last item, please place your bid with a comment that it excludes task 3.
Skills: Natural language processing, Artificial Neural Networks, CUDA, Deep Neural Networks
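Not part of the posting itself, but a minimal sketch of one way task 3 could be handled, assuming the Chainer model object that lstmlm.py constructs; the helper name to_gpu_with_fallback and the fall-back-to-CPU behaviour are assumptions, not existing features of the tool.

```python
# Hypothetical fallback for task 3: try to move the LSTM LM to the GPU and,
# if CUDA reports the out-of-memory error shown in the traceback above,
# keep running on the CPU instead of crashing.
import cupy


def to_gpu_with_fallback(model, device_id=0):
    try:
        model.to_gpu(device_id)      # the call that raises in the posting's traceback
        return True                  # the model now lives on the GPU
    except cupy.cuda.runtime.CUDARuntimeError as err:
        if "cudaErrorMemoryAllocation" not in str(err):
            raise                    # only swallow out-of-memory failures
        print("GPU out of memory (%s); falling back to the CPU" % err)
        model.to_cpu()               # move any already-transferred parameters back
        return False
```

Whether a silent CPU fallback is acceptable, rather than aborting with a clearer message, is something the freelancer and client would need to agree on.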
Hourly - Expert ($$$) - Est. Time: 1 to 3 months, 30+ hrs/week - Posted
We're a fast-growing startup based in NYC & San Diego! www.therestaurantzone.com (launching on August 10th). We're a resume search engine for the restaurant/hospitality industry; we help staff some of the largest restaurants and hotels across the US. We are seeking an experienced Python developer to support our new software and add new functionality.

The stack that we're using: Flask (Python), Elasticsearch, MongoDB, PostgreSQL, SQLAlchemy, nginx, uWSGI, memcached. Ideally you would have experience or familiarity with this stack. Experience with search engines is a requirement for this job. If you've done parsing using NLP, regex, entity recognition, etc., that would be a huge plus! Knowledge of JavaScript is critical. AngularJS and HTML/CSS are a plus since they are used as well. Bonus points if you're a full-stack developer, but we really need a rockstar back-end developer!

Your job is to make sure things are running smoothly as we acquire customers - e.g. monitoring/fixing bugs, developing new features, ensuring there are no crashes, optimizing speed, etc. Later we'll be adding a lot more complex projects into the mix to grow our technology! The first project you would take on is to look at the code, study it, and learn what's going on. Then you'll give us a "diagnosis" and evaluate its functionality. Additionally, we'd like to know if it will be ready to scale to thousands of paying clients using the tool simultaneously. The second project will be developing a small feature set. If your performance meets our standards we'll discuss keeping you on a long-term basis, and with the way we're growing, who knows what can happen!

Other must-have non-technical requirements:
  • People skills
  • Open to feedback; doesn't take constructive criticism personally
  • VERY detail oriented
  • Fast worker with high-quality output
  • Clear communicator

To apply, answer the questions. Give us a cover letter and, if you have a LinkedIn/resume, go ahead and include that in the application. Thanks!
Skills: Natural language processing, AngularJS, Big Data, Elasticsearch
Fixed-Price - Intermediate ($$) - Est. Budget: $165 - Posted
The goal of this project is to create a document matching service, including an API endpoint. The primary language for this project is Python, and type hints should be used wherever possible. The API endpoint should be implemented with Flask and use the JSON format (a rough sketch follows this listing). High performance is of great importance, since this service will be used in a mobile app and users expect a fast response from the server. We expect the number of documents to be somewhere between 50,000 and 250,000. Since the input to the API is the output of an OCR module, the edit distance between the input and the "true" document information needs to be taken into account. A Git repository will be created, and code should be pushed to that repository on a regular basis. The entire module needs to be provided in a way that can be deployed to Amazon Elastic Beanstalk using the standard single command ("eb deploy").
Skills: Natural language processing, Git, Python
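A minimal sketch of the kind of endpoint described above, not project code: Flask, JSON in, best edit-distance match out, with type hints as the posting requires. The route name /match, the payload key "ocr_text", and the in-memory DOCUMENTS list are assumptions, and the brute-force scan is only illustrative; at 50,000-250,000 documents a real implementation would need indexing or candidate pruning to stay fast.

```python
from typing import Dict, List, Tuple

from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in corpus; the real service would load the 50,000-250,000 documents.
DOCUMENTS: List[Dict[str, str]] = [
    {"id": "doc-1", "text": "Example document one"},
    {"id": "doc-2", "text": "Example document two"},
]


def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming, O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


@app.route("/match", methods=["POST"])
def match() -> Tuple[object, int]:
    """Return the document whose text is closest to the OCR output."""
    ocr_text: str = request.get_json(force=True).get("ocr_text", "")
    scored = [(edit_distance(ocr_text, d["text"]), d["id"]) for d in DOCUMENTS]
    distance, doc_id = min(scored)
    return jsonify({"document_id": doc_id, "edit_distance": distance}), 200
```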
Fixed-Price - Expert ($$$) - Est. Budget: $250 - Posted
We are looking for a seasoned professional in machine learning, with experience modeling the Spanish language (if you have no previous experience with Spanish-language bots, experience creating conversational bots in English is fine) and with Facebook bots. The main idea is to answer, in an automated way, the easiest (and most common) customer-service questions in order to speed up support; we have a large corpus of sample questions answered by human operators over the range of a full year. We are open to an existing software platform (the selected platform/tools/libraries must be free/open source) or to a solution built from scratch; if licences or web services are needed, you must document their cost and relevance to the project. All interactions with the client must be done through Facebook Messenger (this part is not the most important; the most important part of this project is solving the questions - one possible approach is sketched after this listing). If the pricing is too low, we can talk to make sure both of us get the maximum benefit (for me, a productive freelancer and quality work; for you, a proper monetary reward).
Skills: Natural language processing, Data Science, Machine learning
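Purely as an illustration of one free/open-source direction (scikit-learn TF-IDF retrieval over the operator-answered corpus), not the approach the client has chosen: map an incoming question to the most similar historical question and reuse its stored answer. The two Spanish question/answer pairs and the 0.3 threshold are made up for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for the year-long corpus of operator-answered questions.
corpus = [
    ("¿Cuál es el horario de atención?", "Atendemos de lunes a viernes, de 9 a 18 h."),
    ("¿Cómo cambio mi contraseña?", "Puede cambiarla desde Ajustes > Seguridad."),
]
questions = [q for q, _ in corpus]

vectorizer = TfidfVectorizer(lowercase=True)
question_matrix = vectorizer.fit_transform(questions)


def answer(incoming: str, threshold: float = 0.3) -> str:
    """Return the stored answer of the most similar known question,
    or hand off to a human when nothing is similar enough."""
    sims = cosine_similarity(vectorizer.transform([incoming]), question_matrix)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return "Un operador humano responderá en breve."  # escalate to a person
    return corpus[best][1]


print(answer("hola, ¿a qué hora abren?"))
```

A production bot would add the Facebook Messenger webhook on top and, ideally, a proper intent classifier trained on the full corpus.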
Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, Less than 10 hrs/week - Posted
We are looking for an NLP & ML expert to help us build our product.
Requirements:
  1. Masters/PhD in Machine Learning/NLP or a related field
  2. Algorithm development experience
  3. Over 3 years of hands-on development experience in Python / Java
  4. Deep understanding of NLP problems and experience going from raw data to choosing the right statistical model and benchmarking performance and accuracy
  5. Strong knowledge of named-entity recognition, word-sense disambiguation, parsing, syntax trees, and dependency graphs
  6. Experience working with ML toolkits such as scikit-learn, Apache Spark MLlib, H2O, Aerosolve or Apache Mahout
  7. Experience working with NLP toolkits such as spaCy, NLTK, Gensim, OpenNLP, or Stanford CoreNLP (a small spaCy illustration follows this listing)
Nice to have: familiarity with deep learning algorithms and frameworks like TensorFlow, Theano, Keras or Torch
Skills: Natural language processing, Machine learning
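A tiny, optional illustration of requirements 5 and 7 using spaCy, one of the toolkits named above; it assumes the small English model has been installed with `python -m spacy download en_core_web_sm`, and the sentence is arbitrary.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is hiring NLP engineers in New York.")

for ent in doc.ents:                       # named-entity recognition
    print(ent.text, ent.label_)

for token in doc:                          # dependency-graph edges
    print(token.text, "--" + token.dep_ + "-->", token.head.text)
```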
Fixed-Price - Entry Level ($) - Est. Budget: $350 - Posted
I am looking for a person who will guide me in how to train, from plain text, a so-called semantic language model: http://cmusphinx.sourceforge.net/wiki/semanticlanguagemodel. I am familiar with tools like SRILM, IRSTLM and KenLM, but until now I have trained only normal models. I need guidance on how to train a semantic LM from normal textual data such as http://opus.lingfil.uu.se/OpenSubtitles2016.php (the ordinary pre-processing and ARPA training step is sketched after this listing). Data pre-processing should also be included in the guide. The resulting model should be in ARPA format if possible.
Skills: Natural language processing, Artificial Intelligence, Artificial Neural Networks, Linux System Administration
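As an illustration only: a minimal pre-processing plus ARPA-training sketch using KenLM's lmplz, i.e. the "normal" LM workflow the posting says is already familiar; the semantic-LM-specific step from the CMU Sphinx page is exactly what the requested guide would add on top. The file names and the 3-gram order are assumptions, and the normalization is deliberately crude (it drops accented characters, so it would need adjusting for the target language).

```python
import re
import subprocess


def normalize(line: str) -> str:
    """Lowercase and keep only ASCII letters, digits and apostrophes."""
    line = line.lower()
    line = re.sub(r"[^a-z0-9' ]+", " ", line)
    return " ".join(line.split())


# Clean the raw subtitle dump into one normalized sentence per line.
with open("opensubtitles.raw.txt", encoding="utf-8") as src, \
     open("corpus.txt", "w", encoding="utf-8") as dst:
    for line in src:
        cleaned = normalize(line)
        if cleaned:
            dst.write(cleaned + "\n")

# Train a 3-gram model in ARPA format with KenLM's lmplz (reads stdin, writes ARPA to stdout).
with open("corpus.txt") as inp, open("model.arpa", "w") as out:
    subprocess.run(["lmplz", "-o", "3"], stdin=inp, stdout=out, check=True)
```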
Fixed-Price - Intermediate ($$) - Est. Budget: $300 - Posted
Hi, I'm looking for a program that automatically writes simple descriptive, qualitative summaries of quantitative data. That means I want the automatic production of an actual paragraph of written summary that describes the statistical data. The data are simple trend data on society - raw numbers and percentages for various indicators (e.g., indicators for people in poverty, average income, college degree attainment, etc.), across many years, for each county in California. The data are in over 100 CSV (Excel) files. They are not all in the exact same format, but they typically have the same basic layout: the name of the file represents the indicator, and each file has counties in the rows, years in the columns, and rates in the cells. I want the program to be reusable, so new summaries can be produced automatically when new data are added.

The descriptive summary would describe the indicator, the location the indicator is measuring (e.g., which state and county), the existing rates, past rates, and changes in the rates over time. For example, the output the program would produce would look like this: "This indicator is the percentage of people with a college degree. In Fresno County, California, the rate for 2013 is 22 percent. That is a 1% increase from the last measurement in 2012, a 5% increase from 5 years ago in 2008 when the rate was 17 percent, and an 8% increase from 10 years ago in 2003 when the rate was 14 percent."

The paragraph in quotes above is an example of the actual output that the program needs to produce (a rough sketch of such a generator follows this listing). Obviously the percentages, years, indicator, and county would vary, and the program you build would need to produce the same basic output structure for data that vary by percentage, year, indicator, and county. All output would look the same as above, with changes in the wording (depending on the specific quantitative data) for the county, indicator, years, and percentages. The budget is negotiable, but thank you for keeping costs low, as this is for an educational nonprofit.
Skills: Natural language processing, Python, R, VBA
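A rough sketch (not a deliverable) of the reusable generator described above, assuming one CSV per indicator with counties in the rows and years in the columns, as the posting says is the typical layout; the file name, indicator wording and rounding are illustrative.

```python
import pandas as pd


def summarize(csv_path: str, indicator: str, county: str, state: str, year: int) -> str:
    """Build one descriptive paragraph for a county/indicator/year from a rates table."""
    table = pd.read_csv(csv_path, index_col=0)     # rows: counties, columns: years
    rates = table.loc[county]
    rate = rates[str(year)]
    parts = [
        f"This indicator is {indicator}. In {county}, {state}, "
        f"the rate for {year} is {rate:.0f} percent."
    ]
    # Compare with 1, 5 and 10 years back, as in the example paragraph.
    for years_back in (1, 5, 10):
        past_year = year - years_back
        if str(past_year) in rates.index:
            past = rates[str(past_year)]
            direction = "increase" if rate >= past else "decrease"
            parts.append(
                f"That is a {abs(rate - past):.0f}% {direction} from {past_year}, "
                f"when the rate was {past:.0f} percent."
            )
    return " ".join(parts)


print(summarize("college_degree.csv", "the percentage of people with a college degree",
                "Fresno County", "California", 2013))
```

Because each file name encodes the indicator, a small driver loop over the folder of CSVs would regenerate every summary whenever new data arrive.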