Hadoop Jobs

49 jobs were found based on your criteria

Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, Less than 10 hrs/week - Posted
Installing and maintaining Hadoop clusters; data extraction, transformation, and loading into the Hadoop layer; Teradata database experience.
Skills: Hadoop, Teradata
Hourly - Expert ($$$) - Est. Time: 1 to 3 months, Less than 10 hrs/week - Posted
Seeking an advanced analytics architect to help propose an end-to-end, cloud-based PaaS solution (rough wireframe), blending the best of Big Data, Machine Learning, and Predictive and Prescriptive analytics for my startup in development.
Skills: Hadoop, Apache Spark, Big Data, C
Fixed-Price - Expert ($$$) - Est. Budget: $550 - Posted
We have an urgent requirement for a Hadoop admin and DevOps developer for a remote project where you will need to spend 7-8 hours daily.
Skills: Hadoop, DevOps
Hourly - Intermediate ($$) - Est. Time: Less than 1 week, Less than 10 hrs/week - Posted
Looking for a Cloudera Architect. We are setting up an on-premises cluster at our facility in Veracruz, Mexico. Experience with previous cluster deployments is required. On-site setup is desired.
Skills: Hadoop, Cloudera, Cluster Computing
Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, 30+ hrs/week - Posted
Our client is a major financial research firm that already has a Hadoop cluster running. We are looking for experts with 4-5 years of Big Data analytics experience who can help us retrieve useful data using text analysis and understand patterns to predict the next leap. The data is in the form of documents (millions of records) which do not follow any standard template.
Skills: Hadoop, Data Analytics, HBase, MapReduce
Hourly - Intermediate ($$) - Est. Time: Less than 1 week, Less than 10 hrs/week - Posted
We are looking for a short-term to long-term HBASE specialist. We have a Hadoop cluster, on top of which we use HBASE. The cluster is about 200TB in size and will grow fast. We use HBASE, Hadoop, Ubuntu, Python, Thrift & RabbitMQ. We are struggling to achieve insert rates to HBASE - we seem to be limited to around 2000 inserts per second, and we need to be way above this. ... Expected max=0, tasksInProgress=14 2016-08-12 13:30:43,001 INFO [hconnection-0x59379549-shared--pool2-t485] client.AsyncProcess: #45903, table=TheField, attempt=14/35 failed=4ops, last exception: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region AND 2016-08-12 16:42:51,422 INFO [main] util.HBaseFsck: Loading regionsinfo from the hbase:meta table ERROR: Empty REGIONINFO_QUALIFIER found in hbase:meta ERROR: Empty REGIONINFO_QUALIFIER found in hbase:meta. We are seeking someone with experience of HBASE and associated technologies.
Skills: Hadoop, Apache Thrift, HBase, Python
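
For context on the insert-rate problem described in the last listing: a common first step is to batch writes through HBase's BufferedMutator instead of issuing one RPC per Put, and to salt the row keys so sequential writes are not funneled into a single hot region. Below is a minimal Java sketch of that idea using the native HBase client (the listing mentions Python and Thrift, but the batching principle is the same). The table name TheField comes from the log excerpt above; the column family, qualifier, and salting scheme are placeholder assumptions, not details from the posting.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.BufferedMutator;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BatchedInserts {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 // BufferedMutator buffers Puts client-side and flushes them in batches,
                 // which usually gives far higher throughput than one synchronous Put per row.
                 BufferedMutator mutator = conn.getBufferedMutator(TableName.valueOf("TheField"))) {
                for (int i = 0; i < 100_000; i++) {
                    String key = Integer.toString(i);
                    // Prefix (salt) the row key so keys spread across regions
                    // instead of hammering a single region server.
                    String salted = Math.floorMod(key.hashCode(), 16) + "_" + key;
                    Put put = new Put(Bytes.toBytes(salted));
                    // "cf" and "value" are hypothetical column family / qualifier names.
                    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"),
                            Bytes.toBytes("payload-" + i));
                    mutator.mutate(put);
                }
                mutator.flush(); // push any remaining buffered mutations
            }
        }
    }

The NotServingRegionException and "Empty REGIONINFO_QUALIFIER" messages in the pasted log also suggest checking region health (for example with hbase hbck) before tuning client-side batching, since a damaged hbase:meta table can cap throughput regardless of how writes are issued.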