Hadoop Jobs

45 jobs were found based on your criteria

Fixed-Price - Intermediate ($$) - Est. Budget: $2,000 - Posted
Qualifications:
  • 9+ years of experience developing large-scale software/web applications
  • Experience in Big Data and cloud computing leveraging the Hadoop ecosystem (Hadoop/HDFS and related technologies)
  • Proven prior experience building large, scalable distributed systems; at least 2 Big Data implementations
  • Experience in Java and expertise across Hadoop components; hands-on experience with major components such as MapReduce, HBase, Hive, and Pig
  • Strong understanding of underlying Hadoop concepts and distributed computing
  • Solid understanding of the current state of big-data technology, including hands-on experience designing and building big-data infrastructures
Skills: Hadoop Agile software development Big Data C++
Fixed-Price - Expert ($$$) - Est. Budget: $160 - Posted
We have tie-ups with many engineering colleges in Pune and plan to conduct training courses on the respective college premises as per their requirements. For that we require freelance technical trainers for Java, .NET, Android, and Hadoop. Roles and Responsibilities: 1. Should have hands-on experience in application development and be passionate about sharing knowledge. 2.
Skills: Hadoop .NET Remoting Android Java
Hourly - Intermediate ($$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
Looking for an experienced Hadoop systems architect/administrator (Cloudera only). I have two positions (a suitable candidate can assume both roles): 1) Systems architect who can advise on areas to improve with respect to automation, deployment, performance tuning, capacity management, etc., and document the steps with diagrams (please apply only if you have experience with large deployments); this would be an hourly job. 2) Hadoop administrator with experience troubleshooting various ecosystem tools and the JVM, setting up monitoring such as Ganglia and Nagios, and automation (shell and Python). Here is the detailed job description: 1) Big data system architecture/administration (Cloudera Hadoop, Elasticsearch, MongoDB) 2) Cloudera administration and the Cloudera Manager API; preferably Cloudera certified 3) In-depth knowledge of security practice on Cloudera (Kerberos, KMS, Cloudera Navigator, Sentry) 4) Expert in troubleshooting (ecosystem tools, JVM, Hive/Impala query tuning) 5) Solid scripting in Python and shell (proof needed: GitHub) 6) Experience with monitoring setup (Ganglia, Nagios) complementing the existing Cloudera Manager. Solid Linux admin skills.
Skills: Hadoop Ansible Bash shell scripting Cloudera
Hourly - Intermediate ($$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
----- No agencies please, as we will be dealing only with quality freelancers ----- Preference will be given to applicants from the Indian subcontinent because of location constraints; however, if you are from Pakistan, please don't apply, as due to legal issues we won't be able to hire you even if you clear the interviews, so save your job credits ----- Hi, we are looking for 2 resources who are experts in Big Data, with good experience in Hadoop, Kafka, Hive, and Storm; experience with MapR would give you a real edge for selection to join our team.
Skills: Hadoop Apache Hive Apache Kafka Big Data
Fixed-Price - Entry Level ($) - Est. Budget: $80 - Posted
Density-based methods are a remarkable class of algorithms for clustering data streams: they can discover arbitrary-shape clusters and detect noise, and they do not need the number of clusters in advance. We not only summarize the main density-based clustering algorithms on data streams and discuss their uniqueness and limitations, but also explain how they address the challenges in clustering data streams. I need a base paper and a problem in the same field that can help extend the research with a novel problem.
Skills: Hadoop Big Data
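The density-based idea in the listing above — arbitrary-shape clusters, noise detection, no preset cluster count — can be illustrated with a minimal, pure-Python DBSCAN sketch. This is an illustration only, not part of the posting; `eps` and `min_pts` are the standard DBSCAN parameters, and a real data-stream variant (e.g. micro-cluster maintenance) would build on this batch form.

```python
# Minimal DBSCAN sketch: density-based clustering that finds
# arbitrary-shape clusters and labels sparse points as noise,
# without needing the number of clusters in advance.
import math

def dbscan(points, eps, min_pts):
    """Return one label per point: 0, 1, ... for clusters, -1 for noise."""
    n = len(points)
    labels = [None] * n  # None = unvisited
    def neighbors(i):
        return [j for j in range(n)
                if math.dist(points[i], points[j]) <= eps]
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # noise (may later be re-claimed as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point of this cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:     # core point: keep expanding
                queue.extend(more)
    return labels

# Two dense groups plus one far-away outlier:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=2))  # → [0, 0, 0, 1, 1, 1, -1]
```

Note how the outlier at (50, 50) comes back as -1 (noise) and the two groups are discovered without specifying "2 clusters" anywhere, which is exactly the property the paragraph above describes.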
Hourly - Intermediate ($$) - Est. Time: Less than 1 month, Less than 10 hrs/week - Posted
We are embarking on a project to improve storage benchmarks. The first step is better understanding how applications access storage. We need to analyze data we receive in the form of trace files (CSVs with one record per storage I/O). A single trace can easily contain a billion or more I/Os. Specifications for the parser, step one, are attached. If the parser goes well, we should have more work.
Skills: Hadoop Data Analytics Data Visualization
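A parser over traces of that size has to stream rather than load: a billion-row CSV will not fit in memory. As a sketch only — the column names `op` and `size` are assumptions here, since the real schema is defined by the attached specification — per-operation counts and byte totals can be aggregated in a single pass:

```python
# Streaming aggregation over a storage-I/O trace CSV.
# NOTE: the columns "op" and "size" are assumed for illustration;
# the actual schema comes from the attached parser spec.
import csv
from collections import Counter

def summarize_trace(path):
    """One pass over the trace: count I/Os and sum bytes per
    operation type without holding the file in memory."""
    ops = Counter()      # e.g. {"read": n_reads, "write": n_writes}
    nbytes = Counter()   # total bytes moved per operation type
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ops[row["op"]] += 1
            nbytes[row["op"]] += int(row["size"])
    return ops, nbytes
```

Because `csv.DictReader` yields one row at a time, memory use stays constant regardless of trace length; for billion-row traces the same loop can be sharded across files and the `Counter`s merged, which is where a Hadoop-style split would come in.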
Fixed-Price - Expert ($$$) - Est. Budget: $1,000 - Posted
Hello, we have a website running with 1 billion records and 200+ fields in a table. We are looking to migrate to MongoDB + Hadoop (any other solution is also accepted). We want to make sure our query results come back in less than 2 seconds with minimum resource usage. We are looking for an experienced expert who has worked on such a task before, or someone who is capable of doing so. Thanks.
Skills: Hadoop MongoDB
Hourly - Expert ($$$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
• In-depth development skills and expert-level understanding of the Hadoop ecosystem. • MapReduce, Hive, Pig, Sqoop, Oozie: 5+ years preferred. • Strong experience working with different file formats (e.g. ...). • Experience with back-end data processing and databases, both relational and NoSQL (e.g. HBase, Cassandra). • Hadoop Developer Certification (Cloudera preferred).
Skills: Hadoop Apache Hive Apache Cassandra Cloudera
Fixed-Price - Expert ($$$) - Est. Budget: $1,500 - Posted
Looking to have a data-scraping tool developed. This tool is used to extract data from various sites that share a similar data format and structure. The tool needs to handle large-scale data collection and extraction. Speed and accuracy of extraction are also important, as the data will be available for real-time indexing and search. The tool should have an admin dashboard to add/edit/delete sites to spider/crawl as well as to view the data collected.
Skills: Hadoop Apache Nutch Apache Solr Apache Spark