Apache Hive Jobs

11 jobs were found based on your criteria

Hourly - Intermediate ($$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
----- No agencies please, as we would be dealing only with quality freelancers. Preference will be given to applicants from the Indian subcontinent because of location constraints. If you are from Pakistan, please don't apply: due to legal issues we won't be able to hire you even if you clear the interviews, so save your job credits. -----

Hi, we are looking for 2 resources who are experts in Big Data, with good years of experience in Hadoop, Kafka, Hive, and Storm; experience with MapR would give you a real edge in selection for our team.

Your responsibilities would be as follows: our HR team will line up candidates in the above technologies for one of our US offices, and your job would be to interview them and filter out the right candidates with good skills. The work would be assignment-based, with your services needed sporadically (alternate days) during the week as interviews are scheduled. If you are interested and perform well, we are also open to using your services in our tech team in the same field.

To start, you would have to commit to at least three months with us if you qualify after the interview, and based on your performance, be prepared to stay with us for at least 1 year. Initially you would work for our offices in India (from any location in India) with regular client interaction; later on you might have to do a small or large onsite stint at our US office (San Jose) if and when the need arises.

Don't quote exorbitant prices, as we need someone for the long term, so apply with that in mind. Start your application with 589986. You would be signing an NDA for the job. Best of luck!
Skills: Apache Hive Apache Kafka Big Data Hadoop
Fixed-Price - Intermediate ($$) - Est. Budget: $25 - Posted
Hi, I run a training institution focused on Analytics & Data Science, and I am planning to start a few more courses on Hadoop & Big Data Analytics. I need someone who can design a course curriculum for Hadoop Developer & Big Data Analytics for me. Please note: we are not awarding Graduation/Masters degrees, so the syllabus should not be too lengthy. Please bring your experience and advice in order to make it the best it can be for students.
Skills: Apache Hive Apache Spark Data Analytics Data mining
Hourly - Expert ($$$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
• Excellent design, organizational, communication, and collaboration skills.
• In-depth development skills and expert-level understanding of the Hadoop ecosystem.
• MapReduce, Hive, Pig, Sqoop, Oozie - 5+ years preferred.
• Strong experience working with different file formats (e.g. Parquet) and data sources, moving data into and out of HDFS.
• Strong skills in development methodologies (Agile, Continuous Delivery) and programming languages (Java, Shell Scripting, Python).
• Proven background in Distributed Computing, Data Warehousing, ETL development, and large-scale data processing.
• Solid foundation in data structures and algorithms.
• Experience with back-end data processing and databases, both relational and NoSQL (e.g. HBase, Cassandra).
• Hadoop Developer Certification (Cloudera preferred).
Skills: Apache Hive Apache Cassandra Cloudera Hadoop
Hourly - Intermediate ($$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
We are looking for a specialist to optimize our Hadoop cluster; alternatively, a new stack could be set up instead of optimizing the existing one. We use Hadoop and Hive, with Spark as a query accelerator. Our existing system is almost overloaded. In addition to optimizing the existing system, we want to add more servers. The partitioning of the existing servers is inefficient; they must be repartitioned and reinstalled during operation. In a later step, Spark should be updated to version 2.0. We are open to new strategies that help us improve performance.
Skills: Apache Hive Apache Spark Hadoop
Hourly - Intermediate ($$) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
We are looking for a Hadoop and Hive administrator to help us optimize our Hadoop cluster. Our existing system is often overloaded. In addition to optimizing the existing system, we want to add two more servers. The partitioning of the existing servers is inefficient; they must be repartitioned and reinstalled during operation. In a later step, Spark should be updated to version 2.0. We are open to new strategies that help us improve performance. This job takes place in close cooperation with the database administrator. We are using: Hadoop, Hive 2.0, and Spark 1.5.2 as query optimizer.
Skills: Apache Hive Apache Spark Hadoop
Hourly - Entry Level ($) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
Hello, I am an experienced Java developer in the finance industry. I have decided to switch to the Big Data domain (Hadoop, Spark, Hive, HBase, ...). I have already read Hadoop and Spark books, but I still don't have any hands-on development experience. I haven't developed any applications yet and am not sure where I should start. I am looking for someone to help me set up environments and mentor me through building some small apps; I still don't have the big picture. You could also walk me through some of the projects you have done before. Please apply if you have hands-on experience in the Hadoop and Big Data world.
Skills: Apache Hive Apache Spark Big Data Data Science
Hourly - Expert ($$$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
Thank you for your interest in our available position. We want to take this opportunity to congratulate you on making it past our initial screening process! As the next step, we would like to get a better idea of your technical and problem-solving skills by asking you to show some of your previous work. If we are impressed, the final step will be to schedule two interviews (one technical and one with senior management) to discuss the position in more detail and confirm that we're a good fit for each other. Please return the completed test by Thursday, August 4th. * Any bid without work samples/links will be disregarded. * You will have to sign an NDA as well as a contract for 1 year.
Skills: Apache Hive Apache Spark Hadoop Scala
Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, 10-30 hrs/week - Posted
Our company has several projects related to Big Data & Hadoop in the delivery pipeline. We are looking for solid resources with excellent skills in Big Data & Hadoop (Spark, Hive, Pig, Kafka, Storm, Elasticsearch, Solr, Python, Scala) for ongoing work with our clients in the US and worldwide. If you are interested, please send us links to your Big Data & Hadoop portfolio and tell us why we should consider you for this role. We are looking for a long-term relationship. (See the attached job description for the dev role.)
Skills: Apache Hive Apache Kafka Apache Solr Elasticsearch
Fixed-Price - Intermediate ($$) - Est. Budget: $300 - Posted
Responsible for installing Hadoop infrastructure:
• Set up Hadoop ecosystem projects such as Hive, Spark, HBase, Hue, Oozie, Flume, and Pig.
• Set up new Hadoop users with Kerberos/Sentry security.
• Define the process for creation and removal of nodes.
• Performance-tune Hadoop clusters and Hadoop MapReduce routines.
Skills: Apache Hive Apache Spark Cloudera Hadoop