Apache Hive Jobs

15 jobs were found based on your criteria

Hourly - Expert ($$$) - Est. Time: Less than 1 week, 30+ hrs/week - Posted
Hi, I'm looking for someone with experience writing complex Hive queries/scripts for large (multi-hundred-terabyte) datasets. You should also be experienced in Java for writing Hive UDFs and with Amazon Web Services (EC2); having previously worked with Qubole is a plus (but not required). To be considered, please share the most challenging large-scale issue you have ever faced with Hive and how you overcame it (be sure to include the node count and approximate disk space required). Thank you.
Skills: Apache Hive, Amazon Web Services, Hadoop
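The posting above asks for Java-authored Hive UDFs. As a rough illustration of what that involves (not this client's actual code; the class and function names here are invented, and newer Hive releases favor the GenericUDF API), a minimal lower-casing UDF looks like this:

    import org.apache.hadoop.hive.ql.exec.Description;
    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Minimal Hive UDF: lower-cases a string column.
    @Description(name = "to_lower", value = "_FUNC_(str) - returns str lower-cased")
    public final class ToLowerUDF extends UDF {
        public Text evaluate(final Text input) {
            if (input == null) {
                return null; // pass NULLs through, as Hive built-ins do
            }
            return new Text(input.toString().toLowerCase());
        }
    }

Once the compiled jar is on the cluster, it would be registered in HiveQL with ADD JAR followed by CREATE TEMPORARY FUNCTION to_lower AS 'ToLowerUDF' (package-qualified in practice).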
Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, 10-30 hrs/week - Posted
Our company has several projects related to Big Data & Hadoop in the delivery pipeline. We are looking for solid resources with excellent skills in Big Data & Hadoop (Spark, Hive, Pig, Kafka, Storm, Elasticsearch, Solr, Python, Scala) for ongoing work with our clients in the US and worldwide. If you are interested, please send us links to your Big Data & Hadoop portfolio and tell us why we should consider you for this role. Looking for a long-term relationship. (See the attached job description for the dev role.)
Skills: Apache Hive, Apache Kafka, Apache Solr, Elasticsearch
Hourly - Expert ($$$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
Senior-level SAS programmer – Customer Marketing, Loyalty, Data Management, Big Data/Hadoop experience
Job Responsibilities:
  • Transform product transaction data into clearly defined and executable offer marketing strategies for the merchant, including targeting of offers based on previous product/category purchases (targeting matrix)
  • Create vendor-specific and cross-sponsor marketing segmentations for offer targeting
  • Work closely with econometricians to develop custom relevancy scoring models for all merchant offers
  • Design and implement test-and-learn strategies
  • Produce product offer reports, including redemption rates and incremental sales vs. control groups
  • Identify and articulate customer patterns in a meaningful way to drive value to AMEX/Wellness+ members
  • Perform as a marketing analytics subject matter expert within AMEX and with the merchant
  • Create ad hoc analyses to support merchant marketing needs, including market analysis, share-of-wallet analysis, campaign analysis, test analysis, etc.
  • Maintain open lines of communication with leadership, partners, and stakeholders in order to continuously evolve and improve consumer insights and offer marketing strategies
Required Skills/Qualifications:
  • Analysts will be experienced analytics professionals with backgrounds/degrees in statistics/quantitative studies
  • Proficiency in Linux scripting required
  • Minimum 3 years of recent experience in an analytic role, preferably marketing analytics
  • Experience with Oracle & Big Data environments and knowledge of Teradata, Pig, and Hive strongly desired
  • Proficiency in SAS/SQL programming required
  • Big Data exposure is good, but hands-on experience would be a big plus
  • Candidate is willing to relocate to Phoenix, AZ
  • Preferably no H1s; if H1, the candidate will cover the cost of the H1 transfer unless AMEX/Plenti is willing to sponsor
  • Must be technical/analytical, a good problem solver, a good communicator, and a good listener
Skills: Apache Hive, Pig, SAS
Fixed-Price - Intermediate ($$) - Est. Budget: $300 - Posted
Responsible for installing Hadoop infrastructure: set up Hadoop ecosystem projects such as Hive, Spark, HBase, Hue, Oozie, Flume, and Pig; set up new Hadoop users with Kerberos/Sentry security; define a process for the creation and removal of nodes; and performance-tune Hadoop clusters and Hadoop MapReduce routines.
Skills: Apache Hive, Apache Spark, Cloudera, Hadoop
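The posting above covers securing Hadoop with Kerberos/Sentry. One way to verify such a setup from a client's perspective is a small Java JDBC smoke test like the sketch below; it assumes the Hive JDBC driver is on the classpath, a valid Kerberos ticket has been obtained with kinit, and the host, port, and realm shown are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Smoke test against a Kerberos-secured HiveServer2 endpoint.
    public final class KerberizedHiveCheck {
        public static void main(String[] args) throws Exception {
            // principal= names the HiveServer2 service principal
            String url = "jdbc:hive2://hs2.example.com:10000/default;"
                       + "principal=hive/_HOST@EXAMPLE.COM";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }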
Hourly - Intermediate ($$) - Est. Time: Less than 1 month, 30+ hrs/week - Posted
We are seeking Jaspersoft reporting specialists to build interactive dashboard reports. The reports to be designed are based on fairly common data structures found in the online space (Google Analytics, Woopra Analytics, and web server logs), as well as a number of internal lead/workflow databases that are used to manage our day-to-day activities. The data warehouse being used is AWS Redshift, and it is expected that the successful candidate will set up a local DB with these schemas so that they can be developed against. Reports would then be delivered to us, where we will re-target the data sources. The data stores themselves are not available; however, the data schemas will be, along with data samples. Report templates will be provided outlining the data and schemas required, but we are open to suggestions regarding visualisations and the final layout within dashboards.
Skills: Apache Hive, Big Data, Financial Reporting, JasperReports
Fixed-Price - Intermediate ($$) - Est. Budget: $1,000 - Posted
We are looking for a big data architect who can:
1. manage our Cloudera server on AWS
2. load data into the Hadoop cluster as needed
3. run Hive queries
4. do some ETL and data cleansing if required
5. produce reports
Any experience with a big data visualization tool is also very helpful. Please apply only if you can do very high-quality work and are dependable.
Skills: Apache Hive, Amazon Web Services, Big Data, Cloudera
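The last posting asks for someone to run Hive queries and do ETL/data cleansing. As a hedged sketch of that kind of step (the table and column names here are invented for illustration, and the connection URL is a placeholder), a cleansing pass driven from Java via JDBC might look like:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Copy rows from a raw table into a cleaned table, dropping malformed records.
    public final class HiveCleanseJob {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:hive2://localhost:10000/default";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                stmt.execute(
                    "INSERT OVERWRITE TABLE sales_clean "
                  + "SELECT order_id, TRIM(customer), CAST(amount AS DECIMAL(10,2)) "
                  + "FROM sales_raw "
                  + "WHERE order_id IS NOT NULL AND amount RLIKE '^[0-9.]+$'");
            }
        }
    }

The same INSERT OVERWRITE ... SELECT statement could equally be run directly in the Hive CLI or Beeline; the Java wrapper is just one way to schedule it.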