Hadoop Jobs

35 jobs were found based on your criteria

Hourly - Intermediate ($$) - Est. Time: Less than 1 month, Less than 10 hrs/week - Posted
We need someone to propose and implement a data analytics solution for a MySQL dataset. We need the aggregated values for the dataset belonging to one client, compared against the average of the other clients; computing this on the fly by hitting our MySQL cluster is too time-consuming. The reports should be generated daily, or in real time if possible. (A sketch of one possible approach follows the skills list below.)
Skills: Hadoop, Apache Spark, Big Data, R
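A minimal sketch of the kind of solution this posting asks for, assuming a hypothetical `orders` table with `client_id` and `amount` columns (the connection details, names, and report path are illustrative, not from the posting): pre-aggregate per client in Spark, then compare one client against the peer average instead of querying MySQL on the fly.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClientAggregates {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ClientAggregates").getOrCreate()

    // Hypothetical table and connection details; replace with the real schema.
    val orders = spark.read.format("jdbc")
      .option("url", "jdbc:mysql://mysql-host:3306/analytics")
      .option("dbtable", "orders")
      .option("user", "reporter")
      .option("password", sys.env("MYSQL_PASSWORD"))
      .load()

    // Aggregate once per client instead of per ad-hoc query against MySQL.
    val perClient = orders.groupBy("client_id")
      .agg(sum("amount").as("total"), avg("amount").as("avg_amount"))

    val targetClient = "client-42"  // hypothetical client id
    val target = perClient.filter(col("client_id") === targetClient)
    val peerAvg = perClient.filter(col("client_id") =!= targetClient)
      .agg(avg("total").as("peer_avg_total"))

    // Cross join is safe here: both sides are single-row summaries.
    target.crossJoin(peerAvg)
      .withColumn("vs_peers", col("total") / col("peer_avg_total"))
      .write.mode("overwrite").parquet("hdfs:///reports/client_vs_peers")

    spark.stop()
  }
}
```

Scheduled daily (e.g., via cron or Oozie), this produces the daily reports the posting mentions; the real-time variant would need a streaming pipeline instead.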
Hourly - Intermediate ($$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
We are looking for part-time help from a Big Data architect with experience in Cloudera, Hive, Tableau, and AWS. Currently we have Tableau Desktop running on AWS, connecting to Hive on a small Cloudera cluster. We are trying to run some aggregations on a 5 GB CSV file sitting on HDFS within Cloudera, but the reports are taking a long time. We want this person to performance-tune this issue for us ASAP. Qualifications:
  • 2+ years of experience with Hive, Cloudera, and Big Data architecture
  • Good experience connecting Tableau to data sources on AWS
  • Past experience performance-tuning issues such as the one explained above
(One common first step is sketched below.)
Skills: Hadoop, Amazon EC2, Amazon Web Services, Big Data
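Slow Hive aggregations over a raw CSV external table are often improved by converting the data to a columnar format that Hive (and therefore Tableau) can scan selectively. A sketch of that one step, with hypothetical HDFS paths; the actual fix depends on the cluster and the query profile.

```scala
import org.apache.spark.sql.SparkSession

object CsvToParquet {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("CsvToParquet").getOrCreate()

    // Hypothetical path to the 5 GB CSV described in the posting.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/events.csv")

    // Columnar storage lets Hive scan only the columns a report needs.
    raw.write.mode("overwrite").parquet("hdfs:///data/parquet/events")

    spark.stop()
  }
}
```

A Hive external table defined over the Parquet directory would then serve Tableau's queries; partitioning and table statistics are the usual next steps.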
Hourly - Intermediate ($$) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
I am looking for someone who has processed huge volumes of data using Apache Spark. The job requires both streaming and batch expertise: data ingestion, ETL, and some analytics. (A minimal streaming-ETL sketch follows below.)
Skills: Hadoop, Apache Spark
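As an illustration of the streaming-plus-batch shape this posting asks for, here is a minimal Spark Structured Streaming sketch: JSON events land in a directory, are aggregated per minute, and are written to Parquet for batch analytics. The schema, paths, and window size are all assumptions for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object StreamingEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("StreamingEtl").getOrCreate()

    // Hypothetical event schema and landing directory.
    val schema = StructType(Seq(
      StructField("user_id", StringType),
      StructField("event", StringType),
      StructField("ts", TimestampType)))

    // Streaming side: pick up files as they land, count events per minute.
    val events = spark.readStream.schema(schema).json("hdfs:///landing/events")
    val counts = events
      .withWatermark("ts", "10 minutes")  // bound state so append mode can emit
      .groupBy(window(col("ts"), "1 minute"), col("event"))
      .count()

    val query = counts.writeStream
      .outputMode("append")
      .format("parquet")
      .option("path", "hdfs:///warehouse/event_counts")
      .option("checkpointLocation", "hdfs:///checkpoints/event_counts")
      .start()

    query.awaitTermination()
  }
}
```

The Parquet output doubles as the batch side: any later Spark batch job or analytics query can read `hdfs:///warehouse/event_counts` directly.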
Hourly - Expert ($$$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
The candidate will have to know the following environment:
  • Hortonworks Data Platform (HDP™ 2.4), automated with Ambari 2.2, including Spark, HBase, YARN, and HDFS
  • Hortonworks Data Platform add-ons: Hortonworks Connector for Teradata v1.4.1, Hortonworks ODBC Driver for Apache Hive (v2.1.2), and Hortonworks Data Platform Search (Apache Solr 5.2.1)
  • Apache Tika
  • Apache Kafka
  • Django Framework
Project phases:
  • Stage 1 - Data integration with Kafka and Spark Streaming (a sketch of this stage follows the skills list below)
  • Stage 2 - Feature engineering and ML pipeline with Spark and Hadoop HDFS
  • Stage 3 - System DB with Apache HBase
Detailed technical documentation of the project requirements will be provided.
  • Number of freelancers needed: 3
Skills: Hadoop, Apache Kafka, Apache Spark, HBase
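A sketch of the Stage 1 ingestion path (Kafka into Spark), with hypothetical broker and topic names. One caveat: this uses Spark Structured Streaming for brevity, whereas the Spark 1.6.x that HDP 2.4 shipped would use the older DStream `KafkaUtils` API for the same job.

```scala
import org.apache.spark.sql.SparkSession

object Stage1Ingest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("Stage1Ingest").getOrCreate()

    // Requires the spark-sql-kafka package on the classpath.
    // Broker list and topic name are hypothetical.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key/value as binary; cast to strings for downstream stages.
    val decoded = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Stage 1 only lands raw events on HDFS; Stage 2 (feature engineering)
    // and Stage 3 (HBase) would read from here.
    val query = decoded.writeStream
      .format("parquet")
      .option("path", "hdfs:///staging/events")
      .option("checkpointLocation", "hdfs:///checkpoints/stage1")
      .start()

    query.awaitTermination()
  }
}
```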
Hourly - Intermediate ($$) - Est. Time: More than 6 months, 10-30 hrs/week - Posted
The job consists of implementing the following infrastructure:
  • Hortonworks Data Platform (HDP™ 2.4), automated with Ambari 2.2, including Spark, HBase, YARN, and HDFS
  • Hortonworks Data Platform add-ons: Hortonworks Connector for Teradata v1.4.1, Hortonworks ODBC Driver for Apache Hive (v2.1.2), and Hortonworks Data Platform Search (Apache Solr 5.2.1)
  • Apache Tika
  • Apache Kafka
  • Django Framework
Installation should preferably be done on AWS. The infrastructure will need at least 3 nodes, 1 master, and a daily incremental backup. Access will need to be set up for at least 10 users. Upon completion of the implementation, regular (weekly or bi-weekly) maintenance and upgrades will be required. Any further details will be provided upon request. (A blueprint-registration sketch follows the skills list below.)
Skills: Hadoop, Linux System Administration, Private Clouds, Red Hat Enterprise Linux (RHEL)
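Ambari installs driven by blueprints are one way to make the "3 nodes, 1 master" layout repeatable. A rough sketch of registering a deliberately abbreviated blueprint through Ambari's REST API; a real HDP 2.4 blueprint needs more components (ZooKeeper, client packages, etc.), and the host name, credentials, and component layout here are hypothetical.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.util.Base64

object RegisterBlueprint {
  def main(args: Array[String]): Unit = {
    val client = HttpClient.newHttpClient()

    // Hypothetical Ambari endpoint and credentials.
    val ambari = "http://ambari-master:8080"
    val auth = Base64.getEncoder.encodeToString("admin:admin".getBytes("UTF-8"))

    // Abbreviated 3-node layout: one master group, two workers.
    val blueprint =
      """{
        |  "Blueprints": { "stack_name": "HDP", "stack_version": "2.4" },
        |  "host_groups": [
        |    { "name": "master", "cardinality": "1",
        |      "components": [ {"name": "NAMENODE"}, {"name": "RESOURCEMANAGER"}, {"name": "HBASE_MASTER"} ] },
        |    { "name": "worker", "cardinality": "2",
        |      "components": [ {"name": "DATANODE"}, {"name": "NODEMANAGER"}, {"name": "HBASE_REGIONSERVER"} ] }
        |  ]
        |}""".stripMargin

    val request = HttpRequest.newBuilder(URI.create(s"$ambari/api/v1/blueprints/hdp24-small"))
      .header("Authorization", s"Basic $auth")
      .header("X-Requested-By", "ambari")  // Ambari rejects write requests without this header
      .POST(HttpRequest.BodyPublishers.ofString(blueprint))
      .build()

    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println(s"${response.statusCode} ${response.body}")
  }
}
```

A second POST to `/api/v1/clusters/<name>` with a cluster-creation template (mapping real hosts to the host groups) would then trigger the actual install.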
Hourly - Intermediate ($$) - Est. Time: Less than 1 month, 10-30 hrs/week - Posted
We have a few hundred gigabytes of data sitting on a few data nodes in Cloudera HDFS on AWS. We need a Tableau developer who can run some basic statistics for us on these data files using Tableau. Requirements:
  • Strong past experience using Tableau on HDFS for analyzing CSV files
  • Ability to join columns from two or more CSV files when creating Tableau reports
  • Must know how to generate basic statistics
(The join-and-describe step is sketched below.)
Skills: Hadoop, Cloudera, Tableau Software
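Tableau itself is point-and-click, but the join and summary statistics this posting describes can be pre-computed (or sanity-checked) on the cluster first. A sketch with hypothetical file names and join key:

```scala
import org.apache.spark.sql.SparkSession

object JoinAndDescribe {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("JoinAndDescribe").getOrCreate()

    // Hypothetical CSV locations on HDFS and a shared join key.
    val customers = spark.read
      .option("header", "true").option("inferSchema", "true")
      .csv("hdfs:///data/customers.csv")
    val orders = spark.read
      .option("header", "true").option("inferSchema", "true")
      .csv("hdfs:///data/orders.csv")

    // Equivalent of joining columns from two CSV files in a Tableau report.
    val joined = customers.join(orders, Seq("customer_id"))

    // describe() prints count, mean, stddev, min, and max.
    joined.describe("order_total").show()

    spark.stop()
  }
}
```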
Hourly - Expert ($$$) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
You should have some understanding of RTB (real-time bidding) systems for mobile advertising, or have developed or worked on similar projects such as RTBkit or OpenRTB, in Java, using Jetty, Jackson, Gson, Jedis, JUnit, etc., and Redis. Your job is to fix some errors we are facing and to maintain and extend the code we currently use. You will have to demonstrate your command of Redis, core Java, and the other dependencies mentioned above to get this job. This can be a lucrative long-term project. (A small Redis/Jedis sketch follows the skills list below.)
Skills: Hadoop, Core Java, Java, JavaScript
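A small sketch of the kind of Redis usage common in RTB servers: a frequency-capping counter via Jedis. The Jedis API shown is identical from Java; the capping rule, key layout, and endpoint are hypothetical, not taken from the posting's codebase.

```scala
import redis.clients.jedis.Jedis

object FrequencyCap {
  // Hypothetical rule: at most `cap` impressions per device per day,
  // tracked with an expiring Redis counter.
  def allowImpression(jedis: Jedis, deviceId: String, cap: Long): Boolean = {
    val key = s"freq:$deviceId"
    val seen: Long = jedis.incr(key)        // atomic increment
    if (seen == 1L) jedis.expire(key, 86400) // start the 24h window on first hit
    seen <= cap
  }

  def main(args: Array[String]): Unit = {
    val jedis = new Jedis("localhost", 6379) // hypothetical Redis endpoint
    println(allowImpression(jedis, "device-123", cap = 3))
    jedis.close()
  }
}
```

INCR-then-EXPIRE is a standard idiom here because both calls are cheap enough for the sub-100 ms budget a bid request allows.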