Hadoop Jobs

46 jobs were found based on your criteria

Hourly - Expert ($$$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
Looking for a Hortonworks expert architect. My goal is to design a new data sciences platform. The purpose of this platform is 1) for a team of data scientists to train predictive models and 2) for a tech operations team to productionize and run these predictive models on a recurring basis. The team has strong expertise and best practices in data science, data architecture, data governance and data management. Our need is for a Hortonworks expert who would be an advisor to the existing team. This expert is needed to provide a “generic” Hortonworks architecture and then work with the team to adapt this architecture to our specific needs.
Must Have Skills > Hortonworks Architecture & Best Practices > Hive expertise > HBase expertise
Good to Have > Apache Atlas > Python scripting > Bash scripting
My best guess is 10-15 hours would be needed over a 4-6 week time frame. The approach I believe makes sense (I’m open to smarter ideas) is two phases:
Phase I - Provide a generic Hortonworks architecture and an interpretation of key requirements
Phase II - Adapt the architecture to meet the specific requirements
Please let me know if you have any comments or questions.
Skills: Hadoop, Apache Hive, Atlas, Bash shell scripting
Hourly - Expert ($$$) - Est. Time: 3 to 6 months, 10-30 hrs/week - Posted
We are looking for a Hadoop expert to help us with some Hadoop/big data work. The consultant could be anywhere; we are open to working with him/her remotely. ... We need 10-20 hours a week for three to six months.
Responsibilities
Performance tune a Cloudera Hadoop 5.8 cluster to run map-reduce jobs efficiently
Add in-memory sharing capability to the Hadoop cluster so that jobs can share common computation libraries
Set up web-service job submission for Hadoop
Improve the interaction between Cloudera and MongoDB
Education
Bachelor's Degree, computer science or related field
Master's Degree, computer science or related field a plus
Required Skills and Qualifications
Hands-on experience working with the Spring Framework
Hands-on experience working with MongoDB
Hands-on experience developing and performance tuning map-reduce jobs to perform computation on a Cloudera Hadoop cluster on Linux nodes
Hands-on experience in installation, configuration and maintenance of Hadoop ecosystem components such as HDFS, YARN/MRv2, Hive, Pig, ZooKeeper, HBase, Oozie, Sqoop, Hue, Flume and Spark.
Skills: Hadoop, Artificial Intelligence, Cloudera, MongoDB
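As a rough illustration of the map-reduce development and tuning work described in the posting above, here is a minimal Hadoop Streaming sketch in Python. The word-count logic, the jar path and the -D tuning values are illustrative assumptions, not details taken from the client's requirements.

```python
#!/usr/bin/env python
# wordcount_streaming.py - minimal Hadoop Streaming mapper/reducer sketch.
# The job logic, jar path and tuning values below are illustrative assumptions.
import sys


def mapper():
    # Emit tab-separated (word, 1) pairs for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)


def reducer():
    # Streaming sorts by key, so all counts for a word arrive contiguously.
    current, count = None, 0
    for line in sys.stdin:
        word, _, value = line.rstrip("\n").partition("\t")
        if word != current and current is not None:
            print("%s\t%d" % (current, count))
            count = 0
        current = word
        count += int(value)
    if current is not None:
        print("%s\t%d" % (current, count))


if __name__ == "__main__":
    # Submitted to the cluster with something like (paths and values vary by install):
    #   hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-streaming.jar \
    #     -D mapreduce.job.reduces=8 \
    #     -D mapreduce.map.memory.mb=2048 \
    #     -D mapreduce.reduce.memory.mb=4096 \
    #     -files wordcount_streaming.py \
    #     -mapper "python wordcount_streaming.py mapper" \
    #     -reducer "python wordcount_streaming.py reducer" \
    #     -input /data/in -output /data/out
    if len(sys.argv) > 1 and sys.argv[1] == "mapper":
        mapper()
    else:
        reducer()
```

Tuning in practice would mean adjusting properties like the ones passed via -D (memory, reducer count, compression) against the cluster's actual workload rather than fixed numbers like these.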
Fixed-Price - Expert ($$$) - Est. Budget: $300 - Posted
Need a developer for backend logic development: server/cloud-based integration with other back-end systems in real time, providing the output to the front-end. Payment system integration through escrow mechanisms and UPI (Unified Payment Interface) between different merchants. Logic flow/rules/protocol execution is to be automated (without manual intervention). Security is important at all steps. The required skillset is flexible.
Skills: Hadoop, database programming, MySQL Programming, OAuth
Hourly - Expert ($$$) - Est. Time: Less than 1 month, 10-30 hrs/week - Posted
Hi, I am looking for a developer with experience using the Common Crawl dataset. Ideally it would be run on EC2 instances, so bandwidth isn't an issue, only computation time/cost. Knowledge of additional cost-saving measures such as spot instances would be an advantage. I require a script to parse through the metadata (WAT) files and extract data quickly and efficiently. The data I am interested in is the number of times a domain references an external image or JavaScript file. For example:
"WARC-Target-URI": "http://internaldomain.com/some-page/"
"Scripts": [{ "path": "SCRIPT@/src", "type": "text/javascript", "url": "http://externaldomain.com/script.js" }]
"Links": [{ "alt": "Alt Text", "path": "IMG@/src", "url": "http://externaldomain.com/image.jpg" }]
I would like to know how many times externaldomain.com has been referenced throughout the Common Crawl corpus by JS and images. A more detailed spec can be provided after some more discussion, as this is just a very basic overview of what I would like created.
References & Examples:
http://commoncrawl.org/the-data/get-started/
https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/wat/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.wat.gz
Skills: Hadoop, Amazon EC2, Data scraping, Java
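As a rough sketch of the task described in the posting above, the snippet below walks a single WAT file and tallies how often each external domain is referenced via script or image tags. It assumes the warcio package and the usual WAT JSON layout (Envelope -> Payload-Metadata -> HTTP-Response-Metadata -> HTML-Metadata); the field names and path filters should be verified against real records before relying on them.

```python
# count_external_refs.py - sketch: tally external JS/image references in one
# Common Crawl WAT file. Assumes the `warcio` package (pip install warcio) and
# the usual WAT JSON layout; verify field names against real records.
import json
from collections import Counter
from urllib.parse import urlparse

from warcio.archiveiterator import ArchiveIterator


def count_external_refs(wat_path):
    counts = Counter()
    with open(wat_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "metadata":
                continue
            meta = json.loads(record.content_stream().read())
            envelope = meta.get("Envelope", {})
            page_url = envelope.get("WARC-Header-Metadata", {}).get("WARC-Target-URI", "")
            page_domain = urlparse(page_url).netloc
            html_meta = (
                envelope.get("Payload-Metadata", {})
                .get("HTTP-Response-Metadata", {})
                .get("HTML-Metadata", {})
            )
            # Body links (IMG@/src, SCRIPT@/src, A@/href, ...) plus head scripts;
            # the "Scripts"/"Links" split quoted in the posting is a simplification.
            candidates = list(html_meta.get("Links", []))
            candidates += html_meta.get("Head", {}).get("Scripts", [])
            for link in candidates:
                domain = urlparse(link.get("url", "")).netloc
                if domain and domain != page_domain and link.get("path") in ("SCRIPT@/src", "IMG@/src"):
                    counts[domain] += 1
    return counts


if __name__ == "__main__":
    wat_file = "CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.wat.gz"
    for domain, n in count_external_refs(wat_file).most_common(20):
        print(domain, n)
```

Covering the full corpus would mean fanning this per-file logic out (for example as a map-reduce or Spark job over the crawl's list of WAT paths), which is where the EC2 and spot-instance considerations in the posting come in.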
Fixed-Price - Intermediate ($$) - Est. Budget: $2,000 - Posted
Qualifications
9+ years of experience developing large-scale software/web applications
Experience in Big Data and cloud computing leveraging the Hadoop ecosystem (Hadoop/HDFS and related technologies)
Proven prior experience in building large, scalable distributed systems
At least 2 Big Data implementations
Experience in Java and expertise in all Hadoop components
Hands-on experience with major components like MapReduce, HBase, Hive, Pig
Strong understanding of underlying Hadoop concepts and distributed computing
Solid understanding of the current state of big-data technology, including hands-on experience with designing and building big-data infrastructures.
Skills: Hadoop, Agile software development, Big Data, C++
Fixed-Price - Expert ($$$) - Est. Budget: $160 - Posted
We have tie-ups with many engineering colleges in Pune and plan to conduct training courses on the respective college premises as per their requirements. For that we require freelance technical trainers for JAVA, .NET, ANDROID & HADOOP. Roles and Responsibilities: 1. Should have hands-on experience in application development and be passionate about sharing knowledge. 2.
Skills: Hadoop, .NET Remoting, Android, Java
Hourly - Intermediate ($$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
Looking for an experienced Hadoop systems Architect/Administrator (Cloudera ONLY). I have two positions (a suitable candidate can assume both roles):
1) Systems Architect who can advise on areas to improve with respect to automation, deployment, performance tuning, capacity management etc., and document the steps with diagrams (please apply only if you have experience with large deployments); this would be an hourly job
2) Hadoop Administrator with experience troubleshooting various ecosystem tools and the JVM, setting up monitoring like Ganglia and Nagios, automation experience (Shell and Python) etc.
Here is the detailed job description:
1) Big data system architecture/administration (Cloudera Hadoop, Elasticsearch, MongoDB)
2) Cloudera administration, Cloudera Manager API; preferably Cloudera certified
3) In-depth knowledge of security practices on Cloudera (Kerberos, KMS, Cloudera Navigator, Sentry)
4) Expert in troubleshooting (ecosystem tools, JVM, Hive/Impala query tuning)
5) Solid scripting in Python and Shell (proof needed: GitHub)
6) Someone who has experience with monitoring setup (Ganglia, Nagios) complementing the existing Cloudera Manager
Solid Linux admin skills
Work ...
Skills: Hadoop, Ansible, Bash shell scripting, Cloudera
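As a small illustration of the scripting and monitoring side of the posting above, here is a hedged sketch that polls the Cloudera Manager REST API for cluster and service health. The host, port, credentials and API version are placeholders, and the response fields should be checked against the API documentation for the deployed Cloudera Manager version.

```python
# cm_health_check.py - sketch: poll Cloudera Manager's REST API and print the
# state and health of every service in every cluster. Host, credentials and the
# API version ("v19") are placeholders; verify field names against the deployed
# Cloudera Manager version.
import requests

CM_HOST = "cm.example.com"   # placeholder hostname
CM_PORT = 7180               # default Cloudera Manager port
API_BASE = "http://%s:%d/api/v19" % (CM_HOST, CM_PORT)
AUTH = ("admin", "admin")    # placeholder credentials


def get(path):
    resp = requests.get(API_BASE + path, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()


def main():
    for cluster in get("/clusters")["items"]:
        name = cluster["name"]
        print("Cluster: %s" % cluster.get("displayName", name))
        for svc in get("/clusters/%s/services" % name)["items"]:
            # healthSummary is typically GOOD / CONCERNING / BAD / DISABLED
            print("  %-20s state=%s health=%s" % (
                svc["name"], svc.get("serviceState"), svc.get("healthSummary")))


if __name__ == "__main__":
    main()
```

A check like this could back a simple Nagios plugin or push a Ganglia metric, complementing what Cloudera Manager already alerts on.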
Hourly - Intermediate ($$) - Est. Time: More than 6 months, 30+ hrs/week - Posted
----- No agencies please, as we will be dealing only with quality freelancers -----
----- Preference will be given to applicants from the Indian subcontinent because of location constraints, but if you are from Pakistan please don't apply; due to legal issues we won't be able to hire you even if you clear the interviews, so save your job credits -----
Hi, we are looking for 2 resources who are experts in Big Data, with a good number of years of experience in Hadoop, Kafka, Hive and Storm; experience in MapR would give you a real edge for selection to join our team.
Skills: Hadoop, Apache Hive, Apache Kafka, Big Data