
Hadoop Jobs

25 were found based on your criteria

Hourly - Est. Time: 1 to 3 months, 30+ hrs/week - Posted
Hi, I am looking for a Big Data Architect with 5+ years of working experience in Big Data Analytics. He/she should have a sound grip on most of the following technical skills:
Computing - Hadoop, Spark, MapReduce
Storage - HDFS, HBase, Cassandra, S3, Google Cloud Storage
Indexing - Solr, Elasticsearch
Streaming - Spark Streaming, Storm, Kafka
Query | SQL - Spark SQL, Hive, Phoenix
Machine Learning - Mahout, MLlib, R
ETL - Flume, Sqoop2, Pentaho
In-Memory Stores - Aerospike, Redis
Amazon - EC2, S3
Programming Languages - Java, Scala
Hourly - Est. Time: 3 to 6 months, 30+ hrs/week - Posted
Idea Couture is looking for a senior backend systems architect to join our technology group and build systems and solutions that help shape the future. We operate in collaborative, cross-functional teams, working on problems from multiple perspectives until we get it right. You'll need to be comfortable facilitating discussion, taking feedback, working in a constantly evolving agile environment, and motivating yourself and your team. Responsibilities:
- Design, develop, test, and maintain a scalable solution that handles huge volumes of records with minimal processing latency, building a cost-effective, high-throughput system.
- Build fault-tolerant, highly performant data pipelines that meet scalability and latency targets.
- Develop proofs of concept that demonstrate data-centric analytic services delivering structured and unstructured information.
- Integrate, via APIs, the following (but not limited to): web services, custom wrappers, and third-party applications.
- ...
Hourly - Est. Time: Less than 1 week, Less than 10 hrs/week - Posted
I'm looking for someone who knows how to work with HDInsight from an administrative perspective. Specifically, the successful candidate can explain:
- How HDInsight works in general
- What important things need to be monitored with respect to Hadoop in general
- General problems a Hadoop administrator must be aware of
- Specifics of Hadoop running in HDInsight
There is no hands-on work required. This engagement will be mostly Skype-based, with explanations and desktop sharing. I'm not looking for the cheapest candidate or someone who once tried the product. I'm looking for someone who knows what they're talking about.
Hourly - Est. Time: 3 to 6 months, 30+ hrs/week - Posted
Hi, we are looking for Hadoop and MongoDB resources to help us with a variety of projects. These are immediate needs; we are looking for mid-level and senior resources.
Hourly - Est. Time: 1 to 3 months, Less than 10 hrs/week - Posted
I am interested in creating a database model and then a wiki-type solution to act as an interface to the database. I am open to ideas on how to manage this, but I have a specific goal in mind. I want someone with strong database knowledge. Must have experience developing both databases and wikis. Must have advanced experience using Linux; any distro is fine.
Hourly - Est. Time: Less than 1 month, 30+ hrs/week - Posted
We are a social trading platform and would like help creating a big data analytics solution. This includes collecting data from our social network software and other external sources, including financial sources; storing the data in a big data store such as MongoDB or Hadoop; and building analytics widgets and tools for reporting and predictive analysis.
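On the Hadoop side, the analytics step described in this posting typically boils down to a map/reduce aggregation over collected events. This is a minimal sketch of that pattern in plain Python; the `symbol` field and the sample records are illustrative assumptions (the posting does not specify a schema), and a real Hadoop Streaming job would read records from stdin and write key/value pairs to stdout instead:

```python
# Map/reduce-style aggregation sketch: count trade events per symbol.
# "symbol" and the sample events are hypothetical, not from the posting.
from collections import defaultdict

def mapper(record):
    # Emit a (key, 1) pair per raw event, as a streaming mapper would.
    return (record["symbol"], 1)

def reducer(pairs):
    # Sum counts per key, as the reduce phase does after shuffle/sort.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

events = [{"symbol": "AAPL"}, {"symbol": "TSLA"}, {"symbol": "AAPL"}]
print(reducer(mapper(e) for e in events))  # {'AAPL': 2, 'TSLA': 1}
```

The same mapper/reducer pair could be run unchanged under Hadoop Streaming once the stdin/stdout plumbing is added, which is one reason this split is a convenient starting shape for the analytics widgets mentioned above.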
Hourly - Est. Time: 1 to 3 months, 30+ hrs/week - Posted
We are looking for a big data expert with experience in Hadoop, Spark, Presto, Hive, and MySQL. You should be very experienced in setting up a Hadoop architecture on any cloud server. At a high level, this is what the application needs to do:
1) Fetch a lot of data from several different APIs that have different response formats and data representations.
2) Clean this data and put it into a SQL database.
3) Serve this data to an application that requires a data source as the table to reference - this is the front end that will pull the data from our database: http://www.tableau.com/
4) Fetch data from these endpoints/APIs in the most efficient manner possible; retrieving data from our database also needs to be efficient.
We are looking for someone highly experienced in:
- Big data, with experience working with large/huge data sets of millions...
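The fetch/clean/load steps in this posting form a standard ingestion pipeline. This is a minimal sketch of that shape using only the standard library; `fetch_orders`, `fetch_users`, and their record layouts are hypothetical stand-ins for the unspecified APIs, and SQLite stands in for the real SQL database:

```python
# Sketch of the fetch -> clean -> SQL pipeline described above.
# fetch_orders/fetch_users are hypothetical stand-ins for the real APIs,
# each returning records in a different shape; normalize() maps both
# onto one common (source, key, value) schema before loading.
import sqlite3

def fetch_orders():
    # Stand-in for an API that returns nested JSON.
    return [{"id": 1, "user": {"name": "alice"}, "total": "19.99"}]

def fetch_users():
    # Stand-in for an API with a flat, differently named schema.
    return [{"uid": 7, "username": "bob"}]

def normalize(record):
    """Map heterogeneous API records onto one common row shape."""
    if "user" in record:
        return ("orders", str(record["id"]), record["user"]["name"])
    return ("users", str(record["uid"]), record["username"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (source TEXT, key TEXT, value TEXT)")
rows = [normalize(r) for r in fetch_orders() + fetch_users()]
conn.executemany("INSERT INTO records VALUES (?, ?, ?)", rows)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM records").fetchone()[0])  # prints 2
```

A tool like Tableau would then point at the resulting table; the per-source `normalize` step is where the "different formats of responses" problem from step 1 gets isolated, so adding a new API means adding one branch rather than touching the load path.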
Fixed-Price - Est. Budget: $2,000 - Posted
We are looking for a big data expert with experience in Hadoop, Spark, Presto, Hive, and MySQL. At a high level, this is what the application needs to do:
1) Fetch a lot of data from several different APIs that have different response formats and data representations.
2) Clean this data and put it into a SQL database.
3) Serve this data to an application that requires a data source as the table to reference - this is the front end that will pull the data from our database: http://www.tableau.com/
4) Fetch data from these endpoints/APIs in the most efficient manner possible; retrieving data from our database also needs to be efficient.
We are looking for someone highly experienced in:
- Big data, with experience working with large/huge data sets of millions of records
- Setting up Hadoop to work with a PHP/MySQL environment
- ...