Hire the best Apache Hive Developers in Pune, IN

Check out Apache Hive Developers in Pune, IN with the skills you need for your next job.
  • $30 hourly
    I have 15+ years of experience in software development in the Telecom, Banking, and Healthcare domains. My primary skill set covers Big Data ecosystems (Apache Spark, Hive, MapReduce, Cassandra), Scala, Core Java, Python, and C++. I am well versed in designing and implementing Big Data solutions, ETL and data pipelines, and serverless and event-driven architectures on Google Cloud Platform (GCP) and Cloudera Hadoop 5.x. I like to work with organizations to develop sustainable, scalable, and modern data-oriented software systems (a minimal Spark-to-Hive pipeline sketch follows this profile's skill list).
    - Keen eye for the scalability and sustainability of a solution
    - Can come up with maintainable, well-structured object-oriented designs quickly
    - Highly experienced in working effectively with remote teams
    - Aptitude for recognizing business requirements and solving the root cause of a problem
    - Quick to learn new technologies
    Sound experience with the following technology stacks:
    Big Data: Apache Spark, Spark Streaming, HDFS, Hadoop MapReduce, Hive, Apache Kafka, Cassandra, Google Cloud Platform (Dataproc, Cloud Storage, Cloud Functions, Datastore, Pub/Sub), Cloudera Hadoop 5.x
    Languages: Scala, Python, Java, C++, C
    Build tools: sbt, Maven
    Databases: Postgres, Oracle
    Input and storage formats: CSV, XML, JSON, MongoDB, Parquet, ORC
    Apache Hive
    C++
    Java
    Apache Spark
    Scala
    Apache Hadoop
    Python
    Apache Cassandra
    Oracle PLSQL
    Cloudera
    Google Cloud Platform
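
    Typical of the Spark-and-Hive ETL work described in the profile above, the sketch below loads raw CSV events, applies a light transformation, and writes the result to a partitioned Hive table. The paths, database, table, and column names are illustrative assumptions, not taken from any actual project.

      # Minimal PySpark -> Hive ETL sketch. Paths, table, and column
      # names are hypothetical placeholders.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = (
          SparkSession.builder
          .appName("events-to-hive")
          .enableHiveSupport()  # requires a configured Hive metastore
          .getOrCreate()
      )

      # Read raw events (hypothetical HDFS landing path).
      events = spark.read.csv("hdfs:///raw/events", header=True, inferSchema=True)

      # Light transformation: drop bad rows, derive a date partition column.
      cleaned = (
          events
          .where(F.col("user_id").isNotNull())
          .withColumn("event_date", F.to_date("event_ts"))
      )

      # Write as a partitioned Hive table (overwrite keeps re-runs idempotent).
      (
          cleaned.write
          .mode("overwrite")
          .partitionBy("event_date")
          .saveAsTable("analytics.events_cleaned")
      )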
  • $29 hourly
    *Experience*
    • Hands-on experience upgrading HDP and CDH clusters to Cloudera Data Platform (CDP) Private Cloud.
    • Extensive experience installing, deploying, configuring, supporting, and managing Hadoop clusters using Cloudera (CDH) and HDP distributions hosted on Amazon Web Services (AWS) and Microsoft Azure.
    • Experience upgrading Kafka, Airflow, and CDSW.
    • Configured components such as HDFS, YARN, Sqoop, Flume, Kafka, HBase, Hive, Hue, Oozie, and Sentry.
    • Implemented Hadoop security.
    • Deployed production-grade Hadoop clusters and their components through Cloudera Manager/Ambari in virtualized environments (AWS/Azure) as well as on-premises.
    • Configured HA for Hadoop services, with backup and disaster recovery.
    • Set up Hadoop prerequisites on Linux servers.
    • Secured clusters using Kerberos, Sentry, Ranger, and TLS.
    • Experience designing and building scalable infrastructure and platforms to collect and process very large amounts of structured and unstructured data.
    • Experience adding and removing nodes, monitoring critical alerts, configuring high availability, configuring data backups, and purging data.
    • Cluster management and troubleshooting across the Hadoop ecosystem.
    • Performance tuning and resolution of Hadoop issues via the CLI, the Cloudera Manager UI, and the Apache web UIs.
    • Report generation on running nodes using various benchmark operations.
    • Worked with AWS services such as EC2 instances, S3, VPC, and security groups, and with Microsoft Azure services such as resource groups, resources (VMs, disks, etc.), Azure Blob Storage, and Azure storage replication.
    • Configured private and public IP addresses, network routes, network interfaces, subnets, and virtual networks on AWS and Microsoft Azure.
    • Administration of Linux installations.
    • Fault finding, analysis, and logging of information for reporting.
    • Expert in Kafka administration and in deploying UI tools to manage Kafka.
    • Implemented HA for MySQL.
    • Installed and configured Airflow for job orchestration (a minimal Airflow DAG sketch follows this profile's skill list).
    Apache Hive
    Apache Kafka
    Apache Airflow
    Apache Spark
    YARN
    Hortonworks
    Apache Hadoop
    Apache Zookeeper
    Cloudera
    Apache Impala
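
    For the Airflow orchestration mentioned above, here is a hedged sketch of a maintenance DAG of the kind a Hadoop admin might schedule. The DAG id, schedule, and shell commands are hypothetical; only the Airflow operators and HDFS CLI calls shown are standard.

      # Minimal Airflow 2.x DAG sketch; names and commands are placeholders.
      from datetime import datetime

      from airflow import DAG
      from airflow.operators.bash import BashOperator  # Airflow 2.x import path

      with DAG(
          dag_id="nightly_hdfs_maintenance",
          start_date=datetime(2024, 1, 1),
          schedule="0 2 * * *",  # 02:00 daily; schedule_interval on Airflow <2.4
          catchup=False,
      ) as dag:
          # Report HDFS usage for capacity monitoring (placeholder path).
          check_usage = BashOperator(
              task_id="check_hdfs_usage",
              bash_command="hdfs dfs -du -h /data | tail -n 20",
          )

          # Rebalance block placement across DataNodes (threshold illustrative).
          rebalance = BashOperator(
              task_id="run_balancer",
              bash_command="hdfs balancer -threshold 10",
          )

          check_usage >> rebalance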
  • $50 hourly
    I am a passionate coder and data enthusiast. I love solving complex problems using data and models. I currently work with the tools and frameworks required for building efficient and scalable data pipelines on AWS- and GCP-based cloud platforms. My skills: computer vision, Google Cloud, infrastructure set-up, Big Data, machine learning, MapReduce, SQL, and search technologies. Tools and languages: PyTorch, TensorFlow, OpenCV, Apache Hadoop, Apache Kafka, Apache Spark, Apache Hive, Apache Impala, Apache Jena, AWS Cognito, AWS IoT Core, AWS Lambda, the Django framework, Flask, Graphene, GraphQL, AWS DynamoDB, AWS S3, RDF triple stores, time-series databases such as Axibase, Apache Solr, Apache Lucene, MarkLogic, Metafacts, Jenkins, Telegraf, Grafana, Kubernetes, Docker, AWS ECS, AWS EKS, GCP Kubernetes, Databricks solutions, Python, and SQL.
    Apache Hive
    Kubernetes
    Apache Spark
    Apache Kafka
    Architectural Design
    TensorFlow
    AWS Lambda
    PyTorch
    Internet of Things Solutions Design
    Apache Hadoop
    Internet of Things
    Google Cloud Platform
    Cloud Computing
    Amazon Web Services
  • $50 hourly
    I'm Pankaj Pore, a full-stack test automation architect with over 15 years of diverse industry experience. I am an expert in creating automation solutions and frameworks for large-scale, complex systems, across tech stacks (Python, Java, and Perl) and channels (web, APIs, big data, and cloud). In my current role as Principal SQA Engineer, I am responsible for building, designing, implementing, and maintaining the automation framework and integrating it with the CI/CD pipeline. I am also responsible for bringing in industry best practices for coding standards, extensible framework design, and low-maintenance automation solutions (a pytest-style data-check sketch follows this profile's skill list). My experience spans ETL, Linux, Oracle, Hadoop ecosystems, the Spark framework, Python, Storm, Kafka, Java MapReduce, SQL, and QA automation. To know more about me, connect with me on LinkedIn or drop me a line at pankajpore@gmail.com
    Apache Hive
    Hive
    Apache Kafka
    Quality Assurance
    Apache Hadoop
    Automation
    QA Automation
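
    As a flavor of the big data QA automation described above, the sketch below shows a pytest-style data-quality check validating a curated Hive table with PySpark. The table, column, and fixture names are assumptions for illustration only.

      # Hypothetical pytest data-quality checks against a Hive table.
      import pytest
      from pyspark.sql import SparkSession

      @pytest.fixture(scope="session")
      def spark():
          session = (
              SparkSession.builder
              .appName("etl-qa-checks")
              .enableHiveSupport()
              .getOrCreate()
          )
          yield session
          session.stop()

      def test_no_null_keys(spark):
          # Table name is a placeholder for whatever the ETL job produces.
          df = spark.table("analytics.events_cleaned")
          assert df.filter(df.user_id.isNull()).count() == 0

      def test_table_not_empty(spark):
          # A real suite would compare row counts against the source system.
          assert spark.table("analytics.events_cleaned").count() > 0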
  • $100 hourly
    Specialised in design and integration, with intuitive problem-solving skills. Proficient in C, C++, Python, HTML, CSS, and SQL. Passionate about implementing and launching new projects. Looking to start a career as an entry-level software engineer with a reputed, technology-driven firm.
    Apache Hive
    CSS
    HTML
    Adobe XD
    Typing
    C++
    Product Development
    Hive
    Web Application
    MySQL
    Apache Spark
    Big Data
    MySQL Programming
    Python
    Web Development
  • $18 hourly
    I am a full-stack developer/lead and solution architect with 10+ years of experience and the expertise to deliver a wide range of projects. I have strong experience building complete applications on scalable architectures, having worked on every stage of development, from design and development through deployment. My passion for programming and coding led me to Upwork, a platform where I can put my knowledge, experience, passion, and geekiness together and set my own limits.
    My expertise:
    ✔️ Front-end development: JavaScript / React / React Native / Redux / Angular / Ionic / Vue
    ✔️ Back-end development: Python / Node / Express / Java Spring Boot / REST APIs / Golang / Laravel / Nest.js / Next.js
    ✔️ Databases: PostgreSQL / MySQL / MongoDB / DynamoDB
    ✔️ Data engineering: data pipelines / ETL / Hive / Spark / Kafka / Drill
    ✔️ AWS cloud services: Amplify / Lambda / EC2 / CloudFront / S3 / microservices
    ✔️ Responsibilities and contributions:
    • Involved in all stages of the software development life cycle, including development, testing, and implementation.
    • Analyzing and validating functional requirements.
    • Suggesting better approaches, preparing detailed documents, and periodically estimating the time required for delivery.
    • Configuring and customizing the application per the given business requirements.
    • Using a sandbox for testing and migrating the code to the deployment instance thereafter.
    • Analyzing requirements and developing the corresponding modules.
    • Discussing requirements, the feasibility of changes, and the impact on current functionality on site.
    I have excellent time-management skills for setting priorities and meeting deadlines, and creative problem-solving skills for applying innovative solutions to complex issues. I am always eager to add value for customers by offering suggestions about the project.
    Apache Hive
    React
    React Native
    Apache Spark
    Angular 10
    Apache Kafka
    AWS Lambda
    Golang
    Spring Boot
    NodeJS Framework
    Vue.js
    Amazon EC2
    Python
    Java
  • $40 hourly
    As a Senior Data Engineer with 8+ years of extensive experience in data engineering with Python, Spark, Databricks, ETL pipelines, and Azure and AWS services, I develop PySpark scripts and store data in ADLS using Azure Databricks. Additionally, I have created data pipelines for reading streaming data from MongoDB and developed Neo4j graphs based on stream data (a minimal streaming sketch follows this profile's skill list). I am well versed in designing and modeling databases using Neo4j and MongoDB. I am seeking a challenging opportunity in a dynamic organization that can enhance my personal and professional growth while enabling me to make valuable contributions towards achieving the company's objectives.
    • Utilizing Azure Databricks to develop PySpark scripts and store data in ADLS.
    • Developing producers and consumers for stream data using Azure Event Hubs.
    • Designing and modeling databases using Neo4j and MongoDB.
    • Creating data pipelines for reading streaming data from MongoDB.
    • Creating Neo4j graphs based on stream data.
    • Visualizing data for supply-demand analysis using Power BI.
    • Developing data pipelines on Azure to integrate Spark notebooks.
    • Developing ADF pipelines for a multi-environment, multi-tenant application.
    • Utilizing ADLS and Blob Storage to store and retrieve data.
    • Proficient in Spark, HDFS, Hive, Python, PySpark, Kafka, SQL, Databricks, and Azure and AWS technologies.
    • Utilizing AWS EMR clusters to run Hadoop ecosystem components such as HDFS, Spark, and Hive.
    • Experienced in using AWS DynamoDB for data storage and ElastiCache for caching.
    • Involved in data migration projects moving data from SQL databases and Oracle to AWS S3 or Azure storage.
    • Skilled in designing and deploying dynamically scalable, fault-tolerant, highly available applications on the AWS cloud.
    • Executed transformations using Spark and MapReduce, loaded data into HDFS, and used Sqoop to extract data from SQL databases into HDFS.
    • Proficient in Azure Data Factory, Azure Data Lake, Azure Databricks, Python, Spark, and PySpark.
    • Implemented a cognitive model for telecom data using NLP and a Kafka cluster.
    • Competent in big data processing using Hadoop, MapReduce, and HDFS.
    Apache Hive
    SQL
    MongoDB
    Data Engineering
    Microsoft Azure
    Apache Kafka
    Apache Spark MLlib
    Apache Hadoop
    AWS Glue
    PySpark
    Databricks Platform
    Hive Technology
    Apache Spark
    Azure Cosmos DB
    Python
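
    A hedged sketch of the streaming pattern described above: read a stream via a Kafka-compatible endpoint (which Azure Event Hubs exposes) and land it in ADLS as Parquet with Structured Streaming. The namespace, topic, storage account, and paths are placeholders, and SASL authentication options are omitted for brevity.

      # Databricks-style streaming sketch; endpoints and paths are hypothetical.
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("stream-to-adls").getOrCreate()

      events = (
          spark.readStream
          .format("kafka")
          # Event Hubs Kafka endpoint (SASL auth options omitted here).
          .option("kafka.bootstrap.servers", "my-namespace.servicebus.windows.net:9093")
          .option("subscribe", "telemetry")
          .load()
          .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS payload")
      )

      query = (
          events.writeStream
          .format("parquet")
          .option("path", "abfss://lake@myaccount.dfs.core.windows.net/bronze/telemetry")
          .option("checkpointLocation",
                  "abfss://lake@myaccount.dfs.core.windows.net/_checkpoints/telemetry")
          .start()
      )
      query.awaitTermination()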
  • $30 hourly
    I have 8 years of experience in data warehousing and visualisation. I have worked on various reporting and dashboard development projects, and I have good experience in data analysis and ETL. Good working experience with ETL technologies such as SSIS and Azure Data Factory, and excellent experience with Power BI, SSRS, Excel reporting, and Power View. Excellent hands-on experience with SQL, T-SQL, and HQL, writing optimised stored procedures, functions, etc.
    Apache Hive
    Data Analysis
    Dashboard
    Microsoft Excel
    Data Visualization
    Microsoft Azure
    SQL
    MySQL
    Microsoft Excel PowerPivot
    Microsoft Azure SQL Database
    Microsoft Power BI
    Microsoft SQL SSAS
    Microsoft SQL Server Reporting Services
    SQL Server Integration Services
  • $10 hourly
    Currently working as a Data Engineer. Experienced in Hadoop, Spark, Hive, Kafka, Python, SQL, and AWS services such as EMR, EC2, S3, Redshift, and Lambda.
    Apache Hive
    Hive
    AWS Lambda
    Linux
    PostgreSQL
    PySpark
    Apache Kafka
    Apache Spark
    Apache Hadoop
    Python
    AWS Glue
  • $10 hourly
    I am a Data Engineer working in the big data domain, with about 4 years of experience in the IT industry.
    Apache Hive
    Apache Airflow
    Microsoft Azure
    Apache Kafka
    Scala
    Python
    Apache Spark
    Sqoop
    Apache Hadoop
    SQL
  • $10 hourly
    I am a Team Lead and a highly skilled IT professional with over 7 years of hands-on experience in big data technologies, including Hadoop, Spark, and Google Cloud. Expertise in designing and implementing data models and managing large-scale data processing solutions. Expert in SQL, Python, Bash, PySpark, and GCP.
    Apache Hive
    Bash Programming
    Hive
    SQL Programming
    Python Script
    Google Cloud Platform
    SQL
    Python
    Apache Hadoop
    PySpark
    Apache Airflow
    Apache Spark
  • $20 hourly
    I am a highly skilled, results-driven Data Engineer/Architect with 7+ years of experience designing and implementing robust data solutions. Adept at integrating complex data systems, optimizing data pipelines, and ensuring data quality and integrity. Overall, I have strong technical knowledge across multiple projects, with technologies such as Spark, Hadoop, Hive, Sqoop, Oozie, Python, Scala, SQL, Snowflake, AWS services (S3, Glue, Lambda, Step Functions, EventBridge, SNS, Redshift), and Microsoft Azure services (Blob Storage, ADLS, Databricks, Data Factory, SQL Server); a minimal AWS Glue job sketch follows this profile's skill list. I am seeking challenging opportunities to leverage my expertise in data engineering, architecture, and analytics to drive business growth and enable data-driven decision-making.
    Apache Hive
    Microsoft Azure SQL Database
    Data Lake
    PostgreSQL
    AWS Lambda
    Amazon CloudWatch
    Amazon S3
    Snowflake
    Amazon Redshift
    Databricks Platform
    AWS Glue
    Microsoft Azure
    Amazon Web Services
    Apache Spark
    PySpark
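
    For the AWS Glue work mentioned above, the skeleton below follows the standard Glue PySpark job boilerplate: read from the Glue Data Catalog, clean the data in Spark, and write Parquet to S3. The catalog database, table, column, and bucket names are hypothetical.

      # Standard AWS Glue job skeleton; names below are placeholders.
      import sys

      from awsglue.context import GlueContext
      from awsglue.dynamicframe import DynamicFrame
      from awsglue.job import Job
      from awsglue.utils import getResolvedOptions
      from pyspark.context import SparkContext
      from pyspark.sql import functions as F

      # Glue job bootstrap; JOB_NAME is passed in by the Glue runtime.
      args = getResolvedOptions(sys.argv, ["JOB_NAME"])
      glue_context = GlueContext(SparkContext.getOrCreate())
      job = Job(glue_context)
      job.init(args["JOB_NAME"], args)

      # Read from the Glue Data Catalog (hypothetical database/table).
      source = glue_context.create_dynamic_frame.from_catalog(
          database="sales_db", table_name="raw_orders"
      )

      # Light cleanup in plain Spark, then back to a DynamicFrame to write.
      cleaned = source.toDF().where(F.col("amount") > 0)
      out = DynamicFrame.fromDF(cleaned, glue_context, "cleaned_orders")

      glue_context.write_dynamic_frame.from_options(
          frame=out,
          connection_type="s3",
          connection_options={"path": "s3://example-curated/orders/"},
          format="parquet",
      )
      job.commit()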
  • $8 hourly
    I have successfully completed two real-time projects related to Big Data. My expertise lies in designing ETL processes, optimizing data storage and retrieval, and ensuring data quality and integrity. I have hands-on experience with a variety of big data technologies, including Apache Spark, Hadoop, and Apache Kafka.
    Apache Hive
    NoSQL Database
    SQL
    Apache Spark
    PySpark
    Hive
    Apache Hadoop
    Amazon Web Services
  • $20 hourly
    SUMMARY
    I have 3 years of experience in big data, currently working as a Hadoop Admin, with knowledge of deployment, Hadoop ecosystem configuration, and ETL using Linux and shell scripting. My work includes Tivoli workload monitoring on Hadoop and Kafka clusters and building Cognos BI dashboard reports. I can also find errors in Kafka connector and streams logs and resolve issues by restarting whole connectors and streams (a minimal connector-restart sketch follows this profile's skill list). I have single-handedly handled 10 Kafka streams and a 250-node Hadoop cluster.
    ACHIEVEMENTS
    * Received a Service Excellence Award for providing production support for Kyndryl for 2 years, and received a Certificate of Appreciation (COA) for timely delivery and resolution of defects.
    * Reduced daily SLA delays by 3 hours by scheduling important jobs early, and in pieces, during periods of low cluster CPU utilization.
    Apache Hive
    Hive
    Apache Hadoop
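
    A hedged sketch of the connector-recovery routine described above, using the standard Kafka Connect REST API (GET /connectors/{name}/status, POST .../restart). The worker host and connector names are placeholders, and the requests library is assumed to be installed.

      # Restart failed Kafka Connect connectors/tasks via the REST API.
      import requests

      CONNECT_URL = "http://connect-host:8083"  # hypothetical Connect worker

      def restart_failed(connector: str) -> None:
          """Restart a connector, or just its FAILED tasks, via the REST API."""
          status = requests.get(
              f"{CONNECT_URL}/connectors/{connector}/status", timeout=10
          ).json()

          if status["connector"]["state"] == "FAILED":
              # The connector itself failed: restart the whole connector.
              requests.post(f"{CONNECT_URL}/connectors/{connector}/restart", timeout=10)
              return

          for task in status.get("tasks", []):
              if task["state"] == "FAILED":
                  requests.post(
                      f"{CONNECT_URL}/connectors/{connector}/tasks/{task['id']}/restart",
                      timeout=10,
                  )

      restart_failed("hdfs-sink-connector")  # hypothetical connector name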
  • $12 hourly
    Overall, 4+ years of extensive experience across domains in the IT industry, mainly in Hadoop ecosystem development, PySpark, and cloud services (Amazon Web Services). I have experience writing data extraction logic in PySpark (a minimal S3-extraction sketch follows this profile's skill list) and building optimized big data pipelines using HDFS, Hive, HBase, and PySpark in the cloud. Hands-on experience with Amazon Web Services, mainly S3, EC2, AWS EMR, RDS, Redshift, and Athena. Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
    Apache Hive
    Big Data
    Apache Hadoop
    Sqoop
    Amazon Web Services
    Amazon Athena
    Apache Airflow
    Apache Kafka
    Apache Spark
    Amazon S3
    Hive
    AWS Glue
    PySpark
    Python
    SQL
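
    A minimal sketch of the PySpark data-extraction logic against S3 described above (for example, on an EMR cluster). The bucket names, paths, and columns are hypothetical.

      # PySpark extraction from S3; all names below are placeholders.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("s3-extract").getOrCreate()

      # Extract raw JSON records from a landing bucket.
      orders = spark.read.json("s3://example-landing/orders/2024/*/")

      # Keep only the fields downstream consumers need, with light cleansing.
      extract = (
          orders
          .select("order_id", "customer_id", "amount", "created_at")
          .where(F.col("amount") > 0)
          .withColumn("order_date", F.to_date("created_at"))
      )

      # Write partitioned Parquet so Athena can query it cheaply.
      (
          extract.write
          .mode("append")
          .partitionBy("order_date")
          .parquet("s3://example-curated/orders/")
      )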
  • $10 hourly
    Career Objective: I aspire to be part of an ever-dynamic and progressive organization where I can continuously learn and contribute to the company's growth, and which offers me innovation, challenges, and opportunities to enhance my proficiency in the technical field.
    Apache Hive
    Hive
    Web Application
    Apache Hadoop
    C
    Python Script
    C++
    Apache HTTP Server
    Python
  • $30 hourly
    Big Data Engineer with 1+ years of experience in designing, developing, and maintaining data pipelines and data storage systems. Proven ability to use big data technologies to solve real-world problems. Expertise in Hadoop, Spark, Hive, and Pig. Strong ability to collaborate with data scientists and analysts to develop data models and algorithms for predictive analytics and machine learning. Excellent communication skills, with the ability to convey meaningful data insights to stakeholders.
    Apache Hive
    Amazon
    Web Application
    Web Services Development
    Management Skills
    Hive
    Business Management
    Apache Hadoop
    Amazon Web Services

How hiring on Upwork works

1. Post a job (it’s free)

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire an Apache Hive Developer near Pune, IN on Upwork?

You can hire an Apache Hive Developer near Pune, IN on Upwork in four simple steps:

  • Create a job post tailored to your Apache Hive Developer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Hive Developer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Hive Developer profiles and interview.
  • Hire the right Apache Hive Developer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Hive Developer?

Rates charged by Apache Hive Developers on Upwork can vary with a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Hive Developer near Pune, IN on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Hive Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Hive Developer team you need to succeed.

Can I hire an Apache Hive Developer near Pune, IN within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Hive Developer proposals within 24 hours of posting a job description.