Hire the best Apache Hive Developers in Pune, IN

Check out Apache Hive Developers in Pune, IN with the skills you need for your next job.
  • $35 hourly
    I have 18+ years of experience in software development in the Telecom, Banking, and Healthcare domains. My primary skill set includes Big Data ecosystems (Apache Spark, Hive, MapReduce, Cassandra), Scala, Core Java, Python, and C++. I am well versed in designing and implementing Big Data solutions, ETL and data pipelines, and serverless and event-driven architectures on Google Cloud Platform (GCP) and Cloudera Hadoop 5.5. I like to work with organizations to develop sustainable, scalable, and modern data-oriented software systems.
    - Keen eye for the scalability and sustainability of a solution
    - Can quickly produce maintainable, well-structured object-oriented designs
    - Highly experienced in working seamlessly and effectively with remote teams
    - Aptitude for recognizing business requirements and solving the root cause of a problem
    - Quick to learn new technologies
    Sound experience in the following technology stacks (a minimal Spark-on-Hive sketch appears after the skill tags below):
    Big Data: Apache Spark, Spark Streaming, HDFS, Hadoop MapReduce, Hive, Apache Kafka, Cassandra, Google Cloud Platform (Dataproc, Cloud Storage, Cloud Functions, Datastore, Pub/Sub), Cloudera Hadoop 5.x
    Languages: Scala (with the Akka and Play frameworks), Python, Java, C++, C
    Build tools: sbt, Maven
    Databases: Postgres, Oracle, MongoDB/Cosmos DB, Cassandra, Hive
    GCP services: GCS, Dataproc, Cloud Functions, Pub/Sub, Datastore, BigQuery
    AWS services: S3, EC2, EC2 Auto Scaling groups, EMR, S3 Java APIs, Redshift
    Azure services: Blob Storage, VMs, VM scale sets, Blob Java APIs, Synapse
    Other tools/technologies: Docker, Terraform
    Input and storage formats I have worked with: CSV, XML, JSON, MongoDB, Parquet, ORC
    Apache Hive
    C++
    Java
    Apache Spark
    Scala
    Apache Hadoop
    Python
    Apache Cassandra
    Oracle PLSQL
    Cloudera
    Google Cloud Platform
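    A minimal illustrative sketch of the Spark-on-Hive workflow this profile describes: reading Parquet from Cloud Storage on a Dataproc-style cluster and appending it to a partitioned Hive table. The bucket, database, table, and "dt" partition column are hypothetical placeholders, not names from the profile.

    ```python
    # Hypothetical sketch: the bucket, database/table, and "dt" column are
    # placeholders for illustration only.
    from pyspark.sql import SparkSession

    # enableHiveSupport() lets Spark read and write the cluster's Hive
    # metastore (available out of the box on Dataproc).
    spark = (
        SparkSession.builder
        .appName("parquet-to-hive")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read raw Parquet files from Cloud Storage.
    raw = spark.read.parquet("gs://example-bucket/raw/events/")

    # Append into a date-partitioned Hive table so later HiveQL queries can
    # prune partitions instead of scanning everything.
    (
        raw.write
        .mode("append")
        .partitionBy("dt")
        .format("parquet")
        .saveAsTable("analytics.events")
    )
    ```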
  • $40 hourly
    I am a Senior Data Engineer with 9 years of extensive experience in data engineering with Python, Spark, Databricks, ETL pipelines, and Azure and AWS services. I develop PySpark scripts and store data in ADLS using Azure Databricks. I have also created data pipelines for reading streaming data from MongoDB and built Neo4j graphs from that stream-based data, and I am well versed in designing and modeling databases using Neo4j and MongoDB. I am seeking a challenging opportunity in a dynamic organization that can enhance my personal and professional growth while enabling me to make valuable contributions toward achieving the company's objectives. Highlights (a minimal Databricks-to-ADLS sketch appears after the skill tags below):
    • Utilizing Azure Databricks to develop PySpark scripts and store data in ADLS.
    • Developing producers and consumers for stream-based data using Azure Event Hubs.
    • Designing and modeling databases using Neo4j and MongoDB.
    • Creating data pipelines for reading streaming data from MongoDB.
    • Creating Neo4j graphs based on stream-based data.
    • Visualizing data for supply-demand analysis using Power BI.
    • Developing data pipelines on Azure to integrate Spark notebooks.
    • Developing ADF pipelines for a multi-environment, multi-tenant application.
    • Utilizing ADLS and Blob Storage to store and retrieve data.
    • Proficient in Spark, HDFS, Hive, Python, PySpark, Kafka, SQL, Databricks, and Azure and AWS technologies.
    • Utilizing AWS EMR clusters to run Hadoop ecosystem components such as HDFS, Spark, and Hive.
    • Experienced in using AWS DynamoDB for data storage and ElastiCache for caching data.
    • Involved in data migration projects moving data from SQL and Oracle databases to AWS S3 or Azure storage.
    • Skilled in designing and deploying dynamically scalable, fault-tolerant, and highly available applications on the AWS cloud.
    • Executed transformations using Spark and MapReduce, loaded data into HDFS, and used Sqoop to extract data from SQL databases into HDFS.
    • Proficient in working with Azure Data Factory, Azure Data Lake, Azure Databricks, Python, Spark, and PySpark.
    • Implemented a cognitive model for telecom data using NLP and a Kafka cluster.
    • Competent in big data processing using Hadoop, MapReduce, and HDFS.
    Apache Hive
    Microsoft Azure SQL Database
    SQL
    MongoDB
    Data Engineering
    Microsoft Azure
    Apache Kafka
    Apache Hadoop
    AWS Glue
    PySpark
    Databricks Platform
    Hive Technology
    Apache Spark
    Azure Cosmos DB
    Python
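    A minimal illustrative sketch of the Azure Databricks pattern this profile describes: reading landed data from ADLS Gen2 and persisting a cleaned copy back to ADLS. The storage account, containers, and column names are hypothetical; in a Databricks notebook the `spark` session is provided by the runtime, and Delta Lake is available there by default.

    ```python
    # Hypothetical sketch: storage account, containers, and columns are
    # placeholders. In Databricks, `spark` is supplied by the runtime.
    from pyspark.sql import functions as F

    # Read source records landed in ADLS Gen2 via the abfss:// scheme.
    src = spark.read.json(
        "abfss://landing@examplestorage.dfs.core.windows.net/mongodb-stream/"
    )

    # Light cleanup before persisting: stamp an ingest date and de-duplicate.
    cleaned = (
        src.withColumn("ingest_date", F.current_date())
           .dropDuplicates(["event_id"])
    )

    # Persist back to ADLS as Delta, partitioned for efficient reads.
    (
        cleaned.write
        .format("delta")
        .mode("append")
        .partitionBy("ingest_date")
        .save("abfss://curated@examplestorage.dfs.core.windows.net/events/")
    )
    ```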
  • $18 hourly
    I am a full-stack developer / lead and solution architect with 10+ years of experience and the expertise to deliver a wide range of projects. I have strong experience building complete applications that require scalable architecture, having worked through all stages of development, from design and development to deployment, with a proven delivery record. My passion for programming and coding led me to Upwork, a platform where I can bring my knowledge, experience, passion, and geekiness together and define and set my own limits. My expertise:
    ✔️ Front-end development: JavaScript / React / React Native / Redux / Angular / Ionic / Vue
    ✔️ Back-end development: Python / Node / Express / Java Spring Boot / REST APIs / Golang / Laravel / NestJS / Next.js
    ✔️ Databases: PostgreSQL / MySQL / MongoDB / DynamoDB
    ✔️ Data engineering: Data pipelines / ETL / Hive / Spark / Kafka / Drill
    ✔️ AWS cloud services: Amplify / Lambda / EC2 / CloudFront / S3 / Microservices
    ✔️ Responsibilities and contributions:
    • Involved in various stages of the software development life cycle, including development, testing, and implementation.
    • Analyzing and validating functional requirements.
    • Suggesting better approaches, preparing detailed documents, and periodically estimating the time required for delivery.
    • Configuring and customizing the application per the given business requirements.
    • Using a sandbox for testing and migrating the code to the deployment instance thereafter.
    • Analyzing requirements and developing modules.
    • Discussing requirements, the feasibility of changes, and the impact on current functionality with on-site teams.
    I have excellent time management skills to define priorities and implement activities tailored to meet deadlines. My aptitude and creative problem-solving skills help me apply innovative solutions to complex issues. I am always eager to add value for customers by providing suggestions about their projects.
    Apache Hive
    React
    React Native
    Apache Spark
    Angular 10
    Apache Kafka
    AWS Lambda
    Golang
    Spring Boot
    NodeJS Framework
    Vue.js
    Amazon EC2
    Python
    Java
  • $30 hourly
    I have 8 years of experience in data warehousing and visualization. I have worked on various reporting and dashboard development projects and have good experience in data analysis and ETL. I have strong working experience with ETL technologies such as SSIS and Azure Data Factory, along with excellent experience in Power BI, SSRS, Excel reporting, and Power View. I also have excellent hands-on experience with SQL, T-SQL, and HQL, writing optimized stored procedures, functions, and more (a minimal HiveQL sketch appears after the skill tags below).
    Apache Hive
    Data Analysis
    Dashboard
    Microsoft Excel
    Data Visualization
    Microsoft Azure
    SQL
    MySQL
    Microsoft Excel PowerPivot
    Microsoft Azure SQL Database
    Microsoft Power BI
    Microsoft SQL SSAS
    Microsoft SQL Server Reporting Services
    SQL Server Integration Services
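    The HQL experience mentioned above typically looks like the following: a HiveQL aggregation that filters on the table's partition column so Hive prunes partitions rather than scanning the full table. The query is shown submitted through PySpark, but the statement itself would run unchanged in Hive/Beeline; the database, table, and columns are hypothetical.

    ```python
    # Hypothetical sketch: sales.orders and its columns are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Filtering on the partition column (order_date) lets Hive prune
    # partitions instead of scanning the whole table -- a common HiveQL
    # optimization.
    monthly_totals = spark.sql("""
        SELECT region,
               SUM(amount) AS total_amount
        FROM   sales.orders            -- assumed partitioned by order_date
        WHERE  order_date >= '2024-01-01'
          AND  order_date <  '2024-02-01'
        GROUP  BY region
    """)
    monthly_totals.show()
    ```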
  • $10 hourly
    Hello! I'm Advait, a highly skilled data engineer and AI/ML enthusiast with 4 years of experience in data engineering.
    Apache Hive
    Apache Airflow
    Apache Kafka
    Sqoop
    Apache Hadoop
    Microsoft Azure
    Scala
    Python
    Apache Spark
    SQL
  • $8 hourly
    Looking for a highly skilled DevOps Engineer who can optimize your AWS deployments and streamline your DevOps processes? Look no further! With over 5 years of hands-on experience supporting mission-critical systems, I'm the expert you need to ensure your projects are delivered with maximum efficiency and speed. As a seasoned DevOps Engineer, I specialize in leveraging the latest tools and technologies to automate and streamline your deployments, making sure your code is always up to date and running smoothly. With a strong focus on configuration management and CI/CD, I can help you reduce your time to market and improve your overall product quality. Whether you're looking to scale your existing infrastructure, migrate to AWS, or simply improve your existing DevOps processes, I have the skills and expertise to help you achieve your goals. So why wait? Let's work together to take your projects to the next level!
    Apache Hive
    Git
    CI/CD
    AWS Application
    Ansible
    DevOps
    Kubernetes
    Docker
    Linux
    Apache NiFi
    Scala
    Apache Hadoop
    Apache Kafka
    Apache Spark
    Big Data
  • $35 hourly
    Results-oriented Data Engineer and Data Scientist with approximately 3 years of experience specializing in Machine Learning, AI, and data engineering, consistently driving client success. Proficient in Big Data technologies and data visualization tools. Skilled in analyzing large datasets with 20,000,000+ rows using machine learning and statistical analysis, primarily in Python and R. Collaborates effectively with cross-functional teams to deliver innovative, efficient, and end-to-end solutions that empower clients and drive growth.
    Apache Hive
    Python
    Apache Airflow
    Databricks Platform
    PySpark
    Data Engineering
    ETL Pipeline
    Predictive Analytics
    SQL Programming
    Apache NiFi
    Data Analysis
    Deep Learning
    Feature Engineering
    Autoencoder
    Machine Learning
  • $25 hourly
    Mr. Prafulla has over 12 years of experience working on enterprise BI software products. Over that time, he has worked on the development of IBM Netezza Performance Server (a widely used data warehouse appliance), the Cognos Business Intelligence reporting (data analysis) product, and Data Manager, the ETL tool of IBM Cognos. Currently, Prafulla works as an independent Big Data consultant, owning multiple responsibilities for setting up big data projects for several organizations. Some of his recent big data engagements include:
    - Opinion Mining Engine: derives sentiment scores from large volumes of data from news sites.
    - Product Comparison Engine: enables online shoppers to choose the seller offering a product at a lower price than other online retailers.
    - Design and development of distributed search and selective replication.
    Technologies that Prafulla deals with frequently are Golang, Docker, Kubernetes, Hadoop, HBase, Hive, Apache Nutch, Couchbase, Java, C, and C++. In the past, he was recognized as one of the top 15 percent of mentors at IBM for mentoring university graduates under IBM's University Relations program. In addition, he received an IBM Bravo award for setting up the team for the India Development Center of Cognos in the early days after its acquisition by IBM.
    Apache Hive
    Apache HBase
    VMware ESX Server
    Apache Hadoop
    Docker
    Apache Nutch
    Golang
    Kubernetes
  • $30 hourly
    I have 8+ years of experience in the analytics domain, with particular expertise in HR analytics. I have worked on various workforce planning projects and created a talent-matching framework that compares the skill set required for an open position with the skill sets available in employee profiles and produces a match score. Expertise in the following:
    • Data visualization (Tableau, Power BI, Excel)
    • Data mining and predictive modeling
    • Data pipeline creation using AWS services, including S3, Glue, Redshift, SageMaker, API Gateway, etc.
    Apache Hive
    Linear Regression
    Machine Learning
    Data Warehousing
    Apache Spark
    Data Mining
    Data Visualization
    R
    Python
    SQL
  • $25 hourly
    • 9+ years of IT experience in production development using Big Data technologies, AWS (and its related services), and GCP (and its related services).
    • Successfully led a development team of data science engineers and deployed major migration projects for renowned US and German clients on advanced platforms such as Big Data/Hadoop, AWS, and Google Cloud Platform.
    • Architected customized frameworks implementing technologies such as Hadoop MR, Apache Spark, Scala, Hive, Pig, HBase, Kafka, Akka, REST, NoSQL, SQL, MySQL, MongoDB, and the Play/Scalding frameworks.
    • Designed and developed frameworks for handling very large and complex datasets through data ingestion, data modeling, and development of ETL pipelines, using repositories such as data lakes and data marts.
    • Back-end and front-end strengths: Scala, J2EE, JavaScript, jQuery, Shell, and PHP.
    Personally:
    • I love programming and am a self-starter: self-motivated and a strong problem-solving programmer.
    • I have strong communication and good interpersonal skills, and I am an amicable team player.
    # Please find all my project details in the attached PPT file (in the other experiences section below).
    Apache Hive
    Scala
    AWS Lambda
    Google Cloud Platform
    Big Data
    Amazon Web Services
    Apache Kafka
    Apache Spark
    Apache Hadoop
  • $10 hourly
    I am a passionate big data developer here on Upwork. My expertise is data processing solutions for large data sets. I have 5 years of experience as a developer, with solid experience in data engineering specializing in big data technologies. I have implemented big data processing frameworks using Apache Spark, Hive, and PySpark, and I have worked on optimizing existing solutions so applications run more efficiently (a minimal sketch of this kind of optimization appears after the skill tags below). I also have experience developing applications in Python and Java, and I can quickly deliver automation utilities and scripts in Python or shell.
    Apache Hive
    Google Cloud Platform
    Hive
    Unix
    Bash Programming
    Terraform
    Big Data
    PySpark
    Java
    Python
    Scala
    Apache Hadoop
    Apache Spark
    SQL
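    A minimal illustrative sketch of the kind of Spark/Hive optimization mentioned above: filter early so partition pruning keeps the scan narrow, and coalesce before writing to avoid producing many small output files. The table, partition column, and output path are hypothetical.

    ```python
    # Hypothetical sketch: warehouse.page_views and its columns are
    # placeholders for illustration only.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # Filter early: the predicate on the partition column (view_date) is
    # pushed down, so Spark scans only the matching Hive partitions.
    recent = (
        spark.table("warehouse.page_views")   # assumed partitioned by view_date
        .where("view_date >= '2024-01-01'")
        .select("user_id", "url", "view_date")
    )

    # Coalesce before writing to avoid thousands of tiny output files.
    recent.coalesce(32).write.mode("overwrite").parquet("/tmp/page_views_recent")

    # explain() prints the physical plan, useful to confirm pruning happened.
    recent.explain()
    ```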

How hiring on Upwork works

1. Post a job

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire an Apache Hive Developer near Pune, IN on Upwork?

You can hire an Apache Hive Developer near Pune, IN on Upwork in four simple steps:

  • Create a job post tailored to your Apache Hive Developer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Hive Developer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Hive Developer profiles and interview.
  • Hire the right Apache Hive Developer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Hive Developer?

Rates charged by Apache Hive Developers on Upwork can vary with a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Hive Developer near Pune, IN on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Hive Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Hive Developer team you need to succeed.

Can I hire an Apache Hive Developer near Pune, IN within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Hive Developer proposals within 24 hours of posting a job description.