Hire the best Apache Hive Developers in Bengaluru, IN

Check out Apache Hive Developers in Bengaluru, IN with the skills you need for your next job.
  • $40 hourly
    🚀 Greetings! 🚀 I'm a seasoned Senior Data Engineer with a strong background in architecting and implementing sophisticated data solutions that drive decision-making and business intelligence. With a knack for data wrangling, transformation, normalization, and crafting end-to-end data pipelines, I bring a wealth of expertise aimed at optimizing your data infrastructure for peak performance and insight generation.
    🔍 What Sets Me Apart? 🔍
    Proven track record: Successfully deployed multiple complex data pipelines using industry-standard tools like Apache Airflow and Apache Oozie (a minimal sketch of such a DAG appears after this profile's skill list), demonstrating my ability to handle projects of any scale.
    Fortune 500 experience: Contributed significantly to data platform teams at renowned companies, tackling intricate data challenges, managing voluminous datasets, and improving data flow efficiency.
    Holistic skill set: My proficiency isn't limited to engineering; I also excel in Business Intelligence, ETL processes, and crafting complex SQL queries, ensuring a comprehensive approach to data management.
    Efficiency & simplicity: I prioritize solutions that are not only effective but also straightforward and maintainable, ensuring long-term success and ease of use.
    🛠 Tech Arsenal 🛠
    Cloud platforms: GCP (Google Cloud Platform) and AWS (Amazon Web Services), enabling seamless data operations in the cloud.
    Programming languages: Java, Scala, and Python, offering versatility across data engineering challenges.
    Data engineering tools: Spark, PySpark, Kafka, and more, for building robust data processing applications.
    Data warehousing: AWS Athena, Google BigQuery, and Snowflake, for scalable and efficient data storage.
    Orchestration & scheduling: Complex workflows with Airflow and Oozie, plus containerization with Docker.
    🌟 Why Collaborate With Me? 🌟
    Beyond technical skill, I am detail-oriented, organized, and highly responsive, prioritizing clear communication and project efficiency. I am passionate about unlocking the potential of data to fuel business growth and innovation. Let's embark on this data-driven journey together! Connect with me to discuss how we can elevate your data infrastructure to new heights.
    Apache Hive
    Apache Airflow
    Apache Kafka
    Data Warehousing
    Data Lake
    ETL Pipeline
    ETL
    AWS Lambda
    AWS Glue
    Microsoft Azure
    Data Integration
    Data Transformation
    PySpark
    SQL
    Python
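    For context, here is a minimal sketch of the kind of daily extract-transform-load DAG such Airflow pipelines are built from. It assumes Apache Airflow 2.4+; the dag_id, task ids, and placeholder callables are hypothetical, not taken from this freelancer's work.

    ```python
    # A minimal daily ETL DAG sketch, assuming Apache Airflow 2.4+.
    # All names below are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        print("extract: pull raw records from the source system")


    def transform():
        print("transform: clean and normalize the extracted records")


    def load():
        print("load: write the transformed records to the warehouse")


    with DAG(
        dag_id="example_etl_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",   # run once per day
        catchup=False,       # skip backfilling past runs
    ) as dag:
        # The >> operator declares ordering: extract, then transform, then load.
        (
            PythonOperator(task_id="extract", python_callable=extract)
            >> PythonOperator(task_id="transform", python_callable=transform)
            >> PythonOperator(task_id="load", python_callable=load)
        )
    ```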
  • $10 hourly
    I hold a Bachelor's degree in Computer Science and have hands-on experience using Java and C++ to design and implement software applications. I work as a Software Engineer (SDE) at a well-known fintech startup, where I use Java and C++ extensively in my day-to-day work. I have experience with advanced big data frameworks such as Apache Hadoop, Apache Spark, and Apache Hive. I also work as a Subject Matter Expert at Chegg, where I help students with their doubts and assignments in the field of Computer Science, and I have over a year of teaching experience.
    Apache Hive
    AWS Development
    Rust
    Golang
    Python
    LLM Prompt Engineering
    Data Engineering
    C++
    Spring Boot
    Core Java
    Apache Hadoop
    Data Structures
    Apache Spark
    MySQL
  • $15 hourly
    I am a Big Data Engineer with expertise in Hadoop (Cloudera and Hortonworks distributions) and proficiency in Azure data services. I have good experience with trending, popular tools and technologies including Azure Data Factory, Azure Logic Apps, Azure Function Apps, Azure Event Hub, Azure Service Bus, and Azure SQL DB, as well as Apache Spark, Apache NiFi, Apache Kafka, and Apache Hive (a short sketch of creating a Hive table from Spark follows this profile's skill list). I have strong knowledge of programming languages such as Java, Scala, and Python, and good knowledge of SAP processes.
    Apache Hive
    Microsoft Azure
    ETL Pipeline
    Apache Cassandra
    Apache Hadoop
    Database Design
    Apache Spark
    Apache Kafka
    Apache NiFi
    Elasticsearch
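    As a point of reference for the Hive work mentioned above, this is a minimal sketch of defining and loading a partitioned Hive table from Spark. It assumes a Spark build with Hive support; the database, table, and column names are hypothetical.

    ```python
    # Sketch: create and load a partitioned Hive table from Spark,
    # assuming Spark with Hive support. All names are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-table-sketch")
        .enableHiveSupport()   # back tables with the Hive metastore
        .getOrCreate()
    )

    spark.sql("CREATE DATABASE IF NOT EXISTS analytics")

    # A partitioned, ORC-backed table: partitioning by date keeps scans cheap.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS analytics.events (
            user_id BIGINT,
            event_type STRING
        )
        PARTITIONED BY (event_date STRING)
        STORED AS ORC
    """)

    # Stage a tiny in-memory DataFrame and load it into one partition.
    spark.createDataFrame(
        [(1, "click"), (2, "view")], ["user_id", "event_type"]
    ).createOrReplaceTempView("staging_events")

    spark.sql("""
        INSERT INTO analytics.events PARTITION (event_date = '2024-01-01')
        SELECT user_id, event_type FROM staging_events
    """)
    ```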
  • $40 hourly
    I am a Senior Data Engineer with extensive expertise in data wrangling, transformation, normalization, and building comprehensive end-to-end data pipelines. My skills also include proficiency in Business Intelligence, ETL processes, and writing complex SQL queries. I have successfully implemented multiple intricate data pipelines using tools like Apache Airflow and Apache Oozie in previous projects.
    I have contributed to the data platform teams at Fortune 500 companies, where my role involved solving complex data issues, managing large datasets, and optimizing data streams for better performance and reliability. I prioritize reliability, efficiency, and simplicity in my work, ensuring that the data solutions I provide are not just effective but also straightforward and easy to maintain. Over the years, I have worked with a variety of major databases, programming languages, and cloud platforms, accumulating a wealth of experience and knowledge in the field.
    Skills:
    𝗖𝗹𝗼𝘂𝗱: GCP (Google Cloud Platform), AWS (Amazon Web Services)
    𝗣𝗿𝗼𝗴𝗿𝗮𝗺𝗺𝗶𝗻𝗴 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲: Java, Scala, Python
    𝗗𝗮𝘁𝗮 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴: Spark, PySpark, Kafka, Crunch, MapReduce, Hive, HBase, AWS Glue
    𝗗𝗮𝘁𝗮-𝘄𝗮𝗿𝗲𝗵𝗼𝘂𝘀𝗶𝗻𝗴: AWS Athena, Google BigQuery, Snowflake, Hive
    𝗦𝗰𝗵𝗲𝗱𝘂𝗹𝗲𝗿: Airflow, Oozie
    𝗢𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻: Docker
    I am highly attentive to detail, organized, efficient, and responsive. Let's connect.
    Apache Hive
    Data Warehousing & ETL Software
    API Integration
    Apache Airflow
    Apache Spark
    Apache Hadoop
    Apache Kafka
    PySpark
    ETL Pipeline
    Data Engineering
    Data Preprocessing
    Data Integration
    Python
    SQL
    Data Transformation
  • $15 hourly
    Specialties: big data technology, Spark, Databricks, Azure Synapse Analytics services, AWS, Hive, ETL, data lake, and Delta Lake. Languages: Scala, Java, Python (intermediate), SQL, and NoSQL databases. Academic project expert for all universities.
    Apache Hive
    Oracle
    ETL
    Oracle PLSQL
    Big Data
    SQL
    Java
    Apache Kafka
    Apache Hadoop
    Apache Spark
  • $30 hourly
    I am a dedicated and results-driven Data Engineer with a passion for transforming complex data into valuable insights and actionable results. With 4 years of experience in the industry, I have honed my skills in designing, developing, and implementing effective data systems and pipelines using a range of tools, including Apache Spark, Apache Hadoop, and Snowflake. My deep understanding of data warehousing, ETL processes, and data analysis has enabled me to deliver innovative solutions that drive business growth and competitive advantage. I am committed to staying up to date with the latest technologies and industry trends, always seeking new and better ways to turn data into meaningful insights.
    Apache Hive
    Data Analytics
    Big Data
    Data Warehousing
    Google Analytics
    Apache Spark MLlib
    Apache Airflow
    Apache Kafka
    Data Mining
    Data Structures
    Apache Spark
    Data Analysis
    Python
    SQL
    ETL Pipeline
  • $15 hourly
    Creative, focused, and quick-learning professional with expertise in analyzing businesses using strong analytical skills.
    Apache Hive
    Microsoft Power BI
    MySQL Programming
    Microsoft Azure SQL Database
    Microsoft Azure
    Apache Hadoop
    Apache Spark
    Databricks Platform
    Python
    Apache Kafka
  • $40 hourly
    With almost 5 years of experience, I currently serve as a Data Engineer at Infosys and am adept in Hadoop, Hive, Spark, Unix/Azure shell scripting, Azure BI, and Python. I have demonstrated proficiency in designing, implementing, and optimizing data processing pipelines in diverse environments, with a proven track record of leveraging advanced technologies to drive actionable insights and facilitate informed decision-making. I combine strong analytical skills with a collaborative approach to problem-solving, contributing to the success of data-driven initiatives.
    Apache Hive
    Unix
    SQL Programming
    Big Data
    Apache Hadoop
  • $500 hourly
    EXPERIENCE SUMMARY
    * 6.5 years of experience in senior software testing, including manual, ETL, big data, DWH, and database testing.
    * Good knowledge of data warehouse concepts such as star schema, snowflake schema, fact tables, and dimension tables.
    * Good knowledge of different testing types: smoke, functional, GUI, integration, re-testing, regression, sanity, compatibility, and end-to-end testing.
    * Experienced in testing API services with the Postman tool using GET, POST, PUT, and DELETE methods.
    * Very good at web application testing.
    * Hands-on experience in test planning, test design, test execution, defect tracking, and defect reporting.
    * Experienced in deriving test scenarios and writing test cases in JIRA.
    Apache Hive
    QA Testing
    ETL
    Agile Project Management
    Agile Software Development
    Apache Hadoop
    QA Automation
    SQL
    Database Testing
    Manual Testing
  • $15 hourly
    # 14 years of industry experience: 10 years in Java development and 4 years as a big data developer.
    # 13-month Big Data Engineering program (2020-2021) from BITS Pilani in collaboration with upGrad, with an 8.45 CGPA.
    My technical skill set:
    # Big data technologies: Hadoop, HDFS, MapReduce, Sqoop, Hive, Oozie, Kafka, Spark SQL/Streaming/ML
    # ML techniques: regression, classification, and clustering using Spark ML
    # Programming: Java, Scala, SQL, shell scripting
    # Databases: Oracle, MySQL, HBase, Hive
    # Cloud: AWS and Cloudera
    *** Key independent projects (published on my GitHub profile) ***
    Project: Recommendation Engine
    # Built a recommendation engine using collaborative filtering on clickstream song data to create clusters of related users.
    # Achieved a notification list targeted to selected users on a song's release, with an 80% accuracy rate.
    Technology: Spark ML, Java 1.8, AWS S3 and EC2
    Project: Stock data analysis on streaming data (see the streaming sketch after this profile's skill list)
    # Consumed stock data in JSON format from a Kafka topic on a 1-minute batch interval.
    # Calculated the average closing price, the difference between average closing and opening prices, and the total traded volume of each stock for every 10-minute window.
    Technology: Spark Streaming with Java 1.8, with Kafka as the client
    Project: Trending songs on the Saavn dataset
    # Achieved 82% accuracy in predicting the top 100 trending songs over 10 days by building a data pipeline of MapReduce programs to filter and analyze 44 GB of Saavn stream records. The dataset was stored in S3, with AWS EC2 instances as the compute component.
    Technology: Java, MapReduce, AWS S3 and EC2
    Project: Health analytics on India Annual Health Survey data
    # Ingestion, cleanup, benchmarking of file formats, analysis, and visualization (charts via Hue) on the Indian government's Annual Health Survey data, with AWS RDS as the input source.
    Technology: ETL tools (Sqoop, Hive, Hue, HBase), AWS RDS
    Apache Hive
    Sqoop
    Apache HBase
    Amazon EC2
    Spring Framework
    Cloudera
    MapReduce
    Databricks Platform
    Java
    Scala
    Apache Kafka
    Apache Spark
    Apache Hadoop
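    The stock-analysis project described above can be illustrated with a short sketch. The original used Spark Streaming with Java, but the same windowed aggregation looks like this in PySpark Structured Streaming; the broker address, topic name, and schema are assumed for illustration, and the job needs the spark-sql-kafka connector package.

    ```python
    # Sketch of the 10-minute stock aggregation in PySpark Structured
    # Streaming. Broker, topic, and schema are hypothetical.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import (DoubleType, StringType, StructField,
                                   StructType, TimestampType)

    spark = SparkSession.builder.appName("stock-stream-sketch").getOrCreate()

    schema = StructType([
        StructField("symbol", StringType()),
        StructField("open", DoubleType()),
        StructField("close", DoubleType()),
        StructField("volume", DoubleType()),
        StructField("ts", TimestampType()),
    ])

    # Parse JSON ticks from a Kafka topic against the schema above.
    ticks = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
        .option("subscribe", "stock-ticks")                   # hypothetical topic
        .load()
        .select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
        .select("t.*")
    )

    # Per symbol and 10-minute window: average close, close-minus-open
    # spread, and total traded volume -- the three metrics the project computed.
    stats = ticks.groupBy(F.window("ts", "10 minutes"), "symbol").agg(
        F.avg("close").alias("avg_close"),
        (F.avg("close") - F.avg("open")).alias("avg_close_minus_open"),
        F.sum("volume").alias("total_volume"),
    )

    stats.writeStream.outputMode("complete").format("console").start().awaitTermination()
    ```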
  • $15 hourly
    I have worked as a Data Engineer for a couple of clients and as a Backend Engineer for one client. I am good at AWS and Python scripting, and I hold the AWS Data Analytics, AWS Certified Developer – Associate, and AWS Cloud Practitioner certifications. I can independently create data pipelines for reporting and data science. Skills: Amazon Web Services, Hadoop, Hive, MySQL, Python, PySpark, Linux, PostgreSQL.
    Apache Hive
    Amazon S3
    AWS Glue
    Hive
    PostgreSQL
    Apache Spark
    Apache Hadoop
    Data Engineering
    MySQL Programming
    Python Script
    Amazon Web Services
    MySQL
    PySpark
    Big Data
    Python
  • $30 hourly
    WORK EXPERIENCE
    Career summary: 9+ years of experience, with more than 7 years of industry experience in Hadoop and ETL/ELT for big data and analytics with Microsoft Azure. Skilled in design, development, and deployment, along with onboarding data to cloud services and operationalizing end-to-end orchestration of data pipelines. Extensively worked on Azure Data Factory, Data Lake, Azure DevOps, Databricks, and other analytics services, as well as streaming data, with additional experience in Java-based applications and Talend ETL for big data.
    Career objective: To work as a proactive, client-oriented engineer designing IT solutions with my supporting technical skills.
    Apache Hive
    Data Lake
    PySpark
    Data Migration
    Data Engineering
    Databricks Platform
    Apache Hadoop
    Big Data
    Azure Cosmos DB
    Microsoft Azure
    Azure DevOps
    Microsoft Azure SQL Database
    Talend Open Studio
    ETL
    ETL Pipeline
  • $25 hourly
    I'm a Big Data Engineer with 1 year and 8 months of experience designing and building data-intensive processing pipelines for analytics platforms, with attention to scalability and efficient data storage, processing, and warehousing. I know PySpark, Hadoop, Hive, and Kafka, along with AWS cloud data warehousing and processing services such as Amazon Redshift, AWS EMR, AWS Glue, and AWS Kinesis Data Streams (a short PySpark-on-Hive sketch follows this profile's skill list). I am also proficient in Python, shell scripting, Linux, and Docker. Contact me for end-to-end data engineering projects for your business needs, from data collection to pushing processed data into dashboards.
    Apache Hive
    Amazon EC2
    Amazon Athena
    Amazon Redshift
    AWS Glue
    Data Warehousing & ETL Software
    Distributed Computing
    SQL
    Apache Hadoop
    Python
    PySpark
    Big Data
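    For readers unfamiliar with how PySpark and Hive fit together in pipelines like the ones described above, this is a minimal sketch of querying a Hive table from PySpark; the sales.orders table and its columns are hypothetical.

    ```python
    # Sketch: query a Hive table from PySpark. The database, table,
    # and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-query-sketch")
        .enableHiveSupport()   # resolve table names via the Hive metastore
        .getOrCreate()
    )

    # HiveQL runs through spark.sql once Hive support is enabled.
    daily_totals = spark.sql("""
        SELECT order_date, SUM(amount) AS total_amount
        FROM sales.orders
        WHERE order_date >= '2024-01-01'
        GROUP BY order_date
        ORDER BY order_date
    """)
    daily_totals.show()
    ```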

How hiring on Upwork works

1. Post a job (it’s free)

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire an Apache Hive Developer near Bengaluru on Upwork?

You can hire an Apache Hive Developer near Bengaluru on Upwork in four simple steps:

  • Create a job post tailored to your Apache Hive Developer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Hive Developer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Hive Developer profiles and interview them.
  • Hire the right Apache Hive Developer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Hive Developer?

Rates charged by Apache Hive Developers on Upwork can vary with a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Hive Developer near Bengaluru on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Hive Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Hive Developer team you need to succeed.

Can I hire an Apache Hive Developer near Bengaluru within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Hive Developer proposals within 24 hours of posting a job description.