Hire the best Apache Hive Developers in Hyderabad, IN

Check out Apache Hive Developers in Hyderabad, IN with the skills you need for your next job.
  • $35 hourly
    ════ Who Am I? ════
    Hi, nice to meet you! I'm Ajay, a Tableau and SQL specialist, Business Intelligence developer, and data analyst with half a decade of experience working with data. For the last few years I've been helping companies all over the globe achieve their data goals and making friends along the way. If you're looking for someone who can understand your needs, collaboratively develop the best solution, and execute a vision, you have found the right person! Looking forward to hearing from you!
    ════ What do I do? (Services) ════
    ✔️ Tableau report development & maintenance
    - Pull data from SQL Server, Excel files, Hive, etc.
    - Clean and transform data
    - Model relationships
    - Calculate and test measures
    - Create and test charts and filters
    - Build user interfaces
    - Publish reports
    ✔️ SQL
    - Built out the data and reporting infrastructure from the ground up using Tableau and SQL to provide real-time insights into product and business KPIs
    - Identified procedural areas of improvement through customer data, using SQL to help improve the profitability of a program by 7%
    - Converted Hive/SQL queries into Spark transformations using Spark RDDs and Scala
    ════ How do I work? (Method) ════
    1️⃣ First, we need a plan: I will listen, take notes, analyze and discuss your goals and how to achieve them, and determine the costs, development phases, and time needed to deliver the solution.
    2️⃣ Clear and frequent communication: I provide regular project updates and will be available to discuss important questions that come up along the way.
    3️⃣ Stick to the plan: I will deliver what we agreed upon, on time. If any unforeseen delay happens, I will promptly let you know and provide a new delivery date.
    4️⃣ Deliver a high-quality product: my approach aims to deliver the most durable, secure, scalable, and extensible product possible. All development includes testing, documentation, and demo meetings.
    Apache Hive
    Python Script
    Scala
    Machine Learning
    Apache Spark
    Hive
    SQL Programming
    Business Intelligence
    Microsoft Excel
    Microsoft Power BI
    Tableau
    SQL
    Python
  • $60 hourly
    Nikhil is a Microsoft-certified Azure data engineer with 5+ years of experience in data engineering and big data. He has worked for a couple of Fortune 500 companies, developing and deploying their data solutions in Azure and helping them find business insights in their data.
    Coding: SQL, Python, PySpark
    Azure:
    - Azure Data Factory
    - Azure Databricks
    - Azure Synapse Analytics
    - Azure Data Lake
    - Azure Functions and other Azure services
    Reporting: Power BI, Microsoft Office
    Apache Hive
    ETL
    Microsoft Azure
    Data Lake
    Data Warehousing
    Microsoft SQL Server
    Big Data
    PySpark
    Databricks Platform
    SQL
    Apache Spark
    Python
    Microsoft Excel
    Data Engineering
    Data Integration
  • $70 hourly
    • A creative, hands-on engineer with around 12 years of experience, exceptional technical skills, and a business-focused outlook. Adept at analyzing information-system needs, evaluating end-user requirements, and custom-designing solutions for complex information systems management.
    • Vast experience with data-driven applications: creating data pipelines, building interfaces between upstream and downstream applications, and tuning the pipelines.
    • Interacts with business teams to discuss and understand the data flow, and designs data pipelines to requirements.
    • Experience driving a team to meet target deliverables.
    • Strong experience creating scalable and efficient big data pipelines using Spark, Hadoop, Hive, PySpark, Python, Snowflake, dbt, and Airflow.
    • Commendable experience in cloud data warehousing with Snowflake: Snowflake development, data sharing, and advanced Snowflake features.
    • Strong experience integrating Snowflake with dbt and creating data layers on the Snowflake warehouse using dbt.
    • Expert-level SQL skills.
    • Strong exposure to Python.
    • Strong experience with Hadoop.
    • Strong experience implementing ETL pipelines using Spark, and tuning Spark applications.
    • Extensively used Spark SQL to clean data and perform calculations on datasets.
    • Strong experience with Hive, including Hive query tuning.
    • Worked with big data file formats such as Parquet and ORC.
    • Familiar with Azure Databricks; decent exposure to Airbyte, BigQuery, and Terraform.
    • Expertise in analytical functions.
    • Strong exposure to turning data into business insights.
    • Decent knowledge of data lake and data mart concepts.
    • Experience creating tables, views, materialized views, and indexes using SQL and PL/SQL.
    • In-depth knowledge of PL/SQL, with experience constructing tables, joins, subqueries, and correlated subqueries in SQL*Plus.
    • Proficient in developing PL/SQL programs using advanced performance-enhancing concepts such as bulk processing, collections, and dynamic SQL.
    • Sound knowledge of Oracle materialized views.
    • Effective use of indexes, collections, and analytical functions.
    • Sound knowledge of Oracle SQL*Loader and external tables.
    • Good knowledge of and exposure to designing and developing user-defined stored procedures and functions.
    • Experience using the UTL_FILE, DBMS_JOB, and DBMS_SCHEDULER packages.
    • Skilled in handling critical application and business-validation trigger logic.
    • Good knowledge of trapping runtime errors with suitable exception handlers.
    Apache Hive
    Apache Airflow
    Databricks Platform
    Apache Spark
    Python
    Apache Hadoop
    PySpark
    Snowflake
    Amazon S3
    dbt
    Database
    Oracle PLSQL
    Unix Shell
  • $10 hourly
    Technical Experience
    * Hands-on experience with the Hadoop ecosystem, including Hive, Sqoop, MapReduce, and the basics of Kafka
    * Excellent knowledge of Hadoop components such as HDFS, ResourceManager, NodeManager, NameNode, DataNode, and the MapReduce programming paradigm
    * Expertise in managing big data processing using Apache Spark and its various components
    * Load and transform large sets of structured, semi-structured, and unstructured data from relational database systems to HDFS and vice versa using Sqoop
    * Data ingestion and refresh from RDBMS to HDFS using Apache Sqoop, with data processing through Spark Core and Spark SQL
    * Proficiency in Scala and PySpark for high-level data processing, with end-to-end knowledge of project implementation
    * Designing and creating Hive external tables with partitions and bucketing, using a shared metastore instead of Derby
    Apache Hive
    Amazon Web Services
    Visualization
    Apache Spark
    Apache Kafka
    SQL
    Apache Hadoop
  • $20 hourly
    With 6.5 years of experience working with huge datasets to solve complex business problems, I can write technical code and articulate it in simple business terms, with excellent communication skills. I am a full-stack data engineer.
    Tech stack:
    Programming languages: Python, Scala, shell scripting
    Databases: MySQL, Teradata, and other RDBMSs
    Distributed systems: Hadoop ecosystem - HDFS, Hive, Spark, PySpark, Oozie
    Apache Hive
    Engineering & Architecture
    Big Data
    Linux
    RESTful API
    PySpark
    Scala
    Apache Hadoop
  • $250 hourly
    PERSONAL PROFILE: An ambitious individual who enjoys taking on responsibility and has a successful background in retail sales. Smart in appearance, with impeccable personal hygiene and a relaxed, sociable demeanor. Someone who enjoys spending time with clients and is constantly looking for opportunities to build relationships with them. Now looking for a suitable position with a company where advancement is based on individual merit and performance rather than seniority.
    Apache Hive
    Apache Hadoop
    MySQL
    Hive
    Mechanical Engineering
  • $45 hourly
    Big Data Developer with around 4 years of IT industry experience in big data ecosystems (Hadoop, Spark, Scala, Hive), specializing in the health care domain.
    PROFILE SUMMARY
    * 4 years of IT industry experience in big data ecosystems (Hadoop/Spark, Scala, Hive), specializing in the health care domain
    * Worked extensively with the Hadoop/Spark framework and its ecosystem: HDFS, Hive, Spark Core, Spark SQL, and Scala
    * Experience with deployment using CI/CD pipelines from Git
    * Developed Hive scripts to handle business transformations
    * Worked extensively with Hive DDLs and Hive Query Language (HQL)
    * Created RDDs and DataFrames for the required input data and performed data transformations using Spark Core
    * Experience with Spark Core and Scala, creating RDDs to perform aggregations, grouping, etc. in Spark
    * Basic knowledge of streaming tools such as Kafka
    Apache Hive
    PySpark
    Python
    Scala
    Hive
    Scripting
    Apache Hadoop
  • $56 hourly
    I am a big data and middleware developer with good backend programming skills and knowledge of cloud computing.
    Apache Hive
    J2EE
    Oracle
    Oracle Programming
    Oracle Database
    Google Cloud Platform
    Hive
    Java
    PySpark
    Apache Spark
  • $30 hourly
    Data analytics consultant. Strong in data architecture, design, and MVP delivery. Expert skills in Python, PySpark, data migration, advisory and consulting, machine learning, and cloud and hybrid environments.
    Tech stack: Python, PySpark, Scala, Kafka, AWS, Azure, Docker, Kubernetes, and NoSQL
    - Architect and designer with strong data analytics experience
    - Data pipeline building and data migration skills
    - Worked with ETL tools and big data: Hadoop with Spark experience
    - Build insights using Python with various tools, platforms, and systems such as Apache Spark, Kafka, HBase, Redis, and Akka
    - Work with containerization: Docker, Kubernetes, CI/CD
    - Development in a variety of languages, including Python, Scala, JavaScript, Hive, Spark, Sqoop, Node.js, and shell scripting
    - Strong experience with HBase, Cassandra, and Elasticsearch
    - Design, develop, and implement complex reports and analytics
    - Use advanced data and coding techniques to provide concise and compelling summaries of analysis findings in reports and presentations
    - Excellent programming skills on the JVM using Java/Scala
    - Comfortable working with SQL/NoSQL
    - Comfortable working in the Unix CLI and with cloud infrastructure
    - An advocate of agile practices for rapid development of quality software, such as CI, TDD, and automated deployment
    - Write Scala programs that scale to petabytes of data and support millions of transactions a second
    - Write and review pull requests in Git
    - Leverage various data and technology resources to augment analysis
    - Experience with Spark, the Hadoop ecosystem, and similar frameworks
    - Experience with Kafka
    Apache Hive
    Amazon Web Services
    Apache Hadoop
    Microsoft Azure
    AWS Glue
    Akka
    Snowflake
    Looker Studio
    BigQuery
    Google Analytics
    Big Data
    Cloudera
    Apache Spark
    Scala
  • $25 hourly
    I have around 11 years of professional experience in IT, including analysis, architecture design, coding, testing, implementation, and training in Python and big data technologies, working with Apache Hadoop ecosystem components, Spark, and Amazon Web Services (AWS).
    Apache Hive
    AWS Application
    SQL Programming
    Unix Shell
    Big Data
    Python
    Amazon Redshift
    AWS Glue
    Amazon ECS
    Apache Airflow
    Terraform
    AWS Lambda
    Apache Spark
    Apache Hadoop
  • $5 hourly
    Seasoned Google professional with 9 years of experience delivering exceptional customer solutions across diverse industries. Proven track record in cloud migration, data pipeline development, and generative AI, delivering business impact. Let's collaborate and elevate your project's success!
    Apache Hive
    Presentation Design
    Oracle PLSQL
    PPTX
    Apache Airflow
    Amazon Web Services
    Google Cloud Platform
    BigQuery
    Vertex AI
    LLM Prompt Engineering
    Apache Spark
    Apache Hadoop
    Big Data
    Python

How hiring on Upwork works

1. Post a job (it’s free)

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire an Apache Hive Developer near Hyderabad, IN on Upwork?

You can hire an Apache Hive Developer near Hyderabad, IN on Upwork in four simple steps:

  • Create a job post tailored to your Apache Hive Developer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Hive Developer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Hive Developer profiles and interview your candidates.
  • Hire the right Apache Hive Developer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Hive Developer?

Rates charged by Apache Hive Developers on Upwork can vary with a number of factors including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Hive Developer near Hyderabad, IN on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Hive Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Hive Developer team you need to succeed.

Can I hire an Apache Hive Developer near Hyderabad, IN within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Hive Developer proposals within 24 hours of posting a job description.