Hire the best Apache Hive Developers in Hyderabad, IN

Check out Apache Hive Developers in Hyderabad, IN with the skills you need for your next job.
  • $35 hourly
    ════ Who Am I? ════
    Hi, nice to meet you! I'm Ajay, a Tableau and SQL specialist, Business Intelligence developer, and data analyst with half a decade of experience working with data. For the last few years I've been helping companies all over the globe achieve their data goals, and making friends along the way. If you're looking for someone who can understand your needs, collaboratively develop the best solution, and execute a vision, you've found the right person! Looking forward to hearing from you!
    ═════ What do I do? (Services) ═════
    ✔️ Tableau report development & maintenance
    - Pull data from sources such as SQL Server, Excel files, and Hive
    - Clean and transform data
    - Model relationships
    - Calculate and test measures
    - Create and test charts and filters
    - Build user interfaces
    - Publish reports
    ✔️ SQL
    - Built out data and reporting infrastructure from the ground up using Tableau and SQL to provide real-time insight into product and business KPIs
    - Identified procedural areas of improvement through customer data, using SQL to help improve a program's success rate by 7%
    - Converted Hive/SQL queries into Spark transformations using Spark RDDs and Scala (an illustrative sketch follows the skill list below)
    ═════ How do I work? (Method) ═════
    1️⃣ First, we need a plan: I will listen, take notes, analyze and discuss your goals and how to achieve them, and determine the costs, development phases, and time needed to deliver the solution.
    2️⃣ Clear and frequent communication: I provide regular project updates and am available to discuss important questions that come up along the way.
    3️⃣ Stick to the plan: I will deliver what we agreed upon, on time. If any unforeseen delay occurs, I will promptly let you know and provide a new delivery date.
    4️⃣ Deliver a high-quality product: my approach aims to deliver the most durable, secure, scalable, and extensible product possible. All development includes testing, documentation, and demo meetings.
    Apache Hive
    Python Script
    Scala
    Machine Learning
    Apache Spark
    Hive
    SQL Programming
    Business Intelligence
    Microsoft Excel
    Microsoft Power BI
    Tableau
    SQL
    Python
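    For illustration, here is a minimal sketch of what converting a Hive query into a Spark transformation can look like, shown with the PySpark DataFrame API rather than the Scala RDDs mentioned above; the table and column names (sales, region, amount) are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # enableHiveSupport() lets Spark resolve tables registered in the Hive metastore.
    spark = (SparkSession.builder
             .appName("hive-to-spark-sketch")
             .enableHiveSupport()
             .getOrCreate())

    # The original HiveQL, run as-is through Spark SQL:
    via_sql = spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")

    # The same query expressed as a DataFrame transformation:
    via_api = (spark.table("sales")
               .groupBy("region")
               .agg(F.sum("amount").alias("total")))

    via_api.show()

    Either form produces the same result; the DataFrame version is easier to compose and unit test within a larger pipeline.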
  • $60 hourly
    Nikhil is a Microsoft-certified Azure data engineer with 5+ years of experience in data engineering and big data. He has worked with a couple of Fortune 500 companies, developing and deploying their data solutions on Azure and helping them draw business insights from their data.
    Coding: SQL, Python, PySpark
    Azure: Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Data Lake, Azure Functions, and other Azure services
    Reporting: Power BI, Microsoft Office
    Apache Hive
    ETL
    Microsoft Azure
    Data Lake
    Data Warehousing
    Microsoft SQL Server
    Big Data
    PySpark
    Databricks Platform
    SQL
    Apache Spark
    Python
    Microsoft Excel
    Data Engineering
    Data Integration
  • $50 hourly
    With around 13 years of IT experience in data-driven applications, I excel at building robust data foundations for both structured and unstructured data from diverse sources. I also have expertise in efficiently migrating data lakes and pipelines from on-premise to cloud environments. My skills include designing and developing scalable ETL/ELT pipelines using technologies such as Spark, Kafka, PySpark, Hadoop, Hive, dbt, and Python, leveraging cloud services like AWS, Snowflake, dbt Cloud, Airbyte, BigQuery, and Metabase, along with a good understanding of containerization frameworks like Kubernetes and Docker. (An illustrative streaming-ingest sketch follows the skill list below.)
    Apache Hive
    Apache Airflow
    Databricks Platform
    Apache Spark
    Python
    Apache Hadoop
    PySpark
    Snowflake
    Amazon S3
    dbt
    Database
    Oracle PLSQL
    Unix Shell
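    As a hedged illustration of the kind of streaming ETL pipeline described above, here is a minimal PySpark Structured Streaming sketch that reads from Kafka and lands raw events in a lake path; the broker address, topic name, and S3 paths are placeholders, and the Spark-Kafka connector package is assumed to be on the classpath.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

    # Subscribe to a hypothetical Kafka topic.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")
              .option("subscribe", "orders")
              .load())

    # Kafka delivers key/value as binary; cast the payload to a string column.
    payloads = events.selectExpr("CAST(value AS STRING) AS payload")

    # Land raw payloads as Parquet; the checkpoint makes the job restartable.
    query = (payloads.writeStream
             .format("parquet")
             .option("path", "s3a://example-lake/raw/orders")
             .option("checkpointLocation", "s3a://example-lake/checkpoints/orders")
             .start())

    query.awaitTermination()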
  • $20 hourly
    With 6.5 years of experience working with huge datasets to solve complex business problems, I can write technical code and articulate it in simple business terms, and I have excellent communication skills. I am a full-stack data engineer.
    Tech stack
    Programming languages: Python, Scala, shell scripting
    Databases: MySQL, Teradata, and other RDBMSs
    Distributed systems: Hadoop ecosystem - HDFS, Hive, Spark, PySpark, Oozie
    Apache Hive
    Engineering & Architecture
    Big Data
    Linux
    RESTful API
    PySpark
    Scala
    Apache Hadoop
  • $56 hourly
    I am a big data and middleware developer with strong backend programming skills and cloud computing knowledge.
    Apache Hive
    J2EE
    Oracle
    Oracle Programming
    Oracle Database
    Google Cloud Platform
    Hive
    Java
    PySpark
    Apache Spark
  • $46 hourly
    I am a senior Azure data engineer. I have been working with technologies such as ADF, Databricks, Logic Apps, and HDInsight.
    Apache Hive
    SQL
    Kusto Query Language
    Microsoft Windows PowerShell
    Data Transformation
    Data Cleaning
    Data Engineering
    Apache Hadoop
    Apache Spark
    Core Java
    Python
    Microsoft Azure
    Databricks Platform
    PySpark
    Hive
  • $50 hourly
    • Senior data engineer with 9+ years of experience building data-intensive applications in retail, sports, and insurance.
    • Development of data processing pipelines using Spark with Scala.
    • Redesigned and migrated ETL pipelines from Hive to Spark for better performance; these flows serve files to around 3,000 partners daily/weekly.
    • Wrote unit tests using ScalaTest for the Spark functions developed as part of data processing pipelines, keeping code coverage above 80% at all times.
    • Performed the data analysis and validation required for troubleshooting data-related issues, and assisted in resolving data issues in production.
    • Implemented scripts to monitor data quality and ensured production data is 100% accurate.
    • Maintained data pipeline uptime of 99% while reading data from different sources, and scheduled alerts for various events.
    • Automated and scheduled data processing pipelines using Oozie.
    • Built streaming pipelines using Spark for ingesting data from Kafka topics.
    • Experienced with partitioning and bucketing in Hive, and designed both managed and external Hive tables for optimized performance (an illustrative sketch follows the skill list below).
    • Implemented multiple performance enhancements for HBase table reads/writes, improving pipeline performance by roughly 70x and drastically reducing storage size.
    • Good experience deploying pipelines with CI/CD tools like Drone and Vela.
    • Worked extensively with different Hadoop distributions, including CDH and Hortonworks.
    • Good working knowledge of AWS components (EC2, EMR, S3, etc.) and GCP.
    • Familiar with standard build and deployment tools such as IntelliJ, SBT, and Maven.
    Apache Hive
    Java
    Apache Kafka
    Apache Spark
    Apache Hadoop
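    As a hedged illustration of the partitioning, bucketing, and managed-versus-external distinction mentioned above, here is a minimal PySpark sketch that issues standard HiveQL DDL; all table, column, and path names are hypothetical, and on some Spark versions the bucketed Hive-serde statement may need to be run from Beeline instead.

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .enableHiveSupport()
             .getOrCreate())

    # Managed table: Hive owns the data; dropping the table deletes the files.
    # Partitioning prunes scans by date; bucketing speeds up joins on customer_id.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS orders_managed (
            order_id BIGINT,
            customer_id BIGINT,
            amount DOUBLE
        )
        PARTITIONED BY (order_date STRING)
        CLUSTERED BY (customer_id) INTO 32 BUCKETS
        STORED AS ORC
    """)

    # External table: Hive tracks only metadata; dropping it leaves the files in place.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS orders_external (
            order_id BIGINT,
            customer_id BIGINT,
            amount DOUBLE
        )
        PARTITIONED BY (order_date STRING)
        STORED AS ORC
        LOCATION '/data/warehouse/orders'
    """)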
  • $50 hourly
    Key Skills
    • Palantir Foundry: syncs, schedules, pipeline building, data engineering
    • Apache Spark, Scala, PySpark, Sqoop
    • Azure, AWS, Databricks
    • Cloudera ecosystem: Hue, Hive (Beeline), Impala shell
    • Project planning, team management
    More than 10 years of experience, with expertise in people management and IT. Currently engaged in developing and implementing big data and cloud technologies for my clients.
    • Results-oriented, self-managed, and detail-oriented, with excellent problem-solving skills.
    • Delivering service that meets SLAs and delights the customer.
    • Data engineering and Spark transformations; building pipelines using PySpark and HiveQL (an illustrative sketch follows the skill list below).
    • Performing data cleaning and advanced analytics on Palantir Foundry with Spark.
    • Data ingestion with the Sqoop tool.
    • Data visualization in Palantir Contour and Tableau.
    • Creating syncs from an eSFT server to Foundry.
    • Reviewing and managing existing code.
    • Achieving productivity targets.
    • Requirement gathering; coordinating with production and IT teams.
    • Data analysis, preparing dashboards, and providing key inputs to management.
    • MIS and insights; SQL analytics with Tableau visualization.
    • Advanced Excel, macros, report automation, business analysis, and root-cause analysis.
    Apache Hive
    Data Management
    Microsoft Azure
    Apache Impala
    Data Warehousing
    Jenkins
    Databricks Platform
    PySpark
    AWS Glue
    Apache Spark
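    As a hedged illustration of the PySpark/HiveQL pipeline building mentioned above, here is a minimal sketch that cleans a staging Hive table and publishes a curated, partitioned table; the database, table, and column names are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .enableHiveSupport()
             .getOrCreate())

    # Read a hypothetical raw table registered in the Hive metastore.
    raw = spark.table("staging.transactions")

    cleaned = (raw
               .dropDuplicates(["txn_id"])            # remove duplicate records
               .filter(F.col("amount").isNotNull())   # drop incomplete rows
               .withColumn("txn_date", F.to_date("txn_ts")))

    # Publish as a partitioned Hive table (the curated database must already exist).
    (cleaned.write
     .mode("overwrite")
     .partitionBy("txn_date")
     .saveAsTable("curated.transactions"))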
  • $30 hourly
    Overview
    💡 Expert in Generative AI and Large Language Models (LLMs) | Delivering cutting-edge AI-powered solutions tailored to your needs.
    🔍 With 17+ years of experience in AI/ML, I specialize in designing, training, and deploying LLM-based applications like ChatGPT, Bard, and custom models. Whether you're looking for chatbot development, content generation, fine-tuning models, or building end-to-end AI solutions, I'm here to make your project a success.
    My expertise includes:
    • Custom LLM development: fine-tuning GPT-based models for niche applications.
    • Chatbot design: building conversational agents for customer support, sales, and engagement.
    • Text & content generation: automating blogs, emails, and other content with high accuracy.
    • AI pipelines: end-to-end AI solution deployment on platforms like AWS, GCP, and Azure.
    • Data engineering: cleaning and curating datasets for model training.
    • API integration: integrating AI models with existing systems via RESTful APIs.
    Skills
    • Programming: Python, TensorFlow, PyTorch, Hugging Face, LangChain
    • LLM tools: OpenAI GPT-4, Anthropic Claude, Stability AI
    • Fine-tuning: LoRA, transfer learning, RLHF (Reinforcement Learning from Human Feedback)
    • Cloud platforms: AWS SageMaker, Google Vertex AI, Azure ML
    • Prompt engineering & optimization
    • Natural Language Processing (NLP) and computer vision integration
    Portfolio
    • Custom chatbot for e-commerce: built an AI assistant to handle 90% of customer queries, boosting response efficiency by 70%.
    • Healthcare LLM deployment: created a fine-tuned medical chatbot to provide reliable symptom explanations and triage support.
    • Automated content generation tool: developed an AI-based system for creating SEO-friendly blogs and articles at scale.
    • Financial report summarizer: designed an LLM tool for parsing financial data and generating concise, actionable summaries.
    Why choose me?
    ✔️ Proven track record of delivering AI projects that exceed expectations.
    ✔️ Clear communication and timely delivery.
    ✔️ Focused on creating solutions that drive real-world impact.
    Let's bring your AI vision to life! 🚀 (An illustrative text-generation sketch follows the skill list below.)
    Apache Hive
    Amazon Web Services
    Apache Hadoop
    Microsoft Azure
    AWS Glue
    Akka
    Snowflake
    Looker Studio
    BigQuery
    Google Analytics
    Big Data
    Cloudera
    Apache Spark
    Scala
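    As a hedged illustration of the LLM text-generation work described above, here is a minimal, self-contained sketch using the Hugging Face transformers pipeline API; the model name is just a small public example, not a claim about this freelancer's actual stack.

    from transformers import pipeline

    # "gpt2" is a small public model used purely for demonstration.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Write a one-line product description for a smart water bottle:"
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(outputs[0]["generated_text"])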
  • $35 hourly
    Helical IT is a company specializing in data stack solutions. We do extensive work implementing data lakes, data warehouses, data analytics, data pipelines, business intelligence, and generative AI services. To provide these services we can use an open-source tool stack (to help you reduce licensing costs and vendor lock-in), any of the most popular cloud vendors (AWS, Azure, and GCP), or a modern data stack with tools like Snowflake, Databricks, dbt, Airflow, and Airbyte.
    We have experience building all three generations of data stacks and solutions:
    Traditional data stack - canned reports - BI - designing the data warehouse - populating the DW using ETL tools
    2nd-gen data stack - designing the data lake - ETL - data warehouse - business intelligence - data science - ML
    Modern data stack - data lakehouse - ETL - business intelligence - data science - ML
    Some of the tools and technologies we have experience with include:
    BI: open source [Helical Insight, Jaspersoft, Pentaho, Metabase, Superset]; proprietary [Power BI, Tableau, QuickSight]
    DW platforms: Redshift, Vertica, BigQuery
    Data lake / lakehouse: Snowflake, Databricks, S3, AWS Lake, GCP, Dremio, Apache Iceberg, Hadoop
    Canned reports: Jaspersoft, Pentaho, Helical Insight
    ETL/ELT: Talend, Kettle, Glue, Spark, Python
    Transformation and orchestration: dbt, Airflow, Airbyte (an illustrative Airflow sketch follows the skill list below)
    AI services: generative AI (Hugging Face, TensorFlow, PyTorch, LangChain), NLP & chatbot development
    Owing to our strong technical expertise, we have been a technology partner of various tools, including dbt, Snowflake, AWS, Jaspersoft, and Pentaho. We have multiple resources who are certified and have the relevant skills.
    Whether you are looking for support or new features in a legacy implementation, migrating to a modern data stack from one of the older generations of tools, or starting a new greenfield implementation, we at Helical can help.
    Over the last 10+ years we have worked with Fortune 500 clients, government organizations, SMEs, and others, and have been part of 85+ DWBI implementations across various domains and geographies.
    - Fortune 500: Unilever, CA Technologies, Tata Communications, Technip, Smiths Detection, Mutual of America
    - Unicorns: Mindtickle, Fractal Analytics
    - Government: Government of Micronesia, Government of the Marshall Islands, Government of Kiribati, INRA France
    - Energy: Vortecy, Wipro Ecoenergy
    - Education: University of Bridgeport, Envision Global, Nexquare, KidsXAP
    - Insurance: 4sightBI, Hcentive
    - Social media analytics: UnifiedSocial
    - HR: SyncHR, Sage Human Capital
    - Data analytics: Numerify, Syntasa
    - Supply chain: New Age Global, Canadian Bearings, Autoplant
    - FinTech: Wealthhub Solutions
    - Manufacturing: Unidesign Jewellery
    - Clinical trials: Inductive Quotient, Radiant Sage, Reify Health
    Please reach out to us to learn more about our implementations.
    Apache Hive
    Data Modeling
    GIS
    Talend Data Integration
    Snowflake
    Data Lake
    dbt
    Jaspersoft Studio
    Data Warehousing
    Big Data
    Talend Open Studio
    Pentaho
    Databricks Platform
    Apache Airflow
    Apache Hadoop
    Apache Spark
    Apache Cassandra
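    As a hedged illustration of the dbt-plus-Airflow transformation layer listed above, here is a minimal Airflow 2.x DAG sketch that triggers a daily dbt run; the DAG id and project path are placeholders, and schedule_interval is the classic parameter name (renamed to schedule in newer Airflow releases).

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # A daily job that shells out to dbt; the project directory is hypothetical.
    with DAG(
        dag_id="daily_dbt_run",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/my_project",
        )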

How hiring on Upwork works

1. Post a job

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire an Apache Hive Developer near Hyderabad, IN, on Upwork?

You can hire an Apache Hive Developer near Hyderabad, IN, on Upwork in four simple steps:

  • Create a job post tailored to your Apache Hive Developer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Hive Developer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Hive Developer profiles and interview your top candidates.
  • Hire the right Apache Hive Developer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Hive Developer?

Rates charged by Apache Hive Developers on Upwork can vary with a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Hive Developer near Hyderabad, IN, on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Hive Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Hive Developer team you need to succeed.

Can I hire an Apache Hive Developer near Hyderabad, IN, within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Hive Developer proposals within 24 hours of posting a job description.