Hire the best Apache Hive Developers in Bengaluru, IN
Check out Apache Hive Developers in Bengaluru, IN with the skills you need for your next job.
- $15 hourly
- 5.0/5
- (3 jobs)
I am a Big Data Engineer with expertise in the Hadoop, Cloudera, and Hortonworks distributions, as well as proficiency in Azure data services. I have good experience with popular, in-demand tools and technologies. Azure: Azure Data Factory, Azure Logic Apps, Azure Function Apps, Azure Event Hub, Azure Service Bus, Azure SQL DB. Apache: Apache Spark, Apache NiFi, Apache Kafka, Apache Hive. I have strong knowledge of programming languages such as Java, Scala, and Python, and good knowledge of SAP processes.
Apache Hive, Microsoft Azure, ETL Pipeline, Apache Cassandra, Apache Hadoop, Database Design, Apache Spark, Apache Kafka, Apache NiFi, Elasticsearch
- $15 hourly
- 4.6/5
- (48 jobs)
I have a Bachelor’s degree in Computer Science and hands-on experience using Java and C++ to create and implement software applications. I work as a software engineer (SDE) at a well-known fintech startup, where I use Java and C++ extensively in my day-to-day work. I have experience working with advanced big data frameworks such as Apache Hadoop, Apache Spark, and Apache Hive. I also work as an SME at Chegg, where I help students with their doubts and assignments in the field of Computer Science, and I have 1+ year of teaching experience.
Apache Hive, PyTorch, AWS Development, Rust, Golang, Python, LLM Prompt Engineering, Data Engineering, C++, Spring Boot, Core Java, Apache Hadoop, Data Structures, Apache Spark, MySQL
- $15 hourly
- 5.0/5
- (10 jobs)
Specialties: Big Data technology, Spark, Databricks, Azure Synapse Analytics, Azure, AWS, Hive, ETL, data lakes, and Delta Lake. Languages: Scala, Java, Python. SQL and NoSQL databases. Databricks Apache Spark certified. Azure DevOps.
Apache Hive, Oracle, ETL, Oracle PL/SQL, Big Data, SQL, Java, Apache Kafka, Apache Hadoop, Apache Spark
- $30 hourly
- 5.0/5
- (7 jobs)
I am a dedicated and results-driven Data Engineer with a passion for transforming complex data into valuable insights and actionable results. With 4 years of experience in the industry, I have honed my skills in designing, developing, and implementing effective data systems and pipelines using a range of tools, including Apache Spark, Apache Hadoop, and Snowflake. My deep understanding of data warehousing, ETL processes, and data analysis has enabled me to deliver innovative solutions that drive business growth and competitive advantage. I am committed to staying up to date with the latest technologies and industry trends, always seeking new and better ways to turn data into meaningful insights.
Apache Hive, Data Analytics, Big Data, Data Warehousing, Google Analytics, Apache Spark MLlib, Apache Airflow, Apache Kafka, Data Mining, Data Structures, Apache Spark, Data Analysis, Python, SQL, ETL Pipeline
- $20 hourly
- 4.9/5
- (9 jobs)
I am a full-stack data scientist with 8 years of professional experience. I am an expert in developing end-to-end applications for machine learning, deep learning, natural language processing, and image processing, and I have experience in building and optimizing big data pipelines and architectures. My main areas of expertise are:
- Python 2.7/3, scikit-learn, pandas, PyTorch, TensorFlow, Keras, Word2vec
- Machine learning: PCA, SVM, neural networks, logistic regression, k-means, recommendation systems, CRF, factorization machines
- Deep learning: deep neural networks, convolutional neural networks, autoencoders, RNN, LSTM
- Natural language processing: part-of-speech tagging, machine translation, named entity recognition, question answering, optical character recognition, sentiment analysis, text classification, topic modelling, natural language generation
- AWS, SageMaker, Airflow
- Hadoop, MapReduce, Apache Spark, Kafka, Kafka Streaming, Elasticsearch, Kibana
I hold a master’s degree in applied mathematics from the Indian Institute of Science (IISc) and a bachelor’s degree from IIT Roorkee. Currently I am working as a Principal Data Scientist at a well-known organisation in Bengaluru. Please contact me about how I might be able to help you with a project.
Apache Hive, Blockchain, Artificial Intelligence, PySpark, AWS CodeDeploy, Big Data, Apache Kafka, SQL, Machine Learning, Reinforcement Learning, Deep Learning, Natural Language Processing, PyTorch, Python, Computer Vision
- $20 hourly
- 4.8/5
- (4 jobs)
• 6+ years of experience as a Hadoop/PySpark developer.
• Extensive knowledge of Hadoop technology, with experience in storage, writing queries, and the processing and analysis of data.
• Experience migrating on-premises ETL processes to the Hadoop layer.
• Experience in optimizing Hive SQL queries and Spark jobs.
• Implemented frameworks such as data quality analysis and data validation with the help of technologies like big data, Spark, and Python.
• Primary technical skills in PySpark, HDFS, YARN, Hive, Sqoop, Impala, and Oozie.
• Good exposure to advanced topics like analytical functions, indexes, and partitioned tables.
• Experience creating technical documents for functional requirements, impact analyses, technical design documents, and data flow diagrams.
• Quick learner, up to date with industry trends; excellent written and oral communication, analytical, and problem-solving skills; a good team player, well organized, and able to work independently.
Apache Hive, PySpark, Apache Impala, Sqoop, Python Script, Apache Hadoop, Apache Spark, SQL, Python, Apache Airflow
- $25 hourly
- 5.0/5
- (1 job)
Data Engineer | PySpark | Databricks | AWS | Azure | GCP
🚀 Building Scalable & Robust Data Solutions 🚀
I am an experienced Data Engineer with 2.9 years of expertise working at a multinational company (MNC) as a Software Engineer (Data). I specialize in designing, building, and optimizing scalable data pipelines using PySpark, Databricks, and cloud-based architectures (AWS, Azure, GCP). I am also a GCP-certified Professional Data Engineer with a strong background in distributed data processing, performance tuning, and observability.
Key Skills:
• Python, SQL, Apache Spark, Apache Hadoop, Apache Hive, Apache Kafka, Docker, Linux
• Data warehousing, data lakes, API development, Hadoop/Spark cluster administration
• Databricks, GCP, AWS, Azure, and on-premises Hadoop deployments
• Git, GitHub, Azure DevOps, JIRA
Key Expertise:
✔️ Big Data Pipelines – Designing Bronze, Silver, and Gold architectures for structured and semi-structured data.
✔️ Cloud & Distributed Processing – Hands-on experience with AWS (S3, Glue, EMR, Lambda), Azure, and GCP for cost-efficient and scalable solutions.
✔️ Databricks & PySpark – Optimizing large-scale transformations and analytical queries across distributed clusters.
✔️ Observability & Monitoring – Implementing real-time monitoring, logging, and metric tracking for robust system reliability.
✔️ CI/CD & DevOps – Automating deployments, improving system efficiency, and streamlining workflows.
✔️ Backend Development – A solid foundation in backend development, allowing seamless integration of data pipelines with applications.
Why Work With Me?
✅ Results-Oriented – I take full ownership of my work and focus on delivering scalable and efficient solutions.
✅ Impact-Driven – With my well-rounded skill set, I bring real value to companies beyond just data engineering.
✅ Fast Learner & Adaptable – I acknowledge skill gaps but am always quick to bridge them and continuously improve.
✅ Flexible Availability – Open to part-time work and comfortable working across any time zone to support your team.
If you’re looking for a dedicated, impact-driven Data Engineer, let’s connect and build something great together!
📩 Let’s discuss how I can help scale your data infrastructure.
Apache Hive, Amazon EC2, Amazon Athena, Amazon Redshift, AWS Glue, Data Warehousing & ETL Software, Distributed Computing, SQL, Apache Hadoop, Python, PySpark, Big Data
- $15 hourly
- 0.0/5
- (2 jobs)
I worked as a Data Engineer for a couple of clients and as a Backend Engineer for one client. I am good at AWS and Python scripting. I have achieved the AWS Data Analytics, AWS Certified Developer – Associate, and AWS Cloud Practitioner certifications. I can independently create data pipelines for reporting and data science. Skills: Amazon Web Services, Hadoop, Hive, MySQL, Python, PySpark, Linux, PostgreSQL.
Apache Hive, Amazon S3, AWS Glue, Hive, PostgreSQL, Apache Spark, Apache Hadoop, Data Engineering, MySQL Programming, Python Script, Amazon Web Services, MySQL, PySpark, Big Data, Python
- $10 hourly
- 0.0/5
- (0 jobs)
I'm a professional data engineer with about 3 years of experience in cloud and on-premises environments, and I have built ETL pipelines for both real-time and batch data.
Apache Hive, Informatica, AWS Glue, Apache Hadoop, PySpark, ETL, Data Engineering, Python, SQL
- $20 hourly
- 0.0/5
- (0 jobs)
Objective Summary: A seasoned professional with 13 years of extensive experience in Java and Big Data technologies, seeking a challenging role where I can leverage my expertise to drive innovation and deliver high-impact solutions. Committed to staying abreast of the latest advancements in technology, I am eager to apply my skills and knowledge to contribute to the success of dynamic projects and collaborate with talented teams. Ready to take on new challenges and make meaningful contributions to the advancement of technology in a growth-oriented organization.
Technical Summary:
* Google Cloud Certified Professional Big Data Developer with expertise in designing and implementing scalable solutions on Google Cloud Platform (GCP).
* Java 7 Certified Developer proficient in J2SE, JDBC, and J2EE, with exceptional development skills in Core Java and Advanced Java.
* Hands-on experience with Big Data technologies for both batch processing (Spark,
Apache Hive, Apache Kafka, AWS Glue, Data Processing, Data Modeling, Data Cleaning, Data Lake, Scala, Python, BigQuery, Spring Framework, Spring Boot, Java, Apache Spark
Want to browse more freelancers?
Sign up
How hiring on Upwork works
1. Post a job
Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.
2. Talent comes to you
Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.
3. Collaborate easily
Use Upwork to chat or video call, share files, and track project progress right from the app.
4. Payment simplified
Receive invoices and make payments through Upwork. Only pay for work you authorize.
How do I hire an Apache Hive Developer near Bengaluru on Upwork?
You can hire an Apache Hive Developer near Bengaluru on Upwork in four simple steps:
- Create a job post tailored to your Apache Hive Developer project scope. We’ll walk you through the process step by step.
- Browse top Apache Hive Developer talent on Upwork and invite them to your project.
- Once the proposals start flowing in, create a shortlist of top Apache Hive Developer profiles and interview them.
- Hire the right Apache Hive Developer for your project from Upwork, the world’s largest work marketplace.
At Upwork, we believe talent staffing should be easy.
How much does it cost to hire an Apache Hive Developer?
Rates charged by Apache Hive Developers on Upwork can vary based on a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.
Why hire an Apache Hive Developer near Bengaluru on Upwork?
As the world’s work marketplace, we connect highly skilled freelance Apache Hive Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Hive Developer team you need to succeed.
Can I hire an Apache Hive Developer near Bengaluru within 24 hours on Upwork?
Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Hive Developer proposals within 24 hours of posting a job description.