Hire the best Apache Kafka Developers in Faridabad, IN

Check out Apache Kafka Developers in Faridabad, IN with the skills you need for your next job.
Clients rate Apache Kafka developers 4.6/5 based on 126 client reviews.
  • $20 hourly
    ✓ Technology executive specializing in architecting and implementing highly scalable solutions to drive brand awareness, increase revenue, optimize productivity, and improve margins.
    ✓ Overseeing a company’s data, security, maintenance, and network.
    ✓ Implementing the business’s technical strategy and managing the overall technology roadmap.
    ✓ Involved in talent acquisition, onboarding, training, and management of project managers, product managers, developers, DevOps engineers, and designers.
    ✓ Setting the technical strategy for the company to enable it to achieve its goals.
    ✓ Seeking out the current and future technology that will drive the company’s success.
    ✓ Focused on strategic alignment of technology goals with organizational vision.
    ✓ Passionately committed to technology team development, empowering people to accomplish their goals and coaching them to realize their individual potential.
    ✓ Proven track record of success in technology product development, cloud infrastructure, building data platforms, ETL pipelines, streaming pipelines, e-commerce, CRM, mobile strategy, and social media integration.
    I have been working for the last 8 years with Apache Spark, Lucene, Elasticsearch/Kibana, Amazon EC2, RDBMSs (SQL, MySQL, Aurora, PSQL, Oracle), NoSQL engines (Hadoop/HBase, Cassandra, DynamoDB, MongoDB), graph databases (Neo4j, Neptune), in-memory databases (Hazelcast, GridGain), Apache Spark/MLlib, Weka, Kafka, clustered file systems, and general-purpose computing on GPUs, including deploying ML/DL models on GPU (NVIDIA) instances. I have strong experience in query optimization, application profiling, and troubleshooting.
    My areas of expertise include:
    - Python scripting
    - Jira, Trello, Azure DevOps
    - Web scraping
    - AWS (Redshift, Glue, ECS, EC2, EMR, Kinesis, S3, RDS, VPC, IAM, DMS)
    - GCP (BigQuery, Dataflow, SnowFlow)
    - Microsoft Azure
    - Hadoop big data
    - Elasticsearch/Kibana/Logstash (ELK)
    - Hadoop setup on standalone, Cloudera, and Hortonworks distributions
    - SQL databases such as MySQL and PostgreSQL
    - NoSQL databases such as HBase and MongoDB
    - Machine learning
    - Deep learning
    - Spark with MLlib and GraphX
    - Sphinx
    - Memcache
    - MS BI/Tableau/GDS
    Featured Skill Apache Kafka
    Big Data
    Kibana
    Apache Cassandra
    AWS CodeDeploy
    Apache NiFi
    MongoDB
    Golang
    Elasticsearch
    Apache Hive
    Apache Pig
    MapReduce
    Machine Learning
    Python
    Apache Spark
  • $50 hourly
    I’m an architect specializing in Python, PySpark, AWS, API gateways, and Kafka, with experience in data engineering projects for large businesses. I am also experienced in middleware technologies and DevOps.
    Featured Skill Apache Kafka
    Python Script
    Spring Boot
    Google Cloud Platform
    Apigee
    Java
    Python
  • $25 hourly
    I bring extensive hands-on experience in data science, with proficiency in Hadoop components such as MapReduce, Hive, and Pig, alongside a deep understanding of AWS cloud services. Over the course of my career, I have successfully delivered numerous projects that apply machine learning techniques to in-depth data analysis. In particular, I leverage Apache Spark to efficiently process vast datasets for analytical purposes.
    My expertise extends to the full spectrum of Spark’s capabilities, including Spark Streaming, Spark MLlib, and Spark GraphX, which have proven instrumental in improving the speed and scalability of data processing across projects. I have used Spark MLlib to develop machine learning models tailored to specific client requirements, focusing on prediction and classification tasks.
    In my current role, I work extensively with Hadoop components and continue to harness Spark’s advanced features, such as Spark Streaming, MLlib, and GraphX, for real-time data processing; a minimal illustrative sketch of such a pipeline follows this profile’s skill list.
    I also incorporate DevOps practices into my workflow to ensure seamless collaboration between development and operations teams, including continuous integration/continuous deployment (CI/CD) pipelines, automated testing, and infrastructure-as-code (IaC) principles. Embracing a DevOps mindset improves the overall efficiency and reliability of the software development lifecycle.
    I take pride in aligning machine learning methodologies with data processing workflows to meet client demands effectively, leveraging Spark MLlib for predictive modeling and classification so that client requirements and business objectives are addressed holistically.
    Throughout my journey in data science, I have stayed at the forefront of technology, constantly adapting to new tools and methodologies. I am enthusiastic about bringing this multifaceted expertise, spanning data science and DevOps, to new challenges and future projects.
    Featured Skill Apache Kafka
    Data Scraping
    Google Analytics
    AWS Lambda
    Amazon DynamoDB
    Apache Hadoop
    BigQuery
    Big Data
    Amazon ECS
    SQL
    Sentiment Analysis
    Machine Learning
    NLTK
    Apache Spark MLlib
    Apache Spark
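The real-time workflow this profile describes can be illustrated with a short Spark Structured Streaming job that reads from Kafka. This is only a minimal sketch, not this freelancer’s code: the broker address (localhost:9092), topic name (events), and event schema are hypothetical placeholders, and running it requires the spark-sql-kafka connector package on the Spark classpath.

# Illustrative sketch: read JSON events from a Kafka topic with Spark Structured
# Streaming and stream running per-user action counts to the console.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructField, StringType, StructType

spark = (SparkSession.builder
         .appName("kafka-streaming-sketch")
         .getOrCreate())

# Hypothetical event schema: {"user_id": "...", "action": "..."}
schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
          .option("subscribe", "events")                        # placeholder topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Maintain running counts per user and action, and print them as they update.
query = (events.groupBy("user_id", "action").count()
         .writeStream
         .outputMode("complete")
         .format("console")
         .start())

query.awaitTermination()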
  • $50 hourly
    * Principal Software Engineer with 12+ years of experience leading high-impact engineering programs.
    * Proven track record in technical leadership, cross-functional collaboration, and influencing org-level architecture decisions, achieving ~$3M in cost savings and ~$72M in revenue impact.
    * Proficient in Java, AWS, and cloud-native architectures. Skilled in designing distributed, scalable, and secure systems using microservices, event-driven, and serverless patterns. Expertise includes CI/CD, observability, infrastructure as code, application frameworks, and database technologies (SQL/NoSQL).
    Featured Skill Apache Kafka
    GraphQL
    REST API
    Elasticsearch
    Spring Boot
    NoSQL Database
    Relational Database
    Java
    Agile Software Development
    Microservice
    Amazon Web Services
    Leadership Skills
  • $30 hourly
    A seasoned data engineer and AWS Certified Solutions Architect – Associate with 8+ years of data engineering experience. I have designed and implemented multiple ETL pipelines on cloud platforms such as AWS and Azure as well as on-prem ecosystems; a minimal Kafka ingestion sketch follows this profile’s skill list.
    💡 If you want to turn data into actionable insights or are planning to put the 5 V’s of big data to work:
    👋 Hi. My name is Sushant and I’m a data engineering professional.
    💡 My true passion is creating robust, scalable, and cost-effective big data solutions, mainly using Apache Spark, Hadoop, open-source technologies, and any cloud platform such as AWS, Azure, or GCP.
    💡 During the last 9 years, I have worked with tech stacks including:
    - Programming languages: Java, Scala, Python
    - Big data technologies: Hadoop, Apache Spark, Hive, HBase, Kafka, Airflow, Oozie, Elasticsearch, etc.
    - Hadoop distributions: Cloudera
    - AWS: EMR, EC2, S3, RDS, Data Pipeline, Glue, Kinesis, SNS, Lambda, DynamoDB, SQS, etc.
    - Azure: Data Factory, Function Apps, Azure Data Lake Storage (Gen1/Gen2), Databricks, Service Bus, Event Hub, Logic Apps, Virtual Machines, HDInsight
    - UI technologies: HTML5, JavaScript, CSS
    - Data visualization tools: Grafana, Kibana, Graphite
    - Databases: MySQL, Oracle, PostgreSQL
    - Version control tools: Git, SVN
    and much more.
    5-step approach 👣
    Requirements discussion + prototyping + visual design + backend development + support = success! Usually, we customize that process depending on the project’s needs and final goals.
    How to start? 🏁
    Every product requires a clear roadmap and meaningful discussion to keep everything in check. But first, we need to understand your needs. Let’s talk!
    💯 Working with me, you will receive a modern, good-looking application that meets all guidelines, offers easy navigation, and, of course, comes with unlimited revisions until you are 100% satisfied with the result.
    Featured Skill Apache Kafka
    Docker
    Microsoft Azure
    Data Warehousing & ETL Software
    Scala
    Apache Hadoop
    Big Data
    Amazon Web Services
    Apache Airflow
    Data Modeling
    Hive
    Linux
    Apache Spark
    PySpark
    Python
    SQL
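The Kafka ingestion step of an ETL pipeline like the ones this profile mentions can be sketched in a few lines of Python. This is an illustrative sketch only, not this freelancer’s code: the broker address (localhost:9092) and topic name (orders) are hypothetical placeholders, and it assumes the kafka-python client library is installed.

# Illustrative sketch: produce a JSON record to Kafka, then consume and print it.
import json
from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"   # placeholder broker
TOPIC = "orders"            # placeholder topic

# Produce one JSON-encoded record to the topic.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 1, "amount": 42.0})
producer.flush()

# Read records from the beginning of the topic; stop after 5 s of inactivity.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.value)

In a fuller pipeline the consumer side would typically be replaced by Spark, Flink, or a connector writing to a warehouse, but the produce/consume contract stays the same.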
  • Want to browse more freelancers? Sign up.

How hiring on Upwork works

1. Post a job

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.

How do I hire an Apache Kafka Developer near Faridabad on Upwork?

You can hire an Apache Kafka Developer near Faridabad on Upwork in four simple steps:

  • Create a job post tailored to your Apache Kafka Developer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Kafka Developer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Kafka Developer profiles and interview them.
  • Hire the right Apache Kafka Developer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Kafka Developer?

Rates charged by Apache Kafka Developers on Upwork can vary with a number of factors including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Kafka Developer near Faridabad on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Kafka Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Kafka Developer team you need to succeed.

Can I hire an Apache Kafka Developer near Faridabad within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Kafka Developer proposals within 24 hours of posting a job description.