Hire the best Apache Kafka developers

Check out Apache Kafka developers with the skills you need for your next job.
Clients rate Apache Kafka developers 4.7 out of 5, based on 384 client reviews.
  • $45 hourly
    Hi! I have 10 years of backend programming experience. My main language is Golang, and I have experience building microservice applications with Docker and Kubernetes (ECS and GKE) and setting up CI/CD with GitLab CI or Jenkins. Most of my projects use the Echo framework for APIs, and I connect internal services with gRPC or Kafka.
    Apache Kafka
    Back-End Development
    RESTful API
    RabbitMQ
    Swagger
    Git
    Stripe
    Microservice
    Google Cloud Platform
    Amazon ECS for Kubernetes
    Kubernetes
    PostgreSQL
    MySQL
    Docker
    Golang
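The profile above leans on Kafka (alongside gRPC) to connect Go microservices. One reason Kafka fits that role is key-based partitioning: records with the same key always land on the same partition, which preserves per-entity ordering across services. The sketch below is a simplified, self-contained illustration of that idea; the function name and hash are stand-ins for illustration only (Kafka's real default partitioner uses murmur2).

```python
# Simplified illustration of Kafka-style key-based partitioning.
# Records with the same key map to the same partition, so all events
# for one entity (e.g. one order) keep their relative order.
# NOTE: illustrative only -- Kafka's default partitioner uses murmur2,
# not the MD5 stand-in used here.
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition deterministically."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key -> same partition, so an order's events stay ordered.
p1 = partition_for(b"order-42", 12)
p2 = partition_for(b"order-42", 12)
assert p1 == p2
```

Because the mapping is deterministic, any consumer reading that partition sees one key's records in production order, without cross-service coordination.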
  • $30 hourly
    I have been associated with the software industry for more than 10 years, specializing in:
    1. Core Java, Spring Boot, Hibernate
    2. Application deployment (CI/CD integration using Jenkins and AWS Elastic Beanstalk)
    3. IVR (Interactive Voice Response) application development using VXML and NDF
    4. Automatic Speech Recognition: creating GRXML grammars and dictionaries for the Nuance platform
    5. Kafka
    6. MySQL
    7. Shell scripting
    8. JavaScript
    Apache Kafka
    Amazon Athena
    Hibernate
    RESTful Architecture
    Jenkins
    API
    Java Persistence API
    MySQL
    Database
    Eclipse Jetty
    Spring Framework
    Java
    Spring Boot
  • $40 hourly
    With more than 14 years of experience, I specialize in creating new applications and improving existing ones using the latest technologies. I am an active contributor to multiple open-source projects spanning operating systems, applications, and security, and have been a leader in the IT industry across diversified applications, services, and platforms. I have built many mobile apps and web solutions and successfully published them to the App Store and Play Store, where they rank highly. I am a professional, detail-oriented, and results-driven developer with a proven track record of high-quality work and services dedicated to my clients. I provide 24/7 service backed by deep experience in software engineering, high-quality stacks, and strong leadership skills. I am well-versed in structuring, developing, and implementing interactive mobile and web solutions, and I assist clients in all troubleshooting endeavors. My prompt responses, clear communication, fast delivery, and high-quality work guarantee client satisfaction and job success.
    Key skills:
    * End-to-end encryption / XMPP / Erlang / Ejabberd / MongooseIM
    * WebRTC-based secure call integration
    * Specialized in Node.js-based web applications
    * Experienced in React-based front ends
    * Experienced in Objective-C/Swift for iOS apps
    * Experienced in Java/Kotlin for Android apps
    * Experienced in Dart for cross-platform (Flutter) apps
    * Bug fixing and app optimization
    * Admin panel and API development
    * Modified Ejabberd and MongooseIM core modules for greater stability with Android clients (using Smack) and iOS clients (using the XMPP Framework)
    My creations and methodology:
    * Secure applications with end-to-end encryption for banking, social, or anything that needs strong encryption
    * Chat applications like WhatsApp, Google Pay, Line, Kakao, and Telegram
    * Taxi applications like Uber and Ola
    * E-commerce applications like Amazon and Flipkart
    * Graceful handling of unstable network connections
    * Custom Erlang modules for Ejabberd and MongooseIM implementing defined XEPs, scalable to millions of users
    * Chat modules for websites, social platforms, online gaming, and more
    I can deliver a solution from beginning to end, and I suggest next steps even for apps that look complete. I have worked on video conferencing, audio/video calling, chat and messaging, and live streaming.
    Apache Kafka
    Material Design
    XMPP
    Erlang
    Kotlin
    Flutter
    Mobile App Development
    iOS Development
    Firebase
    WebRTC
    Android
    Android App Development
  • $55 hourly
    I am a computer science engineering graduate with 4 years of professional experience as a software engineer and data scientist. My fields of expertise are time-series modeling, stream data processing, and backend services. Alongside my industry experience, I have worked as a researcher in the field of machine learning. Links to my published papers are given in the Other Experiences section of my Upwork profile.
    Apache Kafka
    Data Scraping
    Amazon ECS for Kubernetes
    Amazon Web Services
    ETL Pipeline
    Apache NiFi
    Google Cloud Platform
    Apache Flink
    Time Series Analysis
    Machine Learning
    SQL
    Apache Airflow
    Python
    Java
  • $24 hourly
    I have 10+ years of development experience with Python, Java, JavaScript, React, Flask, MongoDB, Postgres, SQL, and Solidity, and have worked extensively with AWS services such as S3, Lambda, Glue, Redshift, EC2, SQS, Amplify, and SAM. Furthermore, I am well versed in Google services and technologies such as Google Sheets, Google Colab notebooks, and Google Drive. I have also built CI/CD pipelines that ensured fast delivery of products and services to clients. I have worked with multiple startups on scalable services and products across web development, app development, ETL pipelines, cloud development, and blockchain development, and I deliver to client requirements.
    My experience with requested skills:
    - Python – 11 years
    - C/C++ – 8 years
    - Java – 7 years
    - AWS – 6 years
    - Postgres – 6 years
    - MySQL – 6 years
    - JavaScript – 6 years
    - React – 5 years
    - Flask – 5 years
    - Google services and APIs – 4 years
    - Solidity – 2 years
    I am expert in (but not limited to) the following tech stack:
    - Automation and testing: Selenium, Robot, Splinter
    - Dockerization: Docker, Amazon ECS
    - Blockchain development: Solidity, Remix, Truffle, Geth
    - Databases: MySQL, MongoDB, PostgreSQL, Amazon Aurora
    - Serverless: AWS Lambda, AWS Glue
    - Data lakes and analytics: Amazon S3, AWS Glue, AWS Athena
    - Web servers / proxies: Nginx, Apache
    - OS: Ubuntu/CentOS, Windows Server, Amazon Linux
    - Message brokers: Redis, Celery/RabbitMQ
    - Libraries: Pandas/NumPy, SQLAlchemy, Dash/Plotly
    - Tools: Jupyter Notebook, Git, CodeCommit
    I'm a big fan of Python and algorithm-based problems, and always curious to learn new libraries and technologies. I currently work in the US healthcare domain.
    Apache Kafka
    Financial Software
    MySQL Programming
    AWS Lambda
    Docker
    Web Crawling
    Binance Coin
    Neo4j
    ETL Pipeline
    Data Extraction
    Python
    Greenplum
    Blockchain Development
    Redis
    AWS Glue
  • $50 hourly
    You're reading just right ➡️ I work as a DevOps Lead for Katalon, the developer of Katalon ― an automated software testing tool ― specializing in Kubernetes, AWS, Google Cloud Platform, and RedHat OpenShift.
    ⭐ 10+ years of experience in DevOps and cloud infrastructure, specializing in Kubernetes
    🎖 CKA certified ― Certified Kubernetes Administrator
    I am a Kubernetes practitioner comfortable with all Kubernetes-based cloud offerings:
    ■ AWS Elastic Kubernetes Service (EKS)
    ■ Azure Kubernetes Service (AKS)
    ■ Google Kubernetes Engine (GKE)
    ■ RedHat OpenShift
    ■ K3s
    I have in-depth knowledge and extensive experience with Kubernetes deployment, microservice architecture, Docker deployment, microservices deployment, mobile deployment, infrastructure as code, application performance monitoring/management (APM), source code management, automated testing, security, and firewalls.
    DevOps toolkits I am proficient in: Terraform, Jenkins, CircleCI, GitLab CI, Ansible, Vault, SonarQube, Nexus, JFrog, Kafka, Zookeeper, ELK, Datadog, Grafana, Prometheus, Alertmanager, Slack integration, HockeyApp, Fastlane, Checkpoint.
    💬 Need help improving your infrastructure? Let's have a talk and see how I can help.
    Apache Kafka
    Jenkins
    Computer Network
    Amazon RDS
    Docker
    Amazon ECS for Kubernetes
    AWS CloudFormation
    Kubernetes
    Python
    Golang
    Linux System Administration
    CI/CD
    Amazon Web Services
    DevOps
    Ansible
  • $110 hourly
    Distributed computing: Apache Spark, Flink, Beam, Hadoop, Dask
    Cloud computing: GCP (BigQuery, Dataproc, GFS, Dataflow, Pub/Sub), AWS EMR/EC2
    Containerization tools: Docker, Kubernetes
    Databases: MongoDB, Postgres-XL, PostgreSQL
    Languages: Java, Python, C/C++
    Apache Kafka
    MapReduce
    Cloud Computing
    Apache Hadoop
    White Paper Writing
    Academic Writing
    Google Cloud Platform
    Dask
    Apache Spark
    Research Paper Writing
    Apache Flink
    Kubernetes
    Python
    Java
  • $120 hourly
    ABOUT ME
    I'm John, a Senior Software Engineer specializing in data and cloud technologies. My goal is to simplify complex systems, making your data work smarter and drive business growth.
    ACHIEVEMENTS & SKILLS
    • Led data cost optimization at Unity Technologies, reducing expenses by 45%.
    • Directed LCBO's successful cloud migration, enhancing data streaming performance.
    • Improved Askuity's data querying speed by 30% through AWS Athena integration.
    • Proficient in Scala, Java, Python, Go, SQL, Kafka, Flink, Spark, GCP, Azure, AWS.
    INSTRUCTION & COMMUNITY INVOLVEMENT
    • Adjunct Instructor in machine learning and Java.
    • Active speaker and contributor in tech communities, focusing on open-source projects like Apache Druid.
    CREDENTIALS
    • Bachelor's in Information Technology (Honors), York University, specialized in Big Data for IoT.
    LET'S WORK TOGETHER
    Need efficient, scalable data solutions? Contact me to leverage your business data for optimal results.
    Apache Kafka
    Apache Beam
    Apache Flink
    Data Processing
    Data Visualization
    Database
    Data Analysis
    Data Management
    API Development
    Data Ingestion
    Data Mining
    Software Development
    Java
    Scala
    ETL Pipeline
  • $40 hourly
    Experienced AWS-certified data engineer with around 4 years of experience in big data and its tooling.
    AWS | GCP
    Hadoop | HDFS | Hive | Sqoop
    Apache Airflow | Apache Spark | Apache Kafka | Apache NiFi | Apache Iceberg
    Python | Bash | SQL | PySpark | Scala | Delta Lake
    DataStage | Git | Jenkins | SnapLogic | Snowflake
    Apache Kafka
    Amazon API Gateway
    Apache Spark
    Google Cloud Platform
    Apache Airflow
    Big Data
    Data Migration
    Apache NiFi
    Amazon Redshift
    Amazon Web Services
    PySpark
    AWS Lambda
    AWS Glue
    ETL
    Python
    SQL
  • $60 hourly
    Lead backend software engineer with 10+ years of experience developing highly scalable, distributed software products using cutting-edge technologies: Java, Spring Boot, JavaScript, RDBMS, NoSQL, and in-memory databases like Redis. Cloud platforms like AWS, GCP, and DigitalOcean. Apache Kafka messaging. CI/CD. And a strong test-driven development practitioner.
    Apache Kafka
    Web Application
    Google Cloud Platform
    Microsoft Power BI
    SQL
    Java
    MongoDB
    Java Persistence API
    MySQL
    PostgreSQL
    Docker
    Spring Framework
    Redis
    Kubernetes
    Spring Boot
  • $99 hourly
    With over 20 years of leadership in data storage, processing, and streaming technologies at multinational corporations like Microsoft, IBM, Bloomberg, and Amazon, I am recognized as a subject-matter expert in these domains. My portfolio includes the successful design and deployment of large-scale, multi-tier projects utilizing a variety of programming languages (C, C++, C#, Python, Java, Ruby) and both SQL and NoSQL databases, often enhanced with caching solutions. My expertise extends to data streaming products such as Kafka (including Confluent and Apache Kafka), Kinesis, and RabbitMQ, tailored to specific project requirements and customer environments. My technical proficiency encompasses a wide range of databases and data processing technologies, including MS SQL, MySQL, Postgres, Comdb2, Cassandra, MongoDB, Hadoop, HDFS, Hive, Spark, and Snowflake. I am equally adept in Unix and Windows environments, and skilled in both PowerShell and Bash scripting. As an AWS and Azure Solutions Architect, I have empowered numerous clients with comprehensive cloud solutions based on their needs.
    My notable projects on Upwork include:
    1. Migrating ArcBest's dispatch solution from mainframe to Linux servers with Confluent Kafka, improving processing times and reducing latency.
    2. Conducting petabyte-scale big data analysis for Punchh using Snowflake, Kafka, Python, Ruby, AWS S3, and Redshift.
    3. Analyzing and comparing various Kafka-like solutions for an investment firm, focusing on adoption and maintenance costs.
    4. Implementing ETL solutions with CDC for continuous updates from IBM Maximo to Snowflake via Kafka, and from Oracle to Snowflake, integrating Power BI and Tableau for analytics.
    5. Deploying an IoT solution for a logistics firm using Particle and Pulsar devices, MQTT, Kinesis, Lambda, API Gateway, S3, Redshift, MySQL Aurora, and Power BI to monitor real-time delivery metrics as well as post-delivery analysis of delivery performance such as spills, tilts, and bumps.
    6. Conducting data analysis for an advertising firm, benchmarking BigQuery and custom dashboards against Redshift with Tableau/QuickSight.
    Apache Kafka
    SQL
    R Hadoop
    Amazon Web Services
    Snowflake
    Solution Architecture
    Apache Solr
    Ruby
    Apache Hadoop
    Apache Cassandra
    Redis
    Python
    Java
    C++
    C#
  • $42 hourly
    Node | Angular | AngularJS | MEAN Stack | MERN Stack | Express | Hapi | Nest | Sails.js | jQuery | JavaScript | TypeScript | Microservices | Kubernetes | Firestore | PostgreSQL | MongoDB | AWS Lambda | DynamoDB | State Machine | CloudWatch | S3 | API Gateway | Google Cloud Platform | Redis | Elasticsearch | Solr | Test Cases | Mocha | Chai | Jest | Twilio | Sequelize | REST API | Git | Jira | Docker | Docker Compose | Zoho CRM | CheckMarket
    I am a Node, Angular, MEAN, and PHP developer with a Master of Science in Information Technology (MSc IT) and 7 years of software development experience. I am experienced in web development with PHP, Laravel, and Zend Framework 2, and have working experience with Node.js, AngularJS, Angular, MEAN stack applications, Google Cloud Platform, Elasticsearch, AWS Lambda functions, and API Gateway. I have experience leading and managing projects end to end. I have also developed web applications using Ajax, jQuery, JavaScript, and HTML5, and have good problem-solving skills. When starting a new project, I always clarify my clients' requirements so that I can fully satisfy them. Thank you for your time and consideration. I look forward to working with you soon.
    Apache Kafka
    Elasticsearch
    Google Cloud Platform
    AWS Lambda
    API Development
    RESTful API
    Node.js
    AngularJS
    Angular
    Docker
    PHP
    TypeScript
    jQuery
    MongoDB
    Microservice
  • $25 hourly
    Welcome! I'm Akhtar, an AWS Solution Architect and full-stack developer with over 7 years of experience. My expertise lies in full-stack development (MERN), Python-based architectures, and advanced AWS cloud solutions.
    Why collaborate with me?
    Technical Expertise: Advanced proficiency in Python (Django, Flask, FastAPI), JavaScript, the MERN (MongoDB, Express.js, React, Node.js) stack, and AWS cloud services
    Full-Stack Development: Proven record of delivering dynamic, responsive web applications from concept to deployment
    Cloud Mastery & Architectural Prowess: Expert in building scalable, cost-effective serverless architectures and containerized solutions
    Security and DevOps: I integrate security best practices and CI/CD pipelines to enhance development efficiency and safety
    🥇 Differentiating Value Proposition:
    ➤ Full-Stack Development: Mastery of both backend and frontend technologies enables me to deliver complete web applications from conception to deployment, ensuring consistency and high performance across the MERN and Python stacks.
    ➤ Holistic Approach: From conceptualizing an idea in Python to integrating frontend intricacies using JavaScript and full-stack capabilities with MERN
    ➤ Cloud-Centric: Expertise in leveraging the power of AWS to provide scalable and cost-effective solutions
    ➤ Performance-Centric Solutions: Ensuring optimized architectures for swift response times and efficient operations
    ➤ Quality Assurance: Rigorous testing protocols are a standard part of my workflow, guaranteeing high-quality outcomes
    🤝 Effective Collaboration: I firmly believe that open communication and mutual respect form the bedrock of successful projects. Understanding your vision and goals while maintaining transparency is my utmost priority.
    🕛 Proven Track Record: With 8,000+ hours logged on Upwork and several successful projects, my experience is evidenced by my results.
    💡 Your Vision, My Blueprint: Whether you're migrating to the cloud, crafting a new digital solution, or optimizing existing architectures and code, I'm here to translate your aspirations into tangible digital solutions. Let's connect for a dynamic, efficient digital solution tailored to your needs!
    Apache Kafka
    Web Development
    Mobile Development Framework
    Amazon Web Services
    Next.js
    TypeScript
    Cloud Computing
    Python
    GraphQL
    JavaScript
    AWS Lambda
    API Integration
    Microsoft Azure
    Progressive Web App
    API Development
    NestJS
    MongoDB
    Node.js
    React
    ETL
  • $54 hourly
    Leveraging a potent mix of Java 21, Java 17 LTS, Java 8+, legacy Java, Spring Boot, Spring Security, JPA, Spring WebFlux, REST, JavaScript, and React, I am skilled in full-stack software development. My proficiency extends to caching and messaging systems such as Redis, Hazelcast, Kafka, and Spring Integration. Operating within containerized and orchestrated environments, I am adept with Docker, Kubernetes, and the Kong gateway. Additionally, my proficiency encompasses a broad array of databases, including MySQL, Postgres, Oracle, and SQL Server. My experience stretches across reworking applications in both monolithic and microservices architectures, giving me a comprehensive understanding of various development strategies. I have a successful track record in designing, developing, and implementing web-based software applications, grounded in a thorough comprehension of industry technical standards and based on meticulously analyzed requirements. In my quest for code optimization and performance enhancement, I actively engage in code reviews and debugging. My expertise also covers AWS, Jenkins, and GitLab pipelines, equipping me to navigate diverse tech stacks and tools seamlessly. Operating in Agile environments, I have honed my collaborative skills to deliver projects within stringent deadlines. My experience in team-based settings has enriched my communication and teamwork capabilities, ensuring I consistently contribute to achieving team objectives.
    Apache Kafka
    Spring Boot
    Linux
    React
    Web Application
    Web Development
    SQL
    JavaScript
    Java
  • $30 hourly
    Seasoned data engineer with over 11 years of experience building sophisticated, reliable ETL applications using big data and cloud stacks (Azure and AWS). TOP RATED PLUS. Collaborated with over 20 clients, accumulating more than 2,000 hours on Upwork.
    🏆 Expert in creating robust, scalable, and cost-effective solutions using big data technologies for the past 9 years.
    🏆 My main areas of expertise are:
    📍 Big data – Apache Spark, Spark Streaming, Hadoop, Kafka, Kafka Streams, HDFS, Hive, Solr, Airflow, Sqoop, NiFi, Flink
    📍 AWS cloud services – AWS S3, AWS EC2, AWS Glue, AWS Redshift, AWS SQS, AWS RDS, AWS EMR
    📍 Azure cloud services – Azure Data Factory, Azure Databricks, Azure HDInsight, Azure SQL
    📍 Google cloud services – GCP Dataproc
    📍 Search engine – Apache Solr
    📍 NoSQL – HBase, Cassandra, MongoDB
    📍 Platform – data warehousing, data lakes
    📍 Visualization – Power BI
    📍 Distributions – Cloudera
    📍 DevOps – Jenkins
    📍 Accelerators – data quality, data curation, data catalog
    Apache Kafka
    SQL
    AWS Glue
    PySpark
    Apache Cassandra
    ETL Pipeline
    Apache Hive
    Apache NiFi
    Big Data
    Apache Hadoop
    Scala
    Apache Spark
  • $55 hourly
    I focus on data engineering, software engineering, ETL/ELT, SQL reporting, high-volume data flows, and development of robust APIs using Java and Scala. I prioritize three key elements: reliability, efficiency, and simplicity. I hold a Bachelor's degree in Information Systems from Pontifícia Universidade Católica do Rio Grande do Sul, as well as graduate degrees in Software Engineering from Infnet/FGV and Data Science (Big Data) from IGTI. In addition to my academic qualifications, I have acquired a set of certifications:
    - Databricks Certified Data Engineer Professional
    - AWS Certified Solutions Architect – Associate
    - Databricks Certified Associate Developer for Apache Spark 3.0
    - AWS Certified Cloud Practitioner
    - Databricks Certified Data Engineer Associate
    - Academy Accreditation – Databricks Lakehouse Fundamentals
    - Microsoft Certified: Azure Data Engineer Associate
    - Microsoft Certified: DP-200 Implementing an Azure Data Solution
    - Microsoft Certified: DP-201 Designing an Azure Data Solution
    - Microsoft Certified: Azure Data Fundamentals
    - Microsoft Certified: Azure Fundamentals
    - Cloudera CCA Spark and Hadoop Developer
    - Oracle Certified Professional, Java SE 6 Programmer
    My professional journey has been marked by deep involvement in the world of big data solutions. I've fine-tuned my skills with Apache Spark, Apache Flink, Hadoop, and a range of associated technologies such as HBase, Cassandra, MongoDB, Ignite, MapReduce, Apache Pig, Apache Crunch, and RHadoop. Initially I worked extensively with on-premise environments, but over the past five years my focus has shifted predominantly to cloud-based platforms: I've dedicated over two years to mastering Azure and am currently immersed in AWS. I have extensive experience with Linux environments, as well as strong knowledge of programming languages like Scala (8+ years) and Java (15+ years). In my earlier career, I worked with Java web applications and Java EE applications, primarily leveraging the WebLogic application server and databases like SQL Server, MySQL, and Oracle.
    Apache Kafka
    Scala
    Apache Solr
    Apache Spark
    Bash Programming
    Elasticsearch
    Java
    Progress Chef
    Apache Flink
    Apache HBase
    Apache Hadoop
    MapReduce
    MongoDB
    Docker
  • $25 hourly
    • Certification in Big Data / Hadoop ecosystem
    • Big data environments: Google Cloud Platform, Cloudera, Hortonworks, AWS, Snowflake, Databricks, DC/OS
    • Big data tools: Apache Hadoop, Apache Spark, Apache Kafka, Apache NiFi, Apache Cassandra, YARN/Mesos, Oozie, Sqoop, Airflow, Glue, Athena, S3 buckets, Lambda, Redshift, DynamoDB, Delta Lake, Docker, Git, Bash scripts, Jenkins, Postgres, MongoDB, Elasticsearch, Kibana, Ignite, TiDB
    • Certifications in SQL Server, database development, and Crystal Reports
    • SQL Server tools: SQL Server Management Studio, BIDS, SSIS, SSAS, and SSRS
    • BI/dashboarding tools: Power BI, Tableau, Kibana
    • Big data development programming languages: Scala and Python
    ************************************* Big Data Engineer *************************************
    • Hands-on experience with Google Cloud Platform, BigQuery, Google Data Studio, and Flow
    • Developed ETL pipelines for SQL Server using SSIS, with reporting and analysis via SSRS and SSAS cubes
    • Strong experience with big data frameworks and open-source technologies (Apache NiFi, Kafka, Spark, Cassandra, HDFS, Hive, Docker, Postgres, Git, Bash scripts, Jenkins, MongoDB, Elasticsearch, Ignite, TiDB)
    • Managed data warehouse and big data cluster services and development of data flows
    • Wrote big data/Spark ETL applications for different sources (SQL, Oracle, CSV, XML, JSON) to support analytics across departments
    • Extensive work with Hive, Hadoop, Spark, Docker, and Apache NiFi
    • Built multiple end-to-end fraud-monitoring alert systems
    • Preferred languages: Scala and Python
    ************ Big Data Engineer – Fraud Management at VEON ************
    • Developed an ETL pipeline from Kafka to Cassandra using Spark in Scala
    • Used big data tools with Hortonworks and AWS (Apache NiFi, Kafka, Spark, Cassandra, Elasticsearch)
    • Dashboard development with Tableau and Kibana
    • Wrote complex SQL Server queries, stored procedures, and functions, and developed ETL pipelines for SQL Server using SSIS, with reporting and analysis via SSRS and SSAS cubes
    • Developed and designed automated email reports
    • Offline data analytics for fraud detection and prevention controls
    • SQL database development and system support for fraud management
    Apache Kafka
    Google Cloud Platform
    SQL Programming
    Data Warehousing
    Database
    AWS Glue
    PySpark
    MongoDB
    Python Script
    Docker
    Apache Hadoop
    Apache Spark
    Databricks Platform
    Apache Hive
  • $45 hourly
    I am a software (mostly Java) developer with more than 10 years of experience. I started out developing fully featured enterprise systems in Java and went on to build complex data processing frameworks that solve all kinds of big-data tasks. I am ready to architect a reliable and amazingly fast ETL process with further ad-hoc analytics and data visualization. I have worked mainly in the banking and telecommunications sectors. I am a fan of new architectural approaches like CQRS, and I bring good communication skills, a high level of motivation, and an open mind.
    Programming languages: Java, JavaScript, Python
    Big data tools: Kafka, Hadoop, Zookeeper, Spark, Storm, HDFS, Oozie
    Full-featured web frameworks: Spring Boot
    Databases: PostgreSQL, MySQL
    Data warehouses: Vertica
    Apache Kafka
    Data Science
    Amazon Web Services
    Amazon EC2
    AWS Lambda
    Spring Boot
    App Development
    DeFi
    ETL Pipeline
    Apache NiFi
    DApp Development
    Blockchain Development
    Apache Flink
    Java
    Python
  • $35 hourly
    I am creative, curious, and analytical, and often considered a "problem-solver" at work. Learning new applications and programming languages comes easily to me, and I love to dive deep into new software development concepts and tools. I have a bachelor's degree in computer science, and I have been working on scalable microservice solutions in the Java Spring Framework, delivering new features used by multiple client applications. My main experiences:
    • Design and development of scalable microservices integrated with RESTful HTTP APIs and messaging brokers (Kafka and IBM MQ), based on hexagonal/clean architecture.
    • Creating and deploying CI/CD solutions on OpenShift with Jenkins and AWS.
    • Projects implemented with the Java Spring Framework, using Spring Boot with dependency management through Maven and code versioning with Git.
    • Automated testing practices following the testing pyramid (unit, integration, and component test automation) to keep the test suite maintainable.
    • Problem prospecting and solving using observability tools such as Amazon CloudWatch, Splunk, and Grafana.
    • Experience with DevOps culture and the Scrum methodology, working with synergetic squads to design and build high-end applications, using the project management platforms Jira and Confluence.
    As I have a great interest in and love for video games, I have been studying game programming in my spare time, mainly C++ programming and Unreal Engine development.
    Apache Kafka
    CI/CD
    Linux
    Jenkins
    Apache Tomcat
    Microservice
    AWS Lambda
    .NET Framework
    Computer Science
    SQL Programming
    Spring Framework
    Entity Framework
    API
    AWS CloudFormation
    AWS Application
    Python
    Java
    C++
  • $30 hourly
    Experienced backend engineer specializing in scalable microservices and cloud solutions ✨ My expertise lies in creating sophisticated, scalable systems across a diverse technology landscape. 🚀 I excel at transforming complex requirements into efficient, high-performance, reliable solutions. My approach combines deep technical knowledge with a keen focus on clear communication and collaboration, aligning closely with project goals for maximum client satisfaction. 🌐 Ready to elevate your backend systems with cutting-edge technology and innovative solutions? Let's connect!
    Apache Kafka
    Linux
    Java
    Python
    DevOps
    Prometheus
    MQTT
    MongoDB
    Kubernetes
    Docker
    RESTful API
    Microservice
    Back-End Development
    C#
    .NET Core
  • $90 hourly
    Given my experience level, I prefer milestone-based work; it makes more sense than hourly work. I have been working as a software engineer since 2000. Over the past 22 years, I have worked for multiple banks and insurance companies as a Java developer and microservice architect, and I designed an ETL pipeline based on a microservice architecture using Java, Kafka, Spark, and BigQuery. I have experience with machine learning, Spark ML, the Java Spring Framework, Spring Batch, Spring Cloud Data Flow, Kafka, Hadoop, Git, Gradle, and Maven. I favor TDD, design patterns and best practices, and microservice resiliency in my designs.
    Apache Kafka
    Spring Cloud
    ETL Pipeline
    Spring Data
    Spring Boot
    API Integration
    Microservice
    Scrum
    Spring Batch
    Apache Spark MLlib
    MongoDB
    Kubernetes
    Java
    Microsoft Azure
    Docker
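The profile above describes ETL pipelines built on Kafka with a microservice architecture. A common pattern in such pipelines is at-least-once delivery: process a record first and commit its offset only afterwards, so a crash between the two steps causes a replay rather than data loss. The sketch below simulates that commit discipline in plain Python, with a list standing in for the Kafka topic; all names are illustrative, not a real client API.

```python
# At-least-once consumption, simulated: the "topic" is a plain list and
# the "committed offset" is an integer. A record's offset is committed
# only after processing succeeds, so a crash before the commit means the
# record is processed again on restart -- duplicated, never lost.
# (Illustrative only; a real pipeline would use a Kafka consumer client.)

def consume(records, process, committed_offset=0):
    """Process records from committed_offset on; commit after each success."""
    for offset in range(committed_offset, len(records)):
        process(records[offset])       # side effect, e.g. load into warehouse
        committed_offset = offset + 1  # commit only after processing succeeds
    return committed_offset

seen = []
offset = consume(["a", "b", "c"], seen.append)
# Resuming from the returned offset reprocesses nothing new.
assert consume(["a", "b", "c"], seen.append, offset) == 3
assert seen == ["a", "b", "c"]
```

Because redelivery is possible, downstream processing in this pattern is usually made idempotent (e.g. upserts keyed by record ID) so replays are harmless.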
  • $85 hourly
    - 15 years of experience in data science, data warehousing, business intelligence, advanced analytics, ETL/ELT, data visualization, virtualization, database programming, and data engineering.
    - Experience in machine learning, especially customer 360, linear regression, and decision trees.
    - Specialized in end-to-end business intelligence and analytics implementations.
    - ER (entity relationship) modeling for OLTP and dimensional modeling (conceptual, logical, physical) for OLAP.
    - Experience running startup companies and building SaaS products, including CRM (customer relationship management) and low-code data orchestration tools.
    - Experience working in Agile Scrum methodologies (2- and 3-week sprints).
    - Excellent communication skills and a good understanding of business and client requirements.
    - Good at technical documentation and POCs (proofs of concept).
    - Good at discussions with stakeholders for requirements and demos.
    - Convert business requirements into technical design documents with pseudocode.
    - Dedicated; work with minimal supervision.
    - Eager to learn new technologies; can explore, learn, and develop quickly on client-owned applications.
    - Expert in SQL, T-SQL, and PL/SQL, including advanced functions and features; good at database programming.
    - Good at performance tuning, clustering, indexing, partitioning, and other DBA activities.
    - DBA activities such as database backup/recovery, monitoring database health, killing long-running queries, and suggesting better tuning options.
    - Good at database programming and normalization techniques (all 3 normal forms).
    - Expert in Azure Synapse, PostgreSQL, MongoDB, DynamoDB, Google Data Studio, Tableau, Sisense, SSRS, SSIS, and more.
    - Domain knowledge in telecom, finance/banking, automobile, insurance, telemedicine, healthcare, and virtual clinical trials (CT).
    - Extensive DBA knowledge and work experience in SQL Server: login management, database backup and restore, monitoring database loads, and tuning methods.
    - Exceptionally good with Azure ML and regression models.
    Expertise:
    Database: Snowflake, Oracle SQL and PL/SQL (OCP certified), SQL Server, T-SQL, SAP HANA, Azure SQL Database, Azure Synapse Analytics, Teradata, MySQL, NoSQL, PostgreSQL, and MongoDB
    ETL: Azure Data Factory, dbt, SSIS, AWS Glue, Matillion CDC & ETL, Google BigQuery, Informatica PowerCenter and Cloud, ODI, DataStage, MSBI (SSIS, SSAS)
    Reporting/visualization: Sisense, Qlik Sense, Sigma Computing, Metabase, QlikView, SSRS, Domo, Looker, Tableau, Google Data Studio, Amazon QuickSight, and Power BI
    Scripting languages: Unix, Python, and R
    Cloud services: Google Cloud Platform (BigQuery, Cloud Functions, Data Studio), MS Azure (Azure Blob Storage, Azure Function Apps, Logic Apps, Azure Data Lakehouse, Databricks, Purview, ADF, and microservices), Azure ML, AWS RDS, EC2, S3, Amazon Redshift, Step Functions, and Data Pipelines
    Data virtualization: Denodo
    Apache Kafka
    C#
    Snowflake
    ETL
    Data Warehousing
    Business Intelligence
    Data Visualization
    Azure Machine Learning
    Qlik Sense
    Looker
    Sisense
    Microsoft Power BI
    SQL
    Tableau
  • $25 hourly
    PROGRAMMING TECHNOLOGY EXPERTISE * Python, Django, FastAPI, Flask, Selenium, REST API * React.js, Next.js, Vue.js, Angular * React Native * Flutter DEVOPS & CLOUD & CYBER SECURITY EXPERTISE * AWS cloud solution design and development * OpenSearch, Elasticsearch, Kibana, and Logstash setup, configuration, and development integration * Ansible * Docker * Jenkins * GitLab-based CI/CD * Prometheus and Grafana * SIEM * Suricata/Snort * Bro (Zeek) * HashiCorp Vault * Cybersecurity project development and consultation * Kong API gateway integration
    Apache Kafka
    Amazon Elastic Beanstalk
    Flutter
    React Native
    PostgreSQL Programming
    ELK Stack
    AWS CloudFront
    Amazon S3
    RESTful API
    AWS Lambda
    DevOps
    Next.js
    React
    Python
    Django
    AWS Amplify
  • $35 hourly
    🏆 Google Certified TensorFlow Developer 🏆 AWS Certified Machine Learning - Specialty Engineer 🏆 AWS Certified Data Analytics - Specialty Engineer 5+ years of comprehensive industry experience in computer vision, Natural Language Processing (NLP), Predictive Modelling and forecasting. ➤ Generative AI Models 📍 OpenAI ( GPT - 3/4, ChatGPT, Embeddings ) 📍 GCP PaLM, Azure OpenAI Service 📍 Stable Diffusion - LoRA, DreamBooth 📍 Large Language Models (LLMs) - BLOOM, LLaMA, Llama2, Falcon ➤ Generative AI Frameworks 📍 LangChain 📍 Chainlit 📍 Pinecone - Vector database 📍 Langfuse ➤ ML Frameworks 📍 TensorFlow 📍 PyTorch 📍 Huggingface 📍 Keras 📍 Scikit-learn 📍 Spark ML 📍 NVIDIA DeepStream SDK Development ➤ DevOps 📍CI/CD 📍Git, Git Action 📍AWS - CodeCommit, CodeBuild, CodeDeploy, CodePipeline, CodeStar ➤ Cloud Skills 📍 AWS - SageMaker, Comprehend, Translate, Textract, Polly, Forecast, Personalize, Rekognition, Transcribe, IoT Core, IoT Greengrass 📍 GCP - Vertex AI, AutoML, Text-to-Speech, Speech-to-Text, Natural Language AI, Translation AI, Vision AI, Video AI, Document AI, Dialogflow, Contact Center AI, Timeseries Insights API, Recommendations AI 📍 Azure - Azure ML ➤ Sample work Applications include but are not limited to: 📍 Sales forecasting 📍 Recommendation engines 📍 Image classification 📍 Object segmentation 📍 Face recognition 📍 Object detection & object tracking 📍 Stable Diffusion Generative AI 📍 Augmented Reality 📍 Emotion analysis 📍 Video analytics and surveillance 📍 Text analysis and chatbot development 📍 Image caption generation 📍 Similar Image search engine 📍 Fine-tuning large language models (LLMs) 📍 ChatGPT API
    Apache Kafka
    Artificial Intelligence
    Amazon Redshift
    AWS Glue
    Google Cloud Platform
    Amazon Web Services
    Image Processing
    Python
    Amazon SageMaker
    Computer Vision
    TensorFlow
    Machine Learning
    Google AutoML
    PyTorch
    Natural Language Processing
    Deep Learning
  • $20 hourly
    As a certified cloud architect and seasoned cloud engineer, I specialize in creating affordable, resilient, and secure cloud infrastructures for companies of all sizes. I have extensive experience with a variety of frameworks, including React, Next.js, and Spring Boot, as well as programming languages including Python, JavaScript, TypeScript, and Java. I write clean, scalable, and well-documented Python, work comfortably with asynchronous processes, and can process images and videos using FFmpeg. Beyond my technical abilities, I have strong communication skills and understand the value of efficient communication in any project. I am committed to understanding your particular business needs and collaborating with you to deliver solutions that satisfy them. I know numerous databases, both SQL and NoSQL, as well as AWS services like SQS, SNS, and RDS. Whether you need assistance building and sustaining cloud-based apps or deploying and managing cloud infrastructure, I have the skills to deliver the outcomes you require.
    Apache Kafka
    AWS Glue
    Amazon Cognito
    Cloud Architecture
    API Integration
    Django
    Terraform
    TypeScript
    DevOps
    AWS Development
    AWS Lambda
    Docker
    Kubernetes
    Amazon Web Services
    Python
  • $40 hourly
    Data Engineer with over 5 years of experience developing Python-based solutions and leveraging Machine Learning algorithms to address complex challenges. I have a strong background in Data Integration, Data Warehousing, Data Modelling, and Data Quality. I excel at implementing and maintaining both batch and streaming Big Data pipelines with automated workflows. My expertise lies in driving data-driven insights, optimizing processes, and delivering value to businesses through a comprehensive understanding of data engineering principles and best practices. KEY SKILLS Python | SQL | PySpark | JavaScript | Google Cloud Platform (GCP) | Azure | Amazon Web Services (AWS) | TensorFlow | Keras | ETL | ELT | dbt | BigQuery | Bigtable | Redshift | Snowflake | Data Warehouse | Data Lake | Dataproc | Dataflow | Data Fusion | Dataprep | Pub/Sub | Looker | Data Studio | Data Factory | Databricks | AutoML | Vertex AI | Pandas | Big Data | NumPy | Dask | Apache Beam | Apache Airflow | Azure Synapse | Cloud Data Loss Prevention | Machine Learning | Deep Learning | Kafka | Scikit-learn | Data Visualisation | Tableau | Power BI | Django | Git | GitLab
    Apache Kafka
    Data Engineering
    dbt
    ETL
    Chatbot
    CI/CD
    Kubernetes
    Docker
    Apache Airflow
    PySpark
    Machine Learning
    Exploratory Data Analysis
    Python
    SQL
    BigQuery
  • $100 hourly
    I have over 4 years of experience in Data Engineering, especially using Spark and PySpark to extract value from massive amounts of data. I have worked with analysts and data scientists, conducting workshops on Hadoop/Spark and resolving their issues with the big data ecosystem. I also have experience with Hadoop maintenance and building ETL pipelines, especially between Hadoop and Kafka. You can find my profile on Stack Overflow (link in Portfolio section), where I help mostly with spark- and pyspark-tagged questions.
    Apache Kafka
    MongoDB
    Data Warehousing
    Data Scraping
    ETL
    Data Visualization
    PySpark
    Python
    Data Migration
    Apache Airflow
    Apache Spark
    Apache Hadoop

How it works

1. Post a job (it’s free)

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How to Hire Top Apache Kafka Developers

Looking for a high-throughput, fault-tolerant, data streaming solution for processing large volumes of messages? An Apache Kafka developer can help.

So how do you hire Apache Kafka developers? What follows are some tips for finding top Apache Kafka consultants on Upwork.

How to shortlist Apache Kafka professionals

As you’re browsing available Apache Kafka consultants, it can be helpful to develop a shortlist of the professionals you may want to interview. You can screen profiles on criteria such as:

  • Technology fit. You want a developer who understands how to integrate Apache Kafka with the rest of your technology stack.
  • Project experience. Screen candidate profiles for specific skills and experience (e.g., building a website activity tracking pipeline).
  • Feedback. Check reviews from past clients for glowing testimonials or red flags that can tell you what it’s like to work with a particular Apache Kafka developer.

How to write an effective Apache Kafka job post

With a clear picture of your ideal Apache Kafka developer in mind, it’s time to write that job post. Although you don’t need a full job description as you would when hiring an employee, aim to provide enough detail for a contractor to know if they’re the right fit for the project.

Job post title

Create a simple title that describes exactly what you’re looking for. The idea is to target the keywords that your ideal candidate is likely to type into a job search bar to find your project. Here are some sample Apache Kafka job post titles:

  • Need help building low-latency log aggregation solution with Apache Kafka
  • Seeking Java developer with Kafka Pepper-Box and JMeter expertise
  • Developing a Change Data Capture (CDC) agent with Kafka

Apache Kafka project description

An effective Apache Kafka job post should include:

  • Scope of work: From message brokers to real-time analytics feeds, list all the deliverables you’ll need.
  • Project length: Your job post should indicate whether this is a smaller or larger project.
  • Background: If you prefer experience with certain industries, software, or developer tools, mention this here.
  • Budget: Set a budget and note your preference for hourly rates vs. fixed-price contracts.

Apache Kafka developer responsibilities

Here are some examples of Apache Kafka job responsibilities:

  • Design and develop data pipelines
  • Manage data quality
  • Implement data integration solutions
  • Troubleshoot and debug data streaming processes

Apache Kafka developer job requirements and qualifications

Be sure to include any requirements and qualifications you’re looking for in an Apache Kafka developer. Here are some examples:

  • Proficiency in Java and/or Scala
  • Data streaming
  • CDC
  • Data engineering

Apache Kafka Developers FAQ

What is Apache Kafka?

Apache Kafka is an open-source stream-processing solution developed by LinkedIn and later donated to the Apache Software Foundation. The software platform aims to provide a low-latency, high-throughput solution for processing real-time data feeds.

Apache Kafka uses the publish/subscribe messaging pattern common in distributed systems. Kafka instances typically exist as clusters of nodes called brokers that can receive messages from multiple producers (any apps sending data to the cluster) and deliver them to multiple consumers (any apps receiving data from the cluster). Producers publish messages to Kafka topics (i.e., categories of messages), while consumers subscribe to Kafka topics. It is through this topic categorization that the brokers are able to determine where messages need to be delivered.
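The topic-based routing described above can be sketched as a minimal in-memory model. This is plain Python, not real Kafka client code: the class and topic names are illustrative only, and real Kafka persists messages in partitioned, replicated logs that consumers pull from, rather than invoking callbacks.

```python
from collections import defaultdict

class Broker:
    """Toy stand-in for a Kafka broker: routes messages by topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> consumer callbacks

    def subscribe(self, topic, consumer):
        self.subscribers[topic].append(consumer)

    def publish(self, topic, message):
        # Deliver the message to every consumer subscribed to this topic.
        for consumer in self.subscribers[topic]:
            consumer(message)

broker = Broker()
received = []
broker.subscribe("page-views", received.append)  # consumer A
broker.subscribe("page-views", lambda m: None)   # consumer B (discards input)

broker.publish("page-views", {"user": 42, "url": "/pricing"})  # a producer
broker.publish("clicks", {"user": 42})  # no subscribers for this topic

print(received)  # [{'user': 42, 'url': '/pricing'}]
```

The key property the sketch captures is decoupling: the producer only names a topic, and the broker decides which consumers receive the message.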

Apache Kafka is a popular choice among developers looking to build message brokers, website activity trackers, and analytics pipelines that must deal with large volumes of real-time data from disparate sources.

How much does it cost to hire an Apache Kafka developer?

The first step to determining the cost to hire an Apache Kafka developer will be to define your needs. Rates can vary due to many factors, including expertise and experience, location, and market conditions.

Cost factor #1: project scope

The first variable to consider when determining scope is the nature of the work that needs to be completed. Not all Apache Kafka development projects are created equal. Creating a simple log aggregator to collect log files off different servers into a central place for processing will typically take less time than building out a multistage data streaming pipeline for your SaaS (software-as-a-service) product.

Tip: The more accurately your job description describes the scope of your project, the easier it will be for talent to give you accurate cost estimates and proposals.

Cost factor #2: Apache Kafka developer experience

Choosing the right level of expertise for the job is closely tied to how well you determined the scope of your project. You wouldn’t need an advanced Apache Kafka developer to create your own custom site analytics dashboard using Kafka. On the other hand, building a large-scale enterprise messaging system will require the skills of a seasoned Apache Kafka developer.

Beyond experience level, you need to consider the type of experience the talent possesses. The following table breaks down the rates of the typical types of Apache Kafka developers you can find on Upwork.

Rates charged by Apache Kafka developers on Upwork

| Level of Experience | Description | Hourly Rate |
| --- | --- | --- |
| Beginner | Familiarity across the technology stack. Data engineering fundamentals (e.g., data streaming, data quality, data integration). Can use Kafka for basic website tracking, messaging, and data streaming. | $40-70+ |
| Intermediate | Professional full-stack developers or data engineers. Experience working with high-throughput data needs, microservices architectures, and multistage data streaming pipelines. | $70-100+ |
| Expert | Advanced full-stack developers or data engineers with years of experience in big data. Capable of managing teams of developers and engineers. Advanced knowledge of application architectures, data streaming technologies, and data processing solutions. | $100-130+ |

Cost factor #3: location

Location is another variable that can impact an Apache Kafka developer’s cost. It’s no secret that you can leverage differences in purchasing power between countries to gain savings on talent. But it’s also important to factor in hidden costs such as language barriers, time zones, and the logistics of managing a remote team. The real advantage to sourcing talent remotely on Upwork is the ability to scan a global talent pool for the best possible person for the job. Location is no longer an obstacle.

Cost factor #4: independent contractor vs. agency

The final variable regarding talent cost is hiring an independent contractor vs. an agency. An agency is often a “one size fits all” model, so you’ll often have access to a designer, a project manager, an engineer, and more. When hiring individuals you have total autonomy regarding who is responsible for which part of the project, but you’ll need to source each of those skills separately.

The trade-off between hiring individuals vs. hiring an agency is the level of administrative overhead you incur personally in coordinating tasks among all members of the team. Project scope and personal preference will determine which style is a better fit for your needs.

Apache Kafka developer tips and best practices

Understand your partition data rate limitations

In Kafka, messages are organized into topics that can be divided into a number of smaller partitions. Partitions allow your Kafka cluster to process the data in a particular topic in parallel across multiple brokers. This capacity for parallel processing is what enables Kafka to deliver high-throughput messaging.

Of course, even high-throughput systems have their limitations. Messages sent to a partition exist in a log for a configurable period of time or until a configurable size limit is reached. Exceed that retention limit prematurely and you can start losing messages before consumers pull them from the topic partition.

That’s why it’s important to understand the data rate of your topic partitions. Simply multiply the average message size times the number of messages per second to calculate your average retention rate. This will enable you to figure out how much retention space is required to guarantee data is retained for the desired period of time.

Widen those consumer socket buffers for high-speed ingestion

The default settings for consumer socket buffers tend to be around 100 KB (Kafka 2.4.x), which is too small for high-throughput environments. For low-latency, high-bandwidth networks (10 Gbps or higher), it might be necessary to bump those values up to 8 or 16 MB. You can tune the consumer’s TCP receive buffer with the “socket.receive.buffer.bytes” parameter in librdkafka-based clients, or the equivalent “receive.buffer.bytes” parameter in the Java client.
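A sketch of what this might look like as a consumer configuration dictionary, using librdkafka-style property names; the broker address, group id, and buffer size here are placeholder assumptions, not recommendations:

```python
MiB = 1024 * 1024

# Consumer settings for a high-bandwidth network (values are illustrative).
consumer_config = {
    "bootstrap.servers": "broker1:9092",    # placeholder broker address
    "group.id": "high-throughput-readers",  # placeholder consumer group
    # Widen the TCP receive buffer well past the default for fast networks.
    "socket.receive.buffer.bytes": 16 * MiB,
}

print(consumer_config["socket.receive.buffer.bytes"])  # 16777216
```

With a librdkafka-based client such as confluent-kafka, a dict like this is passed directly to the `Consumer` constructor; the Java client takes the analogous `receive.buffer.bytes` property instead.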

Tune your memory buffer and batch sizes for high-throughput producers

On the producer side of the equation, high-throughput environments will likely require a change to the default sizes for your “buffer.memory” and “batch.size” parameters. These values are trickier to set than your consumer socket buffers, as they depend on a number of factors, including producer data rate, number of partitions, and the total memory you have available. Larger buffers aren’t necessarily always better, because having too much data buffered on-heap can lead to increased garbage collection, a process that competes for resources and hurts your performance. Best practices should be established based on the unique configuration and settings of your Kafka data streaming system.
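One rough way to reason about those two settings is to work backward from your data rate. All the numbers below are assumptions for illustration, not recommendations:

```python
# Assumed workload characteristics.
producer_rate = 20 * 1024 * 1024   # 20 MiB/s of outgoing data
partitions = 32                    # partitions being written to
linger_ms = 10                     # how long the producer may wait to fill a batch

# batch.size: room for the data one partition accumulates per linger window.
per_partition_rate = producer_rate / partitions
batch_size = int(per_partition_rate * linger_ms / 1000)

# buffer.memory: enough to absorb, say, a 2-second broker stall without
# blocking send() calls.
stall_seconds = 2
buffer_memory = producer_rate * stall_seconds

print(batch_size, buffer_memory)  # 6553 41943040
```

The resulting figures are starting points at best; garbage-collection pressure and observed broker behavior should drive the final values, as noted above.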
