Apache Kafka Jobs

7 jobs were found based on your criteria

Hourly - Intermediate ($$) - Est. Time: 3 to 6 months, 10-30 hrs/week - Posted
Our company has several Big Data & Hadoop projects in its delivery pipeline. We are looking for solid resources with excellent Big Data & Hadoop skills (Spark, Hive, Pig, Kafka, Storm, Elasticsearch, Solr, Python, Scala) for ongoing work with our clients in the US and worldwide. If you are interested, please send us links to your Big Data & Hadoop portfolio and tell us why we should consider you for this role. We are looking for a long-term relationship. (See the attached job description for the dev role.)
Skills: Apache Kafka, Apache Hive, Apache Solr, Elasticsearch
Hourly - Expert ($$$) - Est. Time: Less than 1 week, 10-30 hrs/week - Posted
We need help with our Kafka consumer (version 0.9.0.1) and broker settings for a use case where our consumer groups pull single jobs that can take longer than the session timeout. We are using the new Kafka 0.9.0.1 consumer with consumer groups, and each consumer works on one message that takes between 30 and 90 seconds to complete. If the work exceeds our session timeout of 30 seconds, the consumer has not polled again in time, so it is removed from the group and a rebalance is triggered. We would also rather commit after the long work finishes, so the consumer never got a chance to commit; that same piece of work (the one that takes between 30 and 90 seconds) just gets picked up again whenever processing takes more than 30 seconds. We could move the commit earlier, right after we pull the work and before it is done, but that is not ideal, and it still doesn't address Kafka rebalancing too frequently.

Our relevant configs are below. How would you recommend setting them so that we can handle long-running work without raising values so high that we undermine the benefits of consumer rebalancing?

Consumer configurations:
* heartbeat.interval.ms = 3000 (ms, default)
* session.timeout.ms = 30000 (ms, default; the maximum allowed on our brokers, see below)
* fetch.min.bytes = 1 (default)
* max.partition.fetch.bytes = 1048576 (1024^2, default)

Broker configurations:
* group.max.session.timeout.ms = 30000 (ms, default)
Skills: Apache Kafka, Java
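For a question like the one above, the usual answer on the 0.9 consumer is that heartbeats are only sent from inside poll(), so the client has to keep polling while the long job runs. A common pattern is to pause the assigned partitions, poll in a loop (poll returns no records while paused but keeps the session alive), then resume and commit only once the work is done. Below is a minimal sketch of that pattern in Java; the bootstrap address, group id, topic name, and process() body are placeholders, not details from the post.

    import java.util.Collections;
    import java.util.Properties;
    import java.util.Set;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class LongWorkConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption: broker address
            props.put("group.id", "long-work-group");         // assumption: group id
            props.put("enable.auto.commit", "false");         // commit manually, after the work
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            ExecutorService worker = Executors.newSingleThreadExecutor();
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("jobs")); // hypothetical topic

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    Set<TopicPartition> assigned = consumer.assignment();
                    // Pause fetching so poll() returns nothing but still heartbeats.
                    consumer.pause(assigned.toArray(new TopicPartition[0]));
                    Future<?> work = worker.submit(() -> process(record)); // the 30-90s job
                    while (!work.isDone()) {
                        consumer.poll(100); // keeps the session alive during the long work
                    }
                    consumer.resume(assigned.toArray(new TopicPartition[0]));
                    // Commit just this record's offset, only after the work finished.
                    TopicPartition tp = new TopicPartition(record.topic(), record.partition());
                    consumer.commitSync(Collections.singletonMap(
                            tp, new OffsetAndMetadata(record.offset() + 1)));
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            // Placeholder for the long-running (30 to 90 second) work.
        }
    }

On 0.9 this pause/poll/resume loop is essentially the only way to heartbeat while work is in flight; max.poll.records and max.poll.interval.ms, which make this easier, only arrived in the 0.10.x clients. On the config side, lowering max.partition.fetch.bytes so each poll returns less data, or raising group.max.session.timeout.ms on the brokers so consumers can set session.timeout.ms above the 90-second worst case, are the two knobs the posted settings leave room for.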
Hourly - Entry Level ($) - Est. Time: More than 6 months, 30+ hrs/week - Posted
We're looking for extraordinary CodeIgniter and Eloquent developers to join our development team and work with us long term. Joining Medlanes means you will be part of an international team driven by the idea of improving healthcare with technology and changing the status quo. We collaborate in an open, transparent environment and are a team of mission-driven talents. We believe in consistently improving as individuals, as a team, and as a company. We strongly encourage learning new things and support you in developing your skill set further.

Required skills:
- Expert with the CodeIgniter framework
- API design and development
- Experience working with large databases along with Eloquent

Good to have:
- Has worked on an OAuth implementation
- Has experience with message brokers like Kafka / RabbitMQ
- Has worked with event-driven PHP design
- Can design and architect database layouts
- Additional knowledge of Java and Python

Prepend the word "RockStar" to your cover letter so that we know you are a genuine applicant. Best regards, Erik
Skills: Apache Kafka, API Development, Database design, Java
Fixed-Price - Expert ($$$) - Est. Budget: $200 - Posted
I am looking for a candidate with good knowledge of Apache Kafka, the open-source messaging software, for job support and training. It's urgent: this is an ongoing project and we need a person ASAP.
Skills: Apache Kafka
Fixed Price Budget - Expert ($$$) - $100 to $120 - Posted
In the role of Technology Lead, you will interface with key stakeholders and apply your technical proficiency across different stages of the Software Development Life Cycle, including requirements elicitation, application architecture definition, and design. You will play an important role in creating the high-level design artifacts. You will also deliver high-quality code for a module, lead validation for all types of testing, and support activities related to implementation, transition, and warranty. You will be part of a learning culture where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.

Qualifications

Basic:
- Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
- At least 4 years of experience with Information Technology.

Preferred:
- At least 4 years of solid experience in the software industry, with strong experience in Big Data technologies.
- Strong expertise in and understanding of Big Data technologies, with a strong focus on HortonWorks.
- Strong knowledge of Kafka, Flume, Sqoop, Hive, Pig, MapReduce, Spark, and Storm, with hands-on experience in all or most of these.
- Solid technology expertise in J2EE and related technologies.
- Strong Unix scripting skills.
- Very comfortable with Agile methodologies.
- Good knowledge of data warehouse and BI technologies; exposure to ETL and reporting tools and appliances like Teradata.
- Excellent communication skills.
- Expertise in SQL databases (e.g., MySQL or Oracle) and a strong ability to write SQL queries.
- Proven ability to lead a team of engineers.
- Ability to work in an onsite/offshore model.
Skills: Apache Kafka, Apache Hive, Apache Spark, MapReduce
Hourly - Entry Level ($) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
Hi, I have a client base in various regions, and they are looking for Big Data enthusiasts experienced in components such as Hadoop, Spark, Elasticsearch, Logstash, Kibana, Pig, Hive, etc. Kindly send your proposals and a convenient time to discuss further, along with your Skype ID. Also write "Big Data enthu" at the top and bottom of your proposal so that I can tell it is genuine.
Skills: Apache Kafka, Amazon EC2, Apache Flume, Apache Hive