Hire the best Apache Spark Engineers in Shenzhen, CN

Check out Apache Spark Engineers in Shenzhen, CN with the skills you need for your next job.
  • $35 hourly
I'm Fusion Zhu, with over 10 years of experience in Java development, including 5 years focused on big data processing and visualisation using Java, Scala, JavaScript, HTML5, Apache Spark, Apache Hadoop, Apache Hive, Apache Flume, Apache HBase, Storm, Kafka, DataX, and ECharts. Throughout my career: As a Big Data Developer, I've helped employers ingest data from sources such as RDBMSs, NoSQL databases, and files by developing utilities on open-source platforms such as DataXServer (open source on GitHub) and a real-time page-click statistics system (see the Portfolio section). As a Big Data Architect, I've played a key role in building big data platforms with technologies such as Hadoop, Spark, Hive, HBase, Flink, Kafka, and Elasticsearch. As a Java and front-end developer, I've designed and developed web applications, including e-commerce and reporting systems, using Java, Scala, HTML5, JavaScript, CSS, Spring, Akka, MyBatis, D3.js, ExtJS, jQuery, React, ECharts, and Bootstrap. As a team leader, I've managed full-stack teams (Java, front end, QA, and operations). I also have extensive skills and experience in microservice design and architecture, container clouds (Docker, Kubernetes), Rust, and Linux. If you're seeking a reputable and reliable professional who consistently delivers results, I'm the one you're looking for. Thank you for visiting my profile, and I look forward to hearing from you!
    Apache Spark
    React
    Java
    JavaScript
    Scala
    Elasticsearch
    Web Development
    Docker
    OpenLayers
    D3.js
    Rust
    Spring Boot
    Apache Flink
    Apache Kafka
    Apache Hadoop
  • $80 hourly
    I am a back-end development engineer and big data development engineer. If you need to build your own website, I can work as a full-stack engineer and develop it for you quickly. If you're looking for big data development, I'm well qualified. Proficient in Java SE, Python, and object-oriented design. Proficient in the mainstream development frameworks Spring, Spring MVC, MyBatis, MyBatis-Plus, Spring Boot, and Spring Cloud, and capable of full-stack projects. Proficient in the relational databases MySQL and PostgreSQL, and in Redis, MongoDB, HBase, and Hive for back-end development. Proficient in Vue.js, HTML, CSS, JavaScript, and other front-end technologies, and able to build simple front-end projects with Vue. Proficient in using and deploying Kafka, RocketMQ, ZooKeeper, Flume, DataX, and other components. Proficient in big data frameworks such as Hadoop, MapReduce, Spark, and Flink, with project experience in big data cleaning. Familiar with the Linux operating system, common commands, and project deployment. From the start of a project to its end, I safeguard quality, deliver quickly and accurately, and keep costs reasonable so you get value for money.
    Apache Spark
    Apache Zookeeper
    Apache Kafka
    Docker
    Apache HBase
    Apache Flink
    Hive
    Apache Hadoop
    Redis
    MySQL
    Vue.js
    Spring Cloud
    Spring Boot
    Java
  • $50 hourly
    1. Proficient in programming languages such as Java, Scala, Python, and shell scripting.
    2. Familiar with the Spark source code and Spark's working mechanism, including the execution process of Spark tasks; proficient in Spark SQL, Spark Streaming, Structured Streaming, and MLlib.
    3. Familiar with the Flink runtime architecture, the Window API, event-time semantics and watermarks, state management, fault-tolerance mechanisms, state consistency, the Table API and Flink SQL, Flink CEP, Flink CDC, etc.
    4. Proficient in ETL tools such as Elasticsearch, Logstash, and Kibana, and proficient in integrating Kafka and Flume to collect, filter, and analyze streaming data.
    5. Understand HDFS and YARN, have mastered MapReduce principles, and proficiently use Sqoop, Azkaban, Nginx, Redis, Keepalived, ZooKeeper, Storm, Neo4j, and DolphinScheduler.
    6. Proficient in Hive, with an understanding of data warehouse construction, data subject extraction, and multidimensional analysis.
    7. Familiar with the HBase system architecture and storage principles.
    8. Familiar with the architecture and principles of distributed OLAP databases such as Druid, ClickHouse, PostgreSQL, Greenplum, Doris, GBase, and Dameng, and proficient in their use.
    9. Proficient in relational databases such as MySQL, SQL Server, and Oracle.
    10. Familiar with CDH, Ambari, Alibaba Cloud, Huawei Cloud, and AWS.
    11. Experienced with the data lake formats Iceberg, Hudi, and Paimon, the data security framework Ranger, and the metadata manager Atlas.
    12. Proficient in PaaS cloud platform technologies such as Kubernetes, OpenShift, KubeSphere, Harbor, and Docker.
    13. Research and hands-on experience with cloud-native technologies such as Knative and Istio, and with the IaaS platform OpenStack.
    14. Have mastered Spring Boot and back-end stacks such as Eureka, Ribbon, Feign, Hystrix, Zuul, Gateway, Sleuth, Zipkin, Spring Cloud Config, Spring Cloud Stream, Nacos, and Sentinel, as well as Kendo UI, Vu
    Apache Spark
    Greenplum
    Java
    Apache Flink
    Apache Hadoop
  • $15 hourly
    Hello there! Are you searching for a skilled Data Engineer for your project on Upwork? Look no further! As a Data Engineer with 10 years of development experience, including over 6 years of specialized expertise in data development and more than 3 years in Java development, I am well-equipped to handle your project requirements. My services: - Various data processing tasks - SQL scripting and optimization - Python scripting for data manipulation and analysis, including web scraping - Java backend code development - Development of Data Warehousing & ETL Software My skills: - Java, Python, Scala, Shell scripting - SQL, MySQL, PostgreSQL, Hive - Big Data technologies such as Spark, Hadoop, Kafka, and Airflow My experience: - Designing and implementing efficient data pipelines for large-scale data processing - Developing SQL scripts to extract, transform, and load data across various platforms - Creating Python scripts for data manipulation and analysis - Building Data Warehousing & ETL Software to ensure seamless data integration and transformation - Developing Java backend code for data-centric applications I am eager to collaborate with you and contribute to the success of your project as a dedicated and experienced Data Engineer. Let's connect and discuss how I can help you achieve your objectives.
    Apache Spark
    Data Chart
    Data Warehousing & ETL Software
    Hive
    PostgreSQL
    MySQL
    Linux
    Apache Kafka
    Apache Hadoop
    Apache Airflow
    SQL
    Scala
    Python
    Java
  • $20 hourly
    I have extensive experience in big data architecture design, data warehousing, and data analysis. I excel at translating complex business requirements into efficient, reliable data solutions and at collaborating to achieve project goals. I am well versed in the big data technology stack, with a strong understanding of Hadoop, Spark, and related tools, along with excellent communication and problem-solving skills. Skills: 1) Data architecture and design, data warehousing, ETL 2) Big data platform development and maintenance 3) Hadoop ecosystem (HDFS, MapReduce, YARN, Doris) customization and optimization 4) Spark and Flink data processing and analytics 5) Database management and query optimization 6) Programming languages: Python, Java, Scala, SQL 7) Web scraping development (Python, Beautiful Soup, Scrapy, Feapder) 8) Data analytics tools and techniques (Tableau, Superset, FineReport)
    Apache Spark
    Dev & IT Project Management
    Tableau
    Apache Flink
    Architectural Design
    Web Scraping
    Data Center Design
    Data Analytics & Visualization Software
    Data Warehousing & ETL Software
    Big Data
    Apache Airflow
    Data Engineering
    Apache Hadoop
    Database Design
    ETL Pipeline
  • $30 hourly
    * Rich experience in big data processing, with a deep understanding of and practical experience with big data components such as Spark, Kafka, Kudu, Hive, and HBase * In-depth research into high-performance parallel computing, with the ability to solve a variety of complex computing problems * Excellent algorithm research skills, having successfully implemented complex algorithms spanning deep learning, machine learning, knowledge graphs, and big data parallel algorithms
    Apache Spark
    Scala
    Large Language Model
    Machine Learning
    Big Data
  • $25 hourly
    I am an engineer with over 10 years of experience in the software industry, and I can help. 1. Familiar with Java, HTML, JavaScript, Scala, Python, shell, and other languages. 2. Familiar with HTTP and socket network communication principles and the TCP and UDP protocols; familiar with the Netty framework. 3. Familiar with Spring, Spring MVC, Spring Boot, Spring Cloud, Hibernate, MyBatis, and other web service development frameworks. 4. Familiar with Oracle, MySQL, PostgreSQL, Redis, MongoDB, and other database services, and with table and database sharding technologies (MyCat, Sharding). 5. Familiar with deploying, configuring, and optimizing mainstream application servers such as Apache, Tomcat, Jetty, and Nginx. 6. Familiar with Maven, Gradle, Git, SVN, and other project management and continuous integration tools (Docker, Kubernetes, Jenkins). 7. Familiar with common Linux commands and capable of shell programming. 8. Familiar with the principles and use of Redis distributed caching, the Kafka, RabbitMQ, and RocketMQ distributed message queues, and Docker containers and Kubernetes. 9. Familiar with HTML, JavaScript, CSS, Vue, JSP, Ajax, jQuery, and other web development technologies. 10. Familiar with big data components including ZooKeeper, Hadoop, Spark, HBase, Hive, Elasticsearch, ClickHouse, and Flink, with experience storing data at the tens-of-billions-of-records scale. Full project management from start to finish. Regular communication is important to me, so let's keep in touch.
    Apache Spark
    Redis
    Apache Hive
    PostgreSQL
    MySQL
    API Development
    Scala
    Front-End Development
    Unix Shell
    Apache Flink
    Apache Hadoop
    Python Script
    Web Application
    Python
    Java

How hiring on Upwork works

1. Post a job (it’s free)

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire an Apache Spark Engineer near Shenzhen on Upwork?

You can hire an Apache Spark Engineer near Shenzhen on Upwork in four simple steps:

  • Create a job post tailored to your Apache Spark Engineer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Spark Engineer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Spark Engineer profiles and interview.
  • Hire the right Apache Spark Engineer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Spark Engineer?

Rates charged by Apache Spark Engineers on Upwork can vary with a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Spark Engineer near Shenzhen on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Spark Engineers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Spark Engineer team you need to succeed.

Can I hire an Apache Spark Engineer near Shenzhen within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Spark Engineer proposals within 24 hours of posting a job description.