Hire the best Apache Spark Engineers in Alexandria, EG

Check out Apache Spark Engineers in Alexandria, EG with the skills you need for your next job.
  • $5 hourly
    I'm a computer engineer with a CS degree and a strong interest in big data systems, applications of AI, and related subjects. My technical skills span software engineering, data warehousing, ETL, machine learning, distributed systems, and full-stack development. I completed two internships, one in full-stack development and the other in machine learning, and I now work as a Big Data Engineer. I have earned several online certificates, including the Deep Learning Specialization (Coursera), the Big Data Specialization (Coursera), and the Denodo Developer/Admin track. I have experience with big data tools such as Hive, Impala, Spark, NiFi, and Kafka, with stream-processing tools such as Kafka Streams, Spark Streaming, and Flink, and with Informatica BDM for ETL jobs.
    Apache Spark
    PyTorch
    Docker
    SQL
    NoSQL Database
    Informatica
    Apache Hive
    Apache NiFi
    Apache Kafka
    Apache Hadoop
    Big Data
    Machine Learning
    Scala
    Python
    Java
  • $50 hourly
    Experienced lead software engineer with expertise in architecting and building data and ML pipeline platforms, managing cross-functional initiatives, and mentoring teams. Proven track record of delivering scalable solutions and driving significant cost savings. Skilled in collaborating with stakeholders, conducting code reviews, and fostering team culture. Proficient in a wide range of technologies, including Generative AI, LLMs, Scikit-learn, and AWS services. Services I provide include:
    * Data and ML pipeline development
    * Technical project management
    * Stakeholder collaboration and requirements gathering
    * Code reviews and best-practices adherence
    * Team mentoring and professional development coaching
    * Full-stack development with a focus on data security and governance
    Let's discuss how I can help bring value to your projects and contribute to your team's success.
    Apache Spark
    AWS Development
    Apache Airflow
    Amazon ECS
    Pattern Recognition
    Image Processing
    pandas
    Computer Vision
    Scala
    Python
    OpenCV
    Machine Learning
    Java
  • $20 hourly
    Building scalable data pipelines, extracting actionable insights, and leveraging cloud-based technologies to design, develop, and maintain robust data solutions. Skilled in:
    Data Engineering: ETL/ELT, data warehousing, data lakes
    Cloud Platforms: AWS (EMR, Redshift, S3), Spark, Airflow
    Programming: Python, SQL
    Apache Spark
    Amazon Web Services
    Apache Airflow
    Amazon DynamoDB
    Amazon Redshift
    Big Data
    AWS Glue
    Matplotlib
    pandas
    SQL
    Python
  • $15 hourly
    I am a data engineer with experience in data-related tasks. I know several languages, including Python, C++, Java, and R, and have experience working with both SQL and NoSQL databases. I have a good understanding of machine learning algorithms and of data science work such as prediction and estimation, web scraping experience using Selenium, and familiarity with clustering techniques such as K-means and hierarchical clustering. I build dashboards using Power BI and Tableau, have good Excel skills, and perform data analysis using Python and R.
    Apache Spark
    MongoDB
    BigQuery
    Bash
    Apache Airflow
    SQL Server Integration Services
    Talend Data Integration
    Data Engineering
    Data Warehousing & ETL Software
    Data Science
    Data Structures
    Data Analysis
    SQL
    Data Mining
    Python
  • $10 hourly
    As a machine learning engineer, my expertise lies in developing and deploying end-to-end machine learning and deep learning projects. I excel in:
    Model Development: Designing and implementing machine learning and deep learning models tailored to specific use cases.
    Data Preprocessing: Cleaning and preparing datasets to ensure optimal model performance.
    Feature Engineering: Identifying and creating relevant features to enhance model accuracy and robustness.
    Model Training: Utilizing various algorithms and frameworks to train models on diverse datasets.
    Deployment: Implementing production-ready solutions, including deploying models in real-world environments.
    Scalability: Ensuring that machine learning systems are scalable and can handle increased workloads.
    Integration: Seamlessly integrating machine learning solutions
    Apache Spark
    Artificial Intelligence
    Node.js
    Apache Spark MLlib
    Data Structures
    SQL
    PyTorch
    TensorFlow
    Python
    Java
    Data Engineering
    Docker
    PySpark
    Deep Learning
    Machine Learning
  • $25 hourly
    Hello world! I am a dedicated Data Engineer with a strong foundation in Data Structures and Algorithms, Operating Systems, and C Programming. My expertise extends to optimizing data pipelines and ensuring efficient data processing workflows. I thrive on leveraging my skills in database management, ETL processes, and data warehousing to deliver robust solutions that drive business insights. With a passion for problem-solving and a keen eye for optimization, I am committed to harnessing technology to solve complex data challenges and empower decision-making processes. Let's innovate and transform data into actionable intelligence together!
    Apache Spark
    Cloud Computing
    Data Analysis
    Apache Airflow
    Microsoft Excel
    Web Scraping
    NoSQL Database
    Apache Hadoop
    Amazon Web Services
    SQL
    Python
    Microsoft Power BI
    ETL Pipeline
    Data Warehousing & ETL Software
  • $25 hourly
    Delivering Scalable Data Solutions and Optimized Pipelines
    Why Work With Me?
    Technical Expertise:
    Experienced in building scalable and efficient ETL pipelines.
    Strong proficiency in SQL, query optimization, and database management (SQL Server, MySQL, BigQuery).
    Hands-on experience with cloud platforms, including AWS and GCP.
    Skilled in data processing frameworks and workflow automation (Airflow, DBT).
    Expertise in data visualization tools such as Power BI, Tableau, and Google Data Studio.
    Familiar with version control systems (Git, Bitbucket) and Agile methodologies.
    Proficient in Python for data processing, automation, and optimization.
    Soft Skills:
    Adaptable and committed to delivering high-quality solutions.
    Strong analytical and problem-solving mindset.
    Excellent communication and teamwork abilities.
    Detail-oriented with a results-driven approach.
    Reliable, organized, and dedicated to meeting deadlines.
    With a strong foundation in data engineering and a commitment to excellence, I help businesses transform raw data into valuable insights. Let’s collaborate to build efficient and scalable data solutions.
    Apache Spark
    PySpark
    Apache Hadoop
    AWS Glue
    NoSQL Database
    SQL
    Snowflake
    Apache Airflow
    Python
    Apache Kafka
  • $25 hourly
    Hi! I'm ELAF, a passionate and detail-oriented Data Engineer with a strong background in building efficient data pipelines, integrating data warehouses, and transforming raw data into actionable insights. I specialize in Python, SQL, cloud technologies, and advanced data analysis using Pandas.
    Why Choose Me:
    ✔️ Reliable Solutions: I deliver scalable and optimized data workflows tailored to your business needs.
    ✔️ Data-Driven Insights: With expertise in Pandas and data analytics, I ensure your data isn't just stored but drives meaningful decisions.
    ✔️ Efficient Communication: I keep you informed at every step, making complex technical processes simple and transparent.
    ✔️ On-Time Delivery: I take deadlines seriously and always aim to exceed expectations.
    How I Can Help You:
    Design and implement robust data pipelines
    Build and optimize data warehouses for efficient storage and querying
    Perform advanced data analysis for reporting and insights
    Streamline data integration between various sources
    I’m easygoing, open to feedback, and genuinely invested in your success. Let’s work together to unlock the true potential of your data.
    Apache Spark
    Data Warehousing
    AWS Glue
    Apache Hadoop
    Apache Airflow
    NoSQL Database
    SQL
    Big Data
    Web Scraping
    Microsoft Power BI
    Database
    Python
    pandas
    Data Analysis
    ETL
  • $25 hourly
    Why Me? I am a dedicated Data Engineer with expertise in data analysis, data science, database management, and pipeline development. With a strong foundation in SQL, Python, and cloud technologies, I build efficient data solutions that drive insights and business decisions.
    Technical Skills:
    Skilled in data visualization tools (Power BI, Tableau, Excel)
    Strong understanding of data architecture, modeling, and end-to-end solutions
    Experience with SQL Server, MySQL, Snowflake, and NoSQL databases
    Knowledge of ETL processes and data pipeline development
    Proficient in SQL query optimization and performance tuning
    Hands-on experience with Apache Airflow and Apache Spark for workflow automation and big data processing
    Proficient in Pandas for data manipulation and analysis
    Experience with Docker for containerized applications
    Skilled in web scraping for data collection and automation
    Familiar with agile methodologies for project execution
    Understanding of AWS cloud services
    Soft Skills:
    Adaptable and quick to learn new technologies
    Strong analytical and problem-solving mindset
    Detail-oriented and focused on delivering high-quality results
    Dedicated to meeting deadlines and project goals
    Effective team player with leadership abilities
    Reliable and structured in following workflows
    Excellent communicator with a passion for data-driven solutions
    Looking for a skilled Data Engineer who can help with database optimization, ETL pipelines, data visualization, big data processing, and cloud solutions? Let’s work together.
    Apache Spark
    Data Warehousing & ETL Software
    Amazon Web Services
    Apache Hadoop
    Apache Airflow
    Big Data
    Docker
    NoSQL Database
    Microsoft Power BI
    Web Scraping
    SQL
    pandas
    Python
    Data Analysis
    ETL Pipeline
  • $25 hourly
    Why Work With Me?
    **Technical Expertise**
    • Proficient in data visualization tools, including Power BI, Data Studio, and Tableau.
    • Skilled in project architecture, data modeling, and end-to-end solution delivery.
    • Hands-on experience with DBT, Snowflake, and Apache Airflow.
    • Strong expertise in relational databases such as BigQuery, SQL Server, Oracle, MySQL, and Snowflake.
    • Extensive experience in ETL processes, data pipeline development, and SQL performance tuning.
    • Proficient in version control systems like Git.
    • Familiar with agile methodologies for efficient project execution.
    • Working knowledge of Python and Amazon Web Services (AWS).
    **Soft Skills**
    • Highly adaptable, with a strong commitment to delivering quality solutions.
    • Detail-oriented and results-driven, ensuring accuracy and efficiency.
    • Proven ability to meet deadlines without compromising quality.
    • Strong leadership and collaboration skills, fostering effective teamwork.
    • Analytical thinker with an organized and structured approach to problem-solving.
    I am dedicated to helping businesses optimize their data workflows, improve efficiency, and drive actionable insights. Let's connect to discuss how I can contribute to your data engineering projects.
    Apache Spark
    AWS Glue
    Microsoft Power BI
    Apache Airflow
    Apache Hadoop
    NoSQL Database
    NumPy
    Web Scraping
    Data Warehousing
    SQL
    Python
    pandas
    Data Engineering
    Data Analysis
    ETL Pipeline
  • $25 hourly
    🎖 DATA ENGINEER | AWS CERTIFIED | CLOUD & ETL EXPERT 🎖
    I am a Data Engineer with expertise in data architecture, ETL pipelines, cloud solutions, and data modeling. Passionate about transforming raw data into actionable insights, I specialize in building scalable and efficient data pipelines to support business intelligence and analytics. I hold AWS Certified Cloud Practitioner and AWS Certified Data Engineer credentials, demonstrating my ability to design and optimize cloud-based data solutions.
    🚀 Why Choose Me?
    ✅ Technical Expertise:
    Data Engineering & ETL: Designed and optimized data pipelines using Airflow, SQL-based ETL tools, and Python.
    Cloud & Data Warehousing: Expertise in AWS (Glue, Athena, Redshift), Snowflake, and scalable cloud storage solutions.
    Database Management: Skilled in SQL Server, MySQL, and query performance tuning for large-scale datasets.
    Data Modeling & Architecture: Proficient in dimensional modeling (Star/Snowflake schemas) for OLAP and analytics.
    Automation & Optimization: Implemented workflow automation using Python, Airflow, and shell scripting.
    Version Control & CI/CD: Proficient in Git, Bitbucket, and CI/CD for data engineering workflows.
    Data Visualization & Reporting: Strong command of Power BI to create interactive dashboards.
    ✅ Soft Skills:
    Strong problem-solving mindset with a results-driven approach.
    Exceptional attention to detail and commitment to high-quality work.
    Adaptable, team-oriented, and experienced in agile development methodologies.
    Analytical thinker with a structured and organized approach to data projects.
    💡 Key Technical Skills & Tools:
    ⭐ ETL & Data Pipelines: Apache Airflow, AWS Glue, SSIS
    ⭐ Cloud Platforms: AWS (S3, Athena, Redshift), Snowflake
    ⭐ Databases & Query Optimization: SQL Server, MySQL, PostgreSQL
    ⭐ Programming & Scripting: Python (Pandas, NumPy), Shell Scripting
    ⭐ Data Modeling & Warehousing: Star/Snowflake Schema, Kimball Methodologies
    ⭐ Automation & Orchestration: Apache Airflow, Python, SQL-based workflows
    ⭐ Version Control & CI/CD: Git, Bitbucket, DevOps practices
    ⭐ Business Intelligence & Visualization: Power BI, Data Studio
    ⭐ Data Governance & Documentation: Metadata management, data lineage tracking
    🔥 Let’s Build Something Great Together!
    📩 Open to collaborations, freelance projects, and full-time opportunities. Let's connect and bring your data engineering projects to life! 🚀
    Apache Spark
    Data Extraction
    ETL
    Microsoft Excel
    Amazon Web Services
    Cloud Computing
    Web Scraping
    Microsoft Power BI Data Visualization
    NoSQL Database
    SQL
    Python
    ETL Pipeline
    Apache Airflow
    Apache Hadoop
    Data Warehousing & ETL Software
  • $10 hourly
    Data Scientist | AI Enthusiast | Cloud & DevOps Advocate
    I am a results-driven Data Scientist with over 2 years of experience in building, deploying, and optimizing data-driven solutions. My expertise spans the full lifecycle of data science projects, from data engineering to advanced machine learning and DevOps, enabling seamless delivery of innovative, scalable solutions.
    Key Skills & Expertise:
    Predictive Modeling & Machine Learning: Skilled in developing, testing, and deploying robust models using tools like Python, R, TensorFlow, PyTorch, Keras, and OpenCV.
    Data Engineering: Proficient in managing and processing large datasets with tools like Apache Kafka, Apache Spark, BigQuery, SQL/NoSQL, and ETL techniques (extraction, transformation, and loading).
    Programming & Software Development: Advanced Python skills (algorithms, OOP, software design, and Google coding standards), with experience in shell scripting, complex SQL queries, Hive scripts, and Hadoop commands.
    DevOps & Cloud Platforms: Familiar with CI/CD pipelines, Docker, Kubernetes, Ansible, HashiCorp Terraform (IaC), GitHub Actions, and cloud platforms like AWS and GCP.
    Data Visualization & Insights: Expertise in creating insightful dashboards and interactive visualizations using Shiny and Plotly, delivering actionable insights to clients.
    OCR & Document Processing: Experienced in extracting information from scanned documents using Python libraries such as Tesseract and PaddleOCR.
    Linux Systems: Proficient in multiple Linux distributions, including Debian, Ubuntu, SUSE, Red Hat, and CoreOS.
    What I Offer: I specialize in delivering scalable, efficient, and well-documented solutions tailored to client needs. My work is underpinned by clear documentation, meaningful insights, and interactive deliverables, ensuring business impact and user satisfaction.
    Apache Spark
    JavaScript
    Docker
    Data Analytics & Visualization Software
    Apache Kafka
    OCR Software
    Machine Learning
    Linux
    Raspberry Pi
    Data Science
    OpenCV
    Data Cleaning
    R Shiny
    R
    Python
  • $5 hourly
    I'm an aspiring data engineer focused on ETL and on applying my skills to building strong data pipelines. I come with a strong background in industries such as HVAC, underwater robotics, and communications & electronics. I can help you with:
    Handling and processing large datasets.
    Designing and managing data warehouses, ensuring data is organized, secure, and easily accessible for analytics.
    Creating efficient ETL pipelines that ensure data is accurately extracted, transformed, and loaded.
    Integrating data from various sources and APIs, ensuring a unified view of the data landscape.
    My top skills: Apache Spark, Apache Hadoop, SQL databases and data warehouses, and programming in Python.
    Apache Spark
    Office 365
    Data Engineering
    Electrical Design
    Autodesk AutoCAD
    Database
    SQL
    Apache Hadoop
    Data Analysis
    Python
    ETL Pipeline

How hiring on Upwork works

1. Post a job

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire an Apache Spark Engineer near Alexandria on Upwork?

You can hire an Apache Spark Engineer near Alexandria on Upwork in four simple steps:

  • Create a job post tailored to your Apache Spark Engineer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Spark Engineer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Spark Engineer profiles and interview.
  • Hire the right Apache Spark Engineer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Spark Engineer?

Rates charged by Apache Spark Engineers on Upwork can vary based on a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Spark Engineer near Alexandria on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Spark Engineers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Spark Engineer team you need to succeed.

Can I hire an Apache Spark Engineer near Alexandria within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Spark Engineer proposals within 24 hours of posting a job description.