Hire the best Apache Tapestry developers

Check out Apache Tapestry developers with the skills you need for your next job.
Clients rate Apache Tapestry developers 4.8 out of 5, based on 775 client reviews.
  • $40 hourly
    I am a developer focused on providing highly efficient software solutions. - Full-stack developer (web applications and websites) - Big data engineer (Hadoop, PySpark, Hive, MapReduce) - E-commerce developer (Magento 2, WooCommerce) - Installation, setup, and management of VPSs and big data clusters - Web application security checks and advanced security testing (penetration testing) - Expert PHP developer - Experienced PHP Laravel developer. I am ready to take on new experiences and to engage in serious work.
    Laravel
    Python
    Apache Spark
    Cloudera
    MongoDB
    Apache HBase
    Apache Hadoop
    PHP
    JavaScript
    CakePHP
  • $50 hourly
    A backend software engineer with more than 6 years of experience. I have worked with large-scale backend/distributed systems and big data systems. A DevOps engineer with 4 years of experience, both on-premises and on AWS, experienced with K8s, Terraform, Ansible, and CI/CD. Currently working in a Principal Engineer / Solution Architect role.
    DevOps
    Docker
    Elasticsearch
    GraphQL
    Scala
    Serverless Computing
    Architectural Design
    Kubernetes
    Apache Spark
    Apache Kafka
    Apache Hadoop
    Amazon Web Services
    API Development
  • $38 hourly
    💡 If you want to turn data into actionable insights, apply the 5 V's of big data, or turn your idea into a complete web product, I can help. 👋 Hi, my name is Prashant and I'm a computer engineer. 💡 My true passion is creating robust, scalable, and cost-effective solutions, mainly using Java and open-source technologies. 💡 During the last 11 years, I have worked with: 💽 Big data: Apache Spark, Hadoop, HBase, Hive, Impala, Flume, Sqoop. 🔍 Search: Elasticsearch, Logstash, Kibana, Lucene, Apache Solr, Filebeat, Winlogbeat. ☁️ Cloud services: AWS EMR, AWS S3, AWS EC2, AWS RDS, AWS Elasticsearch, AWS Lambda, AWS Redshift. My 5-step approach 👣: requirements discussion + prototyping + visual design + backend development + support = success! Usually, we customize that process depending on the project's needs and final goals. How to start? 🏁 Every product requires a clear roadmap and meaningful discussion to keep everything in check. But first, we need to understand your needs. Let's talk! 💯 Working with me, you will receive a modern, good-looking application that meets all guidelines, with easy navigation, and of course you will have unlimited revisions until you are 100% satisfied with the result. Keywords that you can use to find me: Java Developer, Elasticsearch Developer, Big Data Developer, Team Lead for Big Data Applications, Corporate, IT, Tech, Technology.
    Big Data
    ETL
    Data Visualization
    SQL
    Amazon Web Services
    Amazon EC2
    ETL Pipeline
    Data Integration
    Data Migration
    Logstash
    Elasticsearch
    Apache Kafka
    Apache Spark
    Apache Hadoop
    Core Java
  • $400 hourly
    I excel at analyzing and manipulating data, from megabytes to petabytes, to help you complete your task or gain a competitive edge. My first and only language is English. My favorite tools: Tableau, Alteryx, Spark (EMR & Databricks), Presto, Nginx/OpenResty, Snowflake, and any Amazon Web Services tool/service (S3, Athena, Glue, RDS/Aurora, Redshift Spectrum). I have these third-party certifications: - Alteryx Advanced Certified - Amazon Web Services (AWS) Certified Solutions Architect - Professional - Amazon Web Services (AWS) Certified Big Data - Specialty - Amazon Web Services (AWS) Certified Advanced Networking - Specialty - Amazon Web Services (AWS) Certified Machine Learning - Specialty - Databricks Certified Developer: Apache Spark™ 2.X - Tableau Desktop Qualified Associate. I'm looking for one-time and ongoing projects. I especially enjoy working with large datasets in the finance, healthcare, ad tech, and business operations industries. I possess a combination of analytic, machine learning, data mining, and statistical skills, plus experience with algorithms and software development/authoring code. Perhaps the most important skill I possess is the ability to explain the significance of data in a way that others can easily understand. Types of work I do: - Consulting: How to solve a problem without actually solving it. - Doing: Solving your problem based on your existing understanding of how to solve it. - Concept: Exploring how to get the result you are interested in. - Research: Finding out what is possible, given a limited scope (time, money) and your resources. - Validation: Guiding how your existing or new team is going to solve your problem. My development environment: I generally use a dual-computer, quad-monitor setup to access my various virtualized environments over my office fiber connection. This allows me to use any OS needed (macOS, Windows, *nix) and also to rent any AWS hardware needed for faster project execution and to simulate clients' production environments as needed. I also have all tools installed in the environments where they make the most sense. I'm authorized to work in the USA. I can provide signed nondisclosure, noncompete, and invention assignment agreements above and beyond the Upwork terms if needed. However, I prefer to use the pre-written Optional Service Contract Terms: www [dot] upwork [dot] com/legal#optional-service-contract-terms.
    CI/CD
    Systems Engineering
    Google Cloud Platform
    BigQuery
    DevOps
    Apache Spark
    Web Service
    ETL
    Data Science
    Predictive Analytics
    Docker
    SQL
    Amazon Redshift
    Tableau
    Amazon Web Services
  • $66 hourly
    I am a data professional who has worked with many MNCs and delivered some enormous data engineering and data science projects in the past. My focus is always on building scalable, sustainable, and robust software. Python/Scala programming, Linux admin, data wrangling, data cleansing, and data extraction services utilizing Python 3 or Python 2, or Scala/Spark, on Linux or Windows. I slice, dice, extract, transform, sort, calculate, cleanse, collect, organize, migrate, and otherwise handle data management for clients. Services provided: - Big data processing using Spark/Scala - Building large-scale ETL - Cloud management - Distributed platform development - Machine learning - Python programming - Algorithm development - Data conversion (Excel to CSV, PDF to Excel, CSV to Excel, audio) - Data mining - Data extraction - ETL data transformation - Data cleansing - OCR (Optical Character Recognition with Tesseract) - Linux server administration - Anaconda Python / Conda / Miniconda administration - LXC/LXD virtualization / Linux containers - Website & data migrations. As a long-time data engineer, my technical experience includes the gamut of skills required to get an ETL up and running: server design & construction, datacenter selection, server colocation, web server software setup/configuration (Apache, NGINX), databases (MySQL), server control panels, server migrations, etc. ### Charges are for a FULL-TIME JOB; for a specific job they may vary ###
    Docker
    ETL Pipeline
    Amazon Web Services
    Hive Technology
    MongoDB
    Data Modeling
    Big Data
    Database Architecture
    Scala
    Apache Hadoop
    SQL
    Apache Spark
    Apache Airflow
    Python
    Apache Kafka
  • $100 hourly
    I have over 4 years of experience in data engineering (especially using Spark and PySpark to gain value from massive amounts of data). I have worked with analysts and data scientists, conducting workshops on working in Hadoop/Spark and resolving their issues with the big data ecosystem. I also have experience in Hadoop maintenance and building ETL, especially between Hadoop and Kafka. You can find my profile on Stack Overflow (link in the Portfolio section) - I help mostly with spark- and pyspark-tagged questions.
    MongoDB
    PySpark
    Data Migration
    Apache Airflow
    Python
    Data Warehousing
    Data Scraping
    Data Visualization
    ETL
    Apache Kafka
    Apache Hadoop
    Apache Spark
  • $30 hourly
    🏆 Expert in creating robust, scalable, and cost-effective solutions using big data technologies for the past 9 years. 🏆 My main areas of expertise are: 📍 Big data - Apache Spark, Spark Streaming, Hadoop, Kafka, Kafka Streams, HDFS, Hive, Solr, Airflow, Sqoop, NiFi, Flink 📍 AWS cloud services - AWS S3, AWS EC2, AWS Glue, AWS Redshift, AWS SQS, AWS RDS, AWS EMR 📍 Azure cloud services - Azure Data Factory, Azure Databricks, Azure HDInsight, Azure SQL 📍 Google cloud services - GCP Dataproc 📍 Search engine - Apache Solr 📍 NoSQL - HBase, Cassandra, MongoDB 📍 Platform - Data warehousing, data lakes 📍 Visualization - Power BI 📍 Distributions - Cloudera 📍 DevOps - Jenkins 📍 Accelerators - Data quality, data curation, data catalog
    SQL
    AWS Glue
    PySpark
    Apache Cassandra
    ETL Pipeline
    Apache Hive
    Apache NiFi
    Apache Kafka
    Big Data
    Apache Hadoop
    Scala
    Apache Spark
  • $30 hourly
    I am a data scientist with a Master's degree in Physics and a passion for big data analysis and machine learning. I have worked with datasets from several major companies, including financial services organizations, banking institutions, industrial organizations, and logistics and transportation companies. I have extensive experience building machine learning models that achieve high-performing results; one of my models recently won me the first-place prize ($6,000) in the Uber Nairobi challenge, where I optimized an ambulance deployment strategy to minimize the distance between accidents and ambulance locations in Nairobi. I have worked on several other projects, including remote sensing analysis, image processing, data visualization, and a variety of machine learning methods for predictive modeling and unconventional models. I have expert knowledge of mathematics, statistics, probability, machine learning, algorithms, and several programming languages and tools. My skillset includes: - ETL, database architecture, data mining - Selenium, BeautifulSoup, and other extraction tools - SQL mastery using any dialect (MySQL, PostgreSQL, Microsoft SQL Server) - The Python data science stack: NumPy, Pandas, scikit-learn, Matplotlib, Jupyter - Deep learning models using TensorFlow and PyTorch - NLP specialization - Linear regression, logistic regression, GAM models, econometrics - Clustering, PCA, decision trees, and other simple machine learning - Complex machine learning models such as gradient boosted machines using XGBoost - Data visualization in ggplot, Matplotlib, Plotly. Interesting projects I've done in the past: Uber Nairobi Ambulance Perambulation Challenge - using ML to create an optimized ambulance deployment strategy in Nairobi; a multi-output regression algorithm to predict monthly pay-as-you-go payments for Solar Frontier Company; AI4D Baamtu Datamation - an automatic speech recognition model for Wolof for use in public transport; Lacuna - Correct Field Detection - designing a method to accurately find field locations; CGIAR Crop Yield Prediction - predicting maize yields on East African farms using satellite data; AutoInland Vehicle Insurance Claim - predicting whether a client will submit a vehicle insurance claim in the next 3 months; AI4D iCompass Social Media Sentiment Analysis for Tunisian Arabizi - classifying sentiment in the Tunisian Arabizi dialect; AI4D Malawi News Classification Challenge - classifying Malawi news articles in Chichewa; DSN AI Bootcamp Qualification Hackathon - predicting customers who will default on a loan.
    API Development
    Data Extraction
    Data Analysis
    Machine Learning
    Feature Extraction
    Anomaly Detection
    Classification
    Neural Network
    Machine Learning Model
    Blockchain
    SQL
    Predictive Analytics
    Exploratory Data Analysis
    Algorithm Development
    Python
  • $40 hourly
    Development experience in information management solutions, ETL processes, database design, and storage systems; responsible, able to work and solve problems independently. Software Developer / Integration Process Architect, Envion Software: created a Hadoop cluster system to process heterogeneous data (ETL, Hadoop cluster, RDF/SPARQL, NoSQL DB, IBM DashDB); ETL processes for large volumes of database data; data warehouse creation and support. Database Developer and Data Scientist, a software development company: programming, analytics, stream processing. Associate Professor, Saint Petersburg State University: member of the Database and Information Management Research Group.
    Java
    DataTables
    Data Management
    Apache Spark
    Apache Hadoop
    Pentaho
    BigQuery
    Apache Airflow
    ETL Pipeline
    Python
    SQL
    Scala
    ETL
  • $39 hourly
    I'm a dynamic data expert with a proven ability to deliver short- or long-term projects in the data engineering, data warehousing, and business intelligence realm. My passion is to partner with my clients to deliver top-notch, scalable data solutions that provide immediate and lasting value. I specialise in the following data solutions: ✔️ Data strategy advisory & technology selection/recommendation ✔️ Building data warehouses using modern cloud platforms and technologies ✔️ Creating and automating data pipelines, real-time streaming & ETL processes ✔️ Data cleaning and processing ✔️ Data migration (heterogeneous and homogeneous). Some of the technologies I most frequently work with are: ☁️ Cloud: GCP & Azure 👨‍💻 Databases: BigQuery, Google Cloud SQL, SQL Server, Snowflake, PostgreSQL, MySQL, S3, Google Cloud Storage, Azure Blob Storage, and Elasticsearch ⚙️ Data integration/ETL: Matillion, Apache Airflow (Cloud Composer), Google Cloud Dataflow (Apache Beam Python SDK), Azure Data Factory, and Azure Logic Apps 🔑 Scripting: Python for API integrations and data processing 🤖 Serverless solutions: Google Cloud Functions and Azure Functions 🧰 CI/CD: Google Cloud Build and Azure DevOps 🛠 Others: API integrations in Python, Google Compute Engine, Google Cloud Pub/Sub, and much more. == What my clients say about me == ------------------------------------------------------ "Waqar is very clued up and he thinks outside the box. He is always looking for ways to implement the solution efficiently and cost effectively. His communication skills are excellent. He is willing to go the extra mile. He has been a pleasure to work with. I will definitely be working with him in the future." ⭐⭐⭐⭐⭐ ------------------------------------------------------ "Waqar has expert level knowledge of Google Cloud and knows how to make cloud technologies work effectively for the marketing domain." ⭐⭐⭐⭐⭐ ------------------------------------------------------ I am highly attentive to detail, organised, efficient, and responsive. Let's get to work! 💪
    API Integration
    PySpark
    Databricks Platform
    Microsoft SQL Server
    PostgreSQL
    Microsoft Azure
    Data Warehousing & ETL Software
    Snowflake
    SQL
    Python
    Microsoft Power BI
    BigQuery
    Apache Airflow
    Google Cloud Platform
    Data Engineering
  • $150 hourly
    **Available to start projects September 20th, 2023** (currently traveling!) I am an experienced analytics developer with experience building end-to-end products. I specialize in helping companies build out their initial analytics infrastructure from scratch. This includes: 1) Building the codebase to collect and aggregate data from various data sources (APIs, web scraping, other databases, etc.). 2) Creating the data model and selecting the appropriate database for your specific use case. 3) Evaluating and choosing the right BI tool for you, or deciding to go the route of a custom application. 4) Building out dashboards or the custom application. Industries I have experience in: - Manufacturing / Chemicals / IIoT - Advertising/Marketing - Insurance - Finance. Skills: - Create & manage databases - Provision cloud computing servers - Create data mining/scraping scripts - Create analytics UIs with Plotly Dash - Create linear/logistic regression models - Create classification models - Automate reporting. Databases: PostgreSQL, RDS, MySQL, Redshift, Oracle, SQL Server, MongoDB. Database skills: database architecture, database design, ETL pipelines. Programming languages: Python, R, SQL, HTML, CSS. Tools: Apache Airflow, Apache NiFi, Apache Superset, Plotly Dash, Tableau, Amazon QuickSight, Power BI. Cloud: AWS, DigitalOcean, Google Cloud, Azure.
    Database Modeling
    Dash
    Industrial Internet of Things
    Plotly
    PostgreSQL
    Analytics
    SQL
    Python
    Marketing Analytics
  • $40 hourly
    🏅 Expert-Vetted | 🏆 100% Job Success Rate | ⭐ 5-Star Ratings | 🕛 Full-Time Availability | ✅ Verifiable projects | ❇️ 7,000+ Hours Introduction 🎓 I am a seasoned product developer with over a decade of experience in the automation, data science, and big data domains. Specializing in generative AI projects and SaaS products, and leading teams of multiple developers, I have unique expertise in converting LLM-based MVPs into production-grade applications. Utilizing event-driven asynchronous programming and introducing retry mechanisms, I strive to make pipelines robust and reliable, innovating and excelling in the industry. Technical Expertise 💻 👉 Generative AI 🤖: I specialize in creating cutting-edge generative AI solutions, leveraging the latest frameworks and technologies. Vector databases: Pinecone: utilizing Pinecone for large-scale vector search and similarity scoring. Chroma: implementing Chroma, the open-source embedding database, for efficient vector operations. Milvus: leveraging Milvus for hardware-efficient advanced indexing, achieving a 10x performance boost in retrieval speed. Supabase, pgvector: employing these databases for real-time management and PostgreSQL vector operations. Frameworks: LangChain: at its core, LangChain is a framework built around LLMs, used for chatbots, generative question answering (GQA), summarization, and more; it allows chaining together different components for advanced use cases around LLMs. Auto-GPT: an autonomous GPT-4 experiment, an open-source application showcasing GPT-4 capabilities by chaining together LLM calls. LlamaIndex, BabyAGI, SuperAGI: utilizing these frameworks for indexing, early-stage AGI development, and advanced AGI solutions. Dolly 2.0: working with Dolly 2.0, a 12B-parameter language model based on the EleutherAI Pythia model family, for creative content generation. Platforms like Hugging Face and Replicate.com: collaborating on these platforms for model sharing, version control, and collaboration. Converting LLM-based MVPs, LLaMA 2, Amazon Polly, speech-to-text, OpenAI, the RAG approach, chain of thought, optimizing LLM memory, a generative-AI-based course generator, a chatbot builder project. 👉 Big Data 📊: I have extensive experience in handling large-scale data, ensuring efficiency and accuracy in processing and analysis: expertise in building machine learning and ETL pipelines from scratch; expertise in Kafka, Apache Spark, Spark Streaming, MapReduce, Hadoop; geospatial analysis, machine learning techniques, VAS applications in a telco environment; experience with the ELK stack; cloud environments: proficient in AWS, GCP, Azure. 👉 Web Development 💻: I offer comprehensive web development solutions, focusing on scalability, user experience, and innovative technologies. Languages: proficient in Python, Java, Scala, NodeJS. Frontend frameworks: mastery of React and other modern frontend technologies. Asynchronous programming, backend development, search technology, CI/CD tools. Cloud environments: Heroku, AWS, Azure, GCP. Specialization in building SaaS products 🚀: I have a strong background in designing and developing Software as a Service (SaaS) products, ensuring scalability, reliability, and innovation. My experience ranges from backend development to deploying complex systems end-to-end. My portfolio reflects a blend of cutting-edge innovation and practical application. My specialized knowledge of generative AI, big data, web development, and SaaS products highlights my proficiency across domains. If you're seeking a versatile and results-driven engineer with a strong, innovative track record, I would love to hear from you.
    AI Chatbot
    Elasticsearch
    Amazon Web Services
    ETL
    Data Visualization
    Salesforce CRM
    Big Data
    Web Development
    React
    Tableau
    ChatGPT
    Data Science
    Machine Learning
    Python
    Apache Spark
  • $85 hourly
    Do you need a data scientist or a data engineer specializing in big data? I have worked more than 500 hours on Upwork coding data science and big data engineering projects for clients like you, performing data quality work, ETL, data ingestion from data lakes, and master data design. I am an excellent data scientist/data engineer, committed to delivering top-rated performance. With ten years of experience as an engineer with an entrepreneurial background, my tech stack is Apache Spark, Dask, Python, Python Flask, SQL, and machine learning. I am fully capable of taking charge of your data-driven application using Python, Flask, Apache Spark, Dask, and SQL from scratch to production. Data science and analytics: * Python (files & notebooks) * NumPy * Pandas * scikit-learn * SQL * NoSQL. Machine learning and model prediction: * Supervised learning (classification and regression) * Unsupervised learning * Natural language processing (NLP) * Deep learning with PyTorch * Recommender systems. Big data analysis: * Apache Spark * Dask * SQL * NoSQL. Data engineering: * Data modeling * Cloud data warehouse architecture * OLAP cubes * ETL pipelines * Data lakes with Spark * Data pipelines with Airflow * Python * Pandas * PySpark * Python Flask * Dask * Deduplication and normalization of data. Full-stack development: * RESTful APIs with Python Flask * Data-driven web applications. Data science analytics, such as descriptive and inferential analysis, predictive analytics, and machine learning, will be performed on your project. I have experience designing components of your project's ETL and big data using Python Pandas DataFrames, Dask, and Apache Spark (PySpark). I hold a Bachelor's in Civil Engineering; in March 2019 I completed the Data Science Nanodegree from Udacity, and I am currently taking the Data Engineer Nanodegree from Udacity. I am talented, creative, and very hard working. Your project will get completed, and it's in the safe hands of a professional data scientist/data engineer who will deliver within your budget and by the given deadline. Whether you belong to a team that needs a data scientist to perform a specific task or need data engineering, you are receiving the best of both worlds: quickly understanding machine learning and data science, cleaning and performing analysis on your data, frictionlessly turning your projects into web applications, or simply maintaining your web application code. I can save you money and time by integrating your data science projects while building their production web application. I will quote you a reasonable estimate only after going through all the details and covering all aspects of the project. Contact me today. Cheers, and talk to you soon! Best regards, Atif Z. FYI: Relentlessly working within the deadline until I have derived accurate and excellent results is my motto. I am very thorough in my work, and I don't cut any corners.
    ETL Pipeline
    Data Modeling
    Data Warehousing
    Data Management
    AWS Glue
    Marketing Data Analytics
    Database Architecture
    Business Intelligence
    PySpark
    Apache Spark
    pandas
    SQL
    Supervised Learning
    Machine Learning
    Data Science Consultation
    Data Science
    PostgreSQL
  • $100 hourly
    Senior technologist (18+ years) with strong business acumen and technical experience in big data and the cloud. A results-oriented, decisive leader in the big data and cloud space who combines an entrepreneurial spirit with corporate-refined execution in tech strategy. • Architect - Big Data and Cloud (AWS, Azure, Google Cloud) with 18+ years of professional experience in the analysis, design, and development of enterprise-grade applications • Databricks Certified Developer - Apache Spark 2.x (2019) • AWS Solutions Architect Certified (2018) • AWS Big Data Specialty Certified (2019) • Good experience with AWS services (EC2, EMR, S3, RDS, Athena, Glue, CloudTrail, Redshift) • Deep expertise in the Spark and Hadoop ecosystem • Experience in setting up enterprise data lakes (cloud and on-premises) and deploying big data solutions on clusters of 50 to 200 servers • Proven ability to learn quickly and apply new technologies with an innovative approach • Previous experience in architecture design, database design, and performance management. I am AWS Solutions Architect Associate and AWS Big Data Specialty certified. I also have prior experience working with Java and Python, and I am hands-on with PySpark and Spark SQL. I am interested in work on big data and cloud solution design and implementation. Since I have been working a lot in the healthcare and telecom domains, security considerations in the cloud are one of my areas of expertise. I have good English conversational skills; my role requires a lot of interaction with clients, and this is one of my strong areas.
    Microsoft Azure
    Google Cloud Platform
    YARN
    Apache Hadoop
    Apache Spark
    AWS Fargate
    Amazon ECS
    AWS Application
    AWS Lambda
    Machine Learning
    Big Data
  • $60 hourly
    Expert developer, 7+ years in the field, with team-leading experience. Data engineering: Airflow, PySpark, AWS, Snowflake. Backend: FastAPI, microservice architecture. Web scraping: Cloudflare, Incapsula, Akamai.
    PostgreSQL
    Apache Airflow
    ETL Pipeline
    MongoDB
    Data Mining
    Data Science
    Machine Learning
    Data Scraping
    SQL
    MySQL Programming
    Flask
    API
    Amazon ECS
    Scrapy
    Python
  • $60 hourly
    An IT architect with 15 years of experience in enterprise software development, big data, and data science projects. I lead a small team of data scientists and software engineers who can deliver ready-to-use solutions using innovative technologies. Buzzwords: ML (XGBoost, SL), NLP (Google BERT), Python (Django), high load (10K simultaneous connections, 100K req/sec), NoSQL (Hadoop, Spark, Cassandra, MongoDB), virtualization (KVM, ESXi, Xen open source, custom QEMU drivers), blockchain (smart contracts, Ethereum, Solidity).
    AWS Lambda
    Smart Contract
    Blockchain
    Cloud Architecture
    GIS
    Databases
    Artificial Intelligence
    Google APIs
    Apache Kafka
    Kubernetes
    Computer Vision
    TensorFlow
    Machine Learning
    Big Data
    Python
  • $35 hourly
    I am a big data engineer / Scala developer with 16 years of experience with the full software development lifecycle. My primary languages are Scala and Java, with experience in development frameworks such as Play, Akka, and Slick. Database competencies include RDBMS (PostgreSQL, MySQL) and big data (Spark, Hadoop, HBase, Sqoop, Pig, Hive, Vertica, Kafka). Expertise in working throughout a project's full lifecycle, having worked on every phase. Experience in agile delivery of software using practices from Scrum, Kanban, etc. I am very keen to learn new technologies and business knowledge. I accept challenges and can work independently without close assistance. For employment prospects, I am open to contract roles at negotiable rates; I can assure my stability with the company and am available immediately. Buzzwords: Scala, Hadoop, HBase, Hive, ZooKeeper, Vertica, Redis, hand-made DB with off-heap storage, Play Framework, Slick, Akka, Java SE, Java EE, Spring, EJB, Hibernate, MyBatis, Vaadin, Struts, JSTL, JSP, PostgreSQL, PL/SQL, MySQL, JavaScript, Prototype, Dojo, ExtJS, JBoss, Tomcat, IntelliJ IDEA, Eclipse.
    Apache HBase
    Big Data
    Vertica
    Akka
    Java
    Scala
    Apache Spark
    Apache Hive
    PostgreSQL
    Apache Hadoop
  • $30 hourly
    ● 5+ years of architecture design experience in e-commerce and Bitcoin trading platforms. ● 11+ years of full-stack software development with Node.js, React, Java, and Python. ● 3+ years of data engineering experience using Hive, Presto, Spark, and Flink. Technical skills: ● Programming: Node.js, React, Java, Python, Solidity ● Database: MySQL, Oracle, PostgreSQL, Elasticsearch, MongoDB, Hive, Presto, Kudu, Neo4j, Redis, Spark, Flink, Airflow, Dataproc, BigQuery ● Operating systems: Mac, Linux ● Extensive knowledge: J2EE, Spring, Spring Boot, Tomcat, MyBatis, Spring Cloud Alibaba, Dubbo, Vue, React, Next.js, NestJS, GraphQL, Kafka, RocketMQ, Stream, Kubernetes
    NodeJS Framework
    JavaScript
    GraphQL
    Java
    React
  • $100 hourly
    I help companies design and build software. My interests are big data and machine learning. In recent years I have been particularly focused on scalable computing and data grids. Skills: Apache Spark, Apache Kafka, Apache Hadoop, GigaSpaces, machine learning, Scala, Java, and Spring. I can be found blogging at dyagilev.org
    Spring Framework
    Java
    Scala
    Apache Hadoop
    Apache Kafka
    Apache Spark
  • $100 hourly
    I'm a Scala/Python software developer with machine learning experience. I have about 15 years of experience in software development and a master's degree in Applied Mathematics. Areas of expertise include machine learning, big data, ETL, web development, and general IT. I work well independently and within a team. I have 5 years of experience as a team/technical lead, leading a team of 6-7 senior developers and about 10 projects in the R&D area. My main areas of expertise are: - Python, Scala - Spark, Spark Structured Streaming, Spark ML - XGBoost, SciPy, NumPy, scikit-learn - Django, Flask, Celery
    Natural Language Processing
    Computer Vision
    TensorFlow
    Open Neural Network Exchange
    PyTorch
    Tesseract OCR
    Software Architecture & Design
    Scala
    Python
    Data Engineering
    Apache Spark
    Machine Learning
  • $35 hourly
    🔍🚀 Welcome to a world of data-driven excellence! 🌐📊 Greetings, fellow professionals! I am thrilled to introduce myself as a dedicated Data Consultant / Engineer, leveraging years of honed expertise across a diverse spectrum of data stacks 🌍. My journey has been enriched by a wealth of experience, empowering me with a comprehensive skill set that spans Warehousing📦, ETL⚙, Analytics📈, and Cloud Services☁. Having earned the esteemed title of GCP Certified Professional Data Engineer 🛠, I am your partner in navigating the complex data landscape. My mission is to unearth actionable insights from raw data, shaping it into a strategic asset that fuels growth and innovation. With a deep-rooted passion for transforming data into valuable solutions, I am committed to crafting intelligent strategies that empower businesses to flourish. Let's embark on a collaborative journey to unlock the full potential of your data. Whether it's architecting robust data pipelines ⛓, optimizing storage solutions 🗃, or designing analytics frameworks 📊, I am dedicated to delivering excellence that transcends expectations. Reach out to me, and together, let's sculpt a future where data powers success. Thanks!
    PySpark
    Machine Learning
    Natural Language Processing
    Informatica
    Data Science
    Data Warehousing
    Snowflake
    Data Analysis
    Big Data
    BigQuery
    ETL
    Apache Airflow
    Apache Hadoop
    Apache Spark
    Databricks Platform
    Python
    Apache Hive
  • $80 hourly
    A Design Thinker, Product Manager, and AI Solutions Architect with 10+ years of building both B2B and B2C data-first products and apps, big data solutions, and machine learning algorithms. Versatile experience working with big brands, startups, and emerging unicorns. Proficiencies: - Product Strategy & Roadmap - Product-Market Fit & Competitor Research - Product Design - Wire-framing, UX and usability research - Rapid prototype development - Product refreshes - Agile project management - Scrum, Kanban & XP; wearing the Product Owner & Scrum Master hats - Creating Epics, User Stories, acceptance criteria - Very familiar with deployment strategies, clouds, and technical architectures across AWS, Azure & GCP; across monolithic to microservices architectures; and across RDBMS to NoSQL and graph databases (from my data science background) - Modern PM tools: Jira, Trello, Asana, Aha! and more. - Native comfort with SQL, R, and other data and analysis languages. (English too!) Product Portfolio: Auto AI, NashWorks (Enterprise AutoAI Platform) As lead engagement architect, I led the design, development, and delivery of a generic, fully automated AI platform for enterprises to plug in their data and deploy algorithms around predictive forecasting for demand & sales for a seed-stage startup. This platform is in the customer beta phase. VIDULEARN, Cerebronics (Classroom Intelligence Platform) As Product Owner, I led the platform design team towards building the next big classroom intelligence platform, fusing concepts of lecture capture and blended learning with advanced video processing technologies and state-of-the-art AI for content recognition and summarization to provide a holistic augmented classroom experience for students and teachers.
MYNTELLIGENCE, NashWorks (Marketing Intelligence Platform) As Lead Product Manager, I led the buildout of a new-age cross-channel marketing intelligence platform integrating visual campaign management, automated & custom analytics, AI-driven optimization recommendations, and multi-channel segmentation capabilities from conception to launch, and helped the client hit $500k annualized revenues within the beta phase. VIDEOTELLIGENT, Cerebronics (Video Resyndication Platform) As Product Architect, I led the design and development of a Nordic video resyndication platform aimed at creating a marketplace for content creators, publishers, consumers, and advertisers to come together in a three-way syndication platform with multiple AI-assisted, virality-driven commercial models. MS ASSIST, Harman (Global Admin Assistance Chatbot Platform) As the Platform Architect, I led the development of a chatbot platform for Microsoft’s global security & administration division, incorporating the ability for multiple teams within the division to visually create and deploy their own chatbots to fulfill employee services responsibilities that they owned. Scaled the platform to 30+ chatbots handling 10000+ sessions/day, with accretive time-value cost savings estimated at more than $1 million/month. MS GSOC, Harman (Global Security Management using IoT) As the Platform Architect, I led the design and development of Microsoft’s global physical security monitoring platform, bringing 3000+ locations with 75000+ cameras, 15000+ stenofones, and 150,000+ RFID readers online into the platform with real-time asset monitoring and data warehousing for further analytical solutions. LOCATE, MediaIQ (Location Intelligence & Targeting Platform) I started (and led) this Campaign Targeting & Insights Product, tying the offline (real) world to the online and ever-increasing mobile worlds. Managed 5 billion records/day and hit $1M/month in revenue within 6 months of launch.
ELEVATE, MediaIQ (Measurement & Insights Framework for Qualitative Ad Response and Campaign Performance) I conceptualized and built the first version of this segment-first qualitative measurement & insights product. I led the development & scaling team for it, handling 25+ billion records/day and hitting $1M/month in revenue within 8 months of launch. MACRO, MediaIQ (Multi-Source Data Stitching Platform for Intelligent, Performant Ad Campaigns) I built the data ingestion, storage, analysis, and information retrieval architectures, connecting over 50 datasets with petabyte-scale RTB data to empower traders and analysts to deliver data-driven, performant ad campaigns. Managed and ran queries on a dataset of over 30 petabytes, spanning trillions of records and thousands of columns. MUSTANG, Mu Sigma (Real-Time Text Analytics Platform) I built the agent-driven real-time analytics stack and algorithms for processing and analyzing text data for insights, to be deployed within a JADE platform.
    User Experience Design
    Big Data
    Data Science
    R
    Business Intelligence
    Minimum Viable Product
    Product Management
    Data Analysis
    Product Strategy
    PostgreSQL
    SQL
    Statistics
    Product Design
    Design Thinking
    Quantitative Analysis
    Lean Startup
    Demo Presentation
  • $20 hourly
    I’m a developer with experience in Big Data/AI and the Spring framework. 1. Experienced in PySpark/HBase/Redis/Kafka/Spark/Flink - 6+ years 2. Experienced in TensorFlow/PyTorch - 2+ years 3. Experienced in HTML/Spring MVC - 1+ year 4. Experienced in AWS components such as EMR/EC2/S3/CodeDeploy - 1 year 5. Experienced in GCP components such as Cloud Storage/Dataproc/BigQuery - 1 year
    Google Cloud Platform
    PySpark
    ETL Pipeline
    Artificial Intelligence
    Spring MVC
    Apache Flink
    Apache Spark
    AWS Application
    Data Science
    Java
    Big Data
    Recommendation System
    Scala
    Python
    Data Mining
  • $125 hourly
    Machine Learning Engineer skilled in all things ChatGPT, OpenAI, and LangChain related. I have delivered stellar results on 50+ projects, have $100k+ and 2k+ hours billed, all with glowing reviews. High-quality code, a deep understanding and passion for machine learning, and excellent communication skills are the keys to my success. Looking for projects that will have a profound, positive impact. Not interested in helping pump out more time-wasting content to the world or further optimizing ad space on social media sites. Reviews: ⭐"My experience with Adam was great! We worked on a very ambitious project, and he handled it extremely well. He took time to understand the scope of the project and did not hesitate to go the extra mile to deliver something great. He would definitely be a great asset for your project."⭐ - Antoine Krajnc, CEO of Jedha ⭐"Adam is very smart and has a great attitude. He is thorough in data exploration and forms very well-thought-out and tested hypotheses. Recommended!"⭐ - Jeffrey Scholz, Senior Software Engineering Manager at Yahoo ⭐"Adam is one of the best freelancers I've ever hired. He completed quality work within an extremely short time frame. Highly recommended freelancer!"⭐ - Christian Mayer, CEO of Finxter.com ⭐"Great communication, rapid progress, and a successfully completed project."⭐ - Waylon Flinn, Founder of Dataship ⭐"Adam is wonderful to work with. We've been working together on a number of data analysis projects for a research-based firm. He is reliable, efficient in his work, and very thorough. I love that he asks thoughtful questions and critiques data when the numbers or responses don't add up and draws meaningful insights and conclusions."⭐ - Camille Kennedy, Marketing and Sales Strategist ⭐"Adam is attentive, efficient, and a delight to work with! Adam's data science knowledge is vast, and he is always eager to tackle new problems and issues. 
He is also eager on sharing his knowledge and is extremely comfortable articulating methods and ideas he has."⭐ - Ayush Joshi, Data Analyst at Just Eat for Business Qualifications: ✅ Fluent Python programming (scripting, Jupyter notebooks, Google Colab) ✅ Deep learning (TensorFlow, Keras, PyTorch, RNNs, CNNs, LSTMs, GRU) ✅ State-of-the-art machine learning and hyperparameter tuning (XGBoost, LightGBM, CatBoost, Scikit-Learn, Hyperopt, Optuna, Weights & Biases) ✅ Data visualization (Matplotlib, Seaborn, Plotly, Streamlit) ✅ GPU and TPU assisted (distributed) training ✅ Tabular, time-series, sequence, and image data ✅ Classification, regression, and prediction Previous Projects: 📊 Seawater metal detection and classification - TensorFlow sequence model 📊 Cryptocurrency market predictions - TensorFlow time series model 📊 Earthquake damage analysis and prediction - LightGBM and Scikit-Learn 📊 Global geospatial data analysis and modeling - Pandas, NumPy, Matplotlib, etc. 📊 Unsupervised clustering of scraped academic paper data 📊 Data analysis of market research and surveys Other: 🔸 Written 40+ technical blog articles for a range of AI companies. Thus, I am that rare breed who can code excellently while also communicating clearly and effectively to stakeholders. This description is just a highlight; if you don't see what you're looking for on my profile, please reach out, and if I can't help you, I will point you in the right direction.
 Looking forward to hearing from you and how I can help with your machine learning project!
    Machine Learning Model
    XGBoost
    scikit-learn
    PyTorch
    Keras
    TensorFlow
    Data Modeling
    Data Analysis
    Neural Network
    Deep Learning
    Artificial Intelligence
    Python
    Data Science Consultation
    Data Science
    Machine Learning
  • $50 hourly
    "She is very good in coding. She is the best and to go person for any hadoop or nifi requirements." "Abha is a star; have successfully handed the project in a very professional manner. I will definitely be working with Abha again; I am very happy with the quality of the work. 🙏" "Abha Kabra is one of the most talented programmers I have ever meet in Upwork. Her communication was top-notch, she met all deadlines, a skilled developer and super fast on any task was given to her. Perfect work is done. Would re-hire and highly recommended!!" A highly skilled and experienced Big Data engineer with over 5 years in the field and a strong background in the analysis, design, and development of Big Data and Hadoop-based projects using technologies such as: ✅ Apache Spark with Scala & Python ✅ Apache NiFi ✅ Apache Kafka ✅ Apache Airflow ✅ ElasticSearch ✅ Logstash ✅ Kibana ✅ MongoDB ✅ Grafana ✅ Azure Data Factory ✅ Azure Pipelines ✅ Azure Databricks ✅ AWS EMR ✅ AWS S3 ✅ AWS Glue ✅ AWS Lambda ✅ GCP ✅ Cloud Functions ✅ PostgreSQL ✅ MySQL ✅ Oracle ✅ Ansible ✅ Terraform ✅ Logo/Book Cover Design ✅ Technical Blog Writing A proven track record of delivering high-quality work that meets or exceeds client expectations. Deep understanding of energy-related data, IoT devices, the hospitality industry, the retail market, ad tech, and data-encryption projects, having worked with a wide range of clients including Marriott, P&G, Vodafone UK, and eXate UK. Able to quickly understand client requirements and develop tailored solutions that address their unique needs. Very communicative and responsive, keeping clients informed every step of the way. A quick learner, always eager to explore new technologies and techniques to better serve clients. Familiar with Agile methodology, with active participation in daily Scrum, sprint, and retrospective meetings, and experienced in all phases of the project life cycle. A strong team player and a leader with good interpersonal and communication skills, ready to take on independent challenges.
    Apache NiFi
    PySpark
    Databricks Platform
    ETL Pipeline
    Big Data
    Apache Kafka
    Grafana
    Kibana
    Apache Spark
    PostgreSQL
    Microsoft Azure
    MongoDB
    Scala
    Python
    Elasticsearch
    Google Cloud Platform
    Amazon Web Services
  • $25 hourly
    8+ years of experience in the software industry. Mobile developer working mainly in Android, with a Java background. Graduated in Electronics and Telecommunication Engineering from the University of Moratuwa, Sri Lanka. Certifications: 1. MCD - Mule Certified Developer 2. AWS Certified Solutions Architect - Associate. Skills: 1. Android/Java 2. Spring Boot 3. Mule 4. Spark 5. AWS 6. Ionic
    Ionic
    Mobile
    Logstash
    Apache Spark
    Mulesoft
    Mule
    AWS Application
    Microservice
    Android
    Java
  • $60 hourly
    I’m a software developer with over 10 years of experience building Java back ends for websites and data processing tools. I also have experience with AWS (EMR, Lambda, DynamoDB, S3, Redis) and with SQL and NoSQL databases (Neo4j). Currently working as a Big Data developer (Spark, AWS). Additional languages: Ukrainian
    Unit Testing
    Big Data
    API
    Machine Learning
    AWS Lambda
    Software Architecture & Design
    Python
    Spring Framework
    Java
  • Want to browse more freelancers?
    Sign up

How it works


1. Post a job (it’s free)

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.

Trusted by 5M+ businesses

How do I hire an Apache Tapestry Developer on Upwork?

You can hire an Apache Tapestry Developer on Upwork in four simple steps:

  • Create a job post tailored to your Apache Tapestry Developer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Tapestry Developer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Tapestry Developer profiles and interview.
  • Hire the right Apache Tapestry Developer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Tapestry Developer?

Rates charged by Apache Tapestry Developers on Upwork can vary with a number of factors including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Tapestry Developer on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Tapestry Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Tapestry Developer team you need to succeed.

Can I hire an Apache Tapestry Developer within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Tapestry Developer proposals within 24 hours of posting a job description.

Schedule a call