Hire the best Hadoop Developers & Programmers in the United States

Check out Hadoop Developers & Programmers in the United States with the skills you need for your next job.
  • $50 hourly
    I am a data engineer with strong experience in data crawling and data processing, and I can get data from any website in minutes. Please contact me if you need: - Crawling data from any website - Processing the data as you usually do in Excel - Delivering the data in any format you need
    Lead Generation
    Data Scraping
    Data Mining
    Spring Framework
    Amazon
    Big Data
    Java
    Apache Hadoop
    Apache Spark
    Apache Kafka
  • $250 hourly
    BACKGROUND - Harvard & MIT trained Statistician and Machine Learning (ML) Scientist with particular expertise in Quantitative Finance, Natural Language Processing (NLP), and Causal Inference/Causal AI, with 10+ years experience. - Currently Principal Applied Data Scientist at The Cambridge Group (TCG) leading end-to-end (inception to production) DL and Causal Inference projects for Fortune 100 clients in tech, retail, and e-commerce. Clients include: - Levi Strauss & Co - Shutterfly LLC - Coca-Cola Company - Teaching @ Stanford including XCS234: Reinforcement Learning, XCS224W: ML over Graphs & Networks, and XCS221: Artificial Intelligence - Former Chief Data Scientist in the Quantitative Finance and Venture Capital/Private Equity spaces (AIMatters & BuildGroup), leading design and building of Deep Learning, Natural Language Processing (NLP), and Reinforcement Learning systems for identification of investment opportunities & ML-built ETF (Exchange Traded Fund) - Former Research Fellow at Harvard University, and former Adjunct Faculty in Statistics at MCPHS University - Trained in state-of-the-art Causal Inference techniques by pioneering Harvard faculty with expertise in G-Methods (i.e. G-formula, G-Estimation, etc), Doubly-Robust Estimation, Causal Discovery, Targeted Maximum Likelihood Estimation (TMLE), Double Machine Learning My research interests lie at the intersection of Quantitative Finance, NLP, Experimentation, Causal Inference, and Deep Learning. I’m trained in state-of-the-art Causal Inference techniques by pioneering Harvard faculty, and am a proponent of G-Methods (i.e. G-formula, G-Estimation, etc) and Doubly-Robust Estimation techniques. I have deep interests in specialized techniques for leveraging non-Donsker class ML estimators for Causal Inference, including Targeted Maximum Likelihood Estimation (TMLE) and Double Machine Learning (DML). I’m also interested in the connections of these topics to more “traditional” areas of Deep Learning, including Natural Language Processing, Reinforcement Learning, and Knowledge Discovery over Probabilistic Graphical Models. MACHINE LEARNING / DEEP LEARNING EXPERTISE: - Deep Learning (ConvNet, RNN, LSTM, Transformer, etc) - “Traditional” Machine Learning (Random Forests, Gradient Boosting, SVMs, Stacked Ensembles, etc) - Natural Language Processing (NLP) - Computer Vision - Reinforcement Learning - Generative Learning - Probabilistic Graphical Models - Graphical/Network Machine Learning - Recommender Systems - Interpretable AI - ML + Causal Inference (TMLE, Double Machine Learning) STATISTICS EXPERTISE: - Mathematical Statistics - Stochastic Processes - Statistical Learning Theory - Bayesian Inference (Parametric & Nonparametric) - Survival Methods - Advanced Study Designs (Observation, Case-Control, etc) - Experimentation (A/B testing, Multi-Armed Bandits, Adaptive Trial Design, etc) - Causal Inference Methods (G-methods, Propensity Score methods, IV estimators, etc) SOFTWARE/PROGRAMMING STACK: - Scientific Computing: Python (NumPy, Pandas, Scikit-learn, Matplotlib, etc), R, C++, MATLAB, STATA, SAS - DL frameworks & Optimizers: PyTorch, TensorFlow, Optuna, Hyperopt - Distributed Computing: PySpark, MLlib (exposure to native Spark w/ Scala, & Hadoop) - SQL: MySQL, Microsoft SQL Server - Production: Docker, Flask, Airflow, MLflow, Git, CircleCI - Cloud: Amazon AWS (exposure to Microsoft Azure and Google GCP)
    Statistics
    Apache Hadoop
    Big Data
    SQL
    Data Science
    Machine Learning
    R
    MATLAB
    C++
    Python
  • $70 hourly
    I am a Business Analyst and Data Engineer with experience working with Python, R, SAS, SQL, T-SQL, PL-SQL, C#, Spark/Hadoop, and Power BI. I can model data and create databases, reports, and dashboards. I have significant experience creating and managing cloud services in Azure, including Databricks, Data Factory, and Data Lake Storage. I have worked with various relational databases such as MSSQL, MySQL, Oracle, and Snowflake to build out warehouse and reporting solutions. I have a Graduate Certificate in Data Science from Harvard University. I can architect a full cloud ETL solution or simply provide code for your existing environment or projects.
    ETL Pipeline
    Apache Spark
    C#
    Databricks Platform
    Apache Hadoop
    Microsoft Azure SQL Database
    Microsoft Azure
    Data Science
    Python
    SQL
    SAS
    R
  • $175 hourly
    Mr. Joshua B. Seagroves is a seasoned professional having served as an Enterprise Architect/Senior Data Engineer for multiple Fortune 100 Companies. With a successful track record as a startup founder and CTO, Mr. Seagroves brings a wealth of experience to his role, specializing in the strategic design, development, and implementation of advanced technology systems. Throughout his career, Mr. Seagroves has demonstrated expertise in architecting and delivering cutting-edge solutions, particularly in the realm of data engineering and sciences. He has successfully spearheaded the implementation of multiple such systems and applications for a diverse range of clients. As part of his current responsibilities, Mr. Seagroves actively contributes to the prototyping and research efforts in the field of data engineering/data science, specifically in the development of operational systems for critical mission systems. Leveraging his extensive background in architecture and software modeling methodologies, he has consistently led and collaborated with multidisciplinary teams, successfully integrating various distributed computing technologies, including Hadoop, NiFi, HBase, Accumulo, and MongoDB. Mr. Seagroves' exceptional professional achievements and extensive experience make him a highly sought-after expert in his field. His comprehensive knowledge and hands-on expertise in advanced technology systems and big data make him a valuable asset to any organization.
    YARN
    Apache Hadoop
    Big Data
    Apache Zookeeper
    TensorFlow
    Apache Spark
    Apache NiFi
    Apache Kafka
    Artificial Neural Network
    Artificial Intelligence
  • $75 hourly
    Tool-oriented data science professional with extensive experience supporting multiple clients in Hadoop and Kubernetes environments, deployed with Cloudera Hadoop on-premise and Databricks in AWS. My passion is client adoption and success, with a focus on usability. With my computer science and applied math background, I have been able to fill the gap between platform engineers and users, continuously pushing for product enhancements. As a result, I have continued to create innovative solutions for clients in an environment where use-cases continue to evolve every day. I find fulfillment in being able to drive the direction of a solution in a way that allows both client and support teams to have open lanes of communication, creating success and growth. I enjoy working in a diverse environment that pushes me to learn new things. I'm interested in working on emerging solutions as data science continues to evolve.
    R
    Serverless Stack
    React
    Apache Hadoop
    Java
    Cloudera
    AWS Lambda
    Apache Impala
    R Hadoop
    Bash Programming
    PostgreSQL
    Apache Spark
    Python
    AWS Development
    Apache Hive
  • $125 hourly
    🏆 Achieved Top-Rated Freelancer status (Top 10%) with a proven track record of success. Past experience: Twitter, Spotify, & PwC. I am a certified data engineer & software developer with 5+ years of experience. I am familiar with almost all major tech stacks on data science/engineering and app development. If you require support in your projects, please do get in touch.
    Programming Languages: Python | Java | Scala | C++ | Rust | SQL | Bash
    Big Data: Airflow | Hadoop | MapReduce | Hive | Spark | Iceberg | Presto | Trino | Scio | Databricks
    Cloud: GCP | AWS | Azure | Cloudera
    Backend: Spring Boot | FastAPI | Flask
    AI/ML: PyTorch | ChatGPT | Kubeflow | ONNX | spaCy | Vertex AI
    Streaming: Apache Beam | Apache Flink | Apache Kafka | Spark Streaming
    SQL Databases: MSSQL | Postgres | MySQL | BigQuery | Snowflake | Redshift | Teradata
    NoSQL Databases: Bigtable | Cassandra | HBase | MongoDB | Elasticsearch
    DevOps: Terraform | Docker | Git | Kubernetes | Linux | GitHub Actions | Jenkins | GitLab
    Java
    Apache Hadoop
    Amazon Web Services
    Snowflake
    Microsoft Azure
    Google Cloud Platform
    Database Management
    Linux
    Apache Spark
    ETL
    API Integration
    Scala
    SQL
    Python
  • $99 hourly
    With over 20 years of leadership in data storage, processing, and streaming technologies at multinational corporations like Microsoft, IBM, Bloomberg, and Amazon, I am recognized as a Subject Matter Expert in these domains. My portfolio includes the successful design and deployment of large-scale, multi-tier projects utilizing a variety of programming languages (C, C++, C#, Python, Java, Ruby) and both SQL and NoSQL databases, often enhanced with caching solutions. My expertise extends to data streaming products such as Kafka (including Confluent and Apache Kafka), Kinesis, and RabbitMQ, tailored to meet specific project requirements and customer environments. My technical proficiency encompasses a wide range of databases and data processing technologies, including MS SQL, MySQL, Postgres, Comdb2, Cassandra, MongoDB, Hadoop, HDFS, Hive, Spark, and Snowflake. I am equally adept in Unix and Windows environments, skilled in both PowerShell and Bash scripting. As an AWS and Azure Solutions Architect, I have empowered numerous clients with comprehensive AWS and Azure cloud solutions based on each client's needs. My notable projects on Upwork include: 1. Migrating Arcbest's dispatch solution from mainframe to Linux servers with Confluent Kafka, enhancing processing times and reducing latencies. 2. Conducting petabyte-scale big data analysis for Punchh using Snowflake, Kafka, Python, Ruby, AWS S3, and Redshift. 3. Analyzing and comparing various Kafka-like solutions for an investment firm, focusing on adoption and maintenance costs. 4. Implementing ETL solutions with CDC for continuous updates from IBM Maximo to Snowflake via Kafka, and from Oracle to Snowflake, integrating Power BI and Tableau for analytics. 5. Deploying an IoT solution for a logistics firm using Particle and Pulsar devices, MQTT, Kinesis, Lambda, API Gateway, S3, Redshift, MySQL Aurora, and Power BI to monitor real-time delivery metrics and support post-delivery analysis of delivery performance, such as spills, tilts, and bumps. 6. Conducting data analysis for an advertising firm, benchmarking BigQuery and custom dashboards against Redshift with Tableau/Quicksight.
    SQL
    R Hadoop
    Amazon Web Services
    Snowflake
    Solution Architecture
    Apache Solr
    Ruby
    Apache Kafka
    Apache Hadoop
    Apache Cassandra
    Redis
    Python
    Java
    C++
    C#
  • $160 hourly
    Experienced Co-Founder with a demonstrated history of working and innovating in the information technology industry. Skilled in a wide variety of architectures, technologies, and management strategies, as well as educated in multiple engineering and systems disciplines. Successful in a variety of complex projects and initiatives, and in turn, have been able to shepherd others in a strategic and cohesive way through critical tasks. By nature, I am consistently considering and formulating ideas for improvement and innovation. I am a motivated self-starter, dedicated to the execution of radical technological endeavors.
    Big Data
    Apache Hadoop
    Technical Project Management
    Continuous Improvement
    Information Technology
    Analytics
    Cloud Architecture
    DevOps
    Tech & IT
    Ansible
    Linux
    Terraform
  • $75 hourly
    Big Data, Machine Learning, and Data Science are my passion. In that field, I have focused mostly on NLP, NLU, and OCR/handwriting recognition. I love to teach and train people in the area and in how it can be used.
    Database
    SQLAlchemy
    Apache Hadoop
    Big Data
    SQL Server Integration Services
    SQL Server Reporting Services
    Microsoft SQL Server
    Apache NiFi
    SQL
    Machine Learning
    Apache Spark
  • $100 hourly
    As a highly skilled and accomplished freelance Data Scientist and Data Engineer, I possess a Master's degree in Data Science with a specialization in Artificial Intelligence. My expertise lies in converting intricate data into actionable insights that drive impactful decision-making. With proficiency in Python, PySpark, Google Cloud, Azure, and more, I am well-versed in leveraging these tools to craft and deploy scalable machine learning models while establishing robust data infrastructures. My impressive track record speaks for itself. I have successfully scaled machine learning infrastructure to cater to 100,000+ customers, implementing over 100 parallel cost prediction models. Furthermore, I have excelled in developing high-capacity solutions for data inferencing, resulting in substantial cost savings through infrastructure optimization and cloud computing efficiency. Navigating complex backend migrations seamlessly, I have significantly enhanced data management and system efficiency. By collaborating closely with Data Engineers and DevOps professionals, I have deployed production models and generated pivotal insights, fostering a collaborative environment where synergies thrive. Combining a strong academic foundation in Artificial Intelligence with extensive practical experience, I play a pivotal role in supporting decision-making processes and contributing to clients' strategic objectives. With a meticulous eye for detail and an unwavering commitment to excellence, I take pride in delivering error-free work. Recognizing the critical role of accurate and high-quality data in driving informed decisions, I ensure each project receives my utmost dedication and expertise. As a freelance professional, I am driven by a passion to provide exceptional results every time. With an unwavering focus on excellence, I bring a wealth of skills and experiences to the table, elevating each project to new heights.
    Apache Hadoop
    Cloud Computing
    Git
    Google Cloud Platform
    DevOps
    PySpark
    Data Science
    Python
    Databricks Platform
    Azure Machine Learning
    Deep Learning
    Machine Learning
  • $50 hourly
    An experienced software developer of 13+ years who has performed over 10 ETL integrations and earned twelve certifications (including a 97% on the Stanford machine learning certification). It would be my pleasure to assist you with any machine learning or data science projects. I have led and managed many projects, and my focus on communication and the client's objectives is a hallmark of my work.
    Data Engineering
    ETL Pipeline
    Apache Hadoop
    Linux
    Amazon Web Services
    Data Structures
    Neural Network
    SQL
    Machine Learning
  • $50 hourly
    Top skills: Java. As a Full Stack Web Developer with 15+ years of experience, I am passionate about creating beautiful, efficient, and functional websites and mobile applications. My expertise lies in developing web and mobile applications using the latest technologies, frameworks, and programming languages. I am also adept at designing and implementing custom software solutions that meet the unique needs of my clients. - FrontEnd: React, Vue, Angular, JavaScript, HTML5, CSS, Bootstrap, WordPress - Backend: Node, Express, Laravel, CodeIgniter, Python + Django, Laravel 9, Socket.io, Next.js, Nest.js, PHP, CakePHP, ***Java + Spring*** - Database: MySQL, MariaDB, MongoDB, PostgreSQL, Redis - Mobile: React Native, Swift, Xcode, Java, Kotlin, and Objective-C - CMS: WordPress, Magento, Shopify, Drupal, etc. - APIs: Stripe, PayPal, Google Contact/Calendar/Mail/Drive API, Microsoft Outlook/OneDrive, Twilio API, Facebook API, Twitter API, Sentry, Mailgun. - Other Skills: GitHub, Bitbucket, Docker, Web Hosting.
    React Native
    Apache Hadoop
    Spring Cloud
    Unity
    MERN Stack
    Vue.js
    Java
    MySQL
    PHP
    JavaScript
    C++
    C#
    Spring Boot
    PostgreSQL
    Node.js
  • $60 hourly
    ⭐⭐⭐⭐⭐ Data is the new gold, and I am here to help you mine its true value! I am a Senior Principal Engineer at a leading IT company with over 24 years of IT professional experience, including research in data analytics and cloud computing, and a Google Cloud Professional Data Engineer. My passion lies in transforming data into substantial business value. With a dual qualification of a master's degree in computer science and an MBA, I skillfully combine technical prowess with strategic business insights. ⭐Data Engineering⭐ ✅ Web Scraping, ETL, SQL, Pandas, Spark, Hadoop, Databricks, Snowflake ⭐Machine Learning⭐ ✅ Natural language processing, Classification, Sentiment Analysis ⭐Cloud Engineering⭐ ✅ AWS, GCP, OpenStack, DevOps, OpenShift, Spark, Hadoop, Ceph, Storage, Linux ⭐Certificates⭐ ✅ Google Cloud Professional Data Engineer ✅ AWS Certified Solutions Architect – Professional ✅ Cloudera Certified Administrator for Apache Hadoop (CCAH) ✅ Certificate of Expertise in Platform-as-a-Service (RedHat OpenShift) ✅ Certified System Administrator in Red Hat OpenStack ✅ Information Storage and Management (EMC) ⭐Skills⭐ ✅Massive data scraping experience on social media & websites ✅Proficient in programming languages such as Python, SQL, and Scala ✅Expertise in big data technologies (e.g., Hadoop, Spark, Kafka) ✅Experience with data modeling, warehousing, and ETL processes ✅Familiarity with cloud services (AWS, Azure, Google Cloud) for data processing and storage ✅Strong understanding of database management systems (relational and NoSQL) ✅Knowledge of data pipeline and workflow management tools (e.g., Airflow) ✅Ability to implement data security and compliance measures ✅Excellent problem-solving skills and attention to detail ⭐Services Offered⭐ ✅Data Pipeline Development: Design and implement robust data pipelines to automate the extraction, transformation, and loading of data ✅Data Architecture Design: Build scalable and efficient data architectures tailored to your specific business needs ✅Cloud Data Solutions: Leverage cloud platforms for enhanced data storage, processing, and analytics capabilities ✅Data Warehousing: Develop and manage data warehouses to consolidate various data sources for advanced analysis and reporting ✅Big Data Processing: Utilize big data technologies to process and analyze large datasets for actionable insights ✅Data Quality Management: Implement strategies to ensure data accuracy, consistency, and reliability ⭐Snowflake⭐ ✅In-depth knowledge of Snowflake's architecture, including virtual warehouses, storage, and compute optimization ✅Creating efficient data models and schemas optimized for performance in Snowflake ✅Building and managing ETL pipelines to ingest, transform, and load data into Snowflake from various sources ✅Implementing Snowflake's security features to ensure data is protected and compliant with regulatory standards ✅Ability to troubleshoot performance issues and optimize query performance in Snowflake ⭐Ceph Storage⭐ ✅In-depth knowledge of performance tuning and capacity planning for Ceph clusters ✅Skilled in integrating Ceph with various cloud platforms and container orchestration systems (e.g., Kubernetes) ✅Strong understanding of data replication, recovery processes, and disaster recovery strategies in Ceph ✅Proficient in automating Ceph deployments and operations using tools like Ansible and Puppet.
✅Knowledge of Ceph's architecture, including RADOS, RBD, CephFS, and RGW ✅Troubleshooting skills for resolving complex issues within Ceph environments. One of my standout achievements was the innovative use of an open dataset to predict market trends; the project was recognized with the 🏆Best Big Data Paper award🏆 🏆Why Hire Me?🏆 ✅With a proven track record of successful data engineering and machine learning projects and cloud engineering expertise, I bring a strategic approach to data management and analytics. My expertise lies not only in technical execution but also in understanding and aligning with business objectives to drive value through data. I'm dedicated to continuous learning and staying updated with the latest trends and technologies in the field. Let's collaborate to unlock the full potential of your data and transform your business operations. 🏆Recommendations from clients and colleagues🏆 "Nelson is very approachable, always willing to help, likes to explore and learn new things. Software development, debugging, solve problems, big data analytics in cloud, machine learning, research and development are some of his forte. I highly recommend Nelson for any team/organization looking for knowledgeable and highly motivated team members." "If you want something done, ask Nelson. He gets things done. Nelson and his team use open source and custom code on AWS. Nelson creatively solved problems." "Dongjun is very exceptional in his AWS skills. He is super patient as well."
    EMC Symmetrix
    Apache Hadoop
    Software-Defined Storage
    Storage Array
    OpenStack
    OpenShift
    Ceph
    Snowflake
    Google Cloud Platform
    Amazon Web Services
    Data Engineering
    Apache Spark MLlib
    Databricks Platform
    Natural Language Processing
    Machine Learning
  • $55 hourly
    I’m Ernie Maldonado, and I’m a seasoned data scientist with vast experience in data analytics, data engineering, and machine learning. I have implemented data-driven analytics solutions to monitor and increase revenue as well as reduce costs. From the technical point of view, these goals can be achieved by implementing a cost-effective data platform, along with developing useful metrics, increasing automation, and finally creating actionable tasks for people. From the business side, these goals can be achieved by developing strong relationships with internal and external clients, always striving to meet their needs and delivering high-quality products. Finally, from the talent point of view, it is essential to have a strong and cohesive team that works well together pursuing shared goals. We share the same goals of increased revenue and cost reduction and believe that we can achieve those through technical excellence, practical experience, and client satisfaction. These shared goals and values assure me that we will work well together and build a long-lasting and successful partnership.
    Database
    Big Data
    PySpark
    Apache Hadoop
    R
    Econometrics
    Economics
    SQL
    Python
    SAS
    Information Technology
    Machine Learning
    Engineering & Architecture
    Statistics
    Data Engineering
  • $110 hourly
    Distributed Computing: Apache Spark, Flink, Beam, Hadoop, Dask
    Cloud Computing: GCP (BigQuery, DataProc, GFS, Dataflow, Pub/Sub), AWS EMR/EC2
    Containerization Tools: Docker, Kubernetes
    Databases: Neo4j, MongoDB, PostgreSQL
    Languages: Java, Python, C/C++
    MapReduce
    Apache Kafka
    Cloud Computing
    Apache Hadoop
    White Paper Writing
    Academic Writing
    Google Cloud Platform
    Dask
    Apache Spark
    Research Paper Writing
    Apache Flink
    Kubernetes
    Python
    Java
  • $110 hourly
    I'm an expert data scientist specializing in fraud, customer segmentation, and creating software for predictive modeling. With 15 years of experience in health care, digital advertising, telecommunications, and engineering, I've developed a unique ability to solve the biggest problems in business with innovative ideas. My education includes an MBA from Cornell University, an MS from Northwestern University, and a BS in Mathematics from the University of California.
    Snowflake
    Vue.js
    JavaScript
    UI/UX Prototyping
    Full-Stack Development
    Customer Retention
    Market Segmentation Research
    Apache Hadoop
    Automation
    Deep Neural Network
    SQL
    Python
    Chatbot
    Computer Vision
    Algorithm Development
  • $25 hourly
    ● 14 years of experience in testing methodologies in the Insurance, Banking, Toll Road, and Learning Management domains. ● Experience in Web-Based & Standalone Application Testing, GUI Testing, Black Box Testing, Smoke Testing, Functional Testing, End-to-End Testing, Backend Testing, Cross-Browser Testing, Regression Testing, Mobile Application Testing, Hadoop - Hive and Ambari, API Testing, and SOA - web services using SOAP UI and XML validation. ● Expertise in designing test strategies, test plans, test cases, test reports, defect reports, and quality assurance documents. ● Demonstrated ability to meet project target dates; initiative, willingness, and flexibility to perform diverse tasks and manage more than one concurrent project; ability to rapidly assess a technical or business situation and recommend solutions or opinions. ● Played a Business Analyst role by analyzing business requirements, identifying the gaps, and making sure all gaps are closed in accordance with the original requirements. ● Conducted daily Scrum stand-ups, Sprint Review meetings, and Retrospective meetings. ● Proficient at grasping new technical concepts quickly and utilizing them productively; self-motivated, creative, and a strong communicator. ● Ability to work independently, interact with end users, and analyze existing systems. ● Experienced in leading teams comprising testers with different profiles. ● Ensuring the quality of all project stages is in accordance with the current project plan, and that all assigned responsibilities, milestones, and related sign-offs are achieved on a timely basis. ● Reviewing the SRS (Requirement Specifications) and Test Results with the Business Analyst to ensure completeness of requirements from a testing perspective. ● Participating in continuous improvement efforts within the QA organization; researching and evaluating new testing technologies and tools. ● Experience in automation script creation using the Selenium and QTP automation tools. ● Work experience in Web Services testing using SoapUI. ● Knowledge of the test management tools HP Quality Center (QC), JIRA, and TestRail. ● Played a Delivery Manager role – timesheets, project monitoring audits, responsibility for testing deliverables, resource management, project initiative reviews, risk management, and team-building activities.
    RESTful Architecture
    Agile Project Management
    Apache Hadoop
    Big Data
    ASP.NET Web API
    DevOps
    HTML5
    AngularJS
    Google Chrome Extension
    HTML
  • $100 hourly
    At UC Berkeley, I helped submit several Computer Vision and Reinforcement Learning papers. Here is a short list for context: - GANs for Model-Based Reinforcement Learning - Frame Rate Upscaling with Convolutional Networks - Neural Multi-Style Transfer At Amazon, I built a pipeline framework to store and serve sales data for the millions of third-party merchants on Amazon.com. More recently, I have taken on part-time consulting. These are some of the clients and projects I have worked on in the past: - GitHub on Improving Code Classification with SVMs - SAP on Applying HANA Vora to Load Forecasting - Intuit on Quantifying Brand Exposure From Unstructured Text Compared with these previous projects, I am now looking to take on more projects, each with a smaller time commitment.
    ETL
    Apache Hadoop
    Machine Learning
    Deep Learning
    TensorFlow
    Keras
    Python
    Java
    Computer Vision
    Apache Spark
  • $75 hourly
    I am an experienced Data Scientist working at FedEx Ground, ready to identify, build, and support data science solutions for your business problems. Here is a list of various analytics and machine learning use cases I've built or worked on to generate revenue for my organization: - Built an AI-based package fraud alerting system, resulting in $300M+ in estimated revenue savings from potential losses of key B2B FedEx customers - Worked with multiple FedEx internal teams to improve predictions from tracking numbers for FedEx packages via machine learning - Built a computer vision model to capture additional revenue from non-conveyable packages ($3M additional yearly revenue generated) ANALYTICS OFFERINGS: - Supervised/Unsupervised ML Modeling - Data Engineering - A/B Tests - Deep Learning Methodologies - Time-Series Forecasting - Ad-Hoc Analysis - Data Cleansing - Data Visualization SKILLS: Data Science; Supervised & Unsupervised Machine Learning; Data Engineering; Statistical Modeling; Optimization; Data Mining; Agile Development; A/B Testing TECHNOLOGY: Python; SQL; R; Google Cloud Platform; Microsoft Azure; Amazon Web Services; Apache Spark; Airflow; PyTorch; TensorFlow; NumPy; SciPy; Scikit-Learn; Pandas; Git
    Google Cloud Platform
    Forecasting
    Microsoft Azure
    Data Analysis
    Apache Hadoop
    SQL
    Looker Studio
    Tableau
    Python
    R
  • $50 hourly
    Detail-oriented Data Engineer focused on solving complex problems. Seeking to offer superb analytical and computer skills to improve quality, cost, and time metrics for data engineering.
    RESTful API
    Django
    Apache Hadoop
    SQL
    Apache Airflow
    Apache Kafka
  • $25 hourly
    Fractional CFO | Experienced QuickBooks Pro Advisor | Accounting Software Expert | Data Migration As a QuickBooks Pro Advisor with extensive experience managing over 200 clients, I specialize in transforming accounting processes and optimizing financial systems. My expertise spans a range of accounting software, including QuickBooks, Xero, Zoho, and Wave. I am proficient in data migration across different platforms, ensuring seamless transitions and accurate financial management. Throughout my career, I have successfully collaborated with diverse clients, providing tailored solutions that enhance financial performance and streamline accounting operations. My deep understanding of various accounting software allows me to offer customized advice, troubleshoot issues, and implement best practices for efficient financial management. As a freelancer, I am dedicated to delivering high-quality results within tight deadlines. My attention to detail, analytical skills, and problem-solving capabilities enable me to manage complex data migrations and software integrations with precision. Whether you need assistance with QuickBooks setup, software migration, or comprehensive financial management, I am equipped to handle your accounting needs with expertise and efficiency. Let's connect to discuss how I can support your business with effective accounting solutions. I am eager to provide innovative, reliable services tailored to your specific requirements. Please contact me to schedule a call to discuss your project needs, timeline, and budget. Looking forward to collaborating with you.
    Data Analysis
    Big Data
    Artificial Intelligence
    Apache Hadoop
    Data Mining
    Hive
    Data Science
    Machine Learning
    SQL
    Tableau
    Python
    NLTK
  • $150 hourly
    I am a motivated leader with extensive experience in all levels of technology, focused on data services and advanced analytics. I have the expertise to help with any level of problem you might be facing related to data technologies, from high-level strategies and guidance to detailed analysis. I can bring these complex topics to a level that is understandable and actionable. I have more than 25 years of proven experience in data management and analytics, working with large and small organizations and projects, developing robust solutions to a variety of business problems. My primary focus is: • Providing assistance with data management, analysis, and visualizations, from strategic planning to development, implementation, and support • Consulting on and developing advanced statistical and machine learning models of any type, focusing on multivariate stochastic time series analysis
    Microsoft Azure SQL Database
    Data Visualization
    Computing & Networking
    Apache Hadoop
    Statistical Analysis
    Data Ingestion
    Data Analysis
    Information Security
    Amazon Web Services
    SQL
    Data Science
    Machine Learning
    R
    Cloudera
    Python
    SAS
  • $50 hourly
    Hands-on design and development experience on the Hadoop ecosystem (Hadoop, HBase, Pig, Hive, and MapReduce), including one or more of the following big data-related technologies: Scala, Spark, Sqoop, Flume, Kafka, and Python, with strong ETL and PostgreSQL experience as well. Strong background in all aspects of software engineering, with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture. * Experience in the Cloudera stack, Hortonworks, and Amazon EMR * Strong experience in using Excel, SQL, SAS, Python, and R to dump and analyze data based on business needs. * Strong experience in Data Analysis, Data Migration, Data Cleansing, Transformation, Integration, Data Import, and Data Export through the use of multiple ETL tools such as Ab Initio and Informatica PowerCenter * Strong understanding and hands-on programming/scripting experience with UNIX shell * An excellent team player & technically strong person
    Amazon S3
    Data Warehousing & ETL Software
    Big Data
    Amazon Web Services
    Hive
    Data Science
    ETL
    Data Lake
    Data Cleaning
    Apache Hive
    Apache Hadoop
    Apache Spark
    Apache Kafka
    Data Migration
    ETL Pipeline
  • $106 hourly
    I'm an experienced Data Engineer currently transitioning from full-time employment to contract work. I previously worked for the Israeli Army and for Waycare, a startup that was acquired by Rekor Systems.
    Amazon Athena
    Amazon EC2
    Apache Hadoop
    Big Data
    Apache Kafka
    Git
    Apache Spark
    C#
    Python
    Kotlin
    Java
    Kubernetes
  • $100 hourly
    Professional summary: Big data and analytics enthusiast, permanent learner, with about 18 years of experience in data analysis and research in experimental particle physics and 10 years of data science experience in industrial settings (advertising, automotive, supply chain, energy & utility, and consulting). Co-author of many software packages in experimental particle physics and industry. Leader of a few algorithmic and physics research groups and data science groups in industry. Supervised many undergraduate/PhD students, data scientists, and interns in various projects. Delivery of end-to-end ML services in business companies using on-premise and cloud technologies. Primary author of more than 30 papers published in major peer-reviewed physics journals with application of machine learning algorithms in physics experiments and industrial environments: inspirehep.net/author/profile/D.V.Bandurin.1 Business website: solveum.ai A few projects have already been delivered or are in progress on Upwork. Skills: – Programming in Python, R, C++, Scala, Fortran, MATLAB – SQL (incl. Postgres, Redshift, Snowflake), NoSQL (Mongo, Redis, BigQuery, Cassandra, Neo4j, ElasticSearch); – Big data processing using Hadoop, Databricks, Spark, Hive, Impala; – Machine learning using scikit-learn, MLlib, MLflow, TensorFlow, Keras, PyTorch; – Distributed deep learning using Dask, Ray, Horovod; – Reinforcement learning using RLlib, Ray, COACH, OpenAI Gym; – Natural language processing [incl. Gensim/NLTK/spaCy; GloVe/Word2Vec/FastText/BERT, etc.]; – Computer vision [incl. OpenCV, OCR]; – Azure Cloud (Databricks, Delta Lake, Azure ML, Synapse Analytics, Azure IoT Hub, IoT Edge, Functions); – AWS Cloud (RDS, Amazon S3, EC2 & ECR, Elastic Beanstalk, Lambda, SageMaker, etc.); – Google Cloud (Vertex AI, BigQuery, Data Studio, Kubeflow, AutoML); – IBM Watson (audio and text modeling, transcription services); – Data visualization (Tableau, Power BI, QuickSight, Python & R libraries, e.g. Plotly, Dash, Shiny); Recommendations: see dmitrybandurin/details/recommendations/ at LinkedIn.
    Particle Physics
    Microsoft Azure
    Apache Hadoop
    Cloud Computing
    Analytics
    Apache Hive
    Amazon Web Services
    Big Data
    Artificial Intelligence
    Cloudera
    Machine Learning Model
    Apache Spark
    C++
    Apache Spark MLlib
    Computer Vision
  • $80 hourly
    Solution-focused software engineer with previous experience in site reliability engineering and enterprise technical support. Excellent communicator and collaborator. Experienced working with customer and development teams to build a better product and platform.
    Apache Hadoop
    YARN
    Big Data
    Computing & Networking
    Data Lake
    Bash
    Network Engineering
    Database Management System
    Apache Kafka
    Apache Spark
    Computer Network
    Data Structures
    C++
    Ansible
    Amazon Web Services
  • $60 hourly
    I have expert-level experience working at companies like Meta, Microsoft, Cigna, Wells Fargo, and BofA in the US. My expertise spans the data solutions domain, from devising ETL pipelines to data visualization and reporting, and finally incorporating AI on top for predictive models. I am also a certified AWS Solutions Architect. A few years ago I founded TechScope, a consulting firm that is backed by and officially partnered with AWS, Salesforce, GCP, and Databricks. We have helped numerous organizations fast-track their needs in the tech world. Any project with me means a project backed by our entire team, ready to assist; don't worry, the bid amount will always stay the same. I'm looking forward to assisting you in all your data, cloud, and Salesforce needs.
    Salesforce Wave Analytics
    Salesforce App Development
    Salesforce Sales Cloud
    Salesforce Lightning
    Salesforce Einstein
    Salesforce CRM
    Apache Kafka
    Apache Hadoop
    Apache Spark
    AWS Glue
    SQL
    Python
    Google Cloud Platform
    Salesforce
    Microsoft Azure

How hiring on Upwork works

1. Post a job

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.