Hire the best Hadoop Developers & Programmers in Illinois

Check out Hadoop Developers & Programmers in Illinois with the skills you need for your next job.
  • $100 hourly
    Professional summary: Big data and analytics enthusiast and permanent learner, with about 18 years of experience in data analysis and research in experimental particle physics and 10 years of data science experience in industrial settings (advertising, automotive, supply chain, energy & utilities, and consulting). Co-author of many software packages in experimental particle physics and industry. Leader of several algorithmic and physics research groups and of data science groups in industry. Supervised many undergraduate and PhD students, data scientists, and interns on various projects. Delivered end-to-end ML services for businesses using on-premise and cloud technologies. Primary author of more than 30 papers published in major peer-reviewed physics journals, applying machine learning algorithms in physics experiments and industrial environments: inspirehep.net/author/profile/D.V.Bandurin.1
    Business website: solveum.ai
    A few projects have either been delivered or are in progress on Upwork.
    Skills:
    – Programming in Python, R, C++, Scala, Fortran, MATLAB
    – SQL (incl. Postgres, Redshift, Snowflake), NoSQL (Mongo, Redis, BigQuery, Cassandra, Neo4j, ElasticSearch)
    – Big data processing using Hadoop, Databricks, Spark, Hive, Impala
    – Machine learning using scikit-learn, MLlib, MLflow, TensorFlow, Keras, PyTorch
    – Distributed deep learning using Dask, Ray, Horovod
    – Reinforcement learning using RLlib, Ray, COACH, OpenAI Gym
    – Natural language processing (incl. Gensim/NLTK/spaCy; GloVe/Word2Vec/FastText/BERT, etc.)
    – Computer vision (incl. OpenCV, OCR)
    – Azure Cloud (Databricks, Delta Lake, Azure ML, Synapse Analytics, Azure IoT Hub, IoT Edge, Functions)
    – AWS Cloud (RDS, Amazon S3, EC2 & ECR, Elastic Beanstalk, Lambda, SageMaker, etc.)
    – Google Cloud (Vertex AI, BigQuery, Data Studio, Kubeflow, AutoML)
    – IBM Watson (audio and text modeling, transcription services)
    – Data visualization (Tableau, Power BI, QuickSight, Python & R libraries, e.g. Plotly, Dash, Shiny)
    Recommendations: see dmitrybandurin/details/recommendations/ on LinkedIn.
    Particle Physics
    Microsoft Azure
    Apache Hadoop
    Cloud Computing
    Analytics
    Apache Hive
    Amazon Web Services
    Big Data
    Artificial Intelligence
    Cloudera
    Machine Learning Model
    Apache Spark
    C++
    Apache Spark MLlib
    Computer Vision
  • $110 hourly
    Distributed Computing: Apache Spark, Flink, Beam, Hadoop, Dask
    Cloud Computing: GCP (BigQuery, Dataproc, GFS, Dataflow, Pub/Sub), AWS EMR/EC2
    Containerization Tools: Docker, Kubernetes
    Databases: Neo4j, MongoDB, PostgreSQL
    Languages: Java, Python, C/C++
    MapReduce
    Apache Kafka
    Cloud Computing
    Apache Hadoop
    White Paper Writing
    Academic Writing
    Google Cloud Platform
    Dask
    Apache Spark
    Research Paper Writing
    Apache Flink
    Kubernetes
    Python
    Java
  • $150 hourly
    I am a motivated leader with extensive experience across all levels of technology, focused on data services and advanced analytics. I have the expertise to help with any level of problem you might be facing related to data technologies, from high-level strategy and guidance to detailed analysis, and I can bring these complex topics to a level that is understandable and actionable. I have more than 25 years of proven experience in data management and analytics, working with large and small organizations and projects and developing robust solutions to a variety of business problems. My primary focus is:
    • Providing assistance with data management, analysis, and visualization, from strategic planning to development, implementation, and support
    • Consulting on and developing advanced statistical and machine learning models of any type, focusing on multivariate stochastic time series analysis
    Microsoft Azure SQL Database
    Data Visualization
    Computing & Networking
    Apache Hadoop
    Statistical Analysis
    Data Ingestion
    Data Analysis
    Information Security
    Amazon Web Services
    SQL
    Data Science
    Machine Learning
    R
    Cloudera
    Python
    SAS
  • $50 hourly
    I am an engineer with over 15 years of IT experience, including 7 years at a top-20 global bank as an AI/ML Platform Engineer in the Big Data group. Curious, resourceful, and sometimes a little impish, I relish figuring out new technologies and solving problems. My primary interests are machine learning and distributed computing for scalable, highly available systems, but I am very open to other problems as well.
    Bash
    Apache Hadoop
    SQL
    R
    Python
    Docker
    Cloudera
    Red Hat Enterprise Linux
  • $30 hourly
    At DigiTransTech we value technology as an integral and potentially differentiating component of a business. We help clients transform their organizations, enabling holistic digital strategies to catch up with consumer demands, maximizing value in turnaround situations, and ultimately developing technology operating models that make organizations more agile. Our technology projects start with business strategy: we believe that a company’s corporate strategy both guides and is influenced by its technological capabilities. Our industry experts take a unique approach to engagements by working with business leaders to first understand corporate goals and then determine which technological capabilities, systems, and support they require to succeed.
    Practice areas: CRM On Demand, Marketing Cloud, Sales Cloud, Service Cloud, Enterprise Information Management, Enterprise Data Quality, Big Data, Cloud Data Integration, Data Quality, Data Archive, Data Masking, Test Data Management, Business Intelligence, ERP (Enterprise Resource Planning), Cloud ERP
    RESTful API
    Salesforce Marketing Cloud
    Application Integration
    Apache Hadoop
    SAP PI
    Talend Data Integration
    Azure App Service
    SQL Server Integration Services
    MarkLogic
    JavaScript
  • $50 hourly
    With 5+ years of IT experience, I am a highly skilled Data Engineer proficient in Big Data technologies, cloud ecosystems, and machine learning applications. I specialize in data engineering solutions across Cloudera, Hortonworks, AWS, and GCP platforms, leveraging tools like Spark, Hadoop, Kafka, and Hive. My expertise extends to ETL pipeline design, data modeling, and the development of enterprise-level applications using Hadoop ecosystem components. Passionate about deriving actionable insights from complex datasets, I ensure scalable, efficient, and secure data-driven solutions.
    💡 Technical Proficiencies:
    🚀 Hadoop | Spark | MapReduce | Kafka | Hive | HBase | Impala | Sqoop
    🛠️ AWS (Redshift, Data Pipeline, S3) | Azure Data Factory | GCP (BigQuery, Dataflow)
    📊 Python | Scala | Java | SQL | R | T-SQL | Machine Learning Algorithms
    📈 Tableau | Power BI | SSIS | SSRS | Data Modeling (Star & Snowflake Schemas)
    Linux
    WordPress
    Python
    Apache Hadoop
    PySpark
    AWS Lambda
    DevOps
    ETL Pipeline
    Data Extraction
    ETL

How hiring on Upwork works

1. Post a job

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.
