Data Streaming and Real Time Data Processing Training Course
Course Overview
This programme offers a practical and structured entry point into constructing real time data streaming systems. It explores core concepts, architectural patterns, and the industry-standard tools required to process continuous data at scale. Participants will acquire the skills to design, implement, and optimise streaming pipelines using contemporary frameworks. The curriculum advances from foundational principles to practical applications, empowering learners to confidently develop production-grade real time solutions.
Training Format
• Instructor-led sessions with guided explanations
• Concept walkthroughs featuring real world examples
• Hands on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A
Course Objectives
• Grasp the concepts of real time data streaming and system architecture
• Distinguish between batch and streaming data processing models
• Design scalable and fault tolerant streaming pipelines
• Work with distributed streaming tools and frameworks
• Apply event time processing, windowing, and stateful operations
• Build and optimise real time data solutions for specific business use cases
This course is available as onsite live training in Greece or online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Fundamentals of batch versus real time processing
• Basics of event driven architecture
• Common industry use cases
• Overview of the streaming ecosystem
Day 2
• Streaming architecture design patterns
• Fundamentals of distributed messaging systems
• Producers and consumers
• Topics, partitions, and data flow
• Data ingestion strategies
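To make the producer/consumer and partitioning ideas above concrete, here is a minimal, purely illustrative in-memory sketch of a topic split into partitions. All class and method names are invented for this example; real messaging systems such as Apache Kafka distribute partitions across brokers and persist them to disk.

```python
# Purely illustrative, in-memory model of a topic with partitions.
# Names are invented for this sketch; real brokers (e.g. Apache Kafka)
# spread partitions across servers and persist them to disk.
class Topic:
    def __init__(self, name, num_partitions=3):
        self.name = name
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Messages with the same key land in the same partition,
        # which preserves per-key ordering.
        idx = hash(key) % len(self.partitions)
        self.partitions[idx].append((key, value))
        return idx

    def consume(self, partition_idx, offset=0):
        # A consumer reads one partition sequentially from an offset.
        return self.partitions[partition_idx][offset:]

topic = Topic("orders")
p = topic.produce("customer-42", {"amount": 10})
topic.produce("customer-42", {"amount": 25})
# Both messages share a key, so they sit in the same partition, in order.
assert topic.consume(p) == [("customer-42", {"amount": 10}),
                            ("customer-42", {"amount": 25})]
```

The key-to-partition mapping is what lets a system scale consumers horizontally while still guaranteeing ordering per key.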
Day 3
• Stream processing concepts and frameworks
• Event time versus processing time
• Windowing techniques and use cases
• Stateful stream processing
• Fault tolerance and checkpointing basics
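The event-time and windowing topics above can be sketched in a few lines: the example below assigns events to 60-second tumbling windows keyed by the event's own timestamp, so an out-of-order arrival still lands in the correct window. This is an illustrative sketch only; function names and the window size are assumptions, and production frameworks add watermarks and state management on top of this idea.

```python
from collections import defaultdict

WINDOW_SIZE = 60  # tumbling windows of 60 seconds (chosen for illustration)

def window_start(event_time: int) -> int:
    # Align an event's own timestamp (event time) to its window boundary.
    return event_time - (event_time % WINDOW_SIZE)

def tumbling_window_counts(events):
    # events: iterable of (event_time_seconds, payload); arrival order
    # (processing time) does not matter because we group by event time.
    counts = defaultdict(int)
    for event_time, _payload in events:
        counts[window_start(event_time)] += 1
    return dict(counts)

# The event at t=59 arrives after t=61 but is still counted in window [0, 60).
events = [(5, "a"), (61, "b"), (59, "c"), (130, "d")]
assert tumbling_window_counts(events) == {0: 2, 60: 1, 120: 1}
```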
Day 4
• Data transformation in streaming pipelines
• ETL and ELT in real time systems
• Schema management and evolution
• Stream joins and enrichment
• Introduction to cloud based streaming services
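Stream enrichment, listed above, typically means joining each incoming event against a reference dataset. The sketch below shows the pattern with invented data and names; in practice the reference table would be a cached or broadcast dataset inside the streaming job.

```python
# Reference data, e.g. a cached user table kept alongside the stream job
# (hypothetical values for illustration).
users = {"u1": {"country": "GR"}, "u2": {"country": "DE"}}

def enrich(stream):
    # Join each event with its reference record; events with no match
    # pass through unchanged rather than being dropped.
    for event in stream:
        yield {**event, **users.get(event["user_id"], {})}

clicks = [{"user_id": "u1", "page": "/home"},
          {"user_id": "u3", "page": "/about"}]  # u3 has no reference entry
enriched = list(enrich(clicks))
assert enriched[0]["country"] == "GR"
assert "country" not in enriched[1]
```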
Day 5
• Monitoring and observability in streaming systems
• Security and access control basics
• Performance tuning and optimisation
• End to end pipeline design review
• Real world use cases such as fraud detection and IoT processing
Open Training Courses require 5+ participants.
Testimonials (1)
Hands on exercises. Class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Audience:
This course is designed for IT professionals seeking effective solutions for storing and processing large datasets within a distributed system environment.
Objective:
To acquire in-depth expertise in administering Hadoop clusters.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led, live training in Greece (online or onsite) is designed for intermediate-level data scientists and engineers seeking to apply Google Colab and Apache Spark to big data processing and analytics.
By the conclusion of this training, participants will be able to:
- Establish a big data environment using Google Colab and Spark.
- Process and analyse large datasets efficiently with Apache Spark.
- Visualise big data in a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
Big Data Analytics in Health
21 Hours
Big data analytics is the process of examining large volumes of diverse datasets to uncover correlations, hidden patterns, and valuable insights.
The healthcare sector generates vast amounts of complex, heterogeneous medical and clinical data. Leveraging big data analytics on health data holds significant potential for deriving insights that improve healthcare delivery. However, the sheer scale of these datasets presents substantial challenges for analysis and practical application within clinical environments.
During this instructor-led, live training (delivered remotely), participants will learn how to conduct big data analytics in healthcare by completing a series of hands-on, live-lab exercises.
Upon completion of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to manage medical data
- Study big data systems and algorithms in the context of healthcare applications
Audience
- Developers
- Data Scientists
Course Format
- A blend of lectures, discussions, exercises, and intensive hands-on practice.
Note
- To request customized training for this course, please contact us to arrange it.
Hadoop For Administrators
21 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. This three-day (or optionally four-day) program enables participants to understand the business advantages and real-world applications of Hadoop and its broader ecosystem. The curriculum covers planning for cluster deployment and scalability, as well as installing, maintaining, monitoring, troubleshooting, and optimizing Hadoop environments. Attendees will gain practical experience with bulk data loading, explore various Hadoop distributions, and manage ecosystem tools. The course concludes with a focus on securing clusters using Kerberos.
“The course materials were exceptionally well-prepared and thorough. The laboratory sessions were highly beneficial and expertly organized.”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop system administrators
Format
A blend of theoretical lectures and practical hands-on labs, maintaining an approximate ratio of 60% lectures to 40% lab work.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. This course provides developers with an introduction to the key components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop stands out as one of the most widely used frameworks for processing Big Data across server clusters. This course provides an in-depth exploration of data management within HDFS, alongside advanced techniques for Pig, Hive, and HBase. These advanced programming strategies are particularly valuable for experienced Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
Hadoop Administration on MapR
28 Hours
Target Audience:
This course is designed to demystify big data and Hadoop technology, demonstrating that it is accessible and easy to understand.
Hadoop and Spark for Administrators
35 Hours
This instructor-led live training in Greece (online or onsite) targets system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components of the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as the storage engine for on-premise Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
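As a flavour of the configuration involved in pointing Spark at S3-compatible storage, Spark can route file access through the Hadoop S3A connector via `spark-defaults.conf`. The property names below follow the Hadoop S3A connector documentation; the endpoint and credential values are placeholders, not working values.

```
# spark-defaults.conf — hypothetical values for illustration only
spark.hadoop.fs.s3a.endpoint     s3.eu-central-1.amazonaws.com
spark.hadoop.fs.s3a.access.key   YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key   YOUR_SECRET_KEY
spark.hadoop.fs.s3a.impl         org.apache.hadoop.fs.s3a.S3AFileSystem
```

The `spark.hadoop.` prefix passes each property through to the underlying Hadoop configuration, which is how Spark jobs reach S3A without editing Hadoop's own config files.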
HBase for Developers
21 Hours
This course provides an introduction to HBase, a NoSQL database built on top of Hadoop. It is designed for developers who intend to build applications using HBase, as well as administrators responsible for managing HBase clusters.
The curriculum guides developers through HBase's architecture, data modeling techniques, and application development processes. It also covers the integration of MapReduce with HBase and addresses key administration topics, including performance optimization. The course is highly practical, featuring numerous lab exercises.
Duration: 3 days
Audience: Developers & Administrators
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source platform designed for flow-based data integration and event processing. It facilitates automated, real-time data routing, transformation, and system mediation between diverse systems, featuring a web-based UI that offers fine-grained control.
This instructor-led training session, available either on-site or remotely, targets intermediate-level administrators and engineers who aim to deploy, manage, secure, and optimize NiFi dataflows within production environments.
Upon completing this training, participants will be equipped to:
- Install, configure, and maintain Apache NiFi clusters.
- Design and manage dataflows originating from various sources and destinations.
- Implement logic for flow automation, routing, and transformation.
- Optimize performance, monitor operational metrics, and troubleshoot issues.
Course Format
- Interactive lectures accompanied by discussions on real-world architectures.
- Hands-on labs focused on building, deploying, and managing data flows.
- Scenario-based exercises conducted in a live laboratory environment.
Course Customization Options
- For organizations seeking a tailored training experience for this course, please reach out to us to make arrangements.
Apache NiFi for Developers
7 Hours
In this instructor-led, live training in Greece, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This training offers a hands-on introduction to developing scalable data processing and Machine Learning workflows utilizing PySpark. Participants will discover how Apache Spark functions within contemporary Big Data ecosystems and learn to process large datasets efficiently by applying distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led live training in Greece, participants will learn how to leverage Python and Spark together to analyze big data while completing hands-on exercises.
Upon completion of this training, participants will be able to:
- Utilize Spark with Python to analyze Big Data.
- Complete exercises that simulate real-world scenarios.
- Apply various tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led live training in Greece (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio offers a comprehensive, data-centric platform that unifies big data capabilities, artificial intelligence, and governance into a single, cohesive solution. Its Rocket and Intelligence modules empower organizations to perform rapid data exploration, transformation, and advanced analytics within enterprise-grade environments.
This instructor-led live training, available online or on-site, is designed for intermediate-level data professionals seeking to harness the full potential of Stratio’s Rocket and Intelligence modules using PySpark. The curriculum emphasizes looping structures, user-defined functions, and sophisticated data logic to enhance workflow efficiency.
Upon completion of this training, participants will be able to:
- Navigate and effectively utilize the Stratio platform, specifically the Rocket and Intelligence modules.
- Apply PySpark techniques for data ingestion, transformation, and analysis within the Stratio ecosystem.
- Implement loops and conditional logic to manage data workflows and execute feature engineering tasks.
- Develop and manage User-Defined Functions (UDFs) to enable reusable data operations in PySpark.
Course Format
- Interactive lectures paired with group discussions.
- Extensive hands-on exercises and practice sessions.
- Real-world implementation through a live-lab environment.
Customization Options
- For tailored training solutions, please reach out to us to discuss your specific requirements.