MOC 20775 - Performing Data Engineering on Microsoft HDInsight (MOC20775)
Course Length: 5 days
Delivery Methods:
Available as private class only
Course Overview
This MOC20775 - Performing Data Engineering on Microsoft HDInsight training class teaches students to plan and implement big data workflows on HDInsight.
The primary audience for this course is data engineers, data architects, data scientists, and data developers who plan to implement big data engineering workflows on HDInsight.
Course Benefits
- Learn to deploy HDInsight clusters.
- Learn to authorize users to access resources.
- Learn to load data into HDInsight.
- Learn to troubleshoot HDInsight.
- Learn to implement batch solutions.
- Learn to design batch ETL solutions for big data with Spark.
- Learn to analyze data with Spark SQL.
- Learn to analyze Data with Hive and Phoenix.
- Learn to describe Stream Analytics.
- Learn to implement Spark streaming using the DStream API.
- Learn to develop big data real-time processing solutions with Apache Storm.
- Learn to build solutions that use Kafka and HBase.
Microsoft Certified Partner
Webucator is a Microsoft Certified Partner for Learning Solutions (CPLS). This class uses official Microsoft courseware and will be delivered by a Microsoft Certified Trainer (MCT).
Course Outline
- Getting Started with HDInsight
- What is Big Data?
- Introduction to Hadoop
- Working with the MapReduce function
- Introducing HDInsight
- Lab: Working with HDInsight
- Provision an HDInsight cluster and run MapReduce jobs
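The MapReduce model introduced in this module can be sketched in plain Python. This is a stand-in for the Hadoop Streaming jobs you run on the cluster, not HDInsight's actual API; the two functions mimic the mapper and reducer phases of a word count:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit (word, 1) pairs, as a Hadoop Streaming mapper would."""
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each key after the shuffle/sort step."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["big data on HDInsight", "big data with Hadoop"]
counts = reduce_phase(map_phase(lines))
print(counts["big"], counts["data"])  # 2 2
```

On a real cluster the shuffle/sort between the two phases is handled by the Hadoop framework; here the reducer simply consumes the mapper's output directly.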
- Deploying HDInsight Clusters
- Identifying HDInsight cluster types
- Managing HDInsight clusters by using the Azure portal
- Managing HDInsight clusters by using Azure PowerShell
- Lab: Managing HDInsight clusters with the Azure portal
- Create an HDInsight cluster that uses Data Lake Store storage
- Customize HDInsight by using script actions
- Delete an HDInsight cluster
- Authorizing Users to Access Resources
- Non-domain-joined clusters
- Configuring domain-joined HDInsight clusters
- Manage domain-joined HDInsight clusters
- Lab: Authorizing Users to Access Resources
- Prepare the Lab Environment
- Manage a non-domain-joined cluster
- Loading data into HDInsight
- Storing data for HDInsight processing
- Using data loading tools
- Maximizing value from stored data
- Lab: Loading Data into your Azure account
- Load data for use with HDInsight
- Troubleshooting HDInsight
- Analyze HDInsight logs
- YARN logs
- Heap dumps
- Operations management suite
- Lab: Troubleshooting HDInsight
- Analyze HDInsight logs
- Analyze YARN logs
- Monitor resources with Operations Management Suite
- Implementing Batch Solutions
- Apache Hive storage
- HDInsight data queries using Hive and Pig
- Operationalize HDInsight
- Lab: Implement Batch Solutions
- Deploy HDInsight cluster and data storage
- Use data transfers with HDInsight clusters
- Query HDInsight cluster data
- Design Batch ETL solutions for big data with Spark
- What is Spark?
- ETL with Spark
- Spark performance
- Lab: Design Batch ETL solutions for big data with Spark
- Create an HDInsight cluster with access to Data Lake Store
- Use HDInsight Spark cluster to analyze data in Data Lake Store
- Analyzing website logs using a custom library with Apache Spark cluster on HDInsight
- Managing resources for Apache Spark cluster on Azure HDInsight
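The extract-transform-load flow this module builds with Spark can be illustrated in pure Python. The log format and field names below are invented for illustration; on the cluster you would express the same steps over Spark DataFrames reading from Data Lake Store:

```python
# Minimal ETL sketch: extract raw log lines, transform them into
# structured records, then load (here: collect) the rows of interest.
raw_logs = [
    "2017-06-01 200 /index.html",
    "2017-06-01 404 /missing.html",
    "2017-06-02 200 /about.html",
]

def transform(line):
    """Parse one log line into a structured record."""
    date, status, path = line.split()
    return {"date": date, "status": int(status), "path": path}

records = [transform(line) for line in raw_logs]     # extract + transform
errors = [r for r in records if r["status"] >= 400]  # filter step
print(len(errors))  # 1
```

In Spark the list comprehensions become `map` and `filter` over a distributed dataset, so the same logic scales out across the cluster.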
- Analyze Data with Spark SQL
- Implementing iterative and interactive queries
- Perform exploratory data analysis
- Lab: Performing exploratory data analysis by using iterative and interactive queries
- Build a machine learning application
- Use Zeppelin for interactive data analysis
- View and manage Spark sessions by using Livy
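The iterative, interactive query style this module teaches doesn't require a cluster to try out; the same pattern can be sketched with Python's built-in sqlite3 (the table and column names are invented, and the in-memory table stands in for a Spark SQL temporary view):

```python
import sqlite3

# In-memory table standing in for a Spark SQL temp view.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (page TEXT, hits INTEGER)")
conn.executemany("INSERT INTO visits VALUES (?, ?)",
                 [("/home", 120), ("/docs", 45), ("/home", 30)])

# Iterative exploration: aggregate first, then refine the query.
rows = conn.execute(
    "SELECT page, SUM(hits) FROM visits GROUP BY page ORDER BY SUM(hits) DESC"
).fetchall()
print(rows)  # [('/home', 150), ('/docs', 45)]
```

In Spark SQL the same query would run via `spark.sql(...)` against a registered view, with Zeppelin or Livy providing the interactive session.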
- Analyze Data with Hive and Phoenix
- Implement interactive queries for big data with Interactive Hive
- Perform exploratory data analysis by using Hive
- Perform interactive processing by using Apache Phoenix
- Lab: Analyze data with Hive and Phoenix
- Implement interactive queries for big data with Interactive Hive
- Perform exploratory data analysis by using Hive
- Perform interactive processing by using Apache Phoenix
- Stream Analytics
- Stream Analytics
- Process streaming data from Stream Analytics
- Managing Stream Analytics jobs
- Lab: Implement Stream Analytics
- Process streaming data with Stream Analytics
- Managing Stream Analytics jobs
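Stream Analytics jobs typically aggregate events over time windows. A tumbling (fixed, non-overlapping) window count can be sketched in pure Python; the event shape and window size here are invented for illustration:

```python
from collections import Counter

# Each event is (timestamp_seconds, sensor_id); window size is 10 s.
events = [(1, "a"), (4, "b"), (9, "a"), (12, "a"), (19, "b"), (23, "a")]
WINDOW = 10

def tumbling_counts(events, window):
    """Count events per fixed, non-overlapping (tumbling) window."""
    counts = Counter()
    for ts, _sensor in events:
        counts[ts // window] += 1  # window index = floor(ts / window)
    return dict(counts)

print(tumbling_counts(events, WINDOW))  # {0: 3, 1: 2, 2: 1}
```

In a Stream Analytics query the equivalent is a GROUP BY over `TumblingWindow(second, 10)`; hopping and sliding windows relax the non-overlapping rule.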
- Implementing Streaming Solutions with Kafka and HBase
- Building and Deploying a Kafka Cluster
- Publishing, Consuming, and Processing data using the Kafka Cluster
- Using HBase to Store and Query Data
- Lab: Implementing Streaming Solutions with Kafka and HBase
- Create a virtual network and gateway
- Create a Storm cluster for Kafka
- Create a Kafka producer
- Create a streaming processor client topology
- Create a Power BI dashboard and streaming dataset
- Create an HBase cluster
- Create a streaming processor to write to HBase
- Develop big data real-time processing solutions with Apache Storm
- Persist long-term data
- Stream data with Storm
- Create Storm topologies
- Configure Apache Storm
- Lab: Developing big data real-time processing solutions with Apache Storm
- Stream data with Storm
- Create Storm Topologies
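A Storm topology wires spouts (sources of tuples) to bolts (processing steps). The data flow can be sketched with plain Python classes; the class names are illustrative and are not Storm's actual API:

```python
class SentenceSpout:
    """Spout: emits raw tuples into the topology."""
    def emit(self):
        yield from ["storm streams data", "storm scales out"]

class SplitBolt:
    """Bolt: splits each sentence tuple into word tuples."""
    def process(self, sentence):
        yield from sentence.split()

class CountBolt:
    """Bolt: maintains running word counts."""
    def __init__(self):
        self.counts = {}
    def process(self, word):
        self.counts[word] = self.counts.get(word, 0) + 1

# Wire the topology: spout -> split bolt -> count bolt.
spout, splitter, counter = SentenceSpout(), SplitBolt(), CountBolt()
for sentence in spout.emit():
    for word in splitter.process(sentence):
        counter.process(word)
print(counter.counts["storm"])  # 2
```

In Storm proper, the wiring is declared with a TopologyBuilder and the framework distributes spout and bolt instances across the cluster, rather than running them in a single loop.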
- Create Spark Streaming Applications
- Working with Spark Streaming
- Creating Spark Structured Streaming Applications
- Persistence and Visualization
- Lab: Building a Spark Streaming Application
- Installing Required Software
- Building the Azure Infrastructure
- Building a Spark Streaming Pipeline
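Spark Structured Streaming treats a stream as a series of micro-batches appended to an unbounded table, with aggregates updated after each batch. The idea can be sketched in pure Python; the batch contents are invented for illustration:

```python
# Each micro-batch is a list of (user, amount) events; state carries over.
micro_batches = [
    [("ann", 10), ("bob", 5)],
    [("ann", 7)],
    [("bob", 3), ("cid", 1)],
]

totals = {}  # running aggregate, like a streaming groupBy().sum()
for batch in micro_batches:
    for user, amount in batch:
        totals[user] = totals.get(user, 0) + amount
    # After each batch, the "result table" reflects all input so far.

print(totals)  # {'ann': 17, 'bob': 8, 'cid': 1}
```

In Structured Streaming the loop is driven by the engine: you declare the aggregation once over a streaming DataFrame and the framework maintains this running state for you.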
Class Materials
Each student will receive a comprehensive set of materials, including course notes and all the class examples.
Class Prerequisites
Experience in the following is required for this Microsoft Big Data class:
- Programming experience using R, and familiarity with common R packages.
- Knowledge of common statistical methods and data analysis best practices.
- Basic knowledge of the Microsoft Windows operating system and its core functionality.
- Working knowledge of relational databases.