4.33 out of 5 (10,546 reviews on Udemy)

Hadoop Starter Kit

Hadoop learning made easy and fun. Learn HDFS, MapReduce and introduction to Pig and Hive with FREE cluster access.
Instructor: Hadoop In Real World
Language: English (auto-generated captions)
Understand the Big Data problem in terms of storage and computation
Understand how Hadoop approaches the Big Data problem and provides a solution to it
Understand the need for another file system like HDFS
Work with HDFS
Understand the architecture of HDFS
Understand the MapReduce programming model
Understand the phases in MapReduce
Envision a problem in MapReduce
Write a MapReduce program with complete understanding of program constructs
Write Pig Latin instructions
Create and query Hive tables

The objective of this course is to walk you step by step through all the core components in Hadoop and, more importantly, to make the Hadoop learning experience easy and fun.

By enrolling in this course you also get free access to our multi-node Hadoop training cluster, so you can try out what you learn right away in a real multi-node distributed environment.

ABOUT INSTRUCTOR(S)

We are a group of Hadoop consultants who are passionate about Hadoop and Big Data technologies. Four years ago, when we were looking for Big Data consultants to work on our own projects, we could not find qualified candidates because the big data industry was still very new. So we set out to train qualified candidates in Big Data ourselves, giving them deep, real-world insight into Hadoop.

WHAT YOU WILL LEARN IN THIS COURSE

In the first section you will learn what big data is, with examples. We will discuss the factors to consider when deciding whether a problem is a big data problem. We will talk about the challenges existing technologies face with big data computation. We will break down the Big Data problem in terms of storage and computation and understand how Hadoop approaches the problem and provides a solution.

In the HDFS section, you will learn why another file system like HDFS is needed. We will compare HDFS with traditional file systems and discuss its benefits. We will also work with HDFS and walk through its architecture.

In the MapReduce section you will learn the basics of the MapReduce programming model and the phases involved in MapReduce. We will go over each phase in detail and understand what happens in each. Then we will write a MapReduce program in Java to calculate the maximum closing price for each stock symbol in a stock dataset.
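To give a feel for the exercise, here is a rough plain-Java sketch of the map and reduce logic for the maximum-closing-price problem. This is not the course's actual code, it omits the Hadoop framework classes (Mapper, Reducer, Job) you will use in the course, and the CSV record format shown in the comment is an assumption for illustration:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the MapReduce logic for the max-closing-price
// problem. Assumed record format: "exchange,symbol,date,close".
public class MaxClosePrice {

    // Map phase: emit a (symbol, closePrice) pair for each input record.
    static List<SimpleEntry<String, Double>> map(List<String> records) {
        List<SimpleEntry<String, Double>> pairs = new ArrayList<>();
        for (String record : records) {
            String[] fields = record.split(",");
            String symbol = fields[1];
            double close = Double.parseDouble(fields[fields.length - 1]);
            pairs.add(new SimpleEntry<>(symbol, close));
        }
        return pairs;
    }

    // Reduce phase: for each symbol, keep only the maximum closing price.
    static Map<String, Double> reduce(List<SimpleEntry<String, Double>> pairs) {
        Map<String, Double> maxBySymbol = new HashMap<>();
        for (SimpleEntry<String, Double> pair : pairs) {
            maxBySymbol.merge(pair.getKey(), pair.getValue(), Math::max);
        }
        return maxBySymbol;
    }
}
```

In real Hadoop, the framework handles splitting the input, shuffling pairs by key between the map and reduce phases, and running everything across the cluster; the sketch above collapses all of that into two method calls on a single machine.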

In the next two sections, we will introduce you to Apache Pig and Hive. We will calculate the same maximum closing price for stock symbols from the stock dataset, this time using Pig and Hive.


Detailed Rating

5 stars: 5109
4 stars: 4087
3 stars: 1113
2 stars: 173
1 star: 64
30-Day Money-Back Guarantee

Includes

3 hours on-demand video
Full lifetime access
Access on mobile and TV
Certificate of Completion
