Learn By Example: Hadoop, MapReduce for Big Data problems

Software Engineering

15-day Money-back Guarantee

Unlimited Access

Android, iPhone and iPad Access

Certificate of Completion

Course Summary:

Taught by a 4-person team including 2 Stanford-educated ex-Googlers and 2 ex-Flipkart Lead Analysts. This team has decades of combined practical experience working with Java and with billions of rows of data. 

This course is a zoom-in, zoom-out, hands-on workout involving Hadoop, MapReduce and the art of thinking parallel. 

Let’s parse that.

Zoom-in, Zoom-Out: This course is both broad and deep. It covers the individual components of Hadoop in great detail, and also gives you a higher-level picture of how they interact with each other. 

Hands-on workout involving Hadoop, MapReduce: This course gets you hands-on with Hadoop very early on. You'll learn how to set up your own cluster using both VMs and the Cloud. All the major features of MapReduce are covered, including advanced topics like Total Sort and Secondary Sort. 

The art of thinking parallel: MapReduce completely changed the way people thought about processing Big Data. Breaking down any problem into parallelizable units is an art. The examples in this course will train you to "think parallel". 
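To make that concrete, here is a minimal plain-Python sketch (not the course's Hadoop code; all names are illustrative) of how a word count decomposes into map and reduce steps that can each run on independent workers:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: each worker turns its slice of the input into (word, 1) pairs,
    # with no knowledge of any other worker's slice.
    return [(word, 1) for word in line.lower().split()]

def shuffle(pairs):
    # Shuffle: group values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: each key is summed independently of every other key,
    # so this phase parallelizes too.
    return {key: sum(values) for key, values in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, lines))))
print(counts["the"])  # 2
```

The key observation is that neither phase needs global state: that independence is what lets Hadoop scale the same logic across a cluster.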

What's Covered:

Lots of cool stuff...

  • Using MapReduce to 


    • Recommend friends in a Social Networking site: Generate Top 10 friend recommendations using a Collaborative filtering algorithm. 
    • Build an Inverted Index for Search Engines: Use MapReduce to parallelize the humongous task of building an inverted index for a search engine. 
    • Generate Bigrams from text: Generate bigrams and compute their frequency distribution in a corpus of text. 


  • Build your Hadoop cluster: 


    • Install Hadoop in Standalone, Pseudo-Distributed and Fully Distributed modes 
    • Set up a Hadoop cluster using Linux VMs.
    • Set up a cloud Hadoop cluster on AWS with Cloudera Manager.
    • Understand HDFS, MapReduce and YARN and their interaction 


  • Customize your MapReduce Jobs: 


    • Chain multiple MR jobs together
    • Write your own customized Partitioner
    • Total Sort: globally sort a large amount of data by sampling input files
    • Secondary Sort
    • Unit tests with MRUnit
    • Integrate with Python using the Hadoop Streaming API
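Several of these customizations work together. Secondary sort, for instance, combines a custom partitioner with a composite sort key: partition on the natural key alone, but sort on (natural key, secondary key), so each reducer receives a user's values already ordered. A minimal plain-Python sketch of the idea (Hadoop's actual API uses a Partitioner plus sort/group comparators; the sample records here are made up):

```python
# Records: (user, timestamp, action). Goal: every record for a user reaches
# the same reducer, already sorted by timestamp.
records = [
    ("bob", 3, "logout"), ("alice", 2, "click"),
    ("bob", 1, "login"), ("alice", 5, "purchase"),
]

NUM_REDUCERS = 2

def partition(user):
    # Partition on the natural key (user) only, so a user's records
    # never split across reducers.
    return hash(user) % NUM_REDUCERS

partitions = {i: [] for i in range(NUM_REDUCERS)}
for record in records:
    partitions[partition(record[0])].append(record)

# The framework's sort phase orders each partition by the composite key
# (user, timestamp); the reducer then sees each user's records in time order
# without having to buffer and sort them itself.
for part in partitions.values():
    part.sort(key=lambda r: (r[0], r[1]))
```

The payoff is that the reducer never has to hold all of a user's values in memory just to sort them: the framework's own sort/merge machinery does the work.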


...and of course, all the basics: 

  • MapReduce: Mapper, Reducer, Sort/Merge, Partitioning, Shuffle and Sort
  • HDFS & YARN: Namenode, Datanode, Resource Manager, Node Manager, the anatomy of a MapReduce application, YARN scheduling, and configuring HDFS and YARN to performance-tune your cluster. 
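The pipeline named in the first bullet - map, shuffle/sort, reduce - can also be sketched end to end in the streaming style the course covers, where the mapper and reducer are ordinary programs exchanging tab-separated lines on stdin/stdout. A plain-Python sketch under that assumption (function names are illustrative, not Hadoop API):

```python
from itertools import groupby

def mapper(lines):
    # Streaming mapper: one "word<TAB>1" line per word, exactly what a
    # mapper script would print to stdout.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    # Streaming reducer: Hadoop delivers its input sorted by key, so lines
    # for the same word are adjacent and can be summed with groupby.
    pairs = (line.split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

# Simulate the job: Hadoop performs this sort between the two phases.
mapped = sorted(mapper(["to be or not to be"]))
print(list(reducer(mapped)))  # ['be\t2', 'not\t1', 'or\t1', 'to\t2']
```

Because the contract is just sorted lines of text, the same pair of scripts would run under Hadoop Streaming unchanged, which is what makes non-Java MapReduce possible.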


Write to us about anything - anything! - and we will always reply :-) Happy Learning at Unanth.

Prerequisites:

What are the requirements?

  • You'll need an IDE where you can write Java code or open the source code that's shared. IntelliJ and Eclipse are both great options.
  • You'll need some background in Object-Oriented Programming, preferably in Java. All the source code is in Java, and we dive right in without going over Objects, Classes, etc.
  • A bit of exposure to Linux/Unix shells would be helpful, but it won't be a blocker.

Target Audience :

What is the target audience?

  • Yep! Analysts who want to leverage the power of HDFS where traditional databases don't cut it anymore
  • Yep! Engineers who want to develop complex distributed computing applications to process lots of data
  • Yep! Data Scientists who want to add MapReduce to their bag of tricks for processing data

Curriculum:

Section 1 - Introduction
  1: You, this course and Us (01:52)
Section 2 - Why is Big Data a Big Deal
  2: DOWNLOAD SECTION 2 - WhyBigData
  3: The Big Data Paradigm
  4: Serial vs Distributed Computing
  5: What is Hadoop?
  6: HDFS or the Hadoop Distributed File System
  7: MapReduce Introduced
  8: YARN or Yet Another Resource Negotiator
Section 3 - Installing Hadoop in a Local Environment
  9: DOWNLOAD SECTION 3 - Install-Guides
  10: Hadoop Install Modes
  11: Set up a Virtual Linux Instance (For Windows users)
  12: Hadoop Standalone mode Install
  13: Hadoop Pseudo-Distributed mode Install
Section 4 - The MapReduce "Hello World"
  14: DOWNLOAD SECTION 4 - MR-IntroSimpleWordCount
  15: DOWNLOAD SECTION 4 - SourceCode
  16: The basic philosophy underlying MapReduce
  17: MapReduce - Visualized And Explained
  18: MapReduce - Digging a little deeper at every step
  19: "Hello World" in MapReduce
  20: The Mapper
  21: The Reducer
  22: The Job
Section 5 - Run a MapReduce Job
  23: Get comfortable with HDFS
  24: Run your first MapReduce Job
Section 6 - Juicing your MapReduce - Combiners, Shuffle and Sort and The Streaming API
  25: DOWNLOAD SECTION 6 - MR-CombinerStreamingAPIMultipleReduceShuffleSort
  26: Parallelize the reduce phase - use the Combiner
  27: Not all Reducers are Combiners
  28: How many mappers and reducers does your MapReduce have?
  29: Parallelizing reduce using Shuffle And Sort
  30: MapReduce is not limited to the Java language - Introducing the Streaming API
  31: Python for MapReduce
Section 7 - HDFS and YARN
  32: DOWNLOAD SECTION 7 - HDFS
  33: HDFS - Protecting against data loss using replication
  34: HDFS - Name nodes and why they're critical
  35: HDFS - Checkpointing to back up name node information
  36: DOWNLOAD SECTION 7 - YARN
  37: YARN - Basic components
  38: YARN - Submitting a job to YARN
  39: YARN - Plug in scheduling policies
  40: YARN - Configure the scheduler
Section 8 - Setting up a Hadoop Cluster
  41: Manually configuring a Hadoop cluster (Linux VMs)
  42: Getting started with Amazon Web Services
  43: Start a Hadoop Cluster with Cloudera Manager on AWS
Section 9 - MapReduce Customizations For Finer Grained Control
  44: DOWNLOAD SECTION 9 - Customizing-MR
  45: Setting up your MapReduce to accept command line arguments
  46: The Tool, ToolRunner and GenericOptionsParser
  47: Configuring properties of the Job object
  48: Customizing the Partitioner, Sort Comparator, and Group Comparator
Section 10 - The Inverted Index, Custom Data Types for Keys, Bigram Counts and Unit Tests!
  49: DOWNLOAD SECTION 10 - MR-InvertedIndex-WritableInterface-Bigram-MRUnit
  50: The heart of search engines - The Inverted Index
  51: Generating the inverted index using MapReduce
  52: Custom data types for keys - The Writable Interface
  53: Represent a Bigram using a WritableComparable
  54: MapReduce to count the Bigrams in input text
  55: Test your MapReduce job using MRUnit
Section 11 - Input and Output Formats and Customized Partitioning
  56: DOWNLOAD SECTION 11 - Formats-And-Sorting
  57: Introducing the File Input Format
  58: Text And Sequence File Formats
  59: Data partitioning using a custom partitioner
  60: Make the custom partitioner real in code
  61: Total Order Partitioning
  62: Input Sampling, Distribution, Partitioning and configuring these
  63: Secondary Sort
Section 12 - Recommendation Systems using Collaborative Filtering
  64: DOWNLOAD SECTION 12 - MR-CollaborativeFiltering-Recommendations
  65: Introduction to Collaborative Filtering
  66: Friend recommendations using chained MR jobs
  67: Get common friends for every pair of users - the first MapReduce
  68: Top 10 friend recommendations for every user - the second MapReduce
Section 13 - Hadoop as a Database
  69: DOWNLOAD SECTION 13 - MR-Databases-Select-Grouping
  70: Structured data in Hadoop
  71: Running an SQL Select with MapReduce
  72: Running an SQL Group By with MapReduce
  73: A MapReduce Join - The Map Side
  74: A MapReduce Join - The Reduce Side
  75: A MapReduce Join - Sorting and Partitioning
  76: A MapReduce Join - Putting it all together
Section 14 - K-Means Clustering
  77: DOWNLOAD SECTION 14 - MR-Kmeans-Algo
  78: What is K-Means Clustering?
  79: A MapReduce job for K-Means Clustering
  80: K-Means Clustering - Measuring the distance between points
  81: K-Means Clustering - Custom Writables for Input/Output
  82: K-Means Clustering - Configuring the Job
  83: K-Means Clustering - The Mapper and Reducer
  84: K-Means Clustering - The Iterative MapReduce Job
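The Section 14 lectures build k-means as an iterative MapReduce job: each pass maps every point to its nearest centroid and reduces each cluster to a new centroid. As a rough single-machine sketch of what one iteration computes (plain Python with made-up sample points, not the course's Hadoop Writables):

```python
import math
from collections import defaultdict

def nearest(point, centroids):
    # Map side: tag each point with the index of its closest centroid.
    return min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))

def kmeans_iteration(points, centroids):
    # One MapReduce pass: "map" emits (centroid_index, point), the shuffle
    # groups points by centroid index, and "reduce" averages each group
    # into a new centroid.
    clusters = defaultdict(list)
    for p in points:
        clusters[nearest(p, centroids)].append(p)
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for _, pts in sorted(clusters.items())]

points = [(0.0, 0.0), (0.0, 1.0), (9.0, 9.0), (10.0, 10.0)]
print(kmeans_iteration(points, [(1.0, 1.0), (8.0, 8.0)]))
# [(0.0, 0.5), (9.5, 9.5)] -- a driver reruns this until centroids converge
```

The iteration itself is a single stateless MapReduce job; the looping until convergence happens in driver code that feeds each job's output centroids into the next run, which is the pattern the final lecture covers.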


Instructor:

Loonycorn - a 4-person team; ex-Google.


Loonycorn is us: Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh. Between the four of us, we have studied at Stanford, IIM Ahmedabad and the IITs, and have spent years (decades, actually) working in tech in the Bay Area, New York, Singapore and Bangalore.

  • Janani: 7 years at Google (New York, Singapore); studied at Stanford; also worked at Flipkart and Microsoft
  • Vitthal: also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too
  • Swetha: early Flipkart employee; IIM Ahmedabad and IIT Madras alum
  • Navdeep: longtime Flipkart employee too, and IIT Guwahati alum

We think we might have hit upon a neat way of teaching complicated tech courses in a funny, practical, engaging way, which is why we are so excited to be here on Unanth! We hope you will try our offerings, and think you'll like them :-)


Average Rating (4 Reviews)


Kavitha J

posted 1 year ago

Very Good Course....Value for money.

Very good course; concepts are explained in detail. In some places the pace is fast, but it's manageable with the attached documents. I recommend this course.


Sunil K

posted 1 year ago


I really enjoyed this course; it is very informative and simple to understand. It is up to date on each and every Hadoop concept. I learned a lot from this course.


Vibinson Victoria

posted 4 months ago

Hadoop by Unanth - Fabulous work

Covered all the concepts through pictures, and easy to understand. Well done, team. Thanks a lot, guys.


Amit Soni

posted 2 months ago

Abstract Tutorial and waste of your money and efforts

The information is very abstract - the kind we can get from other tutorials. There should be in-depth information, the why and the how; otherwise all your effort watching these tutorials is a waste of time and money. I joined this course to understand the internals, which I didn't get, and wasted my money.