Monday, December 29, 2014

[CCDH] Exercise 1 - Using HDFS

Preface 
Files used in this exercise: 
Data files (local)
~/training_materials/developer/data/shakespeare.tar.gz
~/training_materials/developer/data/access_log.gz

In this exercise you will begin to get acquainted with the Hadoop tools. You will manipulate files in HDFS, the Hadoop Distributed File System. 

Exercise 
Before starting the exercises, run the course setup script in a terminal window: 
$ ~/scripts/developer/training_setup_dev.sh

Hadoop 
Hadoop is already installed, configured, and running on your virtual machine. Most of your interaction with the system will be through a command-line wrapper called hadoop. If you run this program with no arguments, it prints a help message. To try this, run the command below in a terminal window: 
$ hadoop
Usage: hadoop [--config confdir] COMMAND
...

The hadoop command is subdivided into several subsystems. For example, there is a subsystem for working with files in HDFS and another for launching and managing MapReduce processing jobs. 
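For instance, file operations go through the fs subsystem, while packaged MapReduce jobs are launched through the jar subsystem. The JAR and class names below are only placeholders for illustration, not files provided by this course: 
$ hadoop fs -ls /                              # file subsystem: list the HDFS root
$ hadoop jar myjob.jar MyDriver input output   # job subsystem: run a (hypothetical) MapReduce driver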

Step 1: Exploring HDFS 
The subsystem associated with HDFS in the Hadoop wrapper program is called FsShell. This subsystem can be invoked with the command hadoop fs.
1. In the terminal window, enter 
$ hadoop fs
Usage: hadoop fs [generic options]
...

You will see a help message describing all the commands associated with the FsShell subsystem. 

2. Enter: 
$ hadoop fs -ls /

This shows you the contents of the root directory in HDFS. There will be multiple entries, one of which is /user. Individual users have a "home" directory under this directory, named after their username. 
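To see those home directories, you can also list /user directly. The exact entries depend on which users exist on your cluster, but on the course VM you should at least see /user/training: 
$ hadoop fs -ls /user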

Step 2: Uploading Files 
Besides browsing the existing filesystem, another important thing you can do with FsShell is to upload new data into HDFS. 
1. Change directories to the local filesystem directory containing the sample data we will be using in the course. 
$ cd ~/training_materials/developer/data

If you perform a regular Linux ls command in this directory, you will see a few files, including two named shakespeare.tar.gz and shakespeare-stream.tar.gz. Both contain the complete works of Shakespeare in text format, but they are organized differently. For now, we will work with shakespeare.tar.gz. 
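For example, a quick way to confirm the two archives are present (other data files in the directory are omitted here): 
$ ls shakespeare*.tar.gz
shakespeare-stream.tar.gz  shakespeare.tar.gz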

2. Unzip shakespeare.tar.gz by running 
$ tar zxvf shakespeare.tar.gz

This creates a directory named shakespeare/ containing several files on your local filesystem. 
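You can inspect what was extracted before uploading it. On the course VM this directory holds one plain-text file per category of Shakespeare's works plus a glossary; treat the exact names below as an assumption about that dataset: 
$ ls shakespeare/
comedies  glossary  histories  poems  tragedies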

3. Insert this directory into HDFS: 
$ hadoop fs -put shakespeare shakespeare

This copies the local shakespeare directory and its contents into a remote HDFS directory named /user/training/shakespeare. 

4. List the contents of your HDFS home directory now: 
$ hadoop fs -ls

You should see an entry for the shakespeare directory. If you don't pass a directory name to the -ls command, it assumes you mean your home directory, i.e. /user/training. Any relative path will be based on your home directory too! 
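To see this in action, the following two commands list the same directory (assuming you are logged in as the training user, as in this course): 
$ hadoop fs -ls shakespeare
$ hadoop fs -ls /user/training/shakespeare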

5. We also have a Web server log file, which we will put into HDFS for use in future exercises. First, create a directory in HDFS to hold it: 
$ hadoop fs -mkdir weblog

The file is currently compressed using gzip. Rather than extract the file to the local disk and then upload it, we will extract and upload in one step. The -c option to gunzip uncompresses to standard output, and the dash (-) in the command below tells hadoop fs -put to read from standard input and place that data in HDFS: 
$ gunzip -c access_log.gz | hadoop fs -put - weblog/access_log

6. Run the hadoop fs -ls command to verify that the log file is in the weblog directory under your HDFS home directory. 
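A minimal check, listing the directory you created in the previous step: 
$ hadoop fs -ls weblog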

7. The access log file is quite large - around 500 MB. Create a small version of this file, consisting only of its first 5000 lines, and store the smaller version in HDFS. You can use the smaller version for testing in subsequent exercises. 
$ hadoop fs -mkdir testlog
$ gunzip -c access_log.gz | head -n 5000 | hadoop fs -put - testlog/test_access_log
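If you want to confirm that the smaller file really contains 5000 lines, you can stream it back out of HDFS and count them (an optional check, not part of the original exercise): 
$ hadoop fs -cat testlog/test_access_log | wc -l
5000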

Step 3: Viewing and Manipulating Files 
Now let's view some of the data you just copied into HDFS. 

1. Enter 
$ hadoop fs -ls shakespeare

This lists the contents of the /user/training/shakespeare HDFS directory. 

2. The glossary file included in the archive you began with is not strictly a work of Shakespeare, so let's remove it: 
$ hadoop fs -rm shakespeare/glossary

3. Enter: 
$ hadoop fs -cat shakespeare/histories | tail -n 50

This prints the last 50 lines of Henry IV, Part 1 to your terminal. This command is handy for viewing the output of MapReduce programs. Very often, an individual output file of a MapReduce program is very large, making it inconvenient to view the entire file in the terminal. 
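FsShell also has a built-in -tail command that prints roughly the last kilobyte of a file, a convenient alternative when you only want a quick peek without piping the whole file through cat: 
$ hadoop fs -tail shakespeare/histories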

4. To download a file to work with on the local filesystem, use the fs -get command. This command takes two arguments: an HDFS path and a local path. It copies the HDFS contents into the local filesystem: 
$ hadoop fs -get shakespeare/poems ~/shakepoems.txt
$ less ~/shakepoems.txt

Other Commands 
Other subcommands of the hadoop command that are useful for users of a Hadoop cluster (a short usage example follows the list): 
archive: Creates a Hadoop archive. More information can be found at Hadoop Archives.
distcp: Copies files or directories recursively. More information can be found at the Hadoop DistCp Guide.
fs: Runs a generic filesystem user client. Deprecated, use hdfs dfs instead.
fsck: Runs an HDFS filesystem checking utility.
fetchdt: Gets a delegation token from a NameNode.
jar: Runs a jar file. Users can bundle their MapReduce code in a jar file and execute it using this command.
job: Command to interact with MapReduce jobs.
pipes: Runs a pipes job.
queue: Command to interact with and view job queue information.
version: Prints the version.
CLASSNAME: The hadoop script can be used to invoke any class.
classpath: Prints the class path needed to get the Hadoop jar and the required libraries.
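As a quick taste of a couple of these, both of the following are read-only and safe to run on the course VM: 
$ hadoop version
$ hadoop classpath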

For commands useful to administrators of a Hadoop cluster, refer to the Apache Hadoop Commands Manual linked in the Supplement below. 

Supplement 
Apache Hadoop 2.5.1 - Commands Manual
