Files used in this exercise: shakespeare.tar.gz, shakespeare-stream.tar.gz, and a gzipped Web server access log (all introduced below).
In this exercise you will begin to get acquainted with the Hadoop tools. You will manipulate files in HDFS, the Hadoop Distributed File System.
Before starting the exercises, run the course setup script in a terminal window:
Hadoop is already installed, configured, and running on your virtual machine. Most of your interaction with the system will be through a command-line wrapper called hadoop. If you run this program with no arguments, it prints a help message. To try this, run the command below in a terminal window:
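That is, run the wrapper with no arguments:

$ hadoop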
The hadoop command is subdivided into several subsystems. For example, there is a subsystem for working with files in HDFS and another for launching and managing MapReduce processing jobs.
Step 1: Exploring HDFS
The subsystem associated with HDFS in the Hadoop wrapper program is called FsShell. This subsystem can be invoked with the command hadoop fs.
1. In the terminal window, enter
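As noted above, the FsShell subsystem is invoked as:

$ hadoop fs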
You see a help message describing all the commands associated with the FsShell subsystem.
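2. Now try listing the top level of the filesystem. The original command is not reproduced here; the standard FsShell form is:

$ hadoop fs -ls /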
This shows you the contents of the root directory in HDFS. There will be multiple entries, one of which is /user. Individual users have a "home" directory under this directory, named after their username.
Step 2: Uploading Files
Besides browsing the existing filesystem, another important thing you can do with FsShell is to upload new data into HDFS.
1. Change directories to the local filesystem directory containing the sample data we will be using in the course.
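For example, if the sample data were installed under ~/training_materials/data (a hypothetical path; substitute wherever your course data actually lives):

$ cd ~/training_materials/data    # hypothetical path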
If you perform a regular Linux ls command in this directory, you will see a few files, including two named shakespeare.tar.gz and shakespeare-stream.tar.gz. Both contain the complete works of Shakespeare in text format, but they are packaged and organized differently. For now, we will work with shakespeare.tar.gz.
2. Unzip shakespeare.tar.gz by running
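The extraction command itself is omitted here; the standard way to unpack a gzipped tarball is:

$ tar zxvf shakespeare.tar.gz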
This creates a directory named shakespeare/ containing several files on your local filesystem.
3. Insert this directory into HDFS:
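A minimal sketch of the upload, using the destination path described in the next sentence:

$ hadoop fs -put shakespeare /user/training/shakespeare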
This copies the local shakespeare directory and its contents into a remote HDFS directory named /user/training/shakespeare.
4. List the contents of your HDFS home directory now:
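With no path argument, -ls lists your home directory:

$ hadoop fs -ls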
You should see an entry for the shakespeare directory. If you don't pass a directory name to the -ls command, it assumes you mean your home directory, i.e., /user/training. Any relative path is based on your home directory as well.
5. We also have a Web server log file, which we will put into HDFS for use in a future exercise.
The file is currently compressed using GZip. Rather than extracting the file to the local disk and then uploading it, we will extract and upload it in one step. The -c option to gunzip uncompresses to standard output, and the dash (-) in the command below takes whatever is sent to its standard input and places that data in HDFS:
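A sketch of the combined extract-and-upload, assuming the log file is named access_log.gz and the HDFS destination is weblog/access_log (both names are assumptions; the source does not give them):

$ gunzip -c access_log.gz | hadoop fs -put - weblog/access_log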
6. Run the hadoop fs -ls command to verify that the log file is in your HDFS home directory.
7. The access log file is quite large (around 500 MB). Create a smaller version of this file, consisting of only its first 5000 lines, and store it in HDFS. You can use the smaller version for testing in subsequent exercises.
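One way to do this entirely through FsShell pipes, reusing the assumed names from above (weblog/access_log as the source; testlog as the new file, also an assumed name):

$ hadoop fs -cat weblog/access_log | head -n 5000 | hadoop fs -put - testlog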
Step 3: Viewing and Manipulating Files
Now let's view some of the data you just copied into HDFS.
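1. List the directory you uploaded earlier. The original command is not shown; the standard form is:

$ hadoop fs -ls shakespeare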
This lists the contents of the /user/training/shakespeare HDFS directory.
2. The glossary file included in the compressed file you began with is not strictly a work of Shakespeare, so let's remove it:
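A sketch of the removal, assuming the file inside the directory is simply named glossary (an assumed name):

$ hadoop fs -rm shakespeare/glossary

3. Next, view the tail of one of the files. The original command is not shown; one standard approach, assuming Henry IV, Part 1 is contained in a file named shakespeare/histories (an assumed name), is to pipe fs -cat through tail:

$ hadoop fs -cat shakespeare/histories | tail -n 50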
This prints the last 50 lines of Henry IV, Part 1 to your terminal. This command is handy for viewing the output of MapReduce programs. Very often, an individual output file of a MapReduce program is very large, making it inconvenient to view the entire file in the terminal.
4. To download a file to work with on the local filesystem, use the fs -get command. This command takes two arguments: an HDFS path and a local path. It copies the HDFS contents into the local filesystem:
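A sketch with assumed names (shakespeare/poems and ~/shakepoems.txt are illustrative only, not given in the source):

$ hadoop fs -get shakespeare/poems ~/shakepoems.txt
$ less ~/shakepoems.txt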
Useful commands for users of a Hadoop cluster are available from the hadoop command; commands useful for administrators of a Hadoop cluster can be found in the reference below:
* Apache Hadoop 2.5.1 - Command Menu