2019年11月15日 星期五

[Linux 常見問題] Linux and Unix Test Disk I/O Performance With dd Command

Source From Here
Question
How can I use the dd command on Linux to test the I/O performance of my hard disk drive? How do I check the performance of a hard drive, including read and write speed, on a Linux operating system? You can use the following commands on Linux or Unix-like systems for a simple I/O performance test:
* dd command : It is used to monitor the writing performance of a disk device on Linux and Unix-like systems.
* hdparm command : It is used to get/set hard disk parameters, including testing the reading and caching performance of a disk device on a Linux based system.

In this tutorial you will learn how to use the dd command to test disk I/O performance.

Use dd command to monitor the reading and writing performance of a disk device
1. Open a shell prompt. Or login to a remote server via ssh.
2. Use the dd command to measure server throughput (write speed): dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
3. Use the dd command to measure server latency: dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync

The dd command is useful for measuring simple sequential I/O performance.
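The write test above can be scripted at a smaller scale. The sketch below is illustrative (the file name and sizes are arbitrary choices, not from the article): it writes 8 MB with oflag=dsync and extracts the throughput figure that GNU dd prints on stderr.

```shell
#!/bin/sh
# Sequential write test: 8 blocks of 1 MB, syncing each block (oflag=dsync).
# GNU dd prints its summary on stderr, so redirect 2>&1 to capture it.
out=$(dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=8 oflag=dsync 2>&1)

# The last line looks like: "8388608 bytes (8.4 MB) copied, 0.05 s, 168 MB/s"
echo "$out" | tail -n 1

# Throughput is the last two fields of that summary line.
echo "$out" | tail -n 1 | awk '{print "throughput:", $(NF-1), $NF}'
```

Throughput figures from a one-shot run like this vary with load, so run it a few times and compare.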

Understanding dd command options
In this example, I’m using a RAID-10 (Adaptec 5405Z with SAS SSD) array running on an Ubuntu Linux 14.04 LTS server. The basic syntax to find out server throughput is as follows:
  # Syntax
  # dd if=/dev/input.file  of=/path/to/output.file  bs=block-size  count=number-of-blocks  oflag=dsync

  ## GNU dd syntax ##
  ##########################################################
  ## [Adjust bs and count as per your needs and setup]    ##
  ##########################################################
  dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
  dd if=/dev/zero of=/tmp/test2.img bs=64M count=1 oflag=dsync
  dd if=/dev/zero of=/tmp/test3.img bs=1M count=256 conv=fdatasync
  dd if=/dev/zero of=/tmp/test4.img bs=8k count=10k
  dd if=/dev/zero of=/tmp/test5.img bs=512 count=1000 oflag=dsync

  ## OR alternate syntax for GNU/dd ##
  dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
Sample outputs:
# dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 2.38434 s, 450 MB/s

Please note that one gigabyte was written to test1.img, and 450 MB/s was the server throughput for this test. Where:
* if=/dev/zero (if=/dev/input.file) : The name of the input file you want dd to read from.
* of=/tmp/test1.img (of=/path/to/output.file) : The name of the output file you want dd to write the input file to.
* bs=1G (bs=block-size) : Sets the size of the block you want dd to use. One gigabyte was written for this test. Please note that dd buffers a full block in memory, so Linux will need 1GB of free RAM; if your test system does not have sufficient RAM available, use a smaller value for bs (such as 128M or 64M).
* count=1 (count=number-of-blocks) : The number of blocks you want dd to read.
* oflag=dsync : Use synchronized I/O for data. Do not skip this option: it forces every block to hit the disk before dd continues, bypassing the write cache and giving accurate results.
* conv=fdatasync : This tells dd to perform one complete “sync” right before it exits. Unlike oflag=dsync, which syncs after every block, it flushes only once at the end, so it typically reports somewhat higher throughput.
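A quick way to observe the difference between the two sync options is to run the same small write with each; dsync flushes per block while fdatasync flushes once at the end, so dsync is normally slower. A minimal sketch (file names and sizes are arbitrary, not from the article):

```shell
#!/bin/sh
# Same amount of data (16 x 1 MB), two sync strategies:

# oflag=dsync -> synchronous write of every block (16 flushes)
dd if=/dev/zero of=/tmp/per_block.img bs=1M count=16 oflag=dsync 2>&1 | tail -n 1

# conv=fdatasync -> one fdatasync() call just before dd exits (1 flush)
dd if=/dev/zero of=/tmp/at_end.img bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1
```

On rotating disks the per-block variant is usually several times slower; behind a battery-backed RAID cache the gap may be small.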

Finding server latency time
In this example, 512 bytes were written one thousand times to get RAID10 server latency time:
# dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync

Sample outputs:
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s
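The per-request latency can be derived from that output: 1000 synchronous 512-byte writes took 0.60362 s, i.e. roughly 0.6 ms per write. The arithmetic:

```shell
#!/bin/sh
# latency per write = elapsed seconds / number of writes, in milliseconds
awk 'BEGIN { printf "%.2f ms per write\n", 0.60362 / 1000 * 1000 }'
```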

Please note that server throughput and latency depend on server/application load too. So I recommend running these tests on a freshly rebooted server as well as at peak time, to get a better idea of your workload. You can then compare these numbers across all your devices.

But why are the server throughput and latency so low?
Low values do not mean you are using slow hardware. The values can be low because of the hardware RAID10 controller’s cache.

Use hdparm command to see buffered and cached disk read speed
I suggest you run each of the following commands 2 or 3 times. To perform timings of buffered device reads for benchmark and comparison purposes:
  ## Buffered disk read test for /dev/sda ##
  hdparm -t /dev/sda1
  ## OR ##
  hdparm -t /dev/sda
To perform timings of cache reads for benchmark and comparison purposes, again run the following command 2-3 times (note the uppercase -T option):
  ## Cache read benchmark for /dev/sda ##
  hdparm -T /dev/sda1
  ## OR ##
  hdparm -T /dev/sda
OR combine both tests:
  hdparm -Tt /dev/sda

Again, note that due to filesystem caching of file operations, you will always see high read rates.

Use dd command on Linux to test read speed
To get accurate read test data, first discard the caches before testing by running the following commands:
  sync
  echo 3 | sudo tee /proc/sys/vm/drop_caches
  time dd if=/path/to/bigfile of=/dev/null bs=8k
One execution result:
# time dd if=/tmp/test1.img of=/dev/null bs=8k
131072+0 records in
131072+0 records out
1073741824 bytes (1.1 GB) copied, 0.315225 s, 3.4 GB/s

real 0m0.319s
user 0m0.050s
sys 0m0.269s
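The drop-caches/read sequence can be wrapped in a small script. The sketch below is illustrative (file name and size are arbitrary); it skips the cache drop when not running as root, in which case the reported rate reflects cached reads:

```shell
#!/bin/sh
# Create a small test file to read back (illustrative size: 8 MB).
dd if=/dev/zero of=/tmp/readtest.img bs=1M count=8 2>/dev/null

# Flush dirty pages, then drop the page cache (needs root; skipped otherwise).
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi

# Sequential read test; dd reports the rate on stderr.
dd if=/tmp/readtest.img of=/dev/null bs=8k 2>&1 | tail -n 1
```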

Linux Laptop example
Run the following command:
  ## Debian Laptop Throughput With Cache ##
  dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct

  ## Deactivate the write cache ##
  hdparm -W0 /dev/sda

  ## Debian Laptop Throughput Without Cache ##
  dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct


2019年11月10日 星期日

[Linux 文章收集] How can I setup the MTU for my network interface?

Source From Here
Preface
MTU (Maximum Transmission Unit) is related to TCP/IP networking in Linux/BSD/UNIX operating systems. It refers to the size (in bytes) of the largest datagram that a given layer of a communications protocol can pass at a time. You can see the current MTU setting with the ifconfig command under Linux:
# ifconfig

Output:
  eth0      Link encap:Ethernet  HWaddr 00:0F:EA:91:04:07
            inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
            inet6 addr: fe80::20f:eaff:fe91:407/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:141567 errors:0 dropped:0 overruns:0 frame:0
            TX packets:141306 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:101087512 (96.4 MiB)  TX bytes:32695783 (31.1 MiB)
            Interrupt:18 Base address:0xc000
A better way is to use the ip command:
$ ip link show

Output:
  1: lo:  mtu 16436 qdisc noqueue
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
     link/ether 00:0f:ea:91:04:07 brd ff:ff:ff:ff:ff:ff
  3: sit0:  mtu 1480 qdisc noop
     link/sit 0.0.0.0 brd 0.0.0.0
As you see, the MTU is set to 1500 for eth0. Let us say you want to change it to 1400; then you can use any one of the following commands to set the MTU.
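One practical consequence of lowering the MTU: the TCP maximum segment size shrinks with it, since for IPv4 without options MSS = MTU minus the 20-byte IP header and the 20-byte TCP header. A quick check of the numbers used here:

```shell
#!/bin/sh
# IPv4 TCP MSS = MTU - 20 (IP header) - 20 (TCP header)
awk 'BEGIN { for (mtu = 1500; mtu >= 1400; mtu -= 100)
                 printf "MTU %d -> MSS %d\n", mtu, mtu - 40 }'
# MTU 1500 -> MSS 1460
# MTU 1400 -> MSS 1360
```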

Setup MTU of Network Interface
# ifconfig eth0 mtu 1400

OR
# ip link set dev eth0 mtu 1400

Verify that the new MTU is set with the following command:
$ ip link list

To make the setting permanent for eth0, edit the configuration file /etc/network/interfaces (Debian Linux file):
  auto lo
  iface lo inet loopback

  auto eth0
  iface eth0 inet static
  name Ethernet LAN card
  address 192.168.1.2
  netmask 255.255.255.0
  broadcast 192.168.1.255
  network 192.168.1.0
  gateway 192.168.1.254
  mtu 1400
  post-up /etc/fw.start
  post-down /etc/fw.stop
Or /etc/sysconfig/network-scripts/ifcfg-eth0 (Red Hat Linux):
  DEVICE=eth0
  BOOTPROTO=static
  BROADCAST=192.168.1.255
  HWADDR=00:0F:EA:91:04:07
  IPADDR=192.168.1.111
  NETMASK=255.255.255.0
  NETWORK=192.168.1.0
  MTU=1400
  ONBOOT=yes
  TYPE=Ethernet
Save the file and restart the network service.
If you are using Red Hat:
# service network restart

If you are using Debian:
# /etc/init.d/networking restart

