Sunday, May 17, 2020

[ Python Article Collection ] Monitoring memory usage of a running Python program

Source From Here
Preface
At Survata, we do a lot of data processing using Python and its suite of data processing libraries like pandas and Scikit-learn. This means we use a lot of cloud computing resources, and as a result, our monthly hosting bill can be… hefty. One way to trim the amount you spend on cloud resources is to make sure you don’t ask for more resources than you actually use. Cloud providers make it really easy to spin up a multiple-GB-of-RAM server — but if your actual running process only uses a fraction of that memory, you’re wasting resources — and that means money!

However, you can’t optimize the resources you use if you don’t know what you’re actually using.

Option 1: Ask the operating system
The easiest way to track memory usage is to use the operating system itself. You can use top to provide an overview of the resources you’re using over time. Alternatively, if you want a spot inspection of resource usage, you can use the ps command:
# ps aux | grep gen
root 28386 0.2 0.0 125100 5776 pts/0 S 12:21 0:00 python3 ./gen_test.py

# watch ps -m -o %cpu,%mem,command -p 28386
Every 2.0s: ps -m -o %cpu,%mem,command -p 28386                                                         Mon May 18 12:24:09 2020

%CPU %MEM COMMAND
 0.1  0.5 python3 ./gen_test.py
 0.1    - -

The -m flag instructs ps to show each process's threads on separate lines, which is why a second row appears in the output above (on BSD and macOS, -m instead sorts results by memory usage). The -o flag controls which properties of each process are displayed — in this case, the percentage of CPU being used, the percentage of system memory being consumed, and the command line of the process being executed. The CPU percentage counts one full CPU core as 100% usage, so if you have a 4-core machine, it’s possible to see a total of up to 400% CPU usage. There are other output options to display other process properties, and other flags to ps to control which processes are displayed.

Combined with some creative shell scripting, you could write a monitoring script that uses ps to track memory usage of your tasks over time. Most hosting providers will also provide dashboards for monitoring machine-level resource usage. There are also profilers like py-spy that can be used to wrap the execution of a Python process and measure its memory and CPU usage. These profilers use operating system calls, combined with a knowledge of how Python code executes, to take periodic measurements of your program as it runs, and identify which parts of your code are using resources.
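
As a rough illustration of that shell-scripting idea, here is a minimal sketch (not from the original article) that polls ps from Python and prints the resident set size (RSS) of a given PID every couple of seconds; the script name and the two-second interval are arbitrary choices:
#!/usr/bin/env python3
# poll_rss.py - poll `ps` for a PID's resident set size (RSS, reported in KB)
import subprocess
import sys
import time

def rss_kb(pid):
    # `ps -o rss= -p PID` prints the RSS in kilobytes with no header line
    out = subprocess.check_output(["ps", "-o", "rss=", "-p", str(pid)])
    return int(out.strip())

if __name__ == '__main__':
    pid = int(sys.argv[1])  # PID to watch, passed on the command line
    while True:
        print(f"RSS: {rss_kb(pid) / 1024:.1f} MB")
        time.sleep(2)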

Unfortunately, this approach isn’t always viable for data pipeline tasks. In our situation, we’re using AWS Batch as a host for our compute tasks, which obscures the operating system-level interface. Each deployed task is wrapped in a Docker container; that task then nominates how much memory and CPU it needs to run.

This containerization process obscures how much memory is being used inside the container. From the hosting provider’s perspective, a Docker container that allocates 8GB of RAM is using all that memory, even if the code running inside the container only allocates a fraction of that amount.

So — we need to monitor memory usage inside the container.

Your first inclination might be to use the same operating system techniques, but inside the container. While this does technically work, general advice is that a Docker container should run a single process — so running a second monitoring process inside a container isn’t a good option.

Measuring memory usage from outside the running process also makes it harder to collect metrics that would let us correlate memory usage with properties of the data being analyzed. For example, does memory usage scale with the number of records in the data set? Or is it related to the complexity of the analysis performed? When observing at the level of the operating system, it is difficult to collect metrics about the operation of the underlying analysis.

What we need is a way to monitor the memory usage of a running Python process, from inside that process.

Option 2: tracemalloc
The Python interpreter has a remarkable number of hooks into its operation that can be used to monitor and introspect into Python code as it runs. These hooks are used by pdb to provide debugging; they’re also used by coverage to provide test coverage. They’re also used by the tracemalloc module to provide a window into memory usage.

tracemalloc is a standard library module added in Python 3.4 that tracks every individual memory block allocated by the Python interpreter. tracemalloc is able to provide extremely fine-grained information about memory allocations in the running Python process:
test_mem.py
  1. #!/usr/bin/env python3  
  2. import tracemalloc  
  3. import time  
  4.   
  5. if __name__ == '__main__':  
  6.     tracemalloc.start()  
  7.     my_list = []  
  8.     for i in range(10000):  
  9.         my_list.extend(list(range(1000)))  
  10.         time.sleep(5)  
  11.         current, peak = tracemalloc.get_traced_memory()  
  12.         print(f"Current memory usage is {current / 10**6}MB; Peak was {peak / 10**6}MB")  
  13.   
  14.     tracemalloc.stop()  
Execution sample:
# ./test_mem.py
Current memory usage is 0.029964MB; Peak was 0.039012MB
Current memory usage is 0.059994MB; Peak was 0.069042MB
Current memory usage is 0.089862MB; Peak was 0.09891MB
Current memory usage is 0.11973MB; Peak was 0.128778MB
...

Calling tracemalloc.start() starts the tracing process. While tracing is underway, you can ask for details of what has been allocated; in this case, we’re just asking for the current and peak memory allocation. Calling tracemalloc.stop() removes the hooks and clears any traces that have been gathered.
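
The current/peak totals are only a small part of what tracemalloc can report; it can also attribute allocations to individual source lines via snapshots. A minimal sketch (this snippet is an addition, not part of the original example):
import tracemalloc

tracemalloc.start()

# ... run whatever code you want to profile ...
data = [list(range(1000)) for _ in range(100)]

# A snapshot captures every traced allocation; statistics('lineno') groups
# allocations by the source line that performed them, largest first.
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:10]:
    print(stat)

tracemalloc.stop()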

There’s a price to be paid for this level of detail, though. tracemalloc injects itself deep into the running Python process — which, as you might expect, comes with a performance cost. In our testing, we observed a 30% slowdown when using tracemalloc on an analysis run. This might be OK when profiling an individual process, but in production, you really don’t want a 30% performance hit just so you can monitor memory usage.

Option 3: Sampling
Luckily, the Python standard library provides another way to observe memory usage — the resource module. The resource module provides basic controls for resources that a program allocates — including memory usage:
  1. import resource  
  2.   
  3. usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  
The call to resource.getrusage() returns the resources used by the program. The constant RUSAGE_SELF indicates that we’re only interested in the resources used by this process, not its children. The object returned is a structure that contains a range of operating system resources, including CPU time, signals, context switches and more; but for our purposes, we’re interested in ru_maxrss — the maximum Resident Set Size — which is the peak amount of memory the process has held in RAM so far.
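
One caveat not mentioned above: the unit of ru_maxrss is platform-dependent (kilobytes on Linux, bytes on macOS), so a small helper is handy if you want the value in megabytes. The helper name below is an arbitrary choice:
import resource
import sys

def max_rss_mb():
    # ru_maxrss is reported in kilobytes on Linux and in bytes on macOS
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss / (1024 * 1024) if sys.platform == 'darwin' else rss / 1024

print(f"Peak RSS so far: {max_rss_mb():.1f} MB")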

However, unlike the tracemalloc module, the resource module doesn’t track usage over time — it only provides a point sampling. So, we need to implement a way to sample memory usage over time. First — we define a class to perform the memory monitoring:
  1. import resource  
  2.   
  3. from time import sleep  
  4.   
  5. class MemoryMonitor:  
  6.     def __init__(self):  
  7.         self.keep_measuring = True  
  8.   
  9.     def measure_usage(self):  
  10.         max_usage = 0  
  11.         while self.keep_measuring:  
  12.             max_usage = max(  
  13.                 max_usage,  
  14.                 resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  
  15.             )  
  16.             sleep(0.1)  
  17.   
  18.         return max_usage  
When you invoke measure_usage() on an instance of this class, it will enter a loop, and every 0.1 seconds, it will take a measurement of memory usage. Any increase in memory usage will be tracked, and the maximum memory allocation will be returned when the loop exits. But what tells the loop to exit? And where do we call the code being monitored? We do that in a separate thread.
  1. from concurrent.futures import ThreadPoolExecutor  
  2.   
  3. with ThreadPoolExecutor() as executor:  
  4.     monitor = MemoryMonitor()  
  5.     mem_thread = executor.submit(monitor.measure_usage)  
  6.     try:  
  7.         fn_thread = executor.submit(my_analysis_function)  
  8.         result = fn_thread.result()  
  9.     finally:  
  10.         monitor.keep_measuring = False  
  11.         max_usage = mem_thread.result()  
  12.           
  13.     print(f"Peak memory usage: {max_usage}")  
ThreadPoolExecutor gives us a convenient way to submit tasks to be executed in a thread. We submit two tasks to that executor — the monitor, and my_analysis_function (if the analysis function requires additional arguments, they can be passed in with the submit call). The call to fn_thread.result() will block until the analysis function completes, and its result is available, at which point we can notify the monitor to stop, and get the maximum memory. The try/finally block ensures that if the analysis function raises an exception, the memory thread will still be terminated.
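
For example, with a hypothetical my_analysis_function (purely for illustration), extra positional and keyword arguments are simply forwarded by submit():
# Hypothetical analysis function, purely for illustration
def my_analysis_function(n_rows, multiplier=2):
    return sum(i * multiplier for i in range(n_rows))

# Inside the `with ThreadPoolExecutor() as executor:` block shown above
fn_thread = executor.submit(my_analysis_function, 1_000_000, multiplier=3)
result = fn_thread.result()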

Using this approach, we’re effectively sampling memory usage over time. Most of the work will be done in the main analysis thread; but every 0.1s, the monitor thread will wake up, take a memory measurement, store it if memory usage has increased, and go back to sleep.

The performance overhead of this sampling approach is minimal. Although sampling every 0.1 seconds might sound like a lot, it’s an eternity in CPU time, and as a result, there is a negligible impact on overall processing time. This sampling rate can be tuned, too; if you do see an overhead, you can increase the pause between samples; or, if you need more precise data, you can decrease the pause.
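
One way to make that tuning explicit (a small variation on the class above, not the article's code) is to pass the sampling interval into the monitor:
class MemoryMonitor:
    def __init__(self, interval=0.1):
        self.keep_measuring = True
        self.interval = interval  # pause between samples, in seconds

    def measure_usage(self):
        max_usage = 0
        while self.keep_measuring:
            max_usage = max(
                max_usage,
                resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            )
            sleep(self.interval)
        return max_usage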

The downside is that the sampling-based monitoring approach is imprecise. You’re only sampling memory usage, so short-lived memory allocation spikes will be lost in this analysis. However, for the purposes of optimizing cloud resource allocation, we only need rough numbers. We are only looking to answer whether our process is using 8GB or 10GB of RAM, not differentiate at the byte (or even megabyte) level.

Conclusion
It’s impossible to improve something you aren’t measuring. Armed with more information about the memory usage of our analysis tasks, we’re now in a much better position to optimize our resource usage. And, we’ve been able to collect that information with relatively little code and relatively little performance overhead.

Tuesday, May 12, 2020

[Linux FAQ] How to allow a range of IPs with iptables?

Source From Here
Question
Here is my iptables configuration. How can I allow a range of IPs on eth1 (10.50.x.x)?
  1. # Generated by iptables-save v1.4.4 on Thu Jul  8 13:00:14 2010  
  2. *filter  
  3. :INPUT ACCEPT [0:0]  
  4. :FORWARD ACCEPT [0:0]  
  5. :OUTPUT ACCEPT [0:0]  
  6. :fail2ban-ssh - [0:0]  
  7. -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh   
  8. -A INPUT -i lo -j ACCEPT   
  9. -A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable   
  10. -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT   
  11. -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT  
  12. -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT   
  13. -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT   
  14. -A INPUT -p tcp -m tcp --dport 110 -j ACCEPT  
  15. -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT   
  16. -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT   
  17. -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT   
  18. -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7   
  19. -A INPUT -j REJECT --reject-with icmp-port-unreachable   
  20. -A FORWARD -j REJECT --reject-with icmp-port-unreachable   
  21. -A OUTPUT -j ACCEPT   
  22. -A fail2ban-ssh -j RETURN   
  23. COMMIT  
How-To
If you only want to allow a certain range of IP addresses inside of 10.50.0.0 (such as from 10.50.10.20 through 10.50.10.80) you can use the following command:
# iptables -A INPUT -i eth1 -m iprange --src-range 10.50.10.20-10.50.10.80 -j ACCEPT

If you want to allow the entire range you can use this instead:
# iptables -A INPUT -i eth1 -s 10.50.0.0/16 -j ACCEPT

See iptables man page and this question here on ServerFault: Whitelist allowed IPs (in/out) using iptables

Supplement
How to permanently save iptables settings on Ubuntu Server?
// Save and load iptables rules
# sudo iptables-save > iptables.conf
# sudo iptables-restore < iptables.conf

// Use iptables-persistent
# sudo apt install iptables-persistent
# sudo dpkg-reconfigure iptables-persistent

