
Tuesday, April 22, 2014

[ MPI Article Collection ] Mixing MPI and OpenMP

Source: here
Preface:
The following example shows how to use OpenMP inside an MPI program. Each process runs a for loop rank+1 times; the loop is parallelized with OpenMP, and each iteration prints a Hello message.

Example:
- Sample code mpi_with_openmp.c:
  #include <stdio.h>
  #include <mpi.h>
  #include <omp.h>

  int main(int argc, char *argv[])
  {
      int numprocs, rank, namelen;
      char processor_name[MPI_MAX_PROCESSOR_NAME];
      int iam = 0, np = 1, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Get_processor_name(processor_name, &namelen);

      /* Each process runs rank+1 iterations; OpenMP splits them across threads. */
      #pragma omp parallel for private(iam, np)
      for (i = 0; i < rank + 1; i++)
      {
          np = omp_get_num_threads();
          iam = omp_get_thread_num();
          printf("Hello from thread %d out of %d from process %d out of %d on %s - Round%d\n",
                 iam, np, rank, numprocs, processor_name, i);
      }

      MPI_Finalize();
      return 0;
  }
It can then be compiled and run as follows:
# mpic++ -fopenmp mpi_with_openmp.c -o mpi_with_openmp
# mpiexec -n 3 mpi_with_openmp
Hello from thread 2 out of 24 from process 2 out of 3 on linux1 - Round2
Hello from thread 0 out of 24 from process 2 out of 3 on linux1 - Round0
Hello from thread 0 out of 24 from process 0 out of 3 on linux1 - Round0
Hello from thread 0 out of 24 from process 1 out of 3 on linux1 - Round0
Hello from thread 1 out of 24 from process 1 out of 3 on linux1 - Round1
Hello from thread 1 out of 24 from process 2 out of 3 on linux1 - Round1


[ MPI FAQ ] MPI - error loading shared libraries

Source: here
Question:
The problem I faced has been solved here: Loading shared library in open-mpi/ mpi-run

I do not understand why setting LD_LIBRARY_PATH, or specifying -x LD_LIBRARY_PATH, fixes the problem when my installation itself specifies the necessary -L arguments. My installation is in ~/mpi/. I have also included my compile-link configs.
$ mpic++ -showme:version 
mpic++: Open MPI 1.6.3 (Language: C++)

$ mpic++ -showme
g++ -I/home/vigneshwaren/mpi/include -pthread -L/home/vigneshwaren/mpi/lib
-lmpi_cxx -lmpi -ldl -lm -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl


$ mpic++ -showme:libdirs
/home/vigneshwaren/mpi/lib

$ mpic++ -showme:libs
mpi_cxx mpi dl m rt nsl util m dl % Notice mpi_cxx here %

When I compiled with mpic++ <file> and ran with mpiexec a.out I got a (shared library) linker error:
error while loading shared libraries: libmpi_cxx.so.1: 
cannot open shared object file: No such file or directory

The error has been fixed by setting LD_LIBRARY_PATH. The question is how and why? What am I missing? Why is LD_LIBRARY_PATH required when my installation looks just fine?

Answer:
libdl, libm, librt, libnsl and libutil are all essential system-wide libraries that come as part of the very basic OS installation. libmpi and libmpi_cxx are part of the Open MPI installation and in your case are located in a non-standard location that must be explicitly added to the dynamic linker's search path, e.g. via LD_LIBRARY_PATH.

It is possible to modify the configuration of the Open MPI compiler wrappers and make them pass the -rpath option to the linker. -rpath takes a library path and appends it to a list, stored inside the executable file, which tells the runtime link editor (a.k.a. the dynamic linker) where to search for libraries before it consults the LD_LIBRARY_PATH variable. For example, in your case the following option would suffice:
  -Wl,-rpath,/home/vigneshwaren/mpi/lib
This would embed the path to the Open MPI libraries inside the executable and it would not matter if that path is part of LD_LIBRARY_PATH at run time or not.

Supplement:
GCC Options for Linking
-Wl,option : Pass option as an option to the linker. If option contains commas, it is split into multiple options at the commas.


Sunday, March 30, 2014

[ MPI FAQ ] MPI multiple dynamic array passing in C

Source: here
Question: 
I'm trying to ISend() two arrays, arr1 and arr2, plus an integer n which is the size of arr1 and arr2. I understood from this post that sending a struct that contains all three is not an option, since n is only known at run time. Obviously, I need n to be received first, since otherwise the receiving process wouldn't know how many elements to receive. What's the most efficient way to achieve this without using the blocking Send()?

Answer: 
Sending the size of the array is redundant (and inefficient), as MPI provides a way to probe for incoming messages without receiving them, which provides just enough information to allocate memory properly. Probing is performed with MPI_PROBE, which looks a lot like MPI_RECV except that it takes no buffer-related arguments. The probe operation returns a status (MPI_Status) object which can then be queried with MPI_GET_COUNT for the number of elements of a given MPI datatype contained in the message; explicitly sending the number of elements therefore becomes redundant.

Here is a simple example with two ranks: 
  if (rank == 0)
  {
      MPI_Request req;

      // Send a message to rank 1 with tag 0
      MPI_Isend(arr1, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
      // Do not forget to complete the request!
      MPI_Wait(&req, MPI_STATUS_IGNORE);
  }
  else if (rank == 1)
  {
      MPI_Status status;

      // Wait for a message from rank 0 with tag 0
      MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
      // Find out the number of elements in the message -> size goes to "n"
      MPI_Get_count(&status, MPI_DOUBLE, &n);
      // Allocate memory
      arr1 = malloc(n * sizeof(double));
      // Receive the message; ignore the status
      MPI_Recv(arr1, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }
MPI_PROBE also accepts the wildcard rank MPI_ANY_SOURCE and the wildcard tag MPI_ANY_TAG. One can then consult the corresponding entry in the status structure in order to find out the actual sender rank and the actual message tag. 

Probing for the message size works because every message carries a header, called the envelope. The envelope consists of the sender's rank, the receiver's rank, the message tag and the communicator, and it also carries information about the total message size. Envelopes are sent as part of the initial handshake between the two communicating processes.

[Git FAQ] error: The following untracked working tree files would be overwritten by merge

  Source From Here
Solution 1:
// x ----- also remove ignored files and files that git does not recognize
// d ----- remove files in paths that have not been added to git
// f ----- force the clean
#   git clean -d -fx
Solution 2: Today on the server gi...