Monday, January 24, 2022

SPACK v0.17 on RH6

Starting with v0.17, Spack no longer runs on RHEL 6 out of the box.
The first reason is that Python 2.7 or newer is required (RHEL 6 ships with Python 2.6); the second is that clingo has to be installed by the user, since bootstrapping does not work there.

I use Miniconda to provide Python 2.7. Clingo can be installed via conda as
$ conda install -c potassco clingo
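
A minimal sketch of the full setup, assuming a dedicated conda environment (the environment name spack-py27 is hypothetical, and this assumes the potassco channel provides a clingo build compatible with your Python):

$ conda create -n spack-py27 python=2.7
$ conda activate spack-py27
$ conda install -c potassco clingo
$ python --version    # make sure this Python is first in PATH before running spack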

You may also want to disable bootstrapping:
% spack bootstrap untrust github-actions
% spack bootstrap disable
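
To check that Spack now uses the conda-installed clingo instead of trying to bootstrap it, concretizing any package works as a quick test (zlib is just an arbitrary example):

% spack spec zlib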

Please see
https://spack.io/changes-spack-v017/

Monday, January 10, 2022

Android 12 on Xiaomi A1

My A1 now runs Pixel Experience based on Android 12, and it looks good. Actually, it feels better than the Poco F3 with MIUI.



Friday, December 24, 2021

Invited talk

 We were invited to give a talk in a session of the 181st Meeting of the Acoustical Society of America (ASA).


https://www.researchgate.net/publication/356367868_High-performance_computing_for_long-range_underwater_acoustics




Guest Investigator @Woods Hole Oceanographic Institution

 As of this December, I am also a Guest Investigator at Woods Hole Oceanographic Institution.

I will work on HPC for underwater acoustics.

I hope to produce some good results and feed them back to the CTBT.

Tuesday, October 26, 2021

heFFTe

 Installation of heFFTe with MKL on SPACK:

% spack install heffte%intel@2021.4.0+mkl ^intel-oneapi-mpi@2021.4.0 ^intel-mkl threads=openmp
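
Once the build finishes, the installation can be inspected and loaded as usual (a generic usage note, not specific to heFFTe):

% spack find -lv heffte
% spack load heffte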

To use already-installed Intel oneAPI and CMake with SPACK

To avoid installing Intel oneAPI and CMake multiple times, we can use the already-installed ones.
Install oneAPI and CMake via APT, and put a packages.yaml under ~/.spack/ with the following contents:

 $ cat ~/.spack/packages.yaml

packages:
  intel-oneapi-mkl:
    buildable: false
    externals:
    - spec: intel-oneapi-mkl@2021.4.0
      prefix: /opt/intel/oneapi/
  intel-oneapi-mpi:
    buildable: false
    externals:
    - spec: intel-oneapi-mpi@2021.4.0
      prefix: /opt/intel/oneapi/
  cmake:
    buildable: false
    externals:
    - spec: cmake@3.16.3
      prefix: /usr

Here, buildable: false tells SPACK never to build these packages itself, so the external installations above are always used.
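
As a quick sanity check (just a usage sketch), the concretizer should now resolve cmake to the external 3.16.3 under /usr:

$ spack spec cmake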

Thursday, October 14, 2021

MPI/OpenMP hybrid with Intel MPI

  mpirun -np 8 -genv OMP_NUM_THREADS=16 -hosts node2-ib,node3-ib,node4-ib,node5-ib -ppn 2 ./sol

To run an OpenMP/MPI hybrid code with Intel MPI, a command like the one above is needed.
The options mean:
8 MPI processes in total (-np 8),
16 OpenMP threads per MPI process (-genv OMP_NUM_THREADS=16),
2 MPI processes per node (-ppn 2),
so the 8 processes are spread across the 4 nodes listed with -hosts.
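
A minimal hybrid hello-world (my own sketch, not the actual sol code) is handy for checking that these launch settings place ranks and threads as expected:

/* hybrid_hello.c - minimal MPI/OpenMP placement check (not the actual "sol" code).
 * Build (Intel MPI):  mpiicc -qopenmp hybrid_hello.c -o hybrid_hello
 * Run:  mpirun -np 8 -genv OMP_NUM_THREADS=16 -ppn 2 ./hybrid_hello
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nprocs, hostlen;
    char host[MPI_MAX_PROCESSOR_NAME];

    /* FUNNELED is enough when only the master thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Get_processor_name(host, &hostlen);

    #pragma omp parallel
    {
        /* With the command above, expect 8 ranks x 16 threads,
         * two ranks per host. */
        printf("host %s: rank %d/%d, thread %d/%d\n",
               host, rank, nprocs,
               omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}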