We now briefly explain how to use the same executable amrvac (the one you already compiled and used to produce the output *.dat files) to convert a single *.dat file, or all of them, to one of these formats.
We will assume that you ran the standard 2D advection problem used for test purposes, i.e. that you did the following steps beforehand:
    cd src
    setamrvac -d=22 -phi=0 -z=0 -p=rho -u=testrho -g=16,16 -cp=openmpi -s
    make clean amrvac
    cd ..
    ln -s par/testrho/testrho_vac22 amrvac.par
    mpirun -np 1 amrvac

We also assume that in the parameter file mentioned above, the namelist &filelist was given as follows (note that the end of a namelist is indicated, as usual, by a forward slash)
    &filelist
        filenamelog='datamr/testrho/vaclogo'
        filenameout='datamr/testrho/vaclogo'
        primnames='rho'
    /

If all went well, you have then created as many *.dat files as requested through the settings you provided in the combined &savelist and &stoplist namelists from the par-file. For this example, they normally default to asking a full data dump at time zero, as well as every time the simulation time has increased by 0.05, up to tmax=1.0d0, so that we actually have 21 snapshots in total. You should thus have files datamr/testrho/vaclogo0000.dat up to datamr/testrho/vaclogo0020.dat. You can now convert such a *.dat file individually to a *.vtu file as follows. Edit the par-file and modify the &filelist to something like
    &filelist
        filenamelog='datamr/testrho/vaclogo'
        filenameout='datamr/testrho/vaclogo'
        primnames='rho'
        filenameini='datamr/testrho/vaclogo'
        convert=.true.
        convert_type='vtuCC'
        saveprim=.false.
        snapshotini=0
    /

Assuming that this par-file is still known through the symbolic link amrvac.par as above, you can then convert a single *.dat file (here the datamr/testrho/vaclogo0000.dat file, since we select snapshotini=0) by simply running again
    mpirun -np 1 amrvac

or, equivalently on a single CPU,
    amrvac

Note that this will create a new file, namely datamr/testrho/vaclogo0000.vtu, which can be directly imported in Paraview. Under the settings above, it will just contain the density on the grid hierarchy at time zero. The convert_type='vtuCC' indicates that the data is stored exactly as the code interprets and updates the values, namely as cell-centered quantities. The saveprim=.false. setting has no real meaning for this example, since for advection the conservative and primitive variables coincide (only the density rho exists).
Realizing that you typically want to convert multiple data files, you can do this by repeating the above as many times as there are *.dat files, raising/changing the snapshotini identifier each time. Since you typically want to convert all data files between a minimum and maximum number of similarly named files, the script doconvert is provided. Typing doconvert will tell you its intended usage, namely
    doconvert par/testrho/testrho_vac22 0 20

in the example case at hand, where we created 21 data files from running the advection problem. This doconvert script assumes that you have edited the par-file manually once as above (so that the lines needed for conversion are present in the &filelist namelist), and that the executable amrvac exists in the same directory. It will complain when the par-file does not exist, and it obviously requires the existence of all files between the start and stop index (0 and 20 here). With Paraview, you will then be able to import all 21 *.vtu files with the same base filename at once, and directly make movies or still images from them.
For conversion on multiple CPUs, the following convert types are available:

    convert_type='vtumpi'
    convert_type='vtuCCmpi'
    convert_type='pvtumpi'
    convert_type='pvtuCCmpi'
    convert_type='tecplotmpi'
    convert_type='tecplotCCmpi'

Here, the prefix p stands for the parallel file format, where each process is allowed to dump its data into its own (e.g. *.vtu) file and a master file (e.g. *.pvtu) is stored by rank zero. This has the advantage that the write operation is sped up significantly on suitable file systems. In visualization software, only the *.pvtu files need to be imported, and the reading process is also sped up in case of parallel visualization.
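As an illustration, a &filelist for such a parallel post-processing conversion could look like the single-CPU example above, with only the convert_type changed (a sketch only; the filenames and snapshot number are those of the advection example):

    &filelist
        filenamelog='datamr/testrho/vaclogo'
        filenameout='datamr/testrho/vaclogo'
        primnames='rho'
        filenameini='datamr/testrho/vaclogo'
        convert=.true.
        convert_type='pvtuCCmpi'
        saveprim=.false.
        snapshotini=0
    /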
Also, you can then use the same strategy as explained above for converting on a single CPU: you will always need to edit the par-file once to specify how the conversion is to be done, and then you may run interactively, on e.g. 4 CPUs, like
    mpirun -np 4 amrvac

or do this in batch (use a batch job script for that) to perform multiple data file conversions. We also provide a small script, called doconvertpar, which works similarly to the doconvert script explained above, but takes one extra parameter: the number of CPUs. Its usage is described by
    doconvertpar parfilename startindex stopindex nprocessor
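For the advection example above, converting all 21 snapshots on 4 CPUs would then be invoked as follows (an illustrative call, assuming the par-file has been prepared for parallel conversion as described):

    doconvertpar par/testrho/testrho_vac22 0 20 4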
The conversion can also be done automatically during the run, by setting autoconvert=.true. in the &filelist, e.g.

    &filelist
        filenamelog='datamr/testrho/vaclogo'
        filenameout='datamr/testrho/vaclogo'
        primnames='rho'
        saveprim=.false.
        autoconvert=.true.
        convert_type='pvtuCCmpi'
    /

and when the code is run via
    mpirun -np 2 amrvac

three new output files (vaclogoXXXX.pvtu, vaclogoXXXXp0000.vtu, vaclogoXXXXp0001.vtu) will appear alongside the vaclogoXXXX.dat files, stored at the given intervals. All functionality of the usual conversion is retained, e.g. derived quantities and primitive variables (using the saveprim=.true. option) can be stored in the output files.
Another very useful option is to specify which variables actually need to be converted: by default, all conservative variables available in the *.dat file will be included, but then again file sizes may become restrictive. For that purpose, the logical array writew allows you to select which variable(s) to store (possibly in combination with saveprim). You can then create different files for selected variables, knowing that the output filename will start with filenameout, while the actual data file being converted is known from the combination of filenameini and snapshotini. A sketch of such a selection is given below.
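As a hedged illustration (the advection example has only one variable, so imagine instead a physics module with, say, four conservative variables), a &filelist keeping only the first and fourth variable might look like this; the filenames and the number of entries in writew are assumptions for the sake of the example, and writew must have one entry per variable in the *.dat file:

    &filelist
        filenamelog='datamr/testrho/vaclogo'
        filenameout='datamr/testrho/vaclogo_select'
        filenameini='datamr/testrho/vaclogo'
        convert=.true.
        convert_type='vtuCC'
        writew=.true.,.false.,.false.,.true.
        snapshotini=0
    /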
We allow the possibility to compute derived variables from the *.dat file in the user file, by setting in the integer nwauxio how many variables you add beyond the nw variables typical for the physics module at hand. You should then compute and store that many variables in the w(*,nw+1) ... w(*,nw+nwauxio) entries, in the user-written subroutine specialvar_output (as defined in amrvacnul.speciallog.t). The names for these variables then need to be provided in the corresponding specialvarnames_output subroutine, which simply extends the strings wnames and primnames. This feature is very useful for the same reason as above: you can let the code compute gradients of scalar fields, divergences of vector quantities, curls of vectors, etc., using the precoded subroutines for that purpose found in geometry.t. You then do not rely on visualization software to do interpolations or discretizations, which may not reflect those actually used in MPI-AMRVAC. A sketch of such user subroutines follows below.
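As a rough sketch only: the exact argument list and declarations should be copied from amrvacnul.speciallog.t of your version, and the quantity computed here (the square of the density, named 'rhosqr') as well as the assumption nwauxio=1 are purely illustrative.

    subroutine specialvar_output(ixI^L,ixO^L,w,x,normconv)

    ! called during convert: fill the nwauxio extra (derived) variables
    ! here, as a mere example, we store the square of the density

    include 'amrvacdef.f'

    integer, intent(in)          :: ixI^L, ixO^L
    double precision, intent(in) :: x(ixI^S,1:ndim)
    double precision             :: w(ixI^S,nw+nwauxio)
    double precision             :: normconv(0:nw+nwauxio)
    !-----------------------------------------------------------------------------
    ! first extra variable (assumes nwauxio=1 in the &filelist)
    w(ixO^S,nw+1)=w(ixO^S,rho_)**2

    end subroutine specialvar_output
    !=============================================================================
    subroutine specialvarnames_output

    ! append the names of the extra variables to the wnames/primnames strings

    include 'amrvacdef.f'
    !-----------------------------------------------------------------------------
    primnames= TRIM(primnames)//' rhosqr'
    wnames=    TRIM(wnames)//' rhosqr'

    end subroutine specialvarnames_output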
Another useful feature is the possibility to select the output AMR level. You can let the code generate, from the *.dat file, output residing on a specified level level_io. This uses the MPI-AMRVAC internal means to perform restriction and prolongation, so you can e.g. make sure to obtain output on a single uniform grid, as sketched below.
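For example, a conversion forcing all output to the coarsest level might look as follows; this assumes level_io is set in the &filelist as the other conversion parameters above are (check your version for the exact placement), and the output filename is again just illustrative:

    &filelist
        filenamelog='datamr/testrho/vaclogo'
        filenameout='datamr/testrho/vaclogo_uniform'
        filenameini='datamr/testrho/vaclogo'
        convert=.true.
        convert_type='vtuCC'
        level_io=1
        snapshotini=0
    /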
The Idl conversion does not work in parallel, but it can handle the addition of extra IO variables (nwauxio), and it allows renormalizing the data using the normt and normvar arrays, in case you directly want to have dimensional quantities available. An additional script provided is doidlcat, which basically concatenates all requested *.out files into a single file that can be used with the (few) Idl macros mentioned above. Its intended use for the 2D advection example would be
    doconvert par/testrho/testrho_vac22 0 20
    doidlcat datamr/testrho/vaclogo 0 20 1

The first line creates the 21 files datamr/testrho/vaclogo0000.out up to datamr/testrho/vaclogo0020.out (assuming you edited the par-file and indicated the proper convert_type for Idl), while the second line then gathers them all in a single datamr/testrho/vaclogoall.out file, ready for Idl visualization with VAC-like macros, such as .r getpict, .r plotfunc or .r animate. The 3 integer parameters to doidlcat indicate the first and last snapshot number, and a skip rate. If the latter differs from 1, only every so-many-th file is included in the concatenation.
Solutions may exist on the machine at hand, e.g. using the assign command (whose syntax you will need to look up). We would also like to hear if anyone knows of a way to specify the endianness of the output in MPI/Fortran itself, independent of the platform.