Tutorial
This tutorial closely follows the example hubbard.c, which additionally covers parsing of command line arguments etc.
To gain access to all diverge functions, include the diverge.h header:
#include <diverge.h>
We refer the reader to the Makefile and Makefile.local in the
examples/ directory of divERGe.tar.gz to understand how C/C++ code using libdivERGe.so is
compiled and linked.
To initialize the library, you must either call diverge_init() or
diverge_embed().
diverge_init( NULL, NULL );
Thereafter, you may continue with the central building block of divERGe, i.e.,
the diverge_model_t instance (see Model).
Creating a new diverge_model_t instance
Call diverge_model_init():
diverge_model_t* mod = diverge_model_init();
If desired, set the model name:
sprintf(mod->name, "My Fancy Model");
Set the dimension and the number of coarse and fine k points. Here, we use a 6×6 mesh with 5×5 refinement. As we don’t touch mod->nk[2] and mod->nkf[2] (they are zero by default) the library understands that the model is two dimensional.
mod->nk[0] = 6;
mod->nk[1] = 6;
mod->nkf[0] = 5;
mod->nkf[1] = 5;
Set the lattice vectors. By default, all parameters that are left untouched are zero.
mod->lattice[0][0] = 1.0;
mod->lattice[1][1] = 1.0;
mod->lattice[2][2] = 1.0;
Set the number of orbitals involved. Here, we keep it simple and use a
single-orbital model. For multiple orbitals, their positions must be indicated
in diverge_model_t.positions.
mod->n_orb = 1;
Set the number of hopping elements, allocate the hopping elements (via
diverge_mem_alloc_rs_hopping_t() or plain malloc/calloc) and fill each
rs_hopping_t element of the array diverge_model_t.hop.
mod->n_hop = 4;
// alternatively, use an appropriate call to malloc()
mod->hop = diverge_mem_alloc_rs_hopping_t(4);
// structure: { {Rx,Ry,Rz}, o1,o2, s1,s2, t }
mod->hop[0] = (rs_hopping_t){ { 1, 0, 0}, 0,0, 0,0, 1 };
mod->hop[1] = (rs_hopping_t){ {-1, 0, 0}, 0,0, 0,0, 1 };
mod->hop[2] = (rs_hopping_t){ { 0, 1, 0}, 0,0, 0,0, 1 };
mod->hop[3] = (rs_hopping_t){ { 0,-1, 0}, 0,0, 0,0, 1 };
We want to make use of \(SU(2)\) symmetry and therefore only consider ‘one’
spin. \(n_\mathrm{spin}\) generally corresponds to \(2S+1\), i.e., the
number of possible spin z values (see diverge_model_t.n_spin).
mod->SU2 = true;
mod->n_spin = 1;
The point group symmetry setup is a bit more complicated. It can be neglected in many cases and usually merely improves runtimes, so we skip it here.
Next comes the vertex setup. Since we simulate a Hubbard model, this is extremely simple: a single vertex element of type rs_vertex_t is put into the array diverge_model_t.vert, which must be allocated first (again, you may use diverge_mem_alloc_rs_vertex_t() for convenience or plain malloc/calloc):
mod->n_vert = 1;
mod->vert = diverge_mem_alloc_rs_vertex_t(1);
// structure: { channel, {Rx,Ry,Rz}, o1,o2, s1,s2,s3,s4, V }
mod->vert[0] = (rs_vertex_t){ 'D', {0,0,0}, 0,0, 0,0,0,0, 3 };
It is strongly advised to validate whether the model input is correctly
interpreted by the library by calling diverge_model_validate():
if (diverge_model_validate( mod ))
    diverge_mpi_exit(EXIT_FAILURE);
Optionally, we can have the code calculate a non-interacting band structure by defining the points of the irreducible BZ path in relative reciprocal lattice vector coordinates (reciprocal crystal coordinates):
mod->n_ibz_path = 4;
mod->ibz_path[1][0] = 0.5;
mod->ibz_path[2][0] = 0.5;
mod->ibz_path[2][1] = 0.5;
It is mandatory to call diverge_model_internals_common() on the
diverge_model_t in order to initialize internal data structures:
diverge_model_internals_common( mod );
Grid FRG
Thereafter we have several choices. If we want the model to be prepared for a
grid FRG calculation, we call diverge_model_internals_grid():
diverge_model_internals_grid( mod );
N-patch FRG
For an N-patch FRG calculation, we additionally need to define a patching. Note that N-patch expects nkf[0] = nkf[1] = 1, as well as dimension 2 (set via nk[2] = nkf[2] = 0); nk[0] and nk[1] should be large (compared to one). We can try to automatically generate a patching by directly calling
diverge_model_internals_patch( mod, 6 );
where the second parameter of diverge_model_internals_patch() determines
the number of patches to be found in the IBZ. Fine-grained control over the
patching setup is provided through other functions documented here.
Truncated Unity FRG
For a Truncated Unity FRG calculation, we can either define formfactors right
in the model by allocating and filling the tu_formfactor_t array
diverge_model_t.tu_ff and setting its length, or we can
automatically search for formfactors when this array points to NULL (and is
of zero length). diverge_model_internals_tu() takes as second parameter
the maximum distance up to which formfactors shall be included, here 2.01:
diverge_model_internals_tu( mod, 2.01 );
Writing the diverge_model_t to a file
As we might want to understand whether our model implementation is correct
without doing the numerically expensive FRG flow, we can save it to a file using
diverge_model_to_file():
char* md5 = diverge_model_to_file( mod, "model.diverge" );
The md5sum of the model is stored in the string md5 (which points to static
memory, i.e., doesn't need to be freed). From Python, we can read the
file model.diverge using the interface given in diverge.output (see
Simulation Output):
import diverge.output as ro
M = ro.read('model.diverge')
help(M)
Initializing the Flow Step Instance diverge_flow_step_t
We must make a choice here: which channels should be simulated and which backend
should be used. Note that the setup of the diverge_model_t internals
(using the functions diverge_model_internals_patch(),
diverge_model_internals_grid(), diverge_model_internals_tu())
must coincide with your choice of method. Sticking to the grid example above, we call
diverge_flow_step_init() as follows:
diverge_flow_step_t* st = diverge_flow_step_init( mod, "grid", "PCD" );
if (!st) {
    mpi_err_printf("error in initializing the flow step\n");
    diverge_mpi_exit(EXIT_FAILURE);
}
Users are always expected to write their own integrator (with some help given
through the structure diverge_euler_t and the function
diverge_euler_next()) and perform the flow using
diverge_flow_step_euler(). In practice, this looks like:
double Lambda = 10.0;
for (int i=0; i<10; ++i) {
    diverge_flow_step_euler( st, Lambda, -1.0 );
    double vertmax; diverge_flow_step_vertmax( st, &vertmax );
    double loopmax[2]; diverge_flow_step_loopmax( st, loopmax );
    double chanmax[3]; diverge_flow_step_chanmax( st, chanmax );
    mpi_eprintf( "%.5e %.5e %.5e %.5e %.5e %.5e %.5e %.5e\n",
            Lambda, -1.0, loopmax[0], loopmax[1], chanmax[0],
            chanmax[1], chanmax[2], vertmax );
    Lambda -= 1.0;
}
Notice the usage of the convenience function mpi_eprintf(), which prints
to stderr on only one of the MPI ranks. The functions
diverge_flow_step_vertmax(), diverge_flow_step_loopmax() and
diverge_flow_step_chanmax() return the vertex maximum, the loop maxima,
and the channel maxima (here called after each flow step).
Postprocessing
The postprocessing procedure is fairly simple. We run
diverge_postprocess_and_write() and obtain output in the filename we
pass (post.diverge):
diverge_postprocess_and_write( st, "post.diverge" );
Again, we can use the Python module diverge.output (see Simulation Output) to load the data into python:
import diverge.output as ro
P = ro.read('post.diverge')
help(P)
Finalize
We must free the resources allocated during the setup and run. For the
diverge_flow_step_t instance, simply call
diverge_flow_step_free(). For the diverge_model_t instance
and all attached memory, use diverge_model_free(). Finally, the MPI and
internal setup must be torn down. To do so, call diverge_finalize()
(if the library has been initialized with diverge_embed(), call
diverge_reset() instead):
diverge_flow_step_free( st );
diverge_model_free( mod );
diverge_finalize();
That’s it; these were the steps required to write an FRG code. To see how to extend this minimal example, take a look around this documentation or the examples in the git repo.