Please prepare your job_script so that it can store and load configuration snapshots and return the metadata relevant to the particular sampling method (at minimum, the reaction coordinate and the calculation steps/time).
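For orientation, the following is a minimal sketch of those responsibilities in Python. Everything here is hypothetical: the function names, the pickle-based storage, and the dict-based state are illustrative assumptions, not the actual FRESHS harness interface, which is defined by the examples shipped with the code.

# Hypothetical sketch of a job_script's responsibilities. The names and
# the pickle-based storage are illustrative assumptions only; they do
# not correspond to the actual FRESHS harness interface.
import pickle

def store_snapshot(state, path):
    # Persist the simulation state so a run can be restarted from it.
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_snapshot(path):
    # Restore a previously stored configuration snapshot.
    with open(path, "rb") as f:
        return pickle.load(f)

def metadata(state, steps):
    # Report what the sampling method needs: at minimum the reaction
    # coordinate and the calculation steps (or time). Here the state is
    # assumed to be a dict whose "x" entry is the reaction coordinate.
    return {"rc": state["x"], "steps": steps}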
Open a terminal and start the server (you may edit the configuration file first):
cd freshs/server
python main_server.py -c server-sample.conf
Open another terminal and start the client, e.g.:
cd freshs/client
python main_client.py -c client-sample.conf
The executable of your simulation and the harness script are read from the configuration file. Depending on your resources, you can connect multiple clients to the server.
In the tests, the test_ffs_particle subdirectory contains a very simple FFS calculation with no real client or harness script: the client-side communications and the dynamics calculation are folded into a single small Python program, intended as a test of the server. The README file explains how to run it:
cd freshs/server
python main_server.py -c server-sample.conf
In a separate terminal:
cd freshs/test/test_ffs_particle
python main_particle.py
This is probably the most minimal use of FRESHS that justifies downloading it, but it may help you get started with a more interesting calculation. If you have a Python-based dynamics program, you might choose to plug it into FRESHS by writing your own version of particle.py instead of following the full server-client-job_script-md_program route; the kind of dynamics such a program runs is sketched below.
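For flavour, here is a minimal, self-contained sketch of the sort of dynamics a particle.py-style program might compute: overdamped Langevin motion in a one-dimensional double well, with the position serving as the reaction coordinate. It illustrates only the dynamics side, not the FRESHS client protocol, and all names, the potential, and the parameter values are illustrative assumptions.

# Hypothetical sketch: Euler-Maruyama integration of overdamped
# Langevin dynamics in the double-well potential V(x) = (x^2 - 1)^2,
# with the position x as the reaction coordinate.
import math
import random

def force(x):
    # F = -dV/dx for V(x) = (x^2 - 1)^2
    return -4.0 * x * (x * x - 1.0)

def propagate(x, n_steps, dt=1e-3, temperature=0.2):
    # One Euler-Maruyama step per iteration; the noise amplitude
    # follows the fluctuation-dissipation relation sqrt(2*T*dt).
    noise = math.sqrt(2.0 * temperature * dt)
    for _ in range(n_steps):
        x += force(x) * dt + noise * random.gauss(0.0, 1.0)
    return x

x = -1.0                      # start in the left well (state A)
x = propagate(x, 10000)
print("reaction coordinate:", x)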
Once you have run the test_ffs_particle example, a reaction rate for the barrier crossing should be printed to the console. Also have a look at the OUTPUT and DB directories, which were created in whatever directory you ran the server from (probably freshs/server/OUTPUT and freshs/server/DB). They should contain some logging, as well as an SQL database which you can interrogate to learn more about the configurations generated during the run.
Have a look at the DB using sqlitebrowser (or some other viewer, such as a web-browser plugin):
sqlitebrowser DB/*_configpoints.sqlite
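If you prefer to interrogate the database from Python, a minimal sketch along these lines works. The file-name pattern is taken from above; no particular table layout is assumed here, so the first step is simply to list the tables actually present before querying them.

# Minimal sketch: open the configuration-point database and list its
# tables before assuming any particular schema.
import glob
import sqlite3

db_file = glob.glob("DB/*_configpoints.sqlite")[0]
conn = sqlite3.connect(db_file)
for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)
conn.close()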
You can have parallelism without using MPI, but if you want the individual path fragments themselves to run in parallel, you can do so by passing the mpirun prefix for your executable as a command-line option to the client, e.g.:
python main_client.py -c client-sample.conf -e "mpirun -np 8"
If you need a secure SSH tunnel between the machines hosting the server and client processes, the tunnel can be enabled in the client configuration file, where the tunnel command can also be set:
# ssh tunnel
ssh_tunnel = 1
# ssh tunnelcommand
ssh_tunnelcommand = ssh -N -L 10000:localhost:10000 tunneluser@tunnelhost