Re: namd scale-up

From: Revthi Sanker (
Date: Fri Sep 06 2013 - 01:25:53 CDT

Dear Sir,
I am herewith attaching the details which I obtained by logging into one of
the nodes in my cluster.
I would also like to bring to your notice that when the namd run has
finished the *test.err* file displays:

WARNING: It appears that your OpenFabrics subsystem is configured to only
allow registering part of your physical memory. This can cause MPI jobs to
run with erratic performance, hang, and/or crash.

This may be caused by your OpenFabrics vendor limiting the amount of
physical memory that can be registered. You should investigate the
relevant Linux kernel module parameters that control how much physical
memory can be registered, and increase them to allow registering all
physical memory on your machine.

See this Open MPI FAQ item for more information on these Linux kernel module parameters.

  Local host: a3n83
  Registerable memory: 32768 MiB
  Total memory: 65511 MiB

Your MPI job will continue, but may behave poorly and/or hang.
[a3n83:20048] 127 more processes have sent help message / reg mem limit low
[a3n83:20048] Set MCA parameter "orte_base_help_aggregate" to 0 to see all error messages
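In case it helps with diagnosis: the warning reports that only 32768 MiB of the node's 65511 MiB can be registered, which on Mellanox hardware is typically governed by the mlx4_core module parameters. A minimal sketch of the arithmetic (the parameter names and example values assume an mlx4 HCA with 4 KiB pages; they are not taken from my node):

```shell
# Sketch: estimate OpenFabrics registerable memory from mlx4_core parameters.
# On a live node the values would come from
#   /sys/module/mlx4_core/parameters/log_num_mtt
#   /sys/module/mlx4_core/parameters/log_mtts_per_seg
page_size=4096
log_num_mtt=23          # example value (assumption, not read from my node)
log_mtts_per_seg=0      # example value (assumption, not read from my node)

# Registerable memory = PAGE_SIZE * 2^log_num_mtt * 2^log_mtts_per_seg
reg_mib=$(( page_size * (1 << log_num_mtt) * (1 << log_mtts_per_seg) / 1024 / 1024 ))
echo "Registerable memory: ${reg_mib} MiB"
```

With these example values the formula yields exactly the 32768 MiB reported above; raising log_num_mtt by one (e.g. via an `options mlx4_core log_num_mtt=24` line in a modprobe configuration file, followed by a driver reload) would double it to cover the full 64 GiB of the node.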

I am new to simulations and am unable to interpret this error
message, but I thought it could be relevant.

Thank you so much for your time.

*Attachment: /proc/cpuinfo*

This archive was generated by hypermail 2.1.6 : Wed Dec 31 2014 - 23:21:37 CST