From: Bart Bruininks (b.m.h.bruininks_at_rug.nl)
Date: Thu Dec 05 2019 - 11:00:50 CST

Hello John,

Cool, I didn't know that, but it makes a lot of sense. I am using XTC, which
is unindexed and compressed, so probably the worst case. Is writing out a
DCD file after loading the XTC once in VMD an advisable way to convert
from XTC to DCD?
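Something along these lines is what I had in mind, run in VMD's Tcl console
or via `vmd -dispdev text -e convert.tcl` (file names here are just
placeholders for my system):

```tcl
# Placeholder file names; substitute your own structure/trajectory.
# Load the structure, then append the XTC trajectory, blocking
# until every frame has been read into memory.
mol new system.gro
mol addfile traj.xtc waitfor all

# Write every frame back out as DCD (beg 0 through end -1, i.e. the
# last frame, keeping every frame with skip 1).
animate write dcd traj.dcd beg 0 end -1 skip 1 waitfor all
```

Would that preserve everything, or is there a better route (e.g. going
straight to the js format you mention)?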

Cheers,

Bart

On Thu, Dec 5, 2019 at 17:08 John Stone <johns_at_ks.uiuc.edu> wrote:

> Hi Bart,
> It is no problem for VMD to achieve 12GB/sec read rates, even with a
> single-threaded reader when reading a well-designed trajectory file format.
>
> What trajectory file format are you reading from? From your statements
> below, I'm expecting that it is not DCD or the JS format.
>
> To achieve the highest throughput, you want to use a trajectory format
> that allows Linux kernel bypass for I/O, with VM page-aligned reads.
> This gets rid of extra kernel copy operations on every read and makes it
> easy for even a single thread to achieve PCIe-limited I/O rates. The
> "js" structure/trajectory format in VMD is designed to achieve peak I/O
> efficiency as described.
>
> Tell us more about the file format(s) you have tried with so far and we
> can suggest any alternatives that might help.
>
> Also, since analysis and visualization tasks are often performed
> repeatedly, it may be best to convert your existing trajectory(ies)
> into a more efficient format prior to doing your analysis runs. If you
> were only going to read them once, then this would not make sense, but
> if you expect to work on the same data multiple times it will easily
> pay for the conversion time many times over.
>
> Best,
> John
>
> On Thu, Dec 05, 2019 at 12:54:10PM +0100, Bart Bruininks wrote:
> > Dear VMDers,
> > We have been working on a powerful analysis server and it is finally
> > up and running. We have trajectories of 500 GB which we would like to
> > load all at once (we have 1.5 TB of memory, so we should have a
> > chance). However, the loading of trajectories appears to use only one
> > thread (2.4 GHz) of the 64 available. This results in no improved I/O
> > for VMD even though we are using RAID 0 NVMe drives. This is a huge
> > disappointment and something we should have looked into before, but
> > it never crossed our minds that reading was single-threaded.
> > We are wondering if you can verify that this is indeed the reason we
> > see no difference between simple spinning disks and the NVMe RAID
> > with respect to opening large files in VMD. Of course, we are also
> > wondering if there is a workaround by changing some settings in VMD
> > or during compilation.
> > Thanks in advance, and if this question has been asked before, I am
> > sorry; I tried to read as much VMD news as possible, but I might have
> > missed it.
> > Cheers,
> > Bart
>
> --
> NIH Center for Macromolecular Modeling and Bioinformatics
> Beckman Institute for Advanced Science and Technology
> University of Illinois, 405 N. Mathews Ave, Urbana, IL 61801
> http://www.ks.uiuc.edu/~johns/ Phone: 217-244-3349
> http://www.ks.uiuc.edu/Research/vmd/
>