From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Wed Nov 09 2011 - 15:44:30 CST

On Wed, 2011-11-09 at 15:41 -0500, Aleksandr Kivenson wrote:
> I run simulations that generate trajectory files which may be tens of
> gigabytes in size. I use the VMD Python interface to analyze them,
> and currently I iterate over them using this code:
>
> molecule.read(molid, 'trr', trrName, beg=firstToLoad, end=lastToLoad,
> waitfor=(lastToLoad - firstToLoad + 1))
>
> The variables firstToLoad and lastToLoad are set by a loop so that the
> entire trajectory is divided into chunks, each of which fits in
> memory. The problem with this approach is that VMD apparently reads
> .trr files from the very beginning every time this command is called,
> so reading the tenth chunk of a file takes ten times as long as
> reading the first chunk (the nine preceding chunks must be scanned
> again). Overall, the time to read a file scales with the square of
> the number of chunks, so files composed of many chunks take a very
> long time to analyze.
>
> Does anyone have a way to read through a very large .trr file in an
> amount of time that scales linearly with the file size?

Check out this one:

http://www.ks.uiuc.edu/Research/vmd/script_library/scripts/bigdcd/

It is in Tcl, though...
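
In short, bigdcd streams the trajectory in the background and calls a
user-supplied analysis proc on every frame as it arrives, deleting
each frame after it has been processed, so the file is traversed only
once and memory use stays bounded. A minimal usage sketch follows; the
file names and the myanalysis proc are placeholders, and the file-type
argument and the bigdcd_wait command assume an updated version of the
script (the original version handles DCD files only):

  # make the bigdcd script available (path is illustrative)
  source bigdcd.tcl

  # called once per frame as it streams in; "frame" is a running
  # frame counter, and only the current frame is kept in memory
  proc myanalysis { frame } {
      set sel [atomselect top all]
      puts "$frame: [measure center $sel]"
      $sel delete
  }

  mol new mysystem.psf                ;# load the structure first
  bigdcd myanalysis trr mytraj.trr    ;# stream the trajectory frames
  bigdcd_wait                         ;# block until all frames are read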

axel.

>
> Thanks,
>
> Alex

-- 
Dr. Axel Kohlmeyer
akohlmey_at_gmail.com http://goo.gl/1wk0
College of Science and Technology,
Institute for Computational Molecular Science,
Temple University, Philadelphia PA, USA.