Studying Folding of 870 Small Peptides. Computationally Feasible?

From: Eric A Brenner (ericbrenner_at_utexas.edu)
Date: Sat Dec 17 2016 - 12:35:59 CST

Hi,

I have 870 small peptides (10-20 aa each) for which I'm trying to get
predicted structures. The reason I'm using NAMD rather than something like
Rosetta is that the short length of these peptides and the fact that they
have minimal homology to any natural peptides (their sequences were randomly
generated and then run through a screen) cause problems for the structure
prediction programs I've tried.

My plan is to run PSIPRED to get a predicted secondary structure for each
peptide, build each peptide in that secondary structure, and then simulate
them in NAMD to see whether the secondary structures come apart and/or
supersecondary structures form. I'm going to do the initial minimization in
explicit solvent, but since explicit solvent calculations are slower (is
that true? I've also heard the opposite), I'm planning to switch to GBIS
implicit solvent for the production runs.

I've read that supersecondary structures can take up to 6 microseconds to
form. Is running 870 peptides for 6 us each (over 5 ms of aggregate
sampling) feasible? Based on some preliminary runs, it looks like it will
require a lot of computational power and a lot of time, although those
tests were on CPU cores, not GPUs. I'm using the TACC Lonestar5
supercomputer, by the way
(https://portal.tacc.utexas.edu/user-guides/lonestar5). Anyway, do my
ambitions seem reasonable, or should I rethink some of the technical
aspects (e.g. running for far less than 6 us instead)?
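
In case it's useful, below is roughly the GBIS production section I had in
mind. The file names are placeholders for one of the peptides, and the
parameter values are just taken from the user guide's implicit-solvent
examples, so they probably need adjusting; the explicit-solvent minimization
would be a separate config run beforehand:

structure          peptide_001.psf         ;# placeholder file names
coordinates        peptide_001_min.pdb     ;# output of the explicit-solvent minimization
paraTypeCharmm     on
parameters         par_all36m_prot.prm     ;# whichever protein force field I settle on
temperature        300

# GBIS implicit solvent -- no water box, no PME
GBIS               on
solventDielectric  78.5
ionConcentration   0.15     ;# assumed roughly physiological salt
alphaCutoff        14
sasa               on       ;# SASA-based nonpolar term
surfaceTension     0.005

# nonbonded settings (larger cutoffs, as suggested for implicit solvent)
cutoff             16.0
switching          on
switchdist         15.0
pairlistdist       18.0
exclude            scaled1-4
1-4scaling         1.0

# integrator / thermostat
timestep           2.0
rigidBonds         all
langevin           on
langevinTemp       300
langevinDamping    1.0

outputName         peptide_001_gbis
dcdfreq            5000
outputEnergies     5000
restartfreq        50000

run                500000000  ;# 1 us per segment at 2 fs/step; restart to extend toward 6 us

Each of the 870 peptides would get its own copy of this with the file names
swapped in.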

Thanks! :)

-Eric
