From: Paweł Kędzierski (pawel.kedzierski_at_pwr.wroc.pl)
Date: Thu May 31 2012 - 03:07:51 CDT

An effective solution for large files is the standard Unix streaming
tools, which do their job "on the fly" and use only minimal RAM buffers.
If all you need is to filter out some information, grep, egrep and
awk are your friends. The last is quite powerful: building a
spreadsheet (hint: CSV format) from data selected out of a text file
in a single command is nowhere near the limit of its capabilities.
On-the-fly edits between input and output can be done with sed.
These tools are available on Windows, too - just install the free
Cygwin environment (www.cygwin.com).
But the learning curve is steeper than with WYSIWYG GUI tools.
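As a minimal sketch of the idea, here is how those tools chain together on a hypothetical whitespace-separated trajectory file (the file name and column layout below are made-up placeholders, not your actual format):

```shell
# Create a small stand-in for a trajectory-like text file
printf 'atom1 1.0 2.0 3.0\natom2 4.0 5.0 6.0\n' > sample.txt

# grep: filter out only the lines you care about, never loading the
# whole file into memory
grep 'atom1' sample.txt

# awk: select columns 1, 2 and 4 and emit CSV in a single pass
awk '{ printf "%s,%s,%s\n", $1, $2, $4 }' sample.txt > sample.csv

# sed: edit on the fly between input and output (here, rename labels)
sed 's/atom/site/' sample.txt > renamed.txt
```

All three read the input line by line, so a 5 GB file costs no more RAM than a 5 KB one, and they can be chained with pipes to avoid intermediate files entirely.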

A powerful solution for on-the-fly analysis of HUGE amounts of data is
a package called ROOT, developed at CERN because they generate data
beyond their storage capacity.
Have a look here: root.cern.ch
HTH,
Pawel

On Wed, May 30, 2012 at 2:44 AM, oguz gurbulak <gurbulakoguz_at_yahoo.com>
wrote:
>>> Dear All,
>>>
>>> I want to ask some questions about trajectory analysis. I have some md
>>> simulation output files that include coordinate, force and velocity
>>> information. And these files are huge (more than 5 GB). Could you
>>> please recommend a free text editor for Linux or Windows that can
>>> open and edit such huge files? I will then process these files with
>>> fortran codes and again get huge output files. What should I do to
>>> make this operation faster and seamless? Which facilities do I have
>>> on a PC? Could you please share your experiences with me?
>>>
>>> Thanks for any help.
>>>
>>> Kind regards.
>