Extracting a subset of lines from a large file
Gregory Lypny
gregory.lypny at videotron.ca
Wed Jul 2 14:00:00 EDT 2003
Thanks, Xavier,
That's what I suspected after doing some primitive experiments. I'll
try your buffer idea.
Gregory
I wrote:
> Date: Tue, 01 Jul 2003 16:43:44 -0400
> From: Gregory Lypny <gregory.lypny at videotron.ca>
> Subject: Extracting a subset of lines from a large file
> To: metacard at lists.runrev.com
> Reply-To: metacard at lists.runrev.com
>
> Hello everyone,
>
> I want to take a chunk of 1000 lines at a time out of a text file and
> display it in a field. Some of the text files are big, 200 MB and up.
>
> Is one of the following approaches preferred?
>
> [1] put line 1 to 1000 of url ("file:" & filePath) into field "X"
>
> [2] read from file filePath from line 1 for 1000 lines
> put it into field "X"
>
>
> Regards,
>
> Gregory
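
For reference, the read-based approach [2] needs the file opened with
"open file" first, and reading starts at the beginning of the file by
default, so the "from line 1" clause isn't needed. A minimal, untested
sketch, with tFilePath standing in for the chosen path and assuming the
engine accepts a "lines" chunk in the for clause:

   on showFirstChunk tFilePath
     open file tFilePath for read
     read from file tFilePath for 1000 lines -- the data read lands in "it"
     close file tFilePath
     put it into field "X"
   end showFirstChunk
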
On Wednesday, July 2, 2003, at 12:03 PM, Xavier replied:
> From: "MisterX" <x at monsieurx.com>
> To: <metacard at lists.runrev.com>
> Subject: RE: Extracting a subset of lines from a large file
> Date: Wed, 2 Jul 2003 06:57:42 +0200
> Reply-To: metacard at lists.runrev.com
>
> Gregory,
>
> Using the url form forces the file to be read completely. For those
> cases (but not for files over about 30 MB), it's better to store the
> data in a variable and process it in a loop afterwards.
>
> In your case, read the file with the read from file command. Try to
> find an efficient buffer size for your reads (32-64K may be best, but
> experiment!).
>
> Storing the data temporarily in a field will definitely slow things
> down; save the field update for last. Memory is the fastest storage
> you've got!
>
> cheers
> Xavier
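
A rough sketch of the buffer idea as I understand it (untested; tFilePath
and the 64K chunk size are just placeholders, and the field is only
touched once at the end):

   on showFirstThousandLines tFilePath
     local tBuffer
     open file tFilePath for read
     repeat until the number of lines of tBuffer > 1000
       read from file tFilePath for 65536 -- read the next 64K characters
       if it is empty then exit repeat    -- reached end of file
       put it after tBuffer               -- accumulate in memory, not in a field
     end repeat
     close file tFilePath
     put line 1 to 1000 of tBuffer into field "X"
   end showFirstThousandLines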