> standalone. Some of my stacks are "image intensive" and this 200 -
> 1000 millisecond speed difference that Wilhelm discovered would matter.
I'm sure that the cumulative effects of a slower routine will add up, but
the CPU has a greater effect by orders of magnitude. For example, my now aging
G4 466MHz/320MB seems crashingly slow compared to my XP Athlon 3400 4GHz/512MB.
Yes, obviously we optimize a routine's performance, but a couple of ticks here
or there is hardly going to register on the speed scale when using a go-faster
computer. With RAM and cycles improving and prices dropping, the trend seems to
be to write around things rather than optimize them. Otherwise why does MSWord
now need a CD install when a BBC 'B' used a floppy for a word processor? There
seems little point in saving milliseconds when the end-user should simply
get a bigger, better and cheaper machine.
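To put some very rough numbers on that trade-off (these figures are guesses,
not measurements of anyone's actual routine, and it's Python rather than
Transcript just for the arithmetic):

    # Hypothetical back-of-the-envelope only: per-call penalty and image
    # count are assumptions, not measured values.
    per_call_penalty_s = 0.5    # midpoint of the reported 200-1000 ms range
    images_per_session = 200    # guess at an "image intensive" stack

    extra_time = per_call_penalty_s * images_per_session
    print(f"Slower routine costs about {extra_time:.0f} s per session")

    cpu_speedup = 8             # rough guess: new box vs. an aging G4 466
    print(f"A {cpu_speedup}x faster machine cuts that to roughly "
          f"{extra_time / cpu_speedup:.0f} s")

So yes, the penalty is real if you call the routine hundreds of times, but a
faster machine eats most of it without anyone touching the code.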

Flame war anyone?

:-))

/H