This is the mail archive of the xsl-list@mulberrytech.com mailing list.



Improving Performance of XSLT on large files


Dear All,

I have recently started working with XSLT and cannot work out why it needs
to be so slow with a large XML file (i.e. 70MB+).  I will try processing
without sorting etc. later and will run a lot of tests to see whether that
helps, but it just seems massively too slow!  Possibly I don't understand
because my XML files are like an infinitely complicated DNA structure and DO
always contain repeating substructures at many different levels.  So, would
ensuring the following actually help me in any way?

* That all elements, attributes, etc. are compulsory in long record sets,
so that they are totally repeating units.

* That relevant spacer characters are added to any variable-length
containers in the XML, so that the records repeat at mathematically
recognisable character positions throughout the file (see the sketch after
this list).
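
For the second point, here is a minimal sketch of the offset arithmetic
such a layout would allow, assuming a hypothetical file where a fixed-length
header precedes the records and every record is padded with spacer
characters to the same byte length (the names and sizes here are
illustrative assumptions, not anything XSLT or XML defines):

    // Offset arithmetic for a hypothetical fixed-width layout:
    // HEADER_LEN bytes of prolog, then records padded with spacer
    // characters to exactly RECORD_LEN bytes each.
    public final class RecordOffsets {
        static final long HEADER_LEN = 64;   // assumed prolog size in bytes
        static final long RECORD_LEN = 512;  // assumed padded record size

        // Byte offset at which record i begins in the file.
        static long offsetOf(long i) {
            return HEADER_LEN + i * RECORD_LEN;
        }

        public static void main(String[] args) {
            // e.g. record 1000 starts at byte 64 + 1000 * 512 = 512064
            System.out.println(offsetOf(1000));
        }
    }

With that invariant in place, locating any record is a constant-time seek
rather than a parse of everything before it.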

Then possibly I could optimise this process for a DNA-style validation!  I
could apply a mathematical function for the pointer so that it knows exactly
where to read element data from the document, without even having to look at
any irrelevant bits, and then feed only the relevant bits of XML into the
XSLT processor as a secondary process (see the sketch below).  Where do I,
and where don't I, get a performance advantage from doing something like
this?  And does anything do this already?
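
To make the "secondary process" idea concrete, here is a minimal sketch in
Java, assuming the same hypothetical fixed-width layout as above and
assuming the spacer characters are single-byte whitespace; the file name
big.xml and stylesheet transform.xsl are placeholders.  It seeks straight to
one record, wraps the fragment so it is well-formed, and hands only that
fragment to a standard JAXP XSLT processor:

    import java.io.ByteArrayInputStream;
    import java.io.RandomAccessFile;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    // Seek to one fixed-width record and transform only that fragment.
    public final class SeekAndTransform {
        public static void main(String[] args) throws Exception {
            long headerLen = 64, recordLen = 512;  // assumed layout, as above
            long i = Long.parseLong(args[0]);      // which record to extract

            byte[] buf = new byte[(int) recordLen];
            try (RandomAccessFile raf = new RandomAccessFile("big.xml", "r")) {
                raf.seek(headerLen + i * recordLen);  // skip irrelevant bytes
                raf.readFully(buf);
            }
            // trim() strips the whitespace padding; wrapping in a root
            // element makes the raw fragment a well-formed document.
            String doc = "<records>" + new String(buf, "UTF-8").trim()
                       + "</records>";

            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource("transform.xsl"));
            t.transform(
                new StreamSource(new ByteArrayInputStream(doc.getBytes("UTF-8"))),
                new StreamResult(System.out));
        }
    }

The obvious caveat is that any variable-length content breaks the
arithmetic, which is exactly why the padding invariant would have to hold
across the whole file.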

Any comments on this subject would be very much appreciated!

Kind Regards

Gary Cornelius







 XSL-List info and archive:  http://www.mulberrytech.com/xsl/xsl-list

