Age | Commit message | Author |
|
When parsing XPath variables, we need to perform a heap allocation; if it
failed, an xpath_exception used to be thrown instead of bad_alloc.
Now we throw an exception of the correct type, so that xpath_exception
consistently means 'parsing error'.
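
A minimal sketch of what this means for callers (the query, the variable name
and the error handling are illustrative):

    #include <new>
    #include "pugixml.hpp"

    bool compile_query()
    {
        try
        {
            pugi::xpath_variable_set vars;
            vars.add("value", pugi::xpath_type_string);

            pugi::xpath_query query("//item[@id = $value]", &vars);
            return true;
        }
        catch (const pugi::xpath_exception&)
        {
            return false; // parsing error (bad syntax, unknown function, ...)
        }
        catch (const std::bad_alloc&)
        {
            return false; // out of memory during query compilation
        }
    }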
|
|
|
|
|
|
Previously we omitted extra whitespace for single PCDATA/CDATA children, but in
mixed content there was extra indentation before/after text nodes.
One of the problems with that is that the saved text is not exactly the same
as what you get when parsing it back with default flags (parse_trim_pcdata helps).
Another problem is that parse-format cycles do not have a fixed point for mixed
content - the result expands indefinitely. Some XML libraries, like Python
minidom, have the same issue, but this is definitely a problem.
Pretty-printing mixed content is hard. It seems that the only other sensible
choice is to switch mixed content nodes to raw formatting. In a way the code in
this change is a weaker version of that - it removes indentation around text
nodes but still keeps it around element siblings/children.
Thus we can switch to mixed-raw formatting at some point later, which will be
a superset of the current behavior.
To do this we have to either switch at the first text node (.NET XmlDocument
does that), or scan the children of each element for a possible text node and
switch before we output the first child.
The former behavior seems non-intuitive (and a bit broken); unfortunately, the
latter behavior can cost up to 20% of the output time for trees *without* mixed
content.
Fixes #13.
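
A small sketch of the affected scenario, using current pugixml API names (the
exact output is not reproduced here):

    #include <iostream>
    #include "pugixml.hpp"

    int main()
    {
        pugi::xml_document doc;
        // Mixed content: text nodes interleaved with element children
        doc.load_string("<p>Hello <b>brave</b> new <i>world</i>!</p>");

        // With indented output, indentation is no longer added around the
        // text nodes, so repeated parse/format cycles stop expanding the
        // document.
        doc.save(std::cout, "\t", pugi::format_indent);
    }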
|
|
|
|
This prevents a malformed PI value from breaking the document structure.
|
|
Since all string allocations are pointer-aligned to avoid aligning more
frequent node allocations, we can rely on that in string encoding.
Encoding page offset and block size in sizeof(void*) units increases the
maximum memory page size from 64k to 256k on 32-bit and 512k on 64-bit
platforms.
Fixes #35.
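
A rough sketch of the encoding idea; the function names and the 16-bit field
width are assumptions for illustration, not pugixml's actual header layout:

    #include <cassert>
    #include <cstddef>

    // Offsets are always a multiple of sizeof(void*), so storing them divided
    // by sizeof(void*) lets a 16-bit field cover a larger byte range.
    unsigned short encode_offset(std::size_t offset_bytes)
    {
        assert(offset_bytes % sizeof(void*) == 0);

        std::size_t units = offset_bytes / sizeof(void*);
        assert(units < (1u << 16));

        return (unsigned short)units;
    }

    std::size_t decode_offset(unsigned short units)
    {
        return (std::size_t)units * sizeof(void*);
    }

    // 2^16 units cover 64k * 4 = 256k bytes with 4-byte pointers
    // and 64k * 8 = 512k bytes with 8-byte pointers.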
|
|
Also change the error code to status_io_error
|
|
|
|
The implementations generated a string with an internal null terminator; this
went unnoticed because the unit test string verification did not perform the
string equality check properly (it compared the XPath string result as a
C-string, thus stopping at the first null terminator).
Fixes #36.
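
A standalone illustration of why a C-string comparison hides the bug (this is
not the actual test code):

    #include <cstring>
    #include <string>

    int main()
    {
        // A buggy result: "ab" followed by an internal null terminator
        const char buggy[] = { 'a', 'b', '\0', 'c' };

        // C-string comparison stops at the first null and reports equality
        bool c_string_equal = std::strcmp(buggy, "ab") == 0;             // true

        // A length-aware comparison sees all four bytes and fails
        bool proper_equal = std::string(buggy, 4) == std::string("ab");  // false

        return c_string_equal && !proper_equal ? 0 : 1;
    }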
|
|
This prevents malformed input XML with very deeply recursive DOCTYPE sections
from crashing the parser.
Fixes #29.
|
|
|
|
|
|
|
|
Make float/double round-trip
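
One common way to achieve this, shown as a sketch (not necessarily the exact
formatting code used here): print enough significant digits for the value to
survive a print/parse cycle.

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        double value = 0.1;

        // 17 significant decimal digits round-trip any finite IEEE-754
        // double (9 digits are enough for float)
        char buf[32];
        std::sprintf(buf, "%.17g", value);

        double parsed = std::strtod(buf, 0);
        return parsed == value ? 0 : 1; // exit code 0: exact round-trip
    }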
|
|
Unfortunately, standard headers on MinGW32 insist on undefining the off64_t
and _wfopen extensions if __STRICT_ANSI__ is defined (e.g. in C++11 mode). This
leads to compilation errors since b7a1fec started to use _wfopen in strict
mode. That change erroneously checked the GCC version - however, the version
itself is irrelevant; the actual criterion is whether the mingw64 runtime is
used.
off64_t is not useful on MinGW32 since we only need it to open large files
on 64-bit platforms; unfortunately, the lack of _wfopen means we won't be
able to support wide-char paths on Windows for MinGW32.
Fixes #24.
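
A sketch of the distinction; __MINGW64_VERSION_MAJOR is the macro defined by
the mingw-w64 runtime headers, but whether pugixml uses exactly this check is
an assumption here:

    #include <stdio.h> /* pulls in the MinGW runtime configuration macros */

    #if defined(__MINGW32__) && defined(__MINGW64_VERSION_MAJOR)
    /* mingw-w64 runtime: _wfopen/off64_t remain available even with
       __STRICT_ANSI__, regardless of the GCC version */
    #elif defined(__MINGW32__) && defined(__STRICT_ANSI__)
    /* classic MinGW32 in strict mode: fall back to fopen and long offsets */
    #endif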
|
|
Since MinGW 4.5 does not define these functions if __STRICT_ANSI__ is defined
(in the case of _wfopen the declaration is inconsistent between stdio.h and
wchar.h), use the baseline functions for MinGW 4.5 and earlier.
Fixes #23.
|
|
node_copy_string relied on the fact that target node had an empty name and
value. Normally this is a safe assumption (and a good one to make since it
makes copying faster), however it was not checked and there was one case when
it did not hold.
Since we're reusing the logic for inserting nodes, newly inserted declaration
nodes had the name set automatically to xml, which in our case violates the
assumption and is counter-productive since we'll override the name right after
setting it.
For now the best solution is to do the same insertion manually - this results
in some code duplication that we can refactor later (the same logic is
partially shared by the _move variants anyway, so to some extent the
duplication is not that bad).
|
|
Add allow_insert_attribute (similar to allow_insert_child).
|
|
Remove redundant this-> from type() call (argument used to be called type,
but it's now type_).
Use _root member directly when possible instead of calling internal_object.
|
|
This will allow us to implement nodeset_eval_last evaluation mode if necessary
without relying on a fragile boolean argument.
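
A sketch of the design direction with illustrative names (not the actual
internal declarations):

    // An explicit mode enum is self-documenting at call sites and can grow
    // new modes later, unlike a bare bool argument.
    enum nodeset_eval_mode
    {
        nodeset_eval_all,   // evaluate and return the whole node set
        nodeset_eval_any,   // any single matching node is enough
        nodeset_eval_first  // only the first node in document order matters
        // nodeset_eval_last could be added later without touching call sites
    };

    // call sites read: eval_step(context, stack, nodeset_eval_first)
    // instead of the fragile: eval_step(context, stack, /* first_only */ true)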
|
|
Extract end of string to rend and add comments to translate_table.
|
|
Right now remove_node is only used in contexts where the parent is reset after
removal, but this might be important in the future.
|
|
Since depth is unsigned this is actually well-defined, but it's better not to
have the underflow anyway.
|
|
|
|
This is more for consistency with the surrounding code than for performance.
|
|
|
|
This should completely eliminate the confusion between load and load_file.
Of course, for compatibility reasons we have to preserve the old variant -
it will be deprecated in a future version and subsequently removed.
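
Assuming the new string-parsing overload is named load_string (as in current
pugixml), call sites now read unambiguously:

    #include "pugixml.hpp"

    int main()
    {
        pugi::xml_document doc;

        // Parses an in-memory, null-terminated string:
        pugi::xml_parse_result from_string = doc.load_string("<node/>");

        // Parses a file on disk:
        pugi::xml_parse_result from_file = doc.load_file("data.xml");

        return from_string && from_file ? 0 : 1;
    }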
|
|
Previously the push_back implementation was too big to inline; now the common
case (no realloc) is small, and the realloc variant is explicitly marked as
no-inline.
This is similar to xml_allocator::allocate_memory/allocate_memory_oob and
makes some XPath queries 5% faster.
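
An illustrative sketch of the pattern (not pugixml's actual container code):
keep the no-realloc path tiny and inlinable, and move growth into an
explicitly non-inlined function.

    #include <cstddef>

    #if defined(_MSC_VER)
    #   define NOINLINE __declspec(noinline)
    #else
    #   define NOINLINE __attribute__((noinline))
    #endif

    struct int_buffer
    {
        int* data;
        std::size_t size, capacity;

        void push_back(int value)
        {
            if (size == capacity)
                grow(); // rare path, kept out of line

            data[size++] = value; // common path stays small enough to inline
        }

        NOINLINE void grow()
        {
            std::size_t new_capacity = capacity ? capacity * 2 : 16;
            int* new_data = new int[new_capacity];

            for (std::size_t i = 0; i < size; ++i)
                new_data[i] = data[i];

            delete[] data;
            data = new_data;
            capacity = new_capacity;
        }
    };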
|
|
In some cases the constant overhead of step evaluation is important - e.g. for
queries that evaluate a simple step in a predicate expression. Eliminating
a redundant function call can thus prove worthwhile.
This change makes some queries (e.g. //*[not(*)]) 4% faster.
|
|
Previously setting a large page size (e.g. 1M) would cause dynamic string
allocation to assert spuriously. A page size of 64K guarantees that all
offsets fit into 16 bits.
|
|
Computed offsets were invalid for documents with nodes that were added using
append_buffer, and for newly appended nodes without name/value information.
|
|
This reduces the number of unsafe pointer manipulations.
|
|
These used to result in better codegen for unknown reasons, but this is no
longer the case.
|
|
|
|
Split number/boolean filtering logic into two functions. This creates an
extra copy of a remove_if-like algorithm, but moves the type check out of
the loop and results in better organized filtering code.
Consolidate test-based dispatch into apply_predicate (which is now a member
function).
|
|
Calling memcpy(x, 0, 0) is technically undefined (although it should usually
be a no-op).
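
The general shape of the fix, as a sketch:

    #include <cstring>

    // memcpy with a null pointer is undefined behavior even when the size is
    // zero, so skip the call entirely in the empty case.
    void copy_bytes(void* dst, const void* src, std::size_t size)
    {
        if (size) std::memcpy(dst, src, size);
    }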
|
|
Calling memcpy(x, 0, 0) is technically undefined (although it should usually
be a no-op).
Fixes #20.
|
|
Added some tests that force an invalid buffer and size = 0.
|
|
|
|
This lets us do fewer null pointer checks (making printing 2% faster with -O3)
and removes a lot of function calls (making printing 20% faster with -O0).
|
|
To get more benefits from constant predicate/filter optimization we rewrite
[position()=expr] predicates into [expr] for numeric expressions. Right now
the rewrite is only for entire expressions - it may be beneficial to split
complex expressions like [position()=constant and expr] into [constant][expr]
but that is more complicated.
last() does not depend on the node set contents, so it is "constant" as far as
our optimization is concerned and we can evaluate it once.
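
A usage-level sketch of an affected query (the document and query strings are
illustrative):

    #include "pugixml.hpp"

    int main()
    {
        pugi::xml_document doc;
        doc.load_string("<items><item/><item/><item/></items>");

        // Both queries select the second item; after this change the first
        // form is rewritten internally into the second, cheaper one.
        pugi::xpath_node a = doc.select_single_node("/items/item[position() = 2]");
        pugi::xpath_node b = doc.select_single_node("/items/item[2]");

        return a.node() == b.node() ? 0 : 1;
    }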
|
|
Numeric and boolean constant expressions in filters are different in that
to evaluate numeric expressions we need a sorted order, but to evaluate
boolean expressions we don't. The previously implemented optimization adds
an extra sorting step for constant boolean filters that would be more expensive
than the redundant computations it avoids.
Since constant booleans are sort of an edge case, don't do this optimization.
This allows us to simplify apply_predicate_const to only handle numbers.
|
|
Now the expression is always _right for filter/predicate nodes, which makes
optimize() simpler. Additionally, we now use predicate metadata to make
is_posinv_step() faster.
This introduces a weak ordering dependency between the rewrite rules in
optimize() - classification has to be performed before the other optimizations.
|
|
If a filter/predicate expression is a constant, we don't need to evaluate it
for every nodeset element - we can evaluate it once and pick the right element
or keep/discard the entire collection.
If the expression is 1, we can exit early on the first node when evaluating
the node set - queries like following::item[1] are now significantly faster.
Additionally, this change refactors filters/predicates to carry metadata
describing the expression type in the _test field, which is filled in during
optimization.
Note that predicate_constant selection is currently very simple (but it
captures the most common use cases, except perhaps [last()]).
|
|
A page can fail to allocate during attribute creation; this case was not
previously handled.
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1080 99668b35-9821-0410-8761-19e4c4f06640
|
|
When removing a node or attribute, we know that the parent has at least one
node/attribute so a null pointer check is redundant.
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1078 99668b35-9821-0410-8761-19e4c4f06640
|
|
If the requested evaluation mode is not _all, we can use this mode for the
last predicate/filter expression and exit early.
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1073 99668b35-9821-0410-8761-19e4c4f06640
|
|
Using pointers instead of node/attribute objects allows us to use knowledge
about the tree to guarantee that pointers are not null. This results in
fewer null checks (10-20% speedup with optimizations enabled) and fewer
function calls (5x speedup with optimizations disabled).
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1072 99668b35-9821-0410-8761-19e4c4f06640
|
|
Some steps relied on step_push rejecting null inputs; this is no longer
the case. Additionally stepping now more rigorously filters null inputs.
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1069 99668b35-9821-0410-8761-19e4c4f06640
|