Age | Commit message | Author |
|
|
|
This makes sure we don't forget to implement offset_debug for new
node types if they are ever added (really it's mostly for consistency).
|
|
Instead of a complicated partitioning scheme that tries to maintain the
equal area in the middle, use a scheme where we keep the equal area in
the left part of the array and then move it to the middle.
Since sorted arrays generally don't contain many duplicates, this extra
copy is not too expensive, and it significantly simplifies the logic
while still maintaining good complexity for sorting arrays with many
equal elements (unlike Hoare partitioning).
Instead of a median of 9, just use a median of 3 - it performs pretty
much identically on some internal performance tests, despite doing a
few more comparisons in some cases.
Finally, change the insertion sort threshold to 16 elements since that
appears to have slightly better performance.
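To illustrate the partitioning scheme described above, here is a minimal
sketch of a three-way partition that keeps elements equal to the pivot at
the front of the array and swaps them into the middle afterwards; the names
(partition3, pred, out_eqbeg/out_eqend) are illustrative, not the actual
pugixml code:

    #include <algorithm>
    #include <cstddef>

    // Partition [begin, end) into [ less | equal | greater ] relative to pivot.
    // Elements equal to the pivot are accumulated at the front during the scan
    // and swapped into the middle at the end.
    template <typename T, typename Pred>
    void partition3(T* begin, T* end, T pivot, Pred pred, T** out_eqbeg, T** out_eqend)
    {
        T* eq_end = begin; // [begin, eq_end) == pivot
        T* lt_end = begin; // [eq_end, lt_end) < pivot; [lt_end, it) > pivot

        for (T* it = begin; it != end; ++it)
        {
            if (pred(*it, pivot))
            {
                // less than pivot: append to the less-than run
                std::swap(*it, *lt_end);
                ++lt_end;
            }
            else if (!pred(pivot, *it))
            {
                // equal to pivot: grow the equal run at the front and shift
                // the less-than run right by one element
                std::swap(*it, *lt_end);
                std::swap(*lt_end, *eq_end);
                ++eq_end;
                ++lt_end;
            }
        }

        // Move the equal run from the front into the middle; sorted inputs
        // rarely contain many duplicates, so these extra moves are cheap.
        std::size_t eq_size = std::size_t(eq_end - begin);
        std::size_t lt_size = std::size_t(lt_end - eq_end);
        std::size_t n = eq_size < lt_size ? eq_size : lt_size;

        for (std::size_t i = 0; i < n; ++i)
            std::swap(begin[i], *(lt_end - n + i));

        *out_eqbeg = lt_end - eq_size;
        *out_eqend = lt_end;
    }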
|
|
The previous implementation opted for doing two comparisons per element
in the sorted case in order to remove one iterator bounds check per
moved element when we actually need to copy. In our case, however, the
comparator is pretty expensive (except for remove_duplicates, which is
fast as it is), so an extra object comparison hurts much more than an
iterator comparison saves.
This makes sorting by document order up to 3% faster for random
sequences.
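A minimal sketch of the resulting trade-off (illustrative names, not the
actual pugixml code): one comparator call plus one cheap iterator bounds
check per shifted element, instead of an extra object comparison up front:

    #include <iterator>

    template <typename It, typename Pred>
    void insertion_sort(It begin, It end, Pred pred)
    {
        if (begin == end) return;

        for (It it = begin + 1; it != end; ++it)
        {
            typename std::iterator_traits<It>::value_type val = *it;
            It hole = it;

            // The iterator bounds check (hole != begin) is cheap compared to
            // an extra call into an expensive comparator such as document-order
            // comparison, so it stays inside the loop.
            while (hole != begin && pred(val, *(hole - 1)))
            {
                *hole = *(hole - 1);
                --hole;
            }

            *hole = val;
        }
    }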
|
|
Instead of delegating to a method that just forwards the call to
xpath_query, call the relevant method directly.
|
|
It adds one stack frame to string query evaluation and does not really
simplify the code.
|
|
XPath: Remove exceptional control flow
|
|
Cover the empty node case - no XPath query can result in it, but it's
possible to create a node set with empty nodes manually.
|
|
Instead of having two checks for out-of-memory when exceptions are
enabled, do just one and decide what to do based on whether we can
throw.
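A minimal sketch of the pattern, assuming a PUGIXML_NO_EXCEPTIONS-style
configuration macro; the function name and the use of malloc here are
illustrative:

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    void* allocate_checked(std::size_t size)
    {
        void* result = std::malloc(size);

        // A single out-of-memory check; what happens next depends on whether
        // we are allowed to throw.
        if (!result)
        {
    #ifdef PUGIXML_NO_EXCEPTIONS
            return 0;
    #else
            throw std::bad_alloc();
    #endif
        }

        return result;
    }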
|
|
Instead of relying on a specific string in the parse result, use the
allocator error state to report the error and then convert it to a
string if necessary.
We currently have to manually trigger the OOM error in two places
because we use the global allocator in rare cases; we don't really need
to do this, so it will be cleaned up later.
|
|
|
|
|
|
Add tests for a PI erroring exactly at the buffer boundary with
non-zero-terminated buffers (where we have to clear the last character,
which changes the parsing flow slightly), and a test that makes sure
parse_embed_pcdata works properly with XML fragments, where PCDATA can
appear at the root level but can't be embedded into the document node.
|
|
The code works fine regardless of the *j->name check, and omitting it
makes the code more symmetric between the "count" and "write" stages;
additionally, this improves coverage - due to how strcpy_insitu works,
it's not really possible to get an empty non-NULL name in the node.
|
|
The only point was to try to test all paths where we can run out of
memory while decoding something. It seems like it may be impossible to
actually do this, given that we can't exercise all paths since wchar_t
size detection is done at runtime...
|
|
This change adds more thorough tests for attribute conversion as well as
some assorted tests that fix gaps in coverage.
|
|
This makes sure the failure paths of all .reserve calls are covered.
These tests don't explicitly verify that reserve is present on all paths -
this is much harder to test since not all modifications require reserve
to be called, so we'll have to rely on a combination of automated testing
and sanity checking for this.
Also add more out-of-memory coverage tests for parsing.
|
|
Enumerate successful cases as well as cases where the detection stops
halfway and results in a different detected encoding.
|
|
|
|
Add tests for various corner cases of DOM inspection and modification
routines.
|
|
All other functions treat null pointer inputs as invalid; now this
function does as well.
|
|
Expand out-of-memory coverage during XPath parsing and evaluation, and
add some other small tests.
|
|
Error handling in the XPath implementation now relies on explicit error
propagation; the error is converted to an appropriate result at the end.
|
|
This generates some out-of-memory code paths that are not covered by
existing tests; this will need to be resolved later.
|
|
Currently this test has a very large runtime and relies on the fact that
the first memory allocation error causes the test to terminate. This
does not work with the new behavior of running the query to completion
and reporting the error at the end, so make the runtime reasonable while
still allocating enough memory to blow past the budget.
|
|
Instead of rolling back the allocation and trying to allocate again,
explicitly reallocate in place if possible, and allocate a new block
otherwise.
This is going to be important once we use reallocate_nothrow from a
non-throwing context.
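A minimal sketch of the idea on top of a simple bump allocator
(illustrative; the real xpath_allocator works on memory pages, and the
names here are made up):

    #include <cstddef>
    #include <cstring>

    struct bump_block
    {
        char* data;
        std::size_t capacity;
        std::size_t used;
    };

    void* reallocate_nothrow(bump_block& block, void* ptr, std::size_t old_size, std::size_t new_size)
    {
        // In-place path: ptr is the most recent allocation and the block can
        // absorb the growth, so just adjust the used counter.
        if (ptr == block.data + block.used - old_size &&
            block.used - old_size + new_size <= block.capacity)
        {
            block.used = block.used - old_size + new_size;
            return ptr;
        }

        // Otherwise carve a new region and copy the old contents over.
        if (block.used + new_size > block.capacity) return 0; // out of memory

        void* result = block.data + block.used;
        block.used += new_size;
        std::memcpy(result, ptr, old_size);
        return result;
    }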
|
|
This requires explicit error handling for xpath_string::data calls.
|
|
This allows us to gradually convert out-of-memory handling during
evaluation from exceptions to a non-throwing approach without changing
the observable behavior.
|
|
gcov -b surfaced many lines with partial coverage, where a branch is
only ever taken or only ever not taken, or one of the expressions in a
complex conditional is always either true or false. This change adds a
series of tests (mostly focusing on XPath) to reduce the number of
partially covered lines.
|
|
This test is supposed to cover errors in expressions that are nested in
other expressions, to reduce the number of never-taken branches in tests
(and make sure we aren't missing any).
|
|
|
|
The W3C specification does not allow predicates after abbreviated steps.
Currently this results in parsing terminating at the step, which leads
to confusing error messages like "Invalid query" or "Unmatched braces".
|
|
Any time an allocation fails, xpath_allocator can set an externally
provided bool. The plan is to keep this bool around until evaluation
ends, so that we can use it to discard the potentially malformed result.
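A minimal sketch of the error-flag idea (illustrative names; malloc
stands in for the real page-based allocation):

    #include <cstddef>
    #include <cstdlib>

    struct xpath_allocator_sketch
    {
        bool* error; // externally provided; stays set until evaluation finishes

        void* allocate(std::size_t size)
        {
            void* result = std::malloc(size);
            if (!result && error) *error = true;
            return result;
        }
    };

    // Usage: the evaluation entry point owns the flag and checks it at the
    // end, discarding the potentially malformed result if it was set.
    // bool oom = false;
    // xpath_allocator_sketch alloc = { &oom };
    // ... evaluate the query using alloc ...
    // if (oom) { /* discard result */ }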
|
|
|
|
For both allocate and reallocate, provide both _nothrow and _throw
functions; this change renames allocate() to allocate_throw() (same for
reallocate) to make it easier to change the code to remove throwing
variants.
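A minimal sketch of how the renamed variants might relate (illustrative,
not the actual signatures): the throwing variant simply wraps the
non-throwing one, which makes it easy to remove later.

    #include <cstddef>
    #include <new>

    struct allocator_sketch
    {
        void* allocate_nothrow(std::size_t size)
        {
            return ::operator new(size, std::nothrow);
        }

        void* allocate_throw(std::size_t size)
        {
            void* result = allocate_nothrow(size);

            if (!result) throw std::bad_alloc();

            return result;
        }
    };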
|
|
|
|
Handle node type error before creating expression node
|
|
|
|
We currently need to convert the error, based on its text, to a
different type of C++ exception when C++ exceptions are enabled.
|
|
This allows us to handle OOM during node allocation without triggering
the undefined behavior that occurs when placement new gets a NULL pointer.
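A minimal sketch of the guarded construction (the create_node helper and
allocator interface are made up for illustration):

    #include <cstddef>
    #include <new>

    template <typename T, typename Allocator>
    T* create_node(Allocator& alloc)
    {
        void* memory = alloc.allocate_nothrow(sizeof(T));

        // Passing a null pointer to placement new is undefined behavior,
        // so report the failure to the caller instead of constructing.
        if (!memory) return 0;

        return new (memory) T();
    }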
|
|
Instead, return 0 and rely on the parsing logic to propagate that all
the way down, and convert the result to an exception to maintain the
existing interface.
|
|
Propagate the failure to the caller manually. This is a first step
towards a parser structure that does not depend on exceptions or longjmp
for error handling (and thus matches the XML parser). To preserve
semantics we'll have to convert the error code to an exception later.
|
|
Simplify function argument parsing by folding arg 0 parsing into the
main loop, and reuse expression parsing logic for unary expressions
|
|
It was only used in three places and didn't really make the code more
readable.
|
|
The NULL return value will be reserved for the OOM error indicator.
|
|
|
|
Fixes #126
|
|
It's still not clear what exactly makes it emit this error when compiling
string_to_integer:
CC-3059 crayc++: INTERNAL __C_FILE_SCOPE_DATA__, File = <pugixml>/src/pugixml.cpp, Line = 4524, Column = 4
Expected no overflow in routine.
But a viable workaround for now is to exploit the knowledge that it uses
two's-complement arithmetic and invert the sign manually.
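A minimal sketch of the workaround (illustrative, not the exact
string_to_integer code): accumulate the magnitude in an unsigned type
and negate in the unsigned domain, relying on the two's-complement
conversion back to int:

    int to_integer(const char* str)
    {
        bool negative = (*str == '-');
        if (negative) ++str;

        unsigned int value = 0;

        for (; *str >= '0' && *str <= '9'; ++str)
            value = value * 10 + static_cast<unsigned int>(*str - '0');

        // 0 - value wraps around in unsigned arithmetic; converting the result
        // back to int yields the negated value on two's-complement targets.
        return static_cast<int>(negative ? 0 - value : value);
    }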
Fixes #125.
|
|
We used to use the current timestamp when building the archive; switch to
using the timestamp of the tag with the version we're packaging.
This requires some monkey patching since the tarfile module always uses
the current timestamp when writing the gzip header...
Also exclude archive.py from the archive and simplify the release file
list in the Makefile.
|
|
|