|
Instead of branching code at each invocation site, use variadic macros
to create a wrapping macro that uses snprintf with a buffer of
statically known size.
Variadic macros are supported by all C++11 compilers, as is snprintf;
on MSVC 2005+ we don't necessarily have snprintf, but we can use
_snprintf_s with _TRUNCATE to get the same behavior. In all other cases
we fall back to sprintf, which (theoretically) can lead to a stack
buffer overflow.
In practice all snprintf calls in pugixml use buffers that should be
large enough to never overflow, but snprintf is safe even if this is
not the case.
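A sketch of the wrapper (the macro name is illustrative; it relies on
buf being an actual array, so sizeof/_countof yield the real capacity):

    #if __cplusplus >= 201103
    #    define SNPRINTF(buf, ...) snprintf(buf, sizeof(buf), __VA_ARGS__)
    #elif defined(_MSC_VER) && _MSC_VER >= 1400
    #    define SNPRINTF(buf, ...) _snprintf_s(buf, _countof(buf), _TRUNCATE, __VA_ARGS__)
    #else
    #    define SNPRINTF sprintf // unsafe fallback; buffers must be sized generously
    #endif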
|
|
We use references to arrays elsewhere in the codebase and there's just
one caller for this function, so it's easier to fix the size.
This will simplify the snprintf refactoring.
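For illustration, the signature change has this shape (hypothetical
function name and size):

    // before: the size travels separately and can get out of sync
    void format_value(char* buffer, size_t size, double value);

    // after: a reference to a fixed-size array makes the size part of
    // the type, so sizeof(buffer) inside the callee is the true capacity
    void format_value(char (&buffer)[32], double value);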
|
|
Use snprintf instead of sprintf
|
|
Improve code coverage
|
|
codecov.io does not seem to support lcov regex customization;
additionally, we can't just replace unreachable lines with
LCOV_EXCL_LINE in the gcov file - so we have to patch the #####
indicator (which means the line hasn't been hit) to 1.
See also https://github.com/codecov/support/issues/144
|
|
Now we can exclude these from code coverage since it's logically
impossible to hit them in tests.
|
|
New tests try to load a folder as an XML document, and a device. Both
are intended to exercise otherwise unreachable error paths in the
load_file implementation.
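A minimal sketch of the folder case (the exact failing status is
platform-dependent, hence the two-way check):

    pugi::xml_document doc;
    pugi::xml_parse_result result = doc.load_file("some_folder/");

    // a directory either fails to open or fails on the first read
    assert(result.status == pugi::status_file_not_found ||
           result.status == pugi::status_io_error);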
|
|
This adds tests that complete branch coverage in compact pointer
encoding/decoding code (previously first_attribute was always encoded
using compact encoding in the entire test suite).
|
|
This is a follow-up to 198900eff403982f080958459f1ccb45cdefe9a4.
target_include_directories was introduced in CMake 2.8.12, so CMake 2.6
no longer works.
|
|
The integer sanitizer flags unsigned integer overflow in several
functions in pugixml; unsigned integer overflow is well defined, but it
may not necessarily be intended.
Apart from hash functions, both string_to_integer and integer_to_string
rely on unsigned overflow: string_to_integer uses it to perform
two's complement negation so that the bulk of the operation can run on
unsigned integers. This makes it possible to simplify overflow checking.
Similarly, integer_to_string negates the number before generating a
decimal representation, but negation is impossible without unsigned
overflow or special-casing certain integer limits.
For now, just silence the integer overflow using a special attribute;
also move the unsigned overflow into string_to_integer from get_value_*
so that we have fewer functions marked with the attribute.
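A condensed illustration of the pattern and of clang's no_sanitize
attribute (the function name is illustrative):

    // negation via unsigned arithmetic is well defined even for the
    // value matching INT_MIN, but -fsanitize=integer flags the wraparound
    __attribute__((no_sanitize("unsigned-integer-overflow")))
    unsigned int unsigned_negate(unsigned int value)
    {
        return 0 - value; // wraps modulo 2^N: two's complement negation
    }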
Fixes #133.
|
|
Now the only thing fuzz_setup.sh does is install a new clang; if the
system clang supports -fsanitize-coverage, fuzz_setup.sh is not required.
|
|
The script only worked if the clang folder had already been created.
|
|
This reverts commit 79109a8546f963d17522d75112cffcfd8cbe35fc.
This warning does not happen on gcc-4.8.4; the workaround introduces an
unsigned integer overflow, which results in a runtime error when
compiled with the integer sanitizer.
|
|
This triggers a runtime error under the integer sanitizer.
|
|
This was triggering a buffer read overflow with ASan.
|
|
Silence g++ 7.0.1 -Wimplicit-fallthrough warnings
|
|
This is accomplished by putting a // fallthrough comment in the right
place. This seems to be more portable than an attribute-based solution
like [[fallthrough]] or __attribute__((fallthrough)).
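For example (node type names are illustrative):

    switch (type)
    {
    case node_pcdata:
        process_text(node);
        // fallthrough

    case node_cdata:
        process_data(node);
        break;
    }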
|
|
Instead of separate implementations for find and insert, use just one
that can do both. This reduces the code size and simplifies code
coverage; the resulting code is close to what we had in terms of
performance, and since the hash table is a fallback it should not
affect any real workloads.
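A minimal sketch of the merged operation (hypothetical names; assumes
linear probing over a zero-initialized power-of-two table):

    #include <stddef.h>
    #include <stdint.h>

    struct hash_table
    {
        struct item { const void* key; const void* value; };

        item* items;     // zero-initialized, power-of-two capacity
        size_t capacity;

        static size_t hash(const void* key)
        {
            return static_cast<size_t>(reinterpret_cast<uintptr_t>(key) * 2654435769u);
        }

        // one probe loop serves both lookup and insertion: return the
        // existing slot for key, or claim the first empty slot found
        item* find_or_insert(const void* key)
        {
            size_t bucket = hash(key) & (capacity - 1);

            for (size_t probe = 0; probe < capacity; ++probe)
            {
                item& slot = items[bucket];

                if (slot.key == key)
                    return &slot;

                if (slot.key == 0)
                {
                    slot.key = key;
                    return &slot;
                }

                bucket = (bucket + 1) & (capacity - 1);
            }

            return 0; // table full; real code rehashes before this point
        }
    };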
|
|
Improve fuzzing support
|
|
Make the file executable, fix Windows newlines and fix clang setup.
|
|
Hopefully this will allow for better fuzzing coverage.
|
|
Only fuzz the parser for now.
|
|
This downloads a clang build that has support for instrumentation, and also
downloads and compiles libFuzzer.a.
|
|
This allows us to have faster fuzz cycles since the fuzzer is in-process.
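With the in-process model the fuzz target is just a function; roughly:

    #include "pugixml.hpp"

    #include <stdint.h>
    #include <stddef.h>

    // libFuzzer calls this once per generated input, in the same
    // process, so there is no per-iteration startup cost
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size)
    {
        pugi::xml_document doc;
        doc.load_buffer(data, size);
        return 0;
    }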
|
|
This should make the test fail on a 32-bit target.
|
|
This will make sure we don't forget to implement offset_debug for new
node types if they ever happen (really it's mostly for consistency).
|
|
Instead of a complicated partitioning scheme that tries to maintain the
equal area in the middle, use a scheme where we keep the equal area in
the left part of the array and then move it to the middle.
Since sorted arrays generally don't contain many duplicates, this extra
copy is not too expensive; it significantly simplifies the logic and
still maintains good complexity when sorting arrays with many equal
elements (unlike Hoare partitioning).
Instead of a median of 9, just use a median of 3 - it performs pretty
much identically on some internal performance tests, despite doing a
few more comparisons in some cases.
Finally, change the insertion sort threshold to 16 elements since that
appears to give slightly better performance.
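A sketch of the partition step (simplified: raw operator< instead of
the comparator, and pivot taken by value so swaps can't alias it):

    #include <algorithm>

    // split [begin, end) so that [*out_eqbeg, *out_eqend) == pivot,
    // everything before it is < pivot and everything after is > pivot;
    // equal elements are gathered at the left edge, then swapped inward
    template <typename T>
    void partition3(T* begin, T* end, T pivot, T** out_eqbeg, T** out_eqend)
    {
        T* eq = begin; // [begin, eq) == pivot
        T* lt = begin; // [eq, lt) < pivot
        T* gt = end;   // [gt, end) > pivot

        while (lt < gt)
        {
            if (*lt < pivot)
                lt++;
            else if (pivot < *lt)
                std::swap(*lt, *--gt);
            else
                std::swap(*eq++, *lt++);
        }

        // move the equal run from the left edge into the middle; this is
        // safe even when equals outnumber smaller elements, since any
        // overlap only swaps equal elements with each other
        T* eqbeg = gt;

        for (T* it = begin; it != eq; ++it)
            std::swap(*it, *--eqbeg);

        *out_eqbeg = eqbeg;
        *out_eqend = gt;
    }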
|
|
The previous implementation opted for doing two comparisons per element
in the sorted case in order to remove one iterator bounds check per
moved element when we actually need to copy. In our case, however, the
comparator is pretty expensive (except for remove_duplicates, which is
fast as it is), so an extra object comparison hurts much more than an
iterator comparison saves.
This makes sorting by document order up to 3% faster for random
sequences.
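A sketch of the resulting shape (not the exact pugixml code):

    // one object comparison per shifted element; the hole > begin bounds
    // check is the iterator comparison this change reintroduces
    template <typename T, typename Pred>
    void insertion_sort(T* begin, T* end, const Pred& pred)
    {
        if (end - begin < 2) return;

        for (T* it = begin + 1; it != end; ++it)
        {
            T val = *it;
            T* hole = it;

            while (hole > begin && pred(val, *(hole - 1)))
            {
                *hole = *(hole - 1);
                hole--;
            }

            *hole = val;
        }
    }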
|
|
Instead of delegating to a method that just forwards the call to
xpath_query, call the relevant method directly.
|
|
It adds one stack frame to string query evaluation and does not really
simplify the code.
|
|
XPath: Remove exceptional control flow
|
|
Cover the empty node case - no XPath query can result in one, but it's
possible to create a node set with empty nodes manually.
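For example:

    // a node set holding a default-constructed (empty) xpath_node; this
    // is only reachable through the public constructor, never via a query
    pugi::xpath_node nodes[] = { pugi::xpath_node() };
    pugi::xpath_node_set set(nodes, nodes + 1);

    assert(!set[0]);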
|
|
Instead of having two checks for out-of-memory when exceptions are
enabled, do just one and decide what to do based on whether we can
throw.
|
|
Instead of relying on a specific string in the parse result, use the
allocator error state to report the error and then convert it to a
string if necessary.
We currently have to manually trigger the OOM error in two places
because we use the global allocator in rare cases; we don't really need
to do this, so it will be cleaned up later.
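A hypothetical sketch of the error-state idea (not the actual pugixml
types):

    #include <stdlib.h>

    struct allocator_state
    {
        bool* error; // shared out-of-band OOM flag

        void* allocate(size_t size)
        {
            void* result = malloc(size);
            if (!result && error)
                *error = true; // record OOM once; stringify only at the end
            return result;
        }
    };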
|
|
Add tests for a PI erroring out exactly at the buffer boundary with
non-zero-terminated buffers (where we have to clear the last character,
which changes the parsing flow slightly), and a test that makes sure
parse_embed_pcdata works properly with XML fragments, where PCDATA can
appear at the root level but can't be embedded into the document node.
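The PI case is along these lines (illustrative; the actual tests differ):

    // buffer ends exactly inside a PI: no zero terminator and no "?>"
    const char data[] = { '<', '?', 'p', 'i' };

    pugi::xml_document doc;
    pugi::xml_parse_result result = doc.load_buffer(data, sizeof(data));

    // parsing must fail cleanly instead of reading past the buffer end
    assert(result.status == pugi::status_bad_pi);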
|
|
The code works fine regardless of the *j->name check, and omitting it
makes the code more symmetric between the "count" and "write" stages;
additionally, this improves coverage - due to how strcpy_insitu works,
it's not really possible to get an empty non-NULL name in a node.
|
|
The only point was to try to test all paths where we can run out of
memory while decoding something. It seems it may be impossible to
actually do this, given that we can't exercise all paths because
wchar_t size detection is done at runtime...
|