Enable data sieving for chunks that can't be cached #6111
base: develop
Conversation
Related writeup: Note that this is a rather quick fix for the issue and a bit of refactoring in the chunked I/O code would be better in the long run.
     * on each I/O until more advanced use cases like H5Dset_extent between writes
     * / writes and reads can be supported.
     */
    dset_info->dset->shared->cache.contig.sieve_buf_size = H5F_SIEVE_BUF_SIZE(dset_info->dset->oloc.file);
This line (and the same one in H5D__chunk_write()) is the main fix. The library's logic appears to have been set up to account for chunked datasets making use of data sieving, but the sieve buffer size was always 0, so the sieve buffer went unused. In specific cases this could result in I/O being performed element by element.
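For context, here is a minimal sketch of how an application requests a sieve buffer size through the file access property list; the internal H5F_SIEVE_BUF_SIZE() macro used above returns the value that originates from this setting. The file name and the 1 MiB size are arbitrary, and this is an illustration rather than code from the PR.

```c
#include "hdf5.h"

int
main(void)
{
    hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
    hid_t file_id = H5I_INVALID_HID;

    /* Request a 1 MiB data sieve buffer on the file access property list;
     * with this fix the value is also honored when chunked datasets fall
     * back to the contiguous I/O routines. */
    if (H5Pset_sieve_buf_size(fapl_id, 1024 * 1024) < 0)
        return 1;

    file_id = H5Fopen("example.h5", H5F_ACC_RDONLY, fapl_id); /* arbitrary file name */

    /* ... perform dataset I/O ... */

    H5Fclose(file_id);
    H5Pclose(fapl_id);
    return 0;
}
```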
     */
    dset_info->dset->shared->cache.contig.sieve_buf_size = H5F_SIEVE_BUF_SIZE(dset_info->dset->oloc.file);
    dset_info->dset->shared->cache.contig.sieve_loc = HADDR_UNDEF;
    dset_info->dset->shared->cache.contig.sieve_size = 0;
These lines force re-reads into the sieve buffer on each I/O, to account for cases like changing a dataset's extent between writes and reads.
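To illustrate the case being guarded against, here is a hedged sketch of a write / H5Dset_extent / read sequence; the helper name and the assumption of a 1-D, extendible, chunked integer dataset are mine and are not taken from the PR or the new test.

```c
#include "hdf5.h"

/* Hypothetical helper: write, extend, then re-read a 1-D chunked dataset.
 * dset_id is assumed to be a chunked, extendible integer dataset currently
 * holding cur_size elements; wbuf/rbuf are sized appropriately by the caller. */
static herr_t
write_extend_read(hid_t dset_id, hsize_t cur_size, const int *wbuf, int *rbuf)
{
    hsize_t new_size[1] = {cur_size * 2};

    /* Write through the chunked path (which may use the contiguous I/O routines). */
    if (H5Dwrite(dset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, wbuf) < 0)
        return -1;

    /* Grow the dataset; any sieve buffer contents for the old extent are now stale. */
    if (H5Dset_extent(dset_id, new_size) < 0)
        return -1;

    /* Read back; resetting sieve_loc/sieve_size before each I/O forces a fresh
     * read from the file rather than reusing stale sieved data. */
    if (H5Dread(dset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, rbuf) < 0)
        return -1;

    return 0;
}
```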
    /* Get the sieve buffer size for this dataset - the smaller of the dataset size and
     * the sieve buffer size from the FAPL is used
     */
    dset->shared->cache.contig.sieve_buf_size =
Updated the EFL code here to set the sieve buffer size to the smaller of the dataset size and the FAPL-specified size. This just matches the logic elsewhere in the library and what's currently documented for H5Pset_sieve_buf_size():
    Internally, the library checks the storage sizes of the datasets in the file. It picks the smaller one between the size from the file access property and the size of the dataset to allocate the sieve buffer for the dataset in order to save memory usage.
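A small sketch of the "smaller of the two" rule described above; the function and parameter names are illustrative rather than the library's internal fields.

```c
#include "hdf5.h"

/* Pick the sieve buffer size as the smaller of the FAPL setting and the
 * dataset's storage size, mirroring the documented behavior quoted above. */
static size_t
choose_sieve_buf_size(size_t fapl_sieve_size, hsize_t dset_storage_size)
{
    if (dset_storage_size < (hsize_t)fapl_sieve_size)
        return (size_t)dset_storage_size;
    return fapl_sieve_size;
}
```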
Force-pushed from 7f37860 to cb23842.
The test in this file is designed so that, when a sieve buffer is not used, it runs long enough to trigger a CMake timeout at TestExpress level 0, but runs very quickly at TestExpress level 3. We currently test every PR at TestExpress 0, but those tests should probably be moved to a scheduled run as originally intended, since tests like this one can be problematic to run at that level on every PR.
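As an aside, here is a hedged sketch of how a test can scale its workload with the TestExpress level; the HDF5TestExpress environment variable name, the default level, and the iteration counts are assumptions for illustration and are not taken from io_perf.c.

```c
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    int   express = 1;                         /* assumed default level */
    char *env     = getenv("HDF5TestExpress"); /* assumed variable name */

    if (env != NULL)
        express = atoi(env);

    /* TestExpress 0 = exhaustive (long-running), 3 = quick smoke test. */
    size_t n_iters = (express >= 3) ? 100 : 1000000;

    for (size_t i = 0; i < n_iters; i++) {
        /* ... one chunked, non-contiguous-in-memory I/O pass ... */
    }

    printf("ran %zu iterations at TestExpress level %d\n", n_iters, express);
    return 0;
}
```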
src/H5Dchunk.c
Outdated
    /* Ensure dataset I/O sieve buffer is properly setup for when the contiguous
     * dataset I/O routines are used for chunks. Force reset of sieve buffer here
     * on each I/O until more advanced use cases like H5Dset_extent between writes
     * / writes and reads can be supported.
Suggested change:
-  * / writes and reads can be supported.
+  * and reads can be supported.
This was intentional to try to highlight the difference between write/write and write/read cycles, but could probably be written differently.
I thought it was a repeated-word typo. Maybe a semicolon between the two "writes", or a period and a capital "W"?
Reworded this slightly
Force-pushed from cb23842 to f5e42d8.
Fixed an issue that prevented use of a data sieve buffer for I/O on dataset chunks when those chunks couldn't be cached by the library. This issue could result in worst-case behavior of I/O on a single data element at a time when chunks are non-contiguous with respect to memory layout.
Added a test to attempt to catch performance regressions in I/O on dataset chunks that are non-contiguous with respect to memory layout.
Updated the External File List logic to set the data sieve buffer size to the smaller of the dataset size and the size set in the FAPL, similar to the logic elsewhere in the library.
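For readers unfamiliar with the access pattern involved, here is a hedged sketch of a read into a strided memory selection, which makes each chunk's destination non-contiguous in the application buffer; the file name, dataset name, and sizes are invented for illustration and do not come from io_perf.c.

```c
#include "hdf5.h"

#define NELMTS 1024 /* assumed number of dataset elements */

int
main(void)
{
    int     buf[2 * NELMTS];            /* only every other element is filled */
    hsize_t mem_dims[1] = {2 * NELMTS};
    hsize_t start[1]    = {0};
    hsize_t stride[1]   = {2};
    hsize_t count[1]    = {NELMTS};

    hid_t file_id  = H5Fopen("chunked.h5", H5F_ACC_RDONLY, H5P_DEFAULT); /* invented name */
    hid_t dset_id  = H5Dopen2(file_id, "chunked_dset", H5P_DEFAULT);     /* invented name */
    hid_t file_spc = H5Dget_space(dset_id); /* all NELMTS elements selected */
    hid_t mem_spc  = H5Screate_simple(1, mem_dims, NULL);

    /* Strided memory selection: destination elements are not contiguous in memory. */
    H5Sselect_hyperslab(mem_spc, H5S_SELECT_SET, start, stride, count, NULL);

    /* Without a sieve buffer, this kind of read could degrade to one element
     * at a time whenever the chunks cannot be held in the chunk cache. */
    H5Dread(dset_id, H5T_NATIVE_INT, mem_spc, file_spc, H5P_DEFAULT, buf);

    H5Sclose(mem_spc);
    H5Sclose(file_spc);
    H5Dclose(dset_id);
    H5Fclose(file_id);
    return 0;
}
```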
Important
Re-enable data sieve buffer for non-cached dataset chunks to improve I/O performance and add a test for performance regressions.
- Sets the sieve buffer size in H5D__chunk_read() and H5D__chunk_write() in H5Dchunk.c.
- Updates H5D__efl_construct() in H5Defl.c to set the sieve buffer size to the smaller of the dataset size and the FAPL size.
- Adds chunk_non_contig_mem_io in io_perf.c to catch performance regressions for non-contiguous memory layout chunks.
- Updates CHANGELOG.md to document the performance fix.
- Adds io_perf to H5_EXPRESS_TESTS in CMakeLists.txt.
This description was created automatically for cb23842.