<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="../assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Blosc Home Page  (Posts by Ivan Vilata-i-Balaguer)</title><link>https://blosc.org/</link><description></description><atom:link href="https://blosc.org/authors/ivan-vilata-i-balaguer.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2026 &lt;a href="mailto:blosc@blosc.org"&gt;The Blosc Developers&lt;/a&gt; </copyright><lastBuildDate>Wed, 04 Mar 2026 11:43:34 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Peaking compression performance in PyTables with direct chunking</title><link>https://blosc.org/posts/pytables-direct-chunking/</link><dc:creator>Ivan Vilata-i-Balaguer</dc:creator><description>&lt;p&gt;It took a while to put things together, but after many months of hard work by maintainers, developers and contributors, &lt;a class="reference external" href="https://groups.google.com/g/pytables-users/c/3giLIxT6Jq4"&gt;PyTables 3.10&lt;/a&gt; finally saw the light, full of &lt;a class="reference external" href="https://www.pytables.org/release-notes/RELEASE_NOTES_v3.10.x.html"&gt;enhancements and fixes&lt;/a&gt;.  Thanks to a &lt;a class="reference external" href="https://numfocus.org/"&gt;NumFOCUS&lt;/a&gt; Small Development Grant, we were able to include a new feature that can help you squeeze considerable performance improvements when using compression: the direct chunking API.&lt;/p&gt;
&lt;p&gt;In a &lt;a class="reference external" href="https://www.blosc.org/posts/pytables-b2nd-slicing/"&gt;previous post about optimized slicing&lt;/a&gt; we saw the advantages of avoiding the overhead introduced by the HDF5 filter pipeline, in particular when working with &lt;a class="reference external" href="https://www.blosc.org/posts/blosc2-ndim-intro/"&gt;multi-dimensional arrays compressed with Blosc2&lt;/a&gt;.  This is achieved by specialized, low-level code in PyTables which understands the structure of the compressed data in each chunk and accesses it directly, with the least possible intervention of the HDF5 library.&lt;/p&gt;
&lt;p&gt;However, there are many reasons to exploit direct chunk access in your own code, from customizing compression with parameters not allowed by the PyTables &lt;cite&gt;Filters&lt;/cite&gt; class, to using yet-unsupported compressors, or even helping you develop new plugins for HDF5 to support them (you may write compressed chunks in Python while decompressing transparently in a C filter plugin, or vice versa).  And of course, as we will see, skipping the HDF5 filter pipeline with direct chunking may be instrumental in reaching the extreme I/O performance required in scenarios like continuous collection or extraction of data.&lt;/p&gt;
&lt;p&gt;PyTables' new direct chunking API is the machinery that gives you access to these possibilities.  Keep in mind, though, that this is low-level functionality: it may help you greatly customize and accelerate access to your datasets, but it may also break them.  In this post we'll show how to use it to get the best results.&lt;/p&gt;
&lt;section id="using-the-api"&gt;
&lt;h2&gt;Using the API&lt;/h2&gt;
&lt;p&gt;The direct chunking API consists of three operations: get information about a chunk (&lt;cite&gt;chunk_info()&lt;/cite&gt;), write a raw chunk (&lt;cite&gt;write_chunk()&lt;/cite&gt;), and read a raw chunk (&lt;cite&gt;read_chunk()&lt;/cite&gt;).  They are supported by chunked datasets (&lt;cite&gt;CArray&lt;/cite&gt;, &lt;cite&gt;EArray&lt;/cite&gt; and &lt;cite&gt;Table&lt;/cite&gt;), i.e. those whose data is split into fixed-size chunks of the same dimensionality as the dataset (maybe padded at its boundaries), with HDF5 pipeline filters like compressors optionally processing them on read/write.&lt;/p&gt;
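&lt;p&gt;To make the chunk layout concrete: the number of chunks along each dimension follows from ceiling division of the dataset shape by the chunk shape, with boundary chunks padded as needed (a plain-Python sketch, independent of PyTables):&lt;/p&gt;

```python
from math import ceil

def num_chunks(shape, chunkshape):
    # Boundary chunks may be only partially filled (padded), hence the ceiling.
    return tuple(ceil(s / cs) for s, cs in zip(shape, chunkshape))

# A 100x100 dataset with 10x100 chunks holds 10 chunks along the first axis:
print(num_chunks((100, 100), (10, 100)))  # (10, 1)
# A 105x100 dataset would need an extra, padded chunk:
print(num_chunks((105, 100), (10, 100)))  # (11, 1)
```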
&lt;p&gt;&lt;cite&gt;chunk_info()&lt;/cite&gt; returns an object with useful information about the chunk containing the item at the given coordinates.  Let's create a simple 100x100 array with 10x100 chunks compressed with Blosc2+LZ4 and get info about a chunk:&lt;/p&gt;
&lt;pre class="literal-block"&gt;&amp;gt;&amp;gt;&amp;gt; import tables, numpy
&amp;gt;&amp;gt;&amp;gt; h5f = tables.open_file('direct-example.h5', mode='w')
&amp;gt;&amp;gt;&amp;gt; filters = tables.Filters(complib='blosc2:lz4', complevel=2)
&amp;gt;&amp;gt;&amp;gt; data = numpy.arange(100 * 100).reshape((100, 100))
&amp;gt;&amp;gt;&amp;gt; carray = h5f.create_carray('/', 'carray', chunkshape=(10, 100),
                               obj=data, filters=filters)
&amp;gt;&amp;gt;&amp;gt; coords = (42, 23)
&amp;gt;&amp;gt;&amp;gt; cinfo = carray.chunk_info(coords)
&amp;gt;&amp;gt;&amp;gt; cinfo
ChunkInfo(start=(40, 0), filter_mask=0, offset=6779, size=608)&lt;/pre&gt;
&lt;p&gt;So the item at coordinates (42, 23) is stored in a chunk of 608 bytes (compressed) which starts at coordinates (40, 0) in the array and byte 6779 in the file.  The latter offset may be used to let other code access the chunk directly on storage.  For instance, since Blosc2 was the only HDF5 filter used to process the chunk, let's open it directly:&lt;/p&gt;
&lt;pre class="literal-block"&gt;&amp;gt;&amp;gt;&amp;gt; import blosc2
&amp;gt;&amp;gt;&amp;gt; h5f.flush()
&amp;gt;&amp;gt;&amp;gt; b2chunk = blosc2.open(h5f.filename, mode='r', offset=cinfo.offset)
&amp;gt;&amp;gt;&amp;gt; b2chunk.shape, b2chunk.dtype, data.itemsize
((10, 100), dtype('V8'), 8)&lt;/pre&gt;
&lt;p&gt;Since Blosc2 does understand the structure of the data (thanks to &lt;a class="reference external" href="https://www.blosc.org/posts/blosc2-ndim-intro/"&gt;b2nd&lt;/a&gt;), we can even see that the chunk shape and the data item size are correct.  The data type is opaque to the HDF5 filter which wrote the chunk, hence the &lt;cite&gt;V8&lt;/cite&gt; dtype.  Let's check that the item at (42, 23) is indeed in that chunk:&lt;/p&gt;
&lt;pre class="literal-block"&gt;&amp;gt;&amp;gt;&amp;gt; chunk = numpy.ndarray(b2chunk.shape, buffer=b2chunk[:],
                          dtype=data.dtype)  # Use the right type.
&amp;gt;&amp;gt;&amp;gt; ccoords = tuple(numpy.subtract(coords, cinfo.start))
&amp;gt;&amp;gt;&amp;gt; bool(data[coords] == chunk[ccoords])
True&lt;/pre&gt;
&lt;p&gt;This offset-based access is actually what b2nd optimized slicing performs internally.  Please note that neither PyTables nor HDF5 were involved at all in the actual reading of the chunk (Blosc2 just got a file name and an offset).  It's difficult to cut more overhead than that!&lt;/p&gt;
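&lt;p&gt;As an aside, the start coordinates of the chunk containing a given item follow from simple integer arithmetic on the chunk shape (a plain-Python sketch; &lt;cite&gt;chunk_info()&lt;/cite&gt; does this for you):&lt;/p&gt;

```python
def chunk_start(coords, chunkshape):
    # Round each coordinate down to the nearest multiple of the chunk shape.
    return tuple((c // cs) * cs for c, cs in zip(coords, chunkshape))

# The item at (42, 23) in an array with 10x100 chunks lives in the chunk
# starting at (40, 0), matching the ChunkInfo shown above.
print(chunk_start((42, 23), (10, 100)))  # (40, 0)
```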
&lt;p&gt;It won't always be the case that you can (or want to) read a chunk in that way.  The &lt;cite&gt;read_chunk()&lt;/cite&gt; method allows you to read a raw chunk as a new byte string or into an existing buffer, given the chunk's start coordinates (which you may compute yourself or get via &lt;cite&gt;chunk_info()&lt;/cite&gt;).  Let's use &lt;cite&gt;read_chunk()&lt;/cite&gt; to redo the reading that we just did above:&lt;/p&gt;
&lt;pre class="literal-block"&gt;&amp;gt;&amp;gt;&amp;gt; rchunk = carray.read_chunk(coords)
Traceback (most recent call last):
    ...
tables.exceptions.NotChunkAlignedError: Coordinates are not multiples
    of chunk shape: (42, 23) !* (np.int64(10), np.int64(100))
&amp;gt;&amp;gt;&amp;gt; rchunk = carray.read_chunk(cinfo.start)  # Always use chunk start!
&amp;gt;&amp;gt;&amp;gt; b2chunk = blosc2.ndarray_from_cframe(rchunk)
&amp;gt;&amp;gt;&amp;gt; chunk = numpy.ndarray(b2chunk.shape, buffer=b2chunk[:],
                          dtype=data.dtype)  # Use the right type.
&amp;gt;&amp;gt;&amp;gt; bool(data[coords] == chunk[ccoords])
True&lt;/pre&gt;
&lt;p&gt;The &lt;cite&gt;write_chunk()&lt;/cite&gt; method allows you to write a byte string into a raw chunk.  Please note that you must first apply any filters manually, and that you can't write chunks beyond the dataset's current shape.  However, remember that enlargeable datasets may be grown or shrunk in an efficient manner using the &lt;cite&gt;truncate()&lt;/cite&gt; method, which doesn't write new chunk data.  Let's use that to create an &lt;cite&gt;EArray&lt;/cite&gt; with the same data as the previous &lt;cite&gt;CArray&lt;/cite&gt;, chunk by chunk:&lt;/p&gt;
&lt;pre class="literal-block"&gt;&amp;gt;&amp;gt;&amp;gt; earray = h5f.create_earray('/', 'earray', chunkshape=carray.chunkshape,
                               atom=carray.atom, shape=(0, 100),  # Empty.
                               filters=filters)  # Just to hint readers.
&amp;gt;&amp;gt;&amp;gt; earray.write_chunk((0, 0), b'whatever')
Traceback (most recent call last):
    ...
IndexError: Chunk coordinates not within dataset shape:
    (0, 0) &amp;lt;&amp;gt; (np.int64(0), np.int64(100))
&amp;gt;&amp;gt;&amp;gt; earray.truncate(len(carray))  # Grow the array (cheaply) first!
&amp;gt;&amp;gt;&amp;gt; for cstart in range(0, len(carray), carray.chunkshape[0]):
...     chunk = carray[cstart:cstart + carray.chunkshape[0]]
...     b2chunk = blosc2.asarray(chunk)  # May be customized.
...     wchunk = b2chunk.to_cframe()  # Serialize.
...     earray.write_chunk((cstart, 0), wchunk)&lt;/pre&gt;
&lt;p&gt;You can see that such low-level writing is more involved than usual.  Though we used default Blosc2 parameters here, the explicit compression step allows you to fine-tune compression in ways not available through PyTables, like setting internal chunk and block sizes, or even using Blosc2 compression plugins like Grok/JPEG2000.  In fact, the filters given on dataset creation are only used as a hint, since each Blosc2 container holding a chunk includes enough metadata to process it independently.  In the example, the default chunk compression parameters don't even match the dataset filters (using Zstd instead of LZ4):&lt;/p&gt;
&lt;pre class="literal-block"&gt;&amp;gt;&amp;gt;&amp;gt; carray.filters
Filters(complevel=2, complib='blosc2:lz4', ...)
&amp;gt;&amp;gt;&amp;gt; earray.filters
Filters(complevel=2, complib='blosc2:lz4', ...)
&amp;gt;&amp;gt;&amp;gt; b2chunk.schunk.cparams['codec']
&amp;lt;Codec.ZSTD: 5&amp;gt;&lt;/pre&gt;
&lt;p&gt;Still, the Blosc2 HDF5 filter plugin included with PyTables is able to read the data just fine:&lt;/p&gt;
&lt;pre class="literal-block"&gt;&amp;gt;&amp;gt;&amp;gt; bool((carray[:] == earray[:]).all())
True
&amp;gt;&amp;gt;&amp;gt; h5f.close()&lt;/pre&gt;
&lt;p&gt;You may find a more elaborate example of using direct chunking &lt;a class="reference external" href="https://github.com/PyTables/PyTables/blob/master/examples/direct-chunking.py"&gt;in PyTables' examples&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="benchmarks"&gt;
&lt;h2&gt;Benchmarks&lt;/h2&gt;
&lt;p&gt;&lt;a class="reference external" href="https://www.blosc.org/posts/pytables-b2nd-slicing/"&gt;b2nd optimized slicing&lt;/a&gt; shows us that removing the HDF5 filter pipeline from the I/O path can result in sizable performance increases, if the right chunking and compression parameters are chosen.  To check the impact of using the new direct chunking API, we ran some benchmarks that compare regular and direct read/write speeds.  On an AMD Ryzen 7 7800X3D CPU with 8 cores, 96 MB L3 cache and 8 MB L2 cache, clocked at 4.2 GHz, we got the following results:&lt;/p&gt;
&lt;img alt="/images/pytables-direct-chunking/AMD-7800X3D.png" class="align-center" src="https://blosc.org/images/pytables-direct-chunking/AMD-7800X3D.png" style="width: 50%;"&gt;
&lt;p&gt;We can see that direct chunking yields 3.75x write and 4.4x read speedups, reaching write/read speeds of 1.7 GB/s and 5.2 GB/s.  These are impressive numbers, though the hardware here is already quite powerful.  Thus we also ran the same benchmark on a consumer-level MacBook Air laptop with an Apple M1 CPU (4+4 cores, 12 MB L2 cache, clocked at 3.2 GHz), with the following results:&lt;/p&gt;
&lt;img alt="/images/pytables-direct-chunking/MacAir-M1.png" class="align-center" src="https://blosc.org/images/pytables-direct-chunking/MacAir-M1.png" style="width: 50%;"&gt;
&lt;p&gt;In this case direct chunking yields 4.5x write and 1.9x read speedups, with write/read speeds of 0.8 GB/s and 1.6 GB/s.  The absolute numbers are of course not as impressive, but the performance is still much better than that of the regular mechanism, especially when writing.  Please note that the M1 CPU has a hybrid efficiency+performance core configuration; as an aside, running the benchmark on a high-range Intel Core i9-13900K CPU also with a hybrid 8+16 core configuration (32 MB L2, 5.7 GHz) raised the write speedup to 4.6x, reaching an awesome write speed of 2.6 GB/s.&lt;/p&gt;
&lt;p&gt;All in all, it's clear that bypassing the HDF5 filter pipeline results in immediate I/O speedups.  You may find a Jupyter notebook with the benchmark code and AMD CPU data &lt;a class="reference external" href="https://github.com/PyTables/PyTables/blob/master/bench/direct-chunking-AMD-7800X3D.ipynb"&gt;in PyTables' benchmarks&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="conclusions"&gt;
&lt;h2&gt;Conclusions&lt;/h2&gt;
&lt;p&gt;First of all, we (Ivan Vilata and Francesc Alted) want to thank everyone who made this new 3.10 release of PyTables possible, especially Antonio Valentino for his role as project maintainer, and the many code and issue contributors.  Trying the new direct chunking API is much easier because of them.  And of course, a big thank you to the NumFOCUS Foundation for making this whole new feature possible by funding its development!&lt;/p&gt;
&lt;p&gt;In this post we saw how PyTables' direct chunking API allows one to squeeze out that extra drop of performance that the most demanding scenarios require, when adjusting chunking and compression parameters within PyTables itself can't go any further.  Of course, its low-level nature makes it less convenient and safe than higher-level mechanisms, so you should evaluate whether the extra effort pays off.  If used carefully with robust filters like Blosc2, the direct chunking API should shine most with large datasets under sustained I/O throughput demands, while retaining compatibility with other HDF5-based tools.&lt;/p&gt;
&lt;/section&gt;</description><category>pytables performance</category><guid>https://blosc.org/posts/pytables-direct-chunking/</guid><pubDate>Mon, 26 Aug 2024 09:20:00 GMT</pubDate></item><item><title>Optimized Hyper-slicing in PyTables with Blosc2 NDim</title><link>https://blosc.org/posts/pytables-b2nd-slicing/</link><dc:creator>Ivan Vilata-i-Balaguer</dc:creator><description>&lt;p&gt;The recent and long-awaited &lt;a class="reference external" href="https://groups.google.com/g/pytables-users/c/JTtZrw8sUEc"&gt;PyTables 3.9 release&lt;/a&gt; carries &lt;a class="reference external" href="https://www.pytables.org/release-notes/RELEASE_NOTES_v3.9.x.html"&gt;many goodies&lt;/a&gt;, including a particular one which makes us at the PyTables and Blosc teams very excited: optimized HDF5 hyper-slicing that leverages the two-level partitioning schema in Blosc2 NDim. This development was funded by a &lt;a class="reference external" href="https://numfocus.org/"&gt;NumFOCUS&lt;/a&gt; grant and the Blosc project.&lt;/p&gt;
&lt;p&gt;I (Ivan) carried on with the work that Marta started, with very valuable help from her and Francesc. I was in fact a core PyTables developer quite a few years ago (2004-2008) while working with Francesc and Vicent at Cárabos Coop. V. (see the &lt;a class="reference external" href="https://www.blosc.org/posts/pytables-20years/"&gt;20 year anniversary post&lt;/a&gt; for more information), and it was an honour and a pleasure to be back at the project. It took me a while to get back to grips with development, but it was a nice surprise to see the code that we worked so hard upon live through the years and get better and more popular. My heartfelt thanks to everybody who made that possible!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update (2023-11-23):&lt;/strong&gt; We redid the benchmarks described further below with some fixes and the same versions of Blosc2 HDF5 filter code for both PyTables and h5py. Results are more consistent and easier to interpret now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update (2023-12-04):&lt;/strong&gt; We extended benchmark results with the experimental application of a similar optimization technique to h5py.&lt;/p&gt;
&lt;section id="direct-chunk-access-and-two-level-partitioning"&gt;
&lt;h2&gt;Direct chunk access and two-level partitioning&lt;/h2&gt;
&lt;p&gt;You may remember that the previous version of PyTables (3.8.0) already got support for direct access to Blosc2-compressed chunks (bypassing the HDF5 filter pipeline), with two-level partitioning of chunks into smaller blocks (allowing for fast access to small slices with big chunks). You may want to read Óscar and Francesc's post &lt;a class="reference external" href="https://www.blosc.org/posts/blosc2-pytables-perf/"&gt;Blosc2 Meets PyTables&lt;/a&gt; to see the great performance gains provided by these techniques.&lt;/p&gt;
&lt;img alt="/images/blosc2_pytables/block-slice.png" class="align-center" src="https://blosc.org/images/blosc2_pytables/block-slice.png" style="width: 66%;"&gt;
&lt;p&gt;However, these enhancements only applied to tabular datasets, i.e. one-dimensional arrays of a uniform, fixed set of fields (columns) with heterogeneous data types as illustrated above. Multi-dimensional compressed arrays of homogeneous data (another popular feature of PyTables) still used plain chunking going through the HDF5 filter pipeline, and flat chunk compression. Thus, they suffered from the high overhead of the very generic pipeline and the inefficient decompression of whole (maybe big) chunks even for small slices.&lt;/p&gt;
&lt;p&gt;Now, you may have also read the post by the Blosc Development Team about &lt;a class="reference external" href="https://www.blosc.org/posts/blosc2-ndim-intro/"&gt;Blosc2 NDim&lt;/a&gt; (&lt;cite&gt;b2nd&lt;/cite&gt;), first included in C-Blosc 2.7.0. It introduces the generalization of Blosc2's two-level partitioning to multi-dimensional arrays as shown below. This makes arbitrary slicing of such arrays across any dimension very efficient (as better explained in the post about &lt;a class="reference external" href="https://www.blosc.org/posts/caterva-slicing-perf/"&gt;Caterva&lt;/a&gt;, the origin of b2nd), when the right chunk and block sizes are chosen.&lt;/p&gt;
&lt;img alt="/images/blosc2-ndim-intro/b2nd-2level-parts.png" class="align-center" src="https://blosc.org/images/blosc2-ndim-intro/b2nd-2level-parts.png" style="width: 66%;"&gt;
&lt;p&gt;This b2nd support was the missing piece to extend PyTables' chunking and slicing optimizations from tables to uniform arrays.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="choosing-adequate-chunk-and-block-sizes"&gt;
&lt;h2&gt;Choosing adequate chunk and block sizes&lt;/h2&gt;
&lt;p&gt;Let us try a benchmark very similar to the one in the post introducing &lt;a class="reference external" href="https://www.blosc.org/posts/blosc2-ndim-intro/"&gt;Blosc2 NDim&lt;/a&gt;, which slices a 50x100x300x250 floating-point array (2.8 GB) along its four dimensions, but this time with 64-bit integers, and using PyTables 3.9 with flat slicing (via the HDF5 filter pipeline), PyTables 3.9 with b2nd slicing (optimized, via direct chunk access implemented in C), h5py 3.10 with flat slicing (via hdf5plugin 4.3's support for Blosc2 in the HDF5 filter pipeline), and h5py with b2nd slicing (via the experimental &lt;a class="reference external" href="https://github.com/Blosc/b2h5py"&gt;b2h5py&lt;/a&gt; package using direct chunk access implemented in Python through h5py).&lt;/p&gt;
&lt;p&gt;According to the aforementioned post, Blosc2 works better when blocks have a size that allows them to fit, both compressed and uncompressed, in each CPU core’s L2 cache. This of course depends on the data itself and on the compression algorithm and parameters chosen. Let us choose LZ4+shuffle, since it offers a reasonable speed/size trade-off, and try different compression levels to find the one that works best with our CPU (level 8 in our case).&lt;/p&gt;
&lt;p&gt;With the benchmark's default 10x25x50x50 chunk shape, and after experimenting with the &lt;code class="docutils literal"&gt;BLOSC_NTHREADS&lt;/code&gt; environment variable to find the number of threads that better exploit Blosc2's parallelism (6 for our CPU), we obtain the results shown below:&lt;/p&gt;
&lt;img alt="/images/pytables-b2nd-slicing/b2nd_getslice_small.png" class="align-center" src="https://blosc.org/images/pytables-b2nd-slicing/b2nd_getslice_small.png" style="width: 75%;"&gt;
&lt;p&gt;The optimized b2nd slicing of PyTables already provides some speedups (although not that impressive) in the inner dimensions, in comparison with flat slicing based on the HDF5 filter pipeline (which performs similarly for PyTables and h5py). As explained in &lt;a class="reference external" href="https://www.blosc.org/posts/blosc2-pytables-perf/"&gt;Blosc2 Meets PyTables&lt;/a&gt;, HDF5 handling of chunked datasets favours big chunks that reduce in-memory structures, while Blosc2 can further exploit parallel threads to handle the increased number of blocks. Our CPU's L3 cache is 36 MB, so we may still grow the chunk size to reduce HDF5 overhead (without hurting Blosc2 parallelism).&lt;/p&gt;
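&lt;p&gt;Whether a chunkshape fits in cache is simple arithmetic (with 64-bit integers, i.e. 8 bytes per item):&lt;/p&gt;

```python
from math import prod

ITEMSIZE = 8  # 64-bit integers

def chunk_mib(chunkshape):
    # Uncompressed chunk size in binary megabytes (MiB).
    return prod(chunkshape) * ITEMSIZE / 2**20

print(round(chunk_mib((10, 25, 50, 50)), 2))    # 4.77 (default chunkshape)
print(round(chunk_mib((10, 25, 150, 100)), 2))  # 28.61 (bigger chunkshape)
```

&lt;p&gt;Even the bigger chunkshape, at about 28.6 MiB uncompressed, stays comfortably below the 36 MB L3 cache.&lt;/p&gt;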
&lt;p&gt;Let us raise the chunkshape to 10x25x150x100 (28.6MB) and repeat the benchmark (again with 6 Blosc2 threads):&lt;/p&gt;
&lt;img alt="/images/pytables-b2nd-slicing/b2nd_getslice_big.png" class="align-center" src="https://blosc.org/images/pytables-b2nd-slicing/b2nd_getslice_big.png" style="width: 75%;"&gt;
&lt;p&gt;Much better! Choosing a better chunkshape not only provides up to a 10x speedup in the optimized PyTables case, it also results in 4x-5x speedups over the HDF5 filter pipeline. The optimizations applied to h5py also yield considerable speedups (for an initial, Python-based implementation).&lt;/p&gt;
&lt;/section&gt;
&lt;section id="conclusions-and-future-work"&gt;
&lt;h2&gt;Conclusions and future work&lt;/h2&gt;
&lt;p&gt;The benchmarks above show how Blosc2 NDim's two-level partitioning, combined with direct HDF5 chunk access, can yield considerable performance increases when slicing multi-dimensional Blosc2-compressed arrays under PyTables (and h5py). However, the usual advice holds: invest some effort in fine-tuning the compression and chunking parameters for better results. We hope that this article also helps readers find those parameters.&lt;/p&gt;
&lt;p&gt;It is worth noting that these techniques still have some limitations: they only work with contiguous slices (that is, with step 1 on every dimension), and on datasets with the same byte ordering as the host machine. Also, although results are good indeed, there may still be room for implementation improvement, but that will require extra code profiling and parameter adjustments.&lt;/p&gt;
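&lt;p&gt;Checking whether a dataset's data type matches the host byte ordering is easy with NumPy (a small sketch; the helper is ours, not part of PyTables):&lt;/p&gt;

```python
import numpy

def matches_host_order(dtype):
    # A dtype matches the host byte order iff forcing native order
    # ('=' selects the native order) leaves the dtype unchanged.
    dt = numpy.dtype(dtype)
    return dt == dt.newbyteorder('=')

print(matches_host_order('f8'))                              # True (native)
print(matches_host_order(numpy.dtype('f8').newbyteorder()))  # False (swapped)
```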
&lt;p&gt;Finally, as mentioned in the &lt;a class="reference external" href="https://www.blosc.org/posts/blosc2-ndim-intro/"&gt;Blosc2 NDim&lt;/a&gt; post, if you need help in &lt;a class="reference external" href="https://blosc.org/btune"&gt;finding the best parameters&lt;/a&gt; for your use case, feel free to reach out to the Blosc team at &lt;cite&gt;contact (at) blosc.org&lt;/cite&gt;.&lt;/p&gt;
&lt;p&gt;Enjoy data!&lt;/p&gt;
&lt;/section&gt;</description><category>pytables blosc2 ndim performance</category><guid>https://blosc.org/posts/pytables-b2nd-slicing/</guid><pubDate>Wed, 11 Oct 2023 11:00:00 GMT</pubDate></item></channel></rss>