What happened?
I wanted to calculate a `cumsum` along time on a dataset backed by dask arrays. The results differ from the same `cumsum` computed on the data opened as a plain numpy array.
An example plot of the difference between the two approaches (cumsum over time):
There are 3 chunks in time. The results start to diverge at the start of the second chunk.
What did you expect to happen?
Using pure numpy arrays or dask arrays should yield results that are numerically close(r).
Minimal Complete Verifiable Example
import xarray as xr
xr.show_versions()

from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np

# Save a non-trivial dataset to disk.
# Define an explicit chunksize to prevent a UserWarning coming from
# xarray.namedarray.utils ("The specified chunks separate the stored
# chunks along ...").
path = Path("~/tmp/random.h5").expanduser()
shape = (200, 1500)
data = 1000 * np.random.rand(*shape).astype("float32")
ds = xr.DataArray(data, dims=("position", "time"), name="data").to_dataset()
encoding = {
    "data": {
        "chunksizes": (200, 500),
    }
}
ds.to_netcdf(path, engine="h5netcdf", encoding=encoding)

# Open the dataset using numpy and dask arrays.
ds = xr.open_dataset(
    path,
    engine="h5netcdf",
    chunks=None,
)
chunk_size = 500
ds_chunked = xr.open_dataset(
    path,
    engine="h5netcdf",
    chunks={"time": chunk_size},
)

# Show the difference.
(ds.cumsum(dim="time").data - ds_chunked.cumsum(dim="time").data).plot()
plt.axvline(x=chunk_size, color="red", linestyle="--", label="start of second chunk")
plt.title("Difference of `cumsum` on a numpy and a dask array.")
plt.legend()
plt.show()
Steps to reproduce
No response
MVCE confirmation
Relevant log output
Anything else we need to know?
Changing `xr.open_dataset` to `xr.load_dataset` for the chunked dataset does yield the expected result.
Environment
Details
INSTALLED VERSIONS
commit: None
python: 3.11.14 (main, Dec 17 2025, 21:07:37) [Clang 21.1.4 ]
python-bits: 64
OS: Linux
OS-release: 6.12.57+deb13-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 2.0.0
libnetcdf: None
xarray: 2026.2.0
pandas: 3.0.1
numpy: 2.4.3
scipy: 1.17.1
netCDF4: None
pydap: None
h5netcdf: 1.8.1
h5py: 3.16.0
zarr: None
cftime: 1.6.5
nc_time_axis: None
iris: None
bottleneck: 1.6.0
dask: 2026.3.0
distributed: 2026.3.0
matplotlib: 3.10.8
cartopy: None
seaborn: None
numbagg: None
fsspec: 2026.2.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 80.9.0
pip: 25.3
conda: None
pytest: 9.0.2
mypy: 1.19.1
IPython: 9.10.0
sphinx: 9.0.4