
WRF 250m Severe Weather Overlays

18 high-resolution severe weather simulations, ready to plot.

250-meter WRF model output covering tornadoes, wildfires, hurricanes, blizzards, heat waves, and flooding events across the US. Each event includes surface weather fields on an 800x800 grid at 1-minute temporal resolution (15-minute for six events) — just load with numpy and start making maps.

What's in the box

Each event folder contains:

  • coords.npz — latitude/longitude arrays (xlat, xlong, both 800x800 float32)
  • t_0000.npz through t_NNNN.npz — one file per timestep, each containing 16 weather fields
  • metadata.json — event info, timestamps, field descriptions

Quick start

import numpy as np

# Load coordinates
coords = np.load("carr_fire/coords.npz")
lat, lon = coords["xlat"], coords["xlong"]

# Load a single timestep
data = np.load("carr_fire/t_0200.npz")

# Plot wind speed
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
c = ax.pcolormesh(lon, lat, data["wind_speed_10m"], cmap="YlOrRd", vmin=0, vmax=30)
ax.set_title("Carr Fire — 10m Wind Speed (m/s)")
plt.colorbar(c)
plt.savefig("carr_fire_wind.png", dpi=150)
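The field names stored in each timestep archive can be discovered at runtime, and NpzFile decompresses only the arrays you actually index — useful when each file packs 16 compressed fields. A minimal sketch (an in-memory archive stands in for a real t_NNNN.npz file):

```python
import io

import numpy as np

# Build a stand-in archive in memory; a real file like
# "carr_fire/t_0200.npz" behaves identically under np.load.
buf = io.BytesIO()
np.savez_compressed(
    buf,
    t2m=np.zeros((800, 800), dtype=np.float32),
    refc=np.zeros((800, 800), dtype=np.float32),
)
buf.seek(0)

data = np.load(buf)
print(data.files)  # names of every array in the archive

# Indexing decompresses only this member, not the whole file
t2m = data["t2m"]
print(t2m.shape, t2m.dtype)  # (800, 800) float32
```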

Make an animation

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import json

with open("carr_fire/metadata.json") as f:
    meta = json.load(f)

coords = np.load("carr_fire/coords.npz")
lat, lon = coords["xlat"], coords["xlong"]

fig, ax = plt.subplots(figsize=(10, 10))
data0 = np.load("carr_fire/t_0000.npz")
mesh = ax.pcolormesh(lon, lat, data0["refc"], cmap="turbo", vmin=-10, vmax=70)
title = ax.set_title("")

def update(frame):
    data = np.load(f"carr_fire/t_{frame:04d}.npz")
    mesh.set_array(data["refc"].ravel())
    title.set_text(f"Carr Fire Reflectivity — {meta['times'][frame]}")
    return mesh, title

anim = FuncAnimation(fig, update, frames=range(0, 421, 5), interval=100)
anim.save("carr_fire_refc.mp4", dpi=100)
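Saving MP4 relies on ffmpeg being installed; when it isn't, matplotlib's bundled PillowWriter can emit a GIF instead. A self-contained sketch with synthetic frames (swap in the t_NNNN.npz loading from the example above for real data):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, safe for scripts
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

# Synthetic stand-in frames; with the real data, load one
# t_NNNN.npz per frame as in the MP4 example.
frames_data = [np.random.rand(20, 20) for _ in range(5)]

fig, ax = plt.subplots()
mesh = ax.pcolormesh(frames_data[0], vmin=0, vmax=1)

def update(i):
    mesh.set_array(frames_data[i].ravel())
    return (mesh,)

anim = FuncAnimation(fig, update, frames=len(frames_data), interval=100)
anim.save("demo.gif", writer=PillowWriter(fps=10))
```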

Fields

Every .npz timestep file contains these 16 fields, all float32 at 800x800:

| Field | Description | Units | Notes |
|---|---|---|---|
| t2m | 2-meter temperature | degC | |
| u10 | 10-meter U-wind component | m/s | |
| v10 | 10-meter V-wind component | m/s | |
| wind_speed_10m | 10-meter wind speed | m/s | Derived: sqrt(u10^2 + v10^2) |
| wind_direction_10m | 10-meter wind direction | degrees | Meteorological convention (wind FROM) |
| surface_pressure | Surface pressure | hPa | |
| pblh | Planetary boundary layer height | m | |
| hfx | Surface sensible heat flux | W/m2 | |
| lh | Surface latent heat flux | W/m2 | |
| rain_rate | Precipitation rate | mm/hr | Derived from accumulated RAINNC |
| refc | Composite reflectivity | dBZ | Column-max of 3D reflectivity |
| wspd10max | Max 10m wind gust | m/s | Running max since simulation start |
| w_up_max | Max updraft speed | m/s | Running max since simulation start |
| w_dn_max | Max downdraft speed | m/s | Running max since simulation start |
| updraft_helicity | Max updraft helicity 2-5 km | m2/s2 | Running max since simulation start |
| hail_max | Max hail diameter | mm | Running max since simulation start |

Note on "running max" fields: wspd10max, w_up_max, w_dn_max, updraft_helicity, and hail_max are accumulated maxima that increase monotonically over the simulation. They show the worst conditions experienced at each grid point up to that time — useful for swath maps showing the full storm impact.
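The derived wind fields can be recomputed from u10 and v10, e.g. to double-check conventions. A sketch of the two derivations named in the table (meteorological direction: degrees the wind blows FROM, 0 = north, 270 = west):

```python
import numpy as np

def wind_speed_from_uv(u, v):
    """Magnitude sqrt(u^2 + v^2), matching wind_speed_10m."""
    return np.sqrt(u**2 + v**2)

def wind_direction_from_uv(u, v):
    """Meteorological wind direction in degrees (wind FROM)."""
    return (270.0 - np.degrees(np.arctan2(v, u))) % 360.0

# A northerly wind blows toward the south (v < 0), so it comes FROM ~0 deg
print(wind_direction_from_uv(0.0, -10.0))  # ~0   (northerly)
print(wind_direction_from_uv(10.0, 0.0))   # ~270 (westerly)
print(wind_speed_from_uv(3.0, 4.0))        # 5.0
```

Both functions broadcast over arrays, so they can be applied directly to the 800x800 u10/v10 grids.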

Events

| Event | Date | Category | Location | Timesteps | Temporal Res |
|---|---|---|---|---|---|
| Carr Fire | 2018-07-23 | Wildfire | Redding, CA | 421 | 1 min |
| Hurricane Michael | 2018-10-10 | Hurricane | Panama City, FL | ~23 | 15 min |
| Camp Fire | 2018-11-08 | Wildfire | Paradise, CA | 418 | 1 min |
| Nashville EF3 Tornado | 2020-03-03 | Tornado | Nashville, TN | ~28 | 15 min |
| Death Valley Record Heat | 2020-08-16 | Heat | Death Valley, CA | 357 | 1 min |
| LA Fires | 2020-09-06 | Wildfire | Los Angeles, CA | 421 | 1 min |
| SF Bay Area Fires | 2020-09-06 | Wildfire | San Francisco, CA | 421 | 1 min |
| PNW Windstorm | 2020-09-08 | Wind | Pacific Northwest | 360 | 1 min |
| Texas Freeze | 2021-02-16 | Winter | Texas | ~22 | 15 min |
| Seattle Heat Dome | 2021-06-28 | Heat | Seattle, WA | ~10 | 15 min |
| Mayfield EF4 Tornado | 2021-12-11 | Tornado | Mayfield, KY | ~22 | 15 min |
| CA Atmospheric River | 2021-12-30 | Flooding | Northern California | 421 | 1 min |
| Buffalo Blizzard | 2022-12-23 | Winter | Buffalo, NY | ~14 | 15 min |
| CA Pineapple Express 2023 | 2023-01-04 | Flooding | California | 421 | 1 min |
| Pineapple Express | 2024-02-04 | Flooding | Southern California | 418 | 1 min |
| Denver Hailstorm | 2024-05-31 | Hail | Denver, CO | 421 | 1 min |
| LA Fires Peak 2025 | 2025-01-07 | Wildfire | Los Angeles, CA | 421 | 1 min |
| Enderlin EF5 Tornado | 2025-06-21 | Tornado | Enderlin, ND | ~360 | 1 min |

12 events have 1-minute temporal resolution from WRF auxiliary history output. 6 events have 15-minute resolution from standard WRF output files.

Grid details

  • Model: WRF-ARW v4, d03 (innermost nest)
  • Horizontal resolution: 250 meters
  • Grid size: 800 x 800 points (200 km x 200 km)
  • Projection: Lambert Conformal Conic (varies per event)
  • Coordinates: Each event has its own coords.npz with the exact lat/lon for every grid point

File sizes

  • Individual timestep .npz: ~20-25 MB (16 fields, float32, 800x800, numpy compressed)
  • Full event (421 timesteps): ~8-10 GB
  • Total dataset: ~128 GB

Coordinate system

The WRF model uses a Lambert Conformal Conic projection centered on each event location. The coords.npz file contains the exact latitude and longitude of every grid point. Use these for plotting — don't assume a regular lat/lon grid.

coords = np.load("carr_fire/coords.npz")
lat = coords["xlat"]   # shape (800, 800), float32
lon = coords["xlong"]  # shape (800, 800), float32

# These are NOT regularly spaced in lat/lon
# Use pcolormesh, not imshow, for correct geographic placement

For web map overlays (Leaflet, Mapbox), you'll need to reproject from Lambert Conformal to Web Mercator. The metadata.json for each event includes the projection parameters if you need them.
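Because coords.npz already stores true latitude/longitude, the reprojection step reduces to lat/lon to Web Mercator; pyproj can do it, but the spherical EPSG:3857 formula is simple enough to inline. A dependency-free sketch:

```python
import numpy as np

R = 6378137.0  # Web Mercator sphere radius (meters)

def to_web_mercator(lat, lon):
    """Project latitude/longitude in degrees to EPSG:3857 meters."""
    x = R * np.radians(lon)
    y = R * np.log(np.tan(np.pi / 4.0 + np.radians(lat) / 2.0))
    return x, y

# With the real grids:
#   coords = np.load("carr_fire/coords.npz")
#   x, y = to_web_mercator(coords["xlat"], coords["xlong"])
x, y = to_web_mercator(45.0, -122.5)
print(x, y)  # Web Mercator meters near Portland, OR
```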

Source

Produced by Fahrenheit Research using WRF-ARW at 250m resolution. Simulations were run on the d03 innermost nest with 1-minute auxiliary history output enabled for surface fields.

Raw WRF netCDF files were processed into .npz format to make them accessible without WRF-specific tools. All fields are on the native d03 mass grid (no interpolation).

WRF model configuration

Full WRF namelist-equivalent settings for every event are in wrf_config.json. Key settings shared across all 18 simulations:

| Setting | Value | Description |
|---|---|---|
| WRF version | 4.7.1 | |
| Grid (d03) | 800 x 800 | Innermost nest |
| Horizontal resolution | 250 m | DX = DY = 250 m |
| Vertical levels | 80 | Hybrid sigma-pressure |
| Timestep | 1.0 s | Adaptive (max 2.0 s) |
| Microphysics | Thompson (8) | MP_PHYSICS = 8 |
| PBL | LES / none (0) | BL_PBL_PHYSICS = 0 (250m resolves turbulence) |
| Surface layer | Revised MM5 (1) | SF_SFCLAY_PHYSICS = 1 |
| Land surface | Noah (2) | SF_SURFACE_PHYSICS = 2 (most events) |
| Radiation (LW/SW) | RRTMG (4) | 1-minute radiation timestep |
| Cumulus | None (0) | CU_PHYSICS = 0 (250m resolves convection) |
| Diffusion | 2nd order (2) | DIFF_OPT = 2, KM_OPT = 5 (3D TKE) |
| Nesting ratio | 3:1 then 4:1 | d01 3 km → d02 1 km → d03 250 m |
| Projection | Lambert Conformal | TRUELAT1 = 30, TRUELAT2 = 60 (most events) |
| Damping | Rayleigh (3) | W-damping enabled |

Minor variations exist between events (see wrf_config.json for per-event details):

  • Land surface model: Noah (2) for most events, thermal diffusion (1) for Michael, Nashville, Buffalo Blizzard
  • Camp Fire uses a slightly larger adaptive timestep (1.975s)
  • Buffalo Blizzard has 81 vertical levels instead of 80

License

CC-BY-4.0 — free to use for any purpose with attribution.

Citation

@dataset{fahrenheit_wrf_overlays_2026,
  author = {Fahrenheit Research},
  title = {WRF 250m Severe Weather Overlays},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/deepguess/wrf-250m-severe-weather-overlays}
}