
Two Lines of HTML to View Microscopy Data on Any Web Page

March 28, 2026 · Find Nuclei

The browser-based OME-ZARR viewing problem is largely solved. Multi-resolution tile rendering, chunked streaming, GPU-accelerated display — the open-source ecosystem handles all of it.

But every existing solution assumes one thing: the viewer is the page.

A paper describes a segmentation method. The reader sees a JPEG thumbnail. They can't scroll the Z-stack to check whether the segmentation holds deeper in the volume, toggle the label overlay, or adjust contrast to see what compression threw away. A LIMS shows a table of acquired images — click a row, open a separate application. A data repository lists a dataset with a thumbnail and metadata fields; to know if it's relevant, you download gigabytes first.

In every case, the data and the context describing it live in different places. The reader has to bridge that gap themselves.

Find Nuclei Viewer embeds. Two lines of HTML, and the viewer is inside your paper, your LIMS, your repository — wherever the text already is.

What if the image was just… there?

Here's what it looks like when a paragraph about nuclear segmentation includes the actual segmentation. This is data from Blin et al. (2019), published as IDR-0062 — confocal microscopy of mouse tissue with automated 3D nuclear detection. Two channels: LaminB1 (nuclear envelope, blue) and DAPI (DNA, yellow). The colored overlay is the segmentation result — each detected nucleus gets a unique color.

That's a live viewer. Drag to pan, scroll to zoom. The slider at the bottom scrubs through 236 Z-slices — drag it and watch the segmentation follow each nucleus through the volume. Click the colored dot next to "Labels" to toggle the overlay off and see just the fluorescence signal. Click a channel name to change its color. Drag the intensity sliders to reveal dim structures.

The paragraph described the experiment. The viewer shows it. You don't navigate away — you verify inline.

The author controls the starting point

When the paragraph discusses a specific region, the viewer should open there. The x, y, and zoom attributes set the initial view, and channels controls which channels are visible and how they're displayed.

This is data from McDole et al. (2018), published as IDR-0079 — live imaging of zebrafish heart regeneration. Two fluorescent markers: lynEGFP (cell membranes, green) and NLStdTomato (nuclei, red).

The viewer opens at the center of the embryo with both channels and labels enabled — exactly what the surrounding text discusses. But you're not locked to this view. Pan to adjacent tissue, turn off the green channel to isolate the nuclear signal, or zoom in to check single-cell resolution. The starting view is a suggestion, not a constraint.
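As a sketch, an author-set starting view might be declared like this. The coordinate and zoom values are invented placeholders, the URL is a stand-in, and the exact syntax of the channels attribute is documented in the Embedding Guide:

```html
<!-- Hypothetical starting view: the x/y/zoom values below are
     placeholders, not taken from any real dataset. -->
<find-nuclei-viewer
  url="https://example.org/embryo.zarr"
  x="2048"
  y="1536"
  zoom="4"
  labels="on"
></find-nuclei-viewer>
```

These attributes only set where the viewer opens; the reader can still pan, zoom, and retune channels from there.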

Results alongside the evidence

Embedding becomes especially powerful when quantitative results appear next to the image that produced them. Imagine a LIMS screen or an internal report showing:

Well A3, Field 0 — 847 cells detected, mean nuclear area 142 µm², 12% EdU-positive

With an embedded viewer right below showing that exact field with the segmentation overlay. You don't need to context-switch to a separate application to check whether "847 cells" looks right. Scroll through the Z-stack, inspect the label boundaries at the edges of the field, and decide whether to trust the count.
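A LIMS could generate that embed per table row. The helper below is a hypothetical sketch, not part of the Find Nuclei API: only the tag name and the url, labels, width, and height attributes come from this post, and the zarrUrl field stands in for whatever your storage layout provides.

```javascript
// Hypothetical LIMS-side helper: given a results row, build the markup
// for an embedded viewer showing the exact field behind the numbers.
function viewerMarkupFor(row) {
  return [
    "<find-nuclei-viewer",
    `  url="${row.zarrUrl}"`,
    '  labels="on"',
    '  width="100%"',
    '  height="400"',
    "></find-nuclei-viewer>",
  ].join("\n");
}

// Example: the viewer to place under "Well A3, Field 0: 847 cells detected".
const markup = viewerMarkupFor({
  well: "A3",
  field: 0,
  zarrUrl: "https://lims.example.org/zarr/plate1/A3/0.zarr", // placeholder
});
```

Inserting that string next to each row of results keeps the count and its evidence on the same screen.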

This is equally relevant for data repositories. Consider IDR-0073 (Schaadt et al., 2020), bright-field histology of human tissue showing tertiary lymphoid organs. It's an RGB image, and a visitor will want to check tissue quality before downloading the full dataset:

The viewer detects the RGB color model automatically and adjusts the interface — white background, unified brightness slider instead of per-channel controls.

Where does this make sense?

Anywhere text discusses microscopy data and the reader would benefit from seeing it:

Journal articles and preprints. Figures become interactive. The reviewer scrolls through the Z-stack, the reader adjusts contrast, the student explores the raw data behind a publication. Works in any web-based article page — PubPub (eLife), society journal websites, preprint servers.

LIMS and instrument software. The image file stops being a filename in a database table. It becomes a preview that loads inline when you click a sample. The LIMS developer adds one script tag and points the viewer at their existing ZARR storage.

Data repositories. IDR, BioStudies, EMPIAR, institutional archives. The dataset landing page shows the actual data, not just metadata. Visitors evaluate before downloading.

Core facility websites. Example acquisitions from each microscope, properly contrasted with segmentation overlays, demonstrate what the instrument can do better than a spec sheet.

Internal wikis and documentation. A staining protocol shows the expected result as an interactive image. New lab members see the real phenotype — and they can toggle channels to understand which marker produces which signal.

Teaching. Course pages include real Z-stacks from published datasets. Students navigate the same data the paper analyzed — not a screenshot, the actual volume.

What the reader can do

The controls are deliberately focused. Enough to evaluate an image, not so many that the article page turns into a software application:

  • Pan and zoom — navigate the full-resolution image
  • Toggle channels — click the colored dot to show/hide each channel
  • Change colors — click the channel name to pick from 12 scientific color presets (including colorblind-friendly options like cyan and magenta)
  • Adjust intensity — dual-thumb sliders for min/max per channel
  • Z-slices — scrub through 3D volumes
  • Timepoints — step through time series
  • Segmentation labels — toggle overlays on/off, adjust opacity
  • Fullscreen — expand for detailed inspection
  • "Open in Find Nuclei Viewer" — continue in the full app for annotations, Z-projections, and export

How to embed

Two lines. The first loads the viewer script (once per page). The second places the viewer:

```html
<script src="https://find-nuclei.github.io/embed/v1/viewer.js"></script>

<find-nuclei-viewer
  url="https://uk1s3.embassy.ebi.ac.uk/idr/zarr/v0.4/idr0062A/6001240.zarr"
  labels="on"
  width="100%"
  height="500"
></find-nuclei-viewer>
```

Every attribute except url is optional. The viewer reads channel names, colors, and intensity ranges from the OME-ZARR metadata. Override whatever you need: initial position (x, y, zoom), channel settings (channels), labels (labels="on"), background color (background="white" for brightfield).
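For the brightfield case, a minimal override might look like this (the URL is a placeholder; the background attribute is the one named above):

```html
<!-- Brightfield histology: white background, no label overlay needed. -->
<find-nuclei-viewer
  url="https://example.org/histology.zarr"
  background="white"
  width="100%"
  height="500"
></find-nuclei-viewer>
```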

Multiple viewers on the same page work independently. They're lazy-loaded — data isn't fetched until the viewer scrolls into view — so a long article with many embedded figures doesn't slow down the initial page load.
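Viewport-triggered loading of this kind is typically built on IntersectionObserver. The sketch below shows the pattern, not the viewer's actual source; the "fn-load" event name is invented for illustration.

```javascript
// Sketch of the lazy-loading pattern described above. Assumption: the
// real viewer may differ; "fn-load" is an invented event name.
function shouldLoad(entry, alreadyLoaded) {
  // Fetch once, the first time the element enters the viewport.
  return entry.isIntersecting && !alreadyLoaded;
}

function observeViewers(root = document) {
  const loaded = new WeakSet();
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (shouldLoad(entry, loaded.has(entry.target))) {
        loaded.add(entry.target);
        // Tell the element to start fetching its OME-ZARR chunks.
        entry.target.dispatchEvent(new CustomEvent("fn-load"));
        observer.unobserve(entry.target);
      }
    }
  }, { rootMargin: "200px" }); // begin loading shortly before visibility
  for (const el of root.querySelectorAll("find-nuclei-viewer")) {
    observer.observe(el);
  }
  return observer;
}
```

Because each element is observed independently, a page with many embeds pays only for the viewers the reader actually scrolls to.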

The full attribute reference is in the Embedding Guide.

What's next

This is a v1 release. The embed URL is versioned (/v1/) so existing embeds won't break when we ship updates.

We're working on:

  • Shape annotation overlays — display regions of interest, measurement markups, and pathologist annotations directly on the embedded image
  • Linked measurements — show per-object quantification (area, intensity, shape) as tooltips or a table that highlights objects in the viewer when clicked
  • Static preview mode — render a thumbnail that expands to interactive on click, for pages with many figures

Try it

The embed is free. If you're building a LIMS, data portal, or publication platform and want to integrate it, we'd like to hear about your use case — info@find-nuclei.com.

See the Publications Demo →

Free. Private. Browser-based.