the per-element sprite atlas

2026-05-06

part 2 ended with a layer's dirty bit flipping and drawLayeredElements running. but it doesn't re-rasterise every node from scratch. node shapes are expensive — arc segments, border, fill, label — and re-drawing them at 10,000 elements per frame would be slow even with the three-layer partitioning. cytoscape pre-renders each element into an offscreen sprite at multiple zoom levels and blits from there. that cache is ElementTextureCache, in ele-texture-cache.mjs.

one atlas, eight zoom levels

the first thing getElement does is map the current zoom to a discrete LOD level:

if( lvl == null ){
  lvl = Math.ceil( math.log2( zoom * pxRatio ) );
}

if( lvl < minLvl ){
  lvl = minLvl;
} else if( zoom >= maxZoom || lvl > maxLvl ){
  return null;
}

let scale = Math.pow( 2, lvl );

math.log2(zoom * pxRatio) gives the level at which one graph unit maps to one texture pixel. Math.ceil rounds up to the nearest integer. the result is clamped to [minLvl, maxLvl] — that's [-4, 3], eight levels. zoom past maxZoom = 7.99 and the cache is bypassed entirely; the renderer draws directly at that magnification.

scale = Math.pow(2, lvl) is what you'd expect: at lvl = 0 the element renders 1-to-1; at lvl = -2 it renders at 0.25× and gets served back to a quarter-scale viewport; at lvl = 2 it renders at 4×. each of the eight levels has its own cache slot per element.

here's what the formula produces at representative zoom values (assuming pxRatio = 1):
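a minimal sketch of that mapping — not the library's code, just the clamp logic above wrapped in a function, assuming the constants quoted in this post (minLvl = -4, maxLvl = 3, maxZoom = 7.99, pxRatio = 1):

```javascript
// illustrative only: constants are the ones quoted in the article
const minLvl = -4, maxLvl = 3, maxZoom = 7.99, pxRatio = 1;

function lodFor( zoom ){
  let lvl = Math.ceil( Math.log2( zoom * pxRatio ) );

  if( lvl < minLvl ){
    lvl = minLvl;                      // clamp: never coarser than -4
  } else if( zoom >= maxZoom || lvl > maxLvl ){
    return null;                       // cache bypassed: draw directly
  }

  return { lvl, scale: Math.pow( 2, lvl ) };
}

for( const zoom of [ 0.05, 0.25, 1, 1.5, 3, 8 ] ){
  console.log( zoom, lodFor( zoom ) );
}
```

zoom 0.05 clamps to lvl -4 (scale 0.0625); 0.25 lands exactly on lvl -2; 1.5 rounds up to lvl 1 (scale 2, slightly sharper than needed); 8 bypasses the cache.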

height-bucket packing

sprites at a given LOD level aren't all packed into one canvas. they're bucketed by the rendered height of the element, in 50px increments. the bucket for an element is chosen here:

let txrH; // which texture height this ele belongs to

if( eleScaledH <= minTxrH ){
  txrH = minTxrH;
} else if( eleScaledH <= txrStepH ){
  txrH = txrStepH;
} else {
  txrH = Math.ceil( eleScaledH / txrStepH ) * txrStepH;
}

if( eleScaledH > maxTxrH || eleScaledW > maxTxrW ){
  return null; // caching large elements is not efficient
}

minTxrH = 25 catches tiny elements. above that, the step is 50px — a 40px node goes into the 50px bucket; a 120px node goes into the 150px bucket. elements larger than 1024px in either dimension are bypassed entirely.
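the bucket choice is just the snippet above as a function — a sketch using the constants quoted here (minTxrH = 25, txrStepH = 50, maxTxrH = 1024), with the oversize bypass folded in:

```javascript
// illustrative bucket selection; constants are the ones quoted in the article
const minTxrH = 25, txrStepH = 50, maxTxrH = 1024;

function bucketFor( eleScaledH ){
  if( eleScaledH > maxTxrH ){
    return null;                       // caching large elements is not efficient
  }
  if( eleScaledH <= minTxrH ){
    return minTxrH;                    // tiny elements share the 25px bucket
  }
  if( eleScaledH <= txrStepH ){
    return txrStepH;
  }
  return Math.ceil( eleScaledH / txrStepH ) * txrStepH; // round up to next 50px
}
```

so a 40px node maps to 50, a 120px node to 150, a 20px node to 25, and anything over 1024px skips the cache.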

each bucket is a list of atlas objects, each one a 1024px-wide offscreen canvas:

ETCp.addTexture = function( txrH, minW ){
  let self = this;
  let txrQ = self.getTextureQueue( txrH );
  let txr = {};

  txrQ.push( txr );

  txr.eleCaches = [];

  txr.height = txrH;
  txr.width = Math.max( defTxrWidth, minW );
  txr.usedWidth = 0;
  txr.invalidatedWidth = 0;
  txr.fullnessChecks = 0;

  txr.canvas = self.renderer.makeOffscreenCanvas(txr.width, txr.height);

  txr.context = txr.canvas.getContext('2d');

  return txr;
};

elements are packed left-to-right across the canvas. after each slot is allocated, usedWidth advances by the element's scaled width plus 8px of spacing (eleTxrSpacing) to avoid blit overlap at sub-pixel boundaries (line 272). when a canvas fills past 80% capacity or its invalidated region exceeds 20% of its width, it's retired to a separate retiredTextureQueue — cleared and recycled rather than GC'd.
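the allocation and retirement rules above can be sketched like this — allocSlot and shouldRetire are illustrative names, not the library's API, but the 8px spacing and the 80%/20% thresholds are the ones just described:

```javascript
// illustrative sketch of horizontal packing within one atlas canvas
const defTxrWidth = 1024, eleTxrSpacing = 8;

function allocSlot( txr, eleScaledW ){
  if( txr.usedWidth + eleScaledW > txr.width ){
    return null;                              // doesn't fit in this canvas
  }
  const slot = { x: txr.usedWidth, width: eleScaledW };
  txr.usedWidth += eleScaledW + eleTxrSpacing; // advance past slot + 8px gap
  return slot;
}

function shouldRetire( txr ){
  return txr.usedWidth / txr.width > 0.8       // mostly full
      || txr.invalidatedWidth / txr.width > 0.2; // mostly stale
}

const txr = { width: defTxrWidth, usedWidth: 0, invalidatedWidth: 0 };
```

two consecutive 100px allocations land at x = 0 and x = 108 — the gap is what keeps neighbouring sprites from bleeding into each other's blits.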

deferred refinement

this is the interesting part. getElement doesn't always render the element from scratch. it has several exit paths:

if( scalableFrom(oneUpCache) ){
  // then we can relatively cheaply rescale the existing image w/o rerendering
  downscale();

} else if( scalableFrom(higherCache) ){
  // then use the higher cache for now and queue the next level down
  // to cheaply scale towards the smaller level

  if( highQualityReq ){
    for( let l = higherCache.level; l > lvl; l-- ){
      oneUpCache = self.getElement( ele, bb, pxRatio, l, getTxrReasons.downscale );
    }

    downscale();

  } else {
    self.queueElement( ele, higherCache.level - 1 );

    return higherCache;
  }
}

the priority, in order:

  1. exact match — cache hit at lines 133–143. return immediately.
  2. one-up downscale — the level directly above exists. downscale() calls drawImage from that slot to the new one at half size. no re-render, just a rescale.
  3. any finer level — a higher-level cache exists but it's not one-up. serve it immediately (too detailed for the current zoom, but never blurry), queue the next level down for refinement. the dequeuer fills in downward steps until oneUpCache exists and the cheap rescale path kicks in.
  4. lower level — no higher cache. fall back to a coarser cache, serve it (blurry but usable), queue a render at the correct level.
  5. scratch — nothing to scale from. translate and scale the context, call this.drawElement, store the result.

most elements spend their lives in path 2 or 3. a fresh zoom-out starts with a finer slot cached; the dequeuer steps down toward the target level over subsequent frames. path 5 runs only on first encounter or after full invalidation.

here's what that looks like across three frames after zooming out to a level with no existing sprite:
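a toy simulation of that chain — none of this is library code, just the priority order above with a cache map keyed by level. we zoom out from lvl 2 (the only cached sprite) to a target of lvl -1, and run one dequeue step between frames:

```javascript
// illustrative simulation of deferred refinement across frames
const cache = { 2: 'rendered' };   // level → how the slot was filled
const queue = [];

function getElement( lvl ){
  if( cache[lvl] ){ return { lvl, how: cache[lvl] }; }  // path 1: exact hit
  if( cache[lvl + 1] ){                                 // path 2: one-up downscale
    cache[lvl] = 'downscaled';
    return { lvl, how: cache[lvl] };
  }
  for( let l = lvl + 2; l <= 3; l++ ){                  // path 3: any finer level
    if( cache[l] ){
      queue.push( l - 1 );                              // queue one step down
      return { lvl: l, how: 'served-finer' };
    }
  }
  return null;
}

function dequeueStep(){
  const lvl = queue.shift();
  if( lvl != null ){ getElement( lvl ); }               // dequeuer fills the step
}

const frames = [];
frames.push( getElement( -1 ) ); dequeueStep();  // frame 1: serves lvl 2, fills lvl 1
frames.push( getElement( -1 ) ); dequeueStep();  // frame 2: serves lvl 1, fills lvl 0
frames.push( getElement( -1 ) );                 // frame 3: lvl 0 exists → downscale
```

frame 1 serves the too-fine lvl 2 sprite; frame 2 serves lvl 1 (one refinement step done in the background); by frame 3 the one-up slot exists and the target level is filled by a cheap rescale.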

when the slot is ready, drawCachedElementPortion in drawing-elements.mjs blits it to the visible canvas:

context.drawImage( eleCache.texture.canvas, eleCache.x, 0, eleCache.width, eleCache.height, x, y, w, h );

(drawing-elements.mjs line 81.) the arguments are: source canvas, source x (packed position in the atlas), source y (always 0 — elements pack horizontally), source width and height, then destination x, y, w, h in graph coordinates.

the dequeue budget: the dequeuer doesn't iterate unconditionally. it runs inside texture-cache-defs.mjs and checks available frame time before each step:

while( true ){
  var now = util.performanceNow();
  var duration = now - startTime;
  var frameDuration = now - frameStartTime;

  if( renderTime < fullFpsTime ){
    // faster than 60fps — use remaining frame time for dequeueing
    var timeAvailable = fullFpsTime - ( willDraw ? avgRenderTime : 0 );

    if( frameDuration >= opts.deqFastCost * timeAvailable ){
      break;
    }
  } else {
    if( willDraw ){
      if(
           duration >= opts.deqCost * renderTime
        || duration >= opts.deqAvgCost * avgRenderTime
      ){
        break;
      }
    } else if( frameDuration >= opts.deqNoDrawCost * fullFpsTime ){
      break;
    }
  }

  var thisDeqd = opts.deq( self, pixelRatio, extent );

  if( thisDeqd.length > 0 ){
    for( var i = 0; i < thisDeqd.length; i++ ){
      deqd.push( thisDeqd[i] );
    }
  } else {
    break;
  }
}

when the renderer runs above 60fps, it can spend most of the remaining frame budget on refinement. when it's slow, the dequeue budget shrinks. when there's no draw at all (willDraw = false), up to 90% of a full-fps frame is available for dequeuing — so refinement happens even between visible frames.

the loop has three distinct modes. fullFpsTime = 1000/60 ≈ 16.7ms:

slow frame (renderTime = 30ms, willDraw = true) — renderTime > 16.7ms, so the slow branch runs. the dequeuer stops after min(15% × 30ms, 10% × avgRenderTime) — at most ~4.5ms. the frame is already overbudget; the dequeuer takes only a sliver so it doesn't make things worse.

fast frame (renderTime = 5ms, willDraw = true) — renderTime < 16.7ms, the fast branch runs. spare time = 16.7 − 5 = 11.7ms. dequeue runs for up to 90% × 11.7ms ≈ 10.5ms. fast hardware gets more refinement work per frame automatically — no configuration needed.

idle frame (willDraw = false) — no draw happened at all. the slow branch runs but takes the willDraw = false path, which allows 90% × 16.7ms ≈ 15ms. when the graph is static, almost the entire frame slot is available for texture upgrades.

the result is that texture refinement is self-throttling: it fills idle time aggressively and backs off when the renderer is under load.
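the three modes reduce to a budget function. this sketch isn't the loop itself — the real code checks the budget between dequeue steps — but it computes the same per-frame allowance, using the default cost factors quoted above:

```javascript
// illustrative per-frame dequeue budget, in milliseconds
const fullFpsTime = 1000 / 60;
const opts = { deqFastCost: 0.9, deqCost: 0.15, deqAvgCost: 0.1, deqNoDrawCost: 0.9 };

function dequeueBudget( renderTime, avgRenderTime, willDraw ){
  if( renderTime < fullFpsTime ){
    // fast frame: spend most of the leftover frame time
    const timeAvailable = fullFpsTime - ( willDraw ? avgRenderTime : 0 );
    return opts.deqFastCost * timeAvailable;
  }
  if( willDraw ){
    // slow frame: take only a sliver, whichever limit hits first
    return Math.min( opts.deqCost * renderTime, opts.deqAvgCost * avgRenderTime );
  }
  // no draw this frame: most of a full-fps slot is free for refinement
  return opts.deqNoDrawCost * fullFpsTime;
}
```

plugging in the three scenarios: a 5ms frame gets 0.9 × (16.7 − 5) ≈ 10.5ms; a 30ms frame gets at most 4.5ms (less if the rolling average is lower); an idle frame gets a flat 15ms.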

sharing slots by style key

one detail that makes the cache more efficient: the lookup maps by style key, not element id. two nodes with identical computed styles share a single cache slot at each zoom level.

ElementTextureCacheLookup (in ele-texture-cache-lookup.mjs) stores caches at (key, level) pairs. key = getKey(ele) is a hash of the element's rendered style — fill color, border, size, label font — not its id. 500 nodes in the same CSS class, with no per-element overrides, all resolve to the same slot.

invalidation respects this: when a node's style changes, isInvalid() checks whether getKey(ele) has changed from the stored key (line 84). invalidate() only evicts the slot if no other element still references the same key — restyling one node in a class of 500 doesn't evict the cache for the other 499. (we'll come back to the invalidation side in part 5, where it connects to the style computation pipeline.)
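the sharing-plus-refcount behaviour can be sketched like this — class and method names here are illustrative, not the actual API of ele-texture-cache-lookup.mjs, but the (key, level) addressing and the evict-only-when-unreferenced rule are the ones just described:

```javascript
// illustrative sketch of style-key slot sharing with reference counts
class StyleKeyLookup {
  constructor(){
    this.cachesByKeyLvl = new Map(); // "key:lvl" → cached sprite slot
    this.refsByKey = new Map();      // styleKey → elements using it
  }
  addRef( key ){
    this.refsByKey.set( key, ( this.refsByKey.get( key ) || 0 ) + 1 );
  }
  deleteRef( key ){
    const n = ( this.refsByKey.get( key ) || 0 ) - 1;
    this.refsByKey.set( key, n );
    if( n <= 0 ){
      // last element with this style gone: evict every level under the key
      for( const k of [ ...this.cachesByKeyLvl.keys() ] ){
        if( k.startsWith( key + ':' ) ){ this.cachesByKeyLvl.delete( k ); }
      }
    }
    return n <= 0; // true → slots actually evicted
  }
  set( key, lvl, slot ){ this.cachesByKeyLvl.set( key + ':' + lvl, slot ); }
  get( key, lvl ){ return this.cachesByKeyLvl.get( key + ':' + lvl ); }
}
```

with three nodes sharing a key, restyling one just decrements the count — the shared sprite survives until the last reference is dropped.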


that's the per-element atlas: zoom maps to a discrete level, elements are packed into height-bucketed 1024px canvases, the dequeuer refines quality over successive frames without blowing the render budget, and style-identical elements share slots. every drawLayeredElements call is serving from these slots — drawImage from atlas, not drawNode from scratch.

next up — part 4: the layered texture cache. rather than blitting element-by-element, cytoscape composites groups of static elements into a single layer texture so drawLayeredElements can blit whole regions at once.