From Broken Build to Stable 3DGS Deployment

A deep dive into the R&D process for deploying a high-fidelity 3D Gaussian Splatting asset on the web.

Tags: Gaussian Splatting · Spark.js · Three.js · Rust · WASM


Abstract: A Technical Memoir

The Burg Rötteln experiment began not as a project, but as a critical technical inquiry: how to deploy a high-fidelity 3D Gaussian Splatting (3DGS) model on the web while overcoming the severe memory and licensing barriers of the emerging ecosystem. This study documents the pivot from restrictive commercial pipelines to a self-managed workflow built on the open-source Spark.js renderer and the Jawset Postshot trainer. The journey involved debugging low-level WASM toolchain errors, implementing a 90% data compression strategy (from a 45MB PLY to a sub-5MB SPZ), and diagnosing the subtle rendering artifacts common in proprietary GS data. This process ultimately established a stable, performant visualization platform, demonstrating mastery over the full GS pipeline.

Part I — Debugging, Discovery, and the First Breakthroughs

The Export Paywall and The Ghostly Model

The initial phase quickly revealed the ethical and financial conflicts inherent in early 3DGS tools. Every online capture tool I tested produced beautiful previews but locked the essential .PLY export behind prohibitive subscription tiers. Polycam’s Business tier, priced at $300/year, was the only path to the raw data necessary for research, a level of financial gatekeeping I deemed unacceptable for an exploratory project.

This struggle for data ownership led me to bypass the "export problem" and focus on the end-to-end rendering bottleneck using the open-source Spark.js viewer. The immediate result was a new, unexpected technical problem: the model rendered as translucent, thin, and almost ghost-like.

Breakthrough: Stabilizing the Viewer

This became the first major technical breakthrough: uncovering the subtle bug behind the visual instability. The lack of surface density wasn't due to missing data but to suboptimal default culling and blending settings in the early viewer. Once patched, the viewer began revealing the castle the way a Gaussian Splatting model should look: dense, textured, and stable. The fix involved proactively tuning the SparkRenderer at initialization:

import { SparkRenderer } from "@sparkjsdev/spark";

// Initialize the Spark renderer with tuning parameters.
// These settings promote surface density and reduce ghosting.
const spark = new SparkRenderer({
  renderer,
  focalAdjustment: 2.0, // Match splat scaling with other renderers
  blurAmount: 0.3,      // Ensure a solid view; helps with low-opacity artifacts
});
scene.add(spark);
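
For completeness, here is how the splat asset itself attaches to the stabilized scene via Spark's SplatMesh. This is a minimal sketch with a placeholder asset path; Spark accepts .ply, .spz, .splat, and .ksplat inputs:

import { SplatMesh } from "@sparkjsdev/spark";

// Attach the splat asset to the scene (path is a placeholder)
const splatMesh = new SplatMesh({ url: "/assets/burg-roetteln.ply" });
scene.add(splatMesh);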

The Pivot to a Controlled Data Source

With the viewer stabilized, the next challenge was securing a clean source of truth for the radiance field without relying on costly cloud platforms. That led me to Jawset Postshot — a standalone trainer designed for Gaussian Splatting. Postshot offered full control over training, density settings, and output formats.

The training run in Postshot turned out beautifully, producing a clean, high-fidelity radiance field. The GIF below shows the model inside the trainer, representing the current state of the reconstruction, the technical mid-point between capture and a full, optimized web pipeline:

The training process confirmed that the tooling and rendering quality were correct. Now, only one major technical blocker remained.

Part I ends here — with a translucent splat debugged, a viewer stabilized, a superior radiance field trained and documented visually, and a clear roadmap to the final visualization.

Part II — Coming Soon: Compression and Final Deployment

The Compression Mandate: Solving the Memory Crisis

Part II will commence once the raw PLY export from Postshot becomes available (it requires a one-time commercial license). The most critical phase will be building the compression pipeline that solves the severe client-side memory problem.

The necessity of this step is quantifiable: the source PLY file weighs in at 45MB, demanding significant GPU buffer allocation on load. It must be transcoded into the web-optimized Niantic SPZ format to achieve a roughly 90% reduction to under 5MB, finally ensuring cross-device stability and fast load times. This step relies on the core Rust/WASM conversion utility, which itself required significant debugging (e.g., fixing an obscure line-ending issue and installing the Rust toolchain).
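
As a preview of that step, the sketch below shows a small Node script driving the converter's WASM build. The module path and the convertPlyToSpz export are hypothetical stand-ins for the Rust/WASM utility described above, assuming wasm-bindgen-style bindings:

import { readFile, writeFile } from "node:fs/promises";
// Hypothetical wasm-bindgen output of the Rust converter crate
import { convertPlyToSpz } from "./pkg/spz_converter.js";

const ply = await readFile("burg-roetteln.ply");
const spz = convertPlyToSpz(new Uint8Array(ply)); // compressed SPZ bytes
await writeFile("burg-roetteln.spz", spz);

console.log(`PLY ${(ply.length / 1e6).toFixed(1)} MB -> SPZ ${(spz.length / 1e6).toFixed(1)} MB`);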

Finalizing the Web Pipeline: Code and Cleanliness

The final deployment will be implemented as a React component that manages asynchronous loading and robust resource cleanup. The cleanup excerpt below ensures stability by rigorously releasing the GPU and CPU memory allocated by the Three.js and Spark.js instances:

// Cleanup function (excerpt from useEffect return)
return () => {
  // ... remove event listeners, etc.
  renderer.setAnimationLoop(null);
  
  // Clean up Three.js objects
  controls.dispose();
  renderer.dispose();
  
  // Spark-specific cleanup prevents memory leaks
  spark.defaultView.dispose(); 
  splatMesh.dispose();
};
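
For context, the setup half of that same effect might look like the sketch below. The container node, camera framing, and asset URL are assumptions for illustration rather than the final component:

import * as THREE from "three";
import { OrbitControls } from "three/addons/controls/OrbitControls.js";
import { SparkRenderer, SplatMesh } from "@sparkjsdev/spark";

// Mount the WebGL canvas into the component's container element
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(container.clientWidth, container.clientHeight);
container.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  60,
  container.clientWidth / container.clientHeight,
  0.1,
  1000
);
camera.position.set(0, 2, 8);
const controls = new OrbitControls(camera, renderer.domElement);

// Spark renderer plus the compressed splat asset (placeholder URL)
const spark = new SparkRenderer({ renderer });
scene.add(spark);
const splatMesh = new SplatMesh({ url: "/assets/burg-roetteln.spz" });
scene.add(splatMesh);

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera);
});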

Once the Postshot export is unlocked, the full pipeline — exporting the model, converting to SPZ, reducing size by 90%, and finalizing the React viewer with a stable, high-density render — will be documented here. For now, Part I stands as a complete and honest snapshot of the project's foundational work.