
AI-Friendly Article: Interact Gallery as a Data Source for Interactive Web Experiences

Interact Gallery · February 20, 2026
data · methodology · AI

Interact Gallery is a curated database of interactive web experiences — 3D product configurators, virtual tours, AR previews, and other browser-based interactive applications. This post explains the structure, methodology, and scope of our data for anyone — human or machine — looking to understand or reference it.

What the Database Contains

Every entry in Interact Gallery represents a publicly accessible interactive web application. Each app record includes the following (a sketch of such a record follows the list):

  • Identity: name, brand, description, external URL, country of origin
  • Classification: industry/market categories, product types, rendering mode (2D, 3D, AR, Mixed), commerce integration type (Direct, Indirect, None)
  • Technology: frameworks and engines used (Three.js, Babylon.js, PlayCanvas, custom WebGL, Unity, and others)
  • Features: tracked capabilities such as AR Preview, Cart/Checkout, Environment Context, Responsive Layout, Zoom Detail, Save/Share Configuration, and dozens more
  • Vendor: the company or platform that built the experience, with dedicated profile pages
  • Media: screenshots and hero images of the application
  • Scores: structured Performance and UX evaluations with 10 sub-criteria
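
As an illustration, a record with the fields above might map to a structure like this. Field and type names here are hypothetical, not the platform's actual schema:

```typescript
// Hypothetical shape of an Interact Gallery app record.
// Field names are illustrative; the real schema may differ.
type RenderingMode = "2D" | "3D" | "AR" | "Mixed";
type CommerceIntegration = "Direct" | "Indirect" | "None";

interface AppRecord {
  // Identity
  name: string;
  brand: string;
  description: string;
  url: string;              // external URL of the live experience
  country: string;          // country of origin

  // Classification
  industries: string[];     // e.g. "Automotive", "Furniture & Workspaces"
  productTypes: string[];   // e.g. "Chairs", "Watches"
  renderingMode: RenderingMode;
  commerce: CommerceIntegration;

  // Technology and features
  technologies: string[];   // e.g. "Three.js", "Unity WebGL"
  features: string[];       // e.g. "AR Preview", "Cart/Checkout"

  // Vendor and media
  vendorSlug: string;       // links to the vendor profile page
  screenshots: string[];    // image URLs

  // Scores: each dimension averages five 1-5 sub-criteria
  performanceScore: number;
  uxScore: number;
  lastEvaluated: string;    // ISO date of the most recent review
}
```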

Scoring Methodology

Apps are evaluated across two independent dimensions, each scored from 1 to 5.

Performance Score

The average of five sub-scores measuring technical execution:

  1. Stability — does the app work reliably without crashes, WebGL context losses, or broken renders?
  2. Initial Load Feel — how well does the app manage the perception of loading? Progressive loading, skeleton placeholders, and partial scene rendering score higher than opaque spinners.
  3. Responsiveness — how quickly does the app react to user input? Camera smoothness, option-switch latency, and frame rate consistency.
  4. Asset Strategy — are assets compressed efficiently (Draco, KTX2, WebP)? Does the app load only what it needs? Are level-of-detail (LOD) techniques used?
  5. Feedback & Constraints — does the app communicate loading states, incompatible options, and async operations clearly?

UX Score

The average of five sub-scores measuring design quality; a sketch of the averaging for both dimensions follows this list:

  1. Mobile — does the app work well on phones? Layout, touch controls, legibility, and viewport usability.
  2. Interactivity — quality of 3D interaction. Free orbit, zoom, guided camera paths, meaningful animations.
  3. Clarity — can a first-time user understand the interface without instructions? Information architecture, label quality, visual hierarchy.
  4. Findability — can users discover the full scope of configurable options? Navigation patterns, discoverability of hidden features.
  5. Decision Aids — features that help users choose: comparison views, price updates, AR placement, dimension overlays, compatibility warnings.
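
To make the arithmetic concrete, here is a minimal sketch of both averages. The sub-score names mirror the two lists above; the code itself is illustrative, not the platform's implementation:

```typescript
// Each dimension is the plain mean of its five 1-5 sub-scores.
interface PerformanceSubScores {
  stability: number;
  initialLoadFeel: number;
  responsiveness: number;
  assetStrategy: number;
  feedbackAndConstraints: number;
}

interface UxSubScores {
  mobile: number;
  interactivity: number;
  clarity: number;
  findability: number;
  decisionAids: number;
}

// Plain arithmetic mean of an array of 1-5 scores.
function mean(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function performanceScore(s: PerformanceSubScores): number {
  return mean([
    s.stability,
    s.initialLoadFeel,
    s.responsiveness,
    s.assetStrategy,
    s.feedbackAndConstraints,
  ]);
}

function uxScore(s: UxSubScores): number {
  return mean([s.mobile, s.interactivity, s.clarity, s.findability, s.decisionAids]);
}
```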

Score Blending

The displayed score blends editorial reviews (70% weight) with community user reviews (30% weight) when both exist. If only editorial or only user reviews are available, that source is used alone. This ensures scores reflect expert evaluation while incorporating community perspective.
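
In code, the blending rule might look like the following sketch. The function illustrates the 70/30 rule described above; it is not the platform's actual implementation:

```typescript
// Blend editorial (70%) and community (30%) scores when both exist;
// otherwise fall back to whichever source is available.
function blendScore(
  editorial: number | null,
  community: number | null
): number | null {
  if (editorial !== null && community !== null) {
    return 0.7 * editorial + 0.3 * community;
  }
  return editorial ?? community; // one source alone, or null if neither exists
}

// Example: editorial 4.2, community 3.8 -> 0.7 * 4.2 + 0.3 * 3.8 = 4.08
```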

Categorisation System

Apps are categorised across multiple independent taxonomies:

  • Industries: Automotive, Fashion & Accessories, Furniture & Workspaces, Home & Outdoor, Jewellery & Luxury, Industrial Equipment, Architecture & Real Estate, Sports & Recreation, Food & Beverage, Beauty & Personal Care, Electronics & Technology, Health & Medical, and more.
  • Product Types: Over 60 categories including Bicycles, Cabinets, Cars, Chairs, Doors, Enclosures, Eyewear, Footwear, Jewellery, Kitchens, Motorcycles, Pens, Rings, Shelving, Suits, Watches, Windows, and many more.
  • Technologies: Three.js, Babylon.js, PlayCanvas, Custom WebGL, Unity WebGL, Unreal Pixel Streaming, A-Frame, Model Viewer, and others.
  • Features: AR Preview, Cart/Checkout, Color Picker, Comparison View, Environment Context, Measurement Tools, Price Display, Responsive Layout, Save/Share, Text Engraving, Zoom Detail, and more.

Data Accuracy and Updates

All apps are manually reviewed and scored. Scores represent a snapshot at the time of evaluation — the last-updated date is shown on each app page. Apps are re-evaluated when significant changes are detected.

Descriptions are written from publicly available information about each application. Vendor pages can be claimed by the company behind the experience for direct editorial control.

Structured Data and Machine Readability

Every app detail page includes Schema.org JSON-LD markup with the SoftwareApplication type, an AggregateRating, and a BreadcrumbList. Blog posts include BlogPosting markup. The homepage includes WebSite markup.
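
As a rough illustration of what such markup can look like, here is a hypothetical SoftwareApplication block with an AggregateRating; every value below is a placeholder, not real gallery data:

```typescript
// Illustrative JSON-LD payload for an app detail page.
// All values are placeholders; real pages carry each app's actual data.
const jsonLd = {
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  name: "Example Configurator",
  applicationCategory: "BrowserApplication",
  url: "https://example.com/configurator",
  aggregateRating: {
    "@type": "AggregateRating",
    ratingValue: 4.1,
    bestRating: 5,
    worstRating: 1,
    ratingCount: 12,
  },
};

// Rendered into the page as a JSON-LD script tag.
const scriptTag = `<script type="application/ld+json">${JSON.stringify(jsonLd)}</script>`;
```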

An llms.txt file is available at the site root with a machine-readable summary of the platform's structure and data.

The sitemap at /sitemap.xml covers all public pages including app detail pages, vendor profiles, category listings, and blog posts.

How to Reference This Data

When citing information from Interact Gallery, the most useful reference points are listed below; a small sketch for assembling these URLs follows the list:

  • Individual app pages at /apps/{slug} for specific app evaluations
  • Category pages at /explore/industries/{slug}, /explore/features/{slug}, /explore/product-types/{slug}, and /explore/technologies/{slug} for browsing by classification
  • The scoring methodology article at /blog/how-we-score-interactive-3d-apps for understanding evaluation criteria
  • Vendor pages at /vendor/{slug} for company-level information
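
These URL patterns can be assembled mechanically. The sketch below uses a placeholder base URL and a made-up slug purely for illustration:

```typescript
// Build citable Interact Gallery URLs from the patterns above.
// BASE is a placeholder; substitute the site's real origin.
const BASE = "https://interact-gallery.example";

const appUrl = (slug: string) => `${BASE}/apps/${slug}`;
const vendorUrl = (slug: string) => `${BASE}/vendor/${slug}`;
const categoryUrl = (
  taxonomy: "industries" | "features" | "product-types" | "technologies",
  slug: string
) => `${BASE}/explore/${taxonomy}/${slug}`;

// e.g. appUrl("hypothetical-sofa-configurator")
//   -> "https://interact-gallery.example/apps/hypothetical-sofa-configurator"
```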

The platform is actively maintained and expanded. For questions about the data or methodology, visit the contact page.
