I am aware that this question is not related to the core IFC specs or the other developments around them, but more to computer graphics and/or web browsers… perhaps someone familiar can help me. First, keep in mind that this is a vague research question at the moment, so it may sound like mission impossible.
In web-based apps for 3D rendering, no matter how the geometry is serialized (OBJ, DAE, glTF, JSON) and no matter how it is consumed (all-in-one load, streaming pipelines, etc.), the 3D model is encapsulated in a `<canvas/>` element.
In a web page, you can inspect and drill down into tags such as `<div/>` → `<p/>` etc. as individual DOM elements. But currently I haven't seen any way to go inside a canvas and list individual 3D components as individual DOM elements. I know there are many libraries for rendering and interacting with 3D, such as Three.js, xeogl, SceneJS, etc. But whatever interaction is programmed in your app logic, it is still encapsulated in the `<canvas/>`. So, as an example, we cannot control the 3D model the way we control the components of a React front end.
So I am looking for suggestions on a potential (direct or roundabout) approach to this problem: interacting with individual 3D elements based on the state of the app logic. Thanks…
As far as I know, web games use a hybrid approach: Canvas plus DOM.
Some parts of a game work well as Canvas (static things) and some as DOM (movable things).
But I don't know whether this is the answer you're looking for.
It might be better to ask Unreal Engine or Unity experts.
Also, Unreal Engine will hopefully support IFC in the next release, UE 4.23.
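To make the hybrid Canvas + DOM idea above concrete: the scene stays in the `<canvas/>`, while movable annotations are ordinary DOM nodes whose positions are computed by projecting 3D world coordinates to 2D screen pixels. A minimal sketch, with an illustrative pinhole camera looking down the -z axis (the function and parameter names are made up, not from any particular library):

```javascript
// Project a 3D world point to 2D screen pixels for a camera at
// `camera.position` looking down -z with the given focal length (in px).
function projectToScreen(point, camera, viewport) {
  // Translate the point into camera space.
  const x = point.x - camera.position.x;
  const y = point.y - camera.position.y;
  const z = point.z - camera.position.z;
  if (z >= 0) return null;              // point is behind the camera
  const f = camera.focalLength / -z;    // perspective divide
  return {
    // Map to pixel coordinates; y is flipped because screen y grows downward.
    left: viewport.width / 2 + x * f,
    top: viewport.height / 2 - y * f,
  };
}

// In a browser you would then position a DOM label roughly like:
//   labelDiv.style.transform = `translate(${p.left}px, ${p.top}px)`;
const pos = projectToScreen(
  { x: 1, y: 2, z: -4 },
  { position: { x: 0, y: 0, z: 0 }, focalLength: 400 },
  { width: 800, height: 600 }
);
```

Real engines expose this directly (e.g. Three.js has `Vector3.project(camera)`), but the point is the same: only the *positioning* of the DOM node depends on the canvas; the node itself remains inspectable and stylable.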
If I understand you correctly, this is precisely what you’re after https://www.x3dom.org/
PythonOCC offers an x3dom renderer as well: https://www.google.com/search?hl=nl&q=pythonocc%20x3dom. So you can tie that pretty seamlessly into the geometry output from IfcOpenShell.
But WebGL renders into a canvas, and @ylcnky wants to have the model tree represented as explicit DOM nodes, which x3dom offers.
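To illustrate what "explicit DOM nodes" means in x3dom: the scene graph is ordinary HTML markup, so each shape is an inspectable element. A minimal sketch (the `id` and geometry values are made up; element and attribute names follow the x3dom documentation):

```html
<x3d width="600px" height="400px">
  <scene>
    <transform id="wall-01" translation="0 0 0">
      <shape>
        <appearance>
          <material diffuseColor="0.8 0.8 0.8"></material>
        </appearance>
        <box size="4 3 0.3"></box>
      </shape>
    </transform>
  </scene>
</x3d>
```

Because `<transform id="wall-01">` is a real DOM node, `document.getElementById("wall-01")` works, event listeners can be attached to it, and its attributes can be driven from app state, much like an inline SVG.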
Yes, it seems that currently WebAssembly can’t
Thanks for the replies. @aothms, I think you understood my question, and your suggestion looks worth trying. To elaborate on my question a bit: this will be a continuation of earlier research conducted in our group (https://aaltodoc.aalto.fi/handle/123456789/34216).
We obtained useful results with the SVG floor plans (generated by your development, IfcOpenShell) by defining them as an inline object and accessing each `<g/>` tag and its properties as an individual DOM element in the browser. That allowed us to granularly control the state of each floor-plan element (in a React app) with the state of that specific object, so we could retrieve, merge, and render multiple elements (SVG `<g/>`s) of the Arch, MEP, and Struct models dynamically with DB queries.
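The merge-and-render step above can be sketched as a pure mapping from DB query results to per-element state; the names and row shape below are illustrative, not the actual project's schema:

```javascript
// Given rows of { guid, discipline } from a DB query and the disciplines
// currently selected in the UI, derive the visibility state the app would
// push into each inline-SVG <g> element (keyed by the element's GUID).
function mergeModelStates(rows, selectedDisciplines) {
  const state = {};
  for (const row of rows) {
    // Later rows for the same GUID overwrite earlier ones.
    state[row.guid] = {
      discipline: row.discipline,
      visible: selectedDisciplines.includes(row.discipline),
    };
  }
  return state;
}

// In the browser, the React app would then apply it to the inline SVG, e.g.:
//   for (const [guid, s] of Object.entries(state)) {
//     document.getElementById(guid).style.display = s.visible ? "" : "none";
//   }
const state = mergeModelStates(
  [
    { guid: "wall-guid-001", discipline: "ARCH" },
    { guid: "duct-guid-002", discipline: "MEP" },
  ],
  ["ARCH"]
);
```

The same mapping would carry over unchanged to 3D if each 3D component were addressable as a DOM element, which is exactly the box the question wants to open.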
Now the next step is to apply this logic to 3D, and the first barrier we ran into is the canvas as a closed box; we want to open that box. I will share our experiences here later on.
Thanks for the reply. I experimented with some of the game engines. Internally, most of them work properly, but my environment will be web-based platforms. These engines usually either export a giant, convoluted JS file or insert an iframe or another canvas into the browser. My purpose is to eliminate the canvas completely.