Federated Decentralised CDE

This topic is for discussion of long-term concepts (around 10 years to market penetration) that are related to “Federated Decentralised CDE”.
It is primarily addressed to “masterminds” with knowledge in construction AND information technologies to discuss long-term goals and solutions based on current shortcomings, concepts and scientific findings.
Anyway, people without an IT background are welcome to contribute as well. Please be aware that discussions within this topic will not solve your short-term challenges in real-life projects.

Literature e.g.:
[Werb2019] Werbrouck, J., Pauwels, P., Beetz, J., & van Berlo, L. (2019). Towards a decentralised common data environment using linked building data and the solid ecosystem. In 36th CIB W78 2019 Conference (pp. 113-123).

@ReD_CoDE:
Could you please provide a short summary of your thoughts about [Werb2019]?
Where are the similarities and where are the differences to your conception?

Both groups, in Europe (J. Beetz) and in the USA (C. Eastman), work on a “Knowledge Base”

However, I personally prefer the approach Charles M. Eastman follows; they do not insist on OWL/RDF (JSON-LD) like the Europeans do (even though Stanford University leads this area)

Personally, I see the future in hybrid technologies that are mainly SQL-based but support NoSQL too

I don’t see the “cloud” as a successful approach, because cloud technology encourages a “centralized” approach, while the future will be “distributed and decentralized”

Because of this I have mainly focused on something like “fog” computing, mainly SQL-based (but hybrid)

Thanks for stating your point of view.
By SQL-based, do you mean based on an RDB (relational database)?

I’m not a programmer; however, I am talking about an object-relational approach

Also, a file format and schema that perform better than XML or JSON, etc.
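As a toy illustration of the object-relational idea discussed above (not any specific product or the poster's actual design), here is a minimal sketch using Python's built-in sqlite3; the table layout, column names and GUID value are all invented for illustration:

```python
import sqlite3

# Toy object-relational schema for building elements (names are invented):
# each element row references a type row; attributes live in typed columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE element (
    id      INTEGER PRIMARY KEY,
    guid    TEXT UNIQUE,
    type_id INTEGER REFERENCES element_type(id),
    name    TEXT
);
""")
conn.execute("INSERT INTO element_type (id, name) VALUES (1, 'IfcWall')")
conn.execute(
    "INSERT INTO element (guid, type_id, name) VALUES (?, ?, ?)",
    ("W-0001", 1, "Wall-001"),
)

# A relational join answers "give me all walls" in one declarative query.
rows = conn.execute("""
    SELECT e.guid, e.name
    FROM element e JOIN element_type t ON e.type_id = t.id
    WHERE t.name = 'IfcWall'
""").fetchall()
print(rows)  # [('W-0001', 'Wall-001')]
```

The strictness this buys (unique GUIDs, typed columns, referential integrity) is exactly the RDB property discussed later in this thread.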

I am capable of programming, but I am not a typical programmer, as I am, as a business analyst, interested in general business process concepts and related information management.

Therefore I would like to discuss the topic on the ERM level (Entity Relationship Model), independent of “data formats” like XML, JSON, RDF/OWL, or RDB relation schemas/SQL.

I think IT concepts are good when they align with real life as far as possible.
Logic Theory provides a good basis for knowledge management.

In real project life we use statements / questions / answer-statements to communicate requirements, feasibilities and solutions between stakeholders.
Therefore I think that logic theories provide a good basis for algorithms to identify and resolve contradictions.
To allow for uncertainty and weak statements, fuzzy logic could be employed.

This would be in line with the current trend towards loose coupling of atomic information parts (e.g. RDF/OWL in the Semantic Web, or, similarly, a high degree of normalization like 5NF in relational databases).
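To make the statement-based idea concrete, here is a toy sketch (my own illustration, not from any cited work) of how fuzzy-style confidences could be attached to statements so that contradictions are detected, while weak statements are only flagged rather than rejected; all names and values are invented:

```python
# Toy sketch: statements as (subject, property, value, confidence),
# where confidence in [0, 1] expresses uncertainty in a fuzzy sense.
statements = [
    ("Wall-001", "fire_rating_minutes", 90, 1.0),  # firm requirement
    ("Wall-001", "fire_rating_minutes", 60, 0.4),  # weak, early-stage statement
]

def contradictions(stmts):
    """Pair statements on the same (subject, property) with different values;
    the degree of conflict is the minimum of the two confidences (fuzzy AND)."""
    found = []
    for i, (s1, p1, v1, c1) in enumerate(stmts):
        for (s2, p2, v2, c2) in stmts[i + 1:]:
            if (s1, p1) == (s2, p2) and v1 != v2:
                found.append(((s1, p1), v1, v2, min(c1, c2)))
    return found

# The weak 60-minute statement conflicts with the firm 90-minute one
# only at degree 0.4, so it can be flagged for clarification, not rejected.
print(contradictions(statements))
```

A low conflict degree would route the pair to a human for clarification; a degree near 1.0 would signal a hard contradiction between stakeholders.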

Hope this helps with:

Also, I hope we soon see a SysML/UML implementation from bSI

I’m working on a whole model-based systems engineering approach

@ReD_CoDE: Thanks for the reference to the IFC SQLite Project. I will have a close look at it.

In principle I agree with the statement of @bsbock about the “benefits of SQL-based working” with RDBs (relational databases).
However, it is the purpose of an RDB to be very strict and always consistent in a multi-user environment.
This is mandatory for application fields like financial accounting.
In contrast, in construction projects we see a lot of imprecise, volatile and uncertain statements, especially at project start.
I did my master’s thesis on “Process Monitoring for Complex Engineering Processes” with BPM methods. I realized that modelling methods that are flexible and allow the consideration of uncertainty and probabilities are needed to cope with real project life. Strict and inflexible modelling methods are far from reality and therefore inadequate.

I will further develop my thoughts and concepts and provide them to the forum as soon as mature results are available.

@ReD_CoDE: Do you have more information on your “whole model-based systems engineering approach”?

Hello,
Federation is not a new approach and was already tackled for database systems quite some time ago. We adapted the approach for ontologies, namely the FOWLA approach, published in 2015:

  • The COBie ontology: here
  • The Ifc Web of Data ontology (ifcOWL with proper ontology modelling): here
  • Our approach for federating ontologies (applied on IFC and COBie and the COBie MVD): here1 and here2

The MVD approach can be adapted in order to gain in flexibility, by namely using logical rules. The MVD whitepaper (bSI Barcelona 2016 or 2017?) gives a nice overview of the considered approaches.


I think IfcXtreme and GRV & GDT explain my approach well

GRV & GDT propose global, ready-to-use geometries/topologies and data/information units, like LEGO bricks, that we have the ability and flexibility to put together and build (via the IDM/MVD approach) our “dynamic” MVDs

MaterialPass, ProductPass, and FacilityPass are its subset projects

And IfcXtreme is a “dynamic” model-based systems engineering approach that controls and manages BIM processes throughout the life-cycle


I think the ERM model and fuzzy logic are a good direction, thank you for that.

Current data formats are not capable of handling the big data coming in real time from the physical world into the built models.

That’s why personalized information, as in digital twins, is the next level of relations, even above triplestores, and will probably come down to simple, lightweight ASCII.

On the one hand there are the queries (SQL / SQLite), but on the input side we need identification; say, meta-tags?

I like the “metadata” view

And I think I can understand or guess why you have this view

The desired CDE “should” handle “Unstructured, Semi-structured, and Structured data” well

And I know that Data Warehouses have had limitations here
But maybe “Data Lakehouses” will end some known issues

Yes, there is unstructured, semi-structured and structured information to be managed (stored, delivered, logged, updated, archived).

I would like to see in the future a significant shift towards structured information. Structured information (e.g. based on ontologies) can be processed by using logical rules. It facilitates automation of collision and contradiction detection as well as the integration of different knowledge domains (like the FOWLA approach mentioned above by @ana.roxin).

For better software support and integration we need loosely coupled but well-structured information sources based on simple, consistent axioms and rules. Only very simple “sentences” like RDF tuples are reliably and efficiently processable by computers. Computers are stupid machines, even though the AI industry sells other dreams.

Simplification of requirements towards computer interpretable specifications, standards, law, etc. is a key success factor for federated business integration as a further development of federated decentralized CDEs.
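As a small illustration of how simple triple “sentences” lend themselves to rule-based processing, here is a toy Python sketch; the prefixes are loosely borrowed from the Building Topology Ontology (bot:), while the ex:usedIn rule and all instance names are invented for illustration:

```python
# Toy triple store: each fact is one simple (subject, predicate, object) "sentence".
triples = {
    ("Wall-001", "rdf:type", "bot:Element"),
    ("Wall-001", "bot:adjacentZone", "Room-101"),
    ("Room-101", "rdf:type", "bot:Space"),
}

# Invented inference rule: an Element adjacent to a Space is "used in" it.
# Because every fact has the same trivial shape, the rule is just a pattern match.
inferred = {
    (e, "ex:usedIn", z)
    for (e, p, z) in triples
    if p == "bot:adjacentZone"
    and (e, "rdf:type", "bot:Element") in triples
    and (z, "rdf:type", "bot:Space") in triples
}
print(inferred)  # {('Wall-001', 'ex:usedIn', 'Room-101')}
```

The uniform shape of the facts is the whole point: the machine never needs to understand a complex sentence, only to match patterns over three-part tuples.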

If you look at an IFC file in STEP format, it is in itself an extremely simple and structured format:

  • a line ID (#123)
  • a class (IfcWall)
  • all attributes, in a fixed order, according to a predefined scheme for a particular domain or context

Yet in itself, it is able to express all the concepts and relations from the IFC scheme.
What you cannot do is stream it, as you need to parse the whole document before you are sure your information is complete, even if all you need is a small fragment. Streaming, logging, parsing extremely large files, etc., become problematic.
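A toy sketch of the three-part structure described above. The sample line and GUID placeholder are invented; a real STEP parser must also handle nested lists and escaped strings, and the forward references illustrate why streaming is hard:

```python
import re

# Invented sample line with the three parts described above:
# a line ID, a class name, and an attribute list in fixed order.
line = "#123=IFCWALL('GUID',#5,'Wall-001',$,$,#40,#40,$,.STANDARD.);"

m = re.match(r"#(\d+)\s*=\s*(\w+)\((.*)\);$", line)
entity_id = int(m.group(1))   # the line ID, 123
ifc_class = m.group(2)        # the class, 'IFCWALL'
raw_attrs = m.group(3)        # attribute list; splitting it needs a real parser

# The #5 and #40 attributes reference other lines, possibly later in the file,
# which is exactly why the whole document must be read before resolving anything.
print(entity_id, ifc_class)
```

So each line is trivially machine-readable on its own, but the graph of cross-references only closes once the entire file has been seen.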


‘a significant shift towards structured information’…

That’s probably what we all advocate.
But as I read the first part of the ISO 19650 standard, the notion of information structuring is not being encouraged enough. The last, 3rd stage of information evolution depicted in the paper still contains ‘unstructured big data’… so where is the clear goal for the future?

The big question: how do we organize big data in a structured way? ASCII-converted, based on tags?

Maybe the solution is distribution with GUIDs along with the tags? Like in a distributed ledger for blockchains?

…or is it an additional challenge?

  • Linked Data is part of Big Data and has its pros and cons
  • Rule-based approaches and studies are somewhat popular in industry and academia, especially in automation and control
  • The majority of data is called structured data too
  • The object-oriented approach (XML, JSON, …) “inherently” converts structured data into semi-structured data

I think Data Science and Artificial Intelligence can solve the majority of challenges, especially through the Data Lakehouse approach

Source: https://databricks.com/blog/2020/01/30/what-is-a-data-lakehouse.html

A cool system indeed :)

But do I read about centralized object stores in the lakehouse again?

Distributed ledgers were meant to contribute to data security; how about this in the lakehouse?

Yes, the IFC file in STEP format is very useful.
It is simple, structured and provides numerous modelling concepts.

From my point of view, IFC should be accompanied by concepts for more flexibility and concepts for more standardization.

Is this a contradiction?

No,
I see the need for

  1. more flexibility on the project instance level (the business cases, the project evolution)
  2. more standardization on the project type level (the business processes, the general use-cases)

ad 1.)
Starting with project ideas and project portfolio management an accumulation of typically imprecise statements from different stakeholders evolves over time: Ideas, wishes, dreams, requirements, confirmations, warrants, claims, …

Systematic collection, assessment, tracking and refinement of statements and corresponding deliverables are key success factors for projects. This is related to requirements engineering, verification and validation (e.g. V-model).

An IFC file describing a building proposal could be derived from general statements (e.g. based on ontologies) about the building’s general properties (size, budget, base area, general shape, …). Refinement of the proposal can be done by making additional statements.

The other way round, general statements (e.g. based on ontologies) about elements could be derived from an IFC file (see the “Ifc Web of Data ontology (ifcOWL)” mentioned above by @ana.roxin). This allows for semi-automated consistency checks, based on logical rules, between the IFC file (implementation based on requirements) and the underlying requirements (general statements).
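A hypothetical sketch of such a rule-based consistency check, with requirements as general statements and facts as if they had been derived from an IFC file; all names, properties and numbers are invented for illustration:

```python
# Hypothetical requirement: a general statement about the building.
requirements = {("Building-A", "max_height_m", 20.0)}

# Facts as they might be derived from an IFC model (values invented).
model_facts = {("Building-A", "max_height_m", 23.5)}

def check(reqs, facts):
    """Flag every model fact that exceeds the limit stated by a matching requirement."""
    issues = []
    for (subj, prop, limit) in reqs:
        for (s, p, value) in facts:
            if (s, p) == (subj, prop) and value > limit:
                issues.append((subj, prop, limit, value))
    return issues

print(check(requirements, model_facts))  # [('Building-A', 'max_height_m', 20.0, 23.5)]
```

The same pattern generalizes: any requirement expressible as a statement over (subject, property, value) can be checked mechanically against statements extracted from the model.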

ad 2.)
More standardization on project type level addresses the current drawback that different modelling concepts / representation paradigms are in use for the same use-case.
I am still aware of your objection highlighting
“More than one way to geometrically describe an object”, “Modifiers”, and “Semantic or geometric focus”.

Nevertheless, I have not given up hope that more unification based on standardized decision rules is possible. I will soon address this in a separate discussion topic, “the wall”.

OWL/RDF (JSON-LD) has its pros and cons, and “maybe” it will be part of the answer (but not the whole answer), though I’m not sure.

Mainly Europeans have focused on it; I see less focus from North America, mainly the USA.

Also, STEP in its later versions has a lot of great features, but I don’t know why bSI doesn’t go ahead in this area (or maybe I’m just not aware and should wait to see features that can help).

I explained here my approach:

In geometry/topology (modeling) I follow a top-down as well as bottom-up approach (of which procedural design is a part, in combination with a non-procedural approach)

In the data/information/knowledge area I follow the CDE approach, of which SysML+Modelica is a part

I have a clear picture of what I want and what I want to build, which is mainly based on Modeling and Simulation-Based Systems Engineering (M&SBSE)

So what is the future standard for structured data?

ISO 19650-1:2018 doesn’t dare to specify anything. Are we in need of a new standard for big data handling?